Automatica 46 (2010) 968–978
Nonconflict check by using sequential automaton abstractions based on weak observation equivalence⋆

Rong Su a,∗, Jan H. van Schuppen b, Jacobus E. Rooda a, Albert T. Hofkamp a

a Mechanical Engineering Department, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands
b Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090 GB Amsterdam, The Netherlands
Article history: Received 24 October 2008; Received in revised form 8 February 2010; Accepted 17 February 2010; Available online 23 March 2010.

Keywords: Discrete-event systems; Nondeterministic finite-state automata; Automaton abstraction; Nonconflict check
Abstract

In Ramadge–Wonham supervisory control theory we often need to check nonconflict of plants and corresponding synthesized supervisors. For a large system such a check imposes a great computational challenge because of the complexity incurred by the composition of plants and supervisors. In this paper we present a novel procedure based on automaton abstractions, which removes internal transitions of relevant automata at each step, allowing the nonconflict check to be performed on relatively small automata, even though the original product system can be fairly large.

© 2010 Elsevier Ltd. All rights reserved.
⋆ The material in this paper was partially presented at the 10th European Control Conference, Budapest, Hungary, August 23–26, 2009. This paper was recommended for publication in revised form by Associate Editor Bart De Schutter under the direction of Editor Ian R. Petersen.
∗ Corresponding author. Tel.: +31 40 2473334; fax: +31 40 2452505.
E-mail addresses: [email protected] (R. Su), [email protected] (J.H. van Schuppen), [email protected] (J.E. Rooda), [email protected] (A.T. Hofkamp).

1. Introduction

Since Ramadge–Wonham supervisory control theory was proposed in the 1980s (Ramadge & Wonham, 1987; Wonham & Ramadge, 1987), significant improvements and extensions have been made. The Ramadge–Wonham paradigm relies on the synchronous product to compose local components and specifications, upon which a standard supervisor synthesis procedure (i.e. achieving controllability, observability and nonblockingness) is performed. Unfortunately, the computational complexity of the synchronous product grows exponentially with the number of components and their individual numbers of states. To overcome this complexity issue, many new synthesis approaches have been developed. For example, in Wonham and Ramadge (1988) the authors propose the concept of modularity, which is then extended to the concept of local modularity in de Queiroz and Cury (2000). When local supervisors are (locally) modular, a globally nonblocking supervisor becomes a product of local supervisors achievable through local synthesis.
Recently, new modular synthesis approaches have been proposed, e.g. in Feng and Wonham (2006), Hill, Tilbury, and Lafortune (2008) and Schmidt and Breindl (2008), which perform model abstractions first, upon which the standard synthesis approach is applied. In Leduc, Lawford, and Wonham (2005) the authors present a hierarchical interface-based approach, which, by imposing a specific interface invariance condition, decouples a large system into several independent local modules, so that supervisor synthesis can be performed on each local module, whose size is usually much smaller than the overall system.

When applying these new techniques, particularly the modular approaches, we often need to check whether a collection of finite-state automata are nonconflicting with each other. Such a check is necessary for at least three reasons. First, a synthesis approach may require it; e.g. in de Queiroz and Cury (2000) and Wonham and Ramadge (1988) (local) modularity needs to be tested. Second, if we can locate conflicting modules directly during synthesis, then it is much more efficient to compute appropriate coordinators that resolve conflicts among local supervisors, as proposed in Feng and Wonham (2006), Hill et al. (2008) and Schmidt and Breindl (2008). Finally, even though a synthesis approach may theoretically guarantee that a supervisor is nonconflicting with a plant, in practice we still need to test it because a synthesis program may contain coding errors that cause incorrect results. Thus, a nonconflict check may help a user decide whether the obtained supervisor is indeed nonconflicting with the plant.

There has been some work on nonconflict testing. For example, in Flordal and Malik (2006), Pena, Carrilho da Cunha, Cury, and Lafortune (2008) and Pena, Cury, and Lafortune (2006) the authors propose to utilize model abstractions to avoid the high complexity caused by synchronous product.
The relationship between those model abstraction techniques is discussed in Malik, Flordal, and Pena (2008). To compute model abstractions, Pena et al. (2008, 2006) use natural projections, which are required to be observers (Wong & Wonham, 2004). The main disadvantage of using observers is that the alphabet of the codomain of a projection may need to be fairly large for the sake of achieving the observer property, potentially making the projected image too large for an effective nonconflict test. To overcome this difficulty, we propose to use an automaton abstraction procedure, which extends the one presented in Su, van Schuppen, and Rooda (2008) and Su, van Schuppen, and Rooda (in press) by replacing the weak bisimilarity with the largest weak observation equivalence relation and by revising the concept of standardized automata originally defined in Su et al. (in press), so that the nonblocking equivalence relation holds during abstraction and product, making our abstraction-based nonconflict check approach valid.

Although the reduction approach in Flordal and Malik (2006) and several other approaches in the literature are also based on nondeterministic automata, e.g. Flordal and Malik (2006), Flordal, Malik, Fabian, and Akesson (2007), Hill et al. (2008), Malik and Flordal (2008) and Su and Thistle (2006), they differ from ours in the following ways. First, they use different equivalence relations during quotient construction: e.g. in Flordal and Malik (2006), Hill et al. (2008) and Su and Thistle (2006) the weak bisimilarity is used, and in Flordal et al. (2007) and Malik and Flordal (2008) the bisimilarity is used. In contrast, we use the largest weak observation equivalence relation, which has two features that distinguish it from the (weak) bisimilarity: given two equivalent states, say x and x′, under the largest weak observation equivalence relation, (1) x and x′ must have the same marking status, which need not be true under the (weak) bisimilarity; and (2) x and x′ may have inequivalent 'unobservable' extensions (in the sense that x can reach a state, say y, via an 'unobservable' string, but x′ cannot reach any state y′ weak observation equivalent to y via an 'unobservable' string), which cannot happen under the (weak) bisimilarity. The first feature potentially makes the state set of an abstraction larger than the one obtained under the (weak) bisimilarity, while the second affects the size of the state set in the opposite way. Since it is common that there are only a few marker states in a large automaton, the second feature is usually the dominant factor for state reduction, making the largest weak observation equivalence relation preferable to the (weak) bisimilarity. The second feature also forces us to adopt a nonstandard way of constructing the transition map of an abstraction, in which two quotient states are connected only through 'observable' events and no silent event is used; this differs from the standard quotient construction under the (weak) bisimilarity in the existing automaton-based abstraction techniques.

Our second contribution is an efficient sequential abstraction procedure (SAP), which bears similarity to an algorithm called Computational Procedure for Global Consistency (CPGC) provided in Su and Wonham (2005), except that CPGC is based on natural projections, which, as pointed out above, are not suitable for nonconflict check unless all relevant projections are observers.
The aggregative nature of SAP is also reflected in Flordal and Malik (2006). But as mentioned above, the abstraction technique of Flordal and Malik (2006) is different from ours. Our third contribution is to present a procedure for nonconflict check (PNC) using SAP, which can handle a large number of automata. The efficiency of PNC has been illustrated by experimental results. This paper is organized as follows. In Section 2 we introduce basic concepts of languages and automata, and the problem statement of nonconflict check. Then we introduce automaton abstraction and relevant properties in Section 3. A procedure called PNC for nonconflict check based on sequential abstractions is presented in Section 4, along with experimental results. Conclusions are stated in Section 5. Long proofs are presented in the Appendix.
2. The problem of nonconflict check

In this section we first introduce basic concepts of languages, operations on languages, nondeterministic automata and automaton product. Then we provide a problem statement for nonconflict check. Notation in this section follows the rules in Wonham (2007).

Let ∆ be a finite alphabet and ∆∗ the Kleene closure of ∆, i.e. the collection of all finite sequences of events taken from ∆. Given two strings s, t ∈ ∆∗, s is called a prefix substring of t, written as s ≤ t, if there exists s′ ∈ ∆∗ such that ss′ = t, where ss′ denotes the concatenation of s and s′. We use ε to denote the empty string of ∆∗, so that for any string s ∈ ∆∗, sε = εs = s. For σ ∈ ∆ and s ∈ ∆∗, we write σ ∈ s to denote that s contains σ. A subset L ⊆ ∆∗ is called a language. L̄ := {s ∈ ∆∗ | (∃t ∈ L) s ≤ t} ⊆ ∆∗ is called the prefix closure of L. Given two languages L, L′ ⊆ ∆∗, the concatenation of L and L′ is LL′ := {ss′ ∈ ∆∗ | s ∈ L ∧ s′ ∈ L′}.

Let ∆′ ⊆ ∆. A map P : ∆∗ → ∆′∗ is called the natural projection with respect to (∆, ∆′) if
(1) P(ε) = ε;
(2) (∀σ ∈ ∆) P(σ) := σ if σ ∈ ∆′, and P(σ) := ε otherwise;
(3) (∀sσ ∈ ∆∗) P(sσ) = P(s)P(σ).

Given a language L ⊆ ∆∗, P(L) := {P(s) ∈ ∆′∗ | s ∈ L}. The inverse image map of P is

P⁻¹ : 2^{∆′∗} → 2^{∆∗} : L ↦ P⁻¹(L) := {s ∈ ∆∗ | P(s) ∈ L}.
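To make these language-level definitions concrete, the following is a minimal Python sketch of the natural projection and its inverse-image membership test on finite strings and languages; the function names and the tuple encoding of strings are ours, introduced only for illustration.

```python
# A minimal sketch: strings are tuples of event labels, languages are sets of such tuples.

def natural_projection(s, delta_prime):
    """P(s): erase from string s every event not in the target alphabet delta_prime."""
    return tuple(e for e in s if e in delta_prime)

def project_language(lang, delta_prime):
    """P(L) := {P(s) | s in L} for a finite language L."""
    return {natural_projection(s, delta_prime) for s in lang}

def in_inverse_image(s, lang, delta_prime):
    """Membership test for P^{-1}(L): s is in P^{-1}(L) iff P(s) is in L."""
    return natural_projection(s, delta_prime) in lang

# Example with Delta = {a, b, u} and Delta' = {a, b}: P(a u b u) = a b.
assert natural_projection(('a', 'u', 'b', 'u'), {'a', 'b'}) == ('a', 'b')
```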
Given L1 ⊆ ∆1∗ and L2 ⊆ ∆2∗, the synchronous product of L1 and L2 is defined as L1 ∥ L2 := P1⁻¹(L1) ∩ P2⁻¹(L2), where P1 : (∆1 ∪ ∆2)∗ → ∆1∗ and P2 : (∆1 ∪ ∆2)∗ → ∆2∗ are natural projections. It is clear that ∥ is commutative and associative.

A nondeterministic finite-state automaton is a 5-tuple A = (Y, ∆, η, y0, Ym), where Y stands for the state set, ∆ for the alphabet, η : Y × ∆ → 2^Y for the nondeterministic transition function, y0 for the initial state, and Ym for the marker state set. As usual, η is extended to Y × ∆∗. A is called deterministic if for all y ∈ Y and σ ∈ ∆, η(y, σ) contains no more than one element. Let

B(A) := {s ∈ ∆∗ | (∃y ∈ η(y0, s))(∀s′ ∈ ∆∗) η(y, s′) ∩ Ym = ∅}.

Any string s ∈ B(A) can lead to a state y from which no marker state is reachable, i.e. for any s′ ∈ ∆∗, η(y, s′) ∩ Ym = ∅. Such a state y is called a blocking state of A, and we call B(A) the blocking set. A state that is not a blocking state is called a nonblocking state. We say A is nonblocking if B(A) = ∅. In this paper we will use |Y| to denote the size of Y.

Given Ai = (Yi, ∆i, ηi, yi,0, Yi,m) (i = 1, 2), the product of A1 and A2 is the automaton A1 × A2 = (Y1 × Y2, ∆1 ∪ ∆2, η1 × η2, (y1,0, y2,0), Y1,m × Y2,m), where η1 × η2 : Y1 × Y2 × (∆1 ∪ ∆2) → 2^{Y1×Y2} is defined as follows:
(η1 × η2)((y1, y2), σ) :=
  η1(y1, σ) × {y2}            if σ ∈ ∆1 − ∆2,
  {y1} × η2(y2, σ)            if σ ∈ ∆2 − ∆1,
  η1(y1, σ) × η2(y2, σ)       if σ ∈ ∆1 ∩ ∆2.
Clearly, × is commutative and associative. By a slight abuse of notation, from now on we use A1 × A2 to denote its reachable part, which contains all states reachable from (y1,0, y2,0) under η1 × η2 and the transitions among these states. η1 × η2 is extended to Y1 × Y2 × (∆1 ∪ ∆2)∗.
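Since the automaton product and the blocking set B(·) drive everything that follows, a small sketch may help fix ideas. Assuming an NFA encoding of our own choosing, a tuple (alphabet, trans, init, marked) with trans mapping (state, event) pairs to successor sets, the following Python code computes the reachable part of A1 × A2 and decides whether B(A) = ∅ by a forward reachability pass followed by a backward (coreachability) pass.

```python
from collections import deque

# NFA encoding used throughout these sketches (our own choice, not the paper's):
# a tuple (alphabet, trans, init, marked), where trans maps (state, event)
# to the set of successor states.

def product(a1, a2):
    """Reachable part of A1 x A2: shared events synchronize, private events interleave."""
    (al1, t1, i1, m1), (al2, t2, i2, m2) = a1, a2
    alphabet = al1 | al2
    init = (i1, i2)
    trans, seen, queue = {}, {init}, deque([init])
    while queue:
        x1, x2 = queue.popleft()
        for e in alphabet:
            if e in al1 and e in al2:        # shared event: both components must move
                succs = {(y1, y2)
                         for y1 in t1.get((x1, e), set())
                         for y2 in t2.get((x2, e), set())}
            elif e in al1:                   # private to A1
                succs = {(y1, x2) for y1 in t1.get((x1, e), set())}
            else:                            # private to A2
                succs = {(x1, y2) for y2 in t2.get((x2, e), set())}
            if succs:
                trans[((x1, x2), e)] = succs
                for y in succs - seen:
                    seen.add(y)
                    queue.append(y)
    marked = {(y1, y2) for (y1, y2) in seen if y1 in m1 and y2 in m2}
    return alphabet, trans, init, marked

def is_nonblocking(alphabet, trans, init, marked):
    """B(A) is empty iff every reachable state can reach a marker state."""
    # forward reachability from the initial state
    reach, queue = {init}, deque([init])
    while queue:
        x = queue.popleft()
        for e in alphabet:
            for y in trans.get((x, e), set()):
                if y not in reach:
                    reach.add(y)
                    queue.append(y)
    # backward reachability from the marker states
    preds = {}
    for (x, e), succs in trans.items():
        for y in succs:
            preds.setdefault(y, set()).add(x)
    coreach, queue = set(marked), deque(marked)
    while queue:
        y = queue.popleft()
        for x in preds.get(y, set()):
            if x not in coreach:
                coreach.add(x)
                queue.append(x)
    return reach <= coreach
```

Applying product repeatedly and then is_nonblocking is exactly the monolithic check that becomes infeasible for large systems; avoiding this computation is the motivation for the abstraction-based procedure developed below.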
Definition 1. Let A = {Ai = (Yi, ∆i, ηi, yi,0, Yi,m) | i ∈ I} be a collection of nondeterministic finite-state automata, where I is a finite index set. A is nonconflicting if B(×i∈I Ai) = ∅. The Nonconflict Check Problem (NCP) is: how to determine whether A is nonconflicting?

If for each Ai we use L(Ai) := {s ∈ ∆i∗ | ηi(yi,0, s) ≠ ∅} to denote the closed behavior of Ai and Lm(Ai) := {s ∈ ∆i∗ | ηi(yi,0, s) ∩ Yi,m ≠ ∅} to denote the marked behavior of Ai, and all Ai's are deterministic, then determining whether A is nonconflicting is equivalent to checking whether ∥i∈I L(Ai) equals the prefix closure of ∥i∈I Lm(Ai), which is the problem considered in Pena et al. (2006). To show this equivalence, it can be checked that

∥i∈I Lm(Ai) = Lm(×i∈I Ai)

and ∥i∈I L(Ai) = L(×i∈I Ai), which equals the union of B(×i∈I Ai) and the prefix closure of Lm(×i∈I Ai). If all Ai's are deterministic, so is ×i∈I Ai. Then B(×i∈I Ai) ∩ N(×i∈I Ai) = ∅, where N(×i∈I Ai) denotes the set of strings that lead to a nonblocking state of ×i∈I Ai. It follows that B(×i∈I Ai) = ∅ if and only if ∥i∈I L(Ai) equals the prefix closure of ∥i∈I Lm(Ai). When the Ai's are nondeterministic, however, it is possible that B(×i∈I Ai) ≠ ∅ while ∥i∈I L(Ai) still equals the prefix closure of ∥i∈I Lm(Ai). Therefore, the language-based nonconflict check may not be sufficient to determine whether a set of nondeterministic finite-state automata is nonconflicting. Our statement of NCP extends the one considered in Pena et al. (2006) to deal with nonconflict check in a nondeterministic setup.

In practical applications it is usually infeasible to compute ×i∈I Ai, owing to the complexity of automaton product. This paper aims to check whether B(×i∈I Ai) = ∅ without actually computing ×i∈I Ai. To this end, we introduce automaton abstraction next.

3. Automaton abstraction

In this section we first introduce the concepts of standardized automaton and automaton abstraction, which are needed in Section 4 for developing an algorithm to solve the NCP. Then we present properties of automaton abstraction that will be used to prove the correctness of the algorithm presented in Section 4.

3.1. Concepts of automaton abstraction

Given an alphabet ∆, we bring in two new symbols τ and µ that are not contained in ∆. We call τ the initial event and µ the marking event. Let Σ := ∆ ∪ {τ, µ}. For notational simplicity, from now on we always use Σ (or Σi) to denote an alphabet that contains τ and µ, and use ∆ (or ∆i) to denote an alphabet without τ and µ.

Definition 2. We say G = (X, Σ, ξ, x0, Xm) is standardized if the following hold:
(1) x0 ∉ Xm ∧ (∀x ∈ X) [ξ(x, τ) ≠ ∅ ⟺ x = x0] ∧ (∀σ ∈ Σ − {τ}) ξ(x0, σ) = ∅;
(2) (∀x ∈ X)(∀σ ∈ Σ) x0 ∉ ξ(x, σ);
(3) (∀x ∈ X) x ∈ Xm ⇒ x ∈ ξ(x, µ).

A standardized automaton is an automaton in which x0 is not marked (condition 1), τ is used only at x0, which has only outgoing transitions labeled by τ (condition 1) and no incoming transitions (condition 2), and each marker state has a self-looping transition labeled µ (condition 3). Let φ(Σ) denote the set of all standardized finite-state automata whose alphabet is Σ.
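Constructing a standardized automaton from an ordinary one is purely mechanical, and it is exactly what PNC does in Section 4: add a fresh initial state connected by τ to the old initial state and self-loop µ at every marker state. The sketch below uses the same NFA encoding as the earlier sketches; the reserved symbol names and the name of the fresh state are arbitrary choices of ours.

```python
TAU, MU = "tau", "mu"   # reserved symbols, assumed not to occur in any original alphabet

def standardize(alphabet, trans, init, marked, fresh="x0_new"):
    """Return a standardized version of (alphabet, trans, init, marked):
    a fresh initial state with a single tau-transition to the old initial
    state, and a mu self-loop at every marker state (Definition 2)."""
    # 'fresh' must be a brand-new state; condition (2) then holds automatically.
    assert TAU not in alphabet and MU not in alphabet and fresh != init
    new_alphabet = alphabet | {TAU, MU}
    new_trans = {k: set(v) for k, v in trans.items()}
    new_trans[(fresh, TAU)] = {init}             # condition (1) of Definition 2
    for x in marked:                             # condition (3) of Definition 2
        new_trans.setdefault((x, MU), set()).add(x)
    return new_alphabet, new_trans, fresh, set(marked)
```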
Lemma 3. Let Gi ∈ φ(Σi) (i = 1, 2) be standardized. Then G1 × G2 is also standardized.

The proof of Lemma 3 is provided in the Appendix. Lemma 3 says that standardization is preserved under automaton product. Next, we introduce the concept of weak observation equivalence.

Definition 4. Given G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), let Σ′ ⊆ Σ and P : Σ∗ → Σ′∗ be the natural projection. A weak observation equivalence relation on X with respect to Σ′ is an equivalence relation R ⊆ {(x, x′) ∈ X × X | x ∈ Xm ⟺ x′ ∈ Xm} such that for all (x, x′) ∈ R and all s ∈ Σ∗, if P(s) ≠ ε then for all y ∈ ξ(x, s),

(∃s′ ∈ Σ∗) P(s) = P(s′) ∧ (∃y′ ∈ ξ(x′, s′)) (y, y′) ∈ R.

The largest weak observation equivalence relation on X with respect to Σ′ is denoted ≈Σ′,G.

A weak observation equivalence relation is close to a weak bisimulation relation as defined in, e.g., Su and Thistle (2006), which is extended from Milner (1990). The latter says that for all (x, x′) ∈ R, s ∈ Σ∗ and y ∈ ξ(x, s), there is s′ ∈ Σ∗ such that P(s) = P(s′) and

(∃y′ ∈ ξ(x′, s′)) (y, y′) ∈ R ∧ [y ∈ Xm ⟺ y′ ∈ Xm].

But our definition has two different features: (1) it is possible that (x, x′) have inequivalent 'unobservable' extensions (i.e. x can reach a state y via s with P(s) = ε, but x′ cannot reach any state y′ equivalent to y via some s′ with P(s) = P(s′)), which is not allowed in the definition of Su and Thistle (2006); and (2) two equivalent states must have the same marking status, which need not be true in the definition of Su and Thistle (2006).

Given an equivalence relation E ⊆ X × X, we use X/E := {⟨x⟩E := {x′ ∈ X | (x, x′) ∈ E} | x ∈ X} to denote the quotient of X with respect to E, which represents a state partition of X. The first aforementioned feature usually results in a coarser state partition than a weak bisimulation relation can achieve. The second feature has an opposite effect, which can be analyzed as follows. Let ∼Σ′,G denote the largest weak bisimulation relation, which is called the weak bisimilarity. We partition X into two subsets: X1 := {x ∈ X | (∀x′ ∈ Xm) (x, x′) ∉ ∼Σ′,G} and X2 := X − X1. Then we can check that |X1/∼Σ′,G| ≥ |X1/≈Σ′,G|. For X2 we have

|(X2 − Xm)/∼Σ′,G| ≥ |X2/≈Σ′,G| − |Xm/≈Σ′,G|.

We can check that (X1/∼Σ′,G) ∩ (X2/∼Σ′,G) = ∅. Thus, |X/∼Σ′,G| ≥ |X1/∼Σ′,G| + |(X2 − Xm)/∼Σ′,G|. On the other hand, since (X1/≈Σ′,G) ∩ (X2/≈Σ′,G) need not be empty, we have

|X1/≈Σ′,G| + |X2/≈Σ′,G| ≥ |X/≈Σ′,G|.

Therefore, we can derive that

|X/∼Σ′,G| ≥ |X1/∼Σ′,G| + |(X2 − Xm)/∼Σ′,G| ≥ |X1/≈Σ′,G| + |X2/≈Σ′,G| − |Xm/≈Σ′,G| ≥ |X/≈Σ′,G| − |Xm/≈Σ′,G|.

Since the total number of marker states in G is usually small, which makes |Xm/≈Σ′,G| small, the first feature is usually the dominant factor that determines the difference between |X/≈Σ′,G| and |X/∼Σ′,G| in large automata, and it favors the largest weak observation equivalence relation over the weak bisimilarity because, without considering the marking information, the former is weaker than the latter. We now introduce abstraction.

Definition 5. Given G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), let Σ′ ⊆ Σ with τ, µ ∈ Σ′. The automaton abstraction of G with respect to ≈Σ′,G is the automaton G/≈Σ′,G := (Z, Σ′, λ, z0, Zm), where
(1) Z := X/≈Σ′,G;
(2) z0 := ⟨x0⟩Σ′,G ∈ Z;
(3) Zm := {⟨x⟩Σ′,G ∈ Z | ⟨x⟩Σ′,G ∩ Xm ≠ ∅};
(4) λ : Z × Σ′ → 2^Z, where for any (z, σ) ∈ Z × Σ′, λ(z, σ) := {z′ ∈ Z | (∃x ∈ z)(∃u, u′ ∈ (Σ − Σ′)∗) ξ(x, uσu′) ∩ z′ ≠ ∅}.

Definition 5 is almost the same as the one introduced in Su et al. (2008, in press), except that each Σ′ must contain µ along with τ and we use the largest weak observation equivalence relation instead of the weak bisimilarity. The involvement of µ leads to many good properties of abstraction that are crucial for the success of nonconflict check and that cannot be obtained from Su et al. (in press) directly. Other comparable automaton-based abstraction techniques, e.g. Flordal and Malik (2006), Hill et al. (2008), Malik and Flordal (2008) and Su and Thistle (2006), use the weak bisimilarity, and their definition of λ follows the standard quotient construction, i.e., if σ ≠ ν then

λ(z, σ) := {z′ ∈ Z | (∃x ∈ z)(∃x′ ∈ z′) x′ ∈ ξ(x, σ)},

and if σ = ν then

λ(z, ν) := {z′ ∈ Z | (∃x ∈ z)(∃x′ ∈ z′)(∃σ′ ∈ Σ − Σ′) x′ ∈ ξ(x, σ′)},

where ν is called the silent event, which is used to denote all internal events that are not included in Σ′ (i.e. σ′ ∈ Σ − Σ′). We should point out that in the standard literature τ is usually used to denote the silent event. Because we have already used τ for the initial transitions in a standardized automaton, we use ν to denote the silent event to avoid potential confusion. Compared with those techniques, our way of constructing the transition map λ is different in the sense that two quotient states are connected only by events in Σ′ and no silent event is used. This is because we use the weak observation equivalence relation ≈Σ′, which makes the standard quotient construction of λ fail to work (recall that two equivalent states under ≈Σ′ may have inequivalent 'unobservable' extensions). As mentioned above, when |X| is large but |X/≈Σ′,G| is small, our abstraction technique can usually generate an abstraction with fewer states than the one computed by other abstraction techniques, which utilize the standard quotient construction under the weak bisimilarity. We will illustrate this shortly by a small example.

The time complexity of computing G/≈Σ′,G mainly results from computing X/≈Σ′,G, which can be estimated as follows. We first define a new automaton G′ = (X, Σ′, ξ′, x0, Xm), where for any x, x′ ∈ X and σ ∈ Σ′, x′ ∈ ξ′(x, σ) if there exist u, u′ ∈ (Σ − Σ′)∗ such that x′ ∈ ξ(x, uσu′). Then we compute X/≈Σ′,G′, and we can show that the result is equal to X/≈Σ′,G. The total number of transitions in G′ is no more than mn², where n = |X| and m is the number of transitions in G. Based on a result shown in Fernandez (1990), the time complexity of computing X/≈Σ′,G′ is O(mn² log n) if we ignore the complexity caused by checking the condition "x ∈ Xm ⟺ x′ ∈ Xm" in Definition 4. If we take this extra condition into account, which requires comparing at most n(n − 1)/2 pairs of states in the worst case, then the overall complexity is O(n(n − 1)/2 + mn² log n) = O(mn² log n).

From now on, when G is clear from the context, we simply use ≈Σ′ to denote ≈Σ′,G, and use ⟨x⟩Σ′ for an element of X/≈Σ′,G. If Σ′ is also clear from the context, then we simply use ⟨x⟩ for ⟨x⟩Σ′. We have the following result.

Lemma 6. Let G ∈ φ(Σ) be a standardized automaton and Σ′ ⊆ Σ with τ, µ ∈ Σ′. Then G/≈Σ′ is also a standardized automaton.

The proof of Lemma 6 is given in the Appendix. Lemma 6 says that standardization is preserved under abstraction as long as τ and µ are in the abstraction alphabet. To illustrate abstraction, let A = (Y, ∆, η, y0, Ym) be a non-standardized automaton depicted in Fig. 1, where ∆ = {a, b, u}.
[Fig. 1. Example 1: automaton A, standardized automaton G, and the abstractions G/≈Σ′ and A/∼Σ′.]

Its standardized version is G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), also depicted in Fig. 1, where Σ = ∆ ∪ {τ, µ}. Assume Σ′ = {τ, µ, a, b}. Then we have

X/≈Σ′ = {⟨0⟩ = {0}, ⟨1⟩ = {1, 4}, ⟨2⟩ = {2, 3, 8}, ⟨5⟩ = {5, 6, 9}, ⟨7⟩ = {7, 11}, ⟨10⟩ = {10}}.

Since there is a string τu from state 0 to state 2 in G, by Definition 5 we have ⟨2⟩ ∈ λ(⟨0⟩, τ). Similarly, since there is a string uua from state 1 to state 10 in G, we have ⟨10⟩ ∈ λ(⟨1⟩, a). The remaining transitions are constructed in a similar way. The abstraction G/≈Σ′ is depicted in Fig. 1. As a comparison, we apply the standard quotient construction to A. We use ∼Σ′ to denote the weak bisimilarity. To distinguish the quotient states of Y/∼Σ′ from those of X/≈Σ′, we use [y] to denote each element of Y/∼Σ′. We can check that
Y/∼Σ′ = {[1] = {1}, [2] = {2}, [3] = {3, 8}, [4] = {4}, [5] = {5, 9}, [6] = {6}, [7] = {7, 11}, [10] = {10}}.

The quotient automaton A/∼Σ′ is depicted in Fig. 1; it has more states and transitions than G/≈Σ′ has. In this example, we can see that our abstraction technique does enjoy some computational advantage over automaton-based abstraction techniques which utilize the standard quotient construction under the weak bisimilarity. In general, abstraction may create a large number of transitions. In the worst case, the overall size of G/≈Σ′ is |Z| + |Σ′||Z|², where |Σ′||Z|² is the maximum number of transitions that G/≈Σ′ can have. Thus, for small automata the effect of size reduction by abstraction is not obvious because, although |Z| ≤ |X|, the total number of states and transitions in G/≈Σ′ may even increase. But when the automaton G contains a large number of states such that |X| ≫ |Z|, and the target alphabet Σ′ is much smaller than Σ, the size reduction by abstraction is usually significant. This can be seen in Section 4.
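The two-step recipe described above (first build the auxiliary automaton G′, then partition its state set and apply Definition 5) translates directly into code. The sketch below is only illustrative: it reuses the NFA encoding of the earlier sketches, uses a naive signature-refinement loop instead of the O(mn² log n) algorithm of Fernandez (1990) cited above, and represents quotient states by block indices.

```python
from collections import deque

def observable_closure(alphabet, trans, states, target):
    """Transition map of the auxiliary automaton G': x' is a sigma-successor of x
    in G' iff x' is reachable in G via u sigma u' for unobservable strings u, u'."""
    unobs = set(alphabet) - set(target)
    ureach = {}                                   # unobservable reachability closure
    for x in states:
        ureach[x] = {x}
        queue = deque([x])
        while queue:
            y = queue.popleft()
            for e in unobs:
                for z in trans.get((y, e), set()):
                    if z not in ureach[x]:
                        ureach[x].add(z)
                        queue.append(z)
    closure = {}
    for x in states:
        for sigma in target:
            succ = set()
            for y in ureach[x]:
                for z in trans.get((y, sigma), set()):
                    succ |= ureach[z]
            if succ:
                closure[(x, sigma)] = succ
    return closure

def weak_observation_partition(states, marked, closure, target):
    """Largest weak observation equivalence (Definition 4) via naive refinement on G':
    start from the marking-consistent partition and split blocks until all members
    of a block reach the same set of blocks on every event of Sigma'."""
    blocks = [b for b in (set(states) & set(marked), set(states) - set(marked)) if b]
    while True:
        block_of = {x: i for i, b in enumerate(blocks) for x in b}
        def signature(x):
            return frozenset(
                (sigma, frozenset(block_of[y] for y in closure.get((x, sigma), set())))
                for sigma in target)
        new_blocks = []
        for b in blocks:
            groups = {}
            for x in b:
                groups.setdefault(signature(x), set()).add(x)
            new_blocks.extend(groups.values())
        if len(new_blocks) == len(blocks):
            return blocks
        blocks = new_blocks

def abstraction(alphabet, trans, init, marked, target):
    """G / ~Sigma' as in Definition 5; 'target' plays the role of Sigma' and is
    assumed to contain tau and mu."""
    states = ({init} | set(marked)
              | {x for (x, _e) in trans}
              | {y for v in trans.values() for y in v})
    closure = observable_closure(alphabet, trans, states, target)
    blocks = weak_observation_partition(states, marked, closure, target)
    block_of = {x: i for i, b in enumerate(blocks) for x in b}
    lam = {}
    for (x, sigma), succ in closure.items():       # lambda(z, sigma) of Definition 5
        lam.setdefault((block_of[x], sigma), set()).update(block_of[y] for y in succ)
    z0 = block_of[init]
    zm = {i for i, b in enumerate(blocks) if b & set(marked)}
    return set(target), lam, z0, zm
```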
3.2. Properties of automaton abstraction

Given an automaton G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), for each x ∈ X we define another set

NG(x) := {s ∈ Σ∗ | ξ(x, s) ∩ Xm ≠ ∅ ∧ µ ∈ s}.

For notational simplicity, we use N(G) to denote NG(x0). It is possible that B(G) ∩ N(G) ≠ ∅, due to nondeterminism.

Proposition 7. Let G ∈ φ(Σ), Σ′ ⊆ Σ with τ, µ ∈ Σ′, and let P : Σ∗ → Σ′∗ be the natural projection. Then P(B(G)) = B(G/≈Σ′) and P(N(G)) = N(G/≈Σ′).

The proof of Proposition 7 is presented in the Appendix. In the nonconflict check we specifically choose Σ′ = {τ, µ}, as described in the next section. We now introduce a concept that is used extensively in this paper.

Definition 8. Given Gi = (Xi, Σi, ξi, xi,0, Xi,m) (i = 1, 2), we say G1 is nonblocking preserving with respect to G2, denoted G1 ⊑ G2, if (1) B(G1) ⊆ B(G2); (2) N(G1) = N(G2); and (3) for all s ∈ N(G1) and x1 ∈ ξ1(x1,0, s), there exists x2 ∈ ξ2(x2,0, s) such that NG2(x2) ⊆ NG1(x1) ∧ [x1 ∈ X1,m ⟺ x2 ∈ X2,m]. We say G1 is nonblocking equivalent to G2, denoted G1 ≅ G2, if G1 ⊑ G2 and G2 ⊑ G1.

Definition 8 says that, if G1 is nonblocking preserving with respect to G2, then their nonblocking behaviors are equal, but G2's blocking behavior may be larger. The last condition is used to guarantee that nonblocking preserving is preserved under automaton product and abstraction (see the proofs of Propositions 9 and 10). If G2 is also nonblocking preserving with respect to G1, then they are nonblocking equivalent.

Proposition 9. Let G1, G2 ∈ φ(Σ) and G3 ∈ φ(Σ′). If G1 ⊑ G2 then G1 × G3 ⊑ G2 × G3. If G1 ≅ G2 then G1 × G3 ≅ G2 × G3.

The proof of Proposition 9 is given in the Appendix. It says that nonblocking preserving and nonblocking equivalence are invariant under product.

Proposition 10. Let G1, G2 ∈ φ(Σ) and Σ′ ⊆ Σ with τ, µ ∈ Σ′. If G1 ⊑ G2, then G1/≈Σ′ ⊑ G2/≈Σ′. If G1 ≅ G2, then G1/≈Σ′ ≅ G2/≈Σ′.

The proof of Proposition 10 is provided in the Appendix. It says that nonblocking preserving and nonblocking equivalence are invariant under abstraction.

Proposition 11. Suppose Σ″ ⊆ Σ′ ⊆ Σ with τ, µ ∈ Σ″ and G ∈ φ(Σ). Then G/≈Σ″ ≅ (G/≈Σ′)/≈Σ″.

The proof of Proposition 11 is given in the Appendix. It says that an automaton abstraction can be replaced by a sequence of automaton abstractions, and the results are nonblocking equivalent.

Proposition 12. Given Gi ∈ φ(Σi) with i = 1, 2, let Σ′ ⊆ Σ1 ∪ Σ2 with τ, µ ∈ Σ′. If Σ1 ∩ Σ2 ⊆ Σ′, then (G1 × G2)/≈Σ′ ≅ (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′).

The proof of Proposition 12 is provided in the Appendix. It is about the distribution of automaton abstraction over automaton product. We have the following main result about abstraction.

Theorem 13. Given G ∈ φ(Σ), let Σ′ ⊆ Σ with τ, µ ∈ Σ′. For any G′ ∈ φ(Σ′), B(G × G′) = ∅ if and only if B((G/≈Σ′) × G′) = ∅.

Proof. Let P : Σ∗ → Σ′∗ be the natural projection. Then

B(G × G′) = ∅ ⟺ P(B(G × G′)) = ∅ ⟺ B((G × G′)/≈Σ′) = ∅ by Proposition 7.

Since

(G × G′)/≈Σ′ ≅ (G/≈Σ′) × (G′/≈Σ′) by Proposition 12
           ≅ (G/≈Σ′) × G′ because G′ ≅ G′/≈Σ′ and by Proposition 9,

we have B(G × G′) = ∅ ⟺ B((G × G′)/≈Σ′) = ∅ ⟺ B((G/≈Σ′) × G′) = ∅. □
In practical applications G usually consists of a large number of small automata, namely G = G1 × ⋯ × Gn for some very large n ∈ ℕ, where Gi ∈ φ(Σi) for each i = 1, 2, . . . , n. Computing G/≈Σ′ directly imposes a great computational challenge. To overcome it, we borrow an algorithm presented in Su and Wonham (2005), but with the newly developed automaton abstraction technique.

Let I = {1, . . . , n} for some n ∈ ℕ. For any J ⊆ I, let ΣJ := ∪j∈J Σj.

Sequential Abstraction over Product (SAP):
(1) Inputs of SAP: a collection {Gi ∈ φ(Σi) | i ∈ I} and an alphabet Σ′ ⊆ ∪i∈I Σi with τ, µ ∈ Σ′.
(2) For k = 1, 2, . . . , n, do the following computation:
• Set Jk := {1, 2, . . . , k} and Tk := ΣJk ∩ (ΣI−Jk ∪ Σ′).
• If k = 1 then W1 := G1/≈T1.
• If k > 1 then Wk := (Wk−1 × Gk)/≈Tk.
(3) Output of SAP: Wn.

Theorem 14. Suppose Wn is computed by SAP. Then (×i∈I Gi)/≈Σ′ ≅ Wn.

Proof. We use induction to show that
(∀k : 1 ≤ k ≤ n) (×j∈Jk Gj)/≈Tk ≅ Wk.  (1)

Clearly, G1/≈T1 ≅ W1. Suppose Eq. (1) is true for k ≤ l ∈ ℕ. We need to show that it also holds for k = l + 1. In the following proof we use the fact that, for all G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), we have G ≅ G/≈Σ. This is true because B(G) = B(G/≈Σ), N(G) = N(G/≈Σ), and it is straightforward to see that NG(x) = NG/≈Σ(⟨x⟩) and x ∈ Xm ⟺ ⟨x⟩ ∈ Xm/≈Σ. We now continue our proof. By SAP,

(×j∈Jl+1 Gj)/≈Tl+1
≅ ((×j∈Jl+1 Gj)/≈Tl∪Σl+1)/≈Tl+1    by Proposition 11
≅ (((×j∈Jl Gj)/≈Tl) × (Gl+1/≈Σl+1))/≈Tl+1    because Σl+1 ∩ ΣJl ⊆ Tl ∪ Σl+1, and by Propositions 10 and 12
≅ (((×j∈Jl Gj)/≈Tl) × Gl+1)/≈Tl+1    because Gl+1 ≅ Gl+1/≈Σl+1, and by Propositions 9 and 10
≅ (Wl × Gl+1)/≈Tl+1    by the induction hypothesis and Propositions 9 and 10
= Wl+1.

Therefore Eq. (1) holds for all k, in particular for k = n. The theorem follows. □

Theorem 14 confirms that SAP allows us to obtain an abstraction of the entire system G = ×i∈I Gi in a sequential way. Thus, we can avoid computing G explicitly, which may be prohibitively large for many industrial systems. The results of our numerical experiments indicate that the ordering of the automata affects the computational complexity, which is defined as the maximum number of states and transitions appearing in SAP at each step, i.e. maxk ∥Wk−1 × Gk∥, where ∥Wk−1 × Gk∥ denotes the number of states and transitions of Wk−1 × Gk. Unfortunately, finding an ordering that results in the minimum complexity requires checking all possible orderings, which is undesirable in practice.
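Before turning to the experiments, here is how SAP looks in code. This is a sketch that reuses the product() and abstraction() helpers from the earlier sketches and assumes the input list is already in the desired processing order (e.g. ordered by the heuristic rule discussed after Table 1).

```python
def sap(automata, target):
    """Sequential Abstraction over Product (SAP). 'automata' is a list of
    standardized automata (in processing order) in the encoding of the earlier
    sketches; 'target' is Sigma' and must contain tau and mu. Returns W_n."""
    alphabets = [a[0] for a in automata]
    w = None
    processed = set()                                   # Sigma_{J_k}
    for k, g in enumerate(automata):
        processed = processed | g[0]
        remaining = set().union(*alphabets[k + 1:])     # Sigma_{I - J_k}
        t_k = processed & (remaining | set(target))     # T_k
        if w is None:
            w = abstraction(*g, t_k)                    # W_1 := G_1 / ~T_1
        else:
            w = abstraction(*product(w, g), t_k)        # W_k := (W_{k-1} x G_k) / ~T_k
    return w
```

In the nonconflict check of Section 4 the target alphabet passed to SAP is simply {τ, µ}.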
Table 1
Experimental results.

CN | LC | SSAC | SRP | MSP/CTP (s) | CM | CP
1 | G1–G5: (15, 219); (408, 3664); (39, 414); (43, 571); (33, 117) | (1159920, 23686344) | W1–W5: (11, 54); (35, 350); (21, 196); (31, 420); (3, 6) | (161, 3295)/10 | NC | NC
2 | G1–G5: (24, 80); (162, 1620); (12, 80); (66, 547); (9, 78) | (1135296, 25014096) | W1–W5: (3, 11); (11, 140); (81, 1312); (25, 621); (4, 9) | (177, 2840)/3 | C | C
3 | G1–G5: (16, 69); (112, 672); (108, 1332); (39, 414); (64, 404) | (1005120, 19683696) | W1–W5: (39, 745); (305, 6396); (8, 32); (37, 899); (3, 6) | (305, 6296)/25 | NC | NC
4 | G1–G4: (162, 1620); (112, 672); (108, 1332); (33, 117) | (5272128, 150273792) | W1–W4: (9, 39); (5, 20); (11, 34); (3, 6) | (17, 206)/0.1 | NC | NC
5 | 38 FSAs (inter-link) | 1.76 × 10^37 | N/A | (3912, 30605)/155 | N/A | NC
6 | 38 FSAs (intra-link) | 1.84 × 10^37 | N/A | (1894, 10853)/53 | N/A | NC
7 | 38 FSAs (re-entrance) | 1.20 × 10^37 | N/A | (4621, 33319)/154 | N/A | NC
8 | 38 FSAs (parallel) | 2.56 × 10^37 | N/A | (1180, 6116)/45 | N/A | NC
So we come up with a heuristic rule: given Jk−1 = {r1, r2, . . . , rk−1}, the choice of Σrk at step k of SAP maximizes the ratio of the size of ΣJk = ΣJk−1 ∪ Σrk, denoted |ΣJk|, to the size of Tk, namely

|ΣJk−1 ∪ Σrk| / |(ΣJk−1 ∪ Σrk) ∩ (ΣI−(Jk−1∪{rk}) ∪ Σ′)| = max over i ∈ I − Jk−1 of |ΣJk−1 ∪ Σi| / |(ΣJk−1 ∪ Σi) ∩ (ΣI−(Jk−1∪{i}) ∪ Σ′)|.

The rationale behind this rule is that a large ratio of alphabet sizes implies a large ratio of automaton sizes ∥×j∈Jk Gj∥ / ∥(×j∈Jk Gj)/≈Tk∥, which can be treated as the model reduction rate. A large reduction rate means we have a small automaton (×j∈Jk Gj)/≈Tk compared with ×j∈Jk Gj. By Theorem 14, we have (×j∈Jk Gj)/≈Tk ≅ (Wk−1 × Grk)/≈Tk. Although nonblocking equivalence does not necessarily provide a precise comparison between the sizes of the relevant automata, it often heuristically implies that those sizes are close to each other. Therefore, from ∥(×j∈Jk Gj)/≈Tk∥ being small we can estimate that ∥(Wk−1 × Grk)/≈Tk∥ is also small. If ∥(Wk−1 × Grk)/≈Tk∥ is small, usually ∥Wk−1 × Grk∥ is not big, because the reduction rate at each step k is usually not big (mostly ≤ 10) in practical applications. Although this rationale cannot be formally proven, it provides a good heuristic that usually results in small complexity in SAP, as illustrated by our experimental results listed in Table 1.

Next, we discuss how to use SAP to check nonconflict of a large number of finite-state automata and provide experimental results.

4. Check nonconflict of finite-state automata

Recall that in the nonconflict check problem, given a set of nondeterministic finite-state automata A := {Ai = (Yi, ∆i, ηi, yi,0, Yi,m) | i ∈ I}, we need to determine whether B(×i∈I Ai) = ∅. To solve this problem we propose the procedure of nonconflict check (PNC):
(1) Input: A = {Ai | i ∈ I = {1, 2, . . . , n}}.
(2) For each i ∈ I, create a standardized automaton Gi = (Xi, Σi, ξi, xi,0, Xi,m) as follows:
(a) Xi := Yi ∪ {xi,0}, where xi,0 ∉ Yi;
(b) Xi,m := Yi,m;
(c) Σi := ∆i ∪ {τ, µ};
(d) ξi : Xi × Σi → 2^Xi is defined as follows:
• for all x ∈ Yi and σ ∈ ∆i, ξi(x, σ) := ηi(x, σ);
• ξi(xi,0, τ) := {yi,0};
• for any x ∈ Yi,m, ξi(x, µ) := {x}.
(3) Let Σ′ := {τ, µ}. Use SAP on {Gi | i ∈ I} to compute Wn.
(4) Output: claim B(×i∈I Ai) = ∅ iff B(Wn) = ∅.

What PNC does is first convert each Ai into a standardized automaton Gi by adding an extra state xi,0, which is treated as the initial state of Gi and connected to the initial state yi,0 of Ai by τ, and then self-looping µ at each marker state of Ai. After that, we run SAP on the standardized automata {Gi | i ∈ I} and draw a conclusion on whether A is nonconflicting by checking the emptiness of B(Wn). To show that the claim made by PNC is correct we need the following lemma.

Lemma 15. Let A = {Ai | i ∈ I} be a collection of finite-state automata and {Gi | i ∈ I} the corresponding standardized finite-state automata obtained in PNC. Then B(×i∈I Ai) = ∅ if and only if B(×i∈I Gi) = ∅.

The proof of Lemma 15 is provided in the Appendix. The correctness of PNC is shown in the main result below.

Theorem 16. B(×i∈I Ai) = ∅ if and only if in PNC we have B(Wn) = ∅.

Proof. By Lemma 15 we get that B(×i∈I Ai) = ∅ if and only if B(×i∈I Gi) = ∅. Let P : (∪i∈I Σi)∗ → Σ′∗ be the natural projection. We have B(×i∈I Gi) = ∅ ⟺ P(B(×i∈I Gi)) = ∅. By Proposition 7 we get that P(B(×i∈I Gi)) = ∅ ⟺ B((×i∈I Gi)/≈Σ′) = ∅. By Theorem 14 we have (×i∈I Gi)/≈Σ′ ≅ Wn. Thus, B((×i∈I Gi)/≈Σ′) = ∅ ⟺ B(Wn) = ∅, which means B(×i∈I Gi) = ∅ if and only if B(Wn) = ∅, and the theorem follows. □
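Putting the pieces together, PNC is a thin wrapper around the earlier sketches: standardize every Ai, order the standardized automata (here with the greedy alphabet-ratio heuristic of Section 3), run SAP with Σ′ = {τ, µ}, and test B(Wn) = ∅. As before, this is a sketch under our own encoding and helper names, not the authors' implementation.

```python
def order_by_ratio(automata, target):
    """Greedy ordering heuristic of Section 3: at each step pick the automaton
    that maximizes |Sigma_{J_k}| / |T_k|."""
    remaining = list(automata)
    ordered, processed = [], set()
    while remaining:
        def ratio(g):
            joined = processed | g[0]
            rest = set().union(*(h[0] for h in remaining if h is not g))
            t = joined & (rest | set(target))
            return len(joined) / max(1, len(t))
        best = max(remaining, key=ratio)
        remaining.remove(best)
        ordered.append(best)
        processed |= best[0]
    return ordered

def pnc(plain_automata, target=frozenset({TAU, MU})):
    """Procedure of nonconflict check (PNC): returns True iff B(W_n) is empty,
    i.e. iff the collection is claimed nonconflicting. Reuses standardize(),
    sap() and is_nonblocking() from the earlier sketches."""
    gs = [standardize(*a, fresh=("new_init", i)) for i, a in enumerate(plain_automata)]
    gs = order_by_ratio(gs, target)
    return is_nonblocking(*sap(gs, target))
```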
Theorem 16 confirms that we can use PNC to determine whether A is nonconflicting. As an illustration we apply PNC to some examples to check its efficiency. The results are summarized in Table 1. The first four cases are taken from a case study of a Philips Patient Support System (Theunissen, Schiffelers, van Beek, & Rooda, 2008). Each Gi in the table is the product of several local components associated with their relevant local supervisors within the same module. The next four cases are taken from a case study of 6-link linear cluster tools (Yi, Zhang, Ding, & van der Meulen, 2007), where each case uses a different routing sequence (either inter-link, intra-link, re-entrance or parallel), resulting in a different set of local supervisors and coordinators. The notations in Table 1 have the following meanings:
• ‘CN’ is a brevity for Case Number • ‘LC’ for Local Components • ‘SSAC’ for Size of Synchronization of All Components (i.e. the size of ×i∈I Gi ) • ‘SRP’ for Sizes of Results of PNC • ‘MSP’ for Maximum Size in PNC • ‘CTP’ for Computation Time of PNC • ‘CM’ for Claim by Monolithic Check • ‘CP’ for Claim by PNC • ‘NC’ for nonconflicting and ‘C’ for conflicting • ‘N/A’ for not available. CTP is based on a Python program running on Intel(R) Core(TM)2 2.4 GHz CPU with 3.5 GB RAM. We do not count the computation time for the monolithic approach because each check is done manually. But the computation time for the synchronization alone takes 10 s for Case 1, 33 s for Case 2, 23 s for Case 3 and 381 s for Case 4. The ordering of local components in PNC is determined by the heuristic rule described in Section 3. In the monolithic approach, by using the ‘trim’ operation on ×i∈I Gi in TCT (Design Software: XPTCT, 2008), we can determine whether ×i∈I Gi is reachable and coreachable, i.e. we can decide whether local components are nonconflicting. Based on this approach we obtain results in the column of CMC for the first four cases, which agree with claims made by PNC. Since the systems for the last four cases are large, owing to the limited space, we can neither list the size of each individual finite-state automaton (FSA), nor list the size of each result in PNC. Furthermore, we cannot perform centralized check for those cases because of the prohibitively large space complexity. Thus, the column of LC for the last four cases only lists the total number of FSAs in the system for each case, and the column of SSAC only lists an estimate of the size of the centralized system, and the columns of SRPNC and CPNC for the last four cases are labeled by ‘‘N/A’’. From Table 1 we can see that, PNC has a substantial computational advantage over the monolithic approach. 5. Conclusions In this paper we first introduce an automaton-based abstraction technique and provide relevant properties. Then we propose a sequential abstraction procedure (SAP), based on which we propose the procedure PNC to check nonconflict of a large number of nondeterministic finite-state automata, which is commonly encountered in Ramadge–Wonham supervisory control theory. Numerical results show that, PNC can help us avoid high complexity in nonconflict check. Acknowledgements We would like to thank Mr Pascal A.W. van Overmeire of the Systems Engineering Group at Eindhoven University of Technology for providing the computational results for the last four cases about the cluster tools in Table 1.
Appendix Proof of Lemma 3. Suppose Gi = (Xi , Σi , ξi , xi,0 , Xi,m ) (i = 1, 2). First, for any (x1 , x2 ) ∈ X1 × X2 we have
ξ1 × ξ2 ((x1 , x2 ), τ ) 6= ∅ ⇐⇒ ξ1 (x1 , τ ) 6= ∅ ∧ ξ2 (x2 , τ ) 6= ∅ ⇐⇒ x1 = x1,0 ∧ x2 = x2,0 for G1 , G2 are standardized ⇐⇒ (x1 , x2 ) = (x1,0 , x2,0 ). For any σ ∈ (Σ1 ∪ Σ2 ) − {τ }, if σ ∈ Σi for i = 1 or 2, we have ξi (xi,0 , σ ) = ∅. Thus, ξ1 ×ξ2 ((x1,0 , x2,0 ), σ ) = ∅. For any (x1 , x2 ) ∈ X1 × X2 − {(x1,0 , x2,0 )} and σ ∈ (Σ1 ∪ Σ2 ), since G1 and G2 are standardized, we have
ξ1 × ξ2 ((x1 , x2 ), σ ) 6= ∅ ⇒ (x1 , x2 ) 6= (x1,0 , x2,0 ). Finally, for any (x1 , x2 ) ∈ X1 × X2 , we have
(x1 , x2 ) ∈ X1,m × X2,m ⇒ x1 ∈ X1,m ∧ x2 ∈ X2,m ⇒ x1 ∈ ξ1 (x1 , µ) ∧ x2 ∈ ξ2 (x2 , µ) for G1 , G2 are standardized
⇒ (x1 , x2 ) ∈ ξ1 × ξ2 ((x1 , x2 ), µ). Thus, G1 × G2 is standardized.
Proof of Lemma 6. Let G = (X, Σ, ξ, x0, Xm) and G/≈Σ′ = (Y, Σ′, η, y0, Ym). Then for any y ∈ Y = X/≈Σ′ we have

η(y, τ) ≠ ∅
⟺ (∃x ∈ y)(∃u, u′ ∈ (Σ − Σ′)∗) ξ(x, uτu′) ≠ ∅
⟺ (∃x′ ∈ ξ(x, u)) ξ(x′, τ) ≠ ∅
⟺ (∃x′ ∈ ξ(x, u)) x′ = x0    for G is standardized
⟺ x = x0    for (∀x̂ ∈ X − {x0})(∀σ ∈ Σ) x0 ∉ ξ(x̂, σ)
⟺ y = ⟨x⟩ = ⟨x0⟩ = y0.
Since τ ∈ Σ 0 , for any σ ∈ Σ 0 − {τ } and u ∈ (Σ − Σ 0 )∗ , we have ξ (x0 , uσ ) = ∅. Thus,
η(y0, σ) = {⟨x⟩ ∈ Y | (∃u, u′ ∈ (Σ − Σ′)∗) x ∈ ξ(x0, uσu′)} = ∅. For any y ∈ Y − {y0} and σ ∈ Σ′, since G is standardized, we have
η(y, σ ) 6= ∅ ⇒ (∃x ∈ y)(∃u, u0 ∈ (Σ − Σ 0 )∗ ) ξ (x, uσ u0 ) 6= ∅ ⇒ x0 ∈ 6 ξ (x, uσ u0 ) because (∀ˆx ∈ X − {x0 }) (∀σ ∈ Σ ) x0 6∈ ξ (ˆx, σ ) ⇒ hx0 i = y0 6∈ η(y, σ ). Finally, for any y ∈ Y , we have y ∈ Ym ⇒ (∀x ∈ y) x ∈ Xm
⇒ x ∈ ξ (x, µ) because G is standardized ⇒ hxi = y ∈ η(hxi, µ) = η(y, µ) by Definition 5. Thus, G/ ≈Σ 0 is standardized.
Proof of Proposition 7. Let G = (X , Σ , ξ , x0 , Xm ) and ξ 0 be the transition map of G/ ≈Σ 0 . First we show that P (B(G)) ⊆ B(G/ ≈Σ 0 ). For all s ∈ P (B(G)), there exists t ∈ B(G) with P (t ) = s such that
(∃x ∈ ξ (x0 , t )) (∀t 0 ∈ Σ ∗ ) ξ (x, t 0 ) ∩ Xm = ∅. Since G is standardized, we have hxi ∈ ξ 0 (hx0 i, P (t )). Furthermore, from the condition
(∀t 0 ∈ Σ ∗ ) ξ (x, t 0 ) ∩ Xm = ∅ we have (∀s0 ∈ Σ 0 ) ξ 0 (hxi, s0 ) ∩ (Xm / ≈Σ 0 ) = ∅. Thus, s = P (t ) ∈ B(G/ ≈Σ 0 ). ∗
Next we show B(G/ ≈Σ 0 ) ⊆ P (B(G)). For all s ∈ B(G/ ≈Σ 0 ), there exists hxi ∈ ξ 0 (hx0 i, s) such that ∗
(∀s0 ∈ Σ 0 ) ξ 0 (hxi, s0 ) ∩ (Xm / ≈Σ 0 ) = ∅. Clearly, x 6∈ Xm . By Definition 5, there exists t ∈ Σ ∗ such that P (t ) = s, x ∈ ξ (x0 , t ) and
we have B(G1 ) ⊆ B(G2 ). Thus, by Proposition 7, we have s ∈ P (B(G2 )) ⊆ B(G2 / ≈Σ 0 ), which means B(G1 / ≈Σ 0 ) ⊆ B(G2 / ≈Σ 0 ). Finally, for all s ∈ N (G1 / ≈Σ 0 ), there are t ∈ Σ ∗ , x1 ∈ ξ1 (x1,0 , t ) such that hx1 i ∈ ξ10 (hx1,0 i, P (s)) and ∗
(∃s0 ∈ Σ 0 ) ξ10 (hx1 i, s0 ) ∩ (X1,m / ≈Σ 0 ) 6= ∅
(∀t 0 ∈ Σ ∗ ) ξ (x0 , t 0 ) ∩ Xm 6= ∅ ⇒ t 0 ∈ (Σ − Σ 0 )∗ .
which means there are x01 ∈ hx1 i, t 0 ∈ Σ 0 such that
We claim that x is a blocking state of G because, otherwise, there exists t 0 ∈ Σ ∗ such that ξ (x, t 0 ) ∩ Xm 6= ∅. Since G is standardized, we get that ξ (x, t 0 µ) ∩ Xm 6= ∅. But s0 µ 6∈ (Σ − Σ 0 )∗ . Since x is a blocking state, we get t ∈ B(G), which means P (t ) = s ∈ P (B(G)). To show P (N (G)) ⊆ N (G/ ≈Σ 0 ), let s ∈ P (N (G)). Then (∃t ∈ N (G)) P (t ) = s ∧ ξ (x0 , t ) ∩ Xm 6= ∅. Since G is standardized, we have ξ 0 (hx0 i, P (t )) ∩ (Xm / ≈Σ 0 ) 6= ∅. Thus, s ∈ N (G/ ≈Σ 0 ). Finally we show that N (G/ ≈Σ 0 ) ⊆ P (N (G)). Let s ∈ N (G/ ≈Σ 0 ). Then ξ 0 (hx0 i, s) ∩ (Xm / ≈Σ 0 ) 6= ∅. Thus, there exists t ∈ Σ ∗ with P (t ) = s such that ξ (x0 , t ) ∩ Xm 6= ∅, which means t ∈ N (G). Thus, P (t ) = s ∈ P (N (G)).
P (t 0 ) = s0 ∧ ξ1 (x01 , t 0 ) ∩ X1,m 6= ∅.
Proof of Proposition 9. We first show that, if G1 v G2 , then G1 × G3 v G2 × G3 . Let Gi = (Xi , Σi , ξi , xi,0 , Xi,m ) with i = 1, 2, 3, where Σ1 = Σ2 = Σ and Σ3 = Σ 0 . Let P : (Σ ∪ Σ 0 )∗ → Σ ∗ and ∗ P 0 : (Σ ∪ Σ 0 )∗ → Σ 0 be natural projections. We first show that N (G1 × G3 ) = N (G2 × G3 ). Clearly, we have N (G1 × G3 ) = N (G1 ) k N (G3 ). Since G1 v G2 , we have N (G1 ) = N (G2 ). Thus, we have N (G1 × G3 ) = N (G2 × G3 ). To show B(G1 ×G3 ) ⊆ B(G2 ×G3 ), let s ∈ B(G1 ×G3 ). By Definition 5, there exists x1 ∈ X1 such that x1 ∈ ξ1 (x1,0 , P (s)). There are two cases to consider. Case 1: x1 is a blocking state. Then P (s) ∈ B(G1 ) ⊆ B(G2 ). Thus, s ∈ B(G2 × G3 ). Case 2: x1 is a nonblocking state. Since G1 v G2 , there exists x2 ∈ ξ2 (x2,0 , P (s)) such that NG1 (x1 ) ⊇ NG2 (x2 ). Since s ∈ B(G1 × G3 ), there exists x3 ∈ X3 such that (x1 , x3 ) ∈ ξ1 × ξ3 ((x1,0 , x3,0 ), s) and NG1 ×G3 (x1 , x3 ) = ∅. We have NG2 ×G3 (x2 , x3 ) = NG2 (x2 ) k NG3 (x3 ) ⊆ NG1 (x1 ) k NG3 (x3 )
= NG1 ×G3 (x1 , x3 ) = ∅. Thus, (x2 , x3 ) is a blocking state of G2 × G3 , which means s ∈ B(G2 × G3 ). Therefore, in either case we have B(G1 × G3 ) ⊆ B(G2 × G3 ). Finally, from the argument in Case 2, for any s ∈ (Σ ∪ Σ 0 )∗ and (x1 , x3 ) ∈ ξ1 × ξ3 ((x1,0 , x3,0 ), s), we have (x2 , x3 ) ∈ ξ2 × ξ3 ((x2,0 , x3,0 ), s) such that NG2 ×G3 (x2 , x3 ) ⊆ NG1 ×G3 (x1 , x3 ). Thus, G1 × G3 v G2 × G3 . If G1 ∼ = G2 , by Definition 8 we have G1 v G2 and G2 v G1 . Then by the above argument we get G1 × G3 v G2 × G3 and G2 × G3 v G1 × G3 .Thus, G1 × G3 ∼ = G2 × G3 . Proof of Proposition 10. We first show that, if G1 v G2 , then G1 / ≈Σ 0 v G2 / ≈Σ 0 . Let Gi = (Xi , Σ , ξi , xi,0 , Xi,m ), where i = 1, 2, ∗ and P : Σ ∗ → Σ 0 be the natural projection. Since G1 v G2 , by Proposition 7 we have N (G1 / ≈Σ 0 ) = P (N (G1 )) = P (N (G2 )) = N (G2 / ≈Σ 0 ). To show B(G1 / ≈Σ 0 ) ⊆ B(G2 / ≈Σ 0 ), let ξi0 (i = 1, 2) be the transition map of Gi / ≈Σ 0 . For all s ∈ B(G1 / ≈Σ 0 ), there is x1 ∈ X1 such that hx1 i ∈ ξ10 (hx1,0 i, s) and ∗
(∀s0 ∈ Σ 0 ) ξ10 (hx1 i, s0 ) ∩ (X1,m / ≈Σ 0 ) = ∅
∗
Since x01 ∈ hx1 i, by Lemma 3, if x01 can reach a marker state, say y0 , via P (t 0 ) = s0 then x1 must be able to reach a state, say y, via P (t 00 ) = s0 such that hyi = hy0 i. Since any two equivalent states have the same marking status and y0 is a marker state, we can conclude that y is also a marker state. This means ξ1 (x1 , t 00 ) ∩ X1,m 6= ∅, namely, t ∈ N (G1 ). Thus, by G1 v G2 , we have
(∃x2 ∈ ξ2 (x2,0 , t )) NG1 (x1 ) ⊇ NG2 (x2 ) ∧ [x1 ∈ X1,m ⇐⇒ x2 ∈ X2,m ]. Since G2 is standardized, we have hx2 i ∈ ξ20 (hx2,0 i, P (s)). For any s0 ∈ NG2 / ≈Σ 0 (hx2 i), we have ξ20 (hx2 i, s0 ) ∩ X2,m / ≈Σ 0 6= ∅. Thus, there is t 000 ∈ Σ ∗ with P (t 000 ) = s0 such that ξ2 (x2 , t 000 ) ∩ X2,m 6= ∅. Since µ ∈ s0 , we have µ ∈ t 000 . Thus, t 000 ∈ NG2 (x2 ) ⊆ NG1 (x1 ). So
ξ10 (hx1 i, s0 ) ∩ (X1,m / ≈Σ 0 ) 6= ∅ namely s0 ∈ NG1 / ≈Σ 0 (hx1 i). Thus, NG2 / ≈Σ 0 (hx2 i) ⊆ NG1 / ≈Σ 0 (hx1 i). Since
hx1 i ∈ X1,m / ≈Σ 0 ⇐⇒ x1 ∈ X1,m by Definition 5 ⇐⇒ x2 ∈ X2,m because G1 v G2 ⇐⇒ hx2 i ∈ X2,m / ≈Σ 0 by Definition 5 we get that G1 / ≈Σ 0 v G2 / ≈Σ 0 . If G1 ∼ = G2 , by Definition 8 we have G1 v G2 and G2 v G1 . By using the above argument we get G1 / ≈Σ 0 v G2 / ≈Σ 0 and G2 / ≈Σ 0 v G1 / ≈Σ 0 . Thus, G1 / ≈Σ 0 ∼ = G2 / ≈Σ 0 . Proof of Proposition 11. Let ξ 00 be the transition map of G/ ≈Σ 00 , and ξ 000 be the transition map of (G/ ≈Σ 0 )/ ≈Σ 00 . Let P12 : Σ ∗ → Σ 0 ∗ , P13 : Σ ∗ → Σ 00 ∗ and P23 : Σ 0 ∗ → Σ 00 ∗ be natural projections. We first show that G/ ≈Σ 00 v (G/ ≈Σ 0 )/ ≈Σ 00 . By Proposition 7 we have N (G/ ≈Σ 00 ) = P13 (N (G)) = P23 (P12 (N (G))) = P23 (N (G/ ≈Σ 0 ))
= N ((G/ ≈Σ 0 )/ ≈Σ 00 ). To show B(G/ ≈Σ 00 ) ⊆ B((G/ ≈Σ 0 )/ ≈Σ 00 ), let s ∈ B(G/ ≈Σ 00 ). There is x ∈ X with hxiΣ 00 ∈ ξ 00 (hx0 iΣ 00 , s) and ∗
(∀s0 ∈ Σ 00 ) ξ 00 (hxiΣ 00 , s0 ) ∩ Xm / ≈Σ 00 = ∅. Thus, there is t ∈ Σ ∗ with P13 (t ) = s, x0 ∈ hxi and x0 ∈ ξ (x0 , t ) ∧ (∀t 0 ∈ Σ ∗ ) ξ (x0 , t 0 ) ∩ Xm 6= ∅ ⇒ t 0 ∈ (Σ − Σ 00 )∗ . Since G is standardized, we get that x0 is a blocking state. Then t ∈ B(G). By Proposition 7,
which means there is t ∈ Σ ∗ and x01 ∈ hx1 i such that
P13 (t ) = s ∈ P23 (P12 (B(G))) ⊆ B((G/ ≈Σ 0 )/ ≈Σ 00 )
P (t ) = s ∧ x01 ∈ ξ1 (x1,0 , t ) ∧ (∀t 0 ∈ Σ ∗ ) [ξ1 (x01 , t 0 ) ∩ X1,m 6= ∅
which means B(G/ ≈Σ 00 ) ⊆ B((G/ ≈Σ 0 )/ ≈Σ 00 ). Let s ∈ N (G/ ≈Σ 00 ). For any x ∈ X with hxiΣ 00 ∈ ξ 00 (hx0 iΣ 00 , s) and
⇒ t ∈ (Σ − Σ ) ]. 0
0 ∗
Since G1 is standardized, we get that x01 is a blocking state of G1 . Then t ∈ B(G1 ), which means s = P (t ) ∈ P (B(G1 )). Since G1 v G2 ,
(∃t ∈ Σ ∗ ) P13 (t ) = s ∧ x ∈ ξ (x0 , t )
since G is standardized, if s = , then t = , which means x = x0 . Clearly, we have the following expression: hhx0 iΣ 0 iΣ 00 ∈ ξ 000 (hhx0 iΣ 0 iΣ 00 , ). If s 6= , then by Definition 5 and the assumption that Σ 00 ⊆ Σ 0 ⊆ Σ , we have
Next, we show B((G1 × G2 )/ ≈Σ 0 ) ⊆ B((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )). Let s ∈ B((G1 × G2 )/ ≈Σ 0 ). Then there exist t ∈ (Σ1 ∪ Σ2 )∗ and (x1 , x2 ) ∈ ξ1 × ξ2 ((x1,0 , x2,0 ), t ) such that h(x1 , x2 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , P (t )) and for all s0 ∈ Σ 0 ∗ ,
hhxiΣ 0 iΣ 00 ∈ ξ 000 (hhx0 iΣ 0 iΣ 00 , s).
ξ 0 (h(x1 , x2 )iΣ 0 , s0 ) ∩ ((X1,m × X2,m )/ ≈Σ 0 ) = ∅
Thus, in either case we have hhxiΣ 0 iΣ 00 ∈ ξ (hhx0 iΣ 0 iΣ 00 , s). We now show that 000
N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ) ⊆ NG/ ≈Σ 00 (hxiΣ 00 ). Let s0 ∈ N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ). Then there exists t 0 ∈ Σ ∗ with P13 (t 0 ) = s0 such that ξ (x, t 0 ) ∩ Xm 6= ∅. Since µ ∈ s0 , we have µ ∈ t 0 . Thus, P13 (t 0 ) 6= . By Definition 5, we get that ξ 00 (hxiΣ 00 , P13 (t 0 )) ∩ Xm / ≈Σ 00 6= ∅. Thus, s0 ∈ NG/ ≈Σ 00 (hxiΣ 00 ), namely N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ) ⊆ NG/ ≈Σ 00 (hxiΣ 00 ). Clearly,
hxiΣ 00 ∈ Xm / ≈Σ 00 ⇐⇒ hxiΣ 0 ∈ Xm / ≈Σ 0 ⇐⇒ hhxiΣ 0 iΣ 00 ∈ (Xm / ≈Σ 0 )/ ≈Σ 00 we have G/ ≈Σ 00 v (G/ ≈Σ 0 )/ ≈Σ 00 . Next, to show (G/ ≈Σ 0 )/ ≈Σ 00 v G/ ≈Σ 00 , we first show B((G/ ≈Σ 0 )/ ≈Σ 00 ) ⊆ B(G/ ≈Σ 00 ). Let s ∈ B((G/ ≈Σ 0 )/ ≈Σ 00 ). Then there is x ∈ X such ∗ that hhxiΣ 0 iΣ 00 ∈ ξ 000 (hhx0 iΣ 0 iΣ 00 , s) and for all s0 ∈ Σ 00 , 000 0 ξ (hhxiΣ 0 iΣ 00 , s ) ∩ ((Xm / ≈Σ 0 )/ ≈Σ 00 ) = ∅. Thus, there exists t ∈ Σ ∗ with P13 (t ) = s and x0 ∈ hhxiΣ 0 iΣ 00 such that x0 ∈ ξ (x0 , t ) and
(∀t 0 ∈ Σ ∗ ) [ξ (x0 , t 0 ) ∩ Xm 6= ∅ ⇒ t 0 ∈ (Σ − Σ 00 )∗ ]. Since G is standardized, we have that x0 is a blocking state. Then t ∈ B(G). By Proposition 7, P13 (t ) = s ∈ P13 (B(G)) ⊆ B(G/ ≈Σ 00 ), which means B((G/ ≈Σ 0 )/ ≈Σ 00 ) ⊆ B(G/ ≈Σ 00 ). Let s ∈ N ((G/ ≈Σ 0 )/ ≈Σ 00 ). For any x ∈ X with hhxiΣ 0 iΣ 00 ∈ ξ 000 (hhx0 iΣ 0 iΣ 00 , s) and
(∃t ∈ Σ ∗ ) P13 (t ) = s ∧ x ∈ ξ (x0 , t ) since G is standardized, if s = , then t = , which means x = x0 . Clearly, we have the following expression: hx0 iΣ 00 ∈ ξ 00 (hx0 iΣ 00 , ). If s 6= , then by Definition 5 and the assumption that Σ 00 ⊆ Σ 0 ⊆ Σ , we get that hxiΣ 00 ∈ ξ 00 (hx0 iΣ 00 , s). Thus, in either case we have hxiΣ 00 ∈ ξ 00 (hx0 iΣ 00 , s). We now show that NG/ ≈Σ 00 (hxiΣ 00 ) ⊆ N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ). Let s0 ∈ NG/ ≈Σ 00 (hxiΣ 00 ). Then
(∃t 0 ∈ Σ ∗ ) P13 (t 0 ) = s0 ∧ ξ (x, t 0 ) ∩ Xm 6= ∅. Since µ ∈ s0 , we have µ ∈ t 0 . By Definition 5, we have ξ 000 (hhxiΣ 0 iΣ 00 , P13 (t 0 )) ∩ ((Xm / ≈Σ 0 )/ ≈Σ 00 ) 6= ∅. Thus, s0 ∈ N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ), namely NG/ ≈Σ 00 (hxiΣ 00 ) ⊆ N(G/ ≈Σ 0 )/ ≈Σ 00 (hhxiΣ 0 iΣ 00 ). Since hxiΣ 00 ∈ Xm / ≈Σ 00 ⇐⇒ hhxiΣ 0 iΣ 00 ∈ (Xm / ≈Σ 0 )/ ≈Σ 00 , we have (G/ ≈Σ 0 )/ ≈Σ 00 v G/ ≈Σ 00 . Proof of Proposition 12. Let Gi = (Xi , Σi , ξi , xi,0 , Xi,m ) ∈ φ(Σi ) ˆ i = Σi ∩ Σ 0 , and with i = 1, 2. For notation simplicity let Σ ∗ ∗ ∗ ˆ ∗ 0∗ ˆ ˆ i∗ , P : (Σ1 ∪ Σ2 ) → Σ , Pi : Σi → Σi , Pi : Σ 0 → Σ Qi : (Σ1 ∪ Σ2 )∗ → Σi∗ be natural projections, ξ 0 for the transition map of (G1 × G2 )/ ≈Σ 0 and ξi0 for the transition map of Gi / ≈Σˆ i (i = 1, 2). First, we have the following, N ((G1 × G2 )/ ≈Σ 0 ) = P (N (G1 × G2 ))
by Proposition 7
= P (N (G1 ) k N (G2 )) = P1 (N (G1 )) k P2 (N (G2 )) because Σ1 ∩ Σ2 ⊆ Σ 0 = N (G1 / ≈Σˆ 1 ) k N (G2 / ≈Σˆ 2 ) by Proposition 7 = N ((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )).
which means (x1 , x2 ) 6∈ X1,m × X2,m and
(∀t 0 ∈ Σ ∗ ) ξ1 × ξ2 ((x1 , x2 ), t 0 ) ∩ (X1,m × X2,m ) 6= ∅ ⇒ t 0 ∈ ((Σ1 ∪ Σ2 ) − Σ 0 )∗ . Since G1 and G2 are standardized, from (x1 , x2 ) ∈ ξ1 × ξ2 ((x1,0 , x2,0 ), t ) and the fact that Σ1 ∩ Σ2 ⊆ Σ 0 we can derive that (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 1 ), s). We claim that (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) is a blocking state of (G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 ).
Otherwise, there exists s0 ∈ Σ 0 such that ξ10 × ξ20 ((hx1 iΣˆ 1 , hx2 iΣˆ 2 ), s0 ) ∩ ((X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 )) 6= ∅. Since (x1 , x2 ) 6∈ X1,m × X2,m , we get that (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) 6∈ (X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 ). Thus, s0 6= , which means there exists t 0 ∈ Σ ∗ with P (t 0 ) = s0 6∈ ((Σ1 ∪ Σ2 ) − Σ 0 )∗ such that ξ1 × ξ2 ((x1 , x2 ), t 0 ) ∩ (X1,m × X2,m ) 6= ∅ — contradict the fact that for all t 0 ∈ Σ ∗ , ∗
ξ1 × ξ2 ((x1 , x2 ), t 0 ) ∩ (X1,m × X2,m ) 6= ∅ ⇒ t 0 ∈ ((Σ1 ∪ Σ2 ) − Σ 0 )∗ . From the claim we get s ∈ B((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )).
Let s ∈ N ((G1 × G2 )/ ≈Σ 0 ). For any (x1 , x2 ) ∈ X1 × X2 with h(x1 , x2 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , s) and
(∃t ∈ Σ ∗ ) P (t ) = s ∧ (x1 , x2 ) ∈ ξ ((x1,0 , x2,0 ), t ) since G1 and G2 are standardized, if s = , then t = , which means (x1 , x2 ) = (x1,0 , x2,0 ). Clearly, we have the following expression: (hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ), ). If s 6= , then by Definition 5 and the assumption that Σ1 ∩ Σ2 ⊆ Σ 0 , we get (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ), s). Thus, in either case we have (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ), s). We now show that N(G1 / ≈ ˆ )×(G2 / ≈ ˆ ) (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ⊆ Σ1
Σ2
∈ N(G1 / ≈Σˆ )×(G2 / ≈Σˆ ) (hx1 iΣˆ 1 , 2 1 hx2 iΣˆ 2 ). Then there exists t 0 ∈ Σ ∗ with P (t 0 ) = s0 such that ξ1 × ξ2 ((x1 , x2 ), t 0 ) ∩ (X1,m × X2,m ) 6= ∅. Since µ ∈ s0 , we have µ ∈ t 0 . Thus, P (t 0 ) 6= . By Definition 5, we have ξ 0 (h(x1 , x2 )iΣ 0 , P (t 0 )) ∩ (X1,m × X2,m )/ ≈Σ 0 6= ∅. Thus, s0 ∈ N(G1 ×G2 )/ ≈Σ 0 (h(x1 , x2 )iΣ 0 ), ⊆ N(G1 ×G2 )/ ≈Σ 0 namely N(G1 / ≈ ˆ )×(G2 / ≈ ˆ ) (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) Σ1 Σ2 0 (h(x1 , x2 )iΣ ) Since. N(G1 ×G2 )/ ≈Σ 0 (h(x1 , x2 )iΣ 0 ). Let s0
(hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ (X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 ) ⇐⇒ (x1 , x2 ) ∈ X1,m × X2,m ⇐⇒ h(x1 , x2 )iΣ 0 ∈ (X1,m × X2,m )/ ≈Σ 0 we have (G1 × G2 )/ ≈Σ 0 v (G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 ). To show (G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 ) v (G1 × G2 )/ ≈Σ 0 , we first show that B((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )) ⊆ B((G1 × G2 )/ ≈Σ 0 ). Let s ∈ B((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )). Then there exist t ∈ (Σ1 ∪ Σ2 )∗ and (x1 , x2 ) ∈ ξ1 × ξ2 ((x1,0 , x2,0 ), t ) such that (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ), P (s)), and ∗
(∀s0 ∈ Σ 0 ) ξ10 × ξ20 ((hx1 iΣˆ 1 , hx2 iΣˆ 2 ), s0 ) ∩ ((X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 )) = ∅.
(A.1)
From expression (A.1) we have (x1 , x2 ) 6∈ X1,m × X2,m . Since G1 and G2 are standardized, from the fact that Σ1 ∩ Σ2 ⊆ Σ 0 we have h(x1 , x2 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , s). We claim that h(x1 , x2 )iΣ 0 is a
blocking state of (G1 × G2 )/ ≈Σ 0 . Otherwise, there exists s0 ∈ Σ 0 such that
∗
Clearly, ξ (x0 , τ s00 ) = η(y0 , s00 ). Since ∆ ⊆ Σ and Xm = Ym , we get that
ξ 0 (h(x1 , x2 )iΣ 0 , s0 ) ∩ ((X1,m × X2,m )/ ≈Σ 0 ) 6= ∅.
(∃x ∈ Y ) x ∈ η(y0 , s00 ) ∧ (∀s0 ∈ ∆∗ ) η(x, s0 ) ∩ Ym = ∅
Since G1 and G2 are standardized, we get that
which means s00 ∈ B(×i∈I Ai ) 6= ∅. In either case we have B(×i∈I Ai ) 6= ∅, contradicting the assumption that B(×i∈I Ai ) = ∅. Thus, the ONLY IF must be true.
ξ 0 (h(x1 , x2 )iΣ 0 , s0 µ) ∩ ((X1,m × X2,m )/ ≈Σ 0 ) 6= ∅. Clearly, Pˆ i (s0 µ) 6= . Thus, there exists t 0 ∈ Σ ∗ with P (t 0 ) = s0 µ such that ξ1 × ξ2 ((x1 , x2 ), t 0 µ) ∩ (X1,m × X2,m ) 6= ∅. Since
Pˆ i (s µ) 6= for i = 1, 2 and Σ1 ∩ Σ2 ⊆ Σ , we have ξ1 × ξ20 ((hx1 iΣˆ 1 , hx2 iΣˆ 2 ), s0 µ) ∩ ((X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 )) 6= ∅, which contradicts expression (A1). Thus, the claim is true, namely s ∈ B((G1 × G2 )/ ≈Σ 0 ). Let s ∈ N ((G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 )). For any (x1 , x2 ) ∈ X1 × X2 with (hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ ξ10 × ξ20 ((hx1,0 iΣˆ 1 , hx2,0 iΣˆ 2 ), s) and 0
(∃t ∈ Σ ∗ ) P (t ) = s ∧ (x1 , x2 ) ∈ ξ ((x1,0 , x2,0 ), t ) since G1 and G2 are standardized, if s = , then t = , which means (x1 , x2 ) = (x1,0 , x2,0 ). Clearly, we have h(x1,0 , x2,0 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , ). If s 6= , by Definition 5 and Σ1 ∩ Σ2 ⊆ Σ 0 , we get h(x1 , x2 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , s). Thus, in either case we have h(x1 , x2 )iΣ 0 ∈ ξ 0 (h(x1,0 , x2,0 )iΣ 0 , s). We now show N(G1 ×G2 )/ ≈Σ 0 (h(x1 , x2 )iΣ 0 ) ⊆ N(G1 / ≈ ˆ )×(G2 / ≈ ˆ ) (hx1 iΣˆ 1 , hx2 iΣˆ 2 ). Σ Σ 1
2
Let s0 ∈ N(G1 ×G2 )/ ≈Σ 0 (h(x1 , x2 )iΣ 0 ). Then there exists t 0 ∈ Σ ∗ with P (t 0 ) = s0 such that ξ1 × ξ2 ((x1 , x2 ), t 0 ) ∩ (X1,m × X2,m ) 6= ∅.
Since µ ∈ s0 , we have µ ∈ t 0 . Thus, Pˆ i (P (t 0 )) 6= (i = 1, 2). By Definition 5 and Σ1 ∩ Σ2 ⊆ Σ 0 , we have ξ10 × ξ20 ((hx1 iΣˆ 1 , hx2 iΣˆ 2 ), P (t 0 )) ∩ ((X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 )) 6= ∅. Thus, s0 ∈ N(G1 / ≈ ˆ )×(G2 / ≈ ˆ ) (hx1 iΣˆ 1 , hx2 iΣˆ 2 ), namely N(G1 ×G2 )/ ≈Σ 0 Σ1
Σ2
(h(x1 , x2 )iΣ 0 ) ⊆ N(G1 / ≈Σˆ
1
)×(G2 / ≈Σˆ ) (hx1 iΣ ˆ 1 , hx2 iΣ ˆ 2 ).
Since
2
(hx1 iΣˆ 1 , hx2 iΣˆ 2 ) ∈ (X1,m / ≈Σˆ 1 ) × (X2,m / ≈Σˆ 2 ) ⇐⇒ h(x1 , x2 )iΣ 0 ∈ (X1,m × X2,m )/ ≈Σ 0 we have (G1 / ≈Σˆ 1 ) × (G2 / ≈Σˆ 2 ) v (G1 × G2 )/ ≈Σ 0 .
Proof of Lemma 15. Let ∆ = i∈I ∆i , Σ = i∈I Σi , η = η1 × · · ·×ηn , ξ = ξ1 ×· · ·×ξn , Y = Y1 ×· · ·× Yn , Ym = Y1,m ×· · ·× Yn,m , X = X1 × · · · × Xn , Xm = X1,m × · · · × Xn,m , y0 = (y1,0 , . . . , yn,0 ) and x0 = (x1,0 , . . . , xn,0 ). Clearly, Xm = Ym and y0 ∈ ξ (x0 , τ ). To show the IF part, suppose it is not true. Then B(×i∈I Gi ) = ∅ but B(×i∈I Ai ) 6= ∅, which means there exist s ∈ ∆∗ and y ∈ Y such that
S
S
y ∈ η(y0 , s) ∧ (∀s0 ∈ ∆∗ ) ∆(y, s0 ) ∩ Ym = ∅. By construction of {Gi |i ∈ I }, we have y ∈ X such that y ∈ ξ (x0 , τ s) ∧ (∀s0 ∈ Σ ∗ ) ξ (y, s0 ) ∩ Xm = ∅ which means τ s ∈ B(×i∈I Gi ) 6= ∅, contradicting the assumption that B(×i∈I Gi ) = ∅. Thus, the IF part must be true. To show the ONLY IF part, suppose it is not true. Then B(×i∈I Gi ) 6= ∅ but B(×i∈I Ai ) = ∅, which means, (∃s ∈ Σ ∗ ) (∃x ∈ X ) x ∈ ξ (x0 , s) ∧ (∀s0 ∈ Σ ∗ ) ξ (x, s0 ) ∩ Xm = ∅. There are two cases to consider. Case 1: s = . Then x0 is a blocking state in ×i∈I Gi because ξ (x0 , ) = {x0 }, which means y0 ∈ ξ (x0 , τ ) is also a blocking state in ×i∈I Gi . By the construction of {Gi |i ∈ I }, we get y0 is a blocking state in ×i∈I Ai . Thus, ∈ B(×i∈I Ai ) 6= ∅. Case 2: s 6= . Then s = τ s00 for some s00 ∈ ∆∗ , which means there exists x ∈ X such that x ∈ ξ (x0 , τ s00 ) and
(∀s0 ∈ Σ ∗ ) ξ (x, s0 ) ∩ Xm = ∅.
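To make the blocking-state notion used in these proofs concrete, the following sketch checks nonconflict of two nondeterministic automata by brute force: it builds the full synchronous product $G_1 \times G_2$ and verifies that every reachable product state is coreachable, i.e. that $B(G_1 \times G_2) = \emptyset$. It is only an illustration of the definitions under encoding assumptions made here (the NFA class, the sync_product helper and the event-sharing convention are not taken from the paper), not the sequential abstraction procedure itself.

# A minimal illustrative sketch (not the authors' implementation): a brute-force
# test of B(G1 x G2) = {} for two nondeterministic automata, obtained by building
# the full synchronous product and checking that every reachable product state is
# coreachable. The NFA encoding and helper names are assumptions made here.

from collections import deque


class NFA:
    def __init__(self, alphabet, delta, init, marked):
        # delta maps (state, event) to a set of successor states (nondeterminism)
        self.alphabet = set(alphabet)
        self.delta = {key: set(succ) for key, succ in delta.items()}
        self.init = init
        self.marked = set(marked)

    def step(self, x, e):
        return self.delta.get((x, e), set())


def sync_product(g1, g2):
    """Synchronous product: shared events move jointly, private events interleave.
    Returns the reachable state set, transition map and marker states."""
    shared = g1.alphabet & g2.alphabet
    init = (g1.init, g2.init)
    delta, reachable = {}, set()
    queue = deque([init])
    while queue:
        x1, x2 = state = queue.popleft()
        if state in reachable:
            continue
        reachable.add(state)
        for e in g1.alphabet | g2.alphabet:
            if e in shared:
                succ = {(y1, y2) for y1 in g1.step(x1, e) for y2 in g2.step(x2, e)}
            elif e in g1.alphabet:
                succ = {(y1, x2) for y1 in g1.step(x1, e)}
            else:
                succ = {(x1, y2) for y2 in g2.step(x2, e)}
            if succ:
                delta[(state, e)] = succ
                queue.extend(succ)
    marked = {(x1, x2) for (x1, x2) in reachable
              if x1 in g1.marked and x2 in g2.marked}
    return reachable, delta, marked


def blocking_states(states, delta, marked):
    """Reachable states from which no marker state is reachable (backward search)."""
    rev = {}
    for (x, _), succ in delta.items():
        for y in succ:
            rev.setdefault(y, set()).add(x)
    coreachable, queue = set(marked), deque(marked)
    while queue:
        y = queue.popleft()
        for x in rev.get(y, ()):
            if x not in coreachable:
                coreachable.add(x)
                queue.append(x)
    return states - coreachable


def nonconflicting(g1, g2):
    """True iff every reachable state of G1 x G2 can reach a marker state."""
    return not blocking_states(*sync_product(g1, g2))


# Toy example: G1 and G2 synchronize on event 'b'; one b-successor of G2 deadlocks.
g1 = NFA({'a', 'b'}, {(0, 'a'): {1}, (1, 'b'): {0}}, 0, {0})
g2 = NFA({'b', 'c'}, {(0, 'b'): {1, 2}, (1, 'c'): {0}}, 0, {0})
print(nonconflicting(g1, g2))  # False: product states (0, 2) and (1, 2) are blocking

This deliberately naive check enumerates the entire reachable product state space, which is exactly what the sequential abstraction procedure is designed to avoid; it is included only as an executable reading of the blocking-state definition used above.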
Rong Su received the M.A.Sc. and Ph.D. degrees, both in electrical engineering, from the University of Toronto, Toronto, Canada, in 2000 and 2004, respectively. He is currently affiliated with the Systems Engineering Group in the Department of Mechanical Engineering at Eindhoven University of Technology in the Netherlands. His research interests include modeling, fault diagnosis and supervisory control of discrete-event systems. Dr. Su has been a member of the IFAC Technical Committee on Discrete Event and Hybrid Systems (TC 1.3) since 2005.
Jan H. van Schuppen received the M.Sc. degree in applied physics from the Delft University of Technology, Delft, The Netherlands, in 1970, and the Ph.D. degree in electrical engineering from the University of California, Berkeley, in 1973. He is affiliated with the Research Institute Centrum voor Wiskunde en Informatica (CWI), Amsterdam, The Netherlands. His research interests include control of hybrid and discrete-event systems, stochastic control, realization, and system identification. In applied research his interests include engineering problems of control of motorway traffic, of communication networks, and control and system theory for the life sciences. Dr. Van Schuppen is Editor-in-Chief of the journal Mathematics of Control, Signals, and Systems, was Associate Editor-at-Large of the IEEE Transactions on Automatic Control, and was Department Editor of the journal Discrete Event Dynamic Systems.
Jacobus E. Rooda received the M.Sc. degree from Wageningen University of Agriculture Engineering and the Ph.D. degree from Twente University, The Netherlands. Since 1985 he has been Professor of (Manufacturing) Systems Engineering at the Department of Mechanical Engineering of Eindhoven University of Technology, The Netherlands. His research interests are the modeling and analysis of manufacturing systems, especially the control of (high-tech) manufacturing lines and the supervisory control of high-tech (manufacturing) machines.
Albert T. Hofkamp completed his master's studies at the University of Twente, Enschede, The Netherlands, in 1995. He received his Ph.D. degree in Design from the Systems Engineering Group in the Department of Mechanical Engineering at Eindhoven University of Technology, The Netherlands, in 2001, for a real-time implementation of the Chi specification language. Still working in the same group, he implements simulators, compilers, and transformation tools, and helps researchers realize their ideas in software.