JID:SCICO AID:1658 /FLA
[m3G; v 1.120; Prn:13/12/2013; 13:57] P.1 (1-34)
Science of Computer Programming ••• (••••) •••–•••
A sound and complete theory of graph transformations for service programming with sessions and pipelines

Liang Zhao a,∗, Roberto Bruni b, Zhiming Liu c

a Institute of Computing Theory and Technology, Xidian University, 2 South Taibai Road, Xi'an 710071, China
b Department of Computer Science, University of Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy
c Software Engineering Group, School of Computing, Telecommunication and Networks, Birmingham City University, Birmingham B4 7XG, UK
Highlights

• Algebra of hierarchical graphs that supports Double-Pushout graph transformations.
• Representation of service systems as hierarchical graphs.
• Representation of behaviors of service systems as graph transformation rules.
• Soundness and completeness of graph transformation rules.
Article info

Article history:
Received 31 October 2011
Received in revised form 16 August 2013
Accepted 11 November 2013
Available online xxxx

Keywords: Process calculus; Hierarchical graph; Graph transformation
Abstract

Graph transformation techniques, the Double-Pushout (DPO) approach in particular, have been successfully applied in the modeling of concurrent systems. In this area, a research thread has addressed the definition of concurrent semantics for process calculi. In this paper, we propose a theory of graph transformations for service programming with sophisticated features such as sessions and pipelines. Through a graph representation of CaSPiS, a recently proposed process calculus, we show how graph transformations can cope with advanced features of service-oriented computing, such as several logical notions of scoping together with the interplay between linking and containment. We first exploit a graph algebra and set up a graph model that supports graph transformations in the DPO approach. Then, we show how to represent CaSPiS processes as hierarchical graphs in the graph model and their behaviors as graph transformation rules. Finally, we provide the soundness and completeness results of these rules with respect to the reduction semantics of CaSPiS.

© 2013 Elsevier B.V. All rights reserved.
1. Introduction

Process calculi are a flexible mathematical formalism that provides a convenient abstraction for concurrent systems, in the same way as the λ-calculus lays the foundation of sequential computation. A process calculus generally has two main ingredients: an algebra of computational entities, including a set of structural congruence axioms, and an operational semantics for modeling the evolution of processes. The entities are called processes, and they are constructed by primitives such as communication and parallel composition. The operational semantics is given either as a labeled transition system, or as a reduction system that provides the basis for studying several notions of behavioral equivalence over processes.
* Corresponding author. Tel./fax: +86 29 88202883.
E-mail addresses: [email protected] (L. Zhao), [email protected] (R. Bruni), [email protected] (Z. Liu).
0167-6423/$ – see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.scico.2013.11.029
Process calculi have become quite mature in the study of traditional concurrent and communicating systems [1,2], and have even advanced to the specification and verification of mobile systems [3]. However, traditional process calculi do not match certain advanced features of service-oriented computing (SOC), such as the nested scoping of sessions, pipelining workflows, and the interplay between linking and containment. Although there have been attempts to use the π-calculus [3] as a model of service systems [4,5], the modeling is at quite a low level, and different first-class aspects of SOC, such as client-service interaction and orchestration, are mixed up and obfuscated. The low-level communication primitives of the π-calculus make the analysis quite complicated. In particular, with the same communication pattern used to encode many different aspects, it is almost infeasible to re-use static analysis techniques to provide any guarantees about safe interactions.

In order to improve this situation, a few service-oriented calculi have been proposed. Service Centered Calculus (SCC) [6] introduces service definition, service invocation and session handling as first-class modeling elements, so as to model service systems at a better level of abstraction. However, SCC has only a rudimentary mechanism for handling session closure, and it has no mechanism for orchestrating values arising from different activities. These aspects are improved in the Calculus of Sessions and Pipelines (CaSPiS) [7]. CaSPiS supports most of the key features of SCC, but the notions of session and pipeline play a more central role. A session has two sides (or participating processes) and is equipped with the protocols followed by each side during an interaction between the two sides. A pipeline permits orchestrating the flow of data produced by different sessions. The concept of pipeline is inspired by Orc [8], a basic and elegant programming model for the orchestration of computational entities.
A structured operational semantics of CaSPiS is given in [7] based on labeled transitions. CaSPiS also has a simpler and more compact reduction semantics [9], on which we focus, that deals with the silent actions of processes in the labeled transition system. In [9], the relation between CaSPiS and the π-calculus, in terms of their mutual embeddings, is also discussed.

As illustrated by a large body of literature, graphs and graph transformations provide useful insights into distributed, concurrent and mobile systems [10–12]. Following this direction, we are going to define a graph-based concurrent semantics for CaSPiS. This can help, for example, to record causal dependencies between interactions and to exploit such information for detecting the possible sources of faults and misbehaviors. For this, we need to address two issues. The first is that sessions and pipelines introduce a strong hierarchical nature into a service-oriented system, in both its static structure and its dynamic behavior. The hierarchical structure also changes during the evolution of the system, due to the dynamic creation of nested sessions through the invocation of services, and the dynamic creation of processes with the execution of pipelines. Therefore we must deal with hierarchical graphs. The second is that the graph transformation semantics must be "compatible" with the existing interleaving one.

In this paper, we propose a hierarchical graph representation of service systems specified by CaSPiS and show how to use graph transformation rules to characterize their behaviors. More precisely, our first contribution is to set up a model of hierarchical graphs by exploiting a suitable graph algebra. In this model, graph transformations are studied following the well-established Double-Pushout (DPO) approach [13]. Then, we map CaSPiS processes to hierarchical graphs in the graph algebra and define a graph transformation system with a few sets of graph transformation rules.
As a main result of this paper, we prove that the graph transformation system is not only sound but also complete with respect to the reduction semantics of CaSPiS. This paper extends the previous workshop version [14] in several aspects. Here, we model more sophisticated features of CaSPiS, including replication processes and constructed values. This extension improves the expressiveness of our framework, enabling us, for example, to reason about persistent services that are always available for invocation. Another advance made in this paper is the provision of a full graph transformation framework of reduction, including rules for process copy, data assignment and garbage collection. Furthermore, this paper provides the proof of soundness and completeness of the whole graph transformation system.

There are various models of graphs and graph transformations that aim at characterizing and visualizing distributed, concurrent and mobile systems, including service systems. Gadducci proposes a graphical implementation of the π-calculus in [10] based on term graph rewriting [15,16]. In this work, processes of the π-calculus, including recursive ones, are encoded into term graphs, which are directed acyclic hypergraphs over a chosen signature, representing "terms with shared sub-terms" over the signature. The use of term graphs makes it straightforward to re-use standard graph rewriting techniques, such as the DPO approach, which leads to a non-deterministic concurrent semantics. The soundness and completeness of the encoding is then verified by proving the equivalence of the concurrent semantics and the original reduction semantics of the π-calculus. Milner provides a behavioral semantics for condition-event Petri nets [17] in [18] based on bigraphs and their reactive systems [19,11]. Generally, a bigraph consists of two orthogonal structures: a place graph and a link graph, representing the locality and connectivity of agents, respectively.
In this work, a condition-event Petri net is modeled as a bigraph whose place graph is flat, and then the behavior of the net is modeled as a bigraphical reactive system equipped with a labeled transition system and an associated bisimilarity equivalence. This bisimilarity is shown to coincide with the original one of condition-event Petri nets. Another graph-based framework is presented by Hirsch and Tuosto [20] for specifying systems with high-level Quality of Service (QoS) aspects, where constraint-semirings [21] are used to describe QoS requirements of various criteria. The framework is based on Synchronized Hyperedge Replacement (SHR) [22,12], a hypergraph rewriting mechanism for modeling the reconfiguration of (possibly distributed) systems. In SHR, the behavior of a single edge is defined by the notion of production, which indicates how and under what condition an edge can be replaced by a generic graph. Then, global transitions are obtained by synchronizing applications of productions with compatible
conditions. A summary and comparison of graph models for distributed, concurrent and mobile systems can be found in the survey [23].

There is also some work that proposes and makes use of a graph algebra. Corradini and Gadducci introduce a preliminary algebra for term graphs [24] by showing that every term graph can be constructed from a small set of atomic term graphs, each of which is regarded as an atomic term, using two basic operations (composition and union). The algebra is then used to establish an isomorphism between term graphs and arrows of graph-substitution monoidal (gs-monoidal) categories. Bruni et al. present Architectural Design Rewriting (ADR) [25], a graph-based approach to the design of reconfigurable software architectures. In ADR, architectures are encoded as terms of a simple syntax of hierarchical graphs with a set of ad-hoc operators and atomic constructs. Based on this algebra, architectural reconfigurations are defined inductively using standard term rewriting techniques. Inspired by ADR, Bruni et al. provide an algebra of hierarchical graphs [26,27] with primitives for composition, node restriction and nesting. It is a high-level language for specifying graphs with node sharing and embedded structures, and is thus well suited for the representation of software systems where nesting and linking are key aspects. In this paper, we adopt the syntax of this algebra, but define a new semantic model in order to support graph transformations in the DPO approach. A similar graph syntax, namely the Algebra of Graphs with Nesting (AGN), can be found in [28]; it is built on graphs with nesting and restriction (NR-graphs). AGN is also equipped with primitives for composition, restriction and design hierarchy, but, compared with the algebra of hierarchical graphs, it considers two kinds of restricted nodes, local and global, and unifies the notions of edges and designs.
In addition, the correspondence between NR-graphs and AGN terms is established indirectly, by encoding them into term graphs and arrows of gs-monoidal categories, respectively, and using the isomorphism between term graphs and arrows of gs-monoidal categories [24]. By contrast, the relation between the algebra of hierarchical graphs and models of term graphs has not been explored yet. Another graph algebra, proposed by Grohmann and Miculan [29], is a typed language for the category of binding bigraphs, a generalization of the original pure bigraphs. Similar to the algebra of hierarchical graphs and AGN, the language has general graph constructs such as parallel composition and restriction, but it also has a few bigraph-specific primitives such as localization and globalization. The language is shown to be expressive, as certain of its sub-languages can be used to characterize the categories of pure, local and prime bigraphs. It can be tailored to formalize the graph models of SHR and ADR as well.

It is worth pointing out that the algebra of hierarchical graphs is also applied in [27] to encode a couple of process calculi that characterize systems with nested structures, including CaSPiS. But there, the focus is on the encoding of the states of systems rather than their behaviors. A step forward is made in [30], where standard forms of graph transformation rules are provided to model the reductions of processes. Nevertheless, each rule is defined in a context-sensitive way, i.e. it only deals with the case where the reduction occurs in a specific context. To handle reductions in all possible contexts, an infinite number of rules is needed. This problem is solved in our graph model, as we study graph transformation rules in the DPO approach, which are context-insensitive, i.e. one rule is enough to deal with one kind of reduction occurring in any possible context.

We introduce the calculus CaSPiS in Section 2, and our hierarchical graph model in Section 3.
In Section 4, we define the representation of CaSPiS processes as hierarchical graphs and of their behaviors as graph transformation rules, followed by a small example to illustrate the representation. We also present the soundness and completeness results of the graph transformation rules with respect to the reduction semantics of CaSPiS.

2. The calculus CaSPiS

This section introduces the key notions of the service-oriented calculus CaSPiS [7]. Let S, R and V be three disjoint infinite sets, respectively of service names, session names and variables. Assume also a set Σ of constructors f, each with a fixed arity ar(f). We allow constants in CaSPiS: a constant c is regarded as a constructor of arity 0. We use ũ to denote a sequence of elements, and ũ[j], |ũ| and {ũ} to denote the j-th element, the length and the set of elements of the sequence, respectively. A sequence of length 0 is the empty sequence, denoted ε. We do not distinguish between an element u and the sequence (u) of length 1.

We first introduce the fragment of CaSPiS without the replication of processes. The simplest process is the nil process 0, which does not do anything. A process P can be prefixed by a concretion ⟨V⟩ that generates a value V; a return ⟨V⟩↑ that returns a value V to the outside environment; or an abstraction (F) that is ready to receive a value matching the pattern F. Such a process is called a prefixed process. In a prefixed process, a value is simply a value variable x, or a constructed value f(Ṽ) composed of a sequence of values through a constructor. Similarly, a pattern can be a pattern variable of the form ?x or a constructed pattern f(F̃). The standard parallel composition P | Q is allowed. However, the choice operator "+", called summation, is limited to the nil process and prefixed processes. A service is declared by a service definition s.P and used by the environment through a service invocation s̄.Q.
A participant process of a session r is represented by r ▹ P, where P is the protocol process this participant follows. In CaSPiS, a session r can have only two participants, also called the two sides of the session. A process P can be pipelined with another process Q, denoted by P > Q, so that P can keep producing values for Q to consume. Service names, session names and variables can be restricted, as in the π-calculus [3], by (ν n)P. In this process, P is the scope of the restriction, i.e. (ν n) binds all the occurrences of the name n within P.
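The constructs just described admit a direct reading as an abstract syntax. The following Python sketch is our own illustration, not the paper's notation: all class names are assumptions, and it covers only the constructs informally introduced above.

```python
from dataclasses import dataclass

# Minimal AST for the basic CaSPiS constructs; names are ours, for illustration.

@dataclass(frozen=True)
class Nil:              # the nil process 0
    pass

@dataclass(frozen=True)
class Concretion:       # <V> P: generate a value, then continue as P
    value: str
    cont: object

@dataclass(frozen=True)
class Abstraction:      # (F) P: receive a value matching pattern F
    pattern: str
    cont: object

@dataclass(frozen=True)
class Parallel:         # P | Q
    left: object
    right: object

@dataclass(frozen=True)
class ServiceDef:       # s.P: service definition
    service: str
    body: object

@dataclass(frozen=True)
class ServiceInv:       # invocation side of service s
    service: str
    body: object

@dataclass(frozen=True)
class Pipeline:         # P > Q: P produces values for Q to consume
    producer: object
    consumer: object

@dataclass(frozen=True)
class Restriction:      # (nu n) P: n is bound in P
    name: str
    body: object

# Example: a service definition in parallel with a matching invocation.
system = Parallel(ServiceDef("s", Concretion("v", Nil())),
                  ServiceInv("s", Abstraction("?x", Nil())))
```

Frozen dataclasses make process terms immutable and hashable, which is convenient when terms are later compared up to structural congruence.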
(P | P′) | P″ ≡c P | (P′ | P″)        P | P′ ≡c P′ | P
(M + M′) + M″ ≡c M + (M′ + M″)        M + M′ ≡c M′ + M
M + 0 ≡c M        (ν n)(ν n′)P ≡c (ν n′)(ν n)P        (ν n)0 ≡c 0

Fig. 1. Basic congruence rules.

P | (ν n)Q ≡c (ν n)(P | Q)   if n ∉ fn(P)
((ν n)Q) > P ≡c (ν n)(Q > P)   if n ∉ fn(P)
r ▹ (ν n)P ≡c (ν n)(r ▹ P)   if n ≠ r

Fig. 2. Special congruence rules.
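To see the basic rules of Fig. 1 at work, congruence modulo the associativity and commutativity of parallel composition can be decided by flattening nested compositions into a multiset of components. The sketch below is ours, over an assumed tuple encoding of processes; it covers only the axioms for "|", not the summation or restriction rules.

```python
# Checking P ≡c Q modulo associativity/commutativity of parallel composition,
# by flattening '|' into a multiset of components.
# The tuple encoding ('par', P, Q) is an assumption for illustration only.

def flatten(p):
    """Collect the parallel components of p, left to right."""
    if isinstance(p, tuple) and p and p[0] == 'par':
        return flatten(p[1]) + flatten(p[2])
    return [p]

def congruent_ac(p, q):
    """True iff p and q have the same multiset of parallel components."""
    return sorted(map(repr, flatten(p))) == sorted(map(repr, flatten(q)))

# ((P1 | P2) | P3) ≡c (P3 | (P1 | P2)) by associativity and commutativity.
a = ('par', ('par', 'P1', 'P2'), 'P3')
b = ('par', 'P3', ('par', 'P1', 'P2'))
```

Sorting the string representations of the components gives a canonical form, so two terms are AC-congruent exactly when their canonical forms coincide.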
Definition 1 (Basic process). A basic CaSPiS process is a term generated by the syntax:
Process   P, Q ::= M | P | Q | s.P | s̄.P | r ▹ P | P > Q | (ν n)P
Sum       M ::= 0 | (F)P | ⟨V⟩P | ⟨V⟩↑P | M + M
Pattern   F ::= ?x | f(F̃)
Value     V ::= x | f(Ṽ)
where s ∈ S, r ∈ R, x ∈ V, f ∈ Σ and n ∈ S ∪ R ∪ V. We remark that the session construct r ▹ P is a runtime syntax: it should not be used to model the initial state of a system, but can be dynamically generated upon service invocation. We omit 0 in a prefixed term and write, for example, (?x)⟨x⟩ for (?x)⟨x⟩0.

For a pattern F, we use bn(F) to denote the set of its bound names, i.e. the names x such that ?x occurs in F. A name n occurring in a process P can be bound by either a restriction (ν n) or an abstraction (F) with n ∈ bn(F). Otherwise, it is a free name, and we use fn(P) to denote the set of free names of P. For a value V, we also use fn(V) to denote the set of variables occurring in V. Notice that a variable always occurs free in a value. As in the π-calculus [3], we do not distinguish between processes that are alpha-convertible, for example (?x)⟨x⟩⟨z⟩ and (?y)⟨y⟩⟨z⟩. We also have a set of structural congruence rules among processes. They are classified as basic rules of commutativity and associativity (shown in Fig. 1) and special rules for moving a restriction "forward" (shown in Fig. 2). It can be inferred that congruent processes have the same set of free names.

Reduction. The basic behavior of a process P is the communication and synchronization (called interactions) between its sub-processes. After an interaction, P evolves to another process Q. Such a step of evolution is called a reduction, denoted P → Q. The behaviors of prefixed processes, sum processes, parallel compositions and restrictions are similar to those in a traditional process calculus. A service definition process s.P and a service invocation process s̄.Q synchronize on the service s and its corresponding invocation s̄. After offering the service s, s.P evolves to a session process r ▹ P with a fresh session name r. Symmetrically, after the service invocation s̄, s̄.Q becomes the other session side r ▹ Q of r. For example, s.P | s̄.Q → (ν r)(r ▹ P | r ▹ Q).
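The free-name function fn(·) described above is a routine structural recursion over the syntax. The following Python fragment is an illustration of ours, over an assumed tuple encoding that covers only part of the constructs; it makes the two binding cases (restriction and abstraction) explicit.

```python
# Recursive computation of fn(P) for a fragment of the syntax, under an
# assumed tuple encoding: ('nil',), ('par', P, Q), ('res', n, P),
# ('abs', {bound names}, P), ('con', {names occurring in V}, P).

def fn(p):
    tag = p[0]
    if tag == 'nil':
        return set()
    if tag == 'par':                      # fn(P | Q) = fn(P) ∪ fn(Q)
        return fn(p[1]) | fn(p[2])
    if tag == 'res':                      # (nu n) P binds n in P
        return fn(p[2]) - {p[1]}
    if tag == 'abs':                      # (F) P binds bn(F) in P
        return fn(p[2]) - p[1]
    if tag == 'con':                      # <V> P: names of V occur free
        return p[1] | fn(p[2])
    raise ValueError(tag)

# (nu n)( <n, x>0 | (?y)<y>0 ): only x remains free.
p = ('res', 'n', ('par', ('con', {'n', 'x'}, ('nil',)),
                  ('abs', {'y'}, ('con', {'y'}, ('nil',)))))
```

The example reflects the remark above that congruent processes share free names: scope extrusion never changes the result of fn(·).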
When a session r starts, the protocols P and Q of the session sides r ▹ P and r ▹ Q become active and produce and receive values from each other. For example, r ▹ (?x)P | r ▹ ⟨y⟩Q → r ▹ P[y/x] | r ▹ Q. A pipelined process P > Q behaves as P, but keeps the new state of P pipelined with Q, until P produces a value. When P produces a value, a new instance of Q is created, which consumes the value produced by P and then runs in parallel with the original P and the instances of Q created earlier. For example, ⟨y⟩P > (?x)Q → (P > (?x)Q) | Q[y/x].

The formal definition of reduction needs the notion of process context, i.e. a process expression with "holes". Specifically, a context with k holes is a process term C[X1, . . . , Xk] generated by the syntax of Definition 1 but containing process variables X1, . . . , Xk. When replacing these process variables respectively by processes P1, . . . , Pk, we get a process C[P1, . . . , Pk]. For the context itself, we can omit the process variables and denote it as C[·, . . . , ·]. In most cases, we only need to consider contexts with one or two holes. A context is called static if none of its holes occurs in the scope of a dynamic process operator, which is either a service definition s.[·], a service invocation s̄.[·], a sum [·] + M or M + [·], a prefix (i.e. an abstraction, concretion or return) π[·], or the right-hand side of a pipeline P > [·]. A context is called session-immune (resp. restriction-immune) if none of its holes occurs in the scope of a session (resp. a restriction). Moreover, a 2-hole context is called restriction-balanced if its holes occur in the same restriction environment. For example, ((ν n)[·] | r ▹ [·]) > Q is not restriction-balanced, as only its first hole is bound by the restriction (ν n). Nevertheless, it is a static context.
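Reading a k-hole context as a function from k processes to a process makes the substitution C[P1, . . . , Pk] mechanical. A minimal sketch of ours, again over an assumed tuple encoding of process terms with explicit ('hole', i) markers:

```python
# A k-hole context C[X1, ..., Xk] read as a function from k processes to a
# process. The tuple encoding and the ('hole', i) marker are assumptions.

def context(template):
    """Build a plugging function from a tuple tree containing hole markers."""
    def plug(*procs):
        def go(t):
            if isinstance(t, tuple) and t and t[0] == 'hole':
                return procs[t[1]]          # substitute the i-th process
            if isinstance(t, tuple):
                return tuple(go(x) for x in t)
            return t
        return go(template)
    return plug

# A 2-hole context: [.]_0 | r > [.]_1
C = context(('par', ('hole', 0), ('session', 'r', ('hole', 1))))
result = C('P1', 'P2')                      # C[P1, P2]
```

A static-context check would walk the same template and reject any hole under a marker for a dynamic operator; it is omitted here for brevity.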
Following the discussion of the informal behavior of processes, we summarize the reduction rules for service definition, service invocation, session and pipelined processes in Fig. 3, where each rule shows a pair of processes P and Q such that P → Q. In these rules, it is required that C0[·] is static; that C[·,·] is static and restriction-balanced; and that C1[·] and C2[·] are static, session-immune and restriction-immune. So, there is no rule that allows a reduction to take place in a non-static context. In addition, the last four rules require that the substitution σ = match(F; V) exists, which is calculated from the pattern F
(Sync)
P ≡c C[s.P1, s̄.P2]
Q ≡c (ν r)C[r ▹ P1, r ▹ P2]
(r fresh for C[·,·], P1, P2)

(S-Sync)
P ≡c C[r ▹ (P | (⟨V⟩P1 + M1)), r ▹ C2[(F)P2 + M2]]
Q ≡c C[r ▹ (P | P1), r ▹ C2[P2 σ]]
(σ = match(F; V) exists)

(S-Sync-Ret)
P ≡c C[r ▹ (P | r′ ▹ C1[⟨V⟩↑P1 + M1]), r ▹ C2[(F)P2 + M2]]
Q ≡c C[r ▹ (P | r′ ▹ C1[P1]), r ▹ C2[P2 σ]]
(σ = match(F; V) exists)

(P-Sync)
P ≡c C0[(P | (⟨V⟩P1 + M1)) > ((F)P2 + M2)]
Q ≡c C0[P2 σ | ((P | P1) > ((F)P2 + M2))]
(σ = match(F; V) exists)

(P-Sync-Ret)
P ≡c C0[(P | r ▹ C1[⟨V⟩↑P1 + M1]) > ((F)P2 + M2)]
Q ≡c C0[P2 σ | ((P | r ▹ C1[P1]) > ((F)P2 + M2))]
(σ = match(F; V) exists)

Fig. 3. Reduction rules.
and the value V. For example, match(f(?x, ?y); f(z, g(1))) = [z, g(1)/x, y], while match(f(?x, ?y); g(2)) does not exist, as the pattern f(?x, ?y) and the value g(2) do not match.

Let us consider the example process Q | (Cl > (?y)P), where Q = req.(ν n)(⟨n⟩ + ⟨null⟩) is a service that allocates new resources (if available), Cl = req̄.(?x)⟨x⟩↑ is a client of Q, and P is a generic process. The process can evolve as illustrated below.
Q | (Cl > (?y)P)
→ (ν r)( r ▹ (ν n)(⟨n⟩ + ⟨null⟩) | (r ▹ (?x)⟨x⟩↑ > (?y)P) )    (Sync)
≡c (ν r)(ν n)( r ▹ (⟨n⟩ + ⟨null⟩) | (r ▹ (?x)⟨x⟩↑ > (?y)P) )
→ (ν r)(ν n)( r ▹ 0 | (r ▹ ⟨null⟩↑ > (?y)P) )    (S-Sync)
→ (ν r)(ν n)( r ▹ 0 | (r ▹ 0 > (?y)P) | P[null/y] )    (P-Sync-Ret)
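The side condition σ = match(F; V) used by the last four reduction rules can be computed by a straightforward recursion on the pattern. The sketch below is our own rendering, with an assumed encoding: a pattern is ('?', x) for ?x or (f, [subpatterns]) for a constructed pattern, and a value is a variable name string or (f, [subvalues]).

```python
# match(F; V): compute the substitution binding the pattern variables of F
# to the corresponding subterms of V, or None if F and V do not match.
# The encoding of patterns and values is ours, for illustration only.

def match(F, V):
    if F[0] == '?':                       # pattern variable ?x matches anything
        return {F[1]: V}
    f, subpats = F                        # constructed pattern f(F1, ..., Fk)
    if not (isinstance(V, tuple) and V[0] == f and len(V[1]) == len(subpats)):
        return None                       # constructors or arities differ
    subst = {}
    for Fi, Vi in zip(subpats, V[1]):
        s = match(Fi, Vi)
        if s is None:
            return None
        subst.update(s)
    return subst

# match(f(?x, ?y); f(z, g(1))) = [z, g(1) / x, y]
ok = match(('f', [('?', 'x'), ('?', 'y')]), ('f', ['z', ('g', [1])]))
# match(f(?x, ?y); g(2)) does not exist
bad = match(('f', [('?', 'x'), ('?', 'y')]), ('g', [2]))
```

Since patterns are linear in the examples above, merging the sub-substitutions with a plain dictionary update is enough; a non-linear pattern discipline would require a consistency check instead.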
Notice that r ▹ 0 is inert, and therefore r ▹ 0 > (?y)P is also inert; the reached process thus amounts essentially to P[null/y]. An analogous computation could have led (up to the presence of inert processes) to the process (ν n)P[n/y].

Extension with replications. A service system may contain a service definition that can be invoked repeatedly, or an abstraction that is always ready to receive a value and take corresponding actions. In order to specify such systems, a new process construct, replication, is introduced into CaSPiS.

Definition 2 (Process). The syntax of CaSPiS processes is an extension of the basic one in Definition 1 given by:
Process   P, Q ::= . . . (Basic constructs) | !P (Replication)
A replication ! P is well-formed if its body P is either a service definition, an abstraction or a sum of abstractions. In the following discussion, a replication always means a well-formed one unless it is stated otherwise. The newly introduced construct ![·] is a dynamic operator. So, no reduction is allowed to occur inside the body of a replication. Instead, the behavior of a replication is defined by a new special congruence rule.
!P ≡c P | !P

A replication can take part in a reduction (only) indirectly, i.e. after it is "unfolded" by the new congruence rule. For example, given P | Q → R, we have P | !Q ≡c P | Q | !Q → R | !Q.

3. Algebra of hierarchical graphs

A CaSPiS process can be represented as a graph. For example, in Fig. 4(a), the graph of P = ⟨x⟩⟨y⟩↑ shows that P generates a value x, returns a value y and then becomes the nil process. The • nodes represent the states of the control flow, and the nodes x and y show that x and y are the values generated and returned by the concretion and return edges. The graph in Fig. 4(b) shows that process Q generates a value z, but this value is restricted and thus invisible from outside. A restricted value node is therefore not named in the graph. Notice that these graphs are hypergraphs, in which an edge can be associated with one or more nodes. A hypergraph shows the control flow and data flow, as well as the structure of a process, through different types of nodes (e.g. value nodes and • control nodes) and different types of edges (e.g. Ret, Res and Con).

3.1. Graph terms

For the graph representation of all CaSPiS processes and their reductions, we use a graph algebra to specify hypergraphs and study their algebraic properties. Generally, a graph algebra consists of a grammar that defines a set of terms for specification and a semantic model that interprets each term as a graph. As for the grammar, we adopt the one of the graph algebra
Fig. 4. Graph representations of two processes.
provided in [26] that is suitable to specify graphs of service systems. Nevertheless, we define a novel semantic model to support graph transformations in the DPO approach. For simplicity, we first present the grammar of graph terms without hierarchy. Let N be a set of nodes and L be a set of edge labels. Definition 3 (Graph term). A graph term is generated by the grammar
Graph   G ::= 0 | v | l(ṽ) | G | G | (ν v)G
where v ∈ N and l ∈ L. Term 0 specifies the empty graph, v specifies the graph with only one node, named v, l(ṽ) specifies the graph of an l-labeled edge attached to the nodes ṽ through its tentacles, G1 | G2 is the composition of two graphs, and (ν v)G is a restriction that binds the node v in G so that it is invisible from outside. The graph terms that specify the graph representations of processes P and Q in Fig. 4, denoted respectively by ⟦P⟧ and ⟦Q⟧, are as follows.
⟦P⟧ = (ν p)(ν p1)(ν p2)(Con(p, x, p1) | Ret(p1, y, p2) | Nil(p2))
⟦Q⟧ = (ν p)(ν p1)(ν p2)(ν z)(Res(p, z, p1) | Con(p1, z, p2) | Nil(p2))

We say a node in a graph term is free if it is not in the scope of a restriction; otherwise it is bound. As shown in Fig. 4, free nodes of a graph term are explicitly labeled by their names in the hypergraph, while bound nodes are not, as their naming is not significant. Besides, for an edge with more than one tentacle, we usually order its tentacles clockwise, with the first one drawn as an incoming arrow and the others as outgoing arrows. If necessary, we explicitly give their order by 1, 2, . . . , k. For an edge with only one tentacle, it is not significant whether the tentacle is shown as incoming or outgoing.

Hierarchical graph terms. The grammar presented above is suitable to describe single CaSPiS processes that represent closed systems. However, it is not sufficient to deal with open systems and their compositions. This motivates the extension of the grammar with hierarchical graph terms, through the notion of design. Assume a set D of design labels.

Definition 4 (Hierarchical graph term). A hierarchical graph term is a graph or a design generated by the grammar
Graph    G ::= 0 | v | l(ṽ) | G | G | (ν v)G | D⟨ṽ⟩
Design   D ::= L_w̃[G]

where v ∈ N, l ∈ L and L ∈ D.
A design D = L_w̃[G] exposes a sequence w̃ of free nodes of its body graph G as its interface nodes, interface for short. Given a design D, a design edge D⟨ṽ⟩ is obtained from D by attaching its interface nodes to the nodes ṽ. Notice that the body graph G of a design L_w̃[G] may contain design edges of different designs. With nested designs, a graph term, as well as the hypergraph it specifies, is indeed hierarchical.

Recall that in Fig. 4 processes P and Q are represented as closed systems that cannot be composed. With the notion of design, we can represent each of them as an open system. Instead of restricting it, we expose the first control node p as the interface of a P-labeled design (P means "process").
⟦P⟧ = P_p[(ν p1)(ν p2)(Con(p, x, p1) | Ret(p1, y, p2) | Nil(p2))]
⟦Q⟧ = P_p[(ν p1)(ν p2)(ν z)(Res(p, z, p1) | Con(p1, z, p2) | Nil(p2))]

The hypergraphs that the two designs specify are re-depicted at the top and bottom of Fig. 5(a), respectively. We can then compose them by linking their interface nodes with an edge Par (Par means "parallel composition"), which also has a third node p to interface with the outside. In this way, we get the hypergraph that represents the parallel composition P | Q, see Fig. 5(a). It can be specified by a design.
⟦P | Q⟧ = P_p[(ν p1)(ν p2)(Par(p, p1, p2) | ⟦P⟧⟨p1⟩ | ⟦Q⟧⟨p2⟩)]

A design plays two roles in the graph representation of a process. First, it represents the interface of a process, through which the process communicates with the environment. For example, the hypergraph in Fig. 5(a) contains three designs,
Fig. 5. Graph representation with designs.
represented by three auxiliary edges, which we call abstract edges, labeled respectively by P^1_1, P^2_1 and P^3_1. The use of abstract edges and their labels will be clarified in the formal interpretation of graph terms. The second role of a design is to represent a service or a session that contributes to the hierarchy of the whole process. When a design is introduced merely for the first role, it can be made flat by collapsing the corresponding abstract edges. For example, the hypergraph in Fig. 5(b) is the flat version of that in Fig. 5(a). In fact, the design corresponding to the P^1_1-labeled edge can be further flattened after it is composed with nodes and edges in the environment.

3.2. Interpretation of graph terms as hypergraphs

A hypergraph has different types of nodes for modeling different entities. In Fig. 4, for example, some nodes represent data while the • nodes represent states of the control flow. Assume a set T of node types, so that each node v has a type T(v) ∈ T. Besides, each edge label or design label l has an arity AR(l) and a type T(l), which is the sequence of types of the nodes that the edge connects. Thus |T(l)| = AR(l). An edge l(ṽ) is well-typed if ṽ is of type T(l); a design D = L_w̃[G] is well-typed if the sequence w̃ of its interface nodes is of type T(L); and an L-labeled design edge D⟨ṽ⟩ is well-typed if ṽ is of type T(L). We only define the interpretation of well-typed graph terms, i.e. those in which all occurrences of edges, designs and design edges are well-typed; a graph term by default means a well-typed one. To syntactically indicate in a graph term whether a design is to be interpreted as a flat graph, we assume a designated set F ⊆ D of labels for flat designs.

Definition 5 (Interpretation of graph terms).
A graph term G is interpreted as a hypergraph H(G) = ⟨N(G), E(G), AE(G), fn(G), in(G)⟩ defined as follows, where N(G) is the set of node names, E(G) the set of edges, AE(G) the set of abstract edges, fn(G) the set of free node names, and in(G) the sequence of nodes exposed to the environment. We call H(G) the hypergraph of G.
H(0) = ⟨∅, ∅, ∅, ∅, ε⟩
H(v) = ⟨{v}, ∅, ∅, {v}, ε⟩
H(l(v̄)) = ⟨{v̄}, {l(v̄)}, ∅, {v̄}, ε⟩
H((ν v)G₁) = ⟨N(G₁), E(G₁), AE(G₁), fn(G₁) \ {v}, ε⟩
H(G₁ | G₂) = ⟨N(G₁) ∪ N(G₂), E(G₁) ∪ E(G₂), AE(G₁) ∪ AE(G₂), fn(G₁) ∪ fn(G₂), ε⟩   (N(G₁) ∩ N(G₂) = fn(G₁) ∩ fn(G₂))
H(L_w̄[G₁]) = ⟨N(G₁) ∪ {w̄′}, E(G₁), AE(G₁) ∪ {Lᵏⱼ(w̄[j], w̄′[j]) | 1 ≤ j ≤ |w̄|}, fn(G₁) \ {w̄}, w̄′⟩   (w̄′ fresh, T(w̄′) = T(w̄), k fresh for L)
H(L_w̄[G₁]⟨v̄⟩) = ⟨N(G₁)[v̄/w̄], E(G₁)[v̄/w̄], AE(G₁)[v̄/w̄], (fn(G₁) \ {w̄}) ∪ {v̄}, ε⟩   (L ∈ F)
H(L_w̄[G₁]⟨v̄⟩) = ⟨N(D)[v̄/in(D)], E(D)[v̄/in(D)], AE(D)[v̄/in(D)], fn(D) ∪ {v̄}, ε⟩   (L ∉ F, D = L_w̄[G₁])
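The flat clauses of this interpretation can be transcribed almost literally. The following is a minimal Python sketch of the cases for 0, a node, an edge, restriction and composition; the Hypergraph record, function names and node names are ours, and abstract edges, designs and interfaces are omitted.

```python
# Sketch of the flat cases of Definition 5, on an ad-hoc record.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypergraph:
    nodes: frozenset
    edges: frozenset   # each edge is (label, tuple_of_attached_nodes)
    free: frozenset    # free node names

NIL = Hypergraph(frozenset(), frozenset(), frozenset())        # H(0)

def node(v):                       # H(v): a single free node
    return Hypergraph(frozenset({v}), frozenset(), frozenset({v}))

def edge(label, *vs):              # H(l(v...)): one edge plus its nodes
    return Hypergraph(frozenset(vs), frozenset({(label, vs)}), frozenset(vs))

def restrict(v, g):                # H((nu v)G): v is no longer free
    return Hypergraph(g.nodes, g.edges, g.free - {v})

def par(g1, g2):                   # H(G1|G2): shared nodes must be shared free names
    assert g1.nodes & g2.nodes == g1.free & g2.free
    return Hypergraph(g1.nodes | g2.nodes, g1.edges | g2.edges, g1.free | g2.free)

g = restrict("v", par(edge("l", "v", "w"), edge("m", "w")))
print(sorted(g.free))              # → ['w']
```

Note how the side condition of the composition clause is enforced by the assertion in `par`, and how restriction only shrinks the set of free names, exactly as in the definition.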
According to the intuition given in the previous subsection, the interpretation of a node, an edge, a restriction or a composition of graph terms is straightforward. A design is generally represented by a set of binary auxiliary edges, called abstract edges, linking each interface node of the design to a fresh node for interaction with the environment. For example, Fig. 5(a) has three abstract edges, while its flat version Fig. 5(b) has only one, namely P¹₁. Notice that the terms (ν v)(ν w)G and (ν w)(ν v)G are interpreted as the same hypergraph. We thus extend the restriction operator to a set of nodes and write, for example, (ν {v, w})G for (ν v)(ν w)G or (ν w)(ν v)G. We show two more examples of the interpretation in Fig. 6, which depicts the hypergraphs of the following terms, with L ∉ F.
G₁ = L_(w₁,w₂)[l(w₁, v) | w₂]⟨v₁, v₂⟩ | L_(w₁,w₂)[w₁ | l(w₂, v)]⟨v₁, v₂⟩
G₂ = L_(w₁,w₂)[l(w₁, v) | l(w₂, v)]⟨v₁, v₂⟩ | L_(w₁,w₂)[w₁ | w₂]⟨v₁, v₂⟩

Recall that a free node is labeled with its name, while a bound one is not, since its name is not significant. An edge is depicted as a box with tentacles, the number of which is exactly its arity. An abstract edge is represented as a dotted arrow with its label Lᵏⱼ. The subscript j indicates that the abstract edge links to the j-th interface node, while the superscript k is used to discriminate different occurrences of L-labeled designs. In Fig. 6, for example, the abstract edges in the upper
Fig. 6. Hypergraphs of terms.
Fig. 7. Simplified hypergraphs.

G₁|G₂ ≡d G₂|G₁
(G₁|G₂)|G₃ ≡d G₁|(G₂|G₃)
0|G ≡d G
v|G ≡d G   if v ∈ fn(G)
(ν v)G ≡d (ν w)G[w/v]   if w ∉ fn(G)
(ν v)(ν w)G ≡d (ν w)(ν v)G
(ν v)0 ≡d 0
G₁|(ν v)G₂ ≡d (ν v)(G₁|G₂)   if v ∉ fn(G₁)
L_v̄[G] ≡d L_w̄[G[w̄/v̄]]   if {w̄} ∩ (fn(G) \ {v̄}) = ∅
L_w̄[(ν v)G]⟨p̄⟩ ≡d (ν v)L_w̄[G]⟨p̄⟩   if v ∉ {w̄} ∪ {p̄}
L_w̄[G]⟨v̄⟩ ≡d G[v̄/w̄]   if L ∈ F
L_w̄[G₁|G₂]⟨v̄⟩ ≡d G₁ | L_w̄[G₂]⟨v̄⟩   if fn(G₁) ∩ {w̄} = ∅

Fig. 8. Isomorphic graphs.
and lower parts of G₁ are labeled with different superscripts, as (L¹₁, L¹₂) and (L²₁, L²₂) respectively. Without these superscripts, we could not distinguish between the hypergraphs of G₁ and G₂. A hypergraph full of abstract edge labels looks complicated. We can simplify its graphical representation by putting the body of each L-labeled design into a dotted box labeled by L and removing all the abstract edge labels. We regard the dotted box as a special “edge” and the original abstract edges as its “tentacles”, and use the same convention as for edges to order these tentacles. For example, G₁ and G₂ of Fig. 6 are re-depicted in Fig. 7. Notice that a free node may be shared by different designs, such as v in G₁. Besides encapsulation, designs also provide a mechanism of abstraction, enabling us to hide elements that are not significant in the current view. In Fig. 7, for example, the design D (of label L) is simply depicted as a “double box” (with tentacles), as we are not concerned with the details of its body.

Morphism. For a formal definition of graph transformations, we need to study the relations between hypergraphs, which are captured by the notion of morphism.

Definition 6 (Morphism). A morphism
ρ : G₁ → G₂ is a mapping from one hypergraph G₁ to another G₂ such that
1. ρ(u) has the same type as u, where u is a node, an edge or an abstract edge,
2. if ρ maps an edge or abstract edge l(v̄) to l(w̄), then ρ maps v̄ to w̄, and
3. ρ maps the sequence of interface nodes of G₁ to that of G₂.

A morphism ρ : G₁ → G₂ is called fn-preserving if it maps each free node of G₁ to a free node of G₂ with the same name. Such a morphism is called strongly fn-preserving if it further maps each bound node of G₁ to a bound node of G₂. Two hypergraphs G₁ and G₂ are isomorphic, denoted as G₁ ≡d G₂, if there is a morphism between them that is bijective and strongly fn-preserving. As a result, isomorphic hypergraphs have the same set of free node names. When there is no risk of confusion, we use a graph term and its hypergraph interchangeably. In particular, we say two terms G₁ and G₂ are isomorphic if their hypergraphs are isomorphic. It is straightforward to verify the isomorphism relations between hierarchical graphs in Fig. 8.

3.3. Graph transformation rules

A graph-based theory of programming often requires the formalization of graph transformation rules for defining the behavior of a program or the derivation of one program from another. Graph transformation rules are often defined in terms of the algebraic notion of pushout¹ [13].
¹ Intuitively, a pushout combines a pair of graphs by injecting them into a larger graph with certain common parts.
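The three conditions of Definition 6 are directly mechanizable. The following Python sketch checks them over a small ad-hoc graph record; the dictionary layout, field names and the type map `typ` are ours, not the paper's, and fn-preservation is not checked.

```python
# Check the morphism conditions of Definition 6 on dict-encoded hypergraphs:
# g = {"nodes": set, "edges": set of (label, node_tuple), "interface": list};
# typ maps every node to its type; rho maps nodes of g1 to nodes of g2.
def is_morphism(rho, g1, g2, typ):
    # 1. rho preserves node types
    if any(typ[v] != typ[rho[v]] for v in g1["nodes"]):
        return False
    # 2. each edge l(v...) of G1 has an image edge l(rho(v)...) in G2
    if any((lab, tuple(rho[v] for v in vs)) not in g2["edges"]
           for (lab, vs) in g1["edges"]):
        return False
    # 3. interface sequences correspond pointwise
    return [rho[v] for v in g1["interface"]] == list(g2["interface"])

g1 = {"nodes": {"a"}, "edges": {("l", ("a",))}, "interface": ["a"]}
g2 = {"nodes": {"b"}, "edges": {("l", ("b",))}, "interface": ["b"]}
print(is_morphism({"a": "b"}, g1, g2, {"a": "•", "b": "•"}))  # → True
```

A bijective map passing these checks in both directions, together with (strong) fn-preservation, is exactly an isomorphism in the sense above.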
Fig. 9. Two DPO rules.
Fig. 10. Direct derivation.
Definition 7 (Double-Pushout (DPO) rule). A DPO rule R : G_L ←ρ_L− G_I −ρ_R→ G_R is a pair of morphisms ρ_L : G_I → G_L and ρ_R : G_I → G_R, where ρ_L is injective. The graphs G_L, G_I and G_R are called the left-hand side, the interface and the right-hand side of the rule, respectively.

In most DPO rules, ρ_L and ρ_R are identity mappings, or they change only a small number of nodes. We thus simply represent a DPO rule by listing the three graphs as G_L | G_I | G_R² with additional annotations for the nodes that are not mapped identically. We show two examples of DPO rules, R₁ and R₂, in Fig. 9. In Rule R₁, both morphisms are the identity mapping and thus no annotation is needed. For Rule R₂, however, we use v/v′ → v to annotate that ρ_R maps the distinct nodes v and v′ of the interface to the same node v of the right-hand side. Now we show how a DPO rule can be applied to derive one graph from another.
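Operationally, applying a rule G_L ← G_I → G_R (formalized as a double pushout in Definition 8 below) amounts to: match the left-hand side in the host graph, delete what is in G_L but not in the interface, and glue in what is in G_R but not in the interface. The following toy Python sketch captures this delete-then-glue reading on plain edge sets; the interface is taken as a common subset of both sides, the match is a node renaming, and the pushout construction itself is elided, so this is the intuition rather than the formal definition.

```python
# A toy DPO-style rewrite on hypergraphs given as sets of edges
# (label, node_tuple), with interface ⊆ lhs and interface ⊆ rhs.
def dpo_step(g, lhs, interface, rhs, match):
    rename = lambda e: (e[0], tuple(match.get(v, v) for v in e[1]))
    to_delete = {rename(e) for e in lhs - interface}
    kept = {rename(e) for e in interface}
    assert to_delete | kept <= g, "left-hand side does not match"
    to_add = {rename(e) for e in rhs - interface}
    return (g - to_delete) | to_add

# rewrite a(x) into b(x), keeping a c(x) context edge untouched
g = {("a", ("n",)), ("c", ("n",))}
out = dpo_step(g, {("a", ("x",))}, set(), {("b", ("x",))}, {"x": "n"})
print(out == {("b", ("n",)), ("c", ("n",))})  # → True
```

The two pushout squares of the formal definition guarantee precisely that this deletion leaves no dangling tentacles and that the result is unique up to isomorphism.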
Definition 8 (Direct derivation). Given a DPO rule R : G_L ← G_I → G_R, a graph G and a morphism ρ₁ : G_L → G, a graph G′ is a direct derivation of G by R based on ρ₁, denoted as G ⇒_{R,ρ₁} G′ or simply G ⇒_R G′, if there exist the morphisms shown in Fig. 10(a) such that

1. both squares are pushouts,
2. ρ′_L is strongly fn-preserving, and
3. ρ′_R is fn-preserving and its image includes all the free names of G′. This actually implies fn(G′) = fn(G).

In this definition, ρ₁ is called the match of the derivation, as it matches G_L with the subgraph ρ₁(G_L) of G. When ρ₁ is found, a graph G″ can be constructed with morphisms ρ₂, ρ′_L so that the square on the left of Fig. 10(a) is a pushout. The intuition is that G″ is obtained from the source graph G by removing the elements, i.e. nodes, edges and abstract edges, in ρ₁(G_L \ ρ_L(G_I)) and preserving those in ρ₁(ρ_L(G_I)). Then, the target graph G′ can be constructed with morphisms ρ₃, ρ′_R so that the square on the right of Fig. 10(a) is a pushout. The intuition is that G′ is obtained from G″ by adding the elements corresponding to G_R \ ρ_R(G_I). The second and third conditions of this definition ensure that G″ and G′ are unique.

An example of direct derivation by Rule R₁ from Fig. 9 is shown in Fig. 10(b). A graph transformation system is defined by a set δ of DPO rules, and a graph derivation is a sequential application of DPO rules of the system. Formally, G′ is a derivation of G in system δ, denoted as G ⇒*_δ G′, if there is a sequence of graphs G₀, …, G_k (k ≥ 0) such that G ≡d G₀ ⇒_{R₁} G₁ ⇒_{R₂} ⋯ ⇒_{R_k} G_k ≡d G′ for some R₁, …, R_k ∈ δ. In the case k = 0, G ⇒*_δ G′ holds for any set of rules δ, provided G ≡d G′. For a DPO rule R, G ⇒*_R G′ is short for G ⇒*_{{R}} G′.

4. Graph representation of CaSPiS

In this section, we apply our graph model to the representation of CaSPiS processes and their behaviors. We first define a direct representation of each CaSPiS process P as a hierarchical graph ⟦P⟧.
This representation is easy to understand, but it makes the reductions of processes hard to define. To overcome this problem, we define a tagged version ⟦P⟧† of ⟦P⟧, and
² Here, “|” is just used to separate the graphs; it does not represent graph composition.
show that ⟦P⟧† can be derived from the untagged version ⟦P⟧. With the tagged graph representation, we provide a graph transformation system characterizing the congruence and reductions of processes.

To represent a process as a hierarchical graph, we define three node types: • for control flows, plus one type for data and one for channels of processes. We introduce a set of design labels {P, F, V, D, I, S, R}, respectively for processes, patterns, values, service definitions, service invocations, sessions and right-hand sides of pipelines. Designs labeled with P, F and V are flat, so the hierarchy of a graph is introduced by services, sessions and pipelines. We first define the representation of patterns and values. The graph representation of a pattern F and a value V are data-typed designs of label F and V, denoted as ⟦F⟧_F and ⟦V⟧_V, respectively.
⟦?x⟧_F ≝ F_v[pv(v, x)]
⟦f(F₁, …, Fₖ)⟧_F ≝ F_v[(ν {v₁, …, vₖ})(f(v, v₁, …, vₖ) | ⟦F₁⟧_F⟨v₁⟩ | … | ⟦Fₖ⟧_F⟨vₖ⟩)]
⟦x⟧_V ≝ V_v[vv(v, x)]
⟦f(V₁, …, Vₖ)⟧_V ≝ V_v[(ν {v₁, …, vₖ})(f(v, v₁, …, vₖ) | ⟦V₁⟧_V⟨v₁⟩ | … | ⟦Vₖ⟧_V⟨vₖ⟩)]

A pattern variable ?x and a value variable x are represented as binary edges pv and vv, respectively, attached to the node named x, and a constructor f is represented by an f-labeled edge of arity AR(f) = ar(f) + 1. Notice that fn(⟦F⟧_F) = bn(F) and fn(⟦V⟧_V) = fn(V) for each pattern F and value V.

A process is represented as a P-labeled design whose interface consists of a • node p, exposed as the start of the control flow, and three channel nodes i, o and t, exposed as the input, output and return channels, respectively.

Definition 9 (Graph representation of processes). The graph representation of a process P, denoted as ⟦P⟧, is defined by induction on the structure of P. The representative cases are depicted in Fig. 11.
⟦0⟧ ≝ P_(p,i,o,t)[i | o | t | Nil(p)]
⟦(F)P⟧ ≝ P_(p,i,o,t)[(ν ({p₁, v} ∪ bn(F)))(Abs(p, v, p₁, i) | ⟦F⟧_F⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦⟨V⟩P⟧ ≝ P_(p,i,o,t)[(ν {p₁, v})(Con(p, v, p₁, o) | ⟦V⟧_V⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦⟨V⟩↑P⟧ ≝ P_(p,i,o,t)[(ν {p₁, v})(Ret(p, v, p₁, t) | ⟦V⟧_V⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦M + M′⟧ ≝ P_(p,i,o,t)[(ν {p₁, p₂})(Sum(p, p₁, p₂) | ⟦M⟧⟨p₁, i, o, t⟩ | ⟦M′⟧⟨p₂, i, o, t⟩)]
⟦P | Q⟧ ≝ P_(p,i,o,t)[(ν {p₁, p₂})(Par(p, p₁, p₂) | ⟦P⟧⟨p₁, i, o, t⟩ | ⟦Q⟧⟨p₂, i, o, t⟩)]
⟦s.P⟧ ≝ P_(p,i,o,t)[i | t | D_(p,t)[(ν {p₁, i₁, o₁})(Def(p, s, p₁, i₁, o₁) | ⟦P⟧⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦s̄.P⟧ ≝ P_(p,i,o,t)[i | t | I_(p,t)[(ν {p₁, i₁, o₁})(Inv(p, s, p₁, i₁, o₁) | ⟦P⟧⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦r ▹ P⟧ ≝ P_(p,i,o,t)[i | t | S_(p,t)[(ν {p₁, i₁, o₁})(Ses(p, r, p₁, i₁, o₁) | ⟦P⟧⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦(ν n)P⟧ ≝ P_(p,i,o,t)[(ν {p₁, n})(Res(p, n, p₁) | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦P > Q⟧ ≝ P_(p,i,o,t)[(ν {p₁, p₂, o₁})(Pip(p, p₁, p₂, o₁, i, o, t) | ⟦P⟧⟨p₁, i, o₁, t⟩ | R_p′[(ν {i′, o′, t′})⟦Q⟧⟨p′, i′, o′, t′⟩]⟨p₂⟩)]
⟦!P⟧ ≝ P_(p,i,o,t)[(ν {p₁, i₁, o₁, t₁})(Rep(p, p₁, i, o, t) | ⟦P⟧⟨p₁, i₁, o₁, t₁⟩)]

The nil process 0 is represented as an edge Nil. An abstraction (F)P is represented as a graph in which an Abs edge, attached to the input channel of the whole process, connects the graphs of F and P. A concretion and a return process are represented similarly, but with a Con edge and a Ret edge associated with the output channel and the return channel, respectively. In the graph of a parallel composition P | Q, the graphs of P and Q are connected by a Par edge, and the channels of P and Q are combined. The graph of a session process r ▹ P is defined by attaching the graph of P to a session edge Ses, which is also connected to the input and output channels of P. This subgraph is then encapsulated by an S-labeled design. The graphs of a service definition and a service invocation are defined similarly. A pipeline P > Q is represented as a Pip edge connected with the graphs of P and Q, where the graph of the right-hand side Q is encapsulated by an R-labeled design. A restriction (ν n)P and a replication !P are represented as a Res edge and a Rep edge, respectively, attached to the graph of P; in the latter case, the channels of P are invisible from outside. Notice that fn(⟦P⟧) = fn(P) for each process P.

4.1. Tagged graph and tagging rules

In the graph term ⟦P⟧ of a process P, each control flow node • is actually the start of a sub-process Q of P. In this sense, the • node corresponds to a context C[·] with C[Q] = P. Recall that in a process reduction only sub-processes occurring in static contexts are allowed to interact with each other. To define reductions on graphs, we need to distinguish
Fig. 11. Graph representation of processes.
active control flow nodes, which correspond to static contexts, from inactive ones, which correspond to non-static contexts. For this, we tag the former with unary edges labeled by A (A for “active”), called tag edges.

Definition 10 (Tagged graph of processes). The tagged graph representation of P, denoted as ⟦P⟧†, is defined by induction on the structure of P. The representative cases are depicted in Fig. 12.
⟦0⟧† ≝ P_(p,i,o,t)[i | o | t | A(p) | Nil(p)]
⟦(F)P⟧† ≝ P_(p,i,o,t)[(ν ({p₁, v} ∪ bn(F)))(A(p) | Abs(p, v, p₁, i) | ⟦F⟧_F⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦⟨V⟩P⟧† ≝ P_(p,i,o,t)[(ν {p₁, v})(A(p) | Con(p, v, p₁, o) | ⟦V⟧_V⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦⟨V⟩↑P⟧† ≝ P_(p,i,o,t)[(ν {p₁, v})(A(p) | Ret(p, v, p₁, t) | ⟦V⟧_V⟨v⟩ | ⟦P⟧⟨p₁, i, o, t⟩)]
⟦M + M′⟧† ≝ P_(p,i,o,t)[(ν {p₁, p₂})(A(p) | Sum(p, p₁, p₂) | ⟦M⟧⟨p₁, i, o, t⟩ | ⟦M′⟧⟨p₂, i, o, t⟩)]
⟦P | Q⟧† ≝ P_(p,i,o,t)[(ν {p₁, p₂})(Par(p, p₁, p₂) | ⟦P⟧†⟨p₁, i, o, t⟩ | ⟦Q⟧†⟨p₂, i, o, t⟩)]
⟦s.P⟧† ≝ P_(p,i,o,t)[i | t | A(p) | D_(p,t)[(ν {p₁, i₁, o₁})(Def(p, s, p₁, i₁, o₁) | ⟦P⟧⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦s̄.P⟧† ≝ P_(p,i,o,t)[i | t | A(p) | I_(p,t)[(ν {p₁, i₁, o₁})(Inv(p, s, p₁, i₁, o₁) | ⟦P⟧⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦r ▹ P⟧† ≝ P_(p,i,o,t)[i | t | S_(p,t)[(ν {p₁, i₁, o₁})(Ses(p, r, p₁, i₁, o₁) | ⟦P⟧†⟨p₁, i₁, o₁, t⟩)]⟨p, o⟩]
⟦(ν n)P⟧† ≝ P_(p,i,o,t)[(ν n)(rv(n) | ⟦P⟧†⟨p, i, o, t⟩)]
⟦P > Q⟧† ≝ P_(p,i,o,t)[(ν {p₁, p₂, o₁})(Pip(p, p₁, p₂, o₁, i, o, t) | ⟦P⟧†⟨p₁, i, o₁, t⟩ | R_p′[(ν {i′, o′, t′})⟦Q⟧⟨p′, i′, o′, t′⟩]⟨p₂⟩)]
⟦!P⟧† ≝ P_(p,i,o,t)[(ν {p₁, i₁, o₁, t₁})(A(p) | Rep(p, p₁, i, o, t) | ⟦P⟧⟨p₁, i₁, o₁, t₁⟩)]

In a tagged graph ⟦P⟧†, each occurrence of an abstraction, concretion, return, service definition or invocation in a static context is tagged by an A-edge. In the case of a restriction, ⟦(ν n)P⟧† is quite different from its untagged version: a new value is generated, denoted by an rv-labeled edge (rv for “restricted value”), and the Res-labeled edge of the untagged version is no longer needed. Notice that fn(⟦P⟧†) = fn(P) for each process P. It is worth pointing out that Definitions 9 and 10 reflect the following concerns, or challenges, in defining the graph representation of CaSPiS processes.

1. The graph representation of a process should characterize the hierarchy of the process. For this purpose, we introduce designs to represent services, sessions and pipelines, which may be nested.
Fig. 12. Tagged graphs of processes.
Fig. 13. Tagging rules.
2. The graph representation of a process should contain adequate information for the further characterization of the behavior of the process. For example, the channels and tags in the representation are needed to define reductions.
3. Subject to 1 and 2, the graph representation of a process should be as small as possible.

With a graph representation meeting these challenges, the definition of the transformations is straightforward. The main concern is to simulate each reduction by a few steps of transformation at the graph level. We first introduce graph transformations for dealing with tags. To obtain a tagged graph ⟦P⟧† from its untagged version ⟦P⟧, we add a tag edge to the start of the control flow of ⟦P⟧ and then apply a sequence of graph transformation rules. These rules are called tagging rules, denoted as T and shown in Fig. 13. During the tagging, the tag A moves step by step along the flow of control until it arrives at a nil process or a dynamic operator. In each step, the tag may go through a session, through a pipeline into its left-hand side, or through a parallel composition into both of its branches. If the tag meets a restriction, the restriction edge Res is removed, the associated control flow nodes are combined, and an rv edge is added to the associated value node. We show that these rules are sufficient to transform the untagged graph of every process into its tagged version.

Theorem 1 (Completeness of tagging rules). P_(p,i,o,t)[A(p) | ⟦P⟧⟨p, i, o, t⟩] ⇒*_T ⟦P⟧† for any process P.

This theorem can be proved by induction on the structure of P; the proof for each case is straightforward.

4.2. Rules for congruence

We provide a set of graph transformation rules C to characterize the congruence relation between CaSPiS processes. The set includes basic rules for commutativity, associativity and restrictions, as well as rules for making a copy of a (sub-)process.
The basic rules in this set are commutativity and associativity rules for sums and parallel compositions, together with unit rules for sums representing the congruence M ≡c M + 0, shown in Fig. 14. In the case of commutativity, we simply change the order of the tentacles of the Sum and Par edges; in the case of associativity, we rearrange the configuration of these edges.
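At the graph level, commutativity is thus nothing more than a swap of a single edge's second and third tentacles; nothing else in the graph changes. A one-line Python sketch on our (label, node_tuple) edge encoding:

```python
# Swap the two branch tentacles of a Sum or Par edge (cf. Fig. 14).
def commute(edge):
    label, (p, p1, p2) = edge
    assert label in ("Sum", "Par"), "rule only applies to Sum and Par edges"
    return (label, (p, p2, p1))

print(commute(("Sum", ("p", "p1", "p2"))))  # → ('Sum', ('p', 'p2', 'p1'))
```

Applying the swap twice yields the original edge, mirroring the fact that the commutativity rule is its own inverse.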
Fig. 14. Rules for sums and parallel compositions.
Fig. 15. Rules for restrictions.
We also define a set of rules for restrictions, shown in Fig. 15. These rules include unit rules for both untagged and tagged restrictions, as well as rules that move a restriction out of another restriction, a parallel composition, a pipeline (from its left-hand side) or a session.

Copy rules. To make copies of (sub-)processes, we introduce a set of copy rules P ⊂ C. In these rules, we use edges of label C, of arity five, to copy the control flow of processes, and binary edges of labels PC, VC and RC to copy patterns, values and restrictions, respectively. They are called copy edges.
Fig. 16. Rule for replication.
Fig. 17. Control-copy rules (Part I).
Given a replication process to be copied, we first create a copy edge C and put it in parallel with the original process. This is done by Rule (Rep-Step), depicted in Fig. 16. We require that this rule be applied only to graphs or tagged graphs of processes, i.e. graphs without any copy edges. That is, we do not consider the interplay among different copy procedures. Alternatively, such a requirement can be specified as a set of negative application conditions (NACs) [31] of the rule. Each NAC takes the form of a graph, e.g. a single copy edge, and a DPO rule with NACs cannot be applied to a graph that contains any of them as a subgraph. It is worth pointing out that a copy edge C can also be generated by the reduction of a pipeline, as we will see in the rules for reduction in the next subsection. The same requirement applies to those rules: they, too, may only be applied to graphs without copy edges.

We provide a group of rules in Figs. 17 and 18 to copy the control flow and rebuild the channels of a process step by step. Each of these rules corresponds to a specific process construct, such as nil, abstraction, service definition, pipeline or restriction. After the copy of an abstraction, a PC edge is generated, which will further copy the pattern of the abstraction. Similarly, after the copy of a concretion, a return, a service definition or a service invocation, a VC edge is generated for the subsequent copy of the corresponding value or service name. In addition, after the copy of a restriction, an RC edge is
Fig. 18. Control-copy rules (Part II).
introduced in order to copy the restricted value. Recall that no session is allowed to occur in the body of a replication or in the right-hand side of a pipeline, both of which are non-static contexts. As a result, we do not need to consider the copy of a session. Besides the control flow, we also need to copy the data of a process. For this purpose, we provide a group of rules in Fig. 19 that copy patterns and values, using the copy edges PC and VC generated during the copy of the control flow.
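At the level of process terms, the overall effect of a completed copy procedure is to duplicate a (sub-)process while freshening its restricted names, which is what the RC edges achieve for restricted values. The following Python sketch shows this effect on a toy tuple encoding of terms; the encoding and the fresh-name scheme are ours, and the step-by-step rule applications are collapsed into one recursive pass.

```python
# Duplicate a tuple-encoded term, renaming each restricted name to a
# fresh one -- the term-level analogue of running the control- and
# data-copy rules to completion.
import itertools
_fresh = itertools.count(1)

def copy(term, ren={}):
    if isinstance(term, str):             # a name: freshened if restricted
        return ren.get(term, term)
    if term[0] == "res":                  # ("res", n, body) restricts n
        _, n, body = term
        n2 = f"{n}_{next(_fresh)}"        # fresh name for the copy
        return ("res", n2, copy(body, {**ren, n: n2}))
    return tuple(copy(t, ren) for t in term)

print(copy(("res", "n", ("out", "n", "x"))))  # n renamed consistently, x kept
```

Free names such as `x` are preserved, matching the requirement that the copy of a process is congruent to the original.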
Fig. 19. Data-copy rules.
Fig. 20. Rules for eliminating copy edges.
The copy edges are auxiliary and do not occur in the graph representation of any process, so we have to eliminate them at the end of a copy procedure in order to arrive at the graph of the target process. Rules for this purpose are provided in Fig. 20. We should be careful in applying these rules: there is a priority order among them (and the other copy rules).

1. (VC-Elim-PC) > (PC-Elim) or (VC-Elim).
2. (VC-Elim-RC) > (RC-Elim) or (VC-Elim).
3. Any control-copy rule or data-copy rule > (PC-Elim) or (RC-Elim) or (VC-Elim).

When more than one of these rules is applicable to a graph during the copy procedure, the one with the highest priority should be applied first; otherwise the copy may be incorrect. Alternatively, such a priority order can be specified as NACs of rules (PC-Elim), (RC-Elim) and (VC-Elim).

4.3. Rules for reduction

We provide a set of graph transformation rules R to characterize the reduction behavior of CaSPiS processes. Each rule is designed for a specific case of reduction. The first rule is for the synchronization between a pair of service definition and service invocation, shown in Fig. 21. The synchronization causes the creation of a new session, whose name is restricted and thus inaccessible from other parts of the graph. It is possible that the data node representing the service name becomes isolated after the synchronization, but it can be eliminated by garbage collection; we will introduce rules for garbage collection later.
Fig. 21. Rules for reduction (Part I).
Fig. 22. Rules for reduction (Part II).
The next two rules are for the reduction of a session, shown in Fig. 22. Rule (Ses-Sync) is for the interaction between a concretion and an abstraction of a session r. The channel node shared by the edges Con and Ses ensures that the concretion belongs to one side of r; similarly, the abstraction belongs to the other side. Both the abstraction and the concretion are removed after the communication, with the value of the concretion connected to the pattern of the abstraction through an AS-edge. Such an edge is used for the subsequent data assignment. Notice that the concretion and the abstraction originally occur in two sums; their communication makes the other branches of the sums isolated in the graph. These isolated parts will be removed by garbage collection. Rule (Ses-Sync-Ret) is for the interaction between a return and an abstraction on different sides of a session r. It has a form similar to Rule (Ses-Sync), except that the return edge Ret occurs in the body of another session nested inside r.

The last two rules are for the reduction of a pipeline, shown in Fig. 23. Rule (Pip-Sync) is for the interaction between a concretion and an abstraction of a pipeline. The channel node shared by the edges Con and Pip ensures that the concretion belongs to the left-hand side of the pipeline, so that it can communicate with the abstraction Abs on the right-hand side. The concretion is removed after the communication, and a copy edge C, put in parallel with the whole pipeline, is generated to copy the right-hand side. In addition, the value of the original concretion is connected to the pattern of the abstraction through an AS-edge and a PC-edge for the subsequent data assignment and pattern copy. Notice that the concretion originally occurs in a sum. After the reduction, the other branch of the sum becomes isolated in the graph and can be removed by garbage collection.
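The effect of (Pip-Sync) can be previewed at the term level: the pipeline consumes a concretion on its left-hand side, spawns a fresh instance of the right-hand side with the value substituted for the pattern variable, and keeps running. A Python sketch over a toy tuple encoding of our own, ignoring sums, tags and the auxiliary copy edges:

```python
# ("pip", L, R): pipeline; ("con", V, P): concretion <V>P;
# ("abs", x, Q): abstraction (?x)Q.  One (Pip-Sync)-like step:
def pip_sync(pipeline):
    _, (tag, v, p), (_, x, q) = pipeline
    assert tag == "con", "left-hand side must offer a concretion"
    instance = subst(q, x, v)          # fresh copy of Q with x := V
    return ("par", ("pip", p, ("abs", x, q)), instance)

def subst(t, x, v):
    if t == x:
        return v
    return tuple(subst(s, x, v) for s in t) if isinstance(t, tuple) else t

# <D> > (?x)<h(x)>^ steps to (0 > (?x)<h(x)>^) | <h(D)>^
p3 = ("pip", ("con", "D", ("nil",)), ("abs", "x", ("ret", ("h", "x"))))
print(pip_sync(p3))
```

On the graph side, the "fresh instance" part is exactly what the generated copy edge C and the subsequent copy and assignment rules carry out.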
Rule (Pip-Sync-Ret) is for the interaction between a return on the left-hand side of a pipeline and an abstraction on the right-hand side. It has a form similar to Rule (Ses-Sync), except that the return edge Ret occurs in the body of an additional session.
Fig. 23. Rules for reduction (Part III).
Fig. 24. A variant of Rule (Ses-Sync).
It is worth pointing out that each rule in Figs. 22 and 23 has variants. For example, (Ses-Sync), for the interaction between a sum of concretions and a sum of abstractions on different sides of a session, has the following variants:

1. on one session side a concretion, on the other side an abstraction,
2. on one session side a concretion, on the other side a sum of abstractions, and
3. on one session side a sum of concretions, on the other side an abstraction.

We show the first variant in Fig. 24. Each variant rule is equivalent to the original rule, since an abstraction or concretion is congruent to a sum, i.e. (F)P ≡c (F)P + 0 and ⟨V⟩P ≡c ⟨V⟩P + 0. We require that each rule for reduction be applied only to tagged graphs of processes, i.e. graphs without isolated parts or auxiliary edges other than tag edges. This requirement reflects our view that after a reduction we need to finish all the relevant assignments, garbage collection, and the necessary copy and tagging procedures before starting the next reduction. Alternatively, such a requirement can be specified as NACs of these rules.

4.4. Garbage collection rules

After the application of a reduction rule, certain nodes and edges may become isolated, making no contribution to the further transformations of the whole graph. We provide a set of DPO rules G, called garbage collection rules, to remove such parts from the graph. These rules are shown in Fig. 25, covering all the cases of isolated process constructs, data and channels.
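The rules of Fig. 25 work locally, one isolated construct at a time; their combined effect is that everything unreachable from the designated root nodes is removed. The following Python sketch computes only that end result, on our (label, node_tuple) edge encoding, as a plain reachability saturation.

```python
# Overall effect of garbage collection: keep only the nodes and edges
# reachable from the roots; drop everything else.
def collect(nodes, edges, roots):
    reach = set(roots)
    grew = True
    while grew:            # saturate reachability over edge attachments
        grew = False
        for _, vs in edges:
            if reach & set(vs) and not set(vs) <= reach:
                reach |= set(vs)
                grew = True
    return nodes & reach, {e for e in edges if set(e[1]) <= reach}

nodes = {"p", "q", "x"}
edges = {("Nil", ("p",)), ("vv", ("q", "x"))}   # q and x are cut off
print(collect(nodes, edges, roots={"p"}))
```

The local DPO rules reach the same state incrementally, which is what makes garbage collection confluent with the other transformation steps.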
Fig. 25. Garbage collection rules.
4.5. Data assignment rules

After the application of a reduction rule, we also need to assign values to their corresponding patterns, according to the AS-edges produced by the reduction. After the assignment, some of the values may not be in their correct form, so we have to normalize them. For these purposes, we provide a set of DPO rules D for data assignment and the subsequent data normalization. They are called data assignment rules, shown in Fig. 26. Rules (PV-Assign) and (Ctr-Assign) are for the assignment of a value to a pattern. In the case of (PV-Assign), the pattern is simply a pattern variable, while in the case of (Ctr-Assign), the pattern is constructed and the value is required to be built with the same constructor. After the assignment, some edges of values may not be properly associated. A simple case is that of two vv-edges associated sequentially; this is redundant, and Rule (VV-Norm) eliminates one of them. Another case is that of a constructor shared by different values, which does not happen in the representation of any process. To deal with this case, Rule (Ctr-Split) makes a copy of the constructor for each value. In addition, a vv-edge is unnecessary if it is associated with a non-shared constructor, and Rule (Ctr-Norm) eliminates such an edge. In order to avoid unnecessary complexity of graphs, we require that each of these rules be applied only to graphs without copy edges. That is, we do not consider performing copy and data assignment at the same time. Alternatively, such a requirement can be specified as NACs of these rules.

4.6. Examples

We show the application of the graph transformation rules through a couple of examples.
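Both examples below exercise the data assignment rules, whose term-level behavior is that of first-order matching of a pattern against a value. A Python sketch, using our own encoding of patterns and values as nested tuples, with "?x" for pattern variables:

```python
# Match a pattern against a value, binding pattern variables ("?x") and
# requiring constructors to agree -- the term-level counterpart of
# (PV-Assign) and (Ctr-Assign) acting on AS-edges.
def assign(pattern, value, binding=None):
    b = binding if binding is not None else {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        b[pattern[1:]] = value              # (PV-Assign): bind the variable
        return b
    # (Ctr-Assign): same constructor and arity, then match componentwise
    if pattern[0] != value[0] or len(pattern) != len(value):
        raise ValueError("constructor mismatch")
    for p, v in zip(pattern[1:], value[1:]):
        assign(p, v, b)
    return b

print(assign(("f", "?x", "?y"), ("f", ("g", "a"), "b")))
```

On graphs, the normalization rules (VV-Norm), (Ctr-Split) and (Ctr-Norm) then tidy up the edges so the result is again the representation of a value.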
Fig. 26. Data assignment rules.
Example 1. Consider a service named time which is ready to output the current time T. This service can be used by a process that invokes the service, receives the values it produces and returns them. The composition of the service and the process is specified in CaSPiS as P₀ = time.⟨T⟩ | t̄ime.(?x)⟨x⟩↑. The synchronization between the definition and the invocation of time creates a session with a fresh name r, and P₀ evolves to P₁ = (ν r)(r ▹ ⟨T⟩ | r ▹ (?x)⟨x⟩↑). Then, the concretion ⟨T⟩ on one session side and the abstraction (?x) on the other side can communicate, assigning T to x on the latter side, and P₁ evolves to P₂ = (ν r)(r ▹ 0 | r ▹ ⟨T⟩↑). The same behavior can be simulated by the graph transformations shown in Fig. 27. The left graph in the first row is ⟦P₀⟧†⟨p, i, o, t⟩. It is transformed into ⟦P₁⟧†⟨p, i, o, t⟩ (the right graph in the second row) through a sequential application of the DPO rules (Ser-Sync), (D-GC) and (Ses-Tag). This graph can be further transformed into ⟦P₂⟧†⟨p, i, o, t⟩ (the right graph in the last row) by applying the DPO rules (Ses-Sync), (PV-Assign) and (Ctr-Norm) sequentially. □

Example 2. Consider a pipeline P₃ = ⟨D⟩ > (?x)⟨h(x)⟩↑. The left-hand side of the pipeline simply produces a datum D for the right-hand side, while the right-hand side is ready to receive any datum x, perform some operation h(·) on it and return the result h(x) to the environment. Specifically, when the right-hand side receives D, a new instance ⟨h(D)⟩↑ of ⟨h(x)⟩↑ is created and runs in parallel with the whole pipeline. In this way, P₃ evolves to P₄ = (0 > (?x)⟨h(x)⟩↑) | ⟨h(D)⟩↑. The same behavior can be characterized by the graph transformations shown in Fig. 28. The left graph in the first row is the graph ⟦P₃⟧†⟨p, i, o, t⟩ of the pipeline P₃. We can apply (Pip-Sync) to this graph and obtain a reduced pipeline together with some auxiliary copy edges and an AS-edge, for further copy and data assignment, respectively.
Then, using the copy edges, a copy of the right-hand side of the pipeline is made in three steps: 1. the control flow is copied through a sequential application of (Ret-Copy) and (Nil-Copy), 2. the pattern and value are copied by applying (PV-PCopy), (Ctr-VCopy) and (VV-VCopy) sequentially, and 3. the copy edges VC and PC are eliminated through a sequential application of (VC-Elim-PC) and (PC-Elim). At last, the AS-edge is eliminated by applying (PV-Assign) and (Ctr-Norm) sequentially, and we arrive at the graph
J P 4 K† p , i , o, t of the process P 4 (the right graph in the last row). 2 4.7. Soundness and completeness of graph transformation rules We have defined a graph transformation system, denoted as A , that consists of a few sets of DPO graph transformation rules, i.e. T , C , R , G and D . In this section, we show that the graph transformation system is sound and complete with respect to both congruence and reduction of CaSPiS processes. The soundness with respect to congruence means that two processes P and Q are congruent if the tagged graph of P can be transformed into that of Q , through applications of rules for congruence C as well as auxiliary tagging rules T .
Fig. 27. Example: graph transformations of a session.
Soundness with respect to reduction means that a process P reduces to a process Q if the tagged graph of P can be transformed into that of Q through applications of the DPO rules of A, exactly one of which is an application of a rule for reduction from R.

Theorem 2 (Soundness w.r.t. congruence). For two processes P and Q, ⟦P⟧† ⇒*C∪T ⟦Q⟧† implies P ≡c Q.

Theorem 3 (Soundness w.r.t. reduction). For two processes P and Q, if ⟦P⟧† ⇒*A ⟦Q⟧† with exactly one application of R, then P → Q.

Completeness with respect to congruence means that the tagged graphs of two congruent processes P and Q can be transformed into the common tagged graph of some process Q′, through applications of the rules for congruence C as well as the auxiliary tagging rules T. Completeness with respect to reduction means that for any reduction P → Q, the tagged graph of P can be transformed into the tagged graph of some process Q′ congruent with the reduct Q.

Theorem 4 (Completeness w.r.t. congruence). For two processes P and Q, P ≡c Q implies ⟦P⟧† ⇒*C∪T ⟦Q′⟧† and ⟦Q⟧† ⇒*C∪T ⟦Q′⟧† for some process Q′.

Theorem 5 (Completeness w.r.t. reduction). For two processes P and Q, P → Q implies ⟦P⟧† ⇒*A ⟦Q′⟧† for some process Q′ ≡c Q.

It is worth pointing out that completeness with respect to congruence does not mean that the tagged graph of a process can always be transformed into that of any congruent process. In fact, such a conjecture is too strong to be valid. For the congruent processes !P and !P|P, we are able to "unfold" the graph ⟦!P⟧† into ⟦!P|P⟧† by making a copy of P using the copy rules. However, we can hardly transform ⟦!P|P⟧† back into ⟦!P⟧† by applications of any set of DPO rules, as the DPO approach has no mechanism to check whether two parts of a graph are equivalent, i.e. representing the same
Fig. 28. Example: graph transformations of a pipeline.
process. For the same reason, completeness with respect to reduction does not mean that the tagged graph of P can be transformed into that of Q for every reduction P → Q. The proofs of these theorems can be found in Appendix A.

4.8. Discussion: features of the graph transformation semantics

Through the graph transformation system A, we have in fact provided a graph transformation semantics of CaSPiS processes, and the soundness and completeness of A indicate that this semantics is equivalent to the "textual" reduction semantics of CaSPiS. However, being based on the DPO approach, the graph transformation semantics enjoys some distinct features that the reduction semantics does not have.

First, the graph transformation semantics enables concurrent applications of DPO rules [32]. Recall that an application (R, ρ) of a DPO rule R to a graph G with a match ρ is a direct derivation G ⇒R,ρ G′. The application preserves a set of elements (i.e. nodes, edges and abstract edges) E0(R, ρ), consumes a set of elements E−(R, ρ) and produces a set of elements E+(R, ρ), where E0(R, ρ), E−(R, ρ) and E+(R, ρ) are pairwise disjoint. In this way, the target graph G′ is obtained from the source graph G by replacing the subgraph E0(R, ρ) ∪ E−(R, ρ) with E0(R, ρ) ∪ E+(R, ρ). Two consecutive rule applications (R1, ρ1) and (R2, ρ2), e.g. G1 ⇒R1,ρ1 G2 ⇒R2,ρ2 G3, are sequentially independent if they affect disjoint sets of elements, or if the only elements they affect in common are preserved by both, i.e.

(E0(R1, ρ1) ∪ E−(R1, ρ1) ∪ E+(R1, ρ1)) ∩ (E0(R2, ρ2) ∪ E−(R2, ρ2) ∪ E+(R2, ρ2)) ⊆ E0(R1, ρ1) ∩ E0(R2, ρ2).

Sequentially independent applications can be performed in either order on the source graph, and the target graph remains the same. This is the key idea of concurrent rule application. Notice also that the notion of sequential independence extends to sequences of more than two rule applications, where each pair of applications may intersect only at their preserved elements.

Example 3. Consider the application of (D-GC) and then the two applications of (Ses-Tag) (with their own matches) in Example 1. The three rule applications are sequentially independent, as they affect disjoint sets of elements. In fact, they
Fig. 29. Example: an error in the interaction.
can be performed in any order on the source graph (the right graph in the first row of Fig. 27), and all orders lead to the same target graph (the right graph in the second row of Fig. 27). ∎

Besides, the graph transformation semantics supports the recording of causal dependencies between elements of graphs and applications of DPO rules [32]. Specifically, we can define a causal relation ≺ as the smallest transitive relation on graph elements u, . . . and rule applications (R, ρ), (R′, ρ′), . . . satisfying:
1. u ≺ (R, ρ) if u ∈ E−(R, ρ);
2. (R, ρ) ≺ u if u ∈ E+(R, ρ); and
3. (R, ρ) ≺ (R′, ρ′) if (E+(R, ρ) ∩ E0(R′, ρ′)) ∪ (E0(R, ρ) ∩ E−(R′, ρ′)) ≠ ∅.
The intuition behind these conditions is that a rule application is caused by each element it consumes and is a cause of each element it produces. In addition, a rule application is a cause of another rule application if some element produced by the former is preserved by the latter or, symmetrically, some element preserved by the former is consumed by the latter. The causal relation can help us detect the possible sources of errors and misbehaviors.

Example 4. Consider the process P′1 = (νr)(r ▹ ⟨T⟩ | r ▹ (f(?x))⟨x⟩↑). It is a variant of the process P1 of Example 1 with a constructed pattern f(?x). The graph G1 = ⟦P′1⟧†⟨p, i, o, t⟩ of P′1 is depicted in Fig. 29(a). We can apply (Ses-Sync) to G1 (based on a proper match) and arrive at a graph G2, shown in Fig. 29(b). Notice that in G2, an AS-edge is associated with an edge of the constant T, an edge of the constructor f, and no other edges. This subgraph (shown in Fig. 29(c)), however, cannot be modified by the application of any rule of the graph transformation system. Therefore, it is impossible to transform G2 into the graph of a process that does not contain any auxiliary AS-edges. To find the source of this error, we study the causal relation concerning the AS-edge of G2. Because the AS-edge is produced by the application of (Ses-Sync), it is caused by that application. Also, since the application of (Ses-Sync) consumes an Abs-edge and a Con-edge of G1, the application is caused by these two edges. As a result, the Abs-edge and the Con-edge are causes of the AS-edge. In this way, we find the subgraph of G1 (shown in Fig. 29(d)) that is the source of the "error" subgraph of G2: the concretion of the value T and the abstraction with the constructed pattern f(·) do not match. ∎

Due to the above features, the graph transformation semantics we provide is indeed a concurrent semantics. By contrast, in the reduction semantics it is not feasible to apply reduction rules concurrently, nor is it natural to track the causal relations between reductions.
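The cause-tracing of Example 4 can be mechanized directly from the three conditions above. The following Python sketch is our own illustration (the class and function names are not from any existing tool): it records each rule application by its consumed, preserved and produced element sets, builds the one-step causal edges, and computes the causes of an element as its predecessors under the transitive closure.

```python
class Application:
    """A DPO rule application (R, rho), recorded by the element sets
    it consumes (E-), preserves (E0) and produces (E+)."""
    def __init__(self, name, consumed, preserved, produced):
        self.name = name
        self.consumed = set(consumed)
        self.preserved = set(preserved)
        self.produced = set(produced)

def direct_causes(apps):
    """One-step causal edges: u < (R, rho) if u is consumed;
    (R, rho) < u if u is produced; and (R, rho) < (R', rho') if the
    former produces something the latter preserves, or preserves
    something the latter consumes."""
    edges = set()
    for a in apps:
        edges |= {(u, a.name) for u in a.consumed}
        edges |= {(a.name, u) for u in a.produced}
        for b in apps:
            if a is not b and ((a.produced & b.preserved)
                               or (a.preserved & b.consumed)):
                edges.add((a.name, b.name))
    return edges

def causes_of(x, edges):
    """All causes of x: predecessors of x under transitive closure."""
    result, frontier = set(), {x}
    while frontier:
        frontier = {u for (u, v) in edges if v in frontier} - result
        result |= frontier
    return result

# Toy rerun of Example 4: (Ses-Sync) consumes an Abs-edge and a
# Con-edge and produces the stuck AS-edge, so the AS-edge is caused
# by the application and, transitively, by the two consumed edges.
sync = Application("Ses-Sync", consumed={"Abs", "Con"},
                   preserved={"r"}, produced={"AS"})
print(sorted(causes_of("AS", direct_causes([sync]))))
# → ['Abs', 'Con', 'Ses-Sync']
```

Note that a produce-consume chain between two applications is not a condition-3 edge; it is still recovered transitively through the element itself.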
5. Conclusions

We have proposed a graph representation of structured service systems with sessions and pipelines. This is done by translating a CaSPiS process into a graph term of a graph algebra, and by providing the graph algebra with a model of hypergraphs. A graph-based semantics of CaSPiS is then defined through a graph transformation system. The advantage of this approach comes from the intuitive understanding of graphs, together with the mathematical elegance and the large body of theory available on graphs and graph transformations. In particular, the use of the well-studied DPO approach leads to a concurrent semantics that supports the concurrent execution of graph transformations and the recording of causal dependencies between graph elements and transformations. In addition, the use of designs provides a natural mechanism of abstraction and information hiding, which is important for the scalability of graphs. The hypergraph model is new compared with the one given in [27], in that hierarchy is modeled by proper combinations of abstract edges between nodes and edges of different designs. This is a key feature that enables us to define DPO graph transformations. We have provided several sets of graph transformation rules, including basic rules for the congruence and reduction relations between graphs (and thus between processes), together with rules for auxiliary purposes such as tagging, copying, data assignment and garbage collection. We have proved that these graph transformation rules are indeed sound and complete with respect to the congruence and reduction rules of CaSPiS processes.

For future work, we are going to implement our graph transformation system with existing graph-based tools. This involves implementing the causal dependency between graph transformations and graph elements, which enables us to detect the sources of faults and misbehaviors of services. Due to the complexity of the underlying mathematical structures of graphs, we need to consider possible optimizations in the implementation so as to reduce the scale of computation as well as the consumption of computing resources. Future work also includes the application of our graph model to a more substantial case study, and further exploration of the power of the theory of graphs and graph transformations for the analysis of service-oriented programs.

Acknowledgements

This research is supported by the NSFC Grant No. 61272118, the Fundamental Research Funds for the Central Universities No. K5051303020, the projects GAVES and PEARL funded by the Macao Science and Technology Development Fund, and the Italian MIUR project IPODS (PRIN 2008).

Appendix A. Proof of theorems

To prove Theorems 2 to 5 of soundness and completeness, we introduce auxiliary processes into CaSPiS that characterize the "intermediate" graphs arising during the application of DPO rules. Let LC and LS be two disjoint infinite sets, representing labels for copy and labels for value-sharing, respectively. The extended CaSPiS syntax is as follows.
Process      P ::= . . . | l : P | Copy(l) | (l : s).P | VC(l).P | l : s̄.P | V̄C(l).P | (νl : n)P | (ν RC(l, n))P | †P | GB(P; GI) | AS(V; F)P
Pattern      F ::= . . . | l : F | PC(l) | pv(l : x) | pv(PC(l, x))
Value        V ::= . . . | l : V | VC(l) | vv(l : x) | vv(VC(l)) | vv(V) | L : x | L : vv(V) | Sh(L)
Garbage Item GI ::= s | ch | var(x) | F | V | P | S :: {GI, . . . , GI}
where l ∈ LC, L ∈ LS, s ∈ S, n ∈ S ∪ R ∪ V, ch is a channel name and S is a set of names. A process can be a labeled process or a copy process. In a labeled process, a label can apply to the whole process, as in l : P; to a service name, as in (l : s).P or l : s̄.P; or to a restriction, as in (νl : n)P. The corresponding copy processes are Copy(l), VC(l).P, V̄C(l).P and (ν RC(l, n))P, respectively. A labeled process can also contain labeled patterns or labeled values. A labeled pattern is of the form l : F or pv(l : x); the corresponding copy patterns are PC(l) and pv(PC(l, x)), respectively. Similarly, a labeled value is of the form l : V or vv(l : x), and the corresponding copy values are VC(l) and vv(VC(l)), respectively. In addition to labeled values and copy values, we allow prefixed values, shared values and sharing values in a process. A prefixed value is of the form vv(V). It is similar to V but, as we will show later, its graph contains an extra vv-labeled edge. A shared value is of the form L : x or L : vv(V). It corresponds to zero or more sharing values of the form Sh(L), i.e. it can be shared any number of times. Besides the above constructs, we allow pre-tagged processes †P, representing the state in which P is ready for tagging; assignment processes AS(V; F)P, representing the state in which the variables of P are to be assigned the values V according to the patterns F; and processes GB(P; GI) with garbage items GI. A garbage item can be a single one, i.e. a service name s, a channel name ch, a variable var(x), a pattern F, a value V or a process P, or a composite one S :: {GI1, . . . , GIk} composed of a set of garbage items GI1, . . . , GIk and bound by a set of names S. For a pattern F, let F̃ be the pattern obtained from F by replacing each copy sub-pattern PC(l), which corresponds to some l : F′, by F′. We call a process P well-matched if, for each assignment AS(V; F)Q of P, F̃ and V match, i.e. match(F̃; V) exists.
Here, F and V can be two sequences of patterns and values of the same length. A process is well-matched by default. With the extension of processes, we extend the notion of context at the same time; however, the notion of static context remains the same, i.e. an extended context is always non-static. From now on, we use the term "process" to denote
any process defined by the extended syntax above, and "normal process" to denote a process defined by the original CaSPiS syntax (given in Section 2). Similarly, we have normal patterns, normal values and normal contexts. In addition, we call a process, pattern, value or context label-free if it does not contain any label l ∈ LC or L ∈ LS, and we call a one-hole context garbage-free if its hole does not occur inside a garbage item. The graph representation of extended patterns and values is defined as follows.

⟦l : F⟧F⟨v⟩ ≝ F⟨v⟩[⟦F⟧F⟨v^l⟩]
⟦PC(l)⟧F⟨v⟩ ≝ F⟨v⟩[PC(v, l)]
⟦pv(l : x)⟧F⟨v⟩ ≝ F⟨v⟩[pv(v, x^l)]
⟦pv(PC(l, x))⟧F⟨v⟩ ≝ F⟨v⟩[pv(v, x) | PC(x, l)]
⟦l : V⟧V⟨v⟩ ≝ V⟨v⟩[⟦V⟧V⟨v^l⟩]
⟦VC(l)⟧V⟨v⟩ ≝ V⟨v⟩[VC(v, l)]
⟦vv(l : x)⟧V⟨v⟩ ≝ V⟨v⟩[vv(v, x^l)]
⟦vv(VC(l))⟧V⟨v⟩ ≝ V⟨v⟩[(νx)(vv(v, x) | VC(x, l))]
⟦L : x⟧V⟨v⟩ ≝ V⟨v⟩[vv(v, x^L)]
⟦Sh(L)⟧V⟨v⟩ ≝ V⟨v⟩[vv(v, L)]
⟦vv(V)⟧V⟨v⟩ ≝ V⟨v⟩[(νv1)(vv(v, v1) | ⟦V⟧V⟨v1⟩)]
⟦L : vv(V)⟧V⟨v⟩ ≝ V⟨v⟩[(νv1)(vv(v, v1^L) | ⟦V⟧V⟨v1⟩)]
The graph representation of a garbage item GI, denoted ⟦GI⟧g, is a graph term defined as follows.

⟦s⟧g ≝ s
⟦ch⟧g ≝ ch
⟦var(x)⟧g ≝ x
⟦F⟧g ≝ (νv)⟦F⟧F⟨v⟩
⟦V⟧g ≝ (νv)⟦V⟧V⟨v⟩
⟦P⟧g ≝ (νp1)⟦P⟧⟨p1, i, o, t⟩
⟦S :: ∅⟧g ≝ 0
⟦S :: {GI1, . . . , GIk}⟧g ≝ (νS)(⟦GI1⟧g | . . . | ⟦GIk⟧g)   (k ≥ 1)
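The recursive shape of this translation can be sketched as a simple fold over garbage items. The following Python toy is our own illustration (the tuple encodings and the textual output format are assumptions, and the cases for patterns, values and processes are omitted): a name translates to itself, the empty composite to 0, and a non-empty composite to a restriction of the parallel composition of its items' translations.

```python
def trans_garbage(gi):
    """Fold mirroring the garbage-item translation: a name maps to
    itself, the empty composite S :: {} to 0, and a composite
    S :: {GI1, ..., GIk} to a restriction of the parallel
    composition of its items' translations."""
    kind, payload = gi
    if kind in ("service", "channel", "variable"):
        return payload                      # s, ch and var(x) keep their name
    if kind == "composite":
        names, items = payload
        if not items:
            return "0"
        body = "|".join(trans_garbage(item) for item in items)
        return f"(nu {{{','.join(sorted(names))}}})({body})"
    raise ValueError(f"unhandled garbage item: {kind}")

# S :: {s, S1 :: {}} with S = {s}: the inner empty composite becomes 0.
gi = ("composite", ({"s"}, [("service", "s"),
                            ("composite", (set(), []))]))
print(trans_garbage(gi))  # → (nu {s})(s|0)
```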
A garbage item GI is called empty if ⟦GI⟧g ≡d 0; for example, S :: ∅ and S :: {S1 :: ∅, S2 :: ∅} are empty. For the convenience of the proof, we assume that every garbage item GI occurring in some GB(Q; GI) is composite. In fact, each single garbage item GI is equivalent to the composite one ∅ :: {GI}, i.e. ⟦GI⟧g = ⟦∅ :: {GI}⟧g. The graph representation of extended processes is defined as follows.
⟦l : P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[⟦P⟧⟨p^l, i, o, t⟩]
⟦Copy(l)⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[C(p, •^l, i, o, t)]
⟦(l : s).P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[i | t | D⟨p,t⟩[(ν{p1, i1, o1})(Def(p, s^l, p1, i1, o1) | ⟦P⟧⟨p1, i1, o1, t⟩)]⟨p, o⟩]
⟦VC(l).P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[i | t | D⟨p,t⟩[(ν{p1, i1, o1, s})(Def(p, s, p1, i1, o1) | VC(s, l) | ⟦P⟧⟨p1, i1, o1, t⟩)]⟨p, o⟩]
⟦l : s̄.P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[i | t | I⟨p,t⟩[(ν{p1, i1, o1})(Inv(p, s^l, p1, i1, o1) | ⟦P⟧⟨p1, i1, o1, t⟩)]⟨p, o⟩]
⟦V̄C(l).P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[i | t | I⟨p,t⟩[(ν{p1, i1, o1, s})(Inv(p, s, p1, i1, o1) | VC(s, l) | ⟦P⟧⟨p1, i1, o1, t⟩)]⟨p, o⟩]
⟦(νl : n)P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[(ν{p1, n})(Res(p, n^l, p1) | ⟦P⟧⟨p1, i, o, t⟩)]
⟦(ν RC(l, n))P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[(ν{p1, n})(Res(p, n, p1) | RC(n, l) | ⟦P⟧⟨p1, i, o, t⟩)]
⟦†P⟧⟨p,i,o,t⟩ ≝ ⟦P⟧⟨p,i,o,t⟩
⟦GB(P; GI)⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[⟦P⟧⟨p, i, o, t⟩ | ⟦GI⟧g]
⟦AS(V1, . . . , Vk; F1, . . . , Fk)P⟧⟨p,i,o,t⟩ ≝ P⟨p,i,o,t⟩[(ν{v1, . . . , vk, w1, . . . , wk})(AS(v1, w1) | . . . | AS(vk, wk) | ⟦V1⟧V⟨v1⟩ | . . . | ⟦Vk⟧V⟨vk⟩ | (ν bn(F1) ∪ · · · ∪ bn(Fk))(⟦F1⟧F⟨w1⟩ | . . . | ⟦Fk⟧F⟨wk⟩))]

For each of these new process constructs P0, we define the tagged graph ⟦P0⟧† as P⟨p,i,o,t⟩[A(p) | ⟦P0⟧⟨p, i, o, t⟩]. So, Theorem 1 actually means ⟦†P⟧† ⇒*T ⟦P⟧†, and it is valid for every process P, not only normal ones. To study the congruence relation among extended processes, we map them to normal processes. First, we make copies of labeled processes, patterns and values and eliminate the labels: for example, we replace each copy process Copy(l), which corresponds to some l : Q, with Q, and each sharing value Sh(L), which corresponds to some L : x, with x, and then remove all the labels l and L.
In this way, we transform a process P into a label-free process, called the label-free form of P and denoted lf(P). Then, we can map each label-free process P into a normal process, denoted nf(P), defined inductively as follows.

nf(0) ≝ 0
nf(†P) ≝ nf(P)
nf(GB(P; GI)) ≝ nf(P)
nf(AS(V; F)P) ≝ nf(Pσ)
nf((F)P) ≝ (F) nf(P)
nf(⟨V⟩P) ≝ ⟨V̇⟩ nf(P)
nf(⟨V⟩↑P) ≝ ⟨V̇⟩↑ nf(P)
nf(M + M′) ≝ nf(M) + nf(M′)
nf(s.P) ≝ s.nf(P)
nf(s̄.P) ≝ s̄.nf(P)
nf(!P) ≝ !nf(P)
nf(r ▹ P) ≝ r ▹ nf(P)
nf((νn)P) ≝ (νn) nf(P)
nf(P > Q) ≝ nf(P) > nf(Q)
nf(P | Q) ≝ nf(P) | nf(Q)

where σ = match(F; V) and V̇ is the value obtained from V by replacing each vv(V′) by V′. So, for any process P, we can first eliminate its labels by lf() and then map the result to a normal process by nf(). We call this normal process, i.e. nf(lf(P)), the normal form of P. With the notion of normal form, we define a congruence relation between extended processes: two processes P and Q are nf-congruent if their normal forms are congruent, i.e. nf(lf(P)) ≡c nf(lf(Q)). Notice that for any graph H, the process P such that ⟦P⟧† ≡d H is unique up to nf-congruence, if it exists. In addition, nf-congruence is preserved by any context C[·], i.e. nf(lf(P)) ≡c nf(lf(Q)) implies nf(lf(C[P])) ≡c nf(lf(C[Q])).

A.1. Proof of Theorems 2 and 3 of soundness

We first prove the soundness of the tagging rules T, the copy rules P, the rules for congruence C, the garbage collection rules G and the data assignment rules D, in the sense that each of them transforms the tagged graph of a process into that of an nf-congruent one.

Theorem 6 (Soundness of tagging rules). For a process P, a DPO rule R ∈ T and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q such that ⟦Q⟧† ≡d H and nf(lf(P)) ≡c nf(lf(Q)).

Proof. Straightforward for each rule R ∈ T. We only present the proof for a representative rule, (Ses-Tag). In order that (Ses-Tag) can be applied to ⟦P⟧†, P must be of the form C[†r ▹ P1] for some static context C[·]. So, H ≡d ⟦Q⟧† with Q = C[r ▹ †P1]. Since nf(lf(†r ▹ P1)) = nf(lf(r ▹ P1)) = nf(lf(r ▹ †P1)), we have nf(lf(P)) ≡c nf(lf(Q)). ∎

Theorem 7 (Soundness of copy rules). For a process P, a DPO rule R ∈ P and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q such that ⟦Q⟧† ≡d H and nf(lf(P)) ≡c nf(lf(Q)).

Proof. Straightforward for each rule R ∈ P. We only present the proof for the rules (Par-Copy) and (VC-Elim). In order that (Par-Copy) can be applied to ⟦P⟧†, P must be of the form C[Copy(l), l : (P1 | P2)].
So, H ≡d ⟦Q⟧† with Q = C[Copy(l) | Copy(l′), l : P1 | l′ : P2], and thus lf(Q) = lf(C[P1 | P2, P1 | P2]) = lf(P). In order that (VC-Elim) can be applied to ⟦P⟧†, P must be of the form C[P1(vv(VC(l)), vv(l : x))] or C[P1(VC(l), l : s)], where x or s, respectively, is not bound in P1. In the former case, H ≡d ⟦Q⟧† with Q = C[P1(x, x)], and thus lf(Q) = lf(P). In the latter case, H ≡d ⟦Q⟧† with Q = C[P1(s, s)], and we also have lf(Q) = lf(P). ∎

Theorem 8 (Soundness of rules for congruence). For a process P, a DPO rule R ∈ C and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q such that ⟦Q⟧† ≡d H and nf(lf(P)) ≡c nf(lf(Q)).

Proof. According to Theorem 7, the set of copy rules P is sound, and the proof for each rule R ∈ C \ P is straightforward. We only present the proof for two representative rules, (Sum-Unit) and (Par-Res-Comm). In order that (Sum-Unit) can be applied to ⟦P⟧†, P must be of the form C[M + 0]. So, H ≡d ⟦Q⟧† with Q = C[M]. Since nf(lf(M + 0)) = nf(lf(M)) + 0 ≡c nf(lf(M)), we have nf(lf(P)) ≡c nf(lf(Q)). In order that (Par-Res-Comm) can be applied to ⟦P⟧†, P must be of the form C[P1 | (νn)P2] for some non-static context C[·]. Without loss of generality, suppose n ∉ fn(P1), which can be achieved through alpha-conversion. As a result, H ≡d ⟦Q⟧† with Q = C[(νn)(P1 | P2)]. Since nf(lf(P1 | (νn)P2)) ≡c nf(lf((νn)(P1 | P2))), we have nf(lf(P)) ≡c nf(lf(Q)). ∎

Theorem 9 (Soundness of garbage collection rules). For a process P, a DPO rule R ∈ G and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q such that ⟦Q⟧† ≡d H and nf(lf(P)) ≡c nf(lf(Q)).

Proof. Straightforward for each rule R ∈ G. We only present the proof for two representative rules, (Abs-GC) and (PV-GC). In order that (Abs-GC) can be applied to ⟦P⟧†, P must be of the form C[GB(P1; GI((F)P2))]. So, H ≡d ⟦Q⟧† with Q = C[GB(P1; GI(bn(F) :: {F, P2}))]. Notice that nf(lf(P)) ≡c nf(lf(Q)), since nf(lf(GB(P1; GI((F)P2)))) = nf(lf(P1)) = nf(lf(GB(P1; GI(bn(F) :: {F, P2})))).
In order that (PV-GC) can be applied to ⟦P⟧†, P must be of the form C[GB(P1; GI(?x))]. As a result, H ≡d ⟦Q⟧† with Q = C[GB(P1; GI(var(x)))]. Notice that nf(lf(P)) ≡c nf(lf(Q)), since nf(lf(GB(P1; GI(?x)))) = nf(lf(P1)) = nf(lf(GB(P1; GI(var(x))))). ∎

Theorem 10 (Soundness of data assignment rules). For a process P, a DPO rule R ∈ D and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q such that ⟦Q⟧† ≡d H and nf(lf(P)) ≡c nf(lf(Q)).

Proof. Straightforward for each rule R ∈ D. We only present the proof for a representative rule, (VV-Norm). In order that (VV-Norm) can be applied to ⟦P⟧†, P must be of the form P1(vv(V)). Let Q = P1(V). We have H ≡d ⟦Q⟧† and nf(lf(P)) = nf(lf(Q)). There is one exception: V is a shared value L : V0 and the value vv(V) is itself shared, i.e. P is of the form P1(L′ : vv(L : V0)). In this case, we choose Q = P1(L′ : V0)[Sh(L′)/Sh(L)], and still have ⟦Q⟧† ≡d H and lf(P) = lf(Q). ∎

Then we prove the soundness of the rules for reduction R, in the sense that they transform the tagged graph of a normal process P into the tagged graph of a process Q that it reduces to (up to nf-congruence). Notice that there is a special case: if P contains a pattern F and a value V that are to interact but do not match, the result Q contains an assignment AS(V; F) and is not a well-matched process. For such a process Q, the tagged graph can never be transformed back into the tagged graph of a well-matched process by any DPO rules. This is, however, consistent with the reduction semantics, in which interactions only happen between patterns and values that match.

Theorem 11 (Soundness of rules for reduction). For a normal process P, a DPO rule R ∈ R and a graph H such that ⟦P⟧† ⇒R H, there exists a process Q, which may not be well-matched, such that ⟦Q⟧† ≡d H. And if Q is well-matched, P → nf(lf(Q)).

Proof. Straightforward for each rule R ∈ R. We only present the proof for a representative rule, (Ses-Sync). In order that (Ses-Sync) can be applied to ⟦P⟧†, P must be of the form C[r ▹ C′[⟨V⟩P1 + M1], r ▹ C2[(F)P2 + M2]], where C[·,·] is static and restriction-balanced, C′[·] and C2[·] are static, session-immune and restriction-immune, and the hole of C′[·] does not occur in the scope of a pipeline. This implies that there exists a normal process P′ such that C′[Q′] ≡c P′ | Q′ for any normal process Q′. As a result, H ≡d ⟦Q⟧† with Q = C[r ▹ C′[GB(P1; ∅ :: {M1})], r ▹ C2[GB(AS(V; F)P2; ∅ :: {M2})]]. If Q is well-matched, the match σ = match(F; V) exists, so that P ≡c C[r ▹ (P′ | (⟨V⟩P1 + M1)), r ▹ C2[(F)P2 + M2]] → C[r ▹ (P′ | P1), r ▹ C2[P2σ]] ≡c C[r ▹ C′[P1], r ▹ C2[P2σ]] = nf(lf(Q)). ∎

With the soundness of every individual rule set, we are able to prove Theorems 2 and 3.

Proof of Theorem 2. ⟦P⟧† ⇒*C∪T ⟦Q⟧† means ⟦P⟧† ⇒R1 H1 ⇒R2 · · · ⇒Rk Hk ≡d ⟦Q⟧†, for some graphs H1, . . . , Hk and rules R1, . . . , Rk ∈ C ∪ T. According to Theorems 6 and 8, there exists a sequence of processes P1, . . . , Pk such that ⟦Pj⟧† ≡d Hj for 1 ≤ j ≤ k and P ≡c nf(lf(P1)) ≡c · · · ≡c nf(lf(Pk)). Since ⟦Pk⟧† ≡d Hk ≡d ⟦Q⟧†, nf(lf(Pk)) ≡c Q. As a result, P ≡c Q. ∎

Proof of Theorem 3. ⟦P⟧† ⇒*A ⟦Q⟧† means ⟦P⟧† ≡d H0 ⇒R1 H1 ⇒R2 · · · ⇒Rk Hk ≡d ⟦Q⟧†, for some graphs H0, H1, . . . , Hk and rules R1, . . . , Rk ∈ A. Suppose Rj0 (1 ≤ j0 ≤ k) is the only one among these rules that belongs to R, i.e. each of the others belongs to T ∪ C ∪ G ∪ D. According to Theorems 6, 8, 9, 10 and 11, there exists a sequence of processes P0 = P, P1, . . . , Pj0, where Pj0 may not be well-matched, such that ⟦Pj⟧† ≡d Hj for 0 ≤ j ≤ j0 and P ≡c nf(lf(P1)) ≡c · · · ≡c nf(lf(Pj0−1)). Recall that each rule in R can only be applied to graphs of normal processes; hence there must be a normal process P′ such that ⟦P′⟧† ≡d Hj0−1 ≡d ⟦Pj0−1⟧†, and thus P′ ≡c nf(lf(Pj0−1)). Moreover, since ⟦Pj0⟧† ≡d Hj0 ⇒*A ⟦Q⟧†, the process Pj0 must be well-matched, because no rule in A is able to transform the tagged graph of a process that is not well-matched into that of a well-matched process. According to Theorem 11, P′ → nf(lf(Pj0)). Then, according to Theorems 6, 8, 9 and 10, there exists a sequence of processes Pj0+1, . . . , Pk such that ⟦Pj⟧† ≡d Hj for j0 < j ≤ k and nf(lf(Pj0)) ≡c · · · ≡c nf(lf(Pk)). Since ⟦Pk⟧† ≡d Hk ≡d ⟦Q⟧†, nf(lf(Pk)) ≡c Q. As a result, P ≡c P′ → nf(lf(Pj0)) ≡c Q. ∎

A.2. Proof of Theorems 4 and 5 of completeness

In order to prove the completeness of the graph transformation rules, we extend the notions of congruence and reduction and consider a few of their variants. These variant relations are defined only between normal processes; in this subsection, therefore, a process or a context always means a normal one. For two processes P and Q, we say P is one-step congruent with Q, denoted P ≡•c Q, if there is a congruence rule P′ ≡c Q′ (see Section 2) such that P = C[P′] and Q = C[Q′], or P = C[Q′] and Q = C[P′], for some context C[·]. As a result, the congruence relation ≡c is the reflexive and transitive closure of ≡•c. For two processes P and Q, we say P is one-step strictly congruent with Q, denoted P ≡•s Q, if there is a basic congruence rule P′ ≡c Q′ (see Section 2) such that P = C[P′] and Q = C[Q′], or P = C[Q′] and Q = C[P′], for some context C[·]. Let ≡s be the reflexive and transitive closure of ≡•s. We say P is strictly congruent with Q if P ≡s Q.
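The reflexive and transitive closures used for these variant relations all have the same computational shape. As a minimal sketch of our own (with strings standing in for CaSPiS terms, and a size cut-off only to keep the toy one-step relation finite):

```python
def rt_closure(step, seeds):
    """Everything reachable from the seeds in zero or more steps of a
    one-step relation; `step` maps an element to its successors."""
    reached, frontier = set(seeds), set(seeds)
    while frontier:
        frontier = {y for x in frontier for y in step(x)} - reached
        reached |= frontier
    return reached

def unfold_once(term):
    """Toy one-step unfolding on strings: rewrite one occurrence of
    "!P" to "P|!P" (the real relation acts on CaSPiS terms)."""
    out, i = set(), term.find("!P")
    while i != -1:
        out.add(term[:i] + "P|!P" + term[i + 2:])
        i = term.find("!P", i + 2)
    return {t for t in out if t.count("P") <= 5}   # cut-off for finiteness

print(sorted(rt_closure(unfold_once, {"!P"})))
# → ['!P', 'P|!P', 'P|P|!P', 'P|P|P|!P', 'P|P|P|P|!P']
```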
For two processes P and Q, we say Q is a one-step expansion of P, denoted P ⇝• Q, if there is a special congruence rule P′ ≡c Q′ (see Section 2) such that P = C[P′] and Q = C[Q′] for some context C[·]. So, P ≡•c Q means P ≡•s Q, P ⇝• Q or Q ⇝• P. Furthermore, if the congruence rule is one of the three for moving restrictions forward (Section 2), we say Q is a one-step res-forwardness of P, denoted P ⇝•ν Q. Otherwise, the congruence rule is the one for unfolding replications (Section 2); in this case, we say Q is a one-step unfolding of P, denoted P ⇝•! Q. For two processes P and Q, we say Q is a flexible unfolding of P, denoted P ⇝f! Q, if P = C[!P1, . . . , !Pk] and Q = C[P1|!P1, . . . , Pk|!Pk] for some k-hole context C[·, . . . , ·] and processes P1, . . . , Pk (k ≥ 0). Such a flexible unfolding can be achieved by applying one-step unfolding k times, to !P1, . . . , !Pk, respectively; the order of these k applications is not significant, which is why we call it "flexible". It is also worth pointing out that a one-step unfolding is the special case of a flexible unfolding with k = 1, i.e. P ⇝•! Q implies P ⇝f! Q.

For two processes P and Q, we say Q is a one-step generalization of P, denoted P ⊑•c Q, if either P ≡•s Q or P ⇝• Q. As a result, P ≡•c Q if and only if P ⊑•c Q or Q ⇝• P. Let ⊑c be the reflexive and transitive closure of ⊑•c; we say Q is a generalization of P if P ⊑c Q. Similarly, we say Q is a one-step reorganization of P, denoted P ⊑•r Q, if either P ⇝•ν Q or P ≡•s Q. As a result, P ⊑•c Q if and only if P ⊑•r Q or P ⇝•! Q. Let ⊑r be the reflexive and transitive closure of ⊑•r; we say Q is a reorganization of P if P ⊑r Q.

By applying the congruence rules provided in Section 2, we can move the restrictions of a process P to the front as much as possible. The resulting process is unique for P up to strict congruence ≡s; we call it the res-prefixed form of P, denoted rp(P). It is worth pointing out that the res-prefixed form of a process does not change under (one-step) reorganizations, i.e. P ⊑•r Q implies rp(P) ≡s rp(Q). Another fact is that the res-prefixed form preserves the generalization relation, i.e. P ⊑c Q implies rp(P) ⊑c rp(Q).

Let pure reduction, denoted →p, be the relation between processes defined in the same way as reduction (see Section 2), but without identifying congruent processes (i.e. replacing "≡c" by "="). So, the notion of reduction is in fact a generalization of pure reduction that allows congruences: P → Q if and only if P ≡c P0 →p Q0 ≡c Q for some processes P0, Q0. For two processes P and Q, we say that P strictly reduces to Q, denoted P →s Q, if P ⊑r P0 →p Q0 ⊑r Q for some processes P0 and Q0. Similarly to the generalization relation, strict reduction is preserved by the res-prefixed form, i.e. P →s Q implies rp(P) →s rp(Q). Let →*s be the reflexive and transitive closure of →s, so that P →*s Q means P can be transformed into Q through a sequence of strict reductions.

With the new notions of congruence and reduction, we separate Theorem 4 into Propositions 1 and 2, and Theorem 5 into Propositions 3 and 4. We prove these four propositions in the subsequent four subsections, respectively.

Proposition 1. For two processes P and Q, P ≡c Q implies P ⊑c Q′ and Q ⊑c Q′ for some process Q′.

Proposition 2. For two processes P and Q, P ⊑c Q implies ⟦P⟧† ⇒*C∪T ⟦Q⟧†.

Proposition 3. For two processes P and Q, P → Q implies P ⊑c P′ →s Q′ ≡c Q for some processes P′ and Q′.

Proposition 4. For two processes P and Q, P →s Q implies ⟦P⟧† ⇒*A ⟦Q⟧†.

A.3. Proof of Proposition 1

This subsection only considers normal processes and normal contexts. We first introduce a few lemmas.
Lemma 1. If P ≻̄• Q and P ≻̄f P′, then P′ ≻c Q′ and Q ≻̄f Q′ for some process Q′.

Proof. Suppose from P to Q, !P0 is unfolded to P0|!P0, while from P to P′, !P1, …, !Pk become P1|!P1, …, Pk|!Pk, respectively. Since P′ is a flexible unfolding of P, the replications !P1, …, !Pk are pairwise irrelevant and their order is not important. As for the relation of !P0 and !P1, …, !Pk in P, there are three cases. (1) !P0 is irrelevant with !P1, …, !Pk. So, P = C[!P0, !P1, …, !Pk] for some context C[·, …, ·]. As a result, Q = C[P0|!P0, !P1, …, !Pk] and P′ = C[!P0, P1|!P1, …, Pk|!Pk]. In this case, we choose Q′ = C[P0|!P0, P1|!P1, …, Pk|!Pk], so that P′ ≻c Q′ and Q ≻̄f Q′. (2) !P0 is included in one of !P1, …, !Pk. Without loss of generality, suppose it is included in !Pk. So, P = C[!P1, …, !Pk−1, !Pk] and Pk = C1[!P0] for some contexts C[·, …, ·] and C1[·]. Thus Q = C[!P1, …, !Pk−1, !P′k] and P′ = C[P1|!P1, …, Pk−1|!Pk−1, Pk|!Pk], where P′k = C1[P0|!P0]. We choose Q′ = C[P1|!P1, …, Pk−1|!Pk−1, P′k|!P′k], so that P′ ≻c Q′ and Q ≻̄f Q′. (3) Part of !P1, …, !Pk is included in !P0. Without loss of generality, suppose !P0 contains !P1, …, !Pj for some j ≤ k. So, P = C[!P0, !Pj+1, …, !Pk] and P0 = C1[!P1, …, !Pj] for some contexts C[·, …, ·] and C1[·, …, ·]. As a result, Q = C[P0|!P0, !Pj+1, …, !Pk] and P′ = C[!P0, Pj+1|!Pj+1, …, Pk|!Pk]. Let P′0 be a shorthand for C1[P1|!P1, …, Pj|!Pj], and let Q′ = C[P′0|!P0, Pj+1|!Pj+1, …, Pk|!Pk]. Then P′ ≻c Q′ and Q ≻̄f Q′. □
Lemma 2. If P ≻r• Q and P ≻̄f P′, then P′ ≻c Q′ and Q ≻̄f Q′ for some Q′.
Proof. Suppose P = C[!P1, …, !Pk] and P′ = C[P1|!P1, …, Pk|!Pk]. Notice that P ≻r• Q. There are two cases. (1) One of P1, …, Pk is changed when P transforms into Q. Without loss of generality, suppose P1 is changed into P′1, i.e. Q = C[!P′1, !P2, …, !Pk] and P1 ≻r• P′1. In this case, we choose Q′ = C[P′1|!P′1, P2|!P2, …, Pk|!Pk], so that P′ ≻c Q′ and Q ≻̄f Q′. (2) None of P1, …, Pk is changed when P transforms into Q. Notice that each replication is simply preserved by any congruence rule (except the rule !P ≡c P|!P). There must be a k-hole context C1[·, …, ·] such that Q = C1[!P1, …, !Pk] and that for any processes X1, …, Xk, C[X1, …, Xk] ≻r• C1[X1, …, Xk]. Let Q′ be C1[P1|!P1, …, Pk|!Pk]; then we have P′ ≻c Q′ and Q ≻̄f Q′. □

A direct deduction of Lemmas 1 and 2 is as follows. Recall that each step of a generalization (≻c•) is either an unfolding (≻̄•) or a reorganization (≻r•).
Lemma 3. If P ≻c Q and P ≻̄f P′, then P′ ≻c Q′ and Q ≻̄f Q′ for some Q′.

With Lemma 3, we can prove the following.

Lemma 4. If P ≻c Q and P ≻c• P′, then P′ ≻c Q′ and Q ≻c Q′ for some Q′.
Proof. There are two cases for P ≻c• P′. (1) P ≻̄• P′. It is a special case of P ≻̄f P′. According to Lemma 3, there exists a process Q′ such that P′ ≻c Q′ and Q ≻̄f Q′. Notice that Q ≻̄f Q′ implies Q ≻c Q′. (2) P ≻r• P′, which implies rp(P) ≡s rp(P′). In this case, we choose Q′ = rp(Q), so that Q ≻c Q′. Also notice that P ≻c Q implies rp(P) ≻c rp(Q). We have P′ ≻c rp(P′) ≡s rp(P) ≻c rp(Q) = Q′. □

With Lemma 4, Proposition 1 can be trivially proved by induction on the number k of one-step congruences from P to Q, i.e. P = P0 ≡c• P1 ≡c• ⋯ ≡c• Pk = Q.

A.4. Proof of Proposition 2

To prove Proposition 2, we only need to prove the following proposition, since a generalization is a sequence of one-step generalizations.

Proposition 5. P ≻c• Q implies ⟦P⟧† ⇒*_{C∪T} ⟦Q⟧†.

We first show that derivations of graphs are preserved by process contexts.

Lemma 5. Let δ be a set of DPO rules and P, Q be two processes such that ⟦P⟧ ⇒*_δ ⟦Q⟧ and ⟦P⟧† ⇒*_δ ⟦Q⟧†. Then, for any context C[·], ⟦C[P]⟧ ⇒*_δ ⟦C[Q]⟧ and ⟦C[P]⟧† ⇒*_δ ⟦C[Q]⟧†.

Proof. For any context C[·] and process X, ⟦C[X]⟧ is constructed based on ⟦X⟧, i.e. ⟦C[X]⟧ is of the form G(⟦X⟧). As a result, ⟦C[P]⟧ ≡d G(⟦P⟧) ⇒*_δ G(⟦Q⟧) ≡d ⟦C[Q]⟧. If C[·] is a static context, ⟦C[X]⟧† is constructed based on ⟦X⟧† for any process X, i.e. ⟦C[X]⟧† is of the form G(⟦X⟧†). In this case, ⟦C[P]⟧† ≡d G(⟦P⟧†) ⇒*_δ G(⟦Q⟧†) ≡d ⟦C[Q]⟧†. If C[·] is non-static, ⟦C[X]⟧† is constructed based on the untagged graph ⟦X⟧ for any process X, i.e. ⟦C[X]⟧† is of the form G(⟦X⟧). In this case, ⟦C[P]⟧† ≡d G(⟦P⟧) ⇒*_δ G(⟦Q⟧) ≡d ⟦C[Q]⟧†. □

Then, we need to prove the completeness of the copy rules P, in that they can “unfold” the graph of any replication !P to that of !P|P. For this, we show that a pattern, a value and a (sub-)process can be correctly copied through applications of copy rules.

Lemma 6. Let F(?x1, …, ?xk) be a normal pattern, where x1, …, xk are all its pattern variables. We have

⟦P(l : F(?x1, …, ?xk)) | Q(PC(l))⟧ ⇒*_P ⟦P(F(pv(l1 : x1), …, pv(lk : xk))) | Q(F(pv(PC(l1, x1)), …, pv(PC(lk, xk))))⟧.

Proof. By induction on the structure of F = F(?x1, …, ?xk). In this proof and afterwards, we always use “IH” as a shorthand for “induction hypothesis”.

Case F = ?x. ⟦P(l : ?x) | Q(PC(l))⟧ ⇒(PV-PCopy) ⟦P(pv(l : x)) | Q(pv(PC(l, x)))⟧.

Case F = f(F1, …, Fj). For 1 ≤ j′ ≤ j, let Fj′ be of the form Fj′(?x^{j′}_1, …, ?x^{j′}_{k_{j′}}), where x^{j′}_1, …, x^{j′}_{k_{j′}} are all its pattern variables. So, x^1_1, …, x^1_{k_1}, …, x^j_1, …, x^j_{k_j} are all the pattern variables of F.

⟦P(l : f(F1, …, Fj)) | Q(PC(l))⟧
⇒(Ctr-PCopy) ⟦P(f(l1 : F1, …, lj : Fj)) | Q(f(PC(l1), …, PC(lj)))⟧
(IH) ⇒*_P ⟦P(f(F1(pv(l^1_1, x^1_1), …, pv(l^1_{k_1}, x^1_{k_1})), …, Fj(pv(l^j_1, x^j_1), …, pv(l^j_{k_j}, x^j_{k_j})))) | Q(f(F1(pv(PC(l^1_1, x^1_1)), …, pv(PC(l^1_{k_1}, x^1_{k_1}))), …, Fj(pv(PC(l^j_1, x^j_1)), …, pv(PC(l^j_{k_j}, x^j_{k_j})))))⟧
≡d ⟦P(F(pv(l^1_1, x^1_1), …, pv(l^j_{k_j}, x^j_{k_j}))) | Q(F(pv(PC(l^1_1, x^1_1)), …, pv(PC(l^j_{k_j}, x^j_{k_j}))))⟧ □
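The induction in the proof of Lemma 6 is ordinary structural recursion on patterns: rule (PV-PCopy) handles the base case of a pattern variable ?x, and rule (Ctr-PCopy) handles the constructor case. A minimal Python sketch of this recursion, over a tagged-tuple encoding of patterns that is our own and not the paper's graph representation:

```python
# Patterns as tagged tuples (our own encoding):
#   ('var', x)             a pattern variable ?x
#   ('ctr', f, (F1, ...))  a constructed pattern f(F1, ..., Fk)
# copy_pattern mirrors the induction of Lemma 6: the 'var' branch plays
# the role of rule (PV-PCopy), the 'ctr' branch that of (Ctr-PCopy).

def copy_pattern(pattern):
    tag = pattern[0]
    if tag == 'var':
        return ('var', pattern[1])
    if tag == 'ctr':
        _, f, args = pattern
        return ('ctr', f, tuple(copy_pattern(a) for a in args))
    raise ValueError(f'not a pattern: {pattern!r}')

F = ('ctr', 'pair', (('var', 'x'), ('ctr', 'succ', (('var', 'y'),))))
F2 = copy_pattern(F)
assert F2 == F    # the copy is structurally identical to the original
```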
Lemma 7. Let V(x1, …, xk) be a normal value, where x1, …, xk are all the occurrences of its value variables. We have

⟦P(l : V(x1, …, xk)) | Q(VC(l))⟧ ⇒*_P ⟦P(V(vv(l1 : x1), …, vv(lk : xk))) | Q(V(vv(VC(l1)), …, vv(VC(lk))))⟧.

Proof. By induction on the structure of the value V, similar to Lemma 6. □
Lemma 8. Let P(x1, …, xk, s1, …, sj) be a normal process that contains no sessions, where x1, …, xk and s1, …, sj are all the occurrences of its free variables and free service names, respectively. For any contexts C1[·] and C2[·],

⟦C1[l : P(x1, …, xk, s1, …, sj)] | C2[Copy(l)]⟧ ⇒*_P ⟦C1[P(vv(l1 : x1), …, vv(lk : xk), ls1 : s1, …, lsj : sj)] | C2[P(vv(VC(l1)), …, vv(VC(lk)), VC(ls1), …, VC(lsj))]⟧.

Proof. By induction on the structure of P = P(x1, …, xk, s1, …, sj). We only present the proof for two representative cases.

Case P = ⟨V⟩Q. Let V be of the form V(x′1, …, x′j′), where x′1, …, x′j′ are all the occurrences of its value variables, and let Q be of the form Q(y1, …, yk, s1, …, sj), where y1, …, yk and s1, …, sj are all the occurrences of its free variables and free service names, respectively. So, x′1, …, x′j′, y1, …, yk are all the occurrences of free variables of P.

⟦C1[l : ⟨V⟩Q] | C2[Copy(l)]⟧
⇒(Con-Copy) ⟦C1[⟨l′ : V⟩(l″ : Q)] | C2[⟨VC(l′)⟩Copy(l″)]⟧
(Lemma 7, IH) ⇒*_P ⟦C1[⟨V(vv(l′1 : x′1), …, vv(l′j′ : x′j′))⟩Q(vv(l1 : y1), …, vv(lk : yk), ls1 : s1, …, lsj : sj)] | C2[⟨V(vv(VC(l′1)), …, vv(VC(l′j′)))⟩Q(vv(VC(l1)), …, vv(VC(lk)), VC(ls1), …, VC(lsj))]⟧

Case P = s.Q. Let Q be Q(x1, …, xk, s1, …, sj), where x1, …, xk and s1, …, sj are all the occurrences of its free variables and free service names, respectively. So, s, s1, …, sj are all the occurrences of free service names of P.

⟦C1[l : (s.Q)] | C2[Copy(l)]⟧
⇒(Def-Copy) ⟦C1[(ls : s).(l′ : Q)] | C2[VC(ls).Copy(l′)]⟧
(IH) ⇒*_P ⟦C1[(ls : s).Q(vv(l1 : x1), …, vv(lk : xk), ls1 : s1, …, lsj : sj)] | C2[VC(ls).Q(vv(VC(l1)), …, vv(VC(lk)), VC(ls1), …, VC(lsj))]⟧ □
Now, we can draw the conclusion that the set of copy rules is complete.

Theorem 12 (Completeness of copy rules). For any normal process !P, ⟦!P⟧ ⇒*_P ⟦!P | P⟧.

Proof. Let P be of the form P(x1, …, xk, s1, …, sj), where x1, …, xk and s1, …, sj are all the occurrences of its free variables and free service names, respectively.

⟦!P⟧
⇒(Rep-Step) ⟦!(l : P) | Copy(l)⟧
(Lemma 8) ⇒*_P ⟦!P(vv(l1 : x1), …, vv(lk : xk), ls1 : s1, …, lsj : sj) | P(vv(VC(l1)), …, vv(VC(lk)), VC(ls1), …, VC(lsj))⟧
⇒*(VC-Elim) ⟦!P(x1, …, xk, s1, …, sj) | P(x1, …, xk, s1, …, sj)⟧
≡d ⟦!P | P⟧ □
With this theorem, we are able to prove that each congruence rule can be simulated by graph transformation rules.

Lemma 9. For each basic congruence rule LS ≡c RS, ⟦LS⟧ ⇒*_C ⟦RS⟧ ⇒*_C ⟦LS⟧ and ⟦LS⟧† ⇒*_C ⟦RS⟧† ⇒*_C ⟦LS⟧†.

Proof. Straightforward for each rule. We only present the proof for two representative rules. For the rule LS ≡c RS where LS = P′|P″ and RS = P″|P′, we have ⟦LS⟧ ⇒(Par-Comm) ⟦RS⟧ ⇒(Par-Comm) ⟦LS⟧, and ⟦LS⟧† ⇒(Par-Comm) ⟦RS⟧† ⇒(Par-Comm) ⟦LS⟧†. For the rule LS ≡c RS where LS = (P′|P″)|P‴ and RS = P′|(P″|P‴), we have ⟦LS⟧ ⇒(Par-Assoc) ⟦RS⟧ ⇒(Par-Comm) ⟦(P″|P‴)|P′⟧ ⇒(Par-Comm) ⟦(P‴|P″)|P′⟧ ⇒(Par-Assoc) ⟦P‴|(P″|P′)⟧ ⇒(Par-Comm) ⟦P‴|(P′|P″)⟧ ⇒(Par-Comm) ⟦LS⟧. In the same way, ⟦LS⟧† ⇒(Par-Assoc) ⟦RS⟧† ⇒*{(Par-Comm),(Par-Assoc)} ⟦LS⟧†. □
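The multi-step derivation for the associativity rule in the proof of Lemma 9 can be replayed mechanically. The following Python sketch is our own term-level encoding (the paper's rules act on graphs, not terms); it implements (Par-Comm) and (Par-Assoc) as rewrites at a given position and checks that the chain leads from LS to RS and back to LS:

```python
# Parallel compositions as tagged tuples ('par', P, Q); a path is a
# sequence of child indices locating the redex.

def get(t, path):
    for i in path:
        t = t[i]
    return t

def put(t, path, new):
    if not path:
        return new
    i = path[0]
    return t[:i] + (put(t[i], path[1:], new),) + t[i + 1:]

def comm(t, path):                  # (P|Q) -> (Q|P), rule (Par-Comm)
    _, p, q = get(t, path)
    return put(t, path, ('par', q, p))

def assoc(t, path):                 # ((P|Q)|R) -> (P|(Q|R)), rule (Par-Assoc)
    _, pq, r = get(t, path)
    _, p, q = pq
    return put(t, path, ('par', p, ('par', q, r)))

LS = ('par', ('par', 'P1', 'P2'), 'P3')   # (P'|P'')|P'''
RS = assoc(LS, ())                         # P'|(P''|P''')
t = comm(RS, ())                           # (P''|P''')|P'
t = comm(t, (1,))                          # (P'''|P'')|P'
t = assoc(t, ())                           # P'''|(P''|P')
t = comm(t, (2,))                          # P'''|(P'|P'')
t = comm(t, ())                            # (P'|P'')|P''' = LS
assert t == LS
```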
Lemma 10. For each special congruence rule LS ≡c RS, ⟦LS⟧ ⇒*_C ⟦RS⟧ and ⟦LS⟧† ⇒*_{C∪T} ⟦RS⟧†.

Proof. Straightforward for each rule. We only present the proof for two representative rules. For the rule LS ≡c RS where LS = P|(νn)Q and RS = (νn)(P|Q), we have ⟦LS⟧ ⇒(Par-Res-Comm) ⟦RS⟧ and ⟦LS⟧† ≡d ⟦RS⟧†. For the rule LS ≡c RS where LS = !P and RS = P|!P, we have ⟦!P⟧ ⇒*_P ⟦!P|P⟧ according to Theorem 12. As a result, ⟦LS⟧ ⇒*_P ⟦!P|P⟧ ⇒(Par-Comm) ⟦RS⟧. In addition, ⟦LS⟧† ≡d P(p,i,o,t)[A(p) | ⟦LS⟧⟨p, i, o, t⟩] ⇒*_C P(p,i,o,t)[A(p) | ⟦RS⟧⟨p, i, o, t⟩] ⇒*_T ⟦RS⟧† according to Theorem 1. □

These lemmas enable us to prove Proposition 5.

Proof of Proposition 5. P ≻c• Q means either P ≡s• Q or P ≻• Q. If P ≡s• Q, there exists a basic congruence rule LS ≡c RS such that P = C[LS], Q = C[RS] or P = C[RS], Q = C[LS] for some context C[·]. According to Lemma 9, we have ⟦LS⟧ ⇒*_C ⟦RS⟧ ⇒*_C ⟦LS⟧ and ⟦LS⟧† ⇒*_C ⟦RS⟧† ⇒*_C ⟦LS⟧†. Then, according to Lemma 5, we have ⟦P⟧† ⇒*_C ⟦Q⟧† (and also ⟦Q⟧† ⇒*_C ⟦P⟧†). If P ≻• Q, there exists a special congruence rule LS ≡c RS such that P = C[LS] and Q = C[RS] for some context C[·]. According to Lemma 10, we have ⟦LS⟧ ⇒*_C ⟦RS⟧ and ⟦LS⟧† ⇒*_{C∪T} ⟦RS⟧†. Then, according to Lemma 5, we have ⟦P⟧† ⇒*_{C∪T} ⟦Q⟧†. □
A.5. Proof of Proposition 3

This subsection only considers normal processes and normal contexts. We first introduce a couple of lemmas.

Lemma 11. If P →s Q and P ≻c• P′, then P′ →s Q′ and Q ≻c Q′ for some Q′.

Proof. There are two cases for P ≻c• P′. (1) P ≻̄• P′. Suppose P = C[!P0] and P′ = C[P0|!P0] for some context C[·] and process P0. Notice that !P0 cannot take part in the strict reduction P →s Q; it will be either preserved or simply deleted by the reduction. (1.1) If it is preserved, then there exists a context C1[·] such that Q = C1[!P0], and C[X] →s C1[X] for any process X. In this case, we can choose Q′ = C1[P0|!P0], so that P′ →s Q′ and Q ≻c Q′. (1.2) If !P0 is deleted by the reduction, then C[X] →s Q for any process X. In this case, we choose Q′ = Q, so that P′ →s Q′ and Q ≻c Q′. (2) P ≻r• P′, which implies rp(P) ≡s rp(P′). In this case, we choose Q′ = rp(Q), so that Q ≻c Q′. Also notice that P →s Q implies rp(P) →s rp(Q). We have P′ ≻r rp(P′) ≡s rp(P) →s rp(Q) = Q′. □

A direct deduction of Lemma 11 is as follows. Notice that a generalization (≻c) is a sequence of one-step generalizations (≻c•).

Lemma 12. If P →s Q and P ≻c P′, then P′ →s Q′ and Q ≻c Q′ for some Q′.

This lemma enables us to prove Proposition 3.

Proof of Proposition 3. P → Q means P ≡c P0 →p Q0 ≡c Q for some P0 and Q0, and hence P0 →s Q0. According to Proposition 1, there exists a process P′ such that P ≻c P′ and P0 ≻c P′. Then, according to Lemma 12, there exists a process Q′ such that P′ →s Q′ and Q0 ≻c Q′. From Q0 ≻c Q′, we know that Q′ ≡c Q0 ≡c Q. □

A.6. Proof of Proposition 4

To prove Proposition 4, we only need to prove the following proposition.

Proposition 6. P →p Q implies ⟦P⟧† ⇒*_A ⟦Q⟧†.

We first prove the completeness of the garbage collection rules G, in that they transform the graph of any process GB(P; GI) into that of P. For this, we define the size sz(GI) of a garbage item GI as follows.
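The res-prefixed form rp(P), which Lemma 11 and the proof of Proposition 3 appeal to repeatedly, can be illustrated by a small Python sketch. This is a toy encoding of our own; it assumes all bound names are pairwise distinct and distinct from the free names, so that no α-conversion is needed when pulling restrictions to the front.

```python
# Toy terms: ('nu', n, P) restriction, ('par', P, Q) parallel composition,
# atoms as strings. rp returns the restricted names in front-to-back order
# together with the restriction-free body.

def rp(term):
    """Return (names, body): the fronted restrictions and the residual body."""
    if isinstance(term, str):
        return [], term
    if term[0] == 'nu':
        names, body = rp(term[2])
        return [term[1]] + names, body
    if term[0] == 'par':
        ns1, b1 = rp(term[1])
        ns2, b2 = rp(term[2])
        return ns1 + ns2, ('par', b1, b2)
    raise ValueError(term)

# ((nu n) A) | (nu m)(B | (nu o) C)  becomes  (nu n, m, o)(A | (B | C))
P = ('par', ('nu', 'n', 'A'), ('nu', 'm', ('par', 'B', ('nu', 'o', 'C'))))
names, body = rp(P)
assert names == ['n', 'm', 'o']
assert body == ('par', 'A', ('par', 'B', 'C'))
```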
        ⎧ 1                          if GI = s, ch or var(x)
        ⎪ 2                          if GI = ?x or x
        ⎪ sz(F1) + ⋯ + sz(Fk) + 1    if GI = f(F1, …, Fk)
        ⎪ sz(V1) + ⋯ + sz(Vk) + 1    if GI = f(V1, …, Vk)
        ⎪ 4                          if GI = 0
        ⎪ sz(F) + sz(P) + 1          if GI = (F)P
 def    ⎪ sz(V) + sz(P) + 1          if GI = ⟨V⟩P or ⟨V⟩↑P
sz(GI) =⎨ sz(P1) + sz(P2) + 1        if GI = P1|P2 or P1 + P2
        ⎪ sz(P) + 5                  if GI = s.P or s̄.P
        ⎪ sz(P1) + sz(P2) + 2        if GI = P1 > P2
        ⎪ sz(P) + 2                  if GI = (νn)P
        ⎪ sz(P) + 4                  if GI = !P
        ⎪ 0                          if GI = S :: ∅
        ⎩ sz(GI1) + ⋯ + sz(GIk)      if GI = S :: {GI1, …, GIk}
The idea is that the total size of garbage items strictly decreases through the application of a garbage collection rule. Therefore, the garbage items can be removed in finitely many steps.

Lemma 13. For any process P and composite garbage item GI, ⟦GB(P; GI)⟧ ⇒*_G ⟦P⟧.

Proof. By induction on the size of GI. If sz(GI) = 0, GI is empty and ⟦GB(P; GI)⟧ ≡d ⟦P⟧. For sz(GI) > 0, GI contains at least one single garbage item GI0, i.e. GI is of the form GI(S :: {GI0, GI1, …, GIk}). It is straightforward to prove that ⟦GB(P; GI)⟧ ⇒*_G ⟦P⟧ for each case of GI0. We only present the proof for two representative cases.

Case GI0 = (F)P1. ⟦GB(P; GI)⟧ ⇒(Abs-GC) ⟦GB(P; GI′)⟧, where GI′ = GI(S :: {bn(F) :: {F, P1}, GI1, …, GIk}). Since sz(GI′) < sz(GI), ⟦GB(P; GI′)⟧ ⇒*_G ⟦P⟧ according to the IH.

Case GI0 = ?x. ⟦GB(P; GI)⟧ ⇒(PV-GC) ⟦GB(P; GI′)⟧, where GI′ = GI(S :: {var(x), GI1, …, GIk}). Since sz(GI′) < sz(GI), ⟦GB(P; GI′)⟧ ⇒*_G ⟦P⟧ according to the IH. □

Theorem 13 (Completeness of garbage collection rules). ⟦GB(P; GI)⟧ ⇒*_G ⟦P⟧ and ⟦GB(P; GI)⟧† ⇒*_{G∪T} ⟦P⟧† for any process P and composite garbage item GI.

Proof. According to Lemma 13, ⟦GB(P; GI)⟧ ⇒*_G ⟦P⟧. As a result, ⟦GB(P; GI)⟧† ≡d P(p,i,o,t)[A(p) | ⟦GB(P; GI)⟧⟨p, i, o, t⟩] ⇒*_G P(p,i,o,t)[A(p) | ⟦P⟧⟨p, i, o, t⟩]. According to Theorem 1, P(p,i,o,t)[A(p) | ⟦P⟧⟨p, i, o, t⟩] ⇒*_T ⟦P⟧†. □
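The size measure underlying Lemma 13 can be transcribed directly. The following Python sketch, our own tagged-tuple encoding covering only a representative subset of the grammar of garbage items, checks for the (PV-GC) case that replacing a pattern variable ?x by var(x) strictly decreases the total size:

```python
# Garbage items as tagged tuples (our own encoding of a representative
# subset of the grammar of sz above).
def sz(gi):
    tag = gi[0]
    if tag in ('service', 'channel', 'var'):   # s, ch, var(x)
        return 1
    if tag in ('pvar', 'vvar'):                # ?x, x
        return 2
    if tag in ('ctr_pat', 'ctr_val'):          # f(F1, ..., Fk), f(V1, ..., Vk)
        return sum(sz(a) for a in gi[2]) + 1
    if tag == 'nil':                           # the inert process 0
        return 4
    if tag == 'abs':                           # abstraction (F)P
        return sz(gi[1]) + sz(gi[2]) + 1
    if tag == 'set':                           # S :: {GI1, ..., GIk}; 0 if empty
        return sum(sz(a) for a in gi[2])
    raise ValueError(gi)

# Rule (PV-GC) replaces a pattern variable ?x by var(x):
before = ('set', 'S', (('pvar', 'x'), ('nil',)))
after = ('set', 'S', (('var', 'x'), ('nil',)))
assert sz(before) == 6 and sz(after) == 5   # the total size strictly decreases
```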
Then, we prove the completeness of the data assignment rules D, in that they transform the graph of AS(V; F)P into that of Pσ, where σ = match(F; V). For this, we define the sharing form sf(V) of a normal value V as follows:

 def    ⎧ V        if V is a variable x
sf(V) = ⎨
        ⎩ vv(V)    if V is a constructed value f(V1, …, Vk)

We claim that the graph of a sharing form sf(V) can be transformed into that of V through applications of D.

Lemma 14. For any process P and normal value V, ⟦P(sf(V))⟧ ⇒*_D ⟦P(V)⟧.

Proof. If V is a variable x, we have ⟦P(sf(x))⟧ ≡d ⟦P(x)⟧. If V is a constructed value f(V1, …, Vk), we have ⟦P(sf(f(V1, …, Vk)))⟧ ≡d ⟦P(vv(f(V1, …, Vk)))⟧ ⇒(Ctr-Norm) ⟦P(f(V1, …, Vk))⟧. □

Then, we show that a shared value can be copied through applications of D.

Lemma 15. For any process P and normal value V, ⟦P(L : vv(V), Sh(L))⟧ ⇒*_D ⟦P(L : sf(V), V)⟧.

Proof. By induction on the structure of V.

Case V = x. ⟦P(L : vv(x), Sh(L))⟧ ⇒(VV-Norm) ⟦P(L : x, Sh(L))⟧ ≡d ⟦P(L : x, x)⟧.

Case V = f(V1, …, Vk).

⟦P(L : vv(f(V1, …, Vk)), Sh(L))⟧
⇒(Ctr-Split) ⟦P(L : vv(f(L1 : vv(V1), …, Lk : vv(Vk))), f(Sh(L1), …, Sh(Lk)))⟧
(IH) ⇒*_D ⟦P(L : vv(f(L1 : sf(V1), …, Lk : sf(Vk))), f(V1, …, Vk))⟧
≡d ⟦P(L : vv(f(sf(V1), …, sf(Vk))), f(V1, …, Vk))⟧
(Lemma 14) ⇒*_D ⟦P(L : vv(f(V1, …, Vk)), f(V1, …, Vk))⟧ □

As a natural deduction of Lemma 15, we have the following lemma.
Lemma 16. For any process P and normal value V, ⟦P(L : vv(V), Sh(L), …, Sh(L))⟧ ⇒*_D ⟦P(V, V, …, V)⟧, where Sh(L), …, Sh(L) are all the occurrences of Sh(L) in P.

Proof. If V = x, ⟦P(L : vv(x), Sh(L), …, Sh(L))⟧ ⇒(VV-Norm) ⟦P(L : x, Sh(L), …, Sh(L))⟧ ≡d ⟦P(x, x, …, x)⟧. If V = f(V1, …, Vk), according to Lemma 15, ⟦P(L : vv(V), Sh(L), …, Sh(L))⟧ ⇒*_D ⟦P(L : vv(V), V, …, V)⟧ ≡d ⟦P(vv(V), V, …, V)⟧ ⇒(Ctr-Norm) ⟦P(V, V, …, V)⟧. □

Now, we are ready to prove the completeness of the data assignment rules.

Theorem 14 (Completeness of data assignment rules). For any normal process P, normal pattern F and normal value V such that σ = match(F; V) exists, ⟦AS(V; F)P⟧ ⇒*_D ⟦Pσ⟧ and ⟦AS(V; F)P⟧† ⇒*_{D∪T} ⟦Pσ⟧†.

Proof. Let F be of the form F(?x1, …, ?xk), where x1, …, xk are all its pattern variables. In order that σ = match(F; V) exists, V must be of the form F(V1, …, Vk) for some values V1, …, Vk. That is, σ = [V1, …, Vk/x1, …, xk]. Let P = P(x1, x1, …, x1, …, xk, xk, …, xk), where x1, x1, …, x1, …, xk, xk, …, xk are all the occurrences of its free variables bound by F.

⟦AS(V; F)P⟧
⇒*(Ctr-Assign) ⟦AS(V1, …, Vk; ?x1, …, ?xk)P(x1, x1, …, x1, …, xk, xk, …, xk)⟧
≡d ⟦AS(V1, …, Vk; ?x1, …, ?xk)P(L1 : x1, Sh(L1), …, Sh(L1), …, Lk : xk, Sh(Lk), …, Sh(Lk))⟧
⇒*(PV-Assign) ⟦P(L1 : vv(V1), Sh(L1), …, Sh(L1), …, Lk : vv(Vk), Sh(Lk), …, Sh(Lk))⟧
(Lemma 16) ⇒*_D ⟦P(V1, V1, …, V1, …, Vk, Vk, …, Vk)⟧
≡d ⟦Pσ⟧

As a result, ⟦AS(V; F)P⟧† ≡d P(p,i,o,t)[A(p) | ⟦AS(V; F)P⟧⟨p, i, o, t⟩] ⇒*_D P(p,i,o,t)[A(p) | ⟦Pσ⟧⟨p, i, o, t⟩]. Then, according to Theorem 1, P(p,i,o,t)[A(p) | ⟦Pσ⟧⟨p, i, o, t⟩] ⇒*_T ⟦Pσ⟧†. □

Finally, the completeness of the garbage collection rules and the data assignment rules enables us to prove Proposition 6.

Proof of Proposition 6. Straightforward for each case of P →p Q. We only present the proof for a representative case.

Case P = C[r ▹ (P′|(⟨V⟩P1 + M1)), r ▹ C2[(F)P2 + M2]] and Q = C[r ▹ (P′|P1), r ▹ C2[P2σ]] with σ = match(F; V). In this case, C[·,·] is static and restriction-balanced, while C2[·] is static, session-immune and restriction-immune. We have

⟦P⟧†
⇒(Ses-Sync) ⟦C[r ▹ (P′|GB(P1; ∅ :: {M1})), r ▹ C2[GB(AS(V; F)P2; ∅ :: {M2})]]⟧†
(Theorem 13, Lemma 5) ⇒*_{G∪T} ⟦C[r ▹ (P′|P1), r ▹ C2[AS(V; F)P2]]⟧†
(Theorem 14, Lemma 5) ⇒*_{D∪T} ⟦Q⟧† □
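Theorem 14 hinges on the fact that σ = match(F; V) exists exactly when V has the shape F(V1, …, Vk). A small Python sketch of such a matching function, over a tagged-tuple encoding of patterns and values that is our own (the actual rules act on graphs); patterns are linear, as in the paper, so a plain dictionary update suffices for building the substitution:

```python
# Patterns: ('pvar', x) for ?x and ('ctr', f, args).
# Values:   ('vvar', x) for a variable x and ('ctr', f, args).

def match(F, V):
    """Return sigma = match(F; V) as a dict, or None if V does not fit F."""
    if F[0] == 'pvar':
        return {F[1]: V}
    if (F[0] == 'ctr' and V[0] == 'ctr'
            and F[1] == V[1] and len(F[2]) == len(V[2])):
        sigma = {}
        for f_i, v_i in zip(F[2], V[2]):
            sub = match(f_i, v_i)
            if sub is None:
                return None
            sigma.update(sub)   # patterns are linear, so no clashes arise
        return sigma
    return None

F = ('ctr', 'pair', (('pvar', 'x'), ('pvar', 'y')))
V = ('ctr', 'pair', (('ctr', 'zero', ()), ('vvar', 'z')))
assert match(F, V) == {'x': ('ctr', 'zero', ()), 'y': ('vvar', 'z')}
assert match(F, ('vvar', 'z')) is None   # V must be of the form F(V1, ..., Vk)
```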