A monotone framework for CCS

Computer Languages, Systems & Structures 35 (2009) 365 -- 394

Hanne Riis Nielson*, Flemming Nielson
DTU Informatics, Richard Petersens Plads, Bldg. 321, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark

ARTICLE INFO

Article history: Received 20 June 2006; received in revised form 28 April 2008; accepted 25 July 2008.
Keywords: Process calculi; CCS; Static analysis; Monotone frameworks

ABSTRACT

The calculus of communicating systems, CCS, was introduced by Robin Milner as a calculus for modelling concurrent systems. Subsequently several techniques have been developed for analysing such models in order to get further insight into their dynamic behaviour. In this paper we present a static analysis for approximating the control structure embedded within the models. We formulate the analysis as an instance of a monotone framework and thus draw on techniques that often are associated with the efficient implementation of classical imperative programming languages. We show how to construct a finite automaton that faithfully captures the control structure of a CCS model. Each state in the automaton records a multiset of the enabled actions and appropriate transfer functions are developed for transforming one state into another. A classical worklist algorithm governs the overall construction of the automaton and its termination is ensured using techniques from abstract interpretation. © 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Recent years have seen an increased interest in applying static analysis techniques to highly concurrent languages, in particular a variety of process calculi allowing concurrent processes to interact by means of synchronisation or communication. One line of work has aimed at adapting type and effect systems to express meaningful properties, e.g. [1-3]. Another line of work has developed control flow analysis for a variety of process calculi including the π-calculus (e.g. [4]), variants of mobile ambients (e.g. [5-8]) and calculi for cryptographic protocols (e.g. [9-13]). A common characteristic of these analyses is that they are mainly concerned with properties of the configurations of the models and to a lesser degree with properties of the transitions between the configurations.

In this paper we focus on the transitions. We shall study the classical approach of data flow analysis where transfer functions associated with basic blocks are often specified as bit vector frameworks or, more generally, as monotone frameworks; what these analyses have in common is that there are ways of removing analysis information when it is no longer appropriate. We give the first account of an instance of an analysis problem for CCS [14] where suitable generalisations of the gen and kill components of bit vector frameworks are used to define transfer functions that enable the construction of finite automata capturing the behaviour of processes; finiteness is achieved by incorporating ideas from yet another approach to static analysis, namely abstract interpretation.

1.1. Overview and motivation

To illustrate the development consider the following CCS process modelling a (unary) semaphore S:

  S ≜ g.p.S

* Corresponding author. Tel.: +45 4525 3736. E-mail addresses: [email protected] (H.R. Nielson), [email protected] (F. Nielson).
doi:10.1016/j.cl.2008.07.001


First the process offers the action g, then the action p, after which it starts all over again. Assume that it operates in parallel with a process Q given by

  Q ≜ ḡ.τ.Q + p̄.Q

that is willing to either perform the action ḡ (that will synchronise with a g action) or the action p̄ (that will synchronise with a p action). After the ḡ action some internal action τ is performed and then the process recurses; after the p̄ action the process recurses immediately. The semantics (to be given in Section 2) gives rise to the following infinite transition sequence:

  S | Q → p.S | τ.Q → p.S | Q → S | Q → · · ·

Here each configuration is characterised by its exposed actions: in the initial configuration S | Q the actions g, ḡ and p̄ are ready to interact but only g and ḡ actually do. The resulting configuration is p.S | τ.Q and here the actions p and τ are exposed but this time only τ can be executed. The new configuration is p.S | Q and now p, ḡ and p̄ are exposed. This time p and p̄ synchronise and we are back in the initial configuration S | Q. This is summarised in this annotated transition sequence:

Note that the action p̄ is exposed in the first configuration but not in the second and that p is exposed in the second but not in the first configuration. Thus once the actions participating in the interaction have been selected (g and ḡ in our case) they will cause some actions to become exposed (e.g. p) while others will no longer be exposed (e.g. p̄). We shall use this information to compute the exposed actions of the next configuration. In our analysis this will be captured by the definition of transfer functions containing a kill component as well as a generate component, much as in the classical bit vector frameworks where the transfer functions take the simple form:

  fblock(E) = (E \ killblock) ∪ genblock

Here E is typically a set containing the data of interest, as for example reaching definitions: which variable definitions may reach a given program point. In our case, the blocks will be the actions that interact, and if we take E to be the set {g, ḡ, p̄} of exposed actions of S | Q then we can calculate

  f(g,ḡ)(E) = (E \ {g, ḡ, p̄}) ∪ {p, τ} = {p, τ}

where kill(g,ḡ) = {g, ḡ, p̄} is the set of actions that are no longer exposed when g and ḡ synchronise and gen(g,ḡ) = {p, τ} is the set of actions that become exposed. Not surprisingly, the result is exactly the set of exposed actions of p.S | τ.Q.

In general, the situation is more complex, as is illustrated by the process S | S | Q modelling a binary semaphore. The semantics now gives rise to transition sequences of the form:
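The calculation above can be replayed directly in code. The following sketch is illustrative only; the identifiers g_bar, p_bar and tau are ad-hoc ASCII encodings of ḡ, p̄ and τ:

```python
# Classical gen/kill transfer function over plain sets:
#   f_block(E) = (E \ kill_block) ∪ gen_block
def transfer(exposed, kill, gen):
    return (exposed - kill) | gen

# Exposed actions of S | Q and the kill/gen sets of the (g, ḡ) synchronisation.
E0 = {"g", "g_bar", "p_bar"}
kill_g = {"g", "g_bar", "p_bar"}   # no longer exposed after the synchronisation
gen_g = {"p", "tau"}               # become exposed after the synchronisation

print(transfer(E0, kill_g, gen_g))  # the exposed actions of p.S | τ.Q
```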

In the first configuration two occurrences of the action g are exposed but only one of them participates in the first interaction; the other continues to be exposed in the second configuration. In the third configuration we have one exposed occurrence of each of g and p and depending on which branch is taken we will obtain two occurrences of either g or p. This clearly shows that it is not sufficient to operate with simple sets; we need to use multisets. Furthermore, we may encounter situations where an infinite (or unbounded) number of occurrences of a particular action is exposed. This will for example be the case if we replace the process Q above with the process R defined by

  R ≜ (ḡ.τ.0 + p̄.0) | R

This process is equivalent to any number of parallel occurrences of the process ḡ.τ.0 + p̄.0. To deal with this we shall need to use so-called extended multisets. As we shall see in Section 3, the extended multisets give rise to complete lattices with infinite ascending chains. This means that special care needs to be taken in order to ensure that our analysis of recursively defined processes actually terminates; in Section 3 we show how to overcome this problem.

In the classical bit vector frameworks the generated and killed information is always precise; as we shall see in Section 4 we shall need to approximate it in our setting. For the generated actions we shall use an over-approximation as it is always safe to say that more actions are exposed. However, for the killed actions we need an under-approximation: it is always safe to remove fewer actions than necessary.
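The shift from sets to multisets already suggests how the automaton construction previewed below will work: states are multisets of exposed labels and transfer functions move between them. The sketch below is only an illustration; the gen/kill tables and the list of possible steps are transcribed by hand for the uniquely labelled semaphore program (labels 1..5 as in Example 1) and are assumptions of this sketch, not output of the analysis:

```python
# Hand-made gen/kill tables for S | Q (labels: 1 = g, 2 = p, 3 = ḡ, 4 = τ, 5 = p̄).
GEN = {1: {2: 1}, 2: {1: 1}, 3: {4: 1}, 4: {3: 1, 5: 1}, 5: {3: 1, 5: 1}}
KILL = {1: {1: 1}, 2: {2: 1}, 3: {3: 1, 5: 1}, 4: {4: 1}, 5: {3: 1, 5: 1}}
STEPS = [(4,), (1, 3), (2, 5)]   # a τ step on its own, or a pair that synchronises

def step(state, labels):
    """Transfer function: subtract killed and add generated occurrences."""
    new = dict(state)
    for l in labels:
        for k, n in KILL[l].items():
            new[k] = new.get(k, 0) - n
        for g, n in GEN[l].items():
            new[g] = new.get(g, 0) + n
    return {k: v for k, v in new.items() if v > 0}

def build(initial):
    """Worklist construction; states are reused whenever they reappear."""
    states, edges, worklist = [initial], [], [initial]
    while worklist:
        q = worklist.pop()
        for labels in STEPS:
            # distinct labels only; a pair (ℓ, ℓ) would need a count of 2
            if all(q.get(l, 0) >= 1 for l in labels):
                q2 = step(q, labels)
                if q2 not in states:
                    states.append(q2)
                    worklist.append(q2)
                edges.append((states.index(q), labels, states.index(q2)))
    return states, edges

states, edges = build({1: 1, 3: 1, 5: 1})   # exposed actions of S | Q
print(len(states))  # 3
```

Running it yields the three states of the automaton for S | Q described below; for a process like R the loop would keep producing new states, which is where the widening of Section 5 comes in.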


In Section 5 we show how to construct a finite automaton for each process. Each state in the automaton corresponds to an extended multiset of exposed actions for a configuration and the transitions reflect the possible interactions between exposed actions. For the process S | Q we obtain the automaton

where the initial state q0 corresponds to g, ḡ and p̄ being exposed, q1 to p and τ being exposed and q2 to p, ḡ and p̄ being exposed. The annotations on the edges then reflect the interactions being performed. For the binary semaphore S | S | Q we obtain the slightly more interesting automaton:

The difference between the states q0, q2 and q4 will be the number of exposed p actions: there are none in q0, one in q2 and two in q4. This corresponds to the number of semaphores taken.

These automata are constructed using a simple worklist algorithm. The initial state corresponds to the exposed actions of the initial process and more and more states are added as the need arises. It is crucial to reuse states whenever possible, that is, when they represent the same exposed actions, as otherwise the construction need not terminate. However, for the process R displayed above the number of exposed occurrences of the action p̄ will grow arbitrarily large. To handle this we borrow the widening operators from abstract interpretation in order to ensure that the construction always terminates.

The correctness of the analysis, also presented in Section 5, amounts to a subject-reduction theorem stating that all transition sequences obtained using the semantics of the process correspond to paths in the finite automaton.

For the sake of exposition our running examples are relatively simple; we conclude in Section 6 by giving a few more complex examples and in Section 7 we give pointers to extensions of the analysis as well as future work.

2. Communicating systems

In this section we shall first introduce the syntax of CCS programs and then review the semantics as presented in [14].

2.1. Syntax

The syntax of CCS processes P and actions α is given by

  P ::= new x P | P1 | P2 | Σ_{i∈I} αi^ℓi.Pi | A

  α ::= x | x̄ | τ

Here new x P introduces a new name x with scope P. Parallel composition is modelled using the construct P1 | P2 whereas summations are of the form Σ_{i∈I} αi^ℓi.Pi where I is a finite index set. Sums are guarded and in the case of binary sums we write α1^ℓ1.P1 + α2^ℓ2.P2, in the case of unary sums we write α^ℓ.P and in the case of nullary sums we write 0. The actions α are annotated with labels ℓ ∈ Lab serving as pointers into the process; they will be used in the analysis to be presented shortly but have no semantic significance. Actions take the form x and x̄ and are constructed from the names x; the silent action is denoted τ. We shall be interested in CCS programs of the form

  let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

where the processes named A1, . . . , Ak (∈ PN) are mutually recursively defined and may be used in the main process P0 as well as in the process bodies P1, . . . , Pk. We shall require that A1, . . . , Ak are pairwise distinct and that they are the only process names used.

Example 1. The semaphore program may now be written as

  let S ≜ g¹.p².S;
      Q ≜ ḡ³.τ⁴.Q + p̄⁵.Q
  in S | Q

The process R considered above will be written as R ≜ (ḡ³.τ⁴.0 + p̄⁵.0) | R.
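For experimenting with the analyses that follow, the program of Example 1 can be transcribed into a small abstract syntax. The tuple encoding below is an assumption made for these sketches (it keeps only the labels; the action names and bars play no role in the computations shown later):

```python
# Hypothetical tuple encoding of processes: ('pre', label, P) for a prefix,
# ('sum', P1, P2), ('par', P1, P2), ('name', A) and ('nil',).
S_def = ('pre', 1, ('pre', 2, ('name', 'S')))                 # g¹.p².S
Q_def = ('sum', ('pre', 3, ('pre', 4, ('name', 'Q'))),        # ḡ³.τ⁴.Q
               ('pre', 5, ('name', 'Q')))                     # + p̄⁵.Q
R_def = ('par', ('sum', ('pre', 3, ('pre', 4, ('nil',))),     # (ḡ³.τ⁴.0
                        ('pre', 5, ('nil',))),                #  + p̄⁵.0)
               ('name', 'R'))                                 # | R
main = ('par', ('name', 'S'), ('name', 'Q'))                  # S | Q
print(main)
```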


Table 1
Reduction relation P →ℓ̃ Q

  τ^ℓ.P + Q →ℓ P

  (x^ℓ1.P1 + Q1) | (x̄^ℓ2.P2 + Q2) →(ℓ1,ℓ2) P1 | P2

  P →ℓ̃ Q
  ------------------
  P | P′ →ℓ̃ Q | P′

  P →ℓ̃ Q
  ----------------------
  new x P →ℓ̃ new x Q

  P′ →ℓ̃ Q′
  -----------  if P ≡ P′ and Q′ ≡ Q
  P →ℓ̃ Q

Table 2
Axioms generating the structural congruence P ≡ Q

• If Q is obtained from P by disciplined alpha renaming of bound names then P ≡ Q.
• The Abelian monoid laws hold for parallel composition:
  ◦ P | Q ≡ Q | P,
  ◦ (P | Q) | R ≡ P | (Q | R) and
  ◦ P | 0 ≡ P.
• Summands can be permuted in Σ_{i∈I} αi^ℓi.Pi.
• Recursion can be unfolded: A ≡ P whenever A is defined by A ≜ P.
• The scope extension laws:
  ◦ new x 0 ≡ 0,
  ◦ new x (P | Q) ≡ P | new x Q if x ∉ fn(P) and
  ◦ new x new y P ≡ new y new x P.
• The laws for an equivalence relation.
• The laws for replacement in context that make ≡ a congruence.
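The Abelian monoid laws for | mean that, up to ≡, a parallel composition can be represented simply as a collection of its components, a fact implementations routinely exploit. A minimal sketch, assuming the tuple encoding ('par', P, Q) / ('nil',) used in the other sketches:

```python
def flatten(p):
    """List of parallel components, flattening | and dropping 0 (up to ≡)."""
    if p[0] == 'par':
        return flatten(p[1]) + flatten(p[2])
    if p[0] == 'nil':
        return []
    return [p]

P = ('par', ('name', 'S'), ('par', ('nil',), ('name', 'Q')))
print(flatten(P))  # [('name', 'S'), ('name', 'Q')]
```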

We shall write fn(P) for the free names of P and similarly bn(P) for the bound names of P. Furthermore, we shall assume that the names are classified into two disjoint categories: local names that may be introduced by the new construct and global names that are the only names that may be free in P0, P1, . . . , Pk. This ensures that a free name in Pi will never be bound by a new construct and excludes programs such as let A ≜ (a.new a A) in (A | a.a.0) from consideration.

2.2. Semantics

Following [14] the calculus is equipped with a reduction semantics and a structural congruence. The reduction relation P →ℓ̃ Q is specified in Table 1 and expresses that the process P in one step may evolve into the process Q; we have annotated the arrow with the labels ℓ̃ of the actions involved as this will prove useful when expressing the correctness of the analysis. The reduction relation naturally extends to programs in that only the main process (called P0 above) can evolve.

The structural congruence P ≡ Q is defined as the least congruence generated from the axioms of Table 2. The notion of disciplined alpha renaming is introduced in order to make it possible to interpret the analysis result and merely amounts to requiring that each name x has a canonical name ⌊x⌋ that is preserved by alpha renaming. We shall return to this when defining the analysis; here we shall only point out that this is not a restriction as it merely corresponds to assuming that there is an arbitrary supply of names for each canonical name. Note that the recursion law implicitly uses the set of recursively defined processes in the overall program.

Example 2. Using the formal semantics we can express the first steps of the reductions of the process of Example 1 as follows:

  S | Q  ≡        (g¹.p².S) | (ḡ³.τ⁴.Q + p̄⁵.Q)
         →(1,3)   (p².S) | (τ⁴.Q)
         →4       (p².S) | Q
         ≡        (p².S) | (ḡ³.τ⁴.Q + p̄⁵.Q)
         →(2,5)   S | Q

Here the structural congruence is first used to unfold the definitions of the two processes. Then the axiom for synchronisation of two parallel processes is used to perform the actions labelled 1 and 3. The axiom for τ actions is used for the next step and after that the structural congruence is called upon again. In the last step above the synchronisation axiom is applied and the process can start all over again.


The notion of canonical names is lifted to actions so that ⌊α⌋ will be the canonical action corresponding to α. A program is consistently labelled if all occurrences of actions α1^ℓ and α2^ℓ with the same label ℓ have the same canonical action, that is ⌊α1⌋ = ⌊α2⌋; we shall then write j(ℓ) for this canonical action. A program is uniquely labelled if all occurrences of actions have distinct labels; clearly, a uniquely labelled program is consistently labelled. For simplicity we shall assume that the initial program is uniquely labelled. It is easy to prove that the property of being consistently labelled is invariant under the structural congruence and that it is preserved by the reduction semantics. This does not hold for the property of being uniquely labelled because the structural congruence allows recursive processes to be unfolded as in S | S | Q ≡ (g¹.p².S) | (g¹.p².S) | Q.

3. Exposed actions

An exposed action is an action that might participate in the next interaction. The process S of Example 1 has only one occurrence of g¹ as an exposed action whereas the process Q has one occurrence of each of ḡ³ and p̄⁵ as exposed actions. In general, a process may contain many, even infinitely many, occurrences of the same action (all identified by the same label) and it may be the case that several of them are ready to participate in the next interaction. To capture this we define an extended multiset M as an element of

  M = Lab → N ∪ {∞}

The idea is that M(ℓ) records the number of occurrences of the label ℓ; there may be a finite number in which case M(ℓ) ∈ N or an infinite number in which case M(ℓ) = ∞. We shall write M[ℓ ↦ n] for the extended multiset that is as M except that ℓ is mapped to n ∈ N ∪ {∞} and we write dom(M) for the set {ℓ | M(ℓ) ≠ 0}.

Example 3. For the processes S | Q, S | S | Q and S | R considered earlier we shall be interested in the following extended multisets defined on the domain Lab = {1, 2, 3, 4, 5} of labels occurring in the programs:

  M1 = ⊥M[1 ↦ 1, 3 ↦ 1, 5 ↦ 1]   (for S | Q)
  M2 = ⊥M[1 ↦ 2, 3 ↦ 1, 5 ↦ 1]   (for S | S | Q)
  M3 = ⊥M[1 ↦ 1, 3 ↦ ∞, 5 ↦ ∞]   (for S | R)

Thus dom(M1) = dom(M2) = dom(M3) = {1, 3, 5}. Using the notation introduced above we e.g. have M2 = M1[1 ↦ 2]. In Section 3.1 we shall establish some key properties of extended multisets to be used in Section 3.2 where we introduce the abstraction function E⋆ that specifies an extended multiset for each program. Since processes may be recursively defined it is not trivial to implement the abstraction function and in Section 3.3 we address this issue in more detail.

3.1. Properties of extended multisets

We shall equip the set M = Lab → N ∪ {∞} with a partial ordering ≤M defined by

  M ≤M M′  iff  ∀ℓ : M(ℓ) ≤ M′(ℓ) ∨ M′(ℓ) = ∞

Using the notation of Example 3 we have M1 ≤M M2 and M1 ≤M M3 whereas it is not the case that M2 ≤M M3.

Fact 4. The domain (M, ≤M) is a complete lattice with least element ⊥M given by ∀ℓ : ⊥M(ℓ) = 0 and largest element ⊤M given by ∀ℓ : ⊤M(ℓ) = ∞. The least upper bound and greatest lower bound operators of M are denoted ⊔M and ⊓M, respectively, and they are defined by

  (M ⊔M M′)(ℓ) = { max{M(ℓ), M′(ℓ)}   if M(ℓ) ∈ N ∧ M′(ℓ) ∈ N
                 { ∞                   otherwise

  (M ⊓M M′)(ℓ) = { min{M(ℓ), M′(ℓ)}   if M(ℓ) ∈ N ∧ M′(ℓ) ∈ N
                 { M(ℓ)                if M′(ℓ) = ∞
                 { M′(ℓ)               if M(ℓ) = ∞

Using the notation from Example 3 we will have M2 ⊔M M3 = M3[1 ↦ 2] and M2 ⊓M M3 = M1.
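These operators (together with the addition and subtraction operators defined next) have a direct implementation if extended multisets are represented as dicts with math.inf for ∞ and absent labels read as 0; this representation is an assumption of the sketch, not the paper's:

```python
import math

def labels(m1, m2):
    return set(m1) | set(m2)

def join(m1, m2):   # (M ⊔M M')(ℓ): pointwise max, ∞ is absorbing
    return {l: max(m1.get(l, 0), m2.get(l, 0)) for l in labels(m1, m2)}

def meet(m1, m2):   # (M ⊓M M')(ℓ): pointwise min, ∞ is the identity
    return {l: min(m1.get(l, 0), m2.get(l, 0)) for l in labels(m1, m2)}

def add(m1, m2):    # (M +M M')(ℓ): pointwise sum, ∞ is absorbing
    return {l: m1.get(l, 0) + m2.get(l, 0) for l in labels(m1, m2)}

def sub(m1, m2):    # (M −M M')(ℓ): truncated at 0, ∞ on the left stays ∞
    return {l: m1[l] if m1.get(l, 0) == math.inf
            else max(m1.get(l, 0) - m2.get(l, 0), 0)
            for l in labels(m1, m2)}

# The multisets of Example 3.
M1 = {1: 1, 3: 1, 5: 1}
M2 = {1: 2, 3: 1, 5: 1}
M3 = {1: 1, 3: math.inf, 5: math.inf}
print(meet(M2, M3) == M1)  # True
```

The identities of the running text, e.g. M2 ⊔M M3 = M3[1 ↦ 2] and M2 −M M3 = ⊥M[1 ↦ 1] (with explicit zeros for the remaining labels), can be checked directly against these functions.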


When specifying the analysis we shall need operations for addition and subtraction on extended multisets; they are defined by

  (M +M M′)(ℓ) = { M(ℓ) + M′(ℓ)   if M(ℓ) ∈ N ∧ M′(ℓ) ∈ N
                 { ∞               otherwise

  (M −M M′)(ℓ) = { M(ℓ) − M′(ℓ)   if M(ℓ) ∈ N ∧ M(ℓ) ≥ M′(ℓ)
                 { 0               if M(ℓ) ∈ N ∧ (M(ℓ) < M′(ℓ) ∨ M′(ℓ) = ∞)
                 { ∞               if M(ℓ) = ∞

Once again using the notation of Example 3 we have M2 +M M3 = M3[1 ↦ 3] whereas M2 −M M3 = ⊥M[1 ↦ 1] and M3 −M M2 = M3[1 ↦ 0]. We conclude by listing some properties of the operators:

Fact 5. The operations enjoy the following properties:
(1) ⊔M and +M are monotonic in both arguments and they both observe the laws of an Abelian monoid with ⊥M as neutral element.
(2) ⊓M is monotonic in both arguments and it observes the laws of an Abelian monoid with ⊤M as neutral element.
(3) −M is monotonic in its left argument and anti-monotonic in its right argument.

Fact 6. The operations +M and −M satisfy the following laws:
(1) M −M (M1 +M M2) = (M −M M1) −M M2.
(2) If M ≤M M1 then (M1 −M M) +M M2 = (M1 +M M2) −M M.
(3) If M1′ ≤M M1 and M2′ ≤M M2 then (M1 −M M1′) +M (M2 −M M2′) = (M1 +M M2) −M (M1′ +M M2′).

3.2. Calculating exposed actions

The key information of interest is the collection of extended multisets of exposed actions of the processes. Initially, this is computed by an abstraction function E⋆. To motivate the definition let us first consider the sum of two processes α1^ℓ1.P1 + α2^ℓ2.P2. Here both of the actions α1 and α2 are ready to interact but none of those of P1 and P2 are. Thus we shall take

  E⋆[[α1^ℓ1.P1 + α2^ℓ2.P2]] = ⊥M[ℓ1 ↦ 1] +M ⊥M[ℓ2 ↦ 1]

If the two labels happen to be equal the overall count will become 2 since we have used the pointwise addition operator +M. Turning to parallel composition α1^ℓ1.P1 | α2^ℓ2.P2 we shall have a similar formula

  E⋆[[α1^ℓ1.P1 | α2^ℓ2.P2]] = ⊥M[ℓ1 ↦ 1] +M ⊥M[ℓ2 ↦ 1]

reflecting that both of the actions α1 and α2 are ready to interact but none of those of P1 and P2 are. To handle the general case we shall introduce the function

E : Proc → (PN → M) → M that as an additional parameter takes an environment holding the required information for the process names. The function is defined in Table 3 for arbitrary processes; in the case of sums and parallelism it generalises the clauses shown above. The clause for the new x P construct simply ignores the introduction of the new name and thereby its scope. Clearly this may lead to imprecision; however, in the case where recursion is not involved a simple alpha renaming of bound names will solve the problem. Turning to the clause for process names we simply consult the environment env provided as the first argument to E. As shown in Table 3 this defines a functional

FE : (PN → M) → (PN → M) Since the operations involved in its definition are all monotonic (cf. Fact 5) we have a monotonic functional defined on a complete lattice (cf. Fact 4) and Tarski's fixed point theorem ensures that it has a least fixed point which is denoted envE in Table 3. Since all processes are finite it follows that FE is continuous and hence that the Kleene formulation of the fixed point is permissible. We can now define the function

  E⋆ : Proc → M

simply as E⋆[[P]] = E[[P]]envE.
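A naive reading of Table 3 can be sketched as a Kleene iteration, assuming the tuple encoding of processes used earlier. Note that this loop only stops when the chain stabilises, as it does for the S/Q program; for R it would diverge, which is exactly the problem addressed in Section 3.3:

```python
def add(m1, m2):
    return {l: m1.get(l, 0) + m2.get(l, 0) for l in set(m1) | set(m2)}

def exposed(p, env):          # E[[P]]env, following the clauses of Table 3
    kind = p[0]
    if kind == 'pre':         # a guarded summand exposes only its label
        return {p[1]: 1}
    if kind in ('sum', 'par'):
        return add(exposed(p[1], env), exposed(p[2], env))
    if kind == 'name':
        return env[p[1]]
    if kind == 'new':
        return exposed(p[2], env)
    return {}                 # 'nil'

defs = {'S': ('pre', 1, ('pre', 2, ('name', 'S'))),
        'Q': ('sum', ('pre', 3, ('pre', 4, ('name', 'Q'))),
                     ('pre', 5, ('name', 'Q')))}

env = {a: {} for a in defs}   # env⊥M
while True:                   # iterate F_E until the least fixed point
    new = {a: exposed(p, env) for a, p in defs.items()}
    if new == env:
        break
    env = new

print(exposed(('par', ('name', 'S'), ('name', 'Q')), env))
```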


Table 3
Exposed actions for let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

  E[[new x P]]env              = E[[P]]env
  E[[P | P′]]env               = E[[P]]env +M E[[P′]]env
  E[[Σ_{i∈I} αi^ℓi.Pi]]env     = Σ^M_{i∈I} ⊥M[ℓi ↦ 1]
  E[[A]]env                    = env(A)

  E⋆[[P]] = E[[P]]envE
  where  FE(env)  = [A1 ↦ E[[P1]]env, . . . , Ak ↦ E[[Pk]]env]
         env⊥M   = [A1 ↦ ⊥M, . . . , Ak ↦ ⊥M]
         envE    = ⊔_{j≥0} FE^j(env⊥M)

Example 7. For the running example of Example 1 we shall first determine the form of the functional FE:

  FE(env) = [S ↦ E[[g¹.p².S]]env, Q ↦ E[[ḡ³.τ⁴.Q + p̄⁵.Q]]env]
          = [S ↦ ⊥M[1 ↦ 1], Q ↦ ⊥M[3 ↦ 1, 5 ↦ 1]]

Here we have used that E[[ḡ³.τ⁴.Q + p̄⁵.Q]]env = E[[ḡ³.τ⁴.Q]]env +M E[[p̄⁵.Q]]env and that in general E[[α^ℓ.P]]env = ⊥M[ℓ ↦ 1].

Next we determine envE = ⊔_{j≥0} FE^j(env⊥M) to be envE = [S ↦ ⊥M[1 ↦ 1], Q ↦ ⊥M[3 ↦ 1, 5 ↦ 1]] and are now ready to compute

  E[[S | Q]]envE = envE(S) +M envE(Q) = ⊥M[1 ↦ 1, 3 ↦ 1, 5 ↦ 1]

  E[[S | S | Q]]envE = envE(S) +M envE(S) +M envE(Q) = ⊥M[1 ↦ 2, 3 ↦ 1, 5 ↦ 1]

Turning to the process S | R we first observe that

  FE(env)(R) = E[[(ḡ³.τ⁴.0 + p̄⁵.0) | R]]env = ⊥M[3 ↦ 1, 5 ↦ 1] +M env(R)

Thus we get FE^j(env⊥M)(R) = ⊥M[3 ↦ j, 5 ↦ j] for all j ≥ 0. This means that envE(R) = ⊔_{j≥0} FE^j(env⊥M)(R) = ⊥M[3 ↦ ∞, 5 ↦ ∞] and we get

  E[[S | R]]envE = ⊥M[1 ↦ 1, 3 ↦ ∞, 5 ↦ ∞]

We can show that the exposed actions are invariant under the structural congruence and that they correctly capture the actions that may be involved in the first reduction step:

Lemma 8. If P ≡ Q then E⋆[[P]] = E⋆[[Q]] and furthermore, if P →ℓ̃ Q then ℓ̃ ∈ dom(E⋆[[P]]).

Proof. We prove E[[P]]envE = E[[Q]]envE by induction on how P ≡ Q was obtained from Table 2. When no unfolding of recursion takes place it is straightforward to show that E[[P]]env = E[[Q]]env for all env. So consider A ≡ P where A is defined by A ≜ P. We then have to show envE(A) = E[[P]]envE which follows from envE = ⊔_{j≥0} FE^j(env⊥M) and FE(env)(A) = E[[P]]env.
For the second part of the lemma we proceed by induction on the inference of P →ℓ̃ Q as defined in Table 1. The result is immediate for the two axioms and for the rule using the congruence we make use of the first part of the lemma. The remaining rules are straightforward applications of the induction hypothesis. □

The following result states that the fixed point computation merely takes care of all the unfoldings allowed by the structural congruence:

Lemma 9. E⋆[[P]] = ⊔M {E[[Q]]env⊥M | P ≡ Q}.

Proof. To establish "≤M" we first prove

  ∀P : E[[P]](FE^j(env⊥M)) ≤M ⊔M {E[[Q]]env⊥M | P ≡ Q}

by numerical induction on j. The base case j = 0 is immediate. For the inductive step we use that E[[P]](FE^{j+1}(env⊥M)) equals E[[P′]](FE^j(env⊥M)) where P′ is obtained from P by unfolding the recursive calls contained in P. This is captured by the structural congruence and completes the induction step. Next we observe that E[[P]] is monotonic and continuous (as may be proved by structural induction on P and using Fact 5). It follows from this that E[[P]](⊔_{j≥0} FE^j(env⊥M)) ≤M ⊔M {E[[Q]]env⊥M | P ≡ Q} and the required result follows because envE = ⊔_{j≥0} FE^j(env⊥M).
To establish "≥M" we prove that if P ≡ Q then E[[P]]envE ≥M E[[Q]]env⊥M; this follows immediately from the first part of Lemma 8 and the monotonicity of E[[Q]]. □

3.3. Termination

Due to the recursive structure of the processes it is not trivial to implement the computation of the least fixed point of Table 3. Indeed, a naive implementation is likely not to terminate since (M, ≤M) has infinite ascending chains, as illustrated in Example 7.

Lemma 10. For each process P there exist numbers n1, · · · , nk and an extended multiset M such that

  E[[P]]env = (n1 ·M env(A1)) +M · · · +M (nk ·M env(Ak)) +M M     (1)

holds for all env. Here ·M is scalar multiplication defined by 0 ·M M = ⊥M and (n + 1) ·M M = (n ·M M) +M M.

Proof. The numbers and the extended multiset used in Eq. (1) are taken as nj = Nj(P) (for 1 ≤ j ≤ k) and M = M(P), both defined by structural induction on P. We dispense with the proof by structural induction on P validating the correctness of the formula. □

Example 11. For the program defining the processes S and R the lemma expresses that

  E[[(ḡ³.τ⁴.0 + p̄⁵.0) | R]]env = (nS ·M env(S)) +M (nR ·M env(R)) +M M

We can easily calculate nS = 0, nR = 1 and M = ⊥M[3 ↦ 1, 5 ↦ 1] and, as expected, the formula equals the one for FE(env)(R) in Example 7.

It is important to note that the key operation of Eq. (1) is addition (+M), meaning that numbers will add up as the recursion is unfolded. With k recursive processes, all process names will have been unfolded at least once after k unfoldings of FE, and to make sure that the effects of all mutual recursions have been captured it suffices to perform at most k additional unfoldings. The following lemma captures this insight:

Lemma 12. Using the notation of Table 3 we have

  envE = FE^k(env⊥M) ∇ FE^2k(env⊥M)

where ∇ is the pointwise extension of the operation ∇M defined by

  (M ∇M M′)(ℓ) = { M(ℓ)   if M(ℓ) = M′(ℓ)
                 { ∞      otherwise

Proof. See Appendix A.
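Lemma 12's recipe terminates by construction: iterate k times, iterate k more times, then widen. A sketch for the S/R program (so k = 2), with the functional FE transcribed from Example 7 and math.inf standing in for ∞:

```python
import math

def widen(m1, m2):            # (M ∇M M')(ℓ)
    return {l: m1.get(l, 0) if m1.get(l, 0) == m2.get(l, 0) else math.inf
            for l in set(m1) | set(m2)}

def FE(env):                  # FE(env) = [S ↦ ⊥M[1↦1], R ↦ ⊥M[3↦1,5↦1] +M env(R)]
    r = dict(env['R'])
    for l in (3, 5):
        r[l] = r.get(l, 0) + 1
    return {'S': {1: 1}, 'R': r}

k = 2                         # two defining equations: one for S, one for R
env = {'S': {}, 'R': {}}      # env⊥M
for _ in range(k):
    env = FE(env)             # FE^k(env⊥M)
env2 = env
for _ in range(k):
    env2 = FE(env2)           # FE^2k(env⊥M)

envE_R = widen(env['R'], env2['R'])   # ⊥M[3↦2,5↦2] ∇ ⊥M[3↦4,5↦4]
print(envE_R == {3: math.inf, 5: math.inf})  # True
```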


Example 13. Returning to Example 7 we now get an alternative way of computing envE(R). The program of interest has two defining equations (one for S and one for R) so k = 2 and therefore

  envE(R) = FE^2(env⊥M) ∇ FE^4(env⊥M)
          = ⊥M[3 ↦ 2, 5 ↦ 2] ∇ ⊥M[3 ↦ 4, 5 ↦ 4]
          = ⊥M[3 ↦ ∞, 5 ↦ ∞]

where we have used that FE(env)(R) = ⊥M[3 ↦ 1, 5 ↦ 1] +M env(R) as before.

4. Transfer functions

The abstraction function E⋆ only gives us the information of interest for the initial process and we shall now present auxiliary functions allowing us to approximate how the information evolves during the execution of the process. Once an action has participated in an interaction some new actions may become exposed and some may cease to be exposed. As an example consider once again the process S ≜ g¹.p².S of Example 1. Initially, the action g¹ is exposed but once it has been executed it will no longer be exposed (we say that it is killed) and instead the action p² becomes exposed (we say that it is generated). We shall now introduce two functions G⋆ and K⋆ approximating this information. The relevant information will be an element of

  T = Lab → M (= Lab → (Lab → N ∪ {∞}))

As for exposed actions it is not sufficient to use sets: there may be more than one occurrence of an action that is either generated or killed by another action.

Example 14. For the process S | Q from Example 1 we shall be interested in the following tables of extended multisets:

Consider the entries for the action labelled 4: here G⋆[[S | Q]](4) records that one occurrence of each of the actions labelled 3 and 5 is generated when 4 is executed whereas K⋆[[S | Q]](4) records that one occurrence of the action with label 4 will be killed.

We define the ordering ≤T on T as the pointwise extension of ≤M:

  T1 ≤T T2  iff  ∀ℓ : T1(ℓ) ≤M T2(ℓ)

In analogy with Fact 4 this turns (T, ≤T) into a complete lattice with least element ⊥T and greatest element ⊤T defined as expected. The operators ⊔T, ⊓T, +T and −T on T are defined as the pointwise extensions of the corresponding operators on M and they enjoy properties corresponding to those of Facts 5 and 6. We shall occasionally write T(ℓ1, ℓ2) as an abbreviation for T(ℓ1) +M T(ℓ2).

To motivate the definitions of G⋆ and K⋆ let us consider prefixing as expressed in the process α^ℓ.P. Clearly, once α^ℓ has been executed it will no longer be exposed whereas the actions of E⋆[[P]] will become exposed. Thus a first suggestion may be to take G⋆[[α^ℓ.P]](ℓ) = E⋆[[P]] and K⋆[[α^ℓ.P]](ℓ) = ⊥M[ℓ ↦ 1]. However, to cater for the case where the same label may occur several times in a process (as when α^ℓ is used inside P) we have to modify these formulae slightly to ensure that they correctly combine the information available about ℓ. The function G⋆ will compute an over-approximation as it takes the least upper bound of the information available

  G⋆[[α^ℓ.P]](ℓ′) = { E⋆[[P]] ⊔M G⋆[[P]](ℓ′)   if ℓ′ = ℓ
                    { G⋆[[P]](ℓ′)              if ℓ′ ≠ ℓ

which may be rewritten as

  G⋆[[α^ℓ.P]] = ⊥T[ℓ ↦ E⋆[[P]]] ⊔T G⋆[[P]]


Table 4
Generated actions for the program let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

  G[[new x P]]env              = G[[P]]env
  G[[P | P′]]env               = G[[P]]env ⊔T G[[P′]]env
  G[[Σ_{i∈I} αi^ℓi.Pi]]env     = ⊔T_{i∈I} (⊥T[ℓi ↦ E⋆[[Pi]]] ⊔T G[[Pi]]env)
  G[[A]]env                    = env(A)

  G⋆[[P]] = G[[P]]envG
  where  FG(env)  = [A1 ↦ G[[P1]]env, · · · , Ak ↦ G[[Pk]]env]
         env⊥T   = [A1 ↦ ⊥T, · · · , Ak ↦ ⊥T]
         envG    = ⊔_{j≥0} FG^j(env⊥T)

The extension to the general case is covered in Section 4.1. The function K⋆, on the other hand, will compute an under-approximation as it takes the greatest lower bound of the information available

  K⋆[[α^ℓ.P]](ℓ′) = { ⊥M[ℓ ↦ 1] ⊓M K⋆[[P]](ℓ′)   if ℓ′ = ℓ
                    { K⋆[[P]](ℓ′)                if ℓ′ ≠ ℓ

which may be rewritten as

  K⋆[[α^ℓ.P]] = ⊤T[ℓ ↦ M] ⊓T K⋆[[P]]   where M = ⊥M[ℓ ↦ 1]

Note that M actually equals E⋆[[α^ℓ.P]]. The extension to the general case is covered in Section 4.2.

4.1. Generated actions

To cater for the general case, we shall define the function:

  G : Proc → (PN → T) → T

It takes as parameters a process and an environment providing similar information for the process names and is defined in Table 4. The clauses are much as one should expect from the explanation above; in particular we may note that the operation ⊔T is used to combine information throughout the clauses and that G[[0]]env = ⊥T, which is the neutral element for the ⊔T operation. The recursive definitions give rise to a monotonic functional

  FG : (PN → T) → (PN → T)

on a complete lattice (thanks to Facts 4 and 5) and hence Tarski's fixed point theorem ensures that the least fixed point envG exists. Once more the function turns out to be continuous and hence the Kleene formulation of the fixed point is permissible. Thus the function

  G⋆ : Proc → T

defined by G⋆[[P]] = G[[P]]envG is well-defined. It is worth pointing out that G⋆ implicitly performs a reachability analysis and only reports on actions that are indeed reachable from the main process of the program. Also note that we make use of the function E as specified in Table 3 thereby making sure that the exposed actions are computed relative to the complete program of interest.

Example 15. For the running example of Example 1 we shall first determine the form of the functional FG:

  FG(env) = [S ↦ G[[g¹.p².S]]env, Q ↦ G[[ḡ³.τ⁴.Q + p̄⁵.Q]]env]

Using the function E constructed in Example 7 we calculate

  G[[g¹.p².S]]env = ⊥T[1 ↦ E⋆[[p².S]]] ⊔T (⊥T[2 ↦ E⋆[[S]]] ⊔T G[[S]]env)
                  = ⊥T[1 ↦ ⊥M[2 ↦ 1], 2 ↦ ⊥M[1 ↦ 1]] ⊔T env(S)

and similarly we obtain

  G[[ḡ³.τ⁴.Q + p̄⁵.Q]]env = ⊥T[3 ↦ ⊥M[4 ↦ 1], 4 ↦ ⊥M[3 ↦ 1, 5 ↦ 1], 5 ↦ ⊥M[3 ↦ 1, 5 ↦ 1]] ⊔T env(Q)

H.R. Nielson, F. Nielson / Computer Languages, Systems & Structures 35 (2009) 365 -- 394

The next step is to compute

envG = ⊔T_{j≥0} FG^j(env⊥T)

and here the chain stabilises already after the first iteration and we get

envG = [S ↦ ⊥T[1↦⊥M[2↦1], 2↦⊥M[1↦1]],
        Q ↦ ⊥T[3↦⊥M[4↦1], 4↦⊥M[3↦1, 5↦1], 5↦⊥M[3↦1, 5↦1]]]

Using this information we can calculate G [[S | Q]] and G [[S | S | Q]] and we obtain the results shown below:

The information for S | R is obtained in a similar way; here the functional FG will associate the following information with R:

G[[(g3.τ4.0 + p5.0) | R]]env = ⊥T[3↦⊥M[4↦1]] ⊔T env(R)

and the calculations proceed as above and produce the results shown. The following result shows that the information calculated by G is invariant under the structural congruence and that it potentially decreases with the reduction of the process:

Lemma 16. If P ≡ Q then G [[P]] = G [[Q]] and furthermore if P →ℓ̃ Q then G [[Q]] ⊑T G [[P]].

Proof. First we show that G[[P]]envG = G[[Q]]envG by induction on how P ≡ Q is obtained using the axioms and rules of Table 2. When no recursion is involved it is straightforward to show that G[[P]]env = G[[Q]]env for all env; in the case of sums we make use of the first part of Lemma 8. In the case of recursion consider A ≡ P where A is defined by A ≜ P. We have to show

envG(A) = G[[P]]envG

and this follows from envG = ⊔T_{j≥0} FG^j(env⊥T) and FG(env)(A) = G[[P]]env. For the second part we proceed by induction on the inference of P →ℓ̃ Q as defined in Table 1. The result is immediate for the two axioms and for the three rules it follows from the induction hypothesis, with the only twist being that the first part of the lemma is used in the case of the rule using the structural congruence. □

Since (T, ⊑T) admits infinite ascending chains we shall show that the naive implementation will in fact terminate. Our approach is similar to that of Section 3.3. The general form of the definition of G is expressed by

Lemma 17. For each process P there exist Booleans n#1, . . . , n#k ∈ {0, 1} and a mapping T ∈ T such that

G[[P]]env = (n#1 ·T env(A1)) ⊔T · · · ⊔T (n#k ·T env(Ak)) ⊔T T    (2)

holds for all env. Here the operation ·T is defined by 0 ·T T = ⊥T and 1 ·T T = T.

Proof. To see that Eq. (2) is indeed correct take n#j = N#j(P) (for 1 ≤ j ≤ k) and T = T(P) where:

We dispense with the proof by structural induction on P validating the correctness of the formula. □



Table 5 Killed actions for the program let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

K[[new x P]]env = K[[P]]env
K[[P | P′]]env = K[[P]]env ⊓T K[[P′]]env
K[[Σ_{i∈I} αiℓi.Pi]]env = ⊓T_{i∈I} (⊤T[ℓi↦M] ⊓T K[[Pi]]env)   where M = +M_{j∈I} ⊥M[ℓj↦1]
K[[A]]env = env(A)

K [[P]] = K[[P]]envK   where FK(env) = [A1 ↦ K[[P1]]env, . . . , Ak ↦ K[[Pk]]env]
                             env⊤ = [A1 ↦ ⊤T, . . . , Ak ↦ ⊤T]
                        and  envK = ⊓T_{j≥0} FK^j(env⊤)

Example 18. For the program defining the processes S and Q the lemma expresses that

G[[g3.τ4.Q + p5.Q]]env = (n#S ·T env(S)) ⊔T (n#Q ·T env(Q)) ⊔T T

Using the table above we can easily calculate n#S = 0, n#Q = 1 and

T = ⊥T[3↦⊥M[4↦1], 4↦⊥M[3↦1, 5↦1], 5↦⊥M[3↦1, 5↦1]]

and, as expected, the formula equals the one for FG(env)(Q) in Example 15. This time the key operation is least upper bound (⊔T) meaning that whenever a recursion has been unfolded once there will be no further contributions from additional unfoldings. We now have the following result showing that at most k iterations are needed to compute the fixed point.

Lemma 19. Using the notation of Table 4 we have

envG = FG^k(env⊥T)

(where k is the number of recursively defined processes).

Proof. See Appendix A. □

4.2. Killed actions

We now turn our attention to the actions that are killed when a single action is executed. We will be going for an under-approximation as it will always be safe to kill too few actions—actually it would be safe not to kill any actions at all. Following the approach above we shall define the function:

K : Proc → (PN → T) → T

that takes as parameters a process and an environment providing similar information for the process names. The function is defined in Table 5. Since we are constructing an under-approximation we make use of the greatest lower bound operation ⊓T to combine information in the clauses for parallelism and summations. One may note that K[[0]](env) = ⊤T, which is the neutral element for the ⊓T operation. Also it is worth pointing out that the extended multiset M in the clause for summations actually equals E[[Σ_{i∈I} αiℓi.Pi]], reflecting that all the exposed actions of the summation are indeed killed once one of them has been selected for the reduction step. The recursive definitions give rise to a monotonic functional

FK : (PN → T) → (PN → T)

It is defined on a complete lattice and hence it has a greatest fixed point, which is denoted envK in Table 5. The functional is co-continuous because T contains no infinite decreasing chains and hence the formulation of the table is permissible. Thus the function

K : Proc → T

defined by K [[P]] = K[[P]]envK is well-defined. As for G we may note that the definition implicitly performs a reachability analysis and hence only reports on actions that may be executed from the main process of the program.



Example 20. Turning to the program of Example 1 we obtain the following information:

As in Example 15 we obtain exactly the same information for S | Q and S | S | Q. To see how this information is obtained let us consider the functional FK for the program defining S and Q. It has the form

FK(env) = [S ↦ K[[g1.p2.S]]env, Q ↦ K[[g3.τ4.Q + p5.Q]]env]

where

K[[g1.p2.S]]env = ⊤T[1↦⊥M[1↦1], 2↦⊥M[2↦1]] ⊓T env(S)

and

K[[g3.τ4.Q + p5.Q]]env = ⊤T[3↦⊥M[3↦1, 5↦1], 4↦⊥M[4↦1], 5↦⊥M[3↦1, 5↦1]] ⊓T env(Q)

The environment envK is computed by constructing the chain FK^j(env⊤). The top element env⊤ maps S as well as Q to ⊤T and the chain already stabilises at FK^1(env⊤) so we get

envK = [S ↦ ⊤T[1↦⊥M[1↦1], 2↦⊥M[2↦1]],
        Q ↦ ⊤T[3↦⊥M[3↦1, 5↦1], 4↦⊥M[4↦1], 5↦⊥M[3↦1, 5↦1]]]

From this information we can calculate K [[S | Q]] and K [[S | S | Q]] as shown in the table above. The information for S | R is obtained in a similar way. In analogy with Lemma 16 we have the following result expressing that the killed information is invariant under the structural congruence and that the information about killed actions potentially increases under the reductions of the processes:

Lemma 21. If P ≡ Q then K [[P]] = K [[Q]] and furthermore if P →ℓ̃ Q then K [[P]] ⊑T K [[Q]] and K [[P]](ℓ̃) ⊑M E [[P]].

Proof. For the first part we show that K[[P]]envK = K[[Q]]envK by induction on how P ≡ Q is obtained from Table 2. When no recursion is involved it is fairly straightforward to show that K[[P]]env = K[[Q]]env for all env. In the case of recursion consider A ≡ P where A is defined by A ≜ P. To show envK(A) = K[[P]]envK we observe that envK = ⊓T_{j≥0} FK^j(env⊤) and FK(env)(A) = K[[P]]env.

For the second part we proceed by induction on the inference of P →ℓ̃ Q as defined in Table 1. First consider one of the two axioms. It is straightforward to show that K [[P]] ⊑T K [[Q]] and next K [[P]](ℓ̃) ⊑M E [[P]] follows since the multiset M in the clause for summations in Table 5 equals the exposed actions of P. In the case of the three rules the result follows from the induction hypothesis, the first part of the lemma and Lemma 8. □
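The greatest fixed point envK can be computed by the dual, descending Kleene iteration, starting from the environment that maps every process name to ⊤T. Below is a sketch for the running example over its finite label universe {1, …, 5} (the encoding and all names are ours):

```python
# Our own encoding of the descending iteration computing env_K.
LABELS = [1, 2, 3, 4, 5]
INF = float('inf')
TOP_M = {l: INF for l in LABELS}                 # the top extended multiset
TOP_T = {l: dict(TOP_M) for l in LABELS}         # the top element of T

def meet_M(m1, m2):
    """Greatest lower bound of extended multisets: pointwise minimum."""
    return {l: min(m1.get(l, 0), m2.get(l, 0)) for l in LABELS}

def meet_T(t1, t2):
    """Greatest lower bound on T: pointwise meet of the extended multisets."""
    return {l: meet_M(t1[l], t2[l]) for l in LABELS}

def top_T_with(binding):
    """The element that maps the bound labels as given and all others to top."""
    t = {l: dict(TOP_M) for l in LABELS}
    t.update(binding)
    return t

def F_K(env):
    """The functional of Example 20 for the processes S and Q."""
    return {
        'S': meet_T(top_T_with({1: {1: 1}, 2: {2: 1}}), env['S']),
        'Q': meet_T(top_T_with({3: {3: 1, 5: 1}, 4: {4: 1},
                                5: {3: 1, 5: 1}}), env['Q']),
    }

env = {'S': TOP_T, 'Q': TOP_T}
while True:                                      # descending chain from the top
    nxt = F_K(env)
    if nxt == env:
        break
    env = nxt
env_K = env
```

The loop terminates because the lattice admits no infinite descending chains over a finite label universe; for the running example it stabilises after the first productive step, matching the envK displayed above.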

Turning to the implementation of the greatest fixed point we use a simple iterative procedure that is terminated when FK^j(env⊤) = FK^{j+1}(env⊤). This procedure works because the lattice of interest admits no infinite descending chains.

4.3. The transfer function

In the classical bit vector frameworks the transfer functions take the form:

f_block(E) = (E \ kill_block) ∪ gen_block

In the case of a forward analysis, E is the information holding at the entry to the block, kill_block is the information invalidated by the block and gen_block is the new information created by the block. In our setting E will be the extended multiset of exposed actions available at the entry to the block and the block itself will be identified by the actions ℓ̃ that may be executed—ℓ̃ is a pair of labels when a synchronisation happens and it is a single label when a silent action is performed. Thus the transfer function takes the form

transferℓ̃(E) = (E −M K [[P0]](ℓ̃)) +M G [[P0]](ℓ̃)

where we now use the subtraction and addition operators of extended multisets.

Example 22. Using the information of Examples 15 and 20 we can now determine the transfer functions associated with the processes. For S | Q we have the following transfer functions:

transfer(1,3)(E) = (E −M ⊥M[1↦1, 3↦1, 5↦1]) +M ⊥M[2↦1, 4↦1]
transfer4(E) = (E −M ⊥M[4↦1]) +M ⊥M[3↦1, 5↦1]
transfer(2,5)(E) = (E −M ⊥M[2↦1, 3↦1, 5↦1]) +M ⊥M[1↦1, 3↦1, 5↦1]

The following result is the key insight for showing that the transfer function transferℓ̃ defined above provides safe approximations to the exposed actions of the resulting process:

Proposition 23. If P →ℓ̃ Q then E [[Q]] ⊑M (E [[P]] −M K [[P]](ℓ̃)) +M G [[P]](ℓ̃).

Proof. We proceed by induction on the inference of P →ℓ̃ Q as defined in Table 1.

Case τℓ.P + Q →ℓ P. First observe that G [[τℓ.P + Q]](ℓ) ⊒M G [[τℓ.P]](ℓ) ⊒M E [[P]]. Then

(E [[τℓ.P + Q]] −M K [[τℓ.P + Q]](ℓ)) +M G [[τℓ.P + Q]](ℓ)
    ⊒M G [[τℓ.P + Q]](ℓ)
    ⊒M E [[P]]

as required.

Case (xℓ1.P1 + Q1) | (x̄ℓ2.P2 + Q2) →(ℓ1,ℓ2) P1 | P2. As in the previous case we observe that G [[xℓ1.P1 + Q1]](ℓ1) ⊒M E [[P1]] and G [[x̄ℓ2.P2 + Q2]](ℓ2) ⊒M E [[P2]]. Writing lhs for (xℓ1.P1 + Q1) | (x̄ℓ2.P2 + Q2) we therefore get

(E [[lhs]] −M K [[lhs]](ℓ1, ℓ2)) +M G [[lhs]](ℓ1, ℓ2)
    ⊒M G [[lhs]](ℓ1) +M G [[lhs]](ℓ2)
    ⊒M G [[xℓ1.P1 + Q1]](ℓ1) +M G [[x̄ℓ2.P2 + Q2]](ℓ2)
    ⊒M E [[P1]] +M E [[P2]]

using the monotonicity of +M (Fact 5). Since E [[P1 | P2]] = E [[P1]] +M E [[P2]] this is the required result.

Case P | P′ →ℓ̃ Q | P′ because P →ℓ̃ Q. From the induction hypothesis we have E [[Q]] ⊑M (E [[P]] −M K [[P]](ℓ̃)) +M G [[P]](ℓ̃) and we also have K [[P | P′]](ℓ̃) ⊑M K [[P]](ℓ̃) and G [[P]](ℓ̃) ⊑M G [[P | P′]](ℓ̃) so we get

E [[Q]] ⊑M (E [[P]] −M K [[P]](ℓ̃)) +M G [[P]](ℓ̃)
       ⊑M (E [[P]] −M K [[P | P′]](ℓ̃)) +M G [[P | P′]](ℓ̃)

using that +M is monotonic and that −M is monotonic in its left argument and anti-monotonic in its right argument as stated in Fact 5. Hence we may calculate

E [[Q | P′]] = E [[Q]] +M E [[P′]]
    ⊑M (E [[P]] −M K [[P | P′]](ℓ̃)) +M G [[P | P′]](ℓ̃) +M E [[P′]]
    = ((E [[P]] +M E [[P′]]) −M K [[P | P′]](ℓ̃)) +M G [[P | P′]](ℓ̃)
    = (E [[P | P′]] −M K [[P | P′]](ℓ̃)) +M G [[P | P′]](ℓ̃)

where we have used Lemma 21 and Facts 5 and 6. This proves the result.

Case new x P →ℓ̃ new x P′ because P →ℓ̃ P′. The proof follows directly from the induction hypothesis as E [[new x P]] = E [[P]], G [[new x P]] = G [[P]] and K [[new x P]] = K [[P]].

Case P →ℓ̃ Q because P ≡ P′, P′ →ℓ̃ Q′ and Q′ ≡ Q. This case is straightforward as E, G and K are all invariant under the structural congruence as stated in Lemmas 8, 16 and 21. This completes the proof of the proposition. □



Corollary 24. Consider the program let A1 ≜ P1; · · · ; Ak ≜ Pk in P0 and assume P0 →∗ P →ℓ̃ Q. Then

E [[Q]] ⊑M transferℓ̃(E [[P]])

Proof. This is an immediate consequence of Proposition 23, Lemmas 16 and 21 and Fact 5.

□
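The transfer functions of Section 4.3 are simple arithmetic on extended multisets. A sketch in our own encoding (dicts from labels to counts, with ∞ absorbing subtraction as for the −M operator; all names are ours):

```python
INF = float('inf')

def sub_M(E, K):
    """E -M K on extended multisets: per-label subtraction, with an
    infinite count unchanged by subtraction; zero entries are dropped."""
    out = {}
    for l in set(E) | set(K):
        e = E.get(l, 0)
        out[l] = e if e == INF else max(e - K.get(l, 0), 0)
    return {l: n for l, n in out.items() if n > 0}

def add_M(E, G):
    """E +M G on extended multisets: per-label addition."""
    out = dict(E)
    for l, n in G.items():
        out[l] = out.get(l, 0) + n
    return out

def make_transfer(kill, gen):
    """Build transfer(E) = (E -M kill) +M gen for fixed kill/gen multisets."""
    return lambda E: add_M(sub_M(E, kill), gen)

# transfer_(1,3) for S | Q (kills labels 1, 3, 5; generates labels 2, 4):
t13 = make_transfer({1: 1, 3: 1, 5: 1}, {2: 1, 4: 1})
```

Applied to the exposed actions of the initial state of S | Q this reproduces the first step of Example 26.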

5. Constructing the automaton

Given a program

let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

we shall now construct a finite automaton such that the potentially infinite transition system of the program is faithfully reflected by the finite transition system of the automaton. The automaton will have the following components:

• A set of states Q; each state q is associated with an extended multiset E[q] and the idea is that q represents all processes P with E [[P]] ⊑M E[q].
• An initial state q0 ∈ Q associated with the exposed actions E [[P0]] of the initial process.
• A transition relation δ containing transitions of one of two forms:
  ◦ qs ⇒(ℓ1,ℓ2) qt reflecting that in the state qs two processes with matching actions (of the form x and x̄) labelled ℓ1 and ℓ2, respectively, may interact and give rise to the state qt.
  ◦ qs ⇒ℓ qt reflecting that in the state qs a τ action with label ℓ may take place and give rise to the state qt.

We shall denote the automaton by (Q, q0, δ, E). This setup allows us to represent an arbitrary non-deterministic automaton (without final states). From the construction below it will emerge that the automaton is partially deterministic in the sense that if qs ⇒ℓ̃ q1 and qs ⇒ℓ̃ q2 then q1 = q2. Clearly the automaton can be made deterministic by adding a fail state qf and a transition qs ⇒ℓ̃ qf whenever there does not exist a state qt with qs ⇒ℓ̃ qt, but we shall abstain from doing so.

Example 25. For the example processes S | Q and S | S | Q we obtain the finite automata shown in Fig. 1 (and already discussed in the Introduction). The states correspond to the following extended multisets and the transitions are as shown in the figure:

The key algorithm is a worklist algorithm and it is presented in Section 5.1. It starts out from the initial state and constructs the automaton by adding more and more states and transitions. The algorithm makes use of a number of auxiliary operations that are further developed in the subsequent subsections:

• Given a state qs representing some exposed actions we need to select those labels ℓ̃ that represent actions that may interact in the next step; this is done using the procedure enabled described in Section 5.3.

Fig. 1. Finite automata for running example.



Table 6 The worklist algorithm for constructing the automaton

(1) Q := {q0}; E[q0] := E [[P0]]; W := {q0}; δ := ∅;
(2) while W ≠ ∅ do
(3)   select qs from W; W := W\{qs};
(4)   for each ℓ̃ ∈ enabled(E[qs]) do
(5)     let E = transferℓ̃(E[qs])
(6)     in update(qs, ℓ̃, E)

• Once the labels ℓ̃ have been selected we can use the function transferℓ̃ already introduced in Section 4.3 to determine the exposed actions of the target state.
• Finally, an appropriate target state qt has to be constructed and the transition qs ⇒ℓ̃ qt must be recorded; this is done using the procedure update developed in Section 5.2.

Finally Section 5.4 proves the overall correctness of the construction.

5.1. The worklist algorithm

The main data structures of the algorithm are:

• a set Q of the states introduced so far; for each state q the table E will specify the associated extended multiset E[q] ∈ M;
• a worklist W being a subset of Q containing those states that have yet to be processed; and
• a set δ of triples (qs, ℓ̃, qt) defining the current transitions; here qs ∈ Q is the source, qt ∈ Q is the target and ℓ̃ ∈ Lab ∪ (Lab × Lab) is the label of the edge.

The overall algorithm has the form displayed in Table 6 and is explained below. The initialisations are performed in line (1): First the set of states Q is initialised to contain the initial state q0 and the associated entry in the table E is set to E [[P0]]. The worklist W will initially contain the state q0 and the transition relation δ will be empty. Line (2) contains the classical loop inspecting the contents of the worklist. A state qs is selected and removed from the worklist in line (3) and the set of potential interactions is constructed using the procedure call enabled(E[qs]) in line (4). The actual definition of this procedure is left to Section 5.3; for the present discussion it suffices to know that

enabled(E[qs]) ⊆ dom(E[qs]) ∪ (dom(E[qs]) × dom(E[qs]))

reflecting that either one of the actions or a pair of the actions from E[qs] will take part in the next interaction. For each ℓ̃ ∈ enabled(E[qs]) the procedure call transferℓ̃(E[qs]) of line (5) will return an extended multiset E describing the denotation of the target state. The last step is to update the automaton to include the new transition step and this is done in line (6) by the procedure call update(qs, ℓ̃, E). Here the idea is first to decide whether one of the existing states can be reused; only if this is not possible will a new state be created, and subsequently the transition relation δ will be updated.
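The loop just described can be sketched compactly as follows. This is our own encoding for illustration: `enabled_fn`, `transfer_fn`, the granularity function `H` and the widening `widen` are supplied by the caller, transitions with an existing source/label are simply overwritten (standing in for line (7) of Table 7), and the clean-up step of Table 8 is omitted; all names are ours:

```python
from collections import deque

def build_automaton(E0, enabled_fn, transfer_fn, H, widen):
    """Sketch of the worklist algorithm of Table 6."""
    E = {0: E0}                         # state -> extended multiset
    work, delta, fresh = deque([0]), {}, 1
    while work:
        qs = work.popleft()
        for lab in enabled_fn(E[qs]):
            Enew = transfer_fn(lab, E[qs])
            # update(qs, lab, Enew): reuse a state of the same granularity
            qt = next((q for q in E if H(E[q]) == H(Enew)), None)
            if qt is None:              # otherwise create a fresh state
                qt, fresh = fresh, fresh + 1
                E[qt] = {}
            if not leq(Enew, E[qt]):    # widen and reschedule if E[qt] grew
                E[qt] = widen(E[qt], Enew)
                work.append(qt)
            delta[(qs, lab)] = qt       # overwrites any earlier target
    return E, delta

def leq(m1, m2):
    """The pointwise ordering on extended multisets."""
    return all(m2.get(l, 0) >= n for l, n in m1.items())
```

Run on the data of the running example S | Q this produces the three-state automaton of Fig. 1.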
The details of the procedure update are described in Section 5.2; however, it is worth pointing out already here that the main challenge is to define update so as to ensure that the overall construction terminates, in particular, to ensure that the set Q of states remains bounded.

Example 26. To illustrate the workings of the algorithm let us consider the construction of the automaton for S | Q. Initially we have Q = {q0}, W = {q0} and E[q0] = ⊥M[1↦1, 3↦1, 5↦1] as recorded in the top left part of Fig. 2. As we shall see in Example 34 we will have enabled(E[q0]) = {(1, 3)} so using the definition of Example 22 we find E1 = ⊥M[2↦1, 4↦1] and we perform the call update(q0, (1, 3), E1). In Example 27 we shall see that this gives rise to the creation of a new state q1 with E[q1] = E1 and a transition q0 ⇒(1,3) q1 in the automaton. At the same time the worklist is updated to become W = {q1} as illustrated at the top right of Fig. 2.

In the second round it is determined that enabled(E[q1]) = {4} (see Example 34) and using the transfer function of Example 22 we calculate E2 = ⊥M[2↦1, 3↦1, 5↦1]. The call update(q1, 4, E2) will create a new state q2 with E[q2] = E2 and it will introduce the transition q1 ⇒4 q2 (see Example 27). The worklist now becomes W = {q2} as summarised in the bottom left part of Fig. 2.

In the third round we get enabled(E[q2]) = {(2, 5)} and using the transfer function we calculate E3 = ⊥M[1↦1, 3↦1, 5↦1]. As we shall see in Example 27 the call update(q2, (2, 5), E3) will figure out that q0 already has E[q0] = E3 and hence q0 can be reused as shown in the bottom right part of Fig. 2—the final transition is added to the automaton and the worklist is exhausted so the algorithm terminates.



Fig. 2. Simulation of the algorithm for S | Q.

Table 7 Processing enabled actions: update(qs, ℓ̃, E)

(1) if there exists q ∈ Q with H(E[q]) = H(E)
(2) then qt := q
(3) else select qt from outside Q;
(4)      Q := Q ∪ {qt}; E[qt] := ⊥M;
(5) if ¬(E[qt] ⊒M E) then
(6)   E[qt] := E[qt] ∇M E; W := W ∪ {qt};
(7) δ := δ\{(qs, ℓ̃, q) | q ∈ Q} ∪ {(qs, ℓ̃, qt)};
(8) clean-up(Q, W, δ)

5.2. The procedure update

The procedure update(qs, ℓ̃, E) is specified in Table 7. Recall that E is the extended multiset describing the denotation of the target state (to be called qt) to which there should be a transition labelled ℓ̃ that emerges from qs. The procedure proceeds in three stages:

• First it determines the state qt. In lines (1–4) it is first checked whether one of the existing states can be used and if not a new state is created and the corresponding entry in E is initialised to ⊥M. In line (1) we make use of a granularity function H; the most obvious choice might be the identity function, i.e. H(E) = E, but it turns out that this choice may lead to non-termination of the worklist algorithm. A more interesting choice is H(E) = dom(E), meaning that only the domain of the extended multiset is of interest; we shall discuss further choices below.
• In lines (5–6) it is checked whether the description E[qt] includes the required information E and if not it is updated and the state is put on the worklist for future processing. If qt is a new state then most likely the test will fail; however, it may also do so when qt is one of the existing states. The widening operator ∇M used in line (6) makes sure to combine the old and the new extended multisets in such a way that termination of the overall algorithm is ensured. We shall return to the definition of ∇M shortly.
• The transition relation is updated in lines (7–8). It is not enough just to add the triple (qs, ℓ̃, qt); we also have to remove any previous transitions from qs with label ℓ̃ as their target state may no longer be correct. As a consequence the automaton may contain unreachable parts and the procedure clean-up(Q, W, δ) specified in Table 8 will remove the parts of Q, W and δ that cannot be reached from the initial state q0. This will be illustrated in Example 29.

Example 27. In Example 26 we relied on the computation of update in each of the three rounds.
In the first round we needed to determine update(q0, (1, 3), E1) for E1 = ⊥M[2↦1, 4↦1]. Using H(E) = dom(E) as the granularity function it follows that we need



Table 8 The clean-up operation: clean-up(Q, W, δ)

Qreach := {q0} ∪ {q | ∃n, ∃q1, . . . , qn : (q0, ·, q1) ∈ δ ∧ · · · ∧ (qn, ·, q) ∈ δ};
Q := Q ∩ Qreach;
δ := δ ∩ (Qreach × (Lab ∪ (Lab × Lab)) × Qreach);
W := W ∩ Qreach;

a new state (q1) and we shall take E[q1] = ⊥M ∇M E1 which, as we shall see shortly, amounts to E[q1] = E1 since ⊥M will be a left identity for ∇M. In the second round we called update(q1, 4, E2) where E2 = ⊥M[2↦1, 3↦1, 5↦1] and using the same reasoning as above the new state q2 is created with E[q2] = E2. The last round is more interesting since here we called update(q2, (2, 5), E3) where E3 = ⊥M[1↦1, 3↦1, 5↦1]. Now H(E3) = H(E[q0]) so the state q0 will be reused. Since E[q0] = E3 there is no need to update the extended multiset associated with q0 and hence the worklist will not be updated either.

The widening operator ∇M : M × M → M used in line (6) of Table 7 to combine extended multisets is defined by

(M1 ∇M M2)(ℓ) =
    M1(ℓ)   if M2(ℓ) ≤ M1(ℓ)
    M2(ℓ)   if M1(ℓ) = 0 ∧ M2(ℓ) > 0
    ∞       otherwise

It will ensure that the chain of values taken by E[qt] in line (6) always stabilises after a finite number of steps. We refer to [15,16] for a formal definition of widening and merely establish the correctness of our choice.

Fact 28. ∇M is a widening operator; in particular we have M1 ⊔M M2 ⊑M M1 ∇M M2.

We shall now return to the choice of granularity function H : M → H to be used in line (1) of Table 7. The parameterised function HL,k (for k ∈ N ∪ {∞} and L ⊆ Lab) is an example of a granularity function:

HL,k(E) = {(ℓ, n) | ℓ ∈ L ∧ E(ℓ) = n ≤ k} ∪ {(ℓ, ∞) | ℓ ∈ L ∧ (E(ℓ) = n > k ∨ E(ℓ) = ∞)}

In the examples we have used HLab,0; this corresponds to focusing on the domains of the exposed multisets and hence simply ignoring the counts, that is, HLab,0(E) = HLab,0(E′) amounts to dom(E) = dom(E′).

Example 29. To illustrate the algorithm consider the process S | R and let us use the granularity function HLab,0. Initially, the automaton will contain the single state q0 having E[q0] = ⊥M[1↦1, 3↦∞, 5↦∞] and q0 will be the only state on the worklist. The while-loop of Table 6 will be executed six times; Fig. 3 shows the result after each of these rounds.
The first three rounds follow the same overall pattern as in Example 26 with the only new phenomenon being that two states are created in round 2 because enabled(E[q1]) contains two elements, namely 4 and (2, 5) (see Example 34)—and consequently the two new states q2 and q3 are added to the worklist. In the third round the state q2 is selected and it is determined that q0 can be reused and we obtain the result shown in the middle leftmost part of Fig. 3.

In the fourth round we look at the state q3 where we have enabled(E[q3]) = {(1, 3), 4}. In the case of 4 it turns out that the state q0 can be reused. The case of (1, 3) is more interesting as here transfer(1,3)(E[q3]) = ⊥M[2↦1, 3↦∞, 4↦2, 5↦∞]. The granularity function HLab,0 allows us to reuse the state q1 which is already associated with the extended multiset ⊥M[2↦1, 3↦∞, 4↦1, 5↦∞]. Notice that the two extended multisets we would like to use for q1 have different non-zero counts associated with the label 4 but otherwise they are identical. We shall now use the widening operator ∇M to combine the two pieces of information and this gives us the new extended multiset to be associated with q1, namely ⊥M[2↦1, 3↦∞, 4↦∞, 5↦∞] (so the count for 4 is ∞). This is recorded in the table for the fourth round of Fig. 3. Since the extended multiset associated with q1 is changed it enters the worklist for further processing.

In the fifth round we check whether these changes to q1 also mean that we need to change the automaton. As before enabled(E[q1]) = {(2, 5), 4}. Let us first look at the transition 4. It will no longer give rise to an extended multiset that can be described by the state q2—indeed it is possible to reuse the state q1 itself. Hence the transition from q1 to q2 must be removed and replaced by a transition from q1 to itself. Since the state q2 is no longer reachable it is removed by the clean-up procedure together with the transitions involving it.
For the transition (2, 5) we decide to reuse the state q3 but its extended multiset has to be updated so that label 4 is mapped to ∞—as before this happens using the widening operator. The state q3 enters the



Fig. 3. Simulation of the algorithm for S|R.

worklist once again and we are ready for the last round that simply confirms that all the states and transitions satisfy the required conditions. By selecting a more discriminating granularity function it is possible to obtain a more precise account of parts of the interactions; we shall return to this in the conclusion.
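The widening operator ∇M and the granularity functions HL,k of Section 5.2 are straightforward to implement. A sketch in our own encoding (zero counts are dropped when comparing granularities, which does not affect equality of H-values; all names are ours):

```python
INF = float('inf')

def widen_M(m1, m2, labels):
    """The widening operator: keep the old count if the new one is no larger,
    adopt the new count if the old one was zero, and otherwise jump to ∞."""
    out = {}
    for l in labels:
        a, b = m1.get(l, 0), m2.get(l, 0)
        if b <= a:
            out[l] = a
        elif a == 0 and b > 0:
            out[l] = b
        else:                    # the count grew: widen straight to infinity
            out[l] = INF
    return out

def H(E, L, k):
    """The granularity function H_{L,k}: counts above k collapse to ∞."""
    return frozenset((l, n if n <= k else INF)
                     for l, n in E.items() if l in L and n > 0)
```

The fourth round of Example 29 is reproduced by widening the old description of q1 with the newly computed one: the count for label 4 jumps to ∞ while the granularity under HLab,0 is unchanged, illustrating stability.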



To ensure the correct operation of the algorithm we shall be interested in granularity functions enjoying certain properties:

• H is finitary if for all choices of finite sets Labfin ⊆ Lab, H specialises to H : (Labfin → N ∪ {∞}) → Hfin for some finite subset Hfin ⊆ H.
• H is stable if H(E1) = H(E2) implies H(E1 ∇M E2) = H(Ei) for i = 1, 2.

Fact 30. The granularity function HL,k is finitary as well as stable (for all choices of L ⊆ Lab and k ≥ 0).

We are now able to state a general termination result for the construction of the finite automaton:

Theorem 31. Whenever the algorithm of Table 6 terminates it produces a partially deterministic automaton. If the granularity function H is stable then the automaton satisfies the following injectiveness property:

∀q1, q2 ∈ Q : H(E[q1]) = H(E[q2]) ⇒ q1 = q2

(∗)

If the granularity function H is finitary then the algorithm always terminates.

Proof. The first claim is proved by showing that

∀(q1, ℓ̃1, q1′), (q2, ℓ̃2, q2′) ∈ δ : q1 = q2 ∧ ℓ̃1 = ℓ̃2 ⇒ q1′ = q2′

is an invariant at line (3) of the worklist algorithm of Table 6. It is maintained due to the construction of δ in line (7) of Table 7.

For the second claim we shall show that the formula (∗) is an invariant at line (3) of the worklist algorithm of Table 6. In the case that an existing state is chosen as qt in line (2) of Table 7 it is immediate from stability that the modifications in lines (6–7) of Table 7 keep H(E[qt]) unchanged and therefore the invariant is maintained. In the case that a new state is chosen as qt in lines (3–4) it follows that the modifications of lines (5–7) of Table 7 ensure that H(E[qt]) = H(E) where it is known that H(E[q]) = H(E) fails for all old states q and therefore the invariant is maintained.

The third claim is proved by contradiction. So let us fix a finite set Labfin that is appropriate for the program considered and let us consider a non-terminating execution of the worklist algorithm of Table 6. It is immediate that line (3) of Table 6 must be executed infinitely often. It is also clear that Q and E[·] grow in a non-decreasing manner. Also the set {H(E[q]) | q ∈ Q} grows in a non-decreasing manner and since H is finitary the value of the set will stabilise. Subsequently the test in line (1) of Table 7 must always succeed and hence lines (3–4) of Table 7 cannot be executed any more. This shows that also Q stabilises. Next consider the vector (E[q])q∈Q which is known to grow in a non-decreasing manner. It follows from Fact 28 that (E[q])q∈Q must eventually stabilise and therefore W does not grow from this point onwards. Each subsequent execution of lines (4–7) of Table 6 will remove an element from the finite set W. It follows that at some point the test in line (3) of Table 6 yields false and that the algorithm terminates.
This constitutes our desired contradiction. □

5.3. The procedure enabled

We now return to the construction of the procedure enabled(E) used in line (4) of the worklist algorithm in Table 6. Recall that E is the extended multiset of exposed actions in the state of interest. There are two cases:

(1) ℓ ∈ dom(E) is the label of a τ action: then clearly ℓ is enabled.
(2) ℓ1 ∈ dom(E) and ℓ2 ∈ dom(E) are labels of matching actions: then (ℓ1, ℓ2) may be enabled provided that the two labels may occur in parallel processes.

In order to address the latter issue the next step will be to compute sets of compatible actions, that is, pairs of matching actions that may occur in parallel processes. Consider the process P | P′. Clearly interactions may occur locally within each of P and P′ but since the two processes are in parallel it is also possible for an action of P to interact with one in P′. To capture this assume that L and L′ are the labels occurring in P and P′, respectively. Recalling that j(ℓ) is the canonical action associated with the label ℓ in a consistently labelled process we take

comp(L, L′) = {(ℓ, ℓ′) ∈ L × L′ | ∃x : j(ℓ) = x ∧ j(ℓ′) = x̄}
            ∪ {(ℓ, ℓ′) ∈ L × L′ | ∃x : j(ℓ) = x̄ ∧ j(ℓ′) = x}



Table 9 Compatible actions for the program let A1 ≜ P1; · · · ; Ak ≜ Pk in P0

C[[new x P]]env = C[[P]]env
C[[P | P′]]env = let (L, C) = C[[P]]env and (L′, C′) = C[[P′]]env
                in (L ∪ L′, C ∪ C′ ∪ comp(L, L′))
C[[Σ_{i∈I} αiℓi.Pi]]env = let (Li, Ci) = C[[Pi]]env
                          in (∪_{i∈I} (Li ∪ {ℓi}), ∪_{i∈I} Ci)
C[[A]]env = env(A)

C [[P]] = C[[P]]envC   where FC(env) = [A1 ↦ C[[P1]]env, . . . , Ak ↦ C[[Pk]]env]
                             env∅ = [A1 ↦ (∅, ∅), . . . , Ak ↦ (∅, ∅)]
                        and  envC = ⊔_{j≥0} FC^j(env∅)

to specify the potential matching pairs of actions from the two processes. Note that we are testing on the canonical actions as the analysis does not capture the alpha renaming of the semantics—it cannot distinguish between the different instances of the names obtained by e.g. unfolding a recursive process that introduces new names. The function C will compute the set of compatible actions; it has functionality

C : Proc → P(Lab) × P(Lab × Lab)

and will return the labels of the potential actions of the process in the first component and pairs of labels of the potential interacting actions in the second component. The function is defined in Table 9 using the overall pattern developed in Sections 3 and 4. So we have a function

C : Proc → (PN → P(Lab) × P(Lab × Lab)) → P(Lab) × P(Lab × Lab)

whose second argument is an environment providing similar information for the process names. The domain P(Lab) × P(Lab × Lab) inherits a partial ordering ⊆ from the subset ordering on sets and becomes a complete lattice. The functional FC is a monotonic functional on a complete lattice and hence has a least fixed point envC. As the functional FC is in fact also continuous the Kleene formulation of the fixed point is permissible. It follows that the overall definition of C is well-defined. The implementation can be performed by a simple iteration as the complete lattice of interest does not have infinite ascending chains.

Example 32. For the running example of Example 1 we get, as one might expect, C [[S | Q]] = ({1, 2, 3, 4, 5}, {(1, 3), (2, 5)}); the same result is obtained for the processes S | S | Q and S | R.

Lemma 33. If P ≡ Q then C [[P]] = C [[Q]] and furthermore if P →ℓ̃ Q then C [[Q]] ⊆ C [[P]] and ℓ̃ ∈ L ∪ C where C [[P]] = (L, C).

Proof. The proof is analogous to that of Lemma 8. □

Given a program let A1 ≜ P1; · · · ; Ak ≜ Pk in P0 we shall write (L⋆, C⋆) for C [[P0]]. We shall now define

enabled(E) = (L⋆ ∩ dom(E) ∩ {ℓ | j(ℓ) = τ}) ∪ (C⋆ ∩ (dom(E) × dom(E)))

as the set of enabled actions. Note that the first part only includes those labels in L⋆ that correspond to τ actions in E because τ actions are the only individual actions that are allowed to execute by the semantics of Table 1.

Example 34. In Example 26 for S | Q we make use of the enabled function in three cases:

To see how the entry for q0 is obtained we observe that dom(E[q0 ]) = {1, 3, 5} and from Example 32 we have C [[S | Q]] = ({1, 2, 3, 4, 5}, {(1, 3), (2, 5)}) so using the above formula for enabled we get enabled(E[q0 ]) = {(1, 3)}.
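The computation of enabled can be sketched concretely. The following is a minimal Python sketch under our own encoding (not the paper's Standard ML prototype): labels are integers, the exposed extended multiset E is a dictionary of multiplicities, and the set of τ-labels is passed explicitly.

```python
# A sketch of enabled(E) from this section, under an assumed encoding:
# E maps labels to multiplicities, (L, C) is the analysis result C[[P]],
# and tau_labels collects the labels of tau actions.

def enabled(E, L, C, tau_labels):
    dom = {l for l, n in E.items() if n > 0}          # dom(E)
    solo = L & dom & tau_labels                       # individual tau actions
    pairs = {(l1, l2) for (l1, l2) in C
             if l1 in dom and l2 in dom}              # exposed matching pairs
    return solo | pairs

# Example 34 for S | Q: dom(E[q0]) = {1, 3, 5} and
# C[[S | Q]] = ({1,...,5}, {(1, 3), (2, 5)}); no label is a tau action.
L_star = {1, 2, 3, 4, 5}
C_star = {(1, 3), (2, 5)}
E_q0 = {1: 1, 3: 1, 5: 1}
print(enabled(E_q0, L_star, C_star, set()))  # {(1, 3)}
```

The pair (2, 5) is excluded because label 2 is not in dom(E[q0]), matching the entry computed above.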


H.R. Nielson, F. Nielson / Computer Languages, Systems & Structures 35 (2009) 365 -- 394

In Example 29 for S|R we make use of the following values of the enabled function:

Note that the function only reports whether some interaction might be possible, not how many times it might be possible. We have the following results:

Fact 35. The function enabled is monotonic in its arguments.

The correctness of this definition amounts to strengthening Lemma 8:

Lemma 36. If P →ℓ̃ Q then ℓ̃ ∈ enabled(E[[P]]).

Proof. This is an immediate consequence of Lemmas 8 and 33. □
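The least fixed point envC underlying these results is computed by the simple iteration mentioned earlier. A toy sketch of that iteration (our own encoding, not the paper's Standard ML prototype): each process name maps to the names its body references together with the labels and label pairs contributed directly by the body, and we iterate from the bottom environment until stable.

```python
# Kleene-style iteration for env_C over the finite lattice
# P(Lab) x P(Lab x Lab); the encoding of process bodies is a toy
# assumption, not the paper's syntax.

def fix_envC(defs):
    # defs: name -> (referenced_names, own_labels, own_pairs)
    env = {name: (set(), set()) for name in defs}     # bottom element
    changed = True
    while changed:                                    # terminates: finite lattice
        changed = False
        for name, (refs, labels, pairs) in defs.items():
            ls, ps = set(labels), set(pairs)
            for r in refs:                            # join in env(A_r)
                ls |= env[r][0]
                ps |= env[r][1]
            if (ls, ps) != env[name]:
                env[name] = (ls, ps)
                changed = True
    return env

# Toy versions of the semaphore S = g.p.S and an environment process Q:
defs = {"S": ({"S"}, {1, 2}, set()),
        "Q": ({"Q"}, {3, 4, 5}, set())}
env = fix_envC(defs)
print(env["S"])  # ({1, 2}, set())
```

The sketch only accumulates label information; computing the interacting pairs additionally requires matching action names against co-names, which we omit here.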



5.4. Correctness result

We now develop the simulation result showing the correctness of the automaton constructed by Table 6. We shall define P ⊲ E by

P ⊲ E iff E[[P]] ≤M E

and say that a state denoting the multiset E represents a process P whenever P ⊲ E. We then have:

Lemma 37. If P ≡ Q then P ⊲ E if and only if Q ⊲ E.

Proof. This is an immediate consequence of Lemma 8. □

We can now establish the main result, which is independent of the choice of the granularity function H:

Theorem 38. Suppose that the algorithm of Table 6 terminates and produces a finite automaton (Q, q0, δ, E). If P ⊲ E[q] and P →ℓ̃ Q then there exists a unique q′ ∈ Q such that Q ⊲ E[q′] and (q, ℓ̃, q′) ∈ δ.

Proof. Consider the last time the state q was removed from the worklist W in line (4) of the worklist algorithm of Table 6. Letting δ0 and E0 denote the corresponding values of the data structures we have E0[q] = E[q] and hence P ⊲ E0[q]. Since P →ℓ̃ Q it follows that ℓ̃ ∈ enabled(E0[q]) using Lemma 36 and Fact 35, and hence that ℓ̃ is selected for consideration in line (5) of Table 6. By Theorem 23 it follows that line (6) of Table 6 produces an extended multiset E′ such that Q ⊲ E′. By line (7) of Table 6 and the definition of update in Table 7 it is immediate that we identify a state q′ in lines (1–4) of Table 7 and that after the execution of lines (5–8) of Table 7 we have

(q, ℓ̃, q′) ∈ δ1 and E′ ≤M E1[q′]

where δ1 and E1 denote the corresponding values of the data structures at this time.



There will be no further calls of update(q, ℓ̃, ...) because we already executed lines (6–7) of Table 6 for ℓ̃ and we assumed that there are no subsequent choices of q in line (4) of Table 6. It follows that line (8) of Table 7 will not subsequently remove (q, ℓ̃, q′) from δ. It is immediate that the values of E[·] grow in a non-decreasing manner. Writing δ and E for the final values of the data structures we have

(q, ℓ̃, q′) ∈ δ and E′ ≤M E1[q′] ≤M E[q′]

which establishes the claim. Finally, the uniqueness of q′ is due to Theorem 6, which asserts that the automaton is partially deterministic. □

The reflexive and transitive closure δ* of δ may be inductively defined by

(q0, ε, q0) ∈ δ*, and if (q0, ω, q) ∈ δ* and (q, ℓ̃, q′) ∈ δ then (q0, ωℓ̃, q′) ∈ δ*,

and allows us to state the following corollary:

Corollary 39. Suppose that the algorithm of Table 6 terminates and produces a finite automaton (Q, q0, δ, E). If

let A1 ≜ P1; ··· ; Ak ≜ Pk in P0 →ω* P

then there exists a state q ∈ Q such that P ⊲ E[q] and (q0, ω, q) ∈ δ*.

Proof. The proof is by induction on the length of ω. In the base case where ω is ε we have P = P0 and we choose q = q0. Due to the initialisation of the worklist algorithm of Table 6, and since E[·] grows in a non-decreasing manner, it is immediate that E[[P]] ≤M E[q0], which proves that P ⊲ E[q] and (q0, ε, q) ∈ δ*. In the inductive case we merely make use of Theorem 38. □

6. Worked example

As some more complex examples we shall consider a few variants of Milner's process for a jobshop [14] and use our prototype implementation to analyse the processes.

6.1. The prototype implementation

The prototype is implemented in Standard ML and closely follows the algorithms explained in the previous sections. It takes as input a CCS program as defined in Section 2.1, performs the analysis and returns a finite automaton that can be displayed graphically using an appropriate graph drawing tool. Our prototype implementation exploits the fact that the fixed point computations involved in determining the exposed actions and the generated actions have the alternative characterisations given in Lemmas 10 and 17; the killed actions are computed by a straightforward iteration as discussed in Section 4.2. The worklist algorithm of Section 5 is implemented using the imperative data structures of Standard ML and is parameterised on the choice of granularity function, thereby allowing experimentation with different choices. The prototype has been used to analyse several larger CCS programs and in the following we present the results computed for a few of them.

6.2. Analysing the jobshop program

In the CCS program displayed in Table 10 two workers (modelled by the process J) have two tools (modelled by the processes H and M) at their disposal for handling three kinds of jobs:

• easy jobs modelled by the process JE and requiring no tools,
• normal jobs modelled by the process JN and requiring any one of the two tools, and
• difficult jobs modelled by the process JD and requiring the H tool.

The workers pick the tools up using the actions gh and gm and return the tools using the actions ph and pm; the tools themselves perform the matching actions. The actions iE, iN and iD are used for taking new (easy, normal or difficult) jobs whereas the action



Table 10 The jobshop process

let H ≜ gh^1 . ph^2 . H
    M ≜ gm^3 . pm^4 . M
    JE ≜ o^5 . J
    JN ≜ gh^6 . ph^7 . JE + gm^8 . pm^9 . JE
    JD ≜ gh^10 . ph^11 . JE
    J ≜ iE^12 . JE + iN^13 . JN + iD^14 . JD
    Q ≜ iE^15 . Q + iN^16 . Q + iD^17 . Q + o^18 . Q
in J | J | H | M | Q

Fig. 4. The jobshop J | J | H | M | Q.

o is used for returning a processed job. The overall process is closed by adding an environment process Q that at any time will produce a job of one of the three kinds and that is also able to accept a processed job at any time.

We shall take the view that the interesting actions are those where the jobbers acquire and release the tools, that is, the actions gh, ph, gm and pm labelled 6, ..., 11 in Table 10. We therefore choose a granularity function focusing on these labels, namely H^∞_{6,...,11}. The resulting automaton produced by our prototype implementation is shown in Fig. 4; only edges with labels from the set {6, ..., 11} are shown in order not to clutter the graph. The node q0 represents configurations where no tools are in use; q3, q4 and q14 represent configurations in which exactly one tool is in use, and the remaining two nodes q9 and q13 represent configurations where both tools are in use. The top diamond of the graph (with nodes q0, q3, q4 and q9) captures the situation where the tools are used for normal jobs whereas the two bottom nodes (q13 and q14) represent the situation where a difficult job is being processed.

As one might expect, employing an additional jobber in the jobshop does not change much: indeed the analysis of the system J | J | J | H | M | Q with three jobbers rather than two is exactly as in Fig. 4. The effect of also acquiring an extra tool can easily be observed using the analysis. Fig. 5 shows the result of analysing the system J | J | J | H | H | M | Q



Fig. 5. The jobshop J | J | J | H | H | M | Q.

Table 11 The jobshop process for complex jobs

let H ≜ gh^1 . ph^2 . H
    M ≜ gm^3 . pm^4 . M
    JE ≜ o^5 . J
    JN ≜ gh^6 . ph^7 . JE + gm^8 . pm^9 . JE
    JC ≜ gh^10 . gm^11 . JC′ + gm^12 . gh^13 . JC′
    JC′ ≜ pm^14 . ph^15 . JE + ph^16 . pm^17 . JE
    J ≜ iE^18 . JE + iN^19 . JN + iC^20 . JC
    Q ≜ iE^21 . Q + iN^22 . Q + iC^23 . Q + o^24 . Q
in J | J | H | M | Q

with three jobbers and three tools. The states of the automaton are displayed in four columns where the single state (q0 ) of the leftmost column represents the case where no tools are in use. The second column (nodes q3 , q4 and q48 ) represents configurations where only one tool is in use, the third column (nodes q36 , q40 , q46 , q51 and q58 ) represents configurations where two of the tools are in use and finally the rightmost column (nodes q38 , q49 and q56 ) represents configurations where all three tools are in use.
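Automata such as those of Figs. 4 and 5 can also be queried mechanically. The following sketch uses a hypothetical encoding of δ as a set of (source, label, target) triples, which is our own and not taken from the prototype; it computes the reachable states (the states q with (q0, ω, q) ∈ δ* for some ω) and the reachable states without outgoing edges, which is how deadlocks show up in the next subsection.

```python
# Two mechanical checks on an automaton (Q, q0, delta, E) produced by
# the analysis; the triple encoding of delta is an assumption of ours.

def reachable(q0, delta):
    # states q with (q0, omega, q) in delta* for some omega
    seen, frontier = {q0}, [q0]
    while frontier:
        q = frontier.pop()
        for (src, _lab, tgt) in delta:
            if src == q and tgt not in seen:
                seen.add(tgt)
                frontier.append(tgt)
    return seen

def stuck_states(q0, delta):
    # reachable states with no outgoing edge (potential deadlocks)
    outgoing = {src for (src, _lab, tgt) in delta}
    return {q for q in reachable(q0, delta) if q not in outgoing}

# Tiny illustration on a hypothetical fragment (not the actual graph):
delta = {("q0", 6, "q3"), ("q3", 8, "q62"), ("q3", 7, "q0")}
print(stuck_states("q0", delta))  # {'q62'}
```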

6.3. Extending the jobshop

Let us now introduce a new kind of job, complex jobs, that require the use of both tools. In Table 11 the process JC expresses that a jobber may grab the two tools in any order and, as expressed by the process JC′, they may also be released in any order. For simplicity we consider a setup with only easy, normal and complex jobs, and we assume that there are only two tools and two jobbers.



Fig. 6. The graph for the modified jobshop.

As before we shall focus on the actions where the jobbers acquire and release the tools, that is, the actions labelled 6, ..., 17 in Table 11. Using the granularity function H^∞_{6,...,17} our prototype implementation of the analysis produces the automaton shown in Fig. 6; below we discuss some of the properties captured by the analysis.

The diamond formed by the nodes q0, q3, q4 and q9 corresponds to both jobbers working only on normal jobs. The node q36 represents the configuration where one of the jobbers has acquired both tools, whereas the node q62 corresponds to the deadlock situation obtained when each of the jobbers holds one tool and is waiting for the other; note that the deadlock is captured by the absence of outgoing edges from the node.

The direct path from q0 via q32 to q36 represents that one of the jobbers first gets the H tool and then the M tool; the path via q33 represents that the tools are acquired in the opposite order. However, the graph also shows that the configuration q36 can be reached in other ways. The path from q0 via q4, q54 and q32 to q36 captures that the M tool is acquired by one of the jobbers, then the H tool is taken by the other, and only after the first jobber releases the M tool can it be acquired by the second jobber; indeed the presence of the loop between q54 and q32 shows that the M tool can be used by the second jobber for a normal job before it is handed over to the other jobber.

The tools are released one by one, as captured for example by the paths from q36 via q38 or q39 to q0. However, the graph also shows that they may be picked up by the other jobber and used either to process normal jobs (as captured for example by the loop between q39 and q41) or as a first step in acquiring both tools as required for processing a complex job. To see this consider the loop from q36 via q39, q44 and q32 back to q36: first the H tool is released and immediately picked up by the other jobber, and then the M tool is released and picked up.

As in the previous version of the example, adding more jobbers does not change the analysis result. Adding more copies of the tools produces a more complex graph capturing the new potential interactions.

7. Conclusion

Often static analysis is used to capture properties of the configurations arising during the execution of programs or processes; the analysis presented in this paper goes one step further and focuses on the transitions between configurations. For communicating concurrent processes this is a non-trivial task, and for CCS it is further complicated by the fact that new processes may arise dynamically just as existing ones may cease to exist.



To handle these complex scenarios we first introduced extended multisets of exposed actions; they are used to model the configurations of the systems. To capture the dynamic nature of processes we then performed a detailed analysis of how the extended multisets grow and shrink with the execution of the processes. This was inspired by the classical kill/gen functions of bit vector frameworks for imperative programming languages and forms the basis for developing an analysis of the transitions of CCS. Here the classical worklist algorithm plays a key role and we also rely on the widening operators of abstract interpretation to ensure termination of the algorithm. We have chosen to present the analysis visually as a finite graph; it may be viewed as a finite representation of the potentially infinite transition system obtained from the reduction semantics of CCS.

The analysis performs several approximations. First, we cannot obtain precise information about the killed and generated exposed actions, so we settle for an under-approximation of the former and an over-approximation of the latter. Next, to ensure that the construction of the automaton terminates, we use a granularity function for comparing the extended multisets associated with the states and a widening operator for computing the extended multisets of new states. Nonetheless, for a large class of processes, our analysis is powerful enough to construct the complete transition graph.

The precision of the analysis can be controlled by the choice of granularity function. In the development presented here we have taken the approach where full precision can be obtained up to a certain level after which everything is mapped to ∞; to simplify the development we have also refrained from introducing a case analysis that would allow us, in certain cases, to regain finitary information much as in the complicated shape analyses of [17].

7.1. Extensions and future work

The present paper presents the foundations for a novel analysis technique extracting a finite automaton from a process expressed in a process calculus. The initial ideas were presented in [18] and illustrated on a generalisation of the Diffie–Hellman key agreement protocol. Several extensions of the approach have emerged since then; here we give an overview of the key ideas that have been pursued together with pointers to future work.

One may observe that the actions themselves play a minor role in the development of the analysis: they are only consulted when determining whether or not two exposed actions might indeed synchronise. The setting becomes far more complex when turning to richer process calculi like the π-calculus [14] where synchronisation is replaced by communication of values or names over channels. It is then necessary for the analysis to keep track of the bindings of names, and in [19] we show how to extend the present analysis to the π-calculus by letting all states of the automata contain information not only about the exposed actions but also about (an over-approximation of) the potential bindings of names; the resulting analysis is used to validate privacy properties in a service-oriented system where several components communicate over a shared channel.

The above line of work thus localises the information about bindings by associating it with the individual states of the automaton. Another possibility would be to keep global information about the bindings, obviously at the price of obtaining a more approximate analysis result. The global information about name bindings can easily be obtained from a control flow analysis; in the case of the π-calculus one might for example rely on the analyses of [4]. We have explored this line of work in the context of BioAmbients [20], a version of the Ambient Calculus [21] tailored for modelling biological systems. The resulting analysis is presented in [22]; it obtains global information about bindings of names as well as the spatial structure of the ambients from a control flow analysis originally developed in [23]. The finite automata obtained by the analysis provide an abstraction of the biological pathways and have been used to analyse a model of the LDL cholesterol degradation process.

Indeed we believe that the framework can be adapted to a wide variety of process calculi with different synchronisation and communication primitives; in passing let us mention that we have ourselves studied a setting with broadcast communication [24] for a variant of the distributed calculus KLAIM [25]. There are also several ways to vary the framework depending on which information is associated with the states (and hence kept local) and how much information is global and therefore has to be computed beforehand by, for example, a control flow analysis.

The analyses described so far all produce over-approximations of the actual behaviour of the processes. We have also investigated how to combine them with under-approximations, thereby obtaining finite automata where some of the edges express transitions that may happen whereas others describe transitions that must happen in the process. In [26] we study this in the context of CCS; the basic idea is to replace the extended multisets giving upper bounds with extended multisets of intervals giving upper as well as lower bounds on the number of occurrences of an exposed action. In future work we plan to extend this approach to handle more complex process calculi.

Finally, we have proposed to use variants of action computation tree logic (ACTL) [27] to describe properties of the automata constructed by this analysis technology. The idea is to equip the logic with a 3-valued interpretation and evaluate the formulae on the finite automata constructed by the analysis. The correctness of the analysis then allows us to reason about may as well as must properties of the processes based on the analysis result alone. We have investigated these ideas in the presence of over- as well as under-approximations [26]; we see this as providing a fruitful link between static analysis techniques and model checking.

Acknowledgements

This work has been supported by the EU-IST-FET project SENSORIA (FP6-016004).



Appendix A. Termination proofs

Proof of Lemma 12. Using the notation of Table 3 the lemma states that

envE = FE^k(env⊥M) ∇ FE^{2k}(env⊥M)

where ∇ is the pointwise extension of the operation ∇M defined by

(M ∇M M′)(ℓ) = M(ℓ) if M(ℓ) = M′(ℓ), and ∞ otherwise.

To prove this, recall from Lemma 10 that for each process Pi we can find numbers nij and extended multisets Mi such that

E[[Pi]]env = (ni1 ·M env(A1)) +M ··· +M (nik ·M env(Ak)) +M Mi

holds for all env. For the mth unfolding (m ≥ 1) we may calculate

E[[Pi]]^m env = (ni1^[m] ·M env(A1)) +M ··· +M (nik^[m] ·M env(Ak)) +M ((Σ_{p=1}^{m−1} ni1^[p]) ·M M1) +M ··· +M ((Σ_{p=1}^{m−1} nik^[p]) ·M Mk) +M Mi

where

ni j^[1] = nij
ni j^[m+1] = Σ_{p=1}^{k} nip · npj^[m]

as can be verified by numerical induction on m. We shall say that nij^[m] consists of k^{m−1} summands, each with m factors; each summand n_{i p1} n_{p1 p2} ··· n_{p_{m−1} j} involves m+1 indices (denoted i, p1, ..., p_{m−1}, j) chosen among {1, ..., k}.

Now consider the new contributions arising from additional unfoldings (for q ≥ 1):

E[[Pi]]^{(q+1)k} env⊥M = E[[Pi]]^{qk} env⊥M +M Mi,qk^{(q+1)k}

where

Mi,qk^{(q+1)k} = ((Σ_{p=qk}^{(q+1)k−1} ni1^[p]) ·M M1) +M ··· +M ((Σ_{p=qk}^{(q+1)k−1} nik^[p]) ·M Mk)

First we establish the following fact (for q ≥ 1):

Mi,qk^{(q+1)k}(ℓ) > 0 ⇔ Mi,k^{2k}(ℓ) > 0    (A.1)

For "⇐" we observe that there must be some n_{p0 p1} n_{p1 p2} ··· n_{p_{r−1} pr} > 0 with M_{pr}(ℓ) > 0 and k ≤ r < 2k. Hence there must be some 0 ≤ a < b ≤ r such that pa = pb. By repetition of the factor n_{pa pa+1} ··· n_{p_{b−1} pb} > 0 we can construct a summand n_{p0 p1} ··· n_{p_{r′−1} pr′} > 0 containing m factors and m+1 indices for some m satisfying qk ≤ m < (q+1)k. This suffices for showing Mi,qk^{(q+1)k}(ℓ) > 0.

Conversely, for "⇒" there must be some n_{p0 p1} ··· n_{p_{r−1} pr} > 0 with M_{pr}(ℓ) > 0 and qk ≤ r < (q+1)k; in the non-trivial case we have 2k ≤ r. For each summand we can find 0 ≤ a < b ≤ r such that pa = pb and b − a ≤ k; hence we can delete the factor n_{pa pa+1} ··· n_{p_{b−1} pb} and obtain a new summand containing m factors and m+1 indices for some m satisfying r − k ≤ m < r (using that b − a ≤ k). We continue this process until m < 2k, in which case m ≥ k (once more using that b − a ≤ k). This suffices for showing Mi,k^{2k}(ℓ) > 0.

Finally we establish the following fact:

(⨆_j E[[Pi]]^j env⊥M)(ℓ) = ∞ ⇔ Mi,k^{2k}(ℓ) > 0    (A.2)

For "⇐" we obtain, using fact (A.1), that Mi,qk^{(q+1)k}(ℓ) > 0 for all q ≥ 1, and since

E[[Pi]]^{qk} env⊥M = E[[Pi]]^k env⊥M +M Mi,k^{2k} +M ··· +M Mi,(q−1)k^{qk}

it follows that the chain (E[[Pi]]^j env⊥M)(ℓ) converges to ∞. Conversely, for "⇒" it is immediate that Mi,(q−1)k^{qk}(ℓ) > 0 for some q (in fact for infinitely many values of q), and using fact (A.1) we get Mi,k^{2k}(ℓ) > 0.

We conclude by observing that FE^k(env⊥M)(Ai)(ℓ) ≠ FE^{2k}(env⊥M)(Ai)(ℓ) is equivalent to Mi,k^{2k}(ℓ) > 0. □
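The widening ∇M used in this proof can be realised concretely. The sketch below uses our own encoding of extended multisets as dictionaries defaulting to 0, with float('inf') for the infinite multiplicity ∞; this encoding is an assumption, not the paper's.

```python
# (M widen M')(l) = M(l) when M(l) = M'(l), and infinity otherwise;
# extended multisets are encoded as dicts defaulting to 0.
INF = float("inf")

def widen(M, M2):
    labels = set(M) | set(M2)
    return {l: (M.get(l, 0) if M.get(l, 0) == M2.get(l, 0) else INF)
            for l in labels}

# If k further unfoldings change the count of label 2, the widening
# jumps straight to infinity, cutting off the ascending chain:
print(sorted(widen({1: 1, 2: 1}, {1: 1, 2: 3}).items()))  # [(1, 1), (2, inf)]
```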



Proof of Lemma 19. The lemma states that envG = FG^k(env⊥T) where k is the number of recursively defined processes. Recall from (2) that we have numbers nij# ∈ {0, 1} and mappings Ti such that

G[[Pi]]env = (ni1# ·T env(A1)) ⊔T ··· ⊔T (nik# ·T env(Ak)) ⊔T Ti

holds for all env. For the mth unfolding (m ≥ 1) we may calculate

G[[Pi]]^m env = (ni1^{m} ·T env(A1)) ⊔T ··· ⊔T (nik^{m} ·T env(Ak)) ⊔T ((MAX_{p=1}^{m−1} ni1^{p}) ·T T1) ⊔T ··· ⊔T ((MAX_{p=1}^{m−1} nik^{p}) ·T Tk) ⊔T Ti

where

ni j^{1} = nij#
ni j^{m+1} = MAX_{p=1}^{k} nip# · npj^{m}

as can easily be verified by numerical induction on m. To prove the lemma it suffices to prove G[[Pi]]^m env⊥T ⊑T G[[Pi]]^k env⊥T for m ≥ k, and this amounts to proving (for m ≥ k) that

G[[Pi]]^m env⊥T (ℓ1)(ℓ2) > 0 ⇒ G[[Pi]]^k env⊥T (ℓ1)(ℓ2) > 0

The only interesting case is when nij^{r} > 0 and Tj(ℓ1)(ℓ2) > 0 for k ≤ r < m. Whenever this is the case there is a component n_{i p1}# n_{p1 p2}# ··· n_{p_{r−1} j}# > 0 and, writing p0 = i and pr = j, there exist 0 ≤ a < b ≤ r such that pa = pb; by deleting the subcomponent n_{pa pa+1}# ··· n_{p_{b−1} pb}# we obtain a shorter non-zero component. Continuing this process we eventually end up with a non-zero component involving some r′ "factors" for 1 ≤ r′ < k. This shows nij^{r′} > 0 and concludes the proof. □
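The coefficient calculation in this proof is essentially boolean matrix multiplication: the nij# form a k×k 0/1 matrix and n^{m+1} is its boolean "MAX-product" with n^{m}. The following sketch (our own encoding) illustrates the pigeonhole argument above: the set of non-zero entries seen in the powers n^{1}, ..., n^{m} stabilises after at most k steps.

```python
# Boolean "MAX-product" of 0/1 matrices, mirroring n{m+1} = n# * n{m}.
def bool_mat_mult(a, b):
    k = len(a)
    return [[max(min(a[i][p], b[p][j]) for p in range(k))
             for j in range(k)] for i in range(k)]

def nonzero_upto(n_hash, m):
    # union of the non-zero positions of n{1}, ..., n{m}
    seen, cur = set(), n_hash
    for _ in range(m):
        seen |= {(i, j) for i in range(len(cur))
                 for j in range(len(cur)) if cur[i][j] > 0}
        cur = bool_mat_mult(n_hash, cur)
    return seen

n_hash = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # a 3-cycle, k = 3
print(nonzero_upto(n_hash, 3) == nonzero_upto(n_hash, 6))  # True
```

Deleting a repeated-index subcomponent corresponds to shortening a path in the matrix's dependency graph, which is why no power beyond k contributes new non-zero positions.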



References

[1] Honda K, Vasconcelos VT, Kubo M. Language primitives and type discipline for structured communication-based programming. In: Programming languages and systems (ESOP). Lecture notes in computer science, vol. 1381. Berlin: Springer; 1998. p. 122–38.
[2] Neubauer M, Thiemann P. An implementation of session types. In: Practical aspects of declarative languages (PADL). Lecture notes in computer science, vol. 3057. Berlin: Springer; 2004. p. 56–70.
[3] Vasconcelos VT, Ravara A, Gay SJ. Session types for functional multithreading. In: CONCUR – concurrency theory. Lecture notes in computer science, vol. 3170. Berlin: Springer; 2004. p. 497–511.
[4] Bodei C, Degano P, Nielson F, Riis Nielson H. Static analysis for the π-calculus with applications to security. Information and Computation 2001;168:68–92.
[5] Braghin C, Cortesi A, Luccio FL, Focardi R, Piazza C. Nesting analysis of mobile ambients. Computer Languages, Systems and Structures 2004;30(3–4):207–30.
[6] Nielson F, Riis Nielson H, Hansen RR. Validating firewalls using flow logics. Theoretical Computer Science 2002;283(2):381–418.
[7] Riis Nielson H, Nielson F. Shape analysis for mobile ambients. Nordic Journal of Computing 2001;8:233–75.
[8] Riis Nielson H, Nielson F, Buchholtz M. Security for mobility. In: Focardi R, Gorrieri R, editors. Foundations of security analysis and design II. Lecture notes in computer science, vol. 2946. Berlin: Springer; 2004. p. 207–66.
[9] Bodei C, Buchholtz M, Degano P, Nielson F, Riis Nielson H. Static validation of security protocols. Journal of Computer Security 2005;13:347–90.
[10] Bodei C, Degano P, Riis Nielson H, Nielson F. Flow logic for Dolev–Yao secrecy in cryptographic processes. Future Generation Computer Systems 2002;18(6):747–56.
[11] Buchholz M, Riis Nielson H, Nielson F. A calculus for control flow analysis of security protocols. International Journal of Information Security 2004;2:145–67.
[12] Nanz S, Hankin C. A framework for security analysis of mobile wireless networks. Theoretical Computer Science 2006;367(1–2):203–27.
[13] Nielson F, Riis Nielson H, Seidl H. Cryptographic analysis in cubic time. Electronic Notes in Theoretical Computer Science 2002;62:7–23.
[14] Milner R. Communicating and mobile systems: the pi-calculus. Cambridge: Cambridge University Press; 1999.
[15] Cousot P, Cousot R. Systematic design of program analysis frameworks. In: Symposium on principles of programming languages (POPL). New York: ACM Press; 1979. p. 269–82.
[16] Nielson F, Riis Nielson H, Hankin CL. Principles of program analysis. Berlin: Springer; 1999 [second printing, 2005].
[17] Sagiv M, Reps T, Wilhelm R. Parametric shape analysis via 3-valued logic. In: Symposium on principles of programming languages (POPL). New York: ACM Press; 1999. p. 105–18.
[18] Riis Nielson H, Nielson F. Data flow analysis for CCS. In: Program analysis and compilation, theory and practice. Lecture notes in computer science, vol. 4444. Berlin: Springer; 2007. p. 311–27.
[19] Riis Nielson H, Nielson F. A flow-sensitive analysis of privacy properties. In: Proceedings of the computer security foundations symposium (CSF 2007). IEEE Computer Society; 2007. p. 249–64.
[20] Regev A, Panina EM, Silverman W, Cardelli L, Shapiro E. BioAmbients: an abstraction for biological compartments. Theoretical Computer Science 2004;325(1):141–67.
[21] Cardelli L, Gordon AD. Mobile ambients. Theoretical Computer Science 2000;240(1):177–213.
[22] Pilegaard H, Nielson F, Riis Nielson H. Pathway analysis for BioAmbients. Journal of Logic and Algebraic Programming 2008;77:92–130.
[23] Riis Nielson H, Nielson F, Pilegaard H. Spatial analysis of BioAmbients. In: Proceedings of SAS'04. Lecture notes in computer science. Berlin: Springer; 2004. p. 69–83.



[24] Nanz S, Nielson F, Riis Nielson H. Topology-dependent abstractions of broadcast networks. In: CONCUR 2007 – concurrency theory, 18th international conference. Lecture notes in computer science, vol. 4703. Berlin: Springer; 2007. p. 226–40.
[25] Bettini L, Bono V, De Nicola R, Ferrari G, Gorla D, Loreti M, Moggi M, Pugliese R, Tuosto E, Venneri B. The Klaim project: theory and practice. In: Proceedings of the IST/FET international workshop on global computing: programming environments, languages, security and analysis of systems (GC'03). Lecture notes in computer science, vol. 2874. Berlin: Springer; 2003.
[26] Nanz S, Nielson F, Riis Nielson H. Modal abstractions of concurrent behaviour. In: Proceedings of SAS'08. Lecture notes in computer science, vol. 5079. Berlin: Springer; 2008. p. 159–73.
[27] De Nicola R, Vaandrager FW. Action versus state based logics for transition systems. In: Proceedings of the LITP spring school on semantics of systems of concurrent processes. Lecture notes in computer science, vol. 469. Berlin: Springer; 1990. p. 407–19.