Improving time bounded reachability computations in interactive Markov chains


Hassan Hatefi *, Holger Hermanns
Saarland University, Department of Computer Science, Campus Saarbrücken, D-66123 Germany
* Corresponding author. E-mail addresses: hhatefi@cs.uni-saarland.de (H. Hatefi), [email protected] (H. Hermanns).
doi:10.1016/j.scico.2015.05.003

Article history: Received 9 June 2014; received in revised form 16 March 2015; accepted 14 May 2015.
Keywords: Interactive Markov chains; Time bounded reachability; Discretisation; Root finding

Abstract. Interactive Markov Chains (IMCs) are compositional behaviour models extending both Continuous Time Markov Chains (CTMCs) and Labelled Transition Systems (LTSs). They are used as semantic models in different engineering contexts, ranging from ultramodern satellite designs to industrial system-on-chip manufacturing. Different approximation algorithms have been proposed for model checking of IMCs, with time bounded reachability probabilities playing a pivotal role. This paper addresses the accuracy and efficiency of approximating time bounded reachability probabilities in IMCs, improving over the state of the art in both efficiency of computation and tightness of approximation. Moreover, a stable numerical approach, which provides an effective framework for implementation of the theory, is proposed. Experimental evidence demonstrates the efficiency of the new approach.

1. Introduction

Why IMCs? Over the last decade, a formal approach to quantitative performance and dependability evaluation of concurrent systems has gained maturity. At its root are continuous-time Markov chains (CTMCs), for which efficient and quantifiably precise solution methods exist [1]. On the specification side, continuous stochastic logic (CSL) [2,1] enables the specification of a large spectrum of performance and dependability measures. A CTMC can be viewed as a labelled transition system (LTS) whose transitions are delayed according to exponential distributions. As opposed to classical concurrency theory models, CTMCs neither support compositional modelling [3] nor do they allow nondeterminism in the model. Among several formalisms that overcome these limitations [4-7], interactive Markov chains (IMCs) [8] stand out. IMCs conservatively extend classical concurrency theory with exponentially distributed delays, and this induces several further benefits [9]. In particular, it enables compositional modelling with intermittent weak bisimulation minimisation [5] and allows existing untimed process algebra specifications to be augmented with random timing [4]. Moreover, the IMC formalism is not restricted to exponential delays but allows arbitrary phase-type distributions, such as hyper- and hypoexponentials, to be encoded [10]. Since IMCs smoothly extend classical LTSs, the model has received attention in academic as well as in industrial settings [11-14].

Why time bounded reachability? The principles of model checking IMCs are by now well understood. One analysis strand, implemented for instance in CADP [15], resorts to CSL model checking of CTMCs. But this is only applicable if the weak bisimulation quotient of the model is indeed a CTMC, which cannot always be guaranteed. This is therefore a partial solution


technique, albeit one that integrates well with compositional construction and minimisation approaches, and it is the one used in industrial applications.

The time bounded reachability problem has been successfully addressed [16-18] for Continuous-Time Markov Decision Processes (CTMDPs) with respect to the class of so-called late schedulers. These schedulers can change their decision while residing in a state. CTMDPs are very closely related to IMCs, but are not compositional. In this paper we instead focus on early schedulers, which fix a decision upon arriving at a state and are unable to change that decision afterwards. Early schedulers arise naturally from compositional model construction using IMCs (the only schedulers that can be defined for IMCs are early, whereas both early and late schedulers are definable for CTMDPs). Notably, the adaptation of optimal time bounded reachability computation for CTMDPs under late schedulers to IMCs under early schedulers has not been studied. Therefore the existing techniques for CTMDPs are not directly applicable to IMCs.

The approximate CSL model checking problem for IMCs has been solved by Neuhäusser and Zhang [19,20]. Most of the solution resorts to untimed model checking [21]. The core innovation lies in the solution of the time bounded model checking problem, which is needed to quantify a bounded until formula subject to a (real-valued) time interval. The problem is solved by splitting the time interval into equally sized discretisation steps, each small enough to carry at most one Markov transition with high probability. However, the practical efficiency and accuracy of this approach to evaluating time bounded reachability probabilities turns out to be substantially inferior to the one known for CTMCs, and this limits its applicability to real industrial cases. As a consequence, model checking algorithms for other, less precise, but still highly relevant properties have been coined [22], including expected reachability and long run average properties.

Our contribution. We revisit the approximation of time bounded reachability probabilities so as to arrive at an improved computational approach. For this, we generalise the discretisation approach of Neuhäusser and Zhang [19,20] by considering the effect of multiple Markov transition firings in each discretisation step. We show that this can be exploited by a tighter error bound, and thus a more accurate computation. We put the theoretical improvement into practice by proposing a stable numerical approach, which provides ε-optimal time bounded reachability together with its corresponding scheduler (choice decisions in the model) for IMCs. Empirical results demonstrate that the improved algorithm can gain more than one order of magnitude speedup.

Organisation of the paper. In Section 2 we define IMCs and provide the notation and concepts that we use throughout the paper. Section 3 explains the state-of-the-art method for time bounded reachability computations on IMCs. We introduce the theoretical foundation of our improvement in Section 4. Section 5 develops a stable numerical approach with a strict error bound for the implementation. Section 6 reports on an empirical evaluation of our method applied to an industrial case study. Finally, Section 7 concludes the paper. This paper is an extended version of a conference paper that appeared in FSEN 2013 [23]. All proofs are included as Appendices A-D. Section 5 contains novel contributions on a stable numerical approximation scheme, and this implies that the experimental results presented in Section 6 are revised in their entirety.
2. Interactive Markov chain

An Interactive Markov Chain (IMC) is a model that generalises both CTMCs and LTSs. In this section, we provide the definition of IMCs and the necessary concepts relating to them.

Definition 1 (IMC). An IMC [5] is a tuple M = (S, Act, −→, ⇒, s0), where S is a finite set of states, Act is a set of actions, including the internal invisible action τ, −→ ⊂ S × Act × S is the set of interactive transitions, ⇒ ⊂ S × R≥0 × S is the set of Markov transitions, and s0 is the initial state.

Maximum progress vs. urgency. States of an IMC are partitioned into interactive, Markov and hybrid states. Interactive (Markov) states have only interactive (Markov) outgoing transitions, while hybrid states have transitions of both types. Let S_I, S_M and S_H be the sets of interactive, Markov and hybrid states, respectively. An IMC might have states without any outgoing transition. For the purpose of this paper, any such state is turned into a Markov state by adding a self loop with an arbitrary rate. Depending on whether or not they can interact with the environment, IMCs are classified into closed and open models. An open IMC can interact with the environment and, in particular, can be composed with other IMCs, e.g. via parallel composition. For such models, a maximal progress assumption [5] is imposed, which implies that τ-transitions take precedence over Markov transitions whenever both are enabled in a state. In contrast, a closed IMC is not subject to any further communication and composition. In this paper, we assume that the models we analyse are closed, and impose the stronger urgency assumption, which means that any interactive transition has precedence over Markov transitions. In other words, interactive transitions are taken immediately whenever enabled in a state, leaving no chance for enabled Markov transitions. Consequently, in a closed IMC, hybrid states can be regarded as interactive states.
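For illustration, the following Python sketch encodes Definition 1 and the closed-model state classification. The encoding is our own; the concrete transitions are loosely modelled on Fig. 1, where only the rates used in Example 1 are fixed by the text, so the remaining transitions (targets of s2 and s3, the action labels) are guessed assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IMC:
    """A sketch of Definition 1: M = (S, Act, -->, =>, s0)."""
    states: frozenset
    interactive: frozenset   # interactive transitions (s, alpha, s')
    markov: frozenset        # Markov transitions (s, rate, s')
    s0: str

    def interactive_states(self):
        # Closed IMC + urgency: hybrid states behave like interactive ones.
        return {s for (s, _, _) in self.interactive}

    def markov_states(self):
        return {s for (s, _, _) in self.markov} - self.interactive_states()

# Rates of s1 match Example 1 (E(s1) = 2 + 3 = 5); E(s3) = 5 matches Example 2.
# All other structure below is an illustrative guess, since Fig. 1 is not shown.
M = IMC(
    states=frozenset({"s0", "s1", "s2", "s3"}),
    interactive=frozenset({("s0", "a", "s1"), ("s2", "tau", "s1")}),
    markov=frozenset({("s0", 0.5, "s2"), ("s1", 2.0, "s0"),
                      ("s1", 3.0, "s3"), ("s3", 5.0, "s2")}),
    s0="s0",
)
assert M.markov_states() == {"s1", "s3"}   # s0 is hybrid, hence interactive
```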



Fig. 1. An exemplary IMC.

Branching probabilities. A (probability) distribution μ over a discrete set S is a function μ : S → [0, 1] such that Σ_{s∈S} μ(s) = 1. If μ(s) = 1 for some s ∈ S, μ is a Dirac distribution, denoted μ_s. Let Dist(S) be the set of all distributions over set S. For uniformity of notation, we use a distinguished action ⊥ ∉ Act to indicate Markov transitions and extend the set of actions to Act_⊥ = Act ∪· {⊥}, where ∪· denotes disjoint union. For s ∈ S, we define Act_⊥(s) as the set of enabled actions in state s. If s is a Markov state, Act_⊥(s) = {⊥}, otherwise Act_⊥(s) = {α | (s, α, s′) ∈ −→}. The rate between states s and s′ is defined by rate(s, s′) = Σ_{(s,λ,s′)∈⇒} λ. Then E(s) = Σ_{s′∈S} rate(s, s′) denotes the sum of outgoing rates of state s. Using these concepts, we define the branching probability matrix for both interactive and Markov states by

  P(s, α, s′) =
    1                   if s ∈ S_I ∧ (s, α, s′) ∈ −→,
    rate(s, s′)/E(s)    if s ∈ S_M ∧ α = ⊥,
    0                   otherwise.

Example 1. Let M be the IMC in Fig. 1. There are two Markov states, s1 and s3, and one interactive state, s2. The initial state s0 is a hybrid state, since it has both interactive and Markov transitions enabled. Considering M as a closed IMC, the urgency assumption allows us to ignore (s0, 0.5, s2) ∈ ⇒ and to consider s0 as an interactive state. Under this assumption, interactive transitions are fired instantaneously, after zero time delay. Conversely, the sojourn time in a Markov state s is exponentially distributed with rate E(s). For example, the probability to leave s1 within δ time units is 1 − e^{−5δ} (E(s1) = 2 + 3 = 5). At this point, branching probabilities determine the distribution over the next states. For s1, P(s1, ⊥, s0) = 2/5 and P(s1, ⊥, s3) = 3/5; as a result, the probabilities to move to s0 and s3 after spending at most δ time units in s1 are (2/5)(1 − e^{−5δ}) and (3/5)(1 − e^{−5δ}), respectively.
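The quantities of Example 1 can be reproduced mechanically. The following sketch computes E(s), the branching probabilities P(s1, ⊥, ·) and the interval-bounded jump probabilities; only the two Markov transitions of s1 that the text fixes are used.

```python
import math
from collections import defaultdict

# Markov transitions of Fig. 1 relevant to Example 1: s1 --2--> s0, s1 --3--> s3.
markov = [("s1", 2.0, "s0"), ("s1", 3.0, "s3")]

def exit_rate(s):
    """E(s): sum of outgoing rates of Markov state s."""
    return sum(lam for (u, lam, _) in markov if u == s)

def branching(s):
    """P(s, bottom, .) = rate(s, s') / E(s) for a Markov state s."""
    E = exit_rate(s)
    dist = defaultdict(float)
    for (u, lam, v) in markov:
        if u == s:
            dist[v] += lam / E
    return dict(dist)

delta = 0.1
E = exit_rate("s1")                    # E(s1) = 5
leave = 1.0 - math.exp(-E * delta)     # P(sojourn in s1 <= delta) = 1 - e^{-5 delta}
P = branching("s1")                    # {'s0': 0.4, 's3': 0.6}
print({v: p * leave for v, p in P.items()})   # 2/5 (1-e^{-5d}) and 3/5 (1-e^{-5d})
```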

Behavioural aspects. As in other transition systems, an execution in an IMC is described by a path. Formally, a finite path is a finite sequence π = s0 −(t0,α0)→ s1 ⋯ s_{n−1} −(t_{n−1},α_{n−1})→ s_n with α_i ∈ Act_⊥, t_i ∈ R≥0, i = 0 ⋯ n−1. We use |π| = n as the length of π and last(π) = s_n as the last state of π. Each step of path π describes how the IMC evolves from one state to the next, recording which action is taken and what the state sojourn time is. For example, when the IMC is in an interactive state s ∈ S_I, it must immediately (in zero time) choose some action α ∈ Act_⊥(s) and move to a state s′. This gives rise to the finite path s −(0,α)→ s′. On the other hand, if s ∈ S_M, the IMC can stay for t > 0 time units and then choose the next state s′ based on the distribution P(s, ⊥, ·), giving s −(t,⊥)→ s′. An infinite path specifies an infinite execution of an IMC. We use Paths* and Paths^ω to denote the sets of finite and infinite paths, respectively. By dropping the sojourn times from a path, we obtain a time-abstract path. We use the subscript ta to refer to the sets of time-abstract finite and infinite paths (i.e. Paths*_ta and Paths^ω_ta).

Resolving nondeterminism. In states with more than one interactive transition, the resolution of which transition to take is nondeterministic, just as in the LTS setting. This nondeterminism is resolved by schedulers. The most general scheduler class maps a finite and possibly timed path to a distribution over the set of interactive transitions enabled in the last state of the path:

Definition 2 (Generic scheduler). A generic scheduler over IMC M = (S, Act, −→, ⇒, s0) is a function A : Paths* → Dist(−→), where the support of A(π) is a subset of ({last(π)} × Act × S) ∩ −→ and last(π) ∈ S_I.

For a finite path π, a scheduler specifies how to resolve nondeterminism in the last state of π, which is an interactive state: it gives a distribution over the set of enabled transitions of last(π). Let Gen refer to the most general class of such schedulers that are still measurable. Following the definition of schedulers, the probability measure can be uniquely defined over the σ-algebra on Paths^ω. For this we use the notation Pr^ω_{A,s}(Π), which denotes the probability measure of a set of paths Π


⊆ Paths^ω, when scheduler A is fixed and s is the initial state. For more details on measurability, the σ-algebra and the probability measure, see [19,24].

Non-zenoness. Owing to the presence of immediate state changes, an IMC might exhibit Zeno behaviour, where infinitely many interactive transitions are taken in finite or zero time. This is an unrealistic phenomenon, characterised by an infinite path π on which the time spent does not diverge. Such a path is called a Zeno path. To exclude such unrealistic phenomena, we restrict our attention to models where the probability of Zeno behaviour is zero. This means that ∀A ∈ Gen, ∀s ∈ S: Pr^ω_{A,s}(Π_{<∞}) = 0, where Π_{<∞} is the set of all Zeno paths. This condition implies that, starting from any interactive state, we must eventually reach some Markov state with probability one, no matter which decisions are taken by the scheduler. In the remainder of this paper, we therefore restrict ourselves to such models.

3. Time bounded reachability

CSL model checking of time bounded until properties plays a pivotal role in the quantitative evaluation of IMCs. It can be reduced to time bounded reachability analysis by the well-known technique of making target states absorbing [25]. This section reviews the current state of the art [19,20] in solving time bounded reachability problems for IMCs. Section 4 will discuss how we can improve upon it.

Fixed point characterisation. We first discuss the fixed point characterisation of the maximum probability to reach a set of goal states within an interval of time. For this, let I and Q be the sets of all nonempty nonnegative real intervals with real and rational bounds, respectively. For I ∈ I and t ∈ R≥0, we define I ⊖ t = {x − t | x ∈ I ∧ x ≥ t}. If I ∈ Q and t ∈ Q≥0, then I ⊖ t ∈ Q. Given an IMC M, a time interval I ∈ I and a set of goal states G ⊆ S, the set of all paths that reach the goal states within interval I is denoted by ♦^I G. Let p^M_max(s, ♦^I G) be the maximum probability of reaching the goal states within interval I when starting in state s at time 0. In formal terms, it is the supremum, ranging over all possible Gen schedulers, of the probability measures of the induced paths: p^M_max(s, ♦^I G) = sup_{A∈Gen} Pr^ω_{A,s}(♦^I G). The next lemma recalls a characterisation of p^M_max(s, ♦^I G) as a fixed point. That of p^M_min(s, ♦^I G) is dealt with similarly.

Lemma 1 (Fixed point characterisation for IMCs). (See [19, Theorem 6.1].) Let M be an IMC, G ⊆ S a set of goal states and I ∈ I with inf I = a and sup I = b. The maximum time bounded reachability p^M_max : S × I → [0, 1] is the least fixed point of the higher-order operator Ω : (S × I → [0, 1]) → (S × I → [0, 1]), which is:

1. For s ∈ S_M:

  Ω(F)(s, I) =
    ∫_0^b E(s)e^{−E(s)t} Σ_{s′∈S} P(s, ⊥, s′) F(s′, I ⊖ t) dt                 if s ∉ G,
    ∫_0^a E(s)e^{−E(s)t} Σ_{s′∈S} P(s, ⊥, s′) F(s′, I ⊖ t) dt + e^{−E(s)a}    if s ∈ G.

2. For s ∈ S_I:

  Ω(F)(s, I) =
    1                               if s ∈ G ∧ 0 ∈ I,
    max_{(s,α,s′)∈−→} F(s′, I)      otherwise.

Interactive probabilistic chain. The above characterisation provides an integral equation system for the maximum time interval bounded reachability probability. However, the system is in general not algorithmically tractable [25]. To circumvent this problem, the fixed point characterisation of an IMC can be approximated by a discretisation approach [19,20]. Intuitively, the time interval is split into equally sized sub-intervals called discretisation steps. It is assumed that each discretisation step is small enough that, with high probability, it carries at most one Markov transition firing. Under this assumption the IMC is discretised into an Interactive Probabilistic Chain (IPC) [13] by summarising the behaviour of the IMC at equidistant time points.

Definition 3 (IPC). An IPC is a tuple D = (S, Act, −→, ⇒_d, s0), where S, Act, −→ and s0 are as in Definition 1, and ⇒_d ⊂ S × Dist(S) is the set of discrete Markov transitions.

A discrete Markov transition specifies with which probability a state evolves to its successors after taking one time step. The notion of discrete Markov transitions resembles the one-step transition matrix of Discrete Time Markov Chains (DTMCs). The concepts of closed and open models carry over to IPCs. As there is no notion of continuous time, paths in IPCs can be seen as time-abstract paths in IMCs, implicitly still counting discretisation steps, and thus discrete time.

Discretisation from IMC to IPC. We now recall the discretisation that turns an IMC into an IPC. Afterwards, we explain how reachability computation in an IMC can be approximated by analysis of its induced IPC, for which a proven error bound exists.


Fig. 2. Discretisation of IMC.

Definition 4 (Discretisation). (See [19].) Given an IMC M = (S, Act, −→, ⇒, s0) and a discretisation constant δ, M_δ = (S, Act, −→, ⇒_δ, s0) is the IPC induced from M by discretisation with respect to δ, with ⇒_δ = {(s, μ_s) | s ∈ S_M}, where

  μ_s(s′) =
    (1 − e^{−E(s)δ}) · P(s, ⊥, s′)                  if s′ ≠ s,
    (1 − e^{−E(s)δ}) · P(s, ⊥, s′) + e^{−E(s)δ}     if s′ = s.

The discretisation in Definition 4 approximates the original model by assuming that at most one Markov transition of M fires in each step of length δ. For each Markov state s, the distribution μ_s specifies the probability of moving to the successors of s within one step of length δ by taking either one or no Markov transition in M. The discretisation is illustrated by the following example.

Example 2. We discretise the IMC depicted in Fig. 1 with respect to a discretisation constant δ > 0. First, assuming the IMC is closed, all of its actions are relabelled into the internal action τ. Furthermore, by applying the urgency assumption, the Markov transition (s0, 0.5, s2) is omitted from the model. The induced IPC with respect to δ is shown in Fig. 2. In the model, the probability of leaving s1 or s3 within δ time units is denoted by p = 1 − e^{−5δ}.

Using the fixed point characterisation above, it is possible to relate reachability analysis in an IMC to reachability analysis in its induced IPC [19], together with an error bound. We recall the result here:

Theorem 2 (Error bound). (See [19].) Given an IMC M = (S, Act, −→, ⇒, s0), a set of goal states G ⊆ S, a time interval I ∈ Q such that a = inf I and b = sup I with 0 ≤ a < b, and λ = max_{s∈S_M} E(s), assume the discretisation step δ > 0 is selected such that b = k_b δ and a = k_a δ for some k_a, k_b ∈ N. For all s ∈ S it holds:

  p^{M_δ}_max(s, ♦^{(k_a,k_b]} G) − k_a (λδ)²/2 ≤ p^M_max(s, ♦^I G) ≤ p^{M_δ}_max(s, ♦^{(k_a,k_b]} G) + k_b (λδ)²/2 + λδ

For the proof of Theorem 2 see [19, Theorem 6.5].

Time bounded computation in IPC. We briefly review the computation of maximum time bounded reachability in IPCs [20]. At its core, a modified value iteration algorithm is employed. Given an IPC, a set of goal states and a step bound, the algorithm proceeds iteratively in two alternating phases. In the first phase, the reachability probabilities from all interactive states are computed, by selecting the maximum among the reachability probabilities of the Markov states that are reachable from each interactive state. The second phase updates the reachability probability of each Markov state by taking one discrete step, analogously to reachability computation in DTMCs. The algorithm iterates until the last time step has been processed. For more details about the algorithm we refer to [19,20].
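To make the two-phase iteration concrete, here is a minimal sketch that combines Definition 4 with the iteration just described. The model fragment (successor sets, goal set, step count) is an illustrative assumption following Fig. 1/Fig. 2, not the implementation of [20]; for simplicity, interactive states here have Markov states as direct successors, so one maximisation per phase suffices.

```python
import math

markov = {"s1": [(2.0, "s0"), (3.0, "s3")], "s3": [(5.0, "s2")]}
interactive_succ = {"s0": ["s1"], "s2": ["s3"]}   # assumed structure
G = {"s3"}
delta, k_b = 0.01, 100                            # step length and step count

def mu(s):
    """One-step distribution of Definition 4 for Markov state s."""
    E = sum(lam for lam, _ in markov[s])
    p_move = 1.0 - math.exp(-E * delta)
    dist = {s: math.exp(-E * delta)}              # no Markov transition fires
    for lam, v in markov[s]:
        dist[v] = dist.get(v, 0.0) + p_move * lam / E
    return dist

val = {s: (1.0 if s in G else 0.0) for s in ["s0", "s1", "s2", "s3"]}
for _ in range(k_b):
    # Phase 1: interactive states inherit the best reachable value.
    for s, succ in interactive_succ.items():
        if s not in G:
            val[s] = max(val[v] for v in succ)
    # Phase 2: Markov states take one discrete step, as in a DTMC.
    new = dict(val)
    for s in markov:
        if s not in G:
            new[s] = sum(p * val[v] for v, p in mu(s).items())
    val = new
# A final interactive phase so that s0 reflects the last Markov update.
for s, succ in interactive_succ.items():
    if s not in G:
        val[s] = max(val[v] for v in succ)
print(val["s0"])   # approximates reachability of G within k_b * delta time units
```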


4. Improving time bounded reachability computation

In this section, we generalise the previously discussed technique for computing maximum time bounded reachability. As before, we approximate the fixed point characterisation of IMCs using a discretisation technique. However, instead of at most one, we generally consider at most n Markov transition firings in each discretisation step, for n an arbitrary natural number. This enables us to establish a tighter error bound, as increasing n allows us to choose a larger discretisation constant δ without compromising the original error bound. A larger discretisation constant implies fewer iterations, thus speeding up the overall runtime of the algorithm.

Higher-order approximation. When developing an approximation of order n for the maximum reachability probability, we first restrict ourselves to intervals with zero lower bounds. We write Reach_{−→*}(u) for the set of all Markov states that are reachable from state u by taking an arbitrary (possibly zero) number of interactive transitions from u; in case u is a Markov state, Reach_{−→*}(u) = {u}. Formally, Reach_{−→*}(u) = {v ∈ S_M : u −→* v}, where −→* denotes the transitive closure of the interactive transitions. We then define the n-th order approximation of maximal time bounded reachability as follows.

Definition 5. Given an IMC M = (S, Act, −→, ⇒, s0), a set of goal states G ⊆ S and an interval I ∈ Q such that inf I = 0 and sup I = b, assume the discretisation step δ > 0 is selected such that b = k_b δ for some k_b ∈ N. We denote the n-th order approximation of maximal time bounded reachability by p^{M_δ(n)}_max(s, ♦^I G) and define it as p^{M_δ(n)}_max(s, ♦^I G) = 1 if s ∈ G, and for s ∈ S \ G:

  p^{M_δ(n)}_max(s, ♦^I G) =
    ∫_0^δ E(s)e^{−E(s)t} Σ_{s′∈S} P(s, ⊥, s′) A^{M_δ(n)}_{I,n−1}(s′, δ − t) dt + e^{−E(s)δ} p^{M_δ(n)}_max(s, ♦^{I⊖δ} G)    if s ∈ S_M \ G,
    max_{s′∈Reach_{−→*}(s)} p^{M_δ(n)}_max(s′, ♦^I G)                                                                      if s ∈ S_I \ G,

and for 0 ≤ k < n and 0 ≤ ε ≤ δ:

  A^{M_δ(n)}_{I,k}(s, ε) =
    ∫_0^ε E(s)e^{−E(s)t} Σ_{s′∈S} P(s, ⊥, s′) A^{M_δ(n)}_{I,k−1}(s′, ε − t) dt + e^{−E(s)ε} p^{M_δ(n)}_max(s, ♦^{I⊖δ} G)    if s ∈ S_M \ G ∧ k > 0,
    p^{M_δ(n)}_max(s, ♦^{I⊖δ} G)                                                                                            if s ∈ S_M \ G ∧ k = 0,
    max_{s′∈Reach_{−→*}(s)} A^{M_δ(n)}_{I,k}(s′, ε)                                                                         if s ∈ S_I \ G.

Intuitively, A^{M_δ(n)}_{I,k}(s, ε) for 0 < k < n is the maximum probability of reaching some goal state from state s by taking at most k Markov transitions within the first ε time units, under the condition that each discretisation step of length δ that comes afterwards carries up to n Markov transitions. For k = 0 the term simply equals p^{M_δ(n)}_max(s, ♦^{I⊖δ} G). This is the point where we cut the integration, assuming that the probability of further Markov jumps is negligible. Higher orders of approximation represent the behaviour of the original model more faithfully, thus leading to a better error bound. Note that the first-order approximation, p^{M_δ(1)}_max, coincides with the approximation scheme developed in [19] and discussed in Section 3. Theorem 3 quantifies the quality of this approximation.

Theorem 3. Given an IMC M = (S, Act, −→, ⇒, s0), a set of goal states G ⊆ S and an interval I ∈ Q with inf I = 0, sup I = b and λ = max_{s∈S_M} E(s), assume the discretisation step δ > 0 is selected such that b = k_b δ for some k_b ∈ N, and let n > 0 be the order of approximation. For all s ∈ S it holds:

  p^{M_δ(n)}_max(s, ♦^I G) ≤ p^M_max(s, ♦^I G) ≤ p^{M_δ(n)}_max(s, ♦^I G) + 1 − e^{−λb} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_b}

The core insight behind the proposed error bound is the following. Intuitively, we can view the error as the probability that more than n Markov transitions fire inside at least one of the discretisation steps. Due to the independence of the numbers of Markov transition firings in the individual discretisation steps, this probability can be bounded from above by considering k_b Poisson distributions with parameter λδ, each quantifying the number of Markov transition firings inside the corresponding step. Let #_{≤n} be the event of taking at most n Markov transitions in a given step. For the Poisson distribution, the probability of #_{≤n} is e^{−λδ} Σ_{i=0}^{n} (λδ)^i / i!. Hence the probability of having at least one step in which the event #_{≤n} does not happen is 1 − e^{−λb} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_b}.

It is worthwhile to note that open and closed intervals of type (0, b] and [0, b] are treated in the same manner by Theorem 3. They lead to the same fixed point computation of time bounded reachability, in contrast to bounded until [26]. We can directly extend Definition 5 to intervals with non-zero lower bounds and adapt Theorem 3 accordingly.

Theorem 4. Given an IMC M = (S, Act, −→, ⇒, s0), a set of goal states G ⊆ S and an interval I ∈ Q with inf I = a > 0, sup I = b > a and λ = max_{s∈S_M} E(s), assume the discretisation step δ > 0 is selected such that a = k_a δ and b = k_b δ for some k_a, k_b ∈ N, and let n > 0 be the order of approximation. For all s ∈ S it holds:

  p^{M_δ(n)}_max(s, ♦^I G) − (1 − e^{−λa} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_a}) ≤ p^M_max(s, ♦^I G) ≤ p^{M_δ(n)}_max(s, ♦^I G) + 1 − e^{−λb} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_b}
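The bounds of Theorems 3 and 4 are directly computable. The following sketch evaluates the upper error term of Theorem 3 for a few orders n; the parameter values are arbitrary illustrations, not taken from the paper.

```python
import math

def error_bound(lam, b, k_b, n):
    """Theorem 3 upper error: 1 - e^{-lam*b} * (sum_{i<=n} (lam*d)^i / i!)^k_b."""
    d = b / k_b
    partial = sum((lam * d) ** i / math.factorial(i) for i in range(n + 1))
    return 1.0 - math.exp(-lam * b) * partial ** k_b

lam, b, k_b = 5.0, 1.0, 1000
for n in (1, 2, 3):
    print(n, error_bound(lam, b, k_b, n))
# For fixed k_b the bound shrinks rapidly as n grows, so a given accuracy can be
# met with a larger step delta, i.e. with fewer iterations.
```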


Fig. 3. An IMC subject to analysis.

The proofs of Theorems 3 and 4 are tedious; they basically follow and generalise the proofs of [19, Theorem 6.3] and [19, Theorem 6.4]. We provide the proof of Theorem 3 in Appendix A and discuss how it can be extended to Theorem 4. It is worth noting that the discretisation error decreases when the discretisation step δ decreases or when the order of approximation n increases. Thus, the error vanishes as n goes to infinity or δ goes to zero.

5. A numerical approach

This section develops a stable numerical calculation approach that harvests the theoretical advances established for the second-order approximation. We focus on the second-order approximation because its implementation is relatively simple, while, as it will turn out, the performance improvement is already significant. The approach essentially follows the recursive computation offered by Definition 5 in a setting where n = 2. The crucial part is then to compute A^{M_δ(2)}_{I,1}, since it involves estimating the points at which the optimal scheduler decision changes. We refer to them as switching points. The next example shows how the second-order approximation can be computed.

Example 3. The IMC shown in Fig. 3a has one goal state, i.e. G = {s4}. Assume that the discretisation step δ is set to 0.5, and that we want to approximate the maximum reachability probability from state s0 to state s4 within one time step, i.e. p^{M_δ(2)}_max(s0, ♦^I G) with I = [0, δ]. It can easily be seen that p^{M_δ(2)}_max(s4, ♦^{[0,0]} G) = 1, and that it is zero for all other states. Therefore, writing ε ∈ [0, δ] for the remaining time within the step:

  p^{M_δ(2)}_max(s0, ♦^I G) = ∫_0^δ 2e^{−2t} A^{M_δ(2)}_{I,1}(s1, δ − t) dt
  A^{M_δ(2)}_{I,1}(s1, ε) = max{ A^{M_δ(2)}_{I,1}(s2, ε), A^{M_δ(2)}_{I,1}(s5, ε) }
  A^{M_δ(2)}_{I,1}(s2, ε) = A^{M_δ(2)}_{I,1}(s3, ε)
  A^{M_δ(2)}_{I,1}(s3, ε) = ∫_0^ε e^{−t} p^{M_δ(2)}_max(s4, ♦^{[0,0]} G) dt + e^{−ε} p^{M_δ(2)}_max(s3, ♦^{[0,0]} G) = 1 − e^{−ε}
  A^{M_δ(2)}_{I,1}(s5, ε) = ∫_0^ε 3e^{−3t} ( (2/5) p^{M_δ(2)}_max(s4, ♦^{[0,0]} G) + (3/5) p^{M_δ(2)}_max(s6, ♦^{[0,0]} G) ) dt + e^{−3ε} p^{M_δ(2)}_max(s5, ♦^{[0,0]} G) = (2/5)(1 − e^{−3ε})

The actual outcome crucially depends on the function t ↦ A^{M_δ(2)}_{I,1}(s1, δ − t). Since two Markov states, s3 and s5, are reachable from s1, the maximum of A^{M_δ(2)}_{I,1}(s3, δ − t) and A^{M_δ(2)}_{I,1}(s5, δ − t) over the domain t ∈ [0, δ] needs to be derived. This in turn requires determining any switching points between the two functions, because at these points the maximum switches from one function to the other. In the case considered here, it can be shown that there is only one switching point, namely at t* = δ + ln((√7 − 1)/2). Therefore the final result can be obtained by integration over the two sub-intervals [0, t*] and [t*, 0.5]:

  p^{M_δ(2)}_max(s0, ♦^I G) = ∫_0^{t*} 2e^{−2t} A^{M_δ(2)}_{I,1}(s3, δ − t) dt + ∫_{t*}^{δ} 2e^{−2t} A^{M_δ(2)}_{I,1}(s5, δ − t) dt    (1)

This result is exactly computable up to machine precision, since an analytic solution of the integrals in Eq. (1) exists.
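The switching point of Example 3 can be checked numerically. The sketch below encodes the two closed-form functions derived above and verifies that they intersect exactly at t* = δ + ln((√7 − 1)/2); the choice of names is ours.

```python
import math

delta = 0.5

def A_s3(eps):   # A^{M_delta(2)}_{I,1}(s3, eps) = 1 - e^{-eps}
    return 1.0 - math.exp(-eps)

def A_s5(eps):   # A^{M_delta(2)}_{I,1}(s5, eps) = 2/5 (1 - e^{-3 eps})
    return 0.4 * (1.0 - math.exp(-3.0 * eps))

t_star = delta + math.log((math.sqrt(7.0) - 1.0) / 2.0)   # ~0.3051
eps = delta - t_star
assert abs(A_s3(eps) - A_s5(eps)) < 1e-12                  # the curves cross here
# Before t*, the s3-branch is optimal; after t*, the s5-branch is, matching Eq. (1):
print(A_s3(delta), A_s5(delta))    # at t = 0: 0.393... > 0.310..., s3-branch wins
```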


Algorithm 1: Extracting all sub-intervals, each containing a switching point.

Input: D^{u1,u2}_{I,δ}(t)
Output: the set of sub-intervals, each containing a simple root of D^{u1,u2}_{I,δ}

begin
  eI ← ∅;
  if p_{u1} = p′_{u1} ∨ p_{u2} = p′_{u2} ∨ E(u1) = E(u2) then
    if D^{u1,u2}_{I,δ}(0) · D^{u1,u2}_{I,δ}(δ) < 0 then eI ← {(0, δ)};
  else
    r^{(1)} = δ − (1 / (E(u1) − E(u2))) ln( E(u1)(p_{u1} − p′_{u1}) / (E(u2)(p_{u2} − p′_{u2})) );
    if r^{(1)} ∈ (0, δ) ∧ D^{u1,u2}_{I,δ}(r^{(1)}) ≠ 0 then
      if D^{u1,u2}_{I,δ}(0) · D^{u1,u2}_{I,δ}(r^{(1)}) < 0 then eI ← eI ∪ {(0, r^{(1)})};
      if D^{u1,u2}_{I,δ}(r^{(1)}) · D^{u1,u2}_{I,δ}(δ) < 0 then eI ← eI ∪ {(r^{(1)}, δ)};
    else if r^{(1)} ∉ (0, δ) then
      if D^{u1,u2}_{I,δ}(0) · D^{u1,u2}_{I,δ}(δ) < 0 then eI ← eI ∪ {(0, δ)};
    end
  end
  return eI;
end
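A direct transcription of Algorithm 1 into Python may clarify the case analysis. The function below is a sketch under the notation of Eq. (3): p_i and pp_i stand for p_{u_i} and p′_{u_i} and are passed in explicitly; it is not production code.

```python
import math

def switch_intervals(E1, E2, p1, pp1, p2, pp2, delta):
    """Bracket the simple roots of D(t) on (0, delta), following Algorithm 1.

    D(t) = p1*e^{-E1(delta-t)} + pp1*(1 - e^{-E1(delta-t)})
         - p2*e^{-E2(delta-t)} - pp2*(1 - e^{-E2(delta-t)})     (cf. Eq. (3))
    """
    def D(t):
        e1 = math.exp(-E1 * (delta - t))
        e2 = math.exp(-E2 * (delta - t))
        return p1 * e1 + pp1 * (1 - e1) - p2 * e2 - pp2 * (1 - e2)

    intervals = []
    if p1 == pp1 or p2 == pp2 or E1 == E2:
        # D is monotone on (0, delta): at most one simple root (IVT check).
        if D(0) * D(delta) < 0:
            intervals.append((0.0, delta))
        return intervals
    arg = (E1 * (p1 - pp1)) / (E2 * (p2 - pp2))
    if arg <= 0:                        # stationary point r1 does not exist
        if D(0) * D(delta) < 0:
            intervals.append((0.0, delta))
        return intervals
    r1 = delta - math.log(arg) / (E1 - E2)      # root of D', cf. Eq. (B.2), m = 1
    if 0 < r1 < delta and D(r1) != 0:           # D monotone on each side of r1
        if D(0) * D(r1) < 0:
            intervals.append((0.0, r1))
        if D(r1) * D(delta) < 0:
            intervals.append((r1, delta))
    elif not 0 < r1 < delta:
        if D(0) * D(delta) < 0:
            intervals.append((0.0, delta))
    return intervals

# Example 3 instance (s3 vs. s5): the single bracket contains t* ~ 0.3051.
print(switch_intervals(1.0, 3.0, 0.0, 1.0, 0.0, 0.4, 0.5))
```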

General solution. We now explain the general solution for computing p^{M_δ(2)}_max(s, ♦^I G) for s ∈ S_M and I = [0, b], b ∈ R≥0. Let succ(s) denote the set of direct successors of s. The procedure runs as follows. For each s′ ∈ succ(s), the switching points among the set of functions A_{s′} = { A^{M_δ(2)}_{I,1}(u, δ − t) | u ∈ Reach_{−→*}(s′) } in the interval (0, δ) are computed. Let d^s_1 < d^s_2 < ⋯ < d^s_{l_s} be the sequence of all those switching points. Moreover, let u^{s′}_i be the state that attains the maximum among the functions in A_{s′} within the interval [d^s_{i−1}, d^s_i], i.e. u^{s′}_i = argmax_{u∈Reach_{−→*}(s′), t∈[d^s_{i−1}, d^s_i]} A^{M_δ(2)}_{I,1}(u, δ − t), with i = 1 … l_s + 1, d^s_0 = 0 and d^s_{l_s+1} = δ. Then we have:

  p^{M_δ(2)}_max(s, ♦^I G) = Σ_{i=1}^{l_s+1} ∫_{d^s_{i−1}}^{d^s_i} E(s)e^{−E(s)t} Σ_{s′∈S} P(s, ⊥, s′) A^{M_δ(2)}_{I,1}(u^{s′}_i, δ − t) dt    (2)

Notably, Eq. (2) has a closed-form solution in the general case and is thus accurately computable up to machine precision, provided the switching points are given. But contrary to Example 3, in which the only switching point is exactly computable, this is not the case in general. We therefore need a technique to estimate the switching points approximately.

Root finding. Given two states u1, u2 ∈ S, we would like to compute the switching points between A^{M_δ(2)}_{I,1}(u1, δ − t) and A^{M_δ(2)}_{I,1}(u2, δ − t) in the interval (0, δ). The switching points are the simple roots of the difference function

  D^{u1,u2}_{I,δ}(t) = A^{M_δ(2)}_{I,1}(u1, δ − t) − A^{M_δ(2)}_{I,1}(u2, δ − t)
                     = p_{u1} e^{−E(u1)(δ−t)} + p′_{u1} (1 − e^{−E(u1)(δ−t)}) − p_{u2} e^{−E(u2)(δ−t)} − p′_{u2} (1 − e^{−E(u2)(δ−t)})    (3)

where p_{u_i} = p^{M_δ(2)}_max(u_i, ♦^{I⊖δ} G) and p′_{u_i} = Σ_{s∈S} P(u_i, ⊥, s) p^{M_δ(2)}_max(s, ♦^{I⊖δ} G), for i = 1, 2. Algorithm 1 splits the interval (0, δ) into sub-intervals on each of which the function D^{u1,u2}_{I,δ}(t) is monotone and has one simple root. Lemma 5 shows that there is a one-to-one correspondence between the set of simple roots of D^{u1,u2}_{I,δ}(t) in the interval (0, δ) and the set of sub-intervals returned by Algorithm 1: r ∈ (0, δ) is a simple root of D^{u1,u2}_{I,δ}(t) iff there is a unique sub-interval I′ ∈ eI returned by Algorithm 1 such that r ∈ I′. Note that D^{u1,u2}_{I,δ} can in general have up to two simple roots in the interval (0, δ). The algorithm uses some properties of D^{u1,u2}_{I,δ} to first identify sub-intervals of (0, δ) on which D^{u1,u2}_{I,δ} is monotone, and then applies the intermediate value theorem to check whether or not each of them contains a simple root.

Lemma 5. Each simple root of D^{u1,u2}_{I,δ}(t) in the interval (0, δ) is enclosed in a unique sub-interval returned by Algorithm 1, on which D^{u1,u2}_{I,δ}(t) is monotone.

After determining all sub-intervals containing roots, any root finding algorithm can be applied to compute them. Once all roots are found, they are plugged into Eq. (2) so as to compute the resulting probability. Although the computation of Eq. (2), as stated before, can be carried out exactly, imprecision arises from the fact that root finding is not exact. However, most root finding algorithms can make the error arbitrarily small by controlling the precision. To exemplify this, we instantiate the root finding algorithm with the bisection method and provide an error analysis accordingly.
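For instance, a plain bisection on a bracket returned by Algorithm 1 looks as follows; after c steps the bracket width is (hi − lo)/2^c, which is what the error analysis of Lemma 6 below relies on. The usage line re-derives the switching point of Example 3 and is purely illustrative.

```python
import math

def bisect(D, lo, hi, c):
    """c bisection steps on a sign-changing bracket [lo, hi]; the final bracket
    has width (hi - lo) / 2**c, which Lemma 6 turns into an error bound."""
    for _ in range(c):
        mid = (lo + hi) / 2.0
        if D(lo) * D(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Example 3: D(t) = A(s3, delta - t) - A(s5, delta - t) on the bracket (0, r1).
D = lambda t: (1 - math.exp(-(0.5 - t))) - 0.4 * (1 - math.exp(-3 * (0.5 - t)))
print(bisect(D, 0.0, 0.4088, 30))   # ~0.3051 = 0.5 + ln((sqrt(7) - 1) / 2)
```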


Lemma 6. The error arising in the computation of Eq. (2) when a single root is approximated by c bisection steps, in a model with maximum exit rate λ, is bounded by λ²δ²/2^{2c}.

Lemma 6 bounds the error of evaluating Eq. (2) when a single root is estimated by c step(s) of the bisection method. We can generalise this result to the case of more roots and further embed it into Theorem 3 so as to analyse the total error. The next theorem provides the error bound for the computation of time bounded reachability using the second-order approximation when the bisection method is applied for root finding.

Theorem 7. Given an IMC M = (S, Act, −→, ⇒, s0), a set of goal states G ⊆ S and an interval I ∈ Q with inf I = 0, sup I = b, let λ = max_{s∈S_M} E(s) and f = max_{s∈S_I} |Reach_{−→*}(s)|, and assume the discretisation step δ > 0 is selected such that b = k_b δ for some k_b ∈ N. Moreover, let p̃^{M_δ(2)}_max be the approximation of p^{M_δ(2)}_max when c step(s) of the bisection method are used for root finding in each discretisation step. Then, for all s ∈ S it holds:

  p̃^{M_δ(2)}_max(s, ♦^I G) ≤ p^M_max(s, ♦^I G) ≤ p̃^{M_δ(2)}_max(s, ♦^I G) + 1 − e^{−λb} (Σ_{i=0}^{2} (λδ)^i / i!)^{k_b} + k_b f(f − 1) λ²δ² / 2^{2c}

Complexity and efficiency. The key innovation of this approach lies in both the precision and the efficiency of the computation. Following Theorems 3 and 4, the number of iterations required to guarantee accuracy level ε can be calculated by determining the least k_b such that 1 − e^{−λb} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_b} ≤ ε. This inequality, however, does not have a closed-form solution with respect to k_b. Routine calculus allows us to derive that 1 − e^{−λb} (Σ_{i=0}^{n} (λδ)^i / i!)^{k_b} ≤ k_b (λδ)^{n+1} / (n+1)!, which is tight in our setting, since λδ is very small. Thus, we instead consider the inequality k_b (λδ)^{n+1} / (n+1)! ≤ ε, which leads to k_b ≥ λb ( λb / ((n+1)! ε) )^{1/n}. This shows how the number of iterations required to achieve a predefined accuracy level decreases as the order of approximation n increases. In other words, using higher-order approximations gives the same error bound in fewer iterations.

To shed some light on this, we compare the complexity of the original first-order approximation with that of the second-order approximation implemented by the numerical approach discussed in this section. Given the setting of Theorem 7 and accuracy level ε, assume N = |S| and M = |−→| + |⇒|. Computing the set Reach_{−→*}(s) for the interactive states can be done by computing the transitive closure of the interactive transition relation, which is reducible to Boolean matrix multiplication [27]. The complexity of that problem is O(N^ω), where currently ω < 2.3727 [28]. Following the computation above, the number of iterations for the second-order approximation is O( λb (λb/ε)^{1/2} ). Since the size of Reach_{−→*}(s) for any state s is at most f, the complexity of computing reachability from interactive states in each iteration is O(f N). Moreover, the complexity of the root finding algorithm is c and the number of roots to be computed for a given state is O(f²). Therefore the complexity of computing the probabilities of the Markov states in each iteration is O(c f² M). Finally, the overall complexity is O( N^ω + (c f² M + f N) λb (λb/ε)^{1/2} ). The complexity of the first-order approximation, in contrast, is O( N^ω + (M + f N) (bλ)²/ε ), since no root finding mechanism is involved in that algorithm. We observe that the per-iteration complexity of the second-order approximation is higher. Nevertheless this is a negligible disadvantage, since in most actual models both the maximum fan-out f and the number of switching points are small. At the same time, the number of iterations (the respective last terms) is much smaller. Therefore the efficiency of the second-order approximation compares favourably to the original first-order approximation, at least in theory. In the next section we compare both algorithms in practice.
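Before turning to the experiments, the following sketch instantiates the closed-form iteration bound k_b ≥ λb (λb/((n+1)! ε))^{1/n} derived above for the first- and second-order approximation; the parameter values are arbitrary illustrations.

```python
import math

def iterations(lam, b, eps, n):
    """Closed-form bound on the number of discretisation steps k_b needed for
    accuracy eps with the order-n approximation (derived in the text)."""
    return math.ceil(lam * b * (lam * b / (math.factorial(n + 1) * eps)) ** (1.0 / n))

lam, b, eps = 5.0, 1.0, 1e-3
k1 = iterations(lam, b, eps, 1)    # first order:  k_b >= (lam*b)^2 / (2 eps)
k2 = iterations(lam, b, eps, 2)    # second order: k_b >= lam*b * sqrt(lam*b / (6 eps))
print(k1, k2, k1 / k2)             # e.g. 12500 vs. 145 steps, a ~86x reduction
```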

6. Experimental results

This section reports on empirical results with an implementation that harvests the theoretical advances established above, by applying the stable numerical approach explained in Section 5. To show the efficiency of our proposed approach, we also compare it with the original first-order approximation.

Case study. As a case study we consider a replicated file system as it is used as part of the Google search engine [29]. The IMC specification is available as an example of the IMCA tool [30]. The Google File System (GFS) splits files into chunks of equal size, maintained by several chunk servers. If a user wants to access a chunk, it asks a master server, which stores the addresses of all chunks. Then the user can directly access the appropriate chunk server to read/write the chunk. The model has three parameters: N_cs is the number of chunk servers, C_s is the number of chunks a server may store, and C_t is the total number of chunks.

Evaluation. We set C_s = 5000 and C_t = 100 000 and vary the number of chunk servers N_cs. The goal set G comprises the states in which the master server is up and there is at least one copy of each chunk available. For different model instances under various accuracy levels ε, the minimum and maximum probabilities to reach some state in G within different time intervals I are computed using both the first- and the second-order approximation. The former has been implemented in the IMCA tool [30], and our implementation is derived from it.


Table 1. Reachability computation times for the Google file system.

N_cs  |S|     |G|   ε      I         First-order                             Second-order
                                     p_min     Time    p_max     Time        p_min     Time   p_max     Time
20    7176    1713  10^-2  [0, 0.5]  0.313532  252     0.997521  260         0.313531  70     0.997521  67
20    7176    1713  10^-2  [0, 1.0]  0.731680  1005    0.999993  998         0.731667  187    0.999993  190
20    7176    1713  10^-2  [0, 1.5]  0.890834  2493    0.999999  2398        0.890836  346    0.999999  344
20    7176    1713  10^-3  [0, 0.5]  0.313539  2558    0.997521  2488        0.313535  217    0.997521  213
20    7176    1713  10^-3  [0, 1.0]  0.731681  9830    0.999999  9982        0.731685  596    0.999999  595
20    7176    1713  10^-3  [0, 1.5]  0.890834  22 935  0.999999  22 528      0.890836  1107   0.999999  1116
30    16 156  3918  10^-2  [0, 0.5]  0.529759  2357    0.997521  2357        0.529759  297    0.997521  286
30    16 156  3918  10^-4  [0, 0.1]  0.019006  9377    0.698805  9374        0.019009  298    0.698816  279
40    28 736  7023  10^-3  [0, 0.2]  0.190369  12 784  0.909281  12 784      0.190362  781    0.909275  706
40    28 736  7023  10^-4  [0, 0.1]  0.049221  32 288  0.698805  31 744      0.049223  861    0.698812  766

All experiments were conducted on a single core of an AMD Athlon II X4 620 processor with 8 GB RAM, running Linux. The computation times of both algorithms under different parameter settings, together with the reachability probabilities, are reported in Table 1. Apart from the number of chunk servers, the size of the state space (|S|) and the number of goal states (|G|) are shown for each model instance.

As is to be expected, the second-order algorithm generally takes less time to guarantee a given accuracy ε. The computation times improve considerably, with a relative speedup of up to 40, depending on the parameters. A few more detailed observations are worth noting. We see the relative speedup increasing with increased precision and time bound. This is indeed to be expected from our abstract complexity analysis. More precisely, in the first-order approximation, increasing the time bound by a factor g requires scaling the number of iterations by a factor g², assuming all other parameters fixed. For the second-order approximation this factor is g√g. This appears to be reflected in the results of the table: e.g. from time bound 0.5 to 1.5, the computation times of the first- and the second-order approximation increase roughly by factors 9 and 3√3, respectively. A similar observation holds for precision: increasing the precision by a factor g should result in an increase of the number of iterations by a factor g, respectively √g, for the first-order, respectively the second-order, approximation. The reported results confirm this. For instance, from precision 10^-2 to 10^-3, the computation times of the first-order and the second-order approximation increase roughly by factors 10 and √10, respectively. We thus conclude that the number of iterations is indeed a good metric for assessing the computation time of both algorithms.

Another important observation is that the actual precision provided by the algorithms is almost always far better than the corresponding ε. Of course, this varies across models, but for large enough time bounds we get considerably better accuracy than what we asked for. The reason is that as the time bound tends to infinity, the reachability probability in the IMC approaches that in its induced IPC, and thereby the error vanishes. Unfortunately, neither of the algorithms takes this fact into consideration. We consider this a topic for future work.

7. Conclusions

This paper has presented an improvement for the computation of time bounded reachability in IMCs, based on previous work [20]. It employs a discretisation technique together with a stable numerical approach and a strict error bound. The improvement rests on allowing at most n > 1 Markov transitions to fire in each discretisation step, where previously n = 1 was assumed. In practice, setting n = 2 already provides a much tighter error bound, and thus saves considerable computation time. We have demonstrated the effectiveness of the approach in our empirical evaluation, with speedups of more than one order of magnitude.

Lately, model checking of open IMCs has been studied, where the IMC is considered to be placed in an unknown environment that may delay or influence the IMC behaviour via synchronisation [31,32]. That approach resorts to the approximation scheme laid out in [20], which we have improved upon in the present paper. Therefore, our improvement directly carries over to the open setting. As future work, we intend to further generalise the proposed algorithm to Markov automata [33-35].
Acknowledgements

We thank Lijun Zhang for helpful discussions and comments, and Dennis Guck for developing and sharing the original implementation of the algorithm and the case study as part of IMCA. This work is partly supported by the DFG as part of SFB/TR 14 AVACS, by the EU Seventh Framework Programme under grant agreements No. 295261 (MEALS) and 318490 (SENSATION), and by the CAS-SAFEA International Partnership Program for Creative Research Teams.

Appendix A. Proof of Theorems 3 and 4

First we focus on the restricted setting where I = [0, b]. Following Lemma 1, p^M_max(s, ♦^I G) for s ∈ S_M \ G can be rewritten [19, Section 6.3.1] as

JID:SCICO AID:1894 /FLA

[m3G; v1.159; Prn:28/07/2015; 11:45] P.11 (1-17)

H. Hatefi, H. Hermanns / Science of Computer Programming ••• (••••) •••–•••

I pM max (s, ♦ G ) =



E (s)e− E (s)t

11

I t I δ P(s, ⊥, s ) p M G ) dt + e− E (s)δ p M G) max (s , ♦ max (s, ♦

(A.1)

s ∈ S

0

The following holds due to Definition 5 for s ∈ S M \ G: Mδ (n)

p max

δ I

(s, ♦ G ) =

E (s)e− E (s)t

s ∈ S

0

M (n)

M (n)

− E (s)δ δ P(s, ⊥, s )A I ,n− p maxδ 1 (s , δ − t ) dt + e

(s, ♦ I δ G )

(A.2)

We have to prove that: M (n)

p maxδ

n

(λδ)i kb Mδ (n) I I −λb (s, ♦ I G ) ≤ p M ( s , ♦ G ) ≤ p ( s , ♦ G ) + ( 1 − e ( ) ) max max i! i =0

In the following, we prove the upper bound of the approximation. For the proof of the lower bound we refer to [19, Lemma 6.6]. Proof. The proof is by induction over kb : 1. kb = 1, therefore I = [0, δ]. We consider two cases: a. s ∈ S M \ G: Let δ be the set of paths that reach G within δ time unit. In the approximation we measure the set of paths that have at most two Markovian jumps and then reach G. Let this set be denoted by δ≤2 . Since we have

δ = δ≤2 ∪· δ>2 ( δ≤2 and δ>2 are disjoint), we have: δ ω δ ω δ Prω A ,ν ( ) − Pr A ,ν ( ≤2 ) = Pr A ,ν ( >2 )

δ The probability Prω A ,ν ( >2 ) can be bounded by the probability of more than two arrivals in a Poisson distribution with the parameter equals to the largest exit rate appearing in the IMC multiplied with δ . For the Poisson distribution,

n

(λδ)i

this probability is 1 − e−λδ ( i =0 i ! ). b. s ∈ S I \ G: This case reduces to case 1.a as follows. We have I pM max (s, ♦ G ) =

M (n)

p maxδ

max



→ (s) s ∈Reach−

(s, ♦ I G ) =

I pM max (s , ♦ G )

max s ∈Reach



M (n)

− → (s)

p maxδ

(s , ♦ I G )

(A.3)

I M I Eq. (A.3) implies that there exists s ∈ S M such that p M max (s, ♦ G ) = p max (s , ♦ G ). Because s is a Markov state, the upper bound for s is deployed to s. 2. kb − 1 ; kb : We assume the upper bound holds for kb − 1:

M (n)

[0,(kb −1)δ] pM G ) ≤ p maxδ max (s, ♦

n

(λδ)i kb −1 (s, ♦[0,(kb −1)δ] G ) + 1 − e−λ(kb −1)δ ( ) i!

(A.4)

i =0

We consider two cases: a. s ∈ S M \ G: From Eqs. (A.1) and (A.2) we have: M (n)

I δ pM max (s, ♦ G ) − p max

δ (s, ♦ I G ) = 0

E (s)e− E (s)t

s ∈ S

M (n)

I t δ P(s, ⊥, s )( p M G ) − A I ,n− max (s , ♦ 1 (s , δ − t )) dt M (n)

I δ + e− E (s)δ ( p M G ) − p maxδ (s, ♦ I δ G )) max (s, ♦

(A.5) M (n)

δ I t In order to establish an upper bound for Eq. (A.5), we find an upper bound for p M G ) − A I ,n− max (s , ♦ 1 ( s , δ − t ). We claim that for s ∈ S:

M (n)

I (δ− ) pM G ) − A I ,k δ max (s, ♦

n k

(λδ)i kb −1 (λ )i (s, ) ≤ 1 − e−λ((kb −1)δ+ ) ( ( ) ) i! i! i =0

We can prove the claim by induction on k:

i =0

(A.6)

JID:SCICO AID:1894 /FLA

[m3G; v1.159; Prn:28/07/2015; 11:45] P.12 (1-17)

H. Hatefi, H. Hermanns / Science of Computer Programming ••• (••••) •••–•••

12

(i) k = 0: We distinguish between two cases, first we assume s ∈ S M M (n)

I (δ− ) pM G ) − A I ,0 δ max (s, ♦

M (n) I (δ− ) (s, ) = p M G ) − p maxδ (s, ♦ I δ G ) max (s, ♦

 =

E (s)e− E (s)t

I (δ− +t ) P(s, ⊥, s ) p M G ) dt max (s , ♦

s ∈ S

0

M (n)

I δ + e− E (s) p M G ) − p maxδ (s, ♦ I δ G ) max (s, ♦

 =

E (s)e− E (s)t

0

I (δ− +t ) P(s, ⊥, s ) p M G ) dt max (s , ♦

s ∈ S

 Mδ (n) I δ I δ + e− E (s) p M ( s , ♦ G ) − p ( s , ♦ G ) max max

I δ δ (n) − (1 − e− E (s) ) p M G) max (s, ♦

(A.7)

Following from the fact that



E (s)e− E (s)t

I (δ− +t ) P(s, ⊥, s ) p M G ) dt ≤ 1 − e− E (s) max (s , ♦

s ∈ S

0

and plugging that and Eq. (A.4) into Eq. (A.7) gives us: M (n)

I (δ− ) pM G ) − A I ,0 δ max (s, ♦

n

(λδ)i kb −1 (s, ) ≤ 1 − e− E (s) + e− E (s) (1 − e−λ(kb −1)δ ( ) ) i! i =0

n

(λδ)i kb −1 ) = 1 − e− E (s) e−λ(kb −1)δ ( i! i =0

n

(λδ)i kb −1 ≤ 1 − e−λ((kb −1)δ+ ) ( ) i!

(A.8)

i =0

In the converse case s ∈ S I , we derive from Eq. (A.3) and Definition 5: ∗

→ (s) : p M (s, ♦ I (δ− ) G ) = p M (s , ♦ I (δ− ) G ), ∃s , s ∈ Reach− max max M (n)

A I ,0 δ

δ (n) (s, ) = AM (s , ) I ,0

This fact leads to: M (n)

I (δ− ) pM G ) − A I ,0 δ max (s, ♦

M (n) I (δ− ) (s, ) = p M G ) − A I ,0 δ (s , ) max (s , ♦

(∗)

M (n) I (δ− ) ≤ pM G ) − A I ,0 δ (s , ) max (s , ♦ n

(λδ)i kb −1 ≤ 1 − e−λ((kb −1)δ+ ) ( ) i! i =0

M (n)

M (n)

In the above, (∗) follows from the fact that A I ,0 δ (s , ) ≤ A I ,0 δ (ii) k − 1 ; k: We assume that ∀s ∈ S the following holds:

(s , ).

n

(λδ)i

M (n)

I (δ− ) δ pM G ) − A I ,k− (s, ) ≤ 1 − e−λ((kb −1)δ+ ) ( max (s, ♦ 1

i =0

i!

)kb −1

We show that, assuming Eq. (A.6), Eq. (A.9) holds. First we consider s ∈ S M : M (n)

I (δ− ) pM G ) − A I ,k δ max (s, ♦

 ≤ 0

E (s)e− E (s)t

s ∈ S

(s, )

I (δ− +t ) P(s, ⊥, s ) p M G) max (s , ♦

k −1

(λ )i i =0

i!

(A.9)

JID:SCICO AID:1894 /FLA

[m3G; v1.159; Prn:28/07/2015; 11:45] P.13 (1-17)

H. Hatefi, H. Hermanns / Science of Computer Programming ••• (••••) •••–•••

13

 Mδ (n) M (n) I δ −A I ,k− ( s , − t ) dt + e− E (s) ( p M G ) − p maxδ (s, ♦ I δ G )) max (s, ♦ 1  ≤

E (s)e− E (s)t



P(s, ⊥, s ) 1 − e−λ((kb −1)δ+ −t )

n (λδ)i kb −1

s ∈ S

0

·

k −1

(λ( − t ))i 

i!

i =0

 =1−

i =0



dt + e− E (s) 1 − e−λ(kb −1)δ

n  (λδ)i kb −1  i =0

n

(λδ)i

E (s)e− E (s)t e−λ((kb −1)δ+ −t ) (

i =0

0

i!

i!

)kb −1

i!

k −1

(λ( − t ))i i =0

i!

dt

n

(λδ)i kb −1 + e− E (s) e−λ(kb −1)δ ( ) i! i =0

n

(λδ)i kb −1 = 1 − e−λ(kb −1)δ ( ) i! i =0 ⎤ ⎡  k −1 i

(λ( − t )) · ⎣ E (s)e− E (s)t e−λ( −t ) dt + e− E (s) ⎦ i! i =0 0   

(A.10)

F ( E (s))

In the following we show that F ( E (s)) takes its minimum value at E (s) = λ. To arrive there we show that F (r ) is a monotonically decreasing function in interval 0 ≤ r ≤ λ. For this, we consider the derivative of F (r ) and show that it is nonpositive:





F (r ) =

re−rt e−λ( −t )

k −1

(λ( − t ))i i =0

0



(1 − tr )e−rt e−λ( −t )

=

dt − e−r

k −1

(λ( − t ))i

i!

i =0

0

(∗)

i!

dt − e−r



(1 − tr )e−rt dt − e−r



0

=

1 r

1

(1 − e−r ) + e−r − (1 − e−r ) − e−r = 0 r

In the above, (∗) follows from the fact that: k −1

(λ( − t ))i i =0

i!





(λ( − t ))i i =0

i!

= eλ( −t ) ⇒ e−λ( −t )

k −1

(λ( − t ))i

i!

i =0

≤1

Therefore:

 F ( E (s)) ≥ F (λ) =

λe−λt e−λ( −t )

i =0

0

 = 0

k −1

(λ( − t ))i

λe−λ

k −1

(λ( − t ))i i =0

i!

Putting the above equation into Eq. (A.10) yields:

i!

dt + e−λ

dt + e−λ = e−λ

k

(λ )i i =0

i!

JID:SCICO AID:1894 /FLA

[m3G; v1.159; Prn:28/07/2015; 11:45] P.14 (1-17)

H. Hatefi, H. Hermanns / Science of Computer Programming ••• (••••) •••–•••

14

M (n)

I (δ− ) pM G ) − A I ,k δ max (s, ♦

n

(λδ)i kb −1 (s, ) ≤ 1 − e−λ(kb −1)δ ( F (λ) ) i! i =0

n k

(λδ)i kb −1 (λ )i = 1 − e−λ((kb −1)δ+ ) ( ( ) ) i! i! i =0

i =0

For interactive states we follow the same reasoning as in case (a) which extends the bound to s ∈ S I . From Eq. (A.3) and Definition 5: ∗

→ (s) : p M (s, ♦ I (δ− ) G ) = p M (s , ♦ I (δ− ) G ), ∃s , s ∈ Reach− max max M (n)

A I ,0 δ

M (n)

(s, ) = A I ,0 δ

(s , )

This fact gives us: M (n)

I (δ− ) pM G ) − A I ,k δ max (s, ♦

M (n)

(s , )

M (n)

(s , )

I (δ− ) (s, ) = p M G ) − A I ,k δ max (s , ♦ (∗)

I (δ− ) ≤ pM G ) − A I ,k δ max (s , ♦

n

(λδ)i kb −1 ≤ 1 − e−λ((kb −1)δ+ ) ( ) i! i =0

M (n) M (n) In the above, (∗) follows from the fact that A I ,k δ (s , ) ≤ A I ,k δ (s , ). [0,kb δ] G ) − p Mδ (n) (s, ♦[0,kb δ] G ) is directly obtained by putting k Note that the upper bound for p M max max (s, ♦

= n and = δ in Eq. (A.6). b. s ∈ S I \ G: From the previous case we know that the bound holds for s ∈ S M \ G. From Eq. (A.3) and Definition 5: ∗

→ (s) : p M (s, ♦ I G ) = p M (s , ♦ I G ), ∃s , s ∈ Reach− max max M (n)

p maxδ

M (n)

(s, ♦ I G ) = p maxδ (s , ♦ I G )

This fact provides us with the following: M (n)

I δ pM max (s, ♦ G ) − p max

M (n)

I I δ (s, ♦ I G ) = p M max (s , ♦ G ) − p max (s , ♦ G ) (∗)

M (n)

I I δ ≤ pM max (s , ♦ G ) − p max (s , ♦ G )

n

(λδ)i kb ≤ 1 − e−λb ( ) i! i =0

Mδ (n)

In the above, (∗) follows from the fact that p max

δ (n) (s , ♦ I G ) ≤ p M ( s , ♦ I G ). max

I This proof can directly be extended to other types of intervals. Assume that I = (0, b], then for all s ∈ S M : p M max (s, ♦ G ) = [0,b] G ). Also for s ∈ S we have: pM ( s , ♦ I max



→ (s) : p M (s, ♦ I G ) = p M (s , ♦ I G ) = p M (s , ♦[0,b] G ) = p M (s, ♦[0,b] G ) ∃s ∈ Reach− max max max max The other types of intervals can be handled in a similar way.

2

A similar proof structure can be employed to show the correctness of the bounds provided by Theorem 4. Given an interval with non-zero lower bound, e.g. [a, b], we can apply a slightly similar technique sequentially from b to a and from a to 0. Many of the details described above are directly applicable for the proof. However, we leave the proof because of space limitation. Appendix B. Proof of Lemma 5 u ,u 2

1 Proof. We start the proof by stressing some useful properties of function D I ,δ

u ,u 2

1 Remark 1. We remark that D I ,δ properties:

(t ) in interval [0, δ].

(t ) described in Eq. (3) is a smooth function in the interval [0, δ] with the following

JID:SCICO AID:1894 /FLA

[m3G; v1.159; Prn:28/07/2015; 11:45] P.15 (1-17)

H. Hatefi, H. Hermanns / Science of Computer Programming ••• (••••) •••–•••

15

• The m-th derivative of DuI ,δ1 ,u 2 (t ) is expressed as follows: u ,u 2

1 dm D I ,δ

(t )

dt m

= ( p u 1 − p u 1 ) E (u 1 )m e− E (u 1 )(δ−t ) − ( p u 2 − p u 2 ) E (u 2 )m e− E (u 2 )(δ−t )

(B.1)

• The root of the m-th derivative of DuI ,δ1 ,u 2 (t ) whenever existing follows from:

E (u 1 )m ( p u − p )  u1 1 1 r (m) = δ − E (u )− ln E ( u ) 1 2 E (u 2 )m ( p u 2 − p u 2 )

(B.2)

• Providing E (u 1 ) = E (u 2 ), ( p u 1 − p u 1 )( p u2 − p u 2 ) > 0 and m1 = m2 , it holds that r (m1 ) = r (m2 ) . In particular, r (1) = r (2) if the assumptions hold, which means that the stationary point is a simple root of the first derivative. Hence the function is monotonic before and after the stationary point and the sign of its first derivative changes at the stationary point. Thus the function can have at most two simple roots, one less and one greater than r (1) .

Now we consider different cases of Algorithm 1: a. E (u 1 ) = E (u 2 ) ∨ ( p u 1 − p u 1 )( p u2 − p u 2 ) ≤ 0: As a result of Eq. (B.1) the function is monotonic. Excluding the case when u ,u

1 2 it is constant, i.e. D I ,δ (t ) = 0, by intermediate value theorem it has a simple root in interval (0, δ) if and only if u 1 ,u 2 u 1 ,u 2 D I ,δ (0)D I ,δ (δ) < 0. b. E (u 1 ) = E (u 2 ) ∧ ( p u 1 − p u 1 )( p u2 − p u 2 ) > 0: In this case the first derivative has a simple root described in Eq. (B.2). Following from Remark 1 the function is monotonic before and after the stationary point. Depending on the stationary point we consider the following cases: u 1 ,u 2 (1) (r ) = 0 then r (1) is the only root – r (1) ∈ (0, δ): In this case we first check the value of the function at r (1) . If D I ,δ of the function which is not simple, since it is a stationary point as well. Therefore it does not represent a switching point. Otherwise, there are two sub-intervals (0, r (1) ) and (r (1) , δ) where the function is monotonic in both of them. The condition of the intermediate value theorem is then individually checked on both of them as explained in case a. / (0, δ): In this case the function is monotonic in interval (0, δ) and thereby contains a simple root if and only if – r (1) ∈ u 1 ,u 2 the condition of intermediate value theorem holds, i.e. D I ,δ (0)DuI ,δ1 ,u 2 (δ) < 0. 2

Appendix C. Proof of Lemma 6

Proof. The lemma bounds the error arising from the estimation of a single switching point by performing $c$ bisection step(s). Consider a state $s \in S_M$ and its successor state $s' \in S_I$, and let $t^*$ and $\tilde{t}$ be the exact and the approximate switching points, respectively. Assume further, without loss of generality, that $t^* \le \tilde{t}$, and let $u_1, u_2 \in \mathit{Reach}^{\rightarrow}(s')$ be the optimal Markov states reachable from $s'$ before and after $t^*$, respectively. Due to the approximate estimation of $t^*$, state $u_1$ is incorrectly considered as the optimal Markov state inside the interval $[t^*, \tilde{t}\,]$. In consequence, from Eq. (2) the following error emerges:

$$\mathrm{error} \;\le\; \int_{t^*}^{\tilde{t}} E(s)\, e^{-E(s)t} \Big( A^{\mathcal{M}_\delta(2)}_{I,1}(u_2, \delta - t) - A^{\mathcal{M}_\delta(2)}_{I,1}(u_1, \delta - t) \Big)\, dt \;=\; \int_{t^*}^{\tilde{t}} E(s)\, e^{-E(s)t}\, D^{u_2,u_1}_{I,\delta}(t)\, dt.$$

From the mean value theorem, for every $t \in [t^*, \tilde{t}\,]$ there exists some $t' \in [t^*, t]$ such that

$$\frac{d D^{u_2,u_1}_{I,\delta}(t')}{dt} = \frac{D^{u_2,u_1}_{I,\delta}(t) - D^{u_2,u_1}_{I,\delta}(t^*)}{t - t^*} = \frac{D^{u_2,u_1}_{I,\delta}(t)}{t - t^*}, \tag{C.1}$$

where the second equality holds because $D^{u_2,u_1}_{I,\delta}(t^*) = 0$ at the switching point $t^*$. From routine calculus it can be shown that $\big| \frac{d D^{u_2,u_1}_{I,\delta}(t)}{dt} \big| \le 2\lambda$ for $t \ge 0$. Therefore, from Eq. (C.1) it holds for $t \in [t^*, \tilde{t}\,]$ that $D^{u_2,u_1}_{I,\delta}(t) \le 2\lambda (t - t^*)$. It then gives

$$\mathrm{error} \;\le\; \int_{t^*}^{\tilde{t}} E(s)\, e^{-E(s)t}\, D^{u_2,u_1}_{I,\delta}(t)\, dt \;\le\; \int_{t^*}^{\tilde{t}} E(s)\, e^{-E(s)t} \cdot 2\lambda (t - t^*)\, dt$$


$$\le\; \int_{t^*}^{\tilde{t}} \lambda \cdot 2\lambda (t - t^*)\, dt \;=\; \int_{t^*}^{\tilde{t}} 2\lambda^2 (t - t^*)\, dt \;=\; \lambda^2 (\tilde{t} - t^*)^2 \;\le\; \frac{\lambda^2 \delta^2}{2^{2c}},$$

where the first inequality uses $E(s)\, e^{-E(s)t} \le E(s) \le \lambda$, and the last inequality follows from the absolute error bound of $c$ bisection step(s) in an interval of length $\delta$, namely $\tilde{t} - t^* \le \delta/2^c$. □
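As a quick sanity check of this bound, one can compute the smallest number of bisection steps $c$ that pushes the per-switching-point error below a target $\varepsilon$. The following Python helper is illustrative only (its name and interface are ours, not from the paper):

import math

def bisection_steps_for(eps, lam, delta):
    """Smallest c such that lam**2 * delta**2 / 2**(2*c) <= eps, i.e. the
    number of bisection steps that keeps the error contributed by a single
    approximated switching point (Lemma 6) below eps.  Here lam is the
    maximal exit rate and delta the discretisation step; eps must be > 0."""
    # lam^2 * delta^2 / 4^c <= eps  <=>  c >= 0.5 * log2(lam^2 * delta^2 / eps)
    return max(0, math.ceil(0.5 * math.log2((lam * delta) ** 2 / eps)))

For instance, with $\lambda = 2$, $\delta = 0.01$ and $\varepsilon = 10^{-10}$ this yields $c = 11$, since $\lambda^2\delta^2/2^{22} \approx 9.5 \cdot 10^{-11}$.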

Appendix D. Proof of Theorem 7

Proof. We start the proof by showing the lower bound. During the computation of $\tilde{p}^{\mathcal{M}_\delta(2)}_{\max}$, some switching points may be estimated only approximately by the bisection method. The approximation can only make the objective (maximal reachability) worse, because it may wrongly consider sub-optimal actions as being optimal. Therefore we have $\tilde{p}^{\mathcal{M}_\delta(2)}_{\max}(s, \Diamond^I G) \le p^{\mathcal{M}_\delta(2)}_{\max}(s, \Diamond^I G)$ for all $s \in S$, and then, following from Theorem 3, it holds that $\tilde{p}^{\mathcal{M}_\delta(2)}_{\max}(s, \Diamond^I G) \le p^{\mathcal{M}}_{\max}(s, \Diamond^I G)$.

For the proof of the upper bound we use the result of Lemma 6. Note that the lemma is valid only for a single switching point inside a discretisation step; however, the number of switching points in a discretisation step is bounded by $f(f-1)$. Since there are $k_b$ discretisation step(s), we have

$$p^{\mathcal{M}_\delta(2)}_{\max}(s, \Diamond^I G) \;\le\; \tilde{p}^{\mathcal{M}_\delta(2)}_{\max}(s, \Diamond^I G) + k_b\, f(f-1)\, \frac{\lambda^2 \delta^2}{2^{2c}}. \tag{D.1}$$

Finally, the upper bound follows from Eq. (D.1) and Theorem 3. □
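To see the quantitative effect of Eq. (D.1), the following Python sketch evaluates the additional error term. It is illustrative only and assumes, as in the discretisation used throughout the paper, that $k_b = b/\delta$ for the time bound $b$; all parameter names are ours:

def switching_error_bound(b, delta, f, lam, c):
    """Extra error on top of the discretisation bound of Theorem 3, per
    Eq. (D.1): each of the k_b = b/delta discretisation steps contains at
    most f*(f-1) switching points, and by Lemma 6 each contributes at most
    lam**2 * delta**2 / 2**(2*c)."""
    k_b = b / delta
    return k_b * f * (f - 1) * lam ** 2 * delta ** 2 / 2 ** (2 * c)

Since $k_b = b/\delta$, the term equals $b\, f(f-1)\, \lambda^2 \delta / 2^{2c}$; it thus vanishes linearly in the step size $\delta$ and exponentially in the number of bisection steps $c$.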
