Automatica 84 (2017) 62–69
Brief paper
Moment-based analysis of stochastic hybrid systems with renewal transitions✩

Mohammad Soltani (a), Abhyudai Singh (a,b,c,d)

(a) Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA
(b) Department of Biomedical Engineering, University of Delaware, Newark, DE 19716, USA
(c) Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
(d) Center for Bioinformatics and Computational Biology, University of Delaware, Newark, DE 19716, USA
Article history: Received 14 August 2016; Received in revised form 9 March 2017; Accepted 3 June 2017; Available online 28 July 2017.

Keywords: Forward Kolmogorov equation; Covariance matrix; Stochastic event timing

✩ The material in this paper was presented at the 55th IEEE Conference on Decision and Control, December 12–14, 2016, Las Vegas, NV, USA. This paper was recommended for publication in revised form by Associate Editor Huijun Gao under the direction of Editor Ian R. Petersen. AS is supported by the National Science Foundation Grant DMS-1312926.
Abstract

Stochastic Hybrid Systems (SHS) constitute an important class of mathematical models that integrate discrete stochastic events with continuous dynamics. The time evolution of statistical moments is generally not closed for SHS, in the sense that the time derivative of the lower-order moments depends on higher-order moments. Here, we identify an important class of SHS where moment dynamics is automatically closed, and hence moments can be computed exactly by solving a system of coupled differential equations. This class is referred to as linear time-triggered SHS (TTSHS), where the state evolves according to a linear dynamical system. Stochastic events occur at discrete times and the intervals between them are independent random variables that follow a general class of probability distributions. Moreover, whenever an event occurs, the state of the SHS changes randomly based on a probability distribution. Our approach relies on embedding a Markov chain based on phase-type processes to model the timing of events, and showing that the resulting system has closed moment dynamics. Interestingly, we identify a subclass of linear TTSHS where the first- and second-order moments depend only on the mean time interval between events, and are invariant to higher-order statistics of event timing. TTSHS are used to model examples drawn from cell biology and nanosensors, providing novel insights into how noise is regulated in these systems.
1. Introduction

Stochastic hybrid systems (SHS) that combine continuous dynamics with discrete stochastic events are increasingly being used to model noise and uncertainty in physical, biological, and engineering systems. Specific applications include communication networks (Bohacek, Hespanha, Lee, & Obraczka, 2003; Hespanha, 2004, 2005a; Hu, 2006), networked control systems (Antunes, Hespanha, & Silvestre, 2013b; Hespanha, 2014), air traffic control (Prandini & Hu, 2009; Visintini, Glover, Lygeros, & Maciejowski, 2006), biological systems (Antunes & Singh, 2014; Bortolussi & Policriti, 2008; Daigle, Soltani, Petzold, & Singh, 2015; Hu, Lygeros, & Sastry, 2004; Singh & Hespanha, 2010; Soltani & Singh, 2016; Soltani, Vargas-Garcia, Antunes, & Singh, 2016; Vargas-García, Soltani, & Singh, 2016),
power grids (Dhople, Chen, DeVille, & Dominguez-Garcia, 2013; Wang & Crow, 2011), and modeling of energy grids and smart buildings (David, Du, Larsen, Mikučionis, & Skou, 2012; Strelec, Macek, & Abate, 2012). Interested readers are referred to recent reviews for an introduction to SHS (Hespanha, 2006; Hu, Lygeros, & Sastry, 2000; Teel, Subbaraman, & Sferlazza, 2014).

Traditional analysis of SHS relies heavily on various Monte Carlo simulation techniques, which come at a significant computational cost (Gillespie, 1976; Gillespie & Petzold, 2003). Since one is often interested in computing only the lower-order moments of the state variables, much time and effort can be saved by directly computing these statistical moments without having to run Monte Carlo simulations. Unfortunately, moment calculations in SHS can be nontrivial due to the problem of unclosed dynamics: the time evolution of lower-order moments of the state depends on higher-order moments (Singh & Hespanha, 2005). In such cases, moments are computed by employing closure schemes that close the system of differential equations by approximating higher-order moments as nonlinear functions of lower-order moments (Gillespie, 2009; Lee, Kim, & Kim, 2009; Singh & Hespanha, 2006, 2011; Soltani, Vargas-Garcia, & Singh, 2015).
The problem of moment closure leads to an interesting question: are there classes of SHS where moments can be computed exactly without the need for closure techniques? Here, we identify such a class of SHS, known as time-triggered SHS (TTSHS), that are a special case of piecewise-deterministic Markov processes (Costa & Dufour, 2008; Davis, 1993). The main ingredients of TTSHS are as follows:

(1) A continuous state x(t) ∈ Rⁿ that evolves according to a stable linear dynamical system

x˙(t) = â + Ax(t),    (1)

for some constant vector â and matrix A. While previous studies considered continuous dynamics of the form (1), we extend our analysis to include TTSHS with stochastic differential equations.

(2) Stochastic events occur at discrete times t_s, s ∈ {1, 2, . . .}, and the intervals t_s − t_{s−1} are independent and identically distributed random variables drawn from a given probability density function. These events can be referred to as renewal transitions, as their timing is determined by an underlying renewal process.

(3) A reset map defines the change in x when the event occurs,

x(t_s) ↦ x_+(t_s),    (2)

where x_+(t_s) denotes the state of the system just after the event. While prior work has considered a deterministic linear reset map

x(t_s) ↦ Jx(t_s)    (3)
(Antunes, Hespanha, & Silvestre, 2010, 2012, 2013a), we allow for both state-dependent and state-independent noise sources in x_+(t_s). Our goal is to connect the moments of the continuous state to the statistics of the time interval T ≡ t_s − t_{s−1}. The key contribution of this work is to model the arrival of events using phase-type processes (Tijms, 1994), and to show that the resulting system has closed moment dynamics. More specifically, the time derivative of an appropriately selected vector of moments depends only on itself, and not on higher-order moments. As a consequence, moments can be computed exactly by solving a system of differential equations. For the sake of simplicity, we focus on computing the first- and second-order moments, but the ideas can be generalized to obtain any higher-order moment. In addition, a subclass of TTSHS is identified where the first- and second-order moments of x depend only on the mean time interval between events. In this case, making the arrival of events more random (for fixed mean arrival times) does not result in higher noise in x. These methods are illustrated on examples drawn from different disciplines.
Fig. 1. Schematic of a linear time-triggered stochastic hybrid system. The state evolves according to a set of stochastic differential equations, and events occur randomly with hazard rate h(τ), where the timer τ measures the time since the last event. Choosing the hazard rate as in (6) ensures that the time between events follows a continuous probability density function f. Whenever an event occurs, the timer is reset to zero and x changes via (7).
2. TTSHS model formulation

Let the continuous state x(t) ∈ Rⁿ evolve according to the set of stochastic differential equations

dx(t) = (â + Ax(t))dt + (G + Bx(t)1_n)dw_t,    (4)

where G ∈ Rⁿˣⁿ and B ∈ Rⁿˣⁿ are constant matrices and 1_n is a 1 × n row vector of ones. Further, w_t is an n-dimensional Wiener process satisfying

⟨dw_t⟩ = 0,  ⟨dw_t dw_t⊤⟩ = I_n dt,    (5)

where I_n is the n × n identity matrix and the symbol ⟨ ⟩ denotes the expected value.

The timing of events in TTSHS can be modeled through a timer τ that measures the time elapsed since the last event. This timer is reset to zero whenever an event occurs and increases as dτ = dt in between events. Let the time intervals between events follow a continuous positively-valued probability density function f. Then the transition intensity for the event is given by the hazard function

h(τ) ≡ f(τ) / (1 − ∫_{y=0}^{τ} f(y) dy)    (6)

(Evans, Hastings, & Peacock, 2000; Ross, 2010). In particular, the probability that an event occurs in the next infinitesimal time interval (t, t + dt] is h(τ)dt. This formulation of the event arrival process via a timer allows representation of TTSHS as a state-driven SHS (Fig. 1). Hence, existing tools, such as Kolmogorov equations and Dynkin's formula for obtaining the time evolution of moments, can be employed for studying the stochastic dynamics of TTSHS (Hespanha, 2014).

Having defined the timing of events, we next focus on how the events alter the state of the system. Whenever an event occurs, x is reset to x_+, where x_+ is a random variable with the following statistics:

⟨x_+(t_s)⟩ = Jx(t_s) + R,    (7a)
⟨x_+(t_s)x_+⊤(t_s)⟩ − ⟨x_+(t_s)⟩⟨x_+(t_s)⟩⊤ = Qx(t_s)x⊤(t_s)Q⊤ + Cx(t_s)D⊤ + Dx⊤(t_s)C⊤ + E,    (7b)

where J ∈ Rⁿˣⁿ, R ∈ Rⁿˣ¹, Q ∈ Rⁿˣⁿ, C ∈ Rⁿˣⁿ, D ∈ Rⁿˣ¹, and E ∈ Rⁿˣⁿ are constant matrices, with E symmetric. Note that the mean of x_+ is an affine function of x, which is a generalization of the linear map (3) used previously. The covariance matrix of x_+ is defined by (7b) and covers a wide range of possibilities. For example, Q = C = D = E = 0 implies x_+ = Jx + R with probability one, whereas nonzero Q, C, D, and E can be used to incorporate constant or state-dependent noise in x_+.

In the following sections, we show how the statistical moments of x(t) can be computed exactly for the TTSHS illustrated in Fig. 1. We first consider a subclass of TTSHS where events only impart noise to the system, in the sense that the average value of x just after the event is the same as its value just before the event.
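As a concrete illustration of the hazard-rate formulation (6), the short Python sketch below evaluates h(τ) for a hypothetical Erlang-distributed inter-event time and prints how the hazard grows with the timer τ. The choice of distribution and its parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical choice of inter-event density f: Erlang with shape 3 and rate 2.
shape, rate = 3, 2.0
f = stats.gamma(a=shape, scale=1.0 / rate)

def hazard(tau):
    """Hazard rate h(tau) = f(tau) / (1 - F(tau)) from Eq. (6)."""
    return f.pdf(tau) / f.sf(tau)          # sf(tau) = 1 - F(tau)

# The hazard of an Erlang density increases with the timer tau,
# whereas an exponential density would give a constant hazard.
for tau in [0.1, 0.5, 1.0, 2.0, 4.0]:
    print(f"tau = {tau:3.1f}   h(tau) = {hazard(tau):.3f}")
```

For an exponential f the same construction returns a constant hazard, recovering the memoryless special case.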
3. Moment dynamics of TTSHS with noise-imparting events

Consider a subclass of the TTSHS with J = I_n and R = Q = 0, reducing (7) to

⟨x_+(t_s)⟩ = x(t_s),    (8a)
⟨x_+(t_s)x_+⊤(t_s)⟩ − ⟨x_+(t_s)⟩⟨x_+(t_s)⟩⊤ = Cx(t_s)D⊤ + Dx⊤(t_s)C⊤ + E.    (8b)
In essence, this reset corresponds to the addition of a zero-mean noise term whenever the event occurs. This noise term can potentially be state-dependent through C and D in (8b). For this class of TTSHS, we further assume that the linear system (1) is stable, i.e., A is Hurwitz. Based on Theorem 1 of Hespanha and Singh (2005), the time derivative of the expected value of a vector of continuously differentiable functions

ϕ(x, τ) = [ϕ1(x, τ) ϕ2(x, τ) . . . ϕF(x, τ)]⊤ ∈ R^F    (9)

can be derived using

d⟨ϕf(x, τ)⟩/dt = ⟨(Lϕf)(x, τ)⟩,  f = {1, . . . , F},    (10)

where the operator L is the extended generator of the SHS that maps ϕf to (Lϕf)(x, τ) according to

(Lϕf)(x, τ) ≡ h(τ)⟨∆ϕf(x, τ)⟩ + (∂ϕf(x, τ)/∂x)(â + Ax) + ∂ϕf(x, τ)/∂τ + (1/2) trace( (∂²ϕf(x, τ)/∂x²) (G + Bx1_n)(G + Bx1_n)⊤ ).    (11)

Here, ∆ϕf is the change in ϕf whenever an event occurs, and ∂ϕf(x, τ)/∂x ∈ R^{1×n} denotes the gradient of ϕf(x, τ) with respect to x, whose elements are defined as

(∂ϕf(x, τ)/∂x)_i ≡ ∂ϕf(x, τ)/∂x_i.    (12)

Further, ∂ϕf(x, τ)/∂τ denotes the derivative of ϕf(x, τ) with respect to τ, and ∂²ϕf(x, τ)/∂x² denotes the Hessian matrix of ϕf(x, τ) with respect to x,

(∂²ϕf(x, τ)/∂x²)_{ij} ≡ ∂²ϕf(x, τ)/∂x_i ∂x_j.    (13)

Setting ϕ(x, τ) to be x in (9) and using (8a) and (11), the time evolution of the first-order moments is obtained as

d⟨x⟩/dt = â + A⟨x⟩ + ⟨h(τ)(⟨x_+⟩ − x)⟩ = â + A⟨x⟩.    (14)

For deriving the second-order moments we need the following theorem.

Theorem 1. Consider the TTSHS illustrated in Fig. 1 with reset defined by (8). Then,

⟨x|τ⟩ = ⟨x⟩,  ∀t > 0,    (15)

for deterministic initial conditions. ■

The proof of this theorem can be found in the Appendix. As a consequence of this theorem,

⟨h(τ)x⟩ = ⟨h(τ)⟩⟨x⟩.    (16)

Using (11) and the above result, the time evolution of the second-order moments is obtained as

d⟨xx⊤⟩/dt = A⟨xx⊤⟩ + ⟨xx⊤⟩A⊤ + â⟨x⊤⟩ + ⟨x⟩â⊤ + GG⊤ + G1_n⊤⟨x⊤⟩B⊤ + B⟨x⟩1_nG⊤ + nB⟨xx⊤⟩B⊤ + ⟨h(τ)(xx⊤ + CxD⊤ + Dx⊤C⊤ + E − xx⊤)⟩
          = A⟨xx⊤⟩ + ⟨xx⊤⟩A⊤ + â⟨x⊤⟩ + ⟨x⟩â⊤ + GG⊤ + G1_n⊤⟨x⊤⟩B⊤ + B⟨x⟩1_nG⊤ + nB⟨xx⊤⟩B⊤ + C⟨h(τ)⟩⟨x⟩D⊤ + D⟨h(τ)⟩⟨x⟩⊤C⊤ + E⟨h(τ)⟩.    (17)

At steady state,

A⟨xx⊤⟩ + ⟨xx⊤⟩A⊤ + â⟨x⊤⟩ + ⟨x⟩â⊤ + GG⊤ + G1_n⊤⟨x⊤⟩B⊤ + B⟨x⟩1_nG⊤ + nB⟨xx⊤⟩B⊤ = −C⟨h(τ)⟩⟨x⟩D⊤ − D⟨h(τ)⟩⟨x⟩⊤C⊤ − E⟨h(τ)⟩,    (18)

where ⟨ ⟩ denotes the expected value as t → ∞. Let the random variable T ≡ t_s − t_{s−1} denote the time interval between events for the TTSHS illustrated in Fig. 1. Then the mean time interval is given by

⟨T⟩ = 1/⟨h(τ)⟩    (19)

(Finkelstein, 2008). Using (19) and ⟨x⟩ = −A⁻¹â from (14), the above equation reduces to

AM + MA⊤ + nBMB⊤ = −GG⊤ − G1_n⊤⟨x⊤⟩B⊤ − B⟨x⟩1_nG⊤ − nB⟨x⟩⟨x⟩⊤B⊤ − (1/⟨T⟩)C⟨x⟩D⊤ − (1/⟨T⟩)D⟨x⟩⊤C⊤ − (1/⟨T⟩)E,    (20)

where the steady-state covariance matrix

M = ⟨xx⊤⟩ − ⟨x⟩⟨x⊤⟩    (21)

is the unique solution to the Lyapunov equation (20). Interestingly, M depends only on ⟨T⟩ and is independent of higher-order statistics of T. Thus, making the timing of events more random for a fixed ⟨T⟩ will not affect the noise levels in x. Next, we illustrate these results on an example motivated by modeling noise in nanosensors.
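To make the steady-state computation concrete, the sketch below solves the modified Lyapunov equation (20) by stacking it into a linear system with Kronecker products. It is a minimal sketch under the assumptions of this section (A Hurwitz, ⟨T⟩ known); the function name and argument layout are ours, not from the paper.

```python
import numpy as np

def steady_state_moments(A, a_hat, G, B, C, D, E, mean_T):
    """Solve Eq. (20): A M + M A' + n B M B' = RHS for the steady-state
    covariance M, together with the steady-state mean <x> = -A^{-1} a_hat."""
    n = A.shape[0]
    x_bar = -np.linalg.solve(A, a_hat)              # Eq. (14) at steady state
    ones = np.ones((1, n))                          # the 1 x n vector 1_n
    rhs = -(G @ G.T
            + G @ ones.T @ x_bar.T @ B.T
            + B @ x_bar @ ones @ G.T
            + n * (B @ x_bar @ x_bar.T @ B.T)
            + (C @ x_bar @ D.T + D @ x_bar.T @ C.T + E) / mean_T)
    # Column-major vectorization: vec(A M) = (I kron A) vec(M), etc.
    I = np.eye(n)
    lhs = np.kron(I, A) + np.kron(A, I) + n * np.kron(B, B)
    M = np.linalg.solve(lhs, rhs.flatten(order="F")).reshape((n, n), order="F")
    return x_bar, M
```

Because the right-hand side of (20) involves only ⟨x⟩ and ⟨T⟩, the computed M is indeed insensitive to any higher-order statistics of T, mirroring the discussion above.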
4. Modeling noise in nanosensors

Nanomechanical resonators are increasingly being used for diverse applications, such as atomic force microscope tips, sensing forces at the atomic level, and chemical sensors (Di Ventra, Evoy, Heflin, & Lockwood, 2004). A major source of noise in these systems is the random collisions between surrounding gas molecules and the sensor (Ekinci, Yang, & Roukes, 2004). These sensors are often kept in rarefied atmospheres, where the time interval between collisions can be long enough for model approximations based on Brownian noise to fail (Ekinci et al., 2004). We present a TTSHS model that mechanistically captures the noise due to gas-molecule collisions. Nanosensor dynamics is modeled through the following mass–spring system with displacement x1 and velocity x2:

[x˙1; x˙2] = [0, 1; −ωn², −2ζωn][x1; x2],    (22)

where ωn denotes the natural frequency and ζ the damping ratio (Tamayo, Kosaka, Ruz, San Paulo, & Calleja, 2013). Let gas molecules strike the sensor at times t_s, with the intervals between subsequent strikes T assumed to be independent and identically distributed random variables with an arbitrary distribution. Whenever collisions occur, the velocity resets as

x2(t_s) ↦ x2(t_s) + η,    (23)

where η is drawn from an arbitrary distribution with zero mean and variance σ². Intuitively, σ² depends on the velocity of the impinging gas molecules, which in turn is determined by the gas temperature. Writing (23) in terms of (8) results in

⟨x_+(t_s)⟩ = x(t_s),  x = [x1 x2]⊤,    (24a)
⟨x_+(t_s)x_+⊤(t_s)⟩ − ⟨x_+(t_s)⟩⟨x_+(t_s)⟩⊤ = E = [0, 0; 0, σ²],    (24b)
with C = D = 0. Solving the Lyapunov equation (20) for A given by (22) yields the following steady-state variances of the sensor displacement and velocity:

⟨x1²⟩ = σ²/(4⟨T⟩ζωn³),  ⟨x2²⟩ = σ²/(4⟨T⟩ζωn),    (25)
respectively. Here σ² is the collision impact, quantified by the variance of the noise term η in (23). Intriguingly, (25) shows that the magnitude of fluctuations in the sensor displacement/velocity is inversely proportional to ⟨T⟩ and invariant of higher-order moments of T. Thus, making the timing of collisions more random (for a fixed ⟨T⟩) has no effect on ⟨x1²⟩ and ⟨x2²⟩. Furthermore, measurements of ⟨x1²⟩ and ⟨x2²⟩ cannot discriminate between infrequent high-impact strikes (large σ² and ⟨T⟩) and frequent low-impact strikes (small σ² and ⟨T⟩). It is important to point out that (25) determines the sensor's limit of detection. For example, consider a simple displacement sensor, where the sensor's position is read out to infer an external force. Then the spontaneous random displacement of the sensor determines the lowest force that can be accurately measured.

So far we have considered a subclass of TTSHS with resets given by (8) (i.e., noise-imparting events). In the next section we return to the original TTSHS described by (4) and (7), and show how the time evolution of moments can be computed for these systems.
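As a quick numerical check of (25), the sketch below solves the Lyapunov equation A M + M A⊤ = −E/⟨T⟩ (the special case of (20) with â = 0 and G = B = C = D = 0) and compares the diagonal of M with the closed-form variances. All parameter values are hypothetical and chosen only for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical nanosensor parameters (illustrative only).
omega_n, zeta = 2 * np.pi * 1e3, 0.05        # natural frequency, damping ratio
sigma2, mean_T = 1e-6, 1e-3                  # collision impact, mean time between strikes

A = np.array([[0.0, 1.0],
              [-omega_n**2, -2 * zeta * omega_n]])
E = np.array([[0.0, 0.0],
              [0.0, sigma2]])

# Special case of Eq. (20): A M + M A' = -E / <T>.
M = solve_continuous_lyapunov(A, -E / mean_T)

print(M[0, 0], sigma2 / (4 * mean_T * zeta * omega_n**3))   # <x1^2>, Eq. (25)
print(M[1, 1], sigma2 / (4 * mean_T * zeta * omega_n))      # <x2^2>, Eq. (25)
```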
Fig. 2. A continuous-time Markov chain model for the timing of events in TTSHS. The time interval T ≡ t_s − t_{s−1} between two successive stochastic events is assumed to follow a mixture of Erlang distributions. After an event occurs, a state S_i1, i = {1, . . . , I}, is chosen with probability p_i. The system transitions through the states S_ij, j = {1, . . . , m_i}, residing for an exponentially distributed time in each state. The next event occurs after exit from S_i m_i, and the above process is repeated.

5. Moment dynamics of TTSHS

In general, moment equations cannot be solved exactly for an arbitrary hazard function h(τ) in Fig. 1. Our strategy for exact moment computation relies on two steps: (i) modeling the timing of stochastic events through a phase-type distribution, which can be represented by embedding a continuous-time Markov chain (Fig. 2), and (ii) showing that the time evolution of moments in the resulting system becomes automatically closed at some higher-order moment. Here we focus on phase-type distributions that consist of a mixture of Erlang distributions (Tijms, 1994), and use them as a practical tool for modeling the timing of stochastic events in TTSHS.

Embedded Markov chain for event timing: Recall that an Erlang distribution of shape m and rate k has density

f(τ) = k^m τ^{m−1} e^{−kτ}/(m − 1)!,    (26)

with mean m/k. This Erlang distribution can be written as the sum of m independent and identical random variables that follow an exponential distribution with rate k,

f(τ) = k e^{−kτ},    (27)

where each random variable has mean 1/k.

Here, the interval T ≡ t_s − t_{s−1} is assumed to have an Erlang distribution of shape m_i and rate k_i with probability p_i, i = {1, . . . , I}, and can be represented by a continuous-time Markov chain with states S_ij, i = {1, . . . , I}, j = {1, . . . , m_i} (Fig. 2) (Buchholz, Kriege, & Felko, 2014). Let the Bernoulli random variable s_ij = 1 if the system resides in state S_ij and 0 otherwise. The probability of the transition S_ij → S_i(j+1) in the next infinitesimal time interval (t, t + dt] is k_i s_ij dt, implying that the time spent in each state S_ij is exponentially distributed with mean 1/k_i. To summarize, just after an event occurs a state S_i1, i = {1, . . . , I}, is chosen with probability p_i, and the next event occurs after transitioning through m_i exponentially distributed steps (i.e., after an Erlang-distributed time of rate k_i and shape m_i, selected with probability p_i). Based on this formulation, the probability of the stochastic event occurring in the time interval (t, t + dt] is Σ_{j=1}^{I} k_j s_{j m_j} dt, and whenever the event occurs, the state is reset as per (7). For a mixture of Erlang distributions, the moment ⟨T^q⟩ is given by

⟨T^q⟩ = Σ_{i=1}^{I} p_i (m_i + q − 1)! / (k_i^q (m_i − 1)!),  q ∈ {1, 2, . . .}    (28)

(Buchholz et al., 2014). Given a specific distribution of the timing of events, (28) can in principle be used to construct a sufficiently rich Markov chain that matches some lower-order moments of the given distribution (Lagershausen, 2013); a small numerical sketch is given below.
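The sketch below evaluates (28) for a hypothetical two-component Erlang mixture and verifies the first two moments by direct sampling; the mixture parameters are illustrative assumptions.

```python
import numpy as np
from math import factorial

# Hypothetical mixture: with probability p[i] the interval T is Erlang(m[i], k[i]).
p = np.array([0.3, 0.7])
m = np.array([2, 5])
k = np.array([1.0, 4.0])

def moment_T(q):
    """<T^q> for a mixture of Erlang distributions, Eq. (28)."""
    return sum(pi * factorial(int(mi) + q - 1) / (ki**q * factorial(int(mi) - 1))
               for pi, mi, ki in zip(p, m, k))

mean_T = moment_T(1)
cv2_T = moment_T(2) / mean_T**2 - 1.0               # squared coefficient of variation

# Monte Carlo check: pick a branch, then sum m[i] exponential(k[i]) phases.
rng = np.random.default_rng(0)
branch = rng.choice(len(p), size=200_000, p=p)
samples = rng.gamma(shape=m[branch], scale=1.0 / k[branch])
print(mean_T, samples.mean())
print(cv2_T, samples.var() / samples.mean()**2)
```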
The overall model is now given by the linear system

dx(t) = (â + Ax(t))dt + (G + Bx(t)1_n)dw_t,    (29)

together with the stochastic transitions associated with the embedded Markov chain, as illustrated in Table 1. The theorem below outlines the main result.

Theorem 2. Consider the TTSHS where the timing of events is modeled through the Markov chain in Fig. 2 and the reset is given by (7). Then there exists a vector µ of all first- and second-order moments of the stochastic processes x and s_ij, and selected third-order moments, such that its time evolution is given by a linear dynamical system

µ˙ = a1 + A1µ,    (30)

for an appropriate vector a1 and matrix A1. ■
Proof of Theorem 2. Based on Theorem 1 of Hespanha and Singh (2005), the time derivative of the expected value of the elements of any vector of continuously differentiable functions ϕ(x, s_ij) ∈ R^F is given by

d⟨ϕf(x, s_ij)⟩/dt = ⟨(Lϕf)(x, s_ij)⟩,  f = {1, . . . , F},    (31)

where the extended generator (Lϕf) for this SHS is

(Lϕf)(x, s_ij) = Σ_{events} h × ∆ϕf(x, s_ij) + (∂ϕf(x, s_ij)/∂x)(â + Ax) + (1/2) trace( (∂²ϕf(x, s_ij)/∂x²) (G + Bx1_n)(G + Bx1_n)⊤ ).    (32)

Here h denotes the transition intensities of the events and determines how often these events occur (Hespanha, 2005b; Hespanha & Singh, 2005). Note that while in Theorem 1 h is a function of the timer, here h is a function of the states of the phase-type distribution. Using the transition intensities shown in Table 1, the dynamics of the means can be obtained by choosing ϕ to be x and s_ij in (32).
Table 1
Stochastic events in the TTSHS, with the timing of events described by the embedded Markov chain in Fig. 2.

Stochastic event: Phase-type evolution
  Reset: s_ij(t) ↦ s_ij(t) − 1,  s_i(j+1)(t) ↦ s_i(j+1)(t) + 1
  Transition intensity: k_i s_ij,  j ∈ {1, . . . , m_i − 1}

Stochastic event: Events changing x
  Reset: x(t_s) ↦ x_+(t_s),  s_{j m_j}(t_s) ↦ 0,  s_i1(t_s) ↦ s_i1(t_s) + 1
  Transition intensity: p_i Σ_{j=1}^{I} k_j s_{j m_j},  i, j ∈ {1, . . . , I}
For example, suppose that ϕf is selected to be s_ij, j ∈ {2, . . . , m_i}. Then there are two reactions that affect s_ij. The first is arrival to the state S_ij, which occurs with transition intensity k_i s_i(j−1); for this reaction the change ∆ϕf(x, s_ij) is +1. The other is departure from the state S_ij, which occurs with transition intensity k_i s_ij; for this reaction the change ∆ϕf(x, s_ij) is −1. The moment dynamics can be compactly derived as

d⟨x⟩/dt = â + A⟨x⟩ + Σ_{j=1}^{I} k_j ( (J − I_n)⟨x s_{j m_j}⟩ + R⟨s_{j m_j}⟩ ),    (33a)
d⟨s_i1⟩/dt = p_i Σ_{j=1}^{I} k_j⟨s_{j m_j}⟩ − k_i⟨s_i1⟩,  i = {1, . . . , I},    (33b)
d⟨s_ij⟩/dt = k_i⟨s_i(j−1)⟩ − k_i⟨s_ij⟩,  i = {1, . . . , I},  j = {2, . . . , m_i}.    (33c)

Note that the first equation is not closed, since it depends on second-order moments of the form ⟨x s_ij⟩. The time evolution of the moments ⟨x s_ij⟩ depends on third-order moments of the form ⟨x s_ij²⟩ and ⟨s_ij s_rq x_b⟩. However, using the fact that the s_ij are Bernoulli random variables,

⟨s_ij^q⟩ = ⟨s_ij⟩,  ⟨s_ij^q x_b⟩ = ⟨s_ij x_b⟩,  q, b ∈ {1, 2, . . .}.    (34)

Moreover, since only one of the states s_ij can be 1 at a time,

⟨s_ij s_rq x_b⟩ = 0,  if i ≠ r or j ≠ q.    (35)

Exploiting (34)–(35), the moment dynamics of ⟨x s_ij⟩ becomes automatically closed and is given by

d⟨x s_i1⟩/dt = â⟨s_i1⟩ + A⟨x s_i1⟩ − k_i⟨x s_i1⟩ + p_i Σ_{j=1}^{I} k_j ( J⟨x s_{j m_j}⟩ + R⟨s_{j m_j}⟩ ),  i = {1, . . . , I},    (36a)
d⟨x s_ij⟩/dt = â⟨s_ij⟩ + A⟨x s_ij⟩ − k_i⟨x s_ij⟩ + k_i⟨x s_i(j−1)⟩,  i = {1, . . . , I},  j = {2, . . . , m_i}.    (36b)

Thus (33) and (36) represent a closed set of equations that can be used to obtain the mean dynamics ⟨x⟩. A similar approach can be taken for obtaining the second-order moments. Briefly, the time evolution of ⟨xx⊤⟩ depends on moments of the form ⟨xx⊤ s_ij⟩, and the moment dynamics of ⟨xx⊤ s_ij⟩ can be closed automatically using (34)–(35). ■

In summary, our results show that if the timing of events in TTSHS can be modeled via a phase-type process (as in Fig. 2), then the time evolution of moments can be obtained by solving a linear system of differential equations. Next, we present and analyze a TTSHS model for capturing random fluctuations in the level of a protein.
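To illustrate Theorem 2 in the scalar case, the following sketch assembles the closed set (33) and (36) for a single Erlang branch (I = 1) and integrates it with a simple Euler loop; the production rate, reset parameters, and Erlang shape are hypothetical. A direct Monte Carlo simulation of the same hybrid system is included as an independent check of the steady-state mean.

```python
import numpy as np

# Hypothetical scalar example: dx/dt = kx between events (A = 0), Erlang(m, k)
# inter-event times (a single branch, I = 1), and reset <x+> = J x (R = 0).
kx, J, m, k = 1.0, 0.5, 4, 2.0

def moment_derivatives(s, xs):
    """Right-hand sides of Eqs. (33b), (33c), (36a), (36b) for this example."""
    ds, dxs = np.empty(m), np.empty(m)
    ds[0] = k * s[m - 1] - k * s[0]                      # Eq. (33b), p_1 = 1
    dxs[0] = kx * s[0] - k * xs[0] + k * J * xs[m - 1]   # Eq. (36a)
    for j in range(1, m):
        ds[j] = k * s[j - 1] - k * s[j]                  # Eq. (33c)
        dxs[j] = kx * s[j] + k * xs[j - 1] - k * xs[j]   # Eq. (36b)
    return ds, dxs

s = np.zeros(m); s[0] = 1.0                              # start in phase S_11
xs = np.zeros(m)                                         # x(0) = 0
dt = 1e-3
for _ in range(int(200 / dt)):                           # integrate to steady state
    ds, dxs = moment_derivatives(s, xs)
    s += dt * ds; xs += dt * dxs
print("moment equations:", xs.sum())                     # <x> = sum_j <x s_j>

# Monte Carlo check via the long-run time average of x.
rng = np.random.default_rng(1)
x, num, den = 0.0, 0.0, 0.0
for _ in range(200_000):
    T = rng.gamma(shape=m, scale=1.0 / k)                # Erlang inter-event time
    num += T * (x + kx * T / 2.0)                        # time-integral of x over the cycle
    den += T
    x = J * (x + kx * T)                                 # reset at the event
print("Monte Carlo     :", num / den)
```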
6. Modeling fluctuations in protein levels

Recently, we introduced a model to study how random timing of cell-division events affects stochasticity in the level of a protein (Antunes & Singh, 2014). Given the limitations of the approach used in Antunes and Singh (2014), it was assumed there that whenever cell division occurs, the protein level is exactly halved (i.e., deterministic partitioning of protein molecules between daughter cells). Using our TTSHS formulation, we now relax this assumption and allow for noise in partitioning.

6.1. Model formulation

Let x(t) ∈ R denote the level of a given protein inside a single cell at time t. In between cell-division events, the protein molecules accumulate at a constant rate,

x˙(t) = kx,    (38)

where kx is the synthesis rate. In bacteria and mammalian cells many proteins are long-lived, and hence for simplicity we ignore protein decay (Alon, 2011; Schwanhausser et al., 2011). Cell-division events occur randomly at times t_s, s ∈ {1, 2, . . .}, and the cell-cycle times T ≡ t_s − t_{s−1} follow an arbitrary positively-valued distribution that is modeled using phase-type processes (Fig. 2). Whenever cell division occurs, the protein level is reset as per

Cell division: x(t_s) ↦ x_+(t_s),    (39)

where x_+ is the protein level in the daughter cell after division, with the following statistics:

⟨x_+(t_s)⟩ = x(t_s)/2,    (40a)
⟨x_+²(t_s)⟩ − ⟨x_+(t_s)⟩² = Q²x²(t_s) + CDx(t_s) + E.    (40b)

While (40a) implies that on average each daughter cell inherits half the number of proteins in the mother cell, (40b) quantifies the noise in the partitioning process. Here E represents a constant noise term, while Q and C, D correspond to state-dependent noise terms with quadratic and linear dependence on x(t_s), respectively. Generally Q ≤ 1/2, with the maximum value realized for extreme asymmetric partitioning, where one daughter cell inherits all the proteins and the other gets none:

x_+(t_s) = x(t_s)  with probability 1/2,    (41a)
x_+(t_s) = 0  with probability 1/2.    (41b)

6.2. Mean protein level

Consider a vector µ consisting of the mean protein level and moments of the form ⟨s_ij⟩, ⟨x s_ij⟩, i = {1, . . . , I}, j = {1, . . . , m_i}. Then, based on Theorem 2, the time evolution of µ is closed and given by a linear dynamical system. Steady-state analysis yields the following moments at equilibrium:

⟨s_ij⟩ = p_i/(k_i⟨T⟩),  ⟨x s_ij⟩ = (kx p_i/k_i)(1 + j/(k_i⟨T⟩)).    (42)

Using the fact that Σ_{i=1}^{I} Σ_{j=1}^{m_i} s_ij = 1 (all phase-type states sum to one),

⟨x⟩ = Σ_{i=1}^{I} Σ_{j=1}^{m_i} ⟨x s_ij⟩ = kx⟨T⟩(3 + CV_T²)/2,    (43)
where ⟨T⟩ is the mean cell-cycle time and CV_T² quantifies its noise level through the squared coefficient of variation (variance/mean²). Note that the mean protein level depends on both the first- and second-order moments of T.

6.3. Quantifying protein noise levels

The magnitude of fluctuations in protein levels can be quantified using a similar approach. In this case, the vector µ consists of all the first- and second-order moments of x and s_ij, and also third-order moments of the form ⟨x²s_ij⟩. The time evolution of this vector is again closed, and its steady-state analysis results in an exact closed-form formula (37) (see Box I) for the protein noise level CV_x² (as quantified by the squared coefficient of variation).

Box I.

CV_x² = 1/27 + 4(9⟨T³⟩/⟨T⟩³ − 9 − 6CV_T² − 7CV_T⁴)/(27(3 + CV_T²)²)   (cell-cycle noise)
       + 64Q²/(3(3 − 4Q²)(3 + CV_T²))   (noise from Q)
       + 16CD/((3 − 4Q²)(3 + CV_T²)⟨x⟩)   (noise from C and D)
       + 4E/((3 − 4Q²)⟨x⟩²)   (noise from E).    (37)

The last three terms together constitute the partitioning noise.

The formula can be decomposed into four terms. The first term represents the protein noise when there are no partitioning errors, i.e., Q = C = D = E = 0 in (40b), and was also determined in our earlier work (Antunes & Singh, 2014). The next three terms are novel findings of this work and represent the noise contributions from the random partitioning process. Note that the origins of these three noise terms can be mapped back to Q, C, D, and E in (40b). Counterintuitively, the noise contributions arising from Q, C and D decrease with increasing CV_T². Thus, in regimes where these terms dominate CV_x², the extent of stochastic fluctuations in protein levels can be reduced by making cell-division events more random.
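The sketch below evaluates the mean (43) and the noise decomposition (37) for hypothetical parameter values; the Erlang cell-cycle model and the partitioning constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def protein_stats(kx, mean_T, cv2_T, m3_T, Q, C, D, E):
    """Mean protein level, Eq. (43), and noise level CV_x^2, Eq. (37).
    m3_T is the normalized third moment <T^3>/<T>^3 of the cell-cycle time."""
    mean_x = kx * mean_T * (3 + cv2_T) / 2
    cell_cycle = (1 / 27
                  + 4 * (9 * m3_T - 9 - 6 * cv2_T - 7 * cv2_T**2)
                  / (27 * (3 + cv2_T)**2))
    from_Q = 64 * Q**2 / (3 * (3 - 4 * Q**2) * (3 + cv2_T))
    from_CD = 16 * C * D / ((3 - 4 * Q**2) * (3 + cv2_T) * mean_x)
    from_E = 4 * E / ((3 - 4 * Q**2) * mean_x**2)
    return mean_x, cell_cycle + from_Q + from_CD + from_E

# Erlang(m, k) cell cycle: <T> = m/k, CV_T^2 = 1/m, <T^3>/<T>^3 = (m+1)(m+2)/m^2.
kx, m, k = 10.0, 4, 2.0
mean_T, cv2_T, m3_T = m / k, 1.0 / m, (m + 1) * (m + 2) / m**2

# Example: binomial-like partitioning (each molecule goes to a given daughter
# with probability 1/2) corresponds to Q = 0, C*D = 1/4, E = 0 in Eq. (40b).
mean_x, cv2_x = protein_stats(kx, mean_T, cv2_T, m3_T, Q=0.0, C=0.5, D=0.5, E=0.0)
print(mean_x, cv2_x)
```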
7. Conclusion

This study investigates the stochastic dynamics of a class of piecewise-deterministic Markov processes: TTSHS, or linear dynamical systems with renewal transitions. Our main contribution is to show that the time evolution of moments can be computed exactly by solving a system of linear differential equations. TTSHS were used to model stochasticity in two different systems: nanosensors with impinging air molecules, and protein levels affected by cell-division events. Both examples reveal intriguing insights about how system fluctuations connect with T, the time between successive discrete events. On one hand, the variance in the sensor position/velocity is completely buffered from the randomness in T. On the other hand, the noise in protein levels can decrease with increasing randomness in cell-division events.

Our study also presents exciting avenues for future research. One direction would be to derive the time evolution of moments for TTSHS with multiple discrete modes, allowing for switching between linear dynamical systems. The current formulation of TTSHS considers time intervals between events to be independent; it will be interesting to add some form of correlation between successive events. This is particularly important for cell division, where the cell-cycle lengths of mother and daughter cells are generally correlated. Finally, recent work has identified classes of nonlinear stochastic systems where moment dynamics becomes automatically closed at some higher-order moment (Sontag & Singh, 2015). In light of this study, one can explore nonlinearities in TTSHS such that moments can be computed without requiring approximate closure schemes.
Appendix. Proof of Theorem 1

We start our analysis by deriving the master equation describing the probabilistic evolution of the states of the system. Using the forward Kolmogorov equation for stochastic hybrid systems (Van Kampen, 2011), for τ > 0 we have
∂p_t(τ, x)/∂t + ∂p_t(τ, x)/∂τ + (∂/∂x)·((â + Ax)p_t(τ, x)) − (1/2) trace( (∂²/∂x²)((G + Bx1_n)(G + Bx1_n)⊤ p_t(τ, x)) ) = −h(τ)p_t(τ, x).    (A.1)

Here p_t(τ, x) denotes the joint probability distribution, which gives the probability that the random variable τ takes the particular value τ ∈ (0, ∞) and the vector of states x takes the particular value x ∈ (−∞, +∞)ⁿ. Further, ∂/∂x is defined as

∂/∂x = [∂/∂x1  ∂/∂x2  . . .  ∂/∂xn],    (A.2)

which is a vector of partial derivative operators, and ∂²p_t(τ, x)/∂x² is the Hessian matrix of p_t(τ, x). Moreover, for τ = 0 we have

∂p_t(0, x)/∂t + (∂/∂x)·((â + Ax)p_t(0, x)) − (1/2) trace( (∂²/∂x²)((G + Bx1_n)(G + Bx1_n)⊤ p_t(0, x)) )
= (p_+(0, x)/p_t(0)) ∫_{−∞}^{+∞} ∫_{0}^{+∞} h(u)p_t(u, y) du dy − h(0)p_t(0, x),    (A.3)

where p_+(0, x) is the probability distribution of the states just after an event and is defined as

p_+(0, x) = ∫_{−∞}^{+∞} p_t(0, y)g(x|y) dy,    (A.4)

in which g(x|y) captures the connection between the states before and after the event.
Note that the left-hand side of Eq. (A.3) is obtained by calculating the time evolution of the states with respect to the continuous dynamics. We do not include the term related to the evolution of the timer, because the deterministic value 0 is assigned to τ. Further, the first term on the right-hand side is the product of two probabilities: (1) the factor (1/p_t(0)) ∫_{−∞}^{+∞} ∫_{0}^{+∞} h(u)p_t(u, y) du dy gives the probability of resetting τ to 0, i.e., the integral over the probability of any reaction occurring; and (2) the factor p_+(0, x) gives the probability of the state being x just after a reset. The last term on the right-hand side accounts for the probability of leaving the current state.

In addition to the joint probability, we use the forward Kolmogorov equation to write the time evolution of the probability distribution of τ at time t, p_t(τ). For τ > 0 we have

∂p_t(τ)/∂t + ∂p_t(τ)/∂τ = −h(τ)p_t(τ).    (A.5)

Moreover, for τ = 0,

∂p_t(0)/∂t = ∫_{0}^{+∞} h(u)p_t(u) du − h(0)p_t(0).    (A.6)
Having the probabilistic evolution of the states x and τ, we now analyze the conditional moment ⟨x|τ⟩,

⟨x|τ⟩ ≡ ⟨x|τ = τ⟩ = (1/p_t(τ)) ∫_{−∞}^{+∞} x p_t(τ, x) dx.    (A.7)

Using (A.7), the dynamics of ⟨x|τ⟩ can be expressed as

d⟨x|τ⟩/dt = −(∂p_t(τ)/∂t)/(p_t²(τ)) ∫_{−∞}^{+∞} x p_t(τ, x) dx + (1/p_t(τ)) ∫_{−∞}^{+∞} x (∂p_t(τ, x)/∂t) dx.    (A.8)
A.1. Calculating ⟨x|τ⟩ for τ > 0

In order to calculate the mean dynamics we need expressions for ∂p_t(τ, x)/∂t and ∂p_t(τ)/∂t. Substituting (A.1) and (A.5) in (A.8) and going through some algebraic steps results in

d⟨x|τ⟩/dt = â + A⟨x|τ⟩ − ∂⟨x|τ⟩/∂τ.    (A.9)

A.2. Calculating ⟨x|τ⟩ for τ = 0

In this case, substituting (A.3) and (A.6) in (A.8) results in

d⟨x|τ⟩/dt = â + A⟨x|τ⟩.    (A.10)

Hence Eqs. (A.9) and (A.10) can be compactly written as

d⟨x|τ⟩/dt = â + A⟨x|τ⟩ − ∂⟨x|τ⟩/∂τ.    (A.11)

We now introduce a new random variable z ≡ ⟨x⟩ − ⟨x|τ⟩. Using (14) and (A.11), the dynamics of z can be written as

dz/dt = Az − ∂z/∂τ,    (A.12)

where we used the fact that ∂⟨x⟩/∂τ = 0. Given that z(0, τ) = 0 (deterministic initial conditions) and

z(t, 0) = ⟨x⟩ − ⟨x|0⟩ = ⟨x⟩ − ⟨x_+⟩ = 0,    (A.13)

the unique solution to the linear first-order PDE (A.12) is z = 0, which implies

⟨x⟩ = ⟨x|τ⟩.    (A.14)
References Alon, U. (2011). An introduction to systems biology: design principles of biological circuits. Chapman and Hall/CRC. Antunes, D., Hespanha, J. P., & Silvestre, C. (2010). Impulsive systems triggered by superposed renewal processes. In Proc. of the 49th IEEE conf. on decision and control, Atlanta, GA (pp. 1779–1784). Antunes, D., Hespanha, J., & Silvestre, C. (2012). Volterra integral approach to impulsive renewal systems: Application to networked control. IEEE Transactions on Automatic Control, 57, 607–619. Antunes, D., Hespanha, J., & Silvestre, C. (2013a). Stability of networked control systems with asynchronous renewal links: an impulsive systems approach. Automatica, 49, 402–413. Antunes, D., Hespanha, J. P., & Silvestre, C. (2013b). Stochastic hybrid systems with renewal transitions: moment analysis with application to networked control systems with delays. SIAM Journal on Control and Optimization, 51, 1481–1499. Antunes, D., & Singh, A. (2014). Quantifying gene expression variability arising from randomness in cell division times. Journal of Mathematical Biology, 71, 437–463. Bohacek, S., Hespanha, J. P., Lee, J., & Obraczka, K. (2003). A hybrid systems modeling framework for fast and accurate simulation of data communication networks. In Proc. of the ACM int. conf. on Measurements and modeling of computer systems (SIGMETRICS), vol. 31 (pp. 58–69). Bortolussi, L., & Policriti, A. (2008). Hybrid systems and biology. In Formal methods for computational systems biology (pp. 424–448). Springer.
Buchholz, P., Kriege, J., & Felko, I. (2014). Input modeling with phase-type distributions and Markov models. In Springer briefs in mathematics. Springer. Costa, O. L. V., & Dufour, F. (2008). Stability and ergodicity of piecewise deterministic Markov processes. SIAM Journal on Control and Optimization, 47, 1053–1077. Daigle, B., Soltani, M., Petzold, L., & Singh, A. (2015). Inferring single-cell gene expression mechanisms using stochastic simulation. Bioinformatics, 31, 1428–1435. David, A., Du, D., Larsen, K. G., Mikučionis, M., & Skou, A. (2012). An evaluation framework for energy aware buildings using statistical model checking. Science China Information Sciences, 55, 2694–2707. Davis, M. H. A. (1993). Markov models and optimization. Chapman and Hall. Dhople, S. V., Chen, Y. C., DeVille, L., & Dominguez-Garcia, A. D. (2013). Analysis of power system dynamics subject to stochastic power injections. IEEE Transactions on Circuits and Systems I, 60, 3341–3353. Di Ventra, M., Evoy, S., Heflin, J. R., & Lockwood, D. J. (Eds.). (2004). Introduction to nanoscale science and technology. In Nanostructure Science and Technology. Springer US. Ekinci, K. L., Yang, Y. T., & Roukes, M. L. (2004). Ultimate limits to inertial mass sensing based upon nanoelectromechanical systems. Journal of Applied Physics, 95, 2682–2689. Evans, M., Hastings, N., & Peacock, B. (2000). Statistical distributions (3rd). Wiley. Finkelstein, M. (2008). Failure rate and mean remaining lifetime. In Springer Series in Reliability Engineering. Failure Rate Modelling for Reliability and Risk (pp. 9–44). Springer. Gillespie, C. S. (2009). Moment closure approximations for mass-action models. IET Systems Biology, 3, 52–58. Gillespie, D. T. (1976). A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of Computational Physics, 22, 403–434. Gillespie, D. T., & Petzold, L. R. (2003). Improved leap-size selection for accelerated stochastic simulation. Journal of Chemical Physics, 119, 8229–8234. Hespanha, J. P. (2006). Modelling and analysis of stochastic hybrid systems. IEE Proceedings-Control Theory and Applications, 153, 520–535. Hespanha, J. P. (2004). Stochastic hybrid systems: Applications to communication networks. In R. Alur, & G. J. Pappas (Eds.), Hybrid systems: Computation and control (pp. 387–401). Springer-Verlag. Hespanha, J. P. (2005a). A model for stochastic hybrid systems with application to communication networks. Nonlinear Analysis. Theory, Methods & Applications, 62, 1353–1383. Hespanha, J. P. (2005b). Polynomial stochastic hybrid systems. In Hybrid systems: Computation and control 2005 (pp. 322–338). Springer Berlin Heidelberg. Hespanha, J. P. (2014). Modeling and analysis of networked control systems using stochastic hybrid systems. Annual Reviews in Control, 38, 155–170. Hespanha, J. P., & Singh, A. (2005). Stochastic models for chemically reacting systems using polynomial stochastic hybrid systems. International Journal of Robust and Nonlinear Control, 15, 669–689. Hu, J. (2006). Application of stochastic hybrid systems in power management of streaming data. In Proc. of the 2006 Amer. control conference, Minneapolis, MN (pp. 4754–4759). Hu, J., Lygeros, J., & Sastry, S. (2000). Towards a theory of stochastic hybrid systems. In Lecture notes in computer science. Hybrid Systems: Computation and Control (pp. 160–173). Springer. Hu, J., Lygeros, J., & Sastry, S. (2004). Modeling subtilin production in bacillus subtilis using stochastic hybrid systems. 
In Hybrid systems: Computation and control (pp. 417–431). Springer. Lagershausen, S. (2013). Performance analysis of Closed Queueing Networks. In Lecture notes in economics and mathematical systems. Springer. Lee, C. H., Kim, K., & Kim, P. (2009). A moment closure method for stochastic reaction networks. Journal of Chemical Physics, 130, 134107. Prandini, M., & Hu, J. (2009). Application of reachability analysis for stochastic hybrid systems to aircraft conflict prediction. IEEE Transactions on Automatic Control, 54, 913–917. Ross, S. M. (2010). Reliability theory. In Introduction To probability models. (10th ed.). (pp. 579–629). Academic Press. Schwanhausser, B., Busse, D., Li, N., Dittmar, G., Schuchhardt, J., Wolf, J., et al. (2011). Global quantification of mammalian gene expression control. Nature, 473, 337–342. Singh, A., & Hespanha, J. P. (2005). Models for multi-specie chemical reactions using polynomial stochastic hybrid systems. In Proc. of the 44th IEEE conf. on decision and control, Seville, Spain (pp. 2969–2974). Singh, A., & Hespanha, J. P. (2007). Stochastic analysis of gene regulatory networks using moment closure. In Proc. of the 2007 Amer. control conference, New York, NY (pp. 1299–1304). Singh, A., & Hespanha, J. P. (2010). Stochastic hybrid systems for studying biochemical processes. Philosophical Transactions of the Royal Society A, 368, 4995–5011. Singh, A., & Hespanha, J. P. (2011). Approximate moment dynamics for chemically reacting systems. IEEE Transactions on Automatic Control, 56, 414–418. Soltani, M., & Singh, A. (2016). Moment dynamics for linear time-triggered stochastic hybrid systems. In 2016 IEEE 55th conference on decision and control (CDC) (pp. 3702–3707).
Soltani, M., Vargas-Garcia, C. A., Antunes, D., & Singh, A. (2016). Intercellular variability in protein levels from stochastic expression and noisy cell cycle processes. PLoS Computational Biology, e1004972. Soltani, M., Vargas-Garcia, C. A., & Singh, A. (2015). Conditional moment closure schemes for studying stochastic dynamics of genetic circuits. IEEE Transactions on Biomedical Systems and Circuits, 9, 518–526. Sontag, E., & Singh, A. (2015). Exact moment dynamics for feedforward nonlinear chemical reaction networks. IEEE Life Sciences Letters, 1, 26–29. Strelec, M., Macek, K., & Abate, A. (2012). Modeling and simulation of a microgrid as a stochastic hybrid system. In 3rd IEEE PES international conference and exhibition on innovative smart grid technologies (ISGT Europe) (pp. 1–9). Tamayo, J., Kosaka, P. M., Ruz, J. J., San Paulo, A., & Calleja, M. (2013). Biosensors based on nanomechanical systems. Chemical Society Reviews, 42, 1287–1311. Teel, A. R., Subbaraman, A., & Sferlazza, A. (2014). Stability analysis for stochastic hybrid systems: a survey. Automatica, 50(10), 2435–2456. Tijms, H. C. (1994). Stochastic Models: An Algorithmic Approach. John Wiley & Sons. Van Kampen, N. (2011). Stochastic processes in physics and chemistry. Elsevier. Vargas-García, C. A., Soltani, M., & Singh, A. (2016). Conditions for cell size homeostasis: A stochastic hybrid systems approach. IEEE Life Sciences Letters, 2, 47–50. Visintini, A., Glover, W., Lygeros, J., & Maciejowski, J. (2006). Monte Carlo optimization for conflict resolution in air traffic control. IEEE Transactions on Intelligent Transportation Systems, 7, 470–482. Wang, K., & Crow, M. (2011). Numerical simulation of stochastic differential algebraic equations for power system transient stability with random loads. In Power and energy society general meeting (pp. 1–8).
Mohammad Soltani received his B.Sc. degree in Electrical Engineering from the Iran University of Science and Technology, Tehran, Iran in 2009. He received his M.Sc. degree in Electrical Engineering from the Shiraz University, Shiraz, Iran in 2012. He is currently working toward his Ph.D. in Electrical Engineering at the University of Delaware, Newark, DE, USA. His current research interests include intracellular processes, genetic circuits, and mathematical modeling of biological systems.
Abhyudai Singh received his B.Tech. degree in Mechanical Engineering from the Indian Institute of Technology, Kanpur in 2002. He started his graduate studies at Michigan State University, where he received an M.S. in Electrical Engineering in 2004 and an M.S. in Mechanical Engineering in 2006. He received his Ph.D. in Electrical Engineering in 2008 at the University of California, Santa Barbara. From 2008 to 2011 he was a postdoctoral scholar at the University of California, San Diego, and he is currently an assistant professor at the University of Delaware. His current research interests are dynamics and control of nonlinear systems; stochastic hybrid systems; and mathematical and systems biology.