Author's Accepted Manuscript

Some acceleration methods for Monte Carlo simulation of rare events
M. Estecahandy, L. Bordes, S. Collas, C. Paroissin

To appear in: Reliability Engineering and System Safety
www.elsevier.com/locate/ress
PII: S0951-8320(15)00200-8
DOI: http://dx.doi.org/10.1016/j.ress.2015.07.010
Reference: RESS5360

Received date: 22 December 2014; Revised date: 28 June 2015; Accepted date: 5 July 2015

Cite this article as: M. Estecahandy, L. Bordes, S. Collas, C. Paroissin, Some acceleration methods for Monte Carlo simulation of rare events, Reliability Engineering and System Safety, http://dx.doi.org/10.1016/j.ress.2015.07.010
Some acceleration methods for Monte Carlo simulation of rare events

M. ESTECAHANDY^{a,b,∗∗}, L. BORDES^{a,∗}, S. COLLAS^{b,∗}, C. PAROISSIN^{a,∗}

a Laboratoire de Mathématiques et de leurs Applications - UMR CNRS 5142 - Université de Pau et des Pays de l'Adour - France
b TOTAL EP - CSTJF Avenue Larribau - 64000 Pau - France
Abstract

The reliability analysis of instrumented safety systems is an important industrial issue. The standard modeling languages (e.g., fault trees, Markov chains) and methods employed for these studies become difficult to apply, mainly because of the increasing complexity of the operating context (e.g., maintenance policies, ageing processes). Thus, a powerful alternative is Petri nets associated with Monte Carlo simulation (MC). However, obtaining accurate estimators on rare events (system failures) requires very long computing times. To address this issue, common methods are not well-suited to Petri nets, whereas the "Méthode de Conditionnement Temporel" (MCT), a truncation method, seems to be. Indeed, MCT is independent of the duration distributions involved in a model. However, it is only defined when the rare event consists in reaching an absorbing state. To overcome this limitation, we first propose an extension of MCT (EMCT) to cases of repeated cycles where the failure event is either direct or in competition with other events. Numerical results show that EMCT gives better estimates than MC. Second, we introduce a new computational technique, called the Dissociation Method, for systems with independent components. We combine it with both MC and EMCT. Through different numerical examples, we observe a significant improvement of the results.

Keywords: Reliability analysis, Petri nets, Fast Monte Carlo simulation method, Dissociation method, MCT
∗ Corresponding author. ∗∗ Principal corresponding author.
Email addresses: [email protected] (M. ESTECAHANDY), [email protected] (L. BORDES), [email protected] (S. COLLAS), [email protected] (C. PAROISSIN).
Preprint submitted to Elsevier.

1. Introduction

The reliability analysis of instrumented safety systems is an important industrial concern in many fields such as public transportation, power plants, and oil and gas installations. The study of such equipment becomes more and more complicated, mainly because of the evolution of the operating context (maintenance policies, ageing processes, climate change, subsea installations, ...). Consequently, the standard modeling languages may be less and less appropriate for the mathematical models corresponding to these real-world systems. Indeed, the Fault Tree [1] is an approach which only models independent components; thus, this method is not relevant for treating, for instance, components assigned to the same repair team. Moreover, multi-phase Markov processes [2] are limited to exponentially distributed transitions between states (this can be bypassed, but it leads to models with a potentially huge state space and can be too complex in practice); hence, this approach is not well-suited to modeling the ageing of components with a Weibull distribution. Therefore, Petri nets [3] offer a powerful alternative [4]. Furthermore, traditional analytic or numerical methods employed for analysing the performance of these reliable systems are difficult to apply, or even no longer valid. This is due to the size and the stringent hypotheses of the
model, and also to the intricacy of the formulation and computation of the required dependability indicators. Thus, Monte Carlo simulation becomes a relevant option to obtain approximate numerical results. Given these considerations, one solution to carry out reliability analyses of instrumented safety systems is to combine Monte Carlo simulation with Petri nets.

However, these instrumented safety systems have to be very reliable, especially when they represent the ultimate safety barriers preventing disastrous events. In the Oil and Gas industry, such systems are called High Integrity Protection Systems (HIPS) [5]. For instance, a HIPS is able to detect an overpressure in the installation thanks to sensors, and to eliminate the source of the problem by closing appropriate valves, which avoids the explosion of downstream equipment. Thus, the failure of such a system is by definition a rare event that must be limited. To do this, their components are tested during periodic interventions in order to detect hidden failures, and the revealed failures are then fixed by repair teams. A problem with Monte Carlo simulation remains when we want to obtain accurate estimators for such highly reliable processes. Indeed, the simulation of a rare event (system failure) requires a very long computing time.

To address this issue, some fast simulation techniques have been developed. We can classify these Variance Reduction Methods (VRM) into two main families: Importance Sampling (IS) [6, 7, 8] and Multilevel Splitting [9]. The main idea of IS is to change some probability distributions of the model associated with the event of interest, taking into account the other model distributions, in order to increase the occurrence of the rare event (e.g., to transform the parameters of the failure distribution with respect to the parameters of the repair distribution). The second one is a computational technique which gives preferential treatment to the trajectories leading more directly to the rare event. Nevertheless, to employ these approaches, we need a deep knowledge of the studied systems, which is not always characterized in the model. In addition, they strongly depend on parameters that are difficult to optimize automatically. Beyond these common methods, which are not well-suited to Petri nets, another technique close to IS consists in right-truncating the probability distributions of the model that lead to the rare event. This is the principle of the "Méthode de Conditionnement Temporel" (MCT) [10], also called Failure Forcing [11] for Markov processes. This method does not require us to know all the model distributions, but it is only defined when the rare event consists in reaching an absorbing state, which limits its application range. To overcome this latter constraint, we first extend MCT to cases of repeated cycles where the failure event is either direct or in competition with other events, and call this extension EMCT. Second, we propose a computational technique, called the Dissociation Method (DM), for systems with independent components, and then combine it with the results of the first extension. Our aim is to estimate the Probability of Failure on Demand (PFD) of HIPS, whatever their structure. Therefore, we consider numerical examples where the components have different types of failures that can be direct or in conflict with others.

The paper is organized as follows. In Section 2, we present the main quantity of interest evaluating the performance of a HIPS, given by the PFD. Next, we recall the principle of MCT and introduce its extension EMCT in Section 3. Then, we describe the Dissociation method in Section 4. Finally, in Section 5, some numerical experiments are presented to illustrate the main features and the performance of each method. The proofs of the different established results can be found in Appendix A.

2. Reliability Analysis

In the field of functional safety, one of the main quantities of interest measuring the performance of a HIPS is the Probability of Failure on Demand (PFD). This probability is not formalized mathematically in volume 4 of norm IEC 61508 [12], although it is devoted exclusively to definitions and abbreviations. Therefore, we propose defining this indicator on the basis of Innal's thesis [13] in the following way. The indicator PFD(t) corresponds to the unavailability at time t of a system related to safety, which makes it unable to properly perform its safety function. For a system composed of m components (m ≥ 1), this quantity is given, for all t ∈ R+, by:

PFD(t) = P(Φ(X(t)) = 0),

where:

• X(t) = (X_1(t), ..., X_m(t)) ∈ E = E_1 × ... × E_m is the state vector of the m components at time t, with E_i being the state space of X_i, for all i ∈ {1, ..., m};

• Φ is a structure-type function from E to {0, 1}: 0 represents the set of the undesired states of the system and 1 the complementary states.

For complex systems, a closed-form expression for PFD(t) is most of the time difficult to obtain. Thus, an alternative is the use of Monte Carlo methods. Let X^{(1)}, ..., X^{(n)} be n i.i.d. copies of X. Then, an estimator of PFD(t) is given by:

∀ t ≥ 0,  \widehat{PFD}(t) = (1/n) Σ_{i=1}^{n} Φ̄(X^{(i)}(t)),  where Φ̄ = 1 − Φ.

It is an unbiased estimator of PFD(t). In addition,

CI_{1−α}[PFD(t)] = [ \widehat{PFD}(t) + q_{α/2} σ̂/√n , \widehat{PFD}(t) + q_{1−α/2} σ̂/√n ]

is a confidence interval (CI) of level 1 − α (α ∈ (0, 1)) for PFD(t), where

σ̂² = \widehat{PFD}(t) (1 − \widehat{PFD}(t)) n/(n − 1)

and q_β is the β-quantile of the distribution N(0, 1). In this paper, our aim is to propose acceleration methods for Monte Carlo simulation that allow us to improve the estimation of this indicator. Note that a system can have a very low PFD(t) because its components have very low failure rates. Nevertheless, the simultaneous unavailability of more or less reliable components can also correspond to a rare event. Consequently, we have developed two acceleration methods in order to address these two rarity factors. The first one, denoted by EMCT, intends to accelerate the first type, and the second one, called DM, the other. Thus, EMCT is defined at the component level and DM at the system level when m ≥ 2.

3. Extension of MCT (EMCT)

3.1. Principle of MCT

As stated in the introduction, the truncation method MCT [10] is defined when the rare event consists in reaching an absorbing state and only if the probability distributions are continuous. We first recall here the principle of MCT. Let us consider a non-repairable component which fails after an operating time denoted by ξ. The principle of MCT is to condition this operating time ξ so that it occurs before a positive time limit Δ. We denote this conditioned time by ξ̃; its distribution takes the following form:

L(ξ̃) = L(ξ | ξ ≤ Δ).   (1)

Let F_ξ, f_ξ and F_ξ^{−} be respectively the cumulative distribution function (cdf), the probability density function (pdf) and the quantile function (qf) of ξ. From Eq. (1), we obtain the following results.

• The cdf of ξ̃, denoted by F_ξ̃, is given, for all t ∈ R+, by:

F_ξ̃(t) = (F_ξ(t)/F_ξ(Δ)) 1_{t∈[0,Δ]} + 1_{t>Δ},   (2)

with 1_A = 1 if A is satisfied and 0 otherwise.
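As a quick sanity check of this truncation, the following sketch draws conditioned times by inverting the truncated cdf of Eq. (2) and compares their empirical distribution with F_ξ(t)/F_ξ(Δ). It is written in Python (the paper's own implementation, introduced later, is in R), and the exponential rate and Δ are purely illustrative.

```python
import math
import random

LAM = 5e-7       # illustrative exponential failure rate
DELTA = 8759.0   # illustrative truncation limit

def cdf(t):
    """Cdf of the operating time xi ~ Exp(LAM)."""
    return 1.0 - math.exp(-LAM * t)

def quantile(u):
    """Quantile function of xi ~ Exp(LAM)."""
    return -math.log(1.0 - u) / LAM

def draw_conditioned():
    """Draw xi_tilde = (xi | xi <= DELTA) by inverting the truncated cdf
    of Eq. (2): F_tilde(t) = F(t) / F(DELTA) on [0, DELTA]."""
    return quantile(random.random() * cdf(DELTA))

random.seed(0)
draws = [draw_conditioned() for _ in range(20000)]
# All conditioned draws fall in [0, DELTA]; for any t <= DELTA the fraction
# of draws below t approaches F(t) / F(DELTA) as the sample grows.
```

Every draw lands in [0, Δ] by construction, which is exactly the forcing effect MCT exploits: no simulation budget is spent on trajectories where the failure occurs after the horizon.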
• The pdf of ξ̃, denoted by f_ξ̃, is expressed, for all t ∈ R+, as follows:

f_ξ̃(t) = (f_ξ(t)/F_ξ(Δ)) 1_{t≤Δ}.   (3)

• The qf of ξ̃, denoted by F_ξ̃^{−}, is, for all u ∈ [0, 1]:

F_ξ̃^{−}(u) = F_ξ^{−}(u F_ξ(Δ)).   (4)

From Eqs. (2)-(4), R. Garnier found a deconditioning formula which allows one to decelerate the accelerated PFD(t). However, it is only valid for sequential events where the failure is direct and the rare event consists in reaching an absorbing state. Therefore, the range of application of the corresponding deconditioning formula is limited. We thus propose an extension of this method which can deal with repeated cycles, where the repair instant is either deterministic (renewal) or random, and with components with failures in competition. To this end, we introduce a different truncation.

3.2. Principle of EMCT

Let us consider a repairable component with p types of failure in competition. The principle of EMCT is to condition each of the p operating times in conflict, denoted by ξ_1, ..., ξ_p, so that the minimum of these times is smaller than the positive time limit Δ. We denote by ξ̃_k, for k = 1, ..., p, the corresponding conditioned operating times. Their distributions are given, for all k ∈ {1, ..., p}, by:

L(ξ̃_k) = L(ξ_k | min(ξ_1, ..., ξ_p) ≤ Δ).   (5)

Let ξ_{1:p} = (ξ_1, ..., ξ_p) and ξ̄_k = min{ξ_i, i ∈ {1, ..., p}\{k}}. Applying Eq. (5), the distribution of ξ̃_k has the following properties for any positive Δ.

• The cdf of ξ̃_k, denoted by F_ξ̃k, is given, for all t ∈ R+, by:

F_ξ̃k(t) = [P(ξ_k ≤ t)/P(min ξ_{1:p} ≤ Δ)] 1_{t≤Δ} + [(P(ξ_k ≤ Δ) + P(Δ < ξ_k ≤ t) P(ξ̄_k ≤ Δ))/P(min ξ_{1:p} ≤ Δ)] 1_{t>Δ}.   (6)

• The pdf of ξ̃_k, denoted by f_ξ̃k, is expressed as follows, for all t ∈ R+:

f_ξ̃k(t) = [f_ξk(t) 1_{t≤Δ} + f_ξk(t) P(ξ̄_k ≤ Δ) 1_{t>Δ}] / P(min ξ_{1:p} ≤ Δ).

• The qf of ξ̃_k, denoted by F_ξ̃k^{−}, satisfies the following formula, when Δ is positive:

F_ξ̃k^{−}(u) = F_ξk^{−}(u P(min ξ_{1:p} ≤ Δ))   (7)

if u ∈ [0, F_ξ̃k(Δ)], and

F_ξ̃k^{−}(u) = F_ξk^{−}( (u P(min ξ_{1:p} ≤ Δ) − F_ξk(Δ)) / F_ξ̄k(Δ) + F_ξk(Δ) )   (8)

if u ∈ (F_ξ̃k(Δ), 1].

In order to apply our method, we have to simulate the conditioned operating times ξ̃_1, ..., ξ̃_p. We must point out that, in the case of MCT, we only need the inverse cumulative distribution function given by Eq. (4) to obtain ξ̃. In our context, the cdf of ξ̃_k given by Eq. (6) is more complicated. We have to use the following algorithm, for all t ≤ Δ and p ≥ 2.

Algorithm 3.1. Let ξ̆_j, for j ∈ {1, ..., p}, be the operating time ξ_j given that the minimum of ξ_j, ..., ξ_p is smaller than Δ, and let (U_j)_{j∈{1,...,p}} be i.i.d. uniform random variables on (0, 1).

Step 1. Initialization: ξ̃_1 = F_ξ̃1^{−}(U_1) (cf. Eqs. (7)-(8)).

Step 2. For j ranging from 2 to p − 1:

ξ̃_j = F_ξj^{−}(U_j) 1_{min ξ̃_{1:j−1} ≤ Δ} + F_ξ̆j^{−}(U_j) 1_{min ξ̃_{1:j−1} > Δ},   (9)

where

F_ξ̆j^{−}(u) = F_ξj^{−}(u P(min ξ_{j:p} ≤ Δ)) if u ∈ [0, F_ξ̆j(Δ)]

and

F_ξ̆j^{−}(u) = F_ξj^{−}( (u P(min ξ_{j:p} ≤ Δ) − F_ξj(Δ)) / P(min ξ_{j+1:p} ≤ Δ) + F_ξj(Δ) ) if u ∈ (F_ξ̆j(Δ), 1].

Step 3. For j = p:

ξ̃_p = F_ξp^{−}(U_p) 1_{min ξ̃_{1:p−1} ≤ Δ} + F_ξ̆p^{−}(U_p) 1_{min ξ̃_{1:p−1} > Δ},   (10)

where F_ξ̆p^{−}(u) = F_ξp^{−}(u P(ξ_p ≤ Δ)), ∀u ∈ (0, 1).

The proof of this algorithm is given in Appendix A.1. Once the mathematical framework is stated, we shall now apply it to three of the most frequent structures of components in HIPS studies.

3.3. Case of a component with a hidden failure

In this section, we consider a component for which the failure is detected only during inspections (hidden failure) that occur periodically. Repairs are assumed to be instantaneous and perfect.
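Before specializing to this structure, Algorithm 3.1 can be made concrete for p = 2 exponential operating times. The sketch below (Python standing in for the authors' R implementation; the rates, Δ and all function names are our own illustrative choices) draws a pair (ξ̃_1, ξ̃_2) conditioned on min(ξ_1, ξ_2) ≤ Δ using Eqs. (7), (8) and (10); here ξ̄_1 = ξ_2.

```python
import math
import random

L1, L2 = 5e-6, 1.74e-4    # illustrative rates of the two competing failure modes
DELTA = 8759.0
P_MIN = 1.0 - math.exp(-(L1 + L2) * DELTA)   # P(min(xi1, xi2) <= Delta)

def cdf_exp(lam, t):
    return 1.0 - math.exp(-lam * t)

def q_exp(lam, u):
    """Quantile function of an exponential distribution."""
    return -math.log(1.0 - u) / lam

def draw_conditioned_pair():
    """Algorithm 3.1 for p = 2: draw (xi1, xi2) given min(xi1, xi2) <= Delta."""
    u1, u2 = random.random(), random.random()
    # Step 1: xi1_tilde via Eqs. (7)-(8); for p = 2, xi_bar_1 = xi2.
    f1d = cdf_exp(L1, DELTA)
    threshold = f1d / P_MIN               # F_{xi1_tilde}(DELTA), from Eq. (6)
    if u1 <= threshold:
        x1 = q_exp(L1, u1 * P_MIN)        # Eq. (7)
    else:                                  # Eq. (8)
        x1 = q_exp(L1, (u1 * P_MIN - f1d) / cdf_exp(L2, DELTA) + f1d)
    # Step 3 (j = p = 2): unconditioned draw if the constraint already holds,
    # otherwise xi2 conditioned on {xi2 <= Delta}, Eq. (10).
    if x1 <= DELTA:
        x2 = q_exp(L2, u2)
    else:
        x2 = q_exp(L2, u2 * cdf_exp(L2, DELTA))
    return x1, x2

random.seed(7)
pairs = [draw_conditioned_pair() for _ in range(20000)]
# min(x1, x2) <= DELTA holds for every pair, by construction.
```

The two-branch quantile is continuous at u_1 = F_ξ̃1(Δ) (both branches then return Δ), and whenever the first draw exceeds Δ the second one is forced below it, which is precisely the constraint of Eq. (5).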
3.3.1. Model

We denote by c the period between two successive inspections. The date of the ith inspection is ic, for all i ∈ N∗. We want to accelerate the simulation time necessary to estimate PFD(t), which is the probability of being in the state 0 at time t. Therefore, the first step consists in formulating this probability introduced in Section 2. Let R be such that Rc is the last renewal time before t and, conditionally to {R = r}, let B_r^{(k)} be the event "exactly k renewals occur before rc (including rc)". Then, for all s ≥ 1 and t ∈ [sc, (s + 1)c), we have:

PFD(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} PFD^{(r,k)}(t),   (11)

where

PFD^{(r,k)}(t) = P(sc − rc < ξ ≤ t − rc) P(B_r^{(k)}),   (12)

with P(B_r^{(0)}) = δ_r, where δ_r is the Kronecker symbol, that is, δ_r = 1 if r = 0 and δ_r = 0 otherwise, and, for k ≤ r,

P(B_r^{(k)}) = Σ_{i=1}^{r−k+1} P((i − 1)c < ξ ≤ ic) P(B_{r−i}^{(k−1)}).

The previous Eqs. (11)-(12) are proved in Appendix A.2.

3.3.2. Application of EMCT

For all t ∈ [sc, (s + 1)c) and Δ ≥ t, the accelerated PFD at time t resulting from the truncation of the failure distribution according to Eq. (5) (p = 1), denoted by PFDacc(t), is given by:

PFDacc(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} PFD_acc^{(r,k)}(t),   (13)

with PFD_acc^{(r,k)}(t) = PFD^{(r,k)}(t) / P(ξ ≤ Δ)^{k+1}, where PFD^{(r,k)}(t) represents the unavailability when k, the number of renewals occurring before rc (including rc), and r, such that rc is the last renewal time before t, are given. The proof of Eq. (13) is shown in Appendix A.2. It follows from the latter equation that the deconditioning formula of the unavailability of a component with a hidden failure is expressed as follows:

PFD(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} P(ξ ≤ Δ)^{k+1} PFD_acc^{(r,k)}(t), ∀t ≤ Δ,

where PFD_acc^{(r,k)}(t) = P(Φ(X_acc^{(r,k)}(t)) = 0), with X_acc^{(r,k)} being the process X_acc conditionally to B_r^{(k)}. Then, the estimator of PFD(t) is given, for all t ≤ Δ, by:

\widehat{PFD}(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} P(ξ ≤ Δ)^{k+1} \widehat{PFD}_acc^{(r,k)}(t),

where \widehat{PFD}_acc^{(r,k)}(t) = (1/n) Σ_{i=1}^{n} 1_{Φ(X_acc^{(i,r,k)}(t)) = 0} and X_acc^{(i,r,k)}, i ∈ {1, ..., n}, are i.i.d. copies of the process X_acc^{(r,k)}.

3.3.3. Numerical example

We here illustrate the efficiency of the proposed EMCT. The algorithms are implemented within the free software environment R [14]. The failure behavior of the component follows an exponential distribution of parameter 5 × 10−7 and it is repaired during inspections of periodicity 2190 hours (a quarter). Note that we choose exponential distributions in all examples, but other distributions could be used; we make this choice in order to compare our results with analytical values obtained from the GRIF-Workshop Markov Package [15]. Our goal is to evaluate the PFD at time t = 8759 (≈ 1 year, not a multiple of c); therefore, we set Δ = 8759. The results of our simulations are reported in Figure 1. The blue curves show the behavior of the component simulated with the standard Monte Carlo method (MC), and the red curves depict the results obtained with the accelerated Monte Carlo simulation introduced in the previous section (EMCT). The dotted curves represent the CI of level 95% for the method of the same color. According to Figure 1(a), the results obtained with the acceleration method are better than those provided by the standard method in terms of number of simulations. Moreover, it should be noted that the computing times are equivalent. Thus, we can analyse the sensitivity of the accuracy of these estimators with respect to the number of simulations. As can be seen in Figure 1(b), an accuracy level of 7 × 10−5 can be achieved by EMCT with only 10,000 simulations, whereas 1,000,000 simulations are not sufficient when employing the standard MC. Therefore, the acceleration ratio is greater than 100.

Figure 1: One component with a hidden failure. (a) PFD(8759) as a function of the number of simulations. (b) Length of CI as a function of the number of simulations.
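The whole pipeline for this example can be sketched as follows (in Python, whereas the paper's implementation is in R; the trajectory logic is our own reading of the model, with the parameter values of Section 3.3.3). Standard MC simulates unconditioned lifetimes, while the EMCT version draws each lifetime conditioned on {ξ ≤ Δ} via Eq. (4) and deconditions the outcome with the weight P(ξ ≤ Δ)^{k+1} of Eq. (13).

```python
import math
import random

LAM = 5e-7      # exponential failure rate (Section 3.3.3)
C = 2190.0      # inspection period (hours)
T = 8759.0      # evaluation time; Delta = T
F_DELTA = 1.0 - math.exp(-LAM * T)   # P(xi <= Delta)

def next_inspection(t):
    """First inspection date ic at or after t (repairs are instantaneous)."""
    return math.ceil(t / C) * C

def mc_trajectory():
    """Standard MC: returns 1.0 if the component is in hidden failure at T."""
    time = 0.0
    while True:
        fail = time + random.expovariate(LAM)
        if fail > T:
            return 0.0                 # still up at T
        insp = next_inspection(fail)
        if insp > T:
            return 1.0                 # failed before T, not yet inspected
        time = insp                    # perfect instantaneous renewal

def emct_trajectory():
    """EMCT: lifetimes drawn conditioned on {xi <= Delta} (Eq. (4)), the
    outcome deconditioned by P(xi <= Delta)^(number of draws), cf. Eq. (13)."""
    time, weight = 0.0, 1.0
    while True:
        xi = -math.log(1.0 - random.random() * F_DELTA) / LAM  # conditioned draw
        weight *= F_DELTA
        fail = time + xi
        if fail > T:
            return 0.0
        insp = next_inspection(fail)
        if insp > T:
            return weight
        time = insp

random.seed(42)
n = 20000
mc = [mc_trajectory() for _ in range(n)]
acc = [emct_trajectory() for _ in range(n)]
# Both sample means estimate PFD(8759) (about 1.1e-3 for these parameters);
# the conditioned contributions have far smaller variance, hence a shorter CI.
```

The variance gap between the two lists of contributions is what drives the acceleration ratio reported for Figure 1.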
3.4. Case of a component with p revealed failures

Next, we apply the method to a component with failures in competition. When the failure is detected instantaneously, we refer to a revealed failure.

3.4.1. Model

Let us consider a repairable component which has p types of independent failures. The ith operating time before a type-k failure, for k ∈ {1, ..., p}, is denoted by ξ_k^{(i)} for i ≥ 1, and follows a pdf f_ξk. Then, the ith type-k failure, for i ≥ 1 and k ∈ {1, ..., p}, is fixed after a repair time γ_k^{(i)} having a repair pdf f_γk. These operating times and repair times are independent random variables. Note that here the event of interest is a type-one failure. Let L be the random number of revealed failures preceding the last type-one failure occurring before time t. Conditionally to {L = l}, let T^{(l)} be the duration of the cycles associated with the first l failures, that is,

T^{(0)} = 0 and, ∀ l ≥ 1, T^{(l)} = Σ_{i=1}^{l} S^{(i)},

where, for all i ∈ N∗, S^{(i)} = ξ_{J_i}^{(i)} + γ_{J_i}^{(i)} with ξ_{J_i}^{(i)} = min ξ_{1:p}^{(i)}. As in the previous section, we first define the PFD(t) of the component. The quantity of interest at time t is given, for all t ∈ R+, by:

PFD(t) = Σ_{l≥1} PFD^{(l)}(t),

where PFD^{(l)}(t) = P({T^{(l−1)} + ξ_{J_l}^{(l)} ≤ t < T^{(l−1)} + ξ_{J_l}^{(l)} + γ_{J_l}^{(l)}} ∩ {J_l = 1}).

3.4.2. Application of EMCT

The PFD at time t of the component with p revealed failures in conflict, accelerated using Eq. (5) and denoted by PFDacc(t), is equal, for all t ≤ Δ, to:

PFDacc(t) = Σ_{l≥1} PFD_acc^{(l)}(t),   (14)

with PFD_acc^{(l)}(t) = PFD^{(l)}(t) / P(min ξ_{1:p} ≤ Δ)^{l}, where l corresponds to the number of cycles, i.e., to the number of failures of the component before time t (see Appendix A.3 for the proof). From the previous equation, we obtain the following deconditioning formula of the accelerated component with p revealed failures:

PFD(t) = Σ_{l≥1} P(min ξ_{1:p} ≤ Δ)^{l} PFD_acc^{(l)}(t), ∀t ≤ Δ,

where PFD_acc^{(l)}(t) = P(Φ(X_acc^{(l)}(t)) = 0), with X_acc^{(l)} being the process X_acc when the number of cycles is equal to l. Then, it follows that the estimator of PFD(t) is given, for all t ≤ Δ, by:

\widehat{PFD}(t) = Σ_{l≥1} P(min ξ_{1:p} ≤ Δ)^{l} \widehat{PFD}_acc^{(l)}(t),

where \widehat{PFD}_acc^{(l)}(t) = (1/n) Σ_{i=1}^{n} 1_{Φ(X_acc^{(i,l)}(t)) = 0} and X_acc^{(i,l)}, i ∈ {1, ..., n}, are i.i.d. copies of the process X_acc^{(l)}.

3.4.3. Numerical example

Let us consider a component with two types of revealed failures. Both types have exponentially distributed operating times with parameters 5 × 10−6 and 1.74 × 10−4 respectively. The repair distribution is the same for the two types of failures and is given by an exponential distribution of parameter 1.742 × 10−1. We want, thanks to the proposed method, to estimate the probability of being in type-one failure at time t = 8759, that is, to evaluate PFD(8759). Therefore, we set Δ = 8759. The results are reported in Figure 2. As in the previous example, we observe in Figure 2(a) that the estimator provided by the proposed acceleration method is better than the one obtained with the standard Monte Carlo simulation in terms of number of simulations. More specifically, the estimator converges faster and is closer to the theoretical value and, moreover, the length of its CI is smaller. Once again, it is worth mentioning that the computing times are similar. Thus, we can compare the number of simulations needed to achieve a prescribed level of accuracy for the two methods. As illustrated in Figure 2(b), 600,000 simulations are required to obtain a length of CI of the order of 2.3 × 10−5 with MC, when 400,000 are necessary with EMCT. Hence, the acceleration ratio is about 1.5.
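For the exponential case of this example, the cycle simulation with the weight P(min ξ_{1:p} ≤ Δ)^{l} of Eq. (14) can be sketched compactly. The sketch below (Python rather than the authors' R; all names are ours) exploits the exponential assumption: min(ξ_1, ξ_2) is itself exponential with rate λ_1 + λ_2 and the index of the failing mode is independent of the minimum, which shortcuts the general Algorithm 3.1.

```python
import math
import random

# Competing revealed failures, parameters of Section 3.4.3 (rates per hour).
L1, L2 = 5e-6, 1.74e-4   # type-1 / type-2 failure rates
MU = 1.742e-1            # common repair rate
T = 8759.0
DELTA = T
LSUM = L1 + L2
P_MIN = 1.0 - math.exp(-LSUM * DELTA)    # P(min xi_{1:2} <= Delta)

def emct_trajectory():
    """One conditioned trajectory, deconditioned by P(min <= Delta)^l (Eq. (14)).
    The conditioned minimum is drawn directly by inverse cdf; the failing mode
    is then picked independently with probability L1 / (L1 + L2)."""
    time, weight = 0.0, 1.0
    while True:
        m = -math.log(1.0 - random.random() * P_MIN) / LSUM  # minimum, <= Delta
        weight *= P_MIN
        fail = time + m
        if fail > T:
            return 0.0                          # component up at T
        mode1 = random.random() < L1 / LSUM     # which mode caused the failure
        repair_end = fail + random.expovariate(MU)
        if repair_end > T:
            return weight if mode1 else 0.0     # down at T; count type-1 only
        time = repair_end                       # cycle l -> l + 1

random.seed(11)
n = 300000
est = sum(emct_trajectory() for _ in range(n)) / n
# est approximates PFD(8759), of the order of a few 1e-5 for these parameters.
```

Trajectories in which the minimum would exceed Δ ≥ t are never sampled; they leave the component up at t and hence contribute zero, so dropping them keeps the weighted estimator unbiased.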
Figure 2: One component with two revealed failures. (a) PFD(8759) as a function of the number of simulations. (b) Length of CI as a function of the number of simulations.

3.5. Case of a component with a hidden failure and a revealed failure

Finally, we study the most frequent structure in HIPS studies, given by a component with both a hidden failure and a revealed failure.

3.5.1. Model

Let us consider a repairable component which has two independent types of failure. The first type is a hidden failure as in Section 3.3, that is, the component fails at the end of a random time ξ_1^{(i)}, for i ≥ 1, with some pdf f_ξ1, and it is perfectly and instantaneously repaired (renewed) at inspection times of periodicity c. The second type is a revealed failure as in Section 3.4 with p equal to 1. The operating time before this type of failure, denoted by ξ_2^{(i)}, for i ≥ 1, has a pdf f_ξ2 and its repair time γ_2^{(i)}, for i ≥ 1, has a pdf f_γ2. Note that here the event of interest is the hidden failure. Let L be the random number of revealed failures before a hidden failure. Conditionally to {L = l}, let:

T^{(l)} = Σ_{i=1}^{l} (ξ_2^{(i)} + γ_2^{(i)}),

which stands for the duration of the cycles corresponding to l successive revealed breakdowns. Thus, T^{(l)} + ξ_1^{(l+1)} represents the time required to be in hidden failure between two renewals. As previously, we first need to derive the expression of PFD(t) for the component. Conditionally to {L = l}, let A^{(l)} be the event "exactly l revealed failures followed by one hidden failure occur". Hence, A^{(0)} = {ξ_1^{(1)} ≤ ξ_2^{(1)}} and, for all l ≥ 1,

A^{(l)} = (∩_{j=1}^{l} {ξ_1^{(j)} > ξ_2^{(j)}}) ∩ {ξ_1^{(l+1)} ≤ ξ_2^{(l+1)}}.

Let R be such that Rc is the last renewal time before t. Conditionally to {R = r}, we denote by B_r^{(k,l_k)} the event "exactly k renewals occur before rc (including rc), where the number of revealed failures between each renewal instant is stored in l_k", with l_0 = l_0 ∈ N and, for all k ≥ 1, l_k = (l_1, ..., l_k) ∈ N^k. Then, for k ≤ r, we have:

P(B_r^{(k,l_k)}) = Σ_{i=1}^{r−k+1} P({(i − 1)c < T^{(l_k)} + ξ_1^{(l_k+1)} ≤ ic} ∩ A^{(l_k)}) P(B_{r−i}^{(k−1,l_{k−1})}),   (15)

and P(B_r^{(0,l_0)}) = δ_r. Then, for all s ≥ 1 and t ∈ [sc, (s + 1)c), we have the following expression for the PFD at time t:

PFD(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} Σ_{l_{k+1}∈N^{k+1}} PFD^{(r,k,l_{k+1})}(t),   (16)

where PFD^{(r,k,l_{k+1})}(t) = P({sc − rc < T^{(l_{k+1})} + ξ_1^{(l_{k+1}+1)} ≤ t − rc} ∩ A^{(l_{k+1})}) P(B_r^{(k,l_k)}).

Eqs. (15)-(16) are demonstrated in Appendix A.4.

3.5.2. Application of EMCT

Let |l_{k+1}| = Σ_{j=1}^{k+1} l_j, for any k ≤ r. For all t ∈ [sc, (s + 1)c) and Δ ≥ t, the accelerated PFD at time t resulting from the truncation introduced by Eq. (5), denoted by PFDacc(t), is given by:

PFDacc(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} Σ_{l_{k+1}∈N^{k+1}} PFD_acc^{(r,k,l_{k+1})}(t),   (17)

with PFD_acc^{(r,k,l_{k+1})}(t) = PFD^{(r,k,l_{k+1})}(t) / P(min(ξ_1, ξ_2) ≤ Δ)^{|l_{k+1}|+k+1}, where PFD^{(r,k,l_{k+1})}(t) represents the unavailability when k and l_{k+1} are given and correspond, respectively, to the number of renewals (hidden failures) before rc (including rc), r being such that rc is the last renewal time before t, and to the number of revealed failures between each renewal (see Appendix A.4 for the proof). Finally, it follows from Eq. (17) that the deconditioning formula of the unavailability of a component with both a hidden failure and a revealed failure is expressed as follows, for all t ≤ Δ:

PFD(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} Σ_{l_{k+1}∈N^{k+1}} P(min(ξ_1, ξ_2) ≤ Δ)^{|l_{k+1}|+k+1} PFD_acc^{(r,k,l_{k+1})}(t),

with PFD_acc^{(r,k,l_{k+1})}(t) = P(Φ(X_acc^{(r,k,l_{k+1})}(t)) = 0), X_acc^{(r,k,l_{k+1})} being the process X_acc conditionally to B_r^{(k,l_k)}. Then, the estimator of PFD(t) is given, for all t ≤ Δ, by:

\widehat{PFD}(t) = Σ_{r=0}^{s} Σ_{k=0}^{r} Σ_{l_{k+1}∈N^{k+1}} P(min(ξ_1, ξ_2) ≤ Δ)^{|l_{k+1}|+k+1} \widehat{PFD}_acc^{(r,k,l_{k+1})}(t),

where \widehat{PFD}_acc^{(r,k,l_{k+1})}(t) = (1/n) Σ_{i=1}^{n} 1_{Φ(X_acc^{(i,r,k,l_{k+1})}(t)) = 0} and X_acc^{(i,r,k,l_{k+1})}, i ∈ {1, ..., n}, are i.i.d. copies of the process X_acc^{(r,k,l_{k+1})}.

3.5.3. Numerical example

Let us consider a component with two types of failure. The first one has an exponentially distributed operating time of parameter 5 × 10−6 and is repaired during inspections of periodicity 2190 hours. The second one has an exponentially distributed operating time of parameter 1.74 × 10−4, and the repair is given by an exponential distribution of parameter 1.742 × 10−1. Thanks to the proposed method, we want to estimate the probability of being in type-one failure at time t = 8759, that is, PFD(8759). Therefore, we set Δ = 8759 in order to apply EMCT. Similar observations to the previous examples can be made about the results depicted in Figure 3. In this case, we can note that the event of interest is not rare, since PFD(8759) is about 10−2. EMCT provides a better estimator than MC. Indeed, Figure 3(a) indicates that it converges faster and is closer to the theoretical value. Once more, since the computing times turn out to be again similar, we compare the number of simulations needed to obtain the same level of accuracy with both methods. As can be seen in Figure 3(b), 660,000 simulations are required to achieve a length of CI of the order of 5 × 10−4 with MC, while only 380,000 are necessary with EMCT. The acceleration ratio is thus around 1.7.

Figure 3: One component with a hidden failure and a revealed failure. (a) PFD(8759) as a function of the number of simulations. (b) Length of CI as a function of the number of simulations.
j m−1 1 ¯ gk Xg(1) (t) Var Φ k j n 1g <...
4. Dissociation Method (DM)
+
4.1. Principle of DM This method, that we call Dissociation method, makes sense when the rare event is a combination of events. Its principle is described as follows. We assume that the m components of the system are independent. Instead of observing the joint behavior of the m components for the n simulations, the Dissociation method is based on the observation of their m individual behaviors. Thus, the first step consists in simulating each univariate process n times. Then, we combine each individual behavior of the components between them so as to obtain the system behavior. In other words, this second step is a post-processing stage of the n simulations, which crosses all possible cases and finally allows us to obtain nm observations. To summarize, this method increases artificially the number of observations with nm−1 additional simulations. Moreover, this computational technique can be associated with the classical Monte Carlo method (MC+DM) as well as EMCT (EMCT+DM). Indeed, the failure may be a rare event, but the simultaneous failure of reliable components is even rarer. Therefore, EMCT defined in Section 3 can be applied for solving the first point and this method can be useful for the second one. Thus, a combination of these two methods seems to be interesting. For computational purposes, we now examine in more details the calculation of the unavailability of a parallel system. Given the notations of Section 2, the structure-type function of such a system is expressed, for all t ∈ R+ , as follows: Φ(X(t)) = 1 −
m
i∈{1,...,m}\{g1 ,...,gk }
PFD(t) = E\Big[\prod_{i=1}^{m} \big(1 - \Phi_i(X_i(t))\big)\Big],

where \Phi_i, for all i \in \{1, \dots, m\}, is the structure-type function of component i, defined from E_i to \{0, 1\}, with 0 representing the set of undesired states of component i and 1 the complementary states. Consequently, the PFD at time t can be expressed as follows:

PFD^{(m)}(t) := PFD(t) = \prod_{i=1}^{m} P\big(\Phi_i(X_i(t)) = 0\big) = \prod_{i=1}^{m} PFD_i(t).

It should be noted that, in order to evaluate this probability, it is sufficient to know the indicator of each component of the system at time t, that is PFD_i(t), for all i \in \{1, \dots, m\}. Thus, each component can be treated separately. In order to implement the Dissociation method, we make some preliminary calculations that facilitate fast postprocessing. For all t \ge 0 and m \ge 2, we have:

\widehat{PFD}(t) = \prod_{i=1}^{m} \widehat{PFD}_i(t).

For a system of m components (m \ge 2), the variance of the approximated PFD(t) corresponding to the proposed Dissociation method is given, for all t \in R_+, by:

Var_{DM}\big(\widehat{PFD}(t)\big) = \frac{1}{n^m} \prod_{i=1}^{m} Var\big(\bar\Phi_i(X_i^{(1)}(t))\big) + \sum_{j=1}^{m-1} \frac{1}{n^j} \sum_{1 \le g_1 < \dots < g_j \le m} \prod_{k=1}^{j} Var\big(\bar\Phi_{g_k}(X_{g_k}^{(1)}(t))\big) \prod_{i \in \{1,\dots,m\} \setminus \{g_1,\dots,g_j\}} E\big[\bar\Phi_i(X_i^{(1)}(t))\big]^2.   (18)

Note that Eq. (18) is proved in Appendix A.5. In addition, when combining the Dissociation method with MC, we have:

Var_{MC+DM}\big(\widehat{PFD}(t)\big) \le Var_{MC}\big(\widehat{PFD}(t)\big).

The corresponding estimator is given, for all t \in R_+, by replacing, in Eq. (18), each variance Var(\cdot) with the empirical variance \widehat{Var}(\cdot) and each expectation with the corresponding empirical mean.

4.2. Numerical examples

We consider a system S made of two independent components in parallel. In order to illustrate the performance of the proposed method, we consider three cases: two components with a hidden failure (see Section 3.3.3), two components with two revealed failures (see Section 3.4.3), and two components with both a hidden failure and a revealed failure (see Section 3.5.3). We want to assess the performance of each proposed method when evaluating PFD(8759) and compare the results. We report the obtained estimates in Figures 4 to 6. We follow the same color code as previously: EMCT and the standard MC are represented in red and blue, respectively. The MC+DM method corresponds to the purple curve and the combined EMCT+DM method is depicted in orange. In addition, the dotted curves represent the 95% confidence interval (C.I.) of the method drawn in the same color.

4.2.1. Case of two components with a hidden failure in parallel

First, we observe in Figure 4 the results obtained for two components with a hidden failure in parallel. As can be seen in Figure 4(a), the standard MC simulation does not provide any estimate, unlike EMCT, which gives consistent estimators for a similar mean computing time. Furthermore, the estimates obtained using the Dissociation method are better still. More specifically, we can observe in Figure 4(b) that MC+DM provides a reasonable estimator, contrary to MC, which returns no result. Moreover, even though EMCT gives a good estimate, EMCT+DM is even closer to the theoretical result, and its CI curves converge faster to the PFD(t) estimator. Note that the additional computing time observed with these combined methods mainly results from the storage of the behavior of each component; the associated postprocessing procedure is of the order of a few seconds.
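As an illustration of the product estimator above, the following sketch (the failure probabilities and sample size below are made-up values, not taken from our examples) estimates the PFD of a two-component parallel system both by standard MC on the joint rare event and by the Dissociation method, i.e., by multiplying per-component estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-component parallel system (illustrative values): component i
# is unavailable at time t with probability p[i]; by independence the system
# PFD is p[0] * p[1] = 2e-6.
p = np.array([1e-3, 2e-3])
n = 100_000  # simulated histories per component

# Dissociation method: simulate each component separately and multiply
# the per-component estimates PFD_i(t).
ind = rng.random((2, n)) < p[:, None]   # indicators of component unavailability
pfd_i_hat = ind.mean(axis=1)            # per-component PFD estimates
pfd_dm = pfd_i_hat.prod()               # product estimator

# Standard MC on the joint (rare) event, with the same budget n:
joint = (rng.random((2, n)) < p[:, None]).all(axis=0)
pfd_mc = joint.mean()

print(pfd_dm, pfd_mc)
```

With this budget, the joint event (probability 2e-6) is rarely or never observed by plain MC, while the product estimator remains close to the true value because each factor is estimated from frequent events.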
(a) PFD(8759) as a function of the number of simulations. (b) PFD(8759) as a function of the number of simulations - Zoom.

Figure 4: Two components with a hidden failure in parallel.

4.2.2. Case of two components in parallel with two revealed failures

Next, we consider two components in parallel with two revealed failures. We observe the results in Figure 5. The theoretical PFD(8759) is very low (about 10^{-9}). In that case, neither the standard MC nor the EMCT simulation provides estimates. Hence, EMCT seems to be less efficient in the case of failures in conflict. However, the Dissociation method combined with both MC and EMCT gives consistent estimators and thus provides a significant improvement.

Figure 5: PFD(8759) as a function of the number of simulations - Two components in parallel with two revealed failures.

4.2.3. Case of two components with both a hidden failure and a revealed failure in parallel

Finally, we consider in Figure 6 the third case, consisting of two components in parallel with both a hidden failure and a revealed failure. Figure 6(a) indicates that EMCT provides better results than the standard MC simulation. Indeed, the acceleration method requires fewer simulations to converge to the theoretical result. Nevertheless, we can note that the combination with the Dissociation method achieves more accurate estimates than the first two methods. More specifically, we can observe in Figure 6(b) that the corresponding estimates are closer to the theoretical value. It should be noted that the Dissociation method associated with EMCT seems to converge faster to the theoretical result.
(a) PFD(8759) as a function of the number of simulations. (b) PFD(8759) as a function of the number of simulations - Zoom.

Figure 6: Two components with both a hidden failure and a revealed failure in parallel.

To conclude, by and large, EMCT gives better results than the standard MC simulation. In addition, the Dissociation method yields a significant improvement of the results when considering systems made of independent components. Indeed, whatever the degree of rarity of the event of interest, it provides estimates closer to the theoretical value.

5. Benchmarking

Here, we illustrate the main features of the previously applied methods, namely MC and EMCT, as well as their combinations with the Dissociation method, called MC+DM and EMCT+DM, on a benchmark experiment. We consider the three structure configurations previously used to define three systems composed of m independent components in parallel, for m \in \{2, 3, 4\}. We set the length of computing time to 360 seconds. We estimate the PFD at time t = 8759 for each of the four methods, and take \Delta = 8759 hours. We also examine the level of accuracy of the PFD estimator, through its standard deviation, and the number of simulations needed to obtain these quantities. In order to compare the efficiency of our indicators fairly, we analyse 1,000 estimators for each of the methods and examples reported in Tables 1 to 3. To assess their performance, we estimate the mean of \widehat{PFD}(t) by \overline{PFD}(t), as well as the standard deviation of \widehat{PFD}(t) by \hat\sigma(\widehat{PFD}(t)), and finally we approximate the mean squared error (MSE) by

\widehat{MSE}(t) = \hat\sigma^2\big(\widehat{PFD}(t)\big) + \big(\overline{PFD}(t) - PFD(t)\big)^2.

The approximated MSE thus comprises an approximation of the variance of the estimator and the squared bias. We also report the average number of simulations performed during 360 seconds.

First, we consider the systems involving direct hidden failures. The numerical experiments reported in Table 1 tend to indicate that all proposed acceleration methods improve the estimate of our indicator. We can observe that, when PFD(8759) is lower than 10^{-6}, MC does not give any result, whereas EMCT provides unbiased estimates with a small standard deviation. Furthermore, both combinations with the Dissociation method also give good results. However, MC+DM is not better than EMCT, because it gives the same result with worse accuracy. On the contrary, from the values of the approximated MSE, EMCT+DM allows us to gain at least two orders of magnitude in accuracy by comparison with EMCT. To sum up, in the case of rare direct failures, our proposed methods turn out to be really efficient, particularly EMCT+DM.

Second, we consider components with revealed failures in conflict. Note that the quantities of interest are especially rare (PFD(8759) of the order of 10^{-10} or less) in these examples. As can be seen in Table 2, neither MC nor EMCT is able to return results, whereas the application of the Dissociation method allows us to obtain good and similar estimates. Indeed, for the same computing time, DM provides unbiased estimators with a small standard deviation.

Finally, in the last numerical experiments, reported in Table 3, we can make observations similar to those of the previous table. DM allows us to obtain an estimator closer to PFD(t). In addition, if we look at performance in terms of MSE, we gain at least one order of magnitude in accuracy by comparison with MC. Note that the results provided by EMCT+DM and MC+DM are once again similar. Moreover, the results tend to indicate that, even if the event of interest is not very rare (PFD(8759) \approx 10^{-8}), EMCT is not able to provide a significant improvement in the case of failures in conflict.

To conclude, let us summarize the main strengths and weaknesses of our two methods. On the one hand, it should be noted that, unlike MCT, EMCT can deal with repeated cycles by storing the number of failures of the system before the instant t. Moreover, it can also manage failures in conflict thanks to the principle of truncation by the minimum of the p operating times in competition. However, while EMCT proves to be really efficient in the case of direct failures, as observed in Table 1, it is less effective for failures in conflict (see Tables 2 and 3). On the other hand, the applications of the Dissociation method significantly improve these two methods, and turn out to be efficient whatever the structure and the degree of rarity of the PFD(t). Nevertheless, this technique is only valid for independent components.
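The performance indicators used in this benchmark (mean estimate, standard deviation, and approximated MSE as empirical variance plus squared bias) can be computed as in the following sketch, shown here on synthetic estimates rather than on the actual simulation outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values only: a batch of 1,000 independent estimates of PFD(t),
# drawn around a hypothetical theoretical value.
pfd_true = 1.2e-6
estimates = pfd_true + rng.normal(0.0, 1e-8, size=1000)

mean_hat = estimates.mean()           # mean of the PFD estimates
sigma_hat = estimates.std(ddof=1)     # empirical standard deviation
# Approximated MSE = empirical variance + squared bias, as in the formula above.
mse_hat = sigma_hat**2 + (mean_hat - pfd_true)**2

print(mean_hat, sigma_hat, mse_hat)
```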
Table 1: Benchmarking - Structure 1 (hidden failure), m components in parallel. Each line reports the mean estimate, the standard deviation, the average number of simulations, and the approximated MSE.

m = 2 (PFD(8759) = 1.20 × 10^-6):
  MC:      1.17 × 10^-6;  σ̂ = 1.93 × 10^-6;  325,019 sim.; MSE = 3.73 × 10^-12
  EMCT:    1.20 × 10^-6;  σ̂ = 9.13 × 10^-9;  257,352 sim.; MSE = 8.33 × 10^-17
  MC+DM:   1.20 × 10^-6;  σ̂ = 1.21 × 10^-7;  182,758 sim.; MSE = 1.46 × 10^-14
  EMCT+DM: 1.20 × 10^-6;  σ̂ = 6.87 × 10^-9;  175,921 sim.; MSE = 4.71 × 10^-17
m = 3 (PFD(8759) = 1.31 × 10^-9):
  MC:      0;              σ̂ = 0;             311,006 sim.; MSE = 1.71 × 10^-18
  EMCT:    1.31 × 10^-9;   σ̂ = 2.27 × 10^-11; 206,277 sim.; MSE = 5.17 × 10^-22
  MC+DM:   1.32 × 10^-9;   σ̂ = 1.84 × 10^-10; 137,315 sim.; MSE = 3.40 × 10^-20
  EMCT+DM: 1.31 × 10^-9;   σ̂ = 1.11 × 10^-11; 126,631 sim.; MSE = 1.22 × 10^-22
m = 4 (PFD(8759) = 1.43 × 10^-12):
  MC:      0;              σ̂ = 0;             314,453 sim.; MSE = 2.05 × 10^-24
  EMCT:    1.43 × 10^-12;  σ̂ = 5.70 × 10^-14; 158,414 sim.; MSE = 3.25 × 10^-27
  MC+DM:   1.45 × 10^-12;  σ̂ = 2.59 × 10^-13; 115,708 sim.; MSE = 6.72 × 10^-26
  EMCT+DM: 1.43 × 10^-12;  σ̂ = 1.52 × 10^-14; 103,862 sim.; MSE = 2.32 × 10^-28
Table 2: Benchmarking - Structure 2 (two revealed failures), m components in parallel. Each line reports the mean estimate, the standard deviation, the average number of simulations, and the approximated MSE.

m = 2 (PFD(8759) = 8.23 × 10^-10):
  MC:      0;               σ̂ = 0;             200,367 sim.; MSE = 6.77 × 10^-19
  EMCT:    0;               σ̂ = 0;             114,974 sim.; MSE = 6.77 × 10^-19
  MC+DM:   8.27 × 10^-10;   σ̂ = 5.98 × 10^-10; 144,861 sim.; MSE = 3.57 × 10^-19
  EMCT+DM: 8.14 × 10^-10;   σ̂ = 5.40 × 10^-10; 101,688 sim.; MSE = 2.92 × 10^-19
m = 3 (PFD(8759) = 2.36 × 10^-14):
  MC:      0;               σ̂ = 0;             129,287 sim.; MSE = 5.58 × 10^-28
  EMCT:    0;               σ̂ = 0;              62,581 sim.; MSE = 5.58 × 10^-28
  MC+DM:   2.39 × 10^-14;   σ̂ = 2.78 × 10^-14; 101,936 sim.; MSE = 7.72 × 10^-28
  EMCT+DM: 2.14 × 10^-14;   σ̂ = 2.58 × 10^-14;  57,174 sim.; MSE = 6.70 × 10^-28
m = 4 (PFD(8759) = 6.76 × 10^-19):
  MC:      0;               σ̂ = 0;              87,241 sim.; MSE = 4.60 × 10^-37
  EMCT:    0;               σ̂ = 0;              37,887 sim.; MSE = 4.60 × 10^-37
  MC+DM:   6.25 × 10^-19;   σ̂ = 1.23 × 10^-18;  71,617 sim.; MSE = 1.51 × 10^-36
  EMCT+DM: 6.09 × 10^-19;   σ̂ = 1.22 × 10^-18;  36,637 sim.; MSE = 1.49 × 10^-36
6. Conclusion and perspectives

To conclude, the proposed acceleration methods seem to be efficient in the numerical experiments performed, which illustrates the potential of our approaches for such system configurations. These methods are simple to implement and do not depend on the distributions of the model. In addition, the computing time associated with the Dissociation method could easily be reduced by using parallel computing. Note also that both methods can be extended to the considered models in the case where the quantity of interest is a system failure, regardless of its type. Moreover, they should be easily applied to the average on [0, t] of the dependability indicator PFD(t), denoted by PFD_avg(t). Nevertheless, the numerical test cases presented are simple. A short-term prospect will thus be to assess the performance of the designed methods on more complex structures with dependencies, using Petri nets with assertions and predicates. Furthermore, in order to handle dependencies, ongoing work consists in adapting the Dissociation method to systems with dependencies. Finally, another perspective is to combine EMCT with multilevel splitting techniques [16].
Table 3: Benchmarking - Structure 3 (one hidden failure and one revealed failure), m components in parallel. Each line reports the mean estimate, the standard deviation, the average number of simulations, and the approximated MSE.

m = 2 (PFD(8759) = 1.18 × 10^-4):
  MC:      1.19 × 10^-4;  σ̂ = 2.25 × 10^-5;  244,131 sim.; MSE = 5.08 × 10^-10
  EMCT:    1.19 × 10^-4;  σ̂ = 1.78 × 10^-5;  137,306 sim.; MSE = 3.17 × 10^-10
  MC+DM:   1.18 × 10^-4;  σ̂ = 4.56 × 10^-6;  119,337 sim.; MSE = 2.09 × 10^-11
  EMCT+DM: 1.18 × 10^-4;  σ̂ = 4.61 × 10^-6;   65,818 sim.; MSE = 2.13 × 10^-11
m = 3 (PFD(8759) = 1.29 × 10^-6):
  MC:      1.44 × 10^-6;  σ̂ = 2.82 × 10^-6;  171,968 sim.; MSE = 7.97 × 10^-12
  EMCT:    1.29 × 10^-6;  σ̂ = 1.88 × 10^-6;   74,038 sim.; MSE = 3.51 × 10^-12
  MC+DM:   1.28 × 10^-6;  σ̂ = 6.14 × 10^-8;  108,706 sim.; MSE = 3.78 × 10^-15
  EMCT+DM: 1.28 × 10^-6;  σ̂ = 6.32 × 10^-8;   64,117 sim.; MSE = 4.00 × 10^-15
m = 4 (PFD(8759) = 1.41 × 10^-8):
  MC:      8.70 × 10^-9;  σ̂ = 2.75 × 10^-7;  109,587 sim.; MSE = 7.57 × 10^-14
  EMCT:    2.14 × 10^-8;  σ̂ = 2.61 × 10^-7;   44,232 sim.; MSE = 6.80 × 10^-14
  MC+DM:   1.40 × 10^-8;  σ̂ = 1.01 × 10^-9;   71,698 sim.; MSE = 1.01 × 10^-18
  EMCT+DM: 1.40 × 10^-8;  σ̂ = 1.06 × 10^-9;   38,635 sim.; MSE = 1.12 × 10^-18
Acknowledgements

The authors would like to thank two anonymous referees for their insightful and constructive comments.

References

[1] IEC 61025 ed2.0, Fault tree analysis (FTA), International Electrotechnical Commission, Geneva (2006).
[2] IEC 61165 ed2.0, Application of Markov techniques, International Electrotechnical Commission, Geneva (2006).
[3] IEC 62551 ed1.0, Analysis techniques for dependability - Petri net techniques, International Electrotechnical Commission, Geneva (2012).
[4] Y. Dutuit, A. B. Rauzy, J.-P. Signoret, A snapshot of methods and tools to assess safety integrity levels of high-integrity protection systems, Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, Vol. 222, 2008, pp. 371-379.
[5] J. Signoret, Y. Dutuit, A. Rauzy, High integrity protection systems (HIPS): methods and tools for efficient safety integrity levels (SIL) analysis and calculations, Risk, Reliability and Societal Safety (2008) 663-669.
[6] R. E. Melchers, Search-based importance sampling, Structural Safety, Vol. 9, 1990, pp. 117-128.
[7] M. Nakayama, P. Shahabuddin, Quick simulation methods for estimating the unreliability of regenerative models of large highly reliable systems, Probability in the Engineering and Informational Sciences 18 (2004) 339-368.
[8] V. Nicola, P. Heidelberger, P. Shahabuddin, Uniformization and exponential transformation: techniques for fast simulation of highly dependable non-Markovian systems, Proceedings of the 22nd International Symposium on Fault-Tolerant Computing, IEEE Computer Society Press, 1992, pp. 130-139.
[9] P. L'Ecuyer, V. Demers, B. Tuffin, Splitting for rare-event simulation, Winter Simulation Conference, 2006, pp. 137-148.
[10] R. Garnier, Une méthode efficace d'accélération de la simulation des réseaux de Petri stochastiques, Ph.D. thesis, Bordeaux 1 University (France) (1998).
[11] E. E. Lewis, F. Böhm, Monte Carlo simulation of Markov unreliability models, Nuclear Engineering and Design, Vol. 77, 1984, pp. 49-62.
[12] IEC 61508 ed2.0, Functional safety of electrical/electronic/programmable electronic safety-related systems, International Electrotechnical Commission, Geneva (2002).
[13] F. Innal, Contribution à la modélisation des systèmes instrumentés de sécurité et à l'évaluation de leurs performances : analyse critique de la norme CEI 61508, Ph.D. thesis, Bordeaux 1 University (France) (2008).
[14] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2013). URL http://www.R-project.org/
[15] TOTAL, GRIF: GRaphiques Interactifs pour la Fiabilité, Pau, France (2014). URL http://grif-workshop.fr/
[16] M. Khazen, A. Dubi, Monte Carlo variance reduction in applications to systems reliability using phase space splitting, Monte Carlo Methods and Applications, Vol. 10, 2004, pp. 117-128.
Appendix A. Proofs of the results

Appendix A.1. Principle of EMCT

Proof of Algorithm 3.1. Let p \ge 2. First, we generate \tilde\xi_1 thanks to Eqs. (7)-(8). Second, for j from 2 to p - 1, we proceed as follows, for all t \ge 0 and all (t_1, \dots, t_{j-1}) \in (R_+^*)^{j-1}:

F_{\tilde\xi_j \mid \tilde\xi_1 = t_1, \dots, \tilde\xi_{j-1} = t_{j-1}}(t) = \int_0^t f_{\tilde\xi_j \mid \tilde\xi_1 = t_1, \dots, \tilde\xi_{j-1} = t_{j-1}}(t_j)\, dt_j = \int_0^t \frac{f_{\tilde\xi_1, \dots, \tilde\xi_j}(t_1, \dots, t_j)}{f_{\tilde\xi_1, \dots, \tilde\xi_{j-1}}(t_1, \dots, t_{j-1})}\, dt_j.   (A.1)

In order to obtain the expression of f_{\tilde\xi_1, \dots, \tilde\xi_k}(t_1, \dots, t_k), for all 1 \le k < p, we need to formulate F_{\tilde\xi_1, \dots, \tilde\xi_k}(t_1, \dots, t_k). We have:

F_{\tilde\xi_1, \dots, \tilde\xi_k}(t_1, \dots, t_k) = P\Big( \bigcap_{i=1}^{k} \{\xi_i \le t_i\} \,\Big|\, \min \xi_{1:p} \le \Delta \Big) = \frac{P\big( \bigcap_{i=1}^{k} \{\xi_i \le t_i\} \cap \{\min \xi_{1:p} \le \Delta\} \big)}{P(\min \xi_{1:p} \le \Delta)}
= \frac{\prod_{i=1}^{k} F_{\xi_i}(t_i) - \prod_{i=1}^{k} \big(F_{\xi_i}(t_i) - F_{\xi_i}(\Delta)\big) 1_{\{t_i > \Delta\}} \prod_{l=k+1}^{p} S_{\xi_l}(\Delta)}{1 - \prod_{i=1}^{p} S_{\xi_i}(\Delta)},

where S_{\xi_i}(\Delta) = P(\xi_i > \Delta). Then it follows that:

f_{\tilde\xi_1, \dots, \tilde\xi_k}(t_1, \dots, t_k) = \frac{\prod_{i=1}^{k} f_{\xi_i}(t_i) - \prod_{i=1}^{k} f_{\xi_i}(t_i) 1_{\{t_i > \Delta\}} \prod_{l=k+1}^{p} S_{\xi_l}(\Delta)}{1 - \prod_{l=1}^{p} S_{\xi_l}(\Delta)}.   (A.2)

Thus, by substituting Eq. (A.2) into Eq. (A.1), we deduce that:

F_{\tilde\xi_j \mid \tilde\xi_1 = t_1, \dots, \tilde\xi_{j-1} = t_{j-1}}(t) = \frac{\int_0^t \big[ \prod_{i=1}^{j} f_{\xi_i}(t_i) - \prod_{i=1}^{j} f_{\xi_i}(t_i) 1_{\{t_i > \Delta\}} \prod_{l=j+1}^{p} S_{\xi_l}(\Delta) \big]\, dt_j}{\prod_{i=1}^{j-1} f_{\xi_i}(t_i) - \prod_{i=1}^{j-1} f_{\xi_i}(t_i) 1_{\{t_i > \Delta\}} \prod_{l=j}^{p} S_{\xi_l}(\Delta)}
= F_{\xi_j}(t)\, 1_{\{\min(t_{1:j-1}) \le \Delta\}} + \frac{F_{\xi_j}(\min(t, \Delta)) + \big(F_{\xi_j}(t) - F_{\xi_j}(\Delta)\big)\, P(\min \xi_{j+1:p} \le \Delta)\, 1_{\{t > \Delta\}}}{1 - \prod_{l=j}^{p} S_{\xi_l}(\Delta)}\, 1_{\{\min(t_{1:j-1}) > \Delta\}}.   (A.3)

Consequently, Eq. (9) is an immediate consequence of Eq. (A.3). Third, when j = p, we obtain in the same way, for all t \ge 0:

F_{\tilde\xi_p \mid \tilde\xi_1 = t_1, \dots, \tilde\xi_{p-1} = t_{p-1}}(t) = \frac{\int_0^t \big[ \prod_{i=1}^{p} f_{\xi_i}(t_i) - \prod_{i=1}^{p} f_{\xi_i}(t_i) 1_{\{t_i > \Delta\}} \big]\, dt_p}{\prod_{i=1}^{p-1} f_{\xi_i}(t_i) - \prod_{i=1}^{p-1} f_{\xi_i}(t_i) 1_{\{t_i > \Delta\}}\, S_{\xi_p}(\Delta)}
= \frac{F_{\xi_p}(\min(t, \Delta))}{1 - S_{\xi_p}(\Delta)}\, 1_{\{\min(t_1, \dots, t_{p-1}) > \Delta\}} + F_{\xi_p}(t)\, 1_{\{\min(t_1, \dots, t_{p-1}) \le \Delta\}}.   (A.4)

Then, Eq. (10) is an immediate consequence of Eq. (A.4).

Appendix A.2. Case of a component with a hidden failure

Proof of Eqs. (11)-(12). Let R be such that Rc is the last instant of renewal before sc (including sc) and, conditionally on {R = r}, let B_r^{(k)} denote the event "exactly k renewals occur before rc (including rc)". Let \xi be a random variable corresponding to the operating time of the component. We first prove Eq. (11). The PFD at time t is given, for all t \ge 0, by:

PFD(t) = P(\Phi(X(t)) = 0) = \sum_{r=0}^{s} P\big( \{\Phi(X(t)) = 0\} \cap \{R = r\} \big)
= \sum_{r=0}^{s} P\Big( \{sc - rc < \xi \le t - rc\} \cap \bigcup_{k=0}^{r} B_r^{(k)} \Big)
= \sum_{r=0}^{s} \sum_{k=0}^{r} P(sc - rc < \xi \le t - rc)\, P\big(B_r^{(k)}\big).

We now establish the recursive formula given by Eq. (12) for the probability of the event B_r^{(k)}. If r = 0, then P(B_0^{(0)}) = 1. Let r \ge 1. If k = 0, then P(B_r^{(0)}) = 0 and, if k = 1, we obtain P(B_r^{(1)}) = P((r - 1)c < \xi \le rc). Finally, for k \in \{2, \dots, r\}, we set

p_{i_j} = P\big( (i_j - 1)c < \xi + i_{j-1} c \le i_j c \big),

for any j \ge 1, with i_0 = 0. Then, we obtain the following probability formula:

P\big(B_r^{(k)}\big) = \sum_{1 \le i_1 < \dots < i_{k-1} \le r-1} \prod_{j=1}^{k-1} p_{i_j}\, P\big( (r-1)c < \xi + i_{k-1} c \le rc \big)
= \sum_{i_1=1}^{r-k+1} p_{i_1} \sum_{i_1 + 1 \le i_2 < \dots < i_{k-1} \le r-1} \prod_{j=2}^{k-1} p_{i_j}\, P\big( (r-1)c < \xi + i_{k-1} c \le rc \big)
= \sum_{i_1=1}^{r-k+1} p_{i_1}\, P\big(B_{r-i_1}^{(k-1)}\big)
= \sum_{i_1=1}^{r-k+1} P\big( (i_1 - 1)c < \xi \le i_1 c \big)\, P\big(B_{r-i_1}^{(k-1)}\big).

Proof of Eq. (13). Using the same mathematical framework as in the previous proof of Eq. (11), let \tilde\xi be a random variable corresponding to the operating time of the component accelerated according to Eq. (1). Then, we have, for all t \ge 0:

PFD_{acc}(t) = \sum_{r=0}^{s} \sum_{k=0}^{r} P\big( sc - rc < \tilde\xi \le t - rc \big)\, P\big(\tilde B_r^{(k)}\big) := \sum_{r=0}^{s} \sum_{k=0}^{r} PFD_{acc}^{(r,k)}(t).

From Eq. (2), the accelerated PFD^{(r,k)} at time t conditionally on B_r^{(k)} is given, for all t \ge 0, by:

PFD_{acc}^{(r,k)}(t) = \frac{P(sc - rc < \xi \le t - rc)}{P(\xi \le \Delta)}\, P\big(\tilde B_r^{(k)}\big),   (A.5)

where \tilde B_r^{(k)} is the event B_r^{(k)} given that \{\xi \le \Delta\}. We now evaluate the probability of the event \tilde B_r^{(k)}. First, we have P(\tilde B_0^{(0)}) = 1. Second, let r \ge 1. If k = 0, then P(\tilde B_r^{(0)}) = 0 and, if k = 1, we obtain, using Eq. (2):

P\big(\tilde B_r^{(1)}\big) = P\big( (r-1)c < \tilde\xi \le rc \big) = \frac{P\big( (r-1)c < \xi \le rc \big)}{P(\xi \le \Delta)}.

Then, for k \in \{2, \dots, r\}, we have the following probability formula:

P\big(\tilde B_r^{(k)}\big) = \sum_{1 \le i_1 < \dots < i_{k-1} \le r-1} \Big[ \prod_{j=1}^{k-1} \frac{p_{i_j}}{P(\xi \le \Delta)} \Big] \frac{P\big( (r-1)c < \xi + i_{k-1} c \le rc \big)}{P(\xi \le \Delta)}
= \frac{1}{P(\xi \le \Delta)^{k}} \sum_{i_1=1}^{r-k+1} p_{i_1}\, P\big(B_{r-i_1}^{(k-1)}\big)
= \frac{1}{P(\xi \le \Delta)^{k}}\, P\big(B_r^{(k)}\big).   (A.6)

Thus, substituting Eq. (A.6) into Eq. (A.5), we obtain:

PFD_{acc}^{(r,k)}(t) = \frac{P(sc - rc < \xi \le t - rc)\, P\big(B_r^{(k)}\big)}{P(\xi \le \Delta)^{k+1}}.

Appendix A.3. Case of a component with p revealed failures

Before proving Eq. (14), we establish the following proposition.

Proposition Appendix A.1. For all l \in N_+, let \tilde T^{(l)} be a random variable such that, for all \Delta \ge 0,

\mathcal{L}\big(\tilde T^{(l)}\big) = \mathcal{L}\Big( T^{(l)} \,\Big|\, \min \xi_{1:p}^{(i)} \le \Delta,\ \forall\, i = 1, \dots, l \Big).

We have, for all t \le \Delta:

f_{\tilde T^{(l)}}(t) = \frac{f_{T^{(l)}}(t)}{P\big(\min \xi_{1:p} \le \Delta\big)^{l}}.

Proof of Proposition Appendix A.1. Let \Delta \ge 0. We set M = 1/P(\min \xi_{1:p}^{(1)} \le \Delta). By induction, we prove that, for all l \in N_+,

P\big(\tilde T^{(l)} \le t\big) = M^{l}\, P\big(T^{(l)} \le t\big), \quad \forall\, t \le \Delta.   (A.7)

We first show that the statement holds for l = 1. For all t \le \Delta, we have:

P\big(\tilde T^{(1)} \le t\big) = P\Big( T^{(1)} \le t \,\Big|\, \min \xi_{1:p}^{(1)} \le \Delta \Big)
= M \sum_{j=1}^{p} P\Big( \big\{\xi_j^{(1)} + \gamma_j^{(1)} \le t\big\} \cap \{J_1 = j\} \cap \big\{\min \xi_{1:p}^{(1)} \le \Delta\big\} \Big)
= M \sum_{j=1}^{p} \int_0^t P\Big( u + \gamma_j^{(1)} \le t \,;\ u \le \min_{i \ne j} \xi_i^{(1)} \Big) f_{\xi_j}(u)\, du
= M \sum_{j=1}^{p} P\Big( \big\{\xi_j^{(1)} + \gamma_j^{(1)} \le t\big\} \cap \big\{\xi_j^{(1)} \le \min_{i \ne j} \xi_i^{(1)}\big\} \Big)
= M\, P\big(T^{(1)} \le t\big),

where the conditioning event can be dropped because, on \{J_1 = j\} with \xi_j^{(1)} + \gamma_j^{(1)} \le t \le \Delta, we necessarily have \min \xi_{1:p}^{(1)} \le \Delta. The statement is thus true for l = 1.
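The recursion of Eq. (12) established above lends itself to a direct numerical check: since p_{i_j} depends only on the gap i_j - i_{j-1}, P(B_r^{(k)}) is the k-fold convolution, evaluated at r, of the one-step probabilities q_d = P((d-1)c < ξ ≤ dc). The following sketch assumes, for illustration only, an exponentially distributed operating time ξ (the rate and period are made-up values):

```python
import math

# Hypothetical numbers: exponential operating time xi with rate lam, test
# period c. q[d] = P((d-1)c < xi <= dc) is the probability that the first
# renewal occurs at grid point d.
lam, c, r_max = 0.3, 1.0, 12
q = [0.0] + [math.exp(-lam * (d - 1) * c) - math.exp(-lam * d * c)
             for d in range(1, r_max + 1)]

def prob_B(r, k, memo={}):
    """P(B_r^(k)) via the recursion of Eq. (12):
    P(B_r^(k)) = sum_{i=1}^{r-k+1} q_i * P(B_{r-i}^(k-1))."""
    if k == 0:
        return 1.0 if r == 0 else 0.0
    if (r, k) not in memo:
        memo[(r, k)] = sum(q[i] * prob_B(r - i, k - 1)
                           for i in range(1, r - k + 2))
    return memo[(r, k)]

def conv_check(r, k):
    """Cross-check: k-fold discrete convolution of q, evaluated at r."""
    dist = [1.0] + [0.0] * r            # point mass at 0
    for _ in range(k):
        dist = [sum(q[i] * dist[j - i] for i in range(1, j + 1))
                for j in range(r + 1)]
    return dist[r]

print(prob_B(10, 3), conv_check(10, 3))
```

Both computations agree to floating-point precision, which is a useful sanity check when implementing the recursion.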
Second, in the inductive step, we assume the hypothesis to be true for l = k and we check that it is satisfied for l = k + 1. Using the notations from Section 3.4.1, we proceed as follows, for all t \le \Delta:

P\big(\tilde T^{(k+1)} \le t\big) = P\Big( T^{(k+1)} \le t \,\Big|\, \min \xi_{1:p}^{(l)} \le \Delta,\ \forall\, l \in \{1, \dots, k+1\} \Big)
= P\Big( \tilde T^{(k)} + S^{(k+1)} \le t \,\Big|\, \min \xi_{1:p}^{(k+1)} \le \Delta \Big)
= \int_0^t P\Big( u + S^{(k+1)} \le t \,\Big|\, \min \xi_{1:p}^{(k+1)} \le \Delta \Big) f_{\tilde T^{(k)}}(u)\, du.   (A.8)

Since the inductive hypothesis (A.7) holds for l = k, we have P(\tilde T^{(k)} \le t) = M^{k} P(T^{(k)} \le t) for all t \le \Delta, and we obtain, for all t \le \Delta:

f_{\tilde T^{(k)}}(t) = M^{k} f_{T^{(k)}}(t).   (A.9)

Thus, by plugging Eq. (A.9) into Eq. (A.8), we have, for all t \le \Delta:

P\big(\tilde T^{(k+1)} \le t\big) = M^{k}\, P\Big( T^{(k)} + S^{(k+1)} \le t \,\Big|\, \min \xi_{1:p}^{(k+1)} \le \Delta \Big)
= M^{k} \sum_{j=1}^{p} P\Big( \big\{T^{(k)} + \xi_j^{(k+1)} + \gamma_j^{(k+1)} \le t\big\} \cap \{J_{k+1} = j\} \,\Big|\, \min \xi_{1:p}^{(k+1)} \le \Delta \Big)
= M^{k+1} \sum_{j=1}^{p} P\Big( \big\{T^{(k)} + \xi_j^{(k+1)} + \gamma_j^{(k+1)} \le t\big\} \cap \big\{\xi_j^{(k+1)} \le \min_{i \ne j} \xi_i^{(k+1)}\big\} \Big)
= M^{k+1}\, P\big(T^{(k+1)} \le t\big).

Therefore, the induction statement is true for l = k + 1, and thus Eq. (A.7) holds for any l \ge 1. It follows that, for all t \le \Delta,

f_{\tilde T^{(l)}}(t) = M^{l} f_{T^{(l)}}(t).

Proof of Eq. (14). Using the fact that the event of interest is a type-one failure and applying Fubini's theorem, we have, for all t \le \Delta:

PFD_{acc}(t) = \sum_{l \ge 1} P\Big( \big\{\tilde T^{(l-1)} + \tilde\xi_{\tilde J_l}^{(l)} \le t < \tilde T^{(l-1)} + \tilde\xi_{\tilde J_l}^{(l)} + \gamma_{\tilde J_l}^{(l)}\big\} \cap \{\tilde J_l = 1\} \Big) := \sum_{l \ge 1} PFD_{acc}^{(l)}(t).

We want to prove the following acceleration formula for PFD^{(l)}(t), for any l \ge 1:

PFD_{acc}^{(l)}(t) = M^{l}\, PFD^{(l)}(t), \quad \forall\, t \le \Delta,   (A.10)

where M = 1/P(\min(\xi_1^{(i)}, \dots, \xi_p^{(i)}) \le \Delta) for all i \in N^*. To do this, we distinguish two cases: l = 1 and l > 1.

We first show that the statement holds for l = 1. Using the fact that T^{(0)} = 0 and that the events are independent, we have, for all t \le \Delta:

PFD^{(1)}(t) = P\Big( \big\{T^{(0)} + \xi_{J_1}^{(1)} \le t < T^{(0)} + \xi_{J_1}^{(1)} + \gamma_{J_1}^{(1)}\big\} \cap \{J_1 = 1\} \Big)
= P\Big( \xi_1^{(1)} \le t < \xi_1^{(1)} + \gamma_1^{(1)} \,;\ \xi_1^{(1)} \le \min \xi_{2:p}^{(1)} \Big)
= \int_0^t P\big( \gamma_1^{(1)} > t - u \big)\, P\big( u \le \min \xi_{2:p}^{(1)} \big)\, f_{\xi_1}(u)\, du
= \int_0^t \bar F_{\gamma_1}(t - u) \prod_{i=2}^{p} \bar F_{\xi_i}(u)\, f_{\xi_1}(u)\, du.

Then, the accelerated PFD^{(l)}(t) when l = 1 is given, for all t \le \Delta, by:

PFD_{acc}^{(1)}(t) = P\Big( \xi_1^{(1)} \le t < \xi_1^{(1)} + \gamma_1^{(1)} \,;\ \xi_1^{(1)} \le \min \xi_{2:p}^{(1)} \,\Big|\, \min \xi_{1:p}^{(1)} \le \Delta \Big)
= M\, P\Big( \xi_1^{(1)} \le t < \xi_1^{(1)} + \gamma_1^{(1)} \,;\ \xi_1^{(1)} \le \min \xi_{2:p}^{(1)} \Big)
= M \int_0^t \bar F_{\gamma_1}(t - u) \prod_{i=2}^{p} \bar F_{\xi_i}(u)\, f_{\xi_1}(u)\, du
= M\, PFD^{(1)}(t).

Consequently, the statement given by Eq. (A.10) is true for l = 1. Second, we check that it is satisfied for l > 1. For any t \le \Delta, we have:

PFD_{acc}^{(l)}(t) = P\Big( \tilde T^{(l-1)} + \xi_1^{(l)} \le t < \tilde T^{(l-1)} + \xi_1^{(l)} + \gamma_1^{(l)} \,;\ \xi_1^{(l)} \le \min \xi_{2:p}^{(l)} \,\Big|\, \min \xi_{1:p}^{(l)} \le \Delta \Big)
= M\, P\Big( \tilde T^{(l-1)} + \xi_1^{(l)} \le t < \tilde T^{(l-1)} + \xi_1^{(l)} + \gamma_1^{(l)} \,;\ \xi_1^{(l)} \le \min \xi_{2:p}^{(l)} \Big)
= M \int_0^t \Big[ \int_0^{t-u} P\big( \gamma_1^{(l)} > t - v - u \big)\, f_{\tilde T^{(l-1)}}(v)\, dv \Big] \prod_{i=2}^{p} \bar F_{\xi_i}(u)\, f_{\xi_1}(u)\, du
= M \int_0^t \int_0^{t-u} \bar F_{\gamma_1}(t - v - u) \prod_{i=2}^{p} \bar F_{\xi_i}(u)\, f_{\tilde T^{(l-1)}}(v)\, dv\, f_{\xi_1}(u)\, du.   (A.11)

Using Proposition Appendix A.1, Eq. (A.11) can be rewritten as follows, for all t \le \Delta and l > 1:

PFD_{acc}^{(l)}(t) = M^{l} \int_0^t \int_0^{t-u} \bar F_{\gamma_1}(t - v - u) \prod_{i=2}^{p} \bar F_{\xi_i}(u)\, f_{T^{(l-1)}}(v)\, dv\, f_{\xi_1}(u)\, du = M^{l}\, PFD^{(l)}(t).

Therefore, the statement is also true for l > 1. Thus, Eq. (A.10) holds for any l \ge 1.

Appendix A.4. Case of a component with a hidden failure and a revealed failure

Proof of Eqs. (15)-(16). Let R be such that Rc is the last instant of renewal before sc (including sc). We denote by l_k = (l_1, \dots, l_k) \in N^{k}, for all k \ge 1, with l_0 \in N, a sequence corresponding to the numbers of revealed failures between the renewal instants. Then, using the notations introduced in Section 3.5.1, the PFD at time t is given, for all t \ge 0, by:

PFD(t) = P(\Phi(X(t)) = 0) = \sum_{r=0}^{s} P\big( \{\Phi(X(t)) = 0\} \cap \{R = r\} \big)
= \sum_{r=0}^{s} P\Big( \bigcup_{l \in N} \big\{sc - rc < T^{(l)} + \xi_1^{(l+1)} \le t - rc\big\} \cap A^{(l)} \cap \bigcup_{k=0}^{r} \bigcup_{l_k \in N^{k}} B_r^{(k, l_k)} \Big)
= \sum_{r=0}^{s} \sum_{k=0}^{r} \sum_{l_{k+1} \in N^{k+1}} P\Big( \big\{sc - rc < T^{(l_{k+1})} + \xi_1^{(l_{k+1}+1)} \le t - rc\big\} \cap A^{(l_{k+1})} \Big)\, P\big(B_r^{(k, l_k)}\big).

We now establish the recursive formula given by Eq. (15) for the probability of the event B_r^{(k, l_k)}. First, we have P(B_0^{(0, l_0)}) = 1. Second, let r \ge 1. If k = 0, then P(B_r^{(0, l_0)}) = 0 and, if k = 1, we obtain:

P\big(B_r^{(1, l_1)}\big) = P\Big( \big\{(r-1)c < T^{(l_1)} + \xi_1^{(l_1+1)} \le rc\big\} \cap A^{(l_1)} \Big).

For k \in \{2, \dots, r\}, we set, for all j \ge 1:

p_{i_j} = P\Big( \big\{(i_j - 1)c < T^{(l_j)} + \xi_1^{(l_j+1)} + i_{j-1} c \le i_j c\big\} \cap A^{(l_j)} \Big),

with i_0 = 0. Then, the probability can be expressed as follows:

P\big(B_r^{(k, l_k)}\big) = \sum_{1 \le i_1 < \dots < i_{k-1} \le r-1} \prod_{j=1}^{k-1} p_{i_j}\, P\Big( \big\{(r-1)c < T^{(l_k)} + \xi_1^{(l_k+1)} + i_{k-1} c \le rc\big\} \cap A^{(l_k)} \Big)
= \sum_{i_1=1}^{r-k+1} P\Big( \big\{(i_1 - 1)c < T^{(l_1)} + \xi_1^{(l_1+1)} \le i_1 c\big\} \cap A^{(l_1)} \Big)\, P\big(B_{r-i_1}^{(k-1, l_{k-1})}\big).

Proof of Eq. (17). Using the same mathematical framework as in the previous proof, we have, for all t \le \Delta:

PFD_{acc}(t) = \sum_{r=0}^{s} \sum_{k=0}^{r} \sum_{l_{k+1} \in N^{k+1}} P\Big( \big\{sc - rc < \tilde T^{(l_{k+1})} + \tilde\xi^{(l_{k+1}+1)} \le t - rc\big\} \cap \tilde A^{(l_{k+1})} \Big)\, P\big(\tilde B_r^{(k, l_k)}\big)
:= \sum_{r=0}^{s} \sum_{k=0}^{r} \sum_{l_{k+1} \in N^{k+1}} PFD_{acc}^{(r, k, l_{k+1})}(t).

We set M = 1/P(\min(\xi_1^{(i)}, \xi_2^{(i)}) \le \Delta) for any i \in N^*, and |l_k| = \sum_{j=1}^{k} l_j. Since the events are disjoint, the accelerated PFD^{(r,k,l_{k+1})}(t) conditionally on B_r^{(k, l_k)} is given, for all t \le \Delta, by:

PFD_{acc}^{(r, k, l_{k+1})}(t) = P\Big( \big\{sc - rc < T^{(l_{k+1})} + \xi_1^{(l_{k+1}+1)} \le t - rc\big\} \cap A^{(l_{k+1})} \,\Big|\, \bigcap_{i=1}^{l_{k+1}+1} \big\{\min(\xi_1^{(i)}, \xi_2^{(i)}) \le \Delta\big\} \Big)\, P\big(\tilde B_r^{(k, l_k)}\big)
= M^{l_{k+1}+1}\, P\Big( \big\{sc - rc < T^{(l_{k+1})} + \xi_1^{(l_{k+1}+1)} \le t - rc\big\} \cap A^{(l_{k+1})} \Big)\, P\big(\tilde B_r^{(k, l_k)}\big).   (A.12)

Then, we define a recursive formula for the probability of the event \tilde B_r^{(k, l_k)}. First, we have P(\tilde B_0^{(0, l_0)}) = 1. Second, let r \ge 1. If k = 0, then P(\tilde B_r^{(0, l_0)}) = 0 and, if k = 1, we obtain:

P\big(\tilde B_r^{(1, l_1)}\big) = P\Big( \big\{(r-1)c < \tilde T^{(l_1)} + \tilde\xi_1^{(l_1+1)} \le rc\big\} \cap \tilde A^{(l_1)} \Big)
= M^{l_1+1}\, P\Big( \big\{(r-1)c < T^{(l_1)} + \xi_1^{(l_1+1)} \le rc\big\} \cap A^{(l_1)} \cap \bigcap_{i=1}^{l_1+1} \big\{\min(\xi_1^{(i)}, \xi_2^{(i)}) \le \Delta\big\} \Big)
= M^{l_1+1}\, P\Big( \big\{(r-1)c < T^{(l_1)} + \xi_1^{(l_1+1)} \le rc\big\} \cap A^{(l_1)} \Big)
= M^{l_1+1}\, P\big(B_r^{(1, l_1)}\big).   (A.13)

Finally, for k \in \{2, \dots, r\}, we set, for any j \ge 1:

\tilde p_{i_j} = P\Big( \big\{(i_j - 1)c < \tilde T^{(l_j)} + \tilde\xi_1^{(l_j+1)} + i_{j-1} c \le i_j c\big\} \cap \tilde A^{(l_j)} \Big),

with i_0 = 0. We obtain the following expression:

P\big(\tilde B_r^{(k, l_k)}\big) = \sum_{1 \le i_1 < \dots < i_{k-1} \le r-1} \prod_{j=1}^{k-1} \tilde p_{i_j}\, P\Big( \big\{(r-1)c < \tilde T^{(l_k)} + \tilde\xi_1^{(l_k+1)} + i_{k-1} c \le rc\big\} \cap \tilde A^{(l_k)} \Big)
= M^{|l_k|+k} \sum_{1 \le i_1 < \dots < i_{k-1} \le r-1} \prod_{j=1}^{k-1} p_{i_j}\, P\Big( \big\{(r-1)c < T^{(l_k)} + \xi_1^{(l_k+1)} + i_{k-1} c \le rc\big\} \cap A^{(l_k)} \Big)
= M^{|l_k|+k}\, P\big(B_r^{(k, l_k)}\big).   (A.14)

Thus, by substituting Eqs. (A.13)-(A.14) into Eq. (A.12) and using Eq. (16), we deduce that, for all t \in [sc, (s+1)c):

PFD_{acc}^{(r, k, l_{k+1})}(t) = M^{|l_{k+1}|+k+1}\, PFD^{(r, k, l_{k+1})}(t).

Appendix A.5. Principle of DM

Proof of Eq. (18). We first show that the statement holds for m = 1. We have, for all t \in R_+:

Var_{DM}\big(\widehat{PFD}^{(1)}(t)\big) = \frac{1}{n}\, Var\big(\bar\Phi_1(X_1^{(1)}(t))\big).

Consequently, the statement is true for m = 1. Second, in the inductive step, we assume the inductive hypothesis to be true for m = q and we check that it is satisfied for m = q + 1. We proceed as follows, for all t \in R_+:

\widehat{PFD}^{(q+1)}(t) = \widehat{PFD}^{(q)}(t) \times \frac{1}{n} \sum_{i=1}^{n} \bar\Phi_{q+1}\big(X_{q+1}^{(i)}(t)\big).   (A.15)

It is well known that, if A and B are two independent random variables such that Var(A) < \infty and Var(B) < \infty, the variance of AB is given by:

Var(AB) = Var(A)\,Var(B) + Var(A)\,E(B)^2 + Var(B)\,E(A)^2.   (A.16)

Taking the variance of Eq. (A.15) and applying Eq. (A.16), we obtain, for all t \in R_+:

Var_{DM}\big(\widehat{PFD}^{(q+1)}(t)\big) = Var_{DM}\big(\widehat{PFD}^{(q)}(t)\big)\, Var\Big[\frac{1}{n} \sum_{i=1}^{n} \bar\Phi_{q+1}\big(X_{q+1}^{(i)}(t)\big)\Big]
+ Var_{DM}\big(\widehat{PFD}^{(q)}(t)\big)\, E\Big[\frac{1}{n} \sum_{i=1}^{n} \bar\Phi_{q+1}\big(X_{q+1}^{(i)}(t)\big)\Big]^2
+ Var\Big[\frac{1}{n} \sum_{i=1}^{n} \bar\Phi_{q+1}\big(X_{q+1}^{(i)}(t)\big)\Big]\, E\big[\widehat{PFD}^{(q)}(t)\big]^2.

Expanding this expression with the inductive hypothesis for m = q, and using Var[(1/n) \sum_i \bar\Phi_{q+1}(X_{q+1}^{(i)}(t))] = Var(\bar\Phi_{q+1}(X_{q+1}^{(1)}(t)))/n together with E[(1/n) \sum_i \bar\Phi_{q+1}(X_{q+1}^{(i)}(t))] = E[\bar\Phi_{q+1}(X_{q+1}^{(1)}(t))], we recover Eq. (18) for m = q + 1. Therefore, the induction statement is true for m = q + 1. Since both the initial case and the inductive step have been established, by mathematical induction, the statement holds for all m \ge 1.
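The product-variance identity (A.16), which drives the induction, is easy to verify numerically on small discrete distributions; note that for m = 2, Eq. (18) is exactly (A.16) applied to the two independent empirical means. A quick check with toy distributions (the values below are illustrative, not from the paper):

```python
from itertools import product

# Two independent finite discrete variables, given as value -> probability.
A = {0.0: 0.7, 1.0: 0.3}
B = {0.0: 0.9, 2.0: 0.1}

def moments(dist):
    """Return (mean, variance) of a finite discrete distribution."""
    m1 = sum(v * p for v, p in dist.items())
    m2 = sum(v * v * p for v, p in dist.items())
    return m1, m2 - m1 * m1

ma, va = moments(A)
mb, vb = moments(B)

# Exact Var(AB) from the joint distribution of the independent pair.
e_ab = sum(a * b * pa * pb
           for (a, pa), (b, pb) in product(A.items(), B.items()))
e_ab2 = sum((a * b) ** 2 * pa * pb
            for (a, pa), (b, pb) in product(A.items(), B.items()))
var_ab = e_ab2 - e_ab ** 2

# Right-hand side of identity (A.16).
rhs = va * vb + va * mb ** 2 + vb * ma ** 2
print(var_ab, rhs)
```

Both sides agree exactly, as (A.16) predicts for independent factors.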