Reliability Engineering and System Safety 138 (2015) 242–252
A dynamic discretization method for reliability inference in Dynamic Bayesian Networks

Jiandao Zhu, Matthew Collette*

Department of Naval Architecture and Marine Engineering, The University of Michigan, Ann Arbor, MI 48109, United States

* Corresponding author.

http://dx.doi.org/10.1016/j.ress.2015.01.017
Article history: Received 13 September 2013; received in revised form 28 November 2014; accepted 15 January 2015; available online 7 February 2015.

Keywords: Dynamic Bayesian Networks; reliability analysis; crack growth model; dynamic discretization; life cycle health monitoring

Abstract

The material and modeling parameters that drive structural reliability analysis for marine structures are subject to significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem.

© 2015 Elsevier Ltd. All rights reserved.
1. Introduction

There are many sources of uncertainty surrounding fatigue life prediction for marine and offshore structures, such as wave loading, material properties, and geometry. If those uncertainties are estimated incorrectly, the actual fatigue life of the structure could be over- or underpredicted. Recently, some advanced vessels were fitted with extensive data collection systems for hull performance and stress monitoring to obtain life cycle usage information. This life cycle usage data can help support inspection, repair, and life-extension decisions by reducing the uncertainties of design-stage fatigue approaches in a cost-effective and risk-acceptable manner for the vessel owner. However, most of the current literature focuses on techniques for data monitoring and collection, while comparatively fewer studies have applied this information in life-cycle updating for decision support. This article extends a model introduced by Straub [1], where a Dynamic Bayesian Network (DBN) is used to model a general structural deterioration process. Here the focus of the investigation is the accuracy of the reliability output and its relationship to the discretization scheme for such DBNs, including cases with inspection evidence. This paper presents a novel adaptive discretization
approach to increase their usefulness as a reliability-based decision support tool for such life-cycle modeling problems. The approach is designed to efficiently estimate the small probabilities of structural failure events, which typically occur in the tail region of the underlying probability distribution. This is in contrast to existing discretization approaches, which primarily seek to estimate the entire distribution shape accurately. Furthermore, the approach is designed for DBN solution algorithms that can infer the network dynamically (i.e., the network does not have to be unrolled) for computational efficiency when extensive monitoring data are available.

One promising approach for using life cycle usage data to support engineering decisions is to develop updating strategies to refine design-stage engineering predictions. Bayesian approaches have been widely studied as a method to reduce modeling uncertainties and to quantify the effectiveness of inspection techniques. Okasha et al. [2] integrated life cycle health monitoring data into a reliability analysis for ships, building off extensive previous work for civil engineering structures. Mohanty et al. [3] used a Bayesian-based multivariate Gaussian process technique for both online and offline fatigue crack growth prediction, which was validated by experimental results. Nielsen et al. [4] used onboard measured response parameters and stress data to estimate sea state parameters and predict fatigue damage through Bayesian modeling.
However, these three methods were based on classical Bayesian updating techniques. Researchers have also proposed a more complex Bayesian Network approach, which uses a network representation to express richer dependence structures between components of the problem. Friis-Hansen [5] applied Bayesian Networks as a decision support tool in general marine applications. Mahadevan et al. [6] used Bayesian Networks to estimate and improve the reliability of a non-marine structural system. Straub [1] studied the benefits of DBNs in modeling deterioration processes; his work showed the potential of DBNs in monitoring, inspection, maintenance, and repair planning. He and his colleagues [7,8] then combined Bayesian Networks with structural reliability methods to create an enhanced Bayesian Network for engineering risk and reliability analysis. Sankararaman et al. [9] further connected finite element models, crack growth models, and surrogate models through a Bayesian Network for uncertainty analysis and model validation. Cohen et al. [10] also compared Monte Carlo simulation and a stochastic Bayesian approach to examine the probability of crack detection using a truncated initial crack length distribution.

A key challenge in applying Bayesian Networks to structural reliability modeling is the need to discretize continuous random variables for prediction and inference tasks. Such discretization schemes significantly impact both the accuracy and the computational cost of the model. Some schemes have already been developed for inference of hybrid Bayesian Networks containing both continuous and discrete nodes [11-15], but these methods do not explicitly tackle estimating small tail-region probabilities. One common type of hybrid Bayesian Network is the conditional linear Gaussian (CLG) model [16], where exact computation of means and variances is available; however, the CLG can manage only cases where all continuous variables are normally distributed. For general engineering problems where other continuous distribution types may be needed, some form of discretization must normally be attempted. Stochastic sampling methods, such as Markov Chain Monte Carlo (MCMC) [17-19], are another potential option for inference of Bayesian Networks. The advantage of a sampling method is that it can be applied to a wide range of models and is very easy to implement. However, judging convergence can be tricky, especially for highly non-linear underlying models or cases where the assumed prior and actual posterior distributions differ significantly. Also, a reasonable estimate of the prior distribution needs to be provided when Metropolis-Hastings sampling is performed; otherwise these methods can struggle with poor convergence. For a reliability-based inference task, the accuracy of sampling a small tail-probability region can also be an issue. Therefore, both hybrid and stochastic sampling algorithms presently have shortcomings for evidence-based reliability updating inference.

The alternative is to discretize each continuous node and use an exact inference method, such as a junction tree, to infer the Bayesian Network. The simplest and most traditional approach is to apply a uniform discretization to all continuous nodes, where the discretization remains static throughout inference. Generally, increasing the number of discretization intervals improves the accuracy of the model.
However, as life cycle data becomes available as evidence, the posterior probability distribution concentrates on the regions that the provided evidence suggests are more probable. Denser discretization intervals are required to maintain the level of accuracy in these more probable regions. Conversely, for regions with lower probability, the size of the intervals can remain relatively large. Therefore, a uniform discretization approach can be computationally inefficient for this type of problem. Recently, Chang and Chen [11] developed an efficient inference method, referred to as the decision-tree based inference algorithm, for Hybrid Dynamic Bayesian Networks (HDBNs). The
basic idea of this inference algorithm is to separate the nodes into four different types. The likelihood tables are pre-computed by forward sampling in the topological order of the nodes in the network. The evidence space is partitioned into hyper-cubes based on the sampling data. Further, the posterior distribution of the target nodes given the evidence is inferred by separately considering the dynamic transitional nodes and the static transitional nodes. This scheme shows some potential for inferring a general hybrid Bayesian Network; however, the sampling technique it uses can affect the prediction accuracy, especially for reliability-focused modeling. The idea of dynamic discretization, proposed initially by Kozlov and Koller [20], is also very attractive. They applied a multivariate approach to discretize continuous functions on multidimensional domains according to the relative entropy error. Neil and his colleagues [21,15] then developed a variant of this method using the entropy error tracking idea. For structural reliability calculations, the primary interest in using any Bayesian Network is the ability to accurately model the normally small probability of failure (POF), not the overall distribution. This POF must be accurately computed at any point during the structure's service life to guide decision making. This value is normally found by summing over the small tail region of the probability density function, which can involve very small probabilities, sometimes on the order of $10^{-9}$-$10^{-10}$ early in the structure's lifespan. To date, the entropy tracking approaches have focused on the more general problem of properly approximating the whole probability distribution, and significantly fewer investigations have been made into efficient methods focused on the tail-region probability. A further complication with existing discretization methods is that they have been developed to determine ideal discretizations on a node-by-node basis. However, if a DBN is used to perform time-series-based reliability analysis over many time steps, it is attractive to standardize the discretization across time steps. If this standardization is done, inference methods such as Murphy's frontier method [19] can be used even as the total number of time steps becomes large. However, finding a standardized discretization for all time steps has not yet been attempted with node-by-node entropy tracking. In this work, a new reliability purposed dynamic discretization (RPDD) algorithm is proposed to allow accurate tail-region POF estimation for lengthy DBNs. The goal is to find an optimal discretization for DBN-based deterioration processes with lower computational cost while achieving the same level of accuracy for reliability analysis as conventional discretization. The effect of inserting inspection evidence will also be considered and combined into the proposed algorithm. In Section 2, an introduction to deterioration process modeling from Straub's work will be presented, followed by an introduction to inference methods. Existing discretization schemes, including Neil's dynamic discretization scheme, will also be presented. A new dynamic discretization technique will then be proposed in Section 3, with two examples following in Section 4: first, a simple constant crack growth model with assumed normal distributions for all the nodes, and then Straub's [1] stochastic crack growth model.
For both examples, the new dynamic discretization will be compared with the existing discretizations for prediction accuracy. A discussion of computational cost is also given. Conclusions are provided in Section 5.
2. DBN-based deterioration process modeling

2.1. Generalized DBN model for deterioration process

The DBN structure used to model deterioration in this work is based on a general model first proposed by Straub [1]. Consider
a crack growth model as a case of a general deterioration process modeled by DBNs, as shown in Fig. 1.

Fig. 1. A general deterioration process modeled by DBNs. [figure]

For $t = 1, 2, 3, \ldots, T$, the $A_t$ node is the actual crack size at time step $t$, while the $A_0$ node is the initial flaw size. The $X_t$ node sets contain all the deterioration parameters that control the crack growth rate at each time step. The $M_t$ node represents the in-service measurement of crack size, which is used as updating evidence from in-service inspection. For this model, only the $M_t$ node is observable; the $A_t$ and $X_t$ nodes are assumed to be unobservable. In such a model, it is common to assume that a critical crack size $a_c$ exists, and if the crack size exceeds $a_c$, failure has occurred. For structural reliability analysis, accurate calculation of the probability that the crack size $A_t$ is larger than $a_c$ at each time step $t$ is often more critical than modeling the whole probability density function of crack size. The probability of the crack size exceeding $a_c$ is taken as the POF. To avoid working directly with very small probabilities, the POF can be further transformed into a reliability index $\beta$ [22] as shown below:
$\beta = -\Phi^{-1}(\mathrm{POF})$  (1)
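As a numerical illustration of Eq. (1), the POF-to-$\beta$ conversion can be evaluated with standard routines; a minimal sketch using SciPy, with illustrative POF values.

```python
from scipy.stats import norm

# Reliability index from probability of failure, Eq. (1): beta = -Phi^{-1}(POF)
def reliability_index(pof: float) -> float:
    return -norm.ppf(pof)

# Small tail probabilities map to large reliability indexes
for pof in (1e-3, 1e-6, 1e-9):
    print(f"POF = {pof:.0e} -> beta = {reliability_index(pof):.2f}")
```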
During initial design, the network can be run to generate a prior engineering prediction of crack size over time. When in-service measurements of crack size are available, they are entered as evidence on the $M_t$ nodes, and the network can be updated for the future belief of crack size as well as posterior beliefs in the parameters that drive the crack growth model. For the general case, it is assumed that the Bayesian Network is built with time steps from 0 to $T$. As the purpose of the approach is to perform updating during the structure's life, we define the current time as $i$, where $0 \le i \le T$. Evidence on the network can be entered only in the interval $0{:}i$. Furthermore, we define a consecutive series of time slices involving a network node $X$ as $X_{a:b}$, where $a$ and $b$ are the time step indexes of the beginning and ending positions of the time slices, respectively. The process of updating the Bayesian Network involves iterating through sequential time slices. In such operations, the time slice currently being calculated is defined as $t$. Finally, a position variable $k$ can be defined which describes how far $t$ is from $i$:

$k = i - t$ when $t < i$  (2)

$k = t - i$ otherwise.  (3)
With these variables defined, two different kinds of inference activities exist: smoothing, $\Pr(A_{t=i-k} \mid m_{1:i})$ for $0 \le k < i$, and prediction, $\Pr(A_{t=i+k} \mid m_{1:i})$ for $0 < k \le T-i$. Smoothing runs back in time, updating the belief in how the structural condition evolved into its current state, while prediction updates the future state predictions given the evidence gathered to date. To infer the DBN, an iterative frontier algorithm is applied; a detailed description of the frontier algorithm can be found in Murphy [19]. The inference algorithm is composed of a forward pass step which computes $\mu_t = \Pr(H_t \mid m_{1:t})$ and a backward pass step which computes $\nu_t = \Pr(m_{t+1:T} \mid H_t)$. Then $\mu_t$ and $\nu_t$ are combined into a posterior joint probability distribution $\gamma_t = \Pr(H_t \mid m_{1:i}) \propto \mu_t \nu_t$. Here, $H_t$ denotes all the hidden nodes in time step $t$ ($H_t = \{X_t, A_t\}$), so the distribution of $H_t$ is the joint probability distribution of $X_t$ and $A_t$. The posterior distribution for each node can then be obtained by marginalizing out the remaining nodes of $H_t$. The prediction can be further calculated by marginalizing all the intermediate nodes according to the equation below:

$\Pr(A_{i+k} \mid m_{1:i}) = \sum_{h} \Pr(H_i = h \mid m_{1:i}) \Pr(A_{i+k} \mid H_i = h)$  (4)
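A minimal sketch of the forward-backward message passing that the frontier algorithm performs, written for a single discretized hidden chain standing in for $H_t = \{X_t, A_t\}$; the prior, transition, and observation tables below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

def forward_backward(prior, trans, obs_lik):
    """prior: (S,) Pr(H_1); trans: (S,S) Pr(H_t|H_{t-1});
    obs_lik: (T,S) Pr(m_t|H_t). Returns gamma_t = Pr(H_t|m_{1:T})."""
    T, S = obs_lik.shape
    mu = np.zeros((T, S))            # forward messages Pr(H_t | m_{1:t})
    f = prior * obs_lik[0]
    mu[0] = f / f.sum()
    for t in range(1, T):            # forward pass
        f = (mu[t - 1] @ trans) * obs_lik[t]
        mu[t] = f / f.sum()
    nu = np.ones((T, S))             # backward messages Pr(m_{t+1:T} | H_t)
    for t in range(T - 2, -1, -1):   # backward pass
        nu[t] = trans @ (obs_lik[t + 1] * nu[t + 1])
    gamma = mu * nu                  # combine, then renormalize per time step
    return gamma / gamma.sum(axis=1, keepdims=True)

# Tiny illustrative chain with three discretized states
prior = np.array([0.8, 0.15, 0.05])
trans = np.array([[0.7, 0.25, 0.05], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])
obs = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
print(forward_backward(prior, trans, obs))
```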
2.2. Existing discretization approaches and error identification

If the types of continuous variables in the DBN do not permit exact probability distribution computations, they need to be discretized before inference can be performed. Initially, when there is no information on posterior distribution shapes, a static discretization is often applied. This discretization can be uniform, or follow a general pattern such as a lognormal distribution, depending on the type of problem being modeled and the expected areas of interest in the distribution. This initial discretization can then be updated by providing more discretization intervals at high posterior probability regions after the inference is performed. As this process is essentially iterative, it is attractive to consider methods to automate it. Building off an error metric proposed by [20], a novel dynamic discretization algorithm has been proposed by Neil and Tailor [15]. The algorithm iteratively discretizes the continuous nodes in Bayesian Networks conditioned on complex configurations of evidence. Each iteration is composed of a split and a merge operation according to the calculated Kullback-Leibler (KL) divergence error [20] from the posterior distributions. The KL error is an estimate of the relative entropy between a function $f$ and its discretization $\hat{f}$, based on the function mean $f_{\mathrm{mean}}$, the function maximum $f_{\max}$, and the function minimum $f_{\min}$ in the given discretization interval of width $w_j$. The expression is:

$E_j = \left[\dfrac{f_{\max}-f_{\mathrm{mean}}}{f_{\max}-f_{\min}}\, f_{\min}\log\dfrac{f_{\min}}{f_{\mathrm{mean}}} + \dfrac{f_{\mathrm{mean}}-f_{\min}}{f_{\max}-f_{\min}}\, f_{\max}\log\dfrac{f_{\max}}{f_{\mathrm{mean}}}\right] |w_j|$  (5)

where the density values $f_{\mathrm{mean}}$, $f_{\max}$ and $f_{\min}$ are approximated using the midpoint of an interval and its points of intersection with neighboring intervals.
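A sketch of the per-interval entropy error of Eq. (5), assuming $f_{\min}$, $f_{\mathrm{mean}}$ and $f_{\max}$ have already been extracted from the interval midpoint and its intersections with neighboring intervals; the input values below are placeholders.

```python
import numpy as np

def kl_interval_error(f_min, f_mean, f_max, w):
    """Entropy error of Eq. (5) for one discretization interval of width w."""
    if f_max <= f_min:               # flat density: discretization is exact here
        return 0.0
    span = f_max - f_min
    term_lo = ((f_max - f_mean) / span) * f_min * np.log(f_min / f_mean) if f_min > 0 else 0.0
    term_hi = ((f_mean - f_min) / span) * f_max * np.log(f_max / f_mean)
    return (term_lo + term_hi) * abs(w)

print(kl_interval_error(f_min=0.1, f_mean=0.4, f_max=0.9, w=0.5))
```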
3. Dynamic discretization algorithm for deterioration process modeling

3.1. Discretization error identification for reliability purposes

Methods such as Neil's, based on the KL divergence error, offer a robust and flexible way to estimate discretization accuracy even when no exact solution is known. However, the KL error focuses only on the similarity of the whole probability distributions. For the structural reliability problem, higher accuracy in the tail probability region is also important. Here, an alternative error-tracking approach is proposed that aims primarily at ensuring that the POF estimate is accurate. This new error definition is split into two types according to the two inference tasks that occur, smoothing and prediction; each case is presented in turn. Smoothing is an attempt to estimate the past crack size given the measurements up to the current time, $t$. This is an effective way to trace how the crack develops as well as to update the belief in the initial crack size. Due to uncertainty in crack size measurements, even if the
measurement result at time step $i$ is less than $a_c$, a small probability remains that the actual crack size has already reached $a_c$. It is assumed that the error is dominated by the maximum probability bin in the discretization of each random variable that contributes to the POF calculation for the $A_t$ node. The goal becomes to find the bin that contributes the most to the POF calculation. To solve this problem, a "virtual evidence" technique is proposed. In this approach, a copy of the DBN is made, including all measurement evidence available. Then, "virtual evidence" $a_v$ ($1 < v \le i$) is inserted on the $A_t$ nodes as if the measured crack size was observed to exceed $a_c$. By examining the parent nodes of the $A_t$ node, the posterior distribution of those parents can be obtained. As virtual evidence of a failure has been introduced at $A_t$, the posterior distribution of $A_t$'s parents represents an approximation of the most likely route to failure at $A_t$. The interval in $A_t$'s parent's discretization that has the highest posterior probability given the virtual evidence can be hypothesized to be the bin most likely to lead to $A_t$ exceeding $a_c$, and is a good candidate for subdivision to increase the accuracy of the discretization. As any prior measurements are simultaneously included in the copy of the network with virtual evidence, all available information is used in determining the optimum place for subdivision. The locations of these high-posterior intervals are then recorded for subdivision in the original DBN, and the copied network is discarded. Eq. (6) represents this hypothesis: the smoothing error is proportional to the maximum posterior probability given the inspection evidence $m_{1:i}$ and virtual evidence $a_v$. The smoothing error of $X_t$ can be obtained accordingly by replacing $A_t$ with $X_t$ in Eq. (6). It is significant to note that calculating the posterior distribution after inserting new evidence incurs extra computational cost. Therefore, the number of nodes to be queried should be as small as possible while still representing most of the distribution's characteristics. One method is to query the parents of each piece of "virtual evidence", since those nodes are directly linked with, or close to, the measurement and could be affected greatly:

$E_{\mathrm{smooth}}(A_t) \propto \max\left(\Pr(A_t \mid m_{1:i}, a_v)\right)$  (6)
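A minimal numpy illustration of the virtual-evidence query behind Eq. (6): condition a joint (parent bin x crack-size bin) table on the crack size exceeding $a_c$, and take the parent bin with maximum posterior as the subdivision candidate. The joint table below is a made-up stand-in for the network posterior, not output from the paper's model.

```python
import numpy as np

# Illustrative joint table Pr(parent_bin, A_t_bin | m_{1:i}); rows: 4 parent bins,
# columns: 5 crack-size bins, of which the last two exceed the critical size a_c.
joint = np.array([[0.10, 0.08, 0.02, 1e-4, 1e-6],
                  [0.15, 0.20, 0.05, 5e-4, 1e-5],
                  [0.05, 0.15, 0.10, 2e-3, 1e-4],
                  [0.01, 0.03, 0.05, 4e-3, 5e-4]])
joint /= joint.sum()

failure_cols = [3, 4]                       # bins with crack size > a_c
# "Virtual evidence" a_v: observe failure, i.e., A_t lies in the failure bins
parent_post = joint[:, failure_cols].sum(axis=1)
parent_post /= parent_post.sum()            # Pr(parent | m_{1:i}, a_v)

split_bin = int(np.argmax(parent_post))     # Eq. (6): max-probability bin drives the error
print(parent_post, "-> subdivide parent bin", split_bin)
```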
In addition to smoothing, errors in the prediction of future states are also important to identify. According to Eq. (4), once the posterior distribution of $X_i$ is obtained, the future state of $A_{i+k}$ can be computed by marginalizing out all the intermediate $X_t$ and $A_t$ nodes from time step $i+1$ to $i+k$. Again, it can be assumed that the error in estimating the POF for $t$ ($t > i$) is driven by the maximum probability bin of $A_{t-1}$ that reaches critical size at the $A_t$ node. However, in a DBN, as the nodes at the current time step are linked to the nodes at the future time step, the error in computing crack size growth at the current time step will also be carried to the future time step, resulting in further error accumulation. Therefore, instead of considering only the error that each bin contributes by going to failure, the propagated error that leads to failure after time step $t$ must be considered. If $Y_t = 1\{A_t > A_{t-1}\}$ is an indicator function which takes 1 when $A_t > A_{t-1}$ and 0 otherwise, then the prediction error can be computed from the maximum probability bin of the marginalized joint probability $\Pr(X_t, Y_t \mid m_{1:i})$. Eqs. (7) and (8) show the calculation of the prediction error for the $A_{t-1}$ and $X_t$ nodes. The bins with the highest posterior probability (including all crack measurement data available) are then selected for subdivision in the algorithm:

$E_{\mathrm{pred}}(A_{t-1}) \propto \max\left(\sum_{x_t}\sum_{a_t} \Pr(X_t, Y_t \mid m_{1:i})\right)$  (7)

$E_{\mathrm{pred}}(X_t) \propto \max\left(\sum_{a_{t-1}}\sum_{a_t} \Pr(X_t, Y_t \mid m_{1:i})\right)$  (8)

3.2. Proposed reliability purposed dynamic discretization algorithm

Based on the equations presented above, a dynamic discretization algorithm can be developed which is suitable for reliability-based deterioration-process DBN models. The algorithm is composed of five major steps, which are explained below.

1. Initialization: Though the proposed dynamic discretization algorithm is not sensitive to the initial discretization, the initial discretization still needs to be carefully considered, as it helps make the algorithm more efficient. A recommended method is to use a uniform discretization for root nodes (nodes which do not have any parent nodes) and perform a Monte Carlo simulation for the remaining nodes. In the fatigue crack examples used here, this simulation can be done directly on the underlying fatigue crack growth model. The initial intervals of those nodes can then be divided according to the sampling distribution. Also, the overall range covered by the discretization must be large enough to cover both the prior and posterior distributions.

2. Initial conditional probability table (CPT) generation: With the initial discretization determined, the corresponding probability values for the conditional probability table of each node in the network need to be calculated. Fundamentally, this is an approximation of the underlying continuous stochastic variable as a discrete probability mass function. Straub [1] proposed a novel discretization approach for any continuous random variable in the DBN. Following the derivation from Straub, consider discretization of any single continuous variable represented by node $R$, where the parents of $R$ are denoted $Q$, and $q$ and $r$ denote the corresponding discrete variables at certain states. In general, each node's CPT depends on both the likelihood of the node itself taking certain states given the input state of the node's parents and the likelihood of the parent states. The parent node distribution marginalized in terms of the child variable states is necessary for this calculation. However, such a distribution is not required to be defined in constructing a valid Bayesian Network (e.g. Eq. (10) in [1]). Straub simplified this problem by assuming that the child node probability can be calculated over one parent node at a time (i.e., independence between parents for the child CPT, which is often true but not always), and by assuming a form of the marginal parent distribution. Straub assumes a uniform parent marginal distribution for discretization intervals in the CPT with both upper and lower bounds, and an exponential distribution form when one bound of the interval is infinite. While Straub's results show the strong performance of this method in general, a faster, simplified approach was used here. In the dynamic discretization adopted here, the CPTs are repeatedly re-generated as the discretization updates. This allows the simpler and faster method to be used effectively, as the approximations involved improve as the interval width is reduced. Additionally, it removes the need for the numerical integration present in Straub's method, as well as the assumed form of the parent distributions. In this approach, if $Q$ is not the root node, three points are selected to approximate each state of the node (a sketch of this three-point mapping is given after Algorithm 2 below). Then $\Pr(R = r \mid Q = q)$ can be calculated as follows:

$\Pr(R = r \mid Q = q) \approx \frac{1}{3}\sum_{q = [q_l, q_m, q_u]} I_{\{r\}}(G_R(q))$  (9)

where $G_R(\cdot)$ is a function that calculates the value of node $R$ given the parent node state $Q$. As not all parent states will result in a value of $G_R$ that corresponds to $R = r$ (e.g. some states will lead to an output that falls into another interval in the discretization of $R$), an indicator function $I_{\{r\}}(\cdot)$ is used. $I$ takes
on a value of 1 when $R = r$ and 0 otherwise. This approximation is run over three points in a bounded interval: $q_l$, $q_m$ and $q_u$ represent, respectively, the lower bound, mean, and upper bound of each interval. Although the approximation in Eq. (9) for intervals with two boundaries appears rough, the error in each interval is approximately proportional to the size of the interval, and can be reduced by dividing intervals with larger errors. However, special consideration is needed in a reliability-based discretization method for final discretization intervals with only one boundary. For nodes such as crack size, these infinite final upper intervals correspond to physical states where the system has failed and thus will be included in the final POF calculation. For example, in this work the infinite final interval of $A_T$ corresponds to cracks up to infinite length. The probability associated with this final interval will be included in the POF, as the POF calculation for any $A_t$ node is made by summing the probability in all the bins where the crack size is larger than $a_c$. If the probability in the final one-boundary interval contributes significantly to this summation, then the final estimated POF will be sensitive to the assumptions made in handling it. Either the sampling described here or the exponential prior approximation from Straub is at best an approximation, and for infinite intervals reducing the interval size to reduce the error is not an option. Rather than attempting to improve the approximation for infinite intervals, the approach taken here is to reduce the associated probability of this final interval, so as to reduce the sensitivity of the POF calculation to it. In other words, if $a_{\mathrm{Inf}}$ is the final one-boundary bin used to discretize $A_t$, then the following relationship should be enforced:

$\Pr(A_t = a_{\mathrm{Inf}} \mid m_{1:i}) \ll \Pr(A_t = a_{\mathrm{Inf}-1} \mid m_{1:i})$  (10)
This ensures that the probability contribution to the POF from the final infinite interval is much smaller than that from the adjacent two-boundary interval. Therefore, the sensitivity of the POF to the approximations in this infinite interval is likewise small.

3. Compute the error: After generating the CPTs, the estimated per-interval smoothing error and prediction error are calculated according to Eqs. (6) and (8), respectively. For effective convergence, the prediction error is computed at time step $t_u$ ($i+1 < t_u \le T$) by directly using the posterior distributions of the nodes and the CPD relations according to Eqs. (7) and (8). To compute the smoothing error, the "virtual evidence" is inserted at the last non-zero posterior probability bin of the $A_t$ node at time step $t_v$ ($2 < t_v < i$). Then, the posterior distributions of the $A_t$ and $X_t$ nodes are queried through time steps 1 to $t_v - 1$. Finally, the maximum probability of the bin and the index of that bin are stored for each time step.

4. Stopping criteria judgment: Two stopping criteria have been established. The first criterion is simply to judge whether the algorithm has reached a pre-set maximum number of iterations. The second criterion is to check whether $\sum_{t=1}^{T} |\beta_t^{(l)} - \beta_t^{(l-1)}| / k_c \le \alpha$, where $\beta_t^{(l)}$ is the reliability index at iteration $l$ for time step $t$, and $k_c$ is a normalizing constant appropriate for the problem. Here $k_c = \sum_{t=1}^{T} |\beta_t^{(2)} - \beta_t^{(1)}|$ is used, which compares the remaining improvement in $\beta$ to the initial improvement made by the algorithm in the first cycle. For this work, $\alpha$ is chosen between 0.5% and 1%, which means the variation between $\beta_t^{(l)}$ and $\beta_t^{(l-1)}$ is much less than that of the initial iteration.

5. Subdivide the intervals: The subdivision operation is performed for each queried time step at each iteration of the algorithm. In the present implementation, all time steps are queried in terms of smoothing and prediction tasks, although it may be possible
to further improve the speed of the algorithm by optimizing the query time step list. The subdivision intervals are found from the intervals with maximum smoothing and prediction errors at each queried time step, which can be obtained directly through Eqs. (6)-(8). When the same subdivision interval is identified for division by multiple steps in the algorithm, it is still added only once to the subdivision list and is divided into only two additional intervals. If the stopping criteria are not satisfied, the recorded split intervals' indexes are used to update the discretization at the next iteration. A simple merge operation has also been considered in the proposed algorithm, which tries to merge the zero-posterior bins or the bins with the least error. However, it proved to only slightly reduce computational cost while introducing a potential iteration-to-iteration instability in which intervals can be merged and then subdivided on successive iterations. Therefore, merge operators were not used in this work. The RPDD method is summarized below.

Algorithm 1. Proposed reliability purposed dynamic discretization algorithm.

Input: DBN initialized per step 1 above; a possibly incomplete set of crack size measurements M = (m_1, m_2, ..., m_i) up to time slice i, the time step of the last crack measurement.
Output: Reliability index beta at each time step from the final discretization.

for l = 1 to max_num_iter do
    Compute the CPTs: Pr^(l)(X_1), Pr^(l)(A_1 | X_1), Pr^(l)(X_t | X_{t-1}), Pr^(l)(A_t | A_{t-1}, X_t), Pr^(l)(M_t | A_t)
    Insert the crack inspection evidence M_t = m_{1:i}
    Perform inference to find the posterior distributions Pr^(l)(X_t | m_{1:i}), Pr^(l)(A_t | m_{1:i})
    Compute beta_t^(l) according to Eq. (1)
    if sum_{t=1}^{T} |beta_t^(l) - beta_t^(l-1)| / k_c <= alpha or l = max_num_iter then
        return beta_t^(l)
    else
        Call Division (see Algorithm 2)
    end if
end for

Algorithm 2. Division algorithm called from Algorithm 1.

Input: DBN with evidence inserted per Algorithm 1.
Output: Array of new subdivision intervals for nodes in the network.

(Prediction error calculation and identification of intervals to split)
for t_u = i+1 to T do
    E_A[t_u] = E_pred(A_{t_u - 1}) by Eq. (7)
    E_X[t_u] = E_pred(X_{t_u}) by Eq. (8)
end for
I_A = interval index for maximum error in E_A
I_X = interval index for maximum error in E_X

(Smoothing error calculation and identification of intervals to split)
for t_v = 1 to i do
    Insert virtual evidence at A_{t_v} = a_v
    for tt = 1 to t_v - 1 do
        Perform network inference to find the posterior distributions Pr^(l)(X_tt | m_{1:i}, a_v), Pr^(l)(A_tt | m_{1:i}, a_v)
        E_A[tt] = E_smooth(A_tt) by Eq. (6)
        E_X[tt] = E_smooth(X_tt) by Eq. (6)
    end for
    I_AM = interval index for maximum error in E_A
    I_XM = interval index for maximum error in E_X
    I_A = I_A union I_AM
    I_X = I_X union I_XM
end for
for k = 1 to length(I_A) do
    split the interval in A_t and M_t at I_A[k] into two equal intervals
end for
for k = 1 to length(I_X) do
    split the interval in X_t at I_X[k] into two equal intervals
end for
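A sketch of the three-point CPT generation of Eq. (9), referenced in step 2 above, for a child node R with a single parent Q; the linear crack-increment mapping G_R and the interval edges used here are illustrative assumptions, and the interval mean q_m is approximated by the midpoint.

```python
import numpy as np

def three_point_cpt(q_edges, r_edges, g):
    """Eq. (9): Pr(R = r | Q = q) ~ (1/3) * sum over {q_l, q_m, q_u} of I_r(g(q)).
    q_edges/r_edges: interval boundaries; g: deterministic child-given-parent map."""
    nq, nr = len(q_edges) - 1, len(r_edges) - 1
    cpt = np.zeros((nq, nr))
    for i in range(nq):
        ql, qu = q_edges[i], q_edges[i + 1]
        for q in (ql, 0.5 * (ql + qu), qu):       # lower bound, midpoint, upper bound
            r_bin = np.searchsorted(r_edges, g(q), side="right") - 1
            r_bin = min(max(r_bin, 0), nr - 1)    # clamp values outside the grid
            cpt[i, r_bin] += 1.0 / 3.0            # indicator contribution I_{r}(G_R(q))
    return cpt

# Illustrative mapping: child crack size = parent crack size plus a fixed 1 mm increment
q_edges = np.linspace(0.0, 10.0, 6)
r_edges = np.linspace(0.0, 12.0, 7)
print(three_point_cpt(q_edges, r_edges, lambda a: a + 1.0))
```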
4. Results and discussion

4.1. Overview

The proposed algorithm will be applied to the following two examples: first, a simple constant crack growth model where all the nodes follow Gaussian distributions, and second, the stochastic crack growth model from Straub's [1] paper. Both the averaged total KL error $\varepsilon_{KL}$ and the reliability index error $\varepsilon_\beta$ of the proposed method will be compared, at each iteration, with static discretization, with the proposed method with KL error judgment (an algorithm that follows the RPDD structure but uses the KL error metric to find the bins to split), and with Neil's dynamic discretization. The errors are calculated according to Eqs. (11) and (12), where $w_j$ is each discretization interval and $j$ is the counter over all such intervals:

$\varepsilon_{KL} = \dfrac{1}{T}\sum_{t=1}^{T}\sum_{j} E_{w_j}$  (11)

$\varepsilon_{\beta} = \dfrac{1}{T}\sum_{t=1}^{T}\left|\beta_{\mathrm{approx},t} - \beta_{\mathrm{true},t}\right|$  (12)

It will be shown that the proposed RPDD algorithm produces a more accurate reliability estimation than static discretization and Neil's dynamic discretization, while Neil's method is more accurate in terms of the whole probability distribution.

4.2. A simple constant crack growth model

A simple constant crack growth model, shown in Fig. 2, is used to present an application of the proposed algorithm. As shown in Fig. 2, only one node $dA_t$ is used as the crack growth parameter $X_t$; it represents the crack size increment between time step $t-1$ and time step $t$. All the nodes are continuous and assumed to follow the normal distribution; thus the exact distribution can be solved by the Bayesian linear regression (BLR) method [23] or by a standard Kalman filter. Here the BLR approach is adopted. The relations of the nodes are shown in Eq. (13), which assumes the same prior distribution of the crack increment at each time step. For $t = 1, 2, 3, \ldots, T$:

$A_0 \sim \mathrm{Normal}(\mu_{A_0}, \sigma^2_{A_0}), \quad dA_t \sim \mathrm{Normal}(\mu_{dA}, \sigma^2_{dA}), \quad A_t = A_{t-1} + dA_t, \quad M_t \sim \mathrm{Normal}(A_t, \sigma^2_{m})$  (13)

Eq. (13) can be written more compactly in vector form:

$\mathbf{m} = \Phi\,[A_0, dA_1, dA_2, \ldots, dA_T]^{T} + \boldsymbol{\epsilon}$  (14)

where $\Phi$ is a matrix composed of zeros and ones such that the multiplication in Eq. (14) performs the correct addition of the crack growth increments $dA$ to arrive at the mean crack size for each entry of the $\mathbf{m}$ vector, and $\boldsymbol{\epsilon}$ represents measurement error. This can also be expressed for a single time step. For $i = 1, 2, \ldots, n$ ($n \le T$), the conditional distribution of $M_i$ given a set of explanatory variables taken from $\Phi$, $\phi_i = (\phi_{i0}, \phi_{i1}, \phi_{i2}, \ldots, \phi_{iT})^{T}$, can be specified as

$m_i = \phi_i^{T}\, dA + \epsilon_i$  (15)

where $dA$ is a $(T+1) \times 1$ vector and $\epsilon_i$ is a normally distributed variable with a mean of 0 and standard deviation $\sigma_\epsilon$, representing the measurement error. The objective is to find the predictive distribution of the actual crack size node given the life cycle measurement data sets $(\Phi, M)$. Here $\Phi = (\phi_1, \phi_2, \ldots, \phi_n)$ is a $(T+1) \times n$ matrix and $M = (m_1, m_2, \ldots, m_n)^{T}$ is an $n \times 1$ vector. Since all these nodes are assumed to follow the normal distribution, exact solutions can be obtained for the rest of the nodes, which are also all normally distributed. The posterior distribution of $dA$ is given in Eqs. (16) and (17), and the posterior distribution of $A$ in Eq. (18):

$\Pr(dA \mid \Phi, M) \propto \Pr(M \mid dA, \Phi)\Pr(dA) = \mathrm{Normal}(dA \mid \mu_N, \sigma^2_N)$  (16)

$\mu_N = \sigma^2_N\left((\sigma^2_0)^{-1}\mu_0 + \dfrac{1}{\sigma^2_\epsilon}\Phi M\right), \qquad \sigma^2_N = \left((\sigma^2_0)^{-1} + \dfrac{1}{\sigma^2_\epsilon}\Phi\Phi^{T}\right)^{-1}$  (17)

where both $\mu_0$ and $\sigma^2_0$ are $(T+1) \times (T+1)$ diagonal matrices, with $(\mu_0)_{11} = \mu_{A_0}$, $(\mu_0)_{jj} = \mu_{dA}$, $(\sigma^2_0)_{11} = \sigma^2_{A_0}$ and $(\sigma^2_0)_{jj} = \sigma^2_{dA}$ for the remaining diagonal entries. The posterior distribution of $A$, denoted $\Pr(A \mid \Phi, M)$, becomes

$\Pr(A \mid \Phi, M) = \int \Pr(A \mid dA, \Phi)\Pr(dA \mid \Phi, M)\, d(dA) = \mathrm{Normal}(A \mid \Phi^{T}\mu_N, \Phi^{T}\sigma^2_N\Phi)$  (18)
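A minimal sketch of the Bayesian linear regression solution of Eqs. (16)-(18) used as the exact reference; the priors and measurements anticipate the constant crack growth example that follows (Table 1; measurements at t = 5, 10, 15), and the construction of Phi is this sketch's reading of Eq. (14).

```python
import numpy as np

T = 20
mu_A0, sig_A0 = 7.0, 3.0           # Table 1 priors
mu_dA, sig_dA = 1.0, 0.3
sig_eps = 2.0                      # measurement standard deviation

# Prior over theta = [A0, dA_1, ..., dA_T]
mu0 = np.concatenate(([mu_A0], np.full(T, mu_dA)))
Sig0_inv = np.diag(1.0 / np.concatenate(([sig_A0**2], np.full(T, sig_dA**2))))

# Phi columns of ones/zeros: mean crack size at time t is A0 + sum(dA_1..dA_t), Eq. (14)
meas = {5: 10.0, 10: 14.0, 15: 25.0}
Phi = np.array([[1.0] + [1.0 if j <= t else 0.0 for j in range(1, T + 1)]
                for t in meas]).T                  # (T+1) x n
M = np.array(list(meas.values()))

# Posterior over theta, Eqs. (16)-(17)
SigN = np.linalg.inv(Sig0_inv + (Phi @ Phi.T) / sig_eps**2)
muN = SigN @ (Sig0_inv @ mu0 + (Phi @ M) / sig_eps**2)

# Posterior mean and standard deviation of the crack size at every step, Eq. (18)
Psi = np.array([[1.0] + [1.0 if j <= t else 0.0 for j in range(1, T + 1)]
                for t in range(1, T + 1)]).T       # predicts A_1..A_T
mean_A = Psi.T @ muN
std_A = np.sqrt(np.diag(Psi.T @ SigN @ Psi))
print(mean_A[[4, 9, 14]], std_A[[4, 9, 14]])       # posteriors at the measured steps
```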
The parameters of this model are summarized in Table 1. To avoid negative values of $dA_t$ and $A_t$, the mean values chosen for the initial crack size and $dA$ are relatively large. The network is configured with 20 slices, representing 20 time steps. Of these 20 time steps, three pairs of measurement data are provided as follows: $m_5 = 10$ mm, $m_{10} = 14$ mm and $m_{15} = 25$ mm. The reliability index must be estimated for the $A_t$ node, covering both the smoothing ($t \le 15$) and prediction ($15 < t \le 20$) ranges. The Bayes Net Toolbox (BNT) [24] for Matlab efficiently provides the posterior solutions and is used here as the inference engine. To begin the proposed algorithm, initial discretizations must be generated for the $dA_t$, $A_t$ and $M_t$ nodes; these are summarized in Table 2.
Table 1. Parameters of constant crack growth model.

Variable | Distribution | Mean | Standard deviation
dA_t (mm) | Normal | 1 | 0.3
A_0 (mm) | Normal | 7 | 3
M_t (mm) | Normal | A_t | 2
a_c (mm) | Deterministic | 30 | -

Fig. 2. A simple constant crack growth model. [figure]
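A sketch of the Monte Carlo initialization recommended in step 1 of Section 3.2, run directly on the Table 1 model to check that ranges like those in Table 2 cover the prior distribution, including the tail beyond a_c = 30 mm; the sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_samples = 20, 100_000

A0 = rng.normal(7.0, 3.0, n_samples)                 # Table 1 priors
dA = rng.normal(1.0, 0.3, (T, n_samples))
A = A0 + dA.cumsum(axis=0)                           # A_t = A_{t-1} + dA_t

# Range the discretization must cover: prior support of A_t across all steps,
# including the low-probability tail beyond the critical size a_c = 30 mm
print("A_T range:", A[-1].min(), A[-1].max())
print("prior Pr(A_T > 30 mm) ~", (A[-1] > 30.0).mean())
```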
Table 2. Initialization of nodes for constant crack growth model.

Variable | Number of states | Discretization
dA_t (mm) | 4 | [0 : 5/3 : 5], +inf
A_t (mm) | 5 | [0 : a_c/3 : a_c], a_max, +inf
M_t (mm) | 4 | [0 : a_c/3 : a_c], +inf

Fig. 3. The discretization of dA_t, A_t and M_t at iteration 9 for the constant crack growth model. [figure]

Fig. 4. Reliability index error comparison by algorithm on the constant crack growth model. [figure]
The initial number of intervals should be as small as possible, while the range of the intervals should cover the whole probability domain, especially the low-probability tail region. For the discretization of $A_t$, $a_{\max}$ denotes the boundary of the last bin, where only one boundary exists. Here, $a_{\max} = 60$ mm is chosen, which satisfies Eq. (10). The discretization results at 9 iterations are shown in Fig. 3, demonstrating the results of applying the method presented in Section 3. Full convergence to the criteria presented in Section 3.2 was achieved at 13 iterations, with a similar but denser discretization pattern. The number of intervals for the $dA_t$, $A_t$ and $M_t$ nodes increases to 54, 97, and 57, respectively. For the $A_t$ node, dense intervals have been generated between approximately 20 mm and 35 mm, where the posterior distribution of $A_t$ is higher than in the rest of the region and contributes significantly to the POF. Similar results occur for the region of the $dA_t$ node between 0.0 mm and 1.3 mm. Conversely, when $A_t$ is above or below that region, the interval size is larger, indicating either a smaller posterior probability region or a smaller contribution to the POF. Similarly, relatively coarse discretizations have also been generated for the $dA_t$ node where the crack increment is large and for the $M_t$ node where the crack size is small. Therefore, it is evident that the RPDD algorithm has the ability to adjust the discretization accordingly and distinguish the different contributions to the reliability calculation according to the posterior distribution. The proposed algorithm was also compared with two standard algorithms: Neil's dynamic discretization and static uniform discretization of all variables. An additional comparison was made using the RPDD methodology including virtual evidence, but using the KL error metric in place of the proposed maximum posterior probability error metric to pick which interval to subdivide. Although Neil's method has been fully implemented in the commercial Bayesian Network software package AgenaRisk, in this work a re-coded version of the algorithm was used based on the published papers describing the approach.
Fig. 5. KL divergence error comparison by algorithm on the constant crack growth model. [figure]
In AgenaRisk, the analyst is not allowed to control the discretization size at each iteration; additionally, this method has an embedded merge operation that may merge the critical crack size bin out of the discretization. This would introduce further error in the reliability analysis and hamper a comparison focused on the algorithms' ability to identify intervals that should be subdivided. Therefore, in the re-coded version of Neil's algorithm, the merge operator was not enabled for the results which follow.

The $\varepsilon_\beta$ and $\varepsilon_{KL}$ of the $A_t$ node for all four methods are shown in Figs. 4 and 5, respectively. For each iteration, the RPDD method is run first to update the discretization. Then the remaining three methods are run until the number of discretization intervals for all nodes matches that of the RPDD method. It can be seen that as the number of intervals increases at each iteration, both types of errors generally decrease for all four algorithms. However, for the reliability index error, the RPDD algorithm's error drops the fastest among the four methods, while Neil's dynamic discretization method performs the worst. In contrast, Neil's method has the smallest KL error compared with the rest of the methods, while the proposed RPDD algorithm performs the worst. For both the RPDD algorithm with KL error judgment and the uniform discretization, the performance always falls between the RPDD and the original Neil's method. These results strongly suggest that, for the crack growth reliability problem, a reliability-focused error metric may be beneficial for adaptive discretization. Similarly, the results suggest that the proposed reliability-focused algorithm is not the most suitable for more general discretization needs.

For all four algorithms, the mean and standard deviation are reconstructed from the discretized posterior distributions for both the $dA_t$ and $A_t$ nodes, as shown in Fig. 6. The exact solutions calculated by Bayesian linear regression are also shown for comparison. It can be seen that all four methods predict the mean values of the $dA_t$ and $A_t$ nodes well. On the other hand, varying errors exist in predicting the standard deviation. Among all the methods, Neil's method is the closest to the exact solution, which further supports the idea that the KL metric best addresses the entire probability distribution. The RPDD method predicts fairly poorly, especially when the crack size is small, since a very coarse discretization is generated for the small crack size region, as it is not significant for reliability. However, for the reliability index prediction shown in Fig. 7, the proposed RPDD algorithm is the most accurate among all the methods and has a very small absolute error in terms of $\beta$.

Fig. 6. The comparisons of mean and standard deviation for dA_t and A_t nodes for all algorithms on the constant crack growth model. [figure]

Fig. 7. The comparisons of reliability index prediction at iteration 9 by algorithm on the constant crack growth model. [figure]
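The reconstruction behind Fig. 6 amounts to taking moments of a discretized posterior; a minimal sketch, with placeholder bin edges and probabilities, and with the final one-boundary bin represented by a finite surrogate midpoint (an assumption of this sketch).

```python
import numpy as np

def moments_from_bins(edges, probs, tail_midpoint):
    """Mean/std of a discretized posterior. edges: finite bin boundaries;
    probs: one probability per bin, the last bin being the one-boundary tail."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    mids = np.append(mids, tail_midpoint)   # surrogate point for the infinite bin
    mean = np.dot(probs, mids)
    var = np.dot(probs, (mids - mean) ** 2)
    return mean, np.sqrt(var)

edges = np.array([0.0, 10.0, 20.0, 30.0, 60.0])     # placeholder discretization
probs = np.array([0.05, 0.55, 0.30, 0.099, 0.001])  # placeholder posterior
print(moments_from_bins(edges, probs, tail_midpoint=70.0))
```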
It is difficult to compare the computational cost of all four methods, because Neil's dynamic discretization method uses a static junction tree algorithm, which allows different CPTs for the same nodes at different time steps, while the RPDD algorithm uses a DBN inference method, which uses the same CPT for the same nodes at different time steps. Also, due to the different focus of each algorithm, it might be very inefficient to apply Neil's method to achieve the same reliability accuracy that the proposed RPDD method can achieve. However, some rough comparisons can still be made. (1) In applying the DBN inference method, the time for generating the CPTs for Neil's method is $T$ times that of the proposed RPDD method, as the CPTs can differ at each time step. (2) The computational cost of the RPDD algorithm
mainly depends on the number of queried nodes for the smoothing error; further computational time improvements are possible if a more advanced method for determining which nodes to query is developed. (3) To achieve the same accuracy level, the RPDD algorithm uses the fewest discretization intervals among all four methods.

Table 3 shows a comparison of the CPU time spent per iteration by the RPDD method and by an equivalent static uniform discretization. The equivalent static uniform discretization is obtained by gradually increasing the number of intervals until it achieves the same accuracy in reliability index; the CPU time to infer that discretization is then recorded.

Table 3. The comparison of CPU time per iteration (sec).

Iteration | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
RPDD | 1.96 | 1.91 | 1.73 | 1.82 | 2.62 | 5.29 | 8.63 | 15.32 | 23.06
Equiv. static uniform | 0.65 | 0.60 | 0.55 | 0.91 | 1.82 | 3.75 | 5.87 | 11.05 | 21.04
It can be seen that if two or three iterative attempts need to be made with static uniform discretization, the CPU time will exceed that of the RPDD method. This advantage grows as the number of subdivisions increases; by the ninth iteration, the RPDD method takes only 10% longer than an equivalent uniform static discretization. The RPDD method also has the advantage of being automated.

4.3. Stochastic crack growth model

Fig. 8. A stochastic crack growth DBN model (after [1]). [figure]
While the proposed RPDD method works well for the simple constant crack growth model presented in the previous section, most real-world crack growth models are significantly more complex. In this section the RPDD method is evaluated on Straub's stochastic crack growth model [1], which in turn is based on Yang and Manning's [25] method for aerospace structures, where the crack growth rate is expressed by a lognormal random process modifying a deterministic crack growth simulation. The expression is:

$A_t = \left[\left(1-\dfrac{m}{2}\right) c X_t Y (A_{t-1})^{m} \Delta S^{m} \pi^{m/2}\, n + A_{t-1}^{1-m/2}\right]^{1/(1-m/2)}$  (19)

The variable $A_t$ is again the actual crack size at time step $t$, $c$ and $m$ are the material constants, $Y$ is the geometric function relating the stress intensity factor to the current crack length, $\Delta S$ is the range of cyclic stress applied to the crack, and $X_t$ is a stationary lognormal random process with a median value of 1.0 and a standard deviation $\sigma_X$. The lognormal random process $X_t$ can be further characterized by its autocovariance function, which defines the deviation from the median crack growth relationship owing to material and model imperfections. In general, the autocovariance of the crack growth rate should decrease as the interval between two time instances $t_1$ and $t_2$ increases. Thus, an exponential form of the autocovariance function with standard deviation $\sigma_X$ was proposed by Yang and Manning:

$\mathrm{Cov}[X_{t_1}, X_{t_2}] = \sigma_X^2 \exp(-\alpha_X |t_2 - t_1|)$  (20)

In this function, $\alpha_X$ is the measure of correlation time for $X(t)$. Therefore, a parametric crack growth model under a discrete time step can be expressed in the generic form

$A_t = f(A_{t-1}, n, X_t, \alpha_X)$  (21)

Table 4. Parameters of the stochastic crack growth model.

Variable | Distribution | Mean | Standard deviation | Auto-covariance
alpha_X (cycle) | Lognormal | 1E4 | 1E4 | -
X_t | Lognormal | 1.2 | 0.8 | Eq. (20)
A_0 (mm) | Exponential | 1 | 1 | -
M_t (mm) | Normal | A_t | 5 | -
Delta-S (MPa) | Deterministic | 60 | - | -
c | Deterministic | 5.85E-14 | - | -
m | Deterministic | 3.59 | - | -
a_c (mm) | Deterministic | 60 | - | -
n (cycle) | Deterministic | 1E4 | - | -

Table 5. Initialization of nodes for stochastic crack growth model.

Variable | Number of states | Discretization
alpha_X (cycle) | 4 | 0, 10^4 [1 : 5/3 : 6]
X_t | 4 | [0 : 5 : 15], +inf
A_t (mm) | 6 | 0, exp[ln(0.01) : (ln(a_c) - ln(0.01))/3 : ln(a_c)], b, +inf
M_t (mm) | 5 | 0, exp[ln(0.01) : (ln(a_c) - ln(0.01))/3 : ln(a_c)], +inf
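A sketch of one transition of Eq. (19) with parameters from Table 4; X_t is drawn here as an independent lognormal per step (median 1.0, standard deviation sigma_X), ignoring the autocorrelation of Eq. (20), and the geometry function Y is simplified to a constant 1.12, whereas the example uses the finite-width edge-crack correction.

```python
import numpy as np

# Table 4 parameters (units: mm, MPa, cycles)
c, m = 5.85e-14, 3.59
dS, n = 60.0, 1e4            # cyclic stress range; cycles per time step
sigma_X = 0.8

def lognormal_Xt(rng):
    # Median 1.0 (mu = 0); choose sigma so the lognormal std equals sigma_X:
    # Var = (e^{s2} - 1) e^{s2} = sigma_X^2  ->  e^{s2} = (1 + sqrt(1 + 4 sigma_X^2)) / 2
    s2 = np.log(0.5 * (1.0 + np.sqrt(1.0 + 4.0 * sigma_X**2)))
    return rng.lognormal(mean=0.0, sigma=np.sqrt(s2))

def grow(a_prev, x_t, Y=1.12):
    """One step of Eq. (19); Y is held constant here for simplicity."""
    e = 1.0 - m / 2.0
    base = e * c * x_t * Y**m * dS**m * np.pi**(m / 2.0) * n + a_prev**e
    return base**(1.0 / e)

rng = np.random.default_rng(1)
a = 1.0                      # initial 1 mm edge crack
for t in range(5):
    a = grow(a, lognormal_Xt(rng))
    print(f"t={t+1}: a = {a:.3f} mm")
```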
A corresponding stochastic crack growth DBN model is proposed accordingly, as shown in Fig. 8. The number of cycles has been divided into $T$ equal time steps. $\alpha_X$ and $X_t$ are hyperparameters; $A_t$ and $M_t$ are the same as in the previous example. Details of this model can be found in Straub's [1] paper. The crack in this example is assumed to be an edge crack growing into a finite-width plate, a well-studied problem with similarities to many crack growth problems in ship and offshore structures. The plate has dimensions of 200 mm x 100 mm ($b = 100$ mm, $2h = 200$ mm), with plane strain conditions assumed. The material is assumed to be purely elastic with Young's modulus $E = 73.1$ GPa. A constant amplitude cyclic stress is applied on both sides of the plate with $S_{\max} = 60$ MPa and $S_{\min} = 0$ MPa. An initial edge crack with length $a_0 = 1$ mm is
formed at the mid-height of the plate. Three pairs of measurement data are provided from a deterministic crack growth simulation: $m_{10} = 1.4$ mm, $m_{20} = 2$ mm and $m_{30} = 5$ mm. The parameters of this model and the initial discretization are summarized in Tables 4 and 5, respectively. To reduce the approximation error from the one-boundary interval, according to Eq. (10), the two-boundary intervals of the $A_t$ node range from 0 to the plate width $b$, instead of to $a_c$ as in Straub's [1] paper. A very small number of states is given for this initial discretization. In this example, a large measurement error is used, which increases the uncertainty of the POF at each time step. In reality, if inspection data is available at time step $t$ and the measurement technique is sufficiently accurate, the reliability estimation before time step $t$ can be neglected. It took 16 iterations of the algorithm to reach the convergence tolerance specified in Section 3.2; however, the discretization results after 9 iterations are shown in Fig. 9, as the trends are similar and the less-dense discretization is easier to visualize. The numbers of intervals for $\alpha_X$, $X_t$, $A_t$ and $M_t$ are 15, 27, 61, and 60, respectively. Based on the figure, it is evident that the $A_t$ and $M_t$ nodes generally maintain a logarithmic discretization, as the crack size grows exponentially with an increasing number of cycles. Fewer intervals are generated by the RPDD algorithm for the $\alpha_X$ and $X_t$ nodes, since those parameters have less impact on the $A_t$ node and maintain a very similar distribution across time steps.

Fig. 9. The discretization of alpha_X, X_t, A_t and M_t at iteration 9. [figure]

Since an exact solution is not available for this model, a fine discretization generated using Straub's logarithmic static discretization is used for comparison. The number of intervals used for each node is increased gradually until the reliability output is stationary; the final numbers of intervals for $\alpha_X$, $X_t$, $A_t$ and $M_t$ are 150, 265, 318 and 318, respectively. The error of the RPDD algorithm at each iteration is then compared in Fig. 10 with those of the static discretization, using both uniform and logarithmic discretizations, and with the RPDD algorithm with KL error judgment. Similar to the constant crack growth model case, when starting from a similar error level, all four discretization methods generally reduce their errors until they begin to stabilize at a fixed value. However, the uniform discretization performs the worst among the four methods. The RPDD with KL error judgment performs slightly better than the static uniform discretization case.

Fig. 10. Reliability index error comparison by algorithm for Straub's DBN model. [figure]

Fig. 11. KL divergence error comparison by algorithm for Straub's DBN model. [figure]

Fig. 12. Comparison of the reliability index prediction at iteration 9 by algorithm for Straub's DBN model. [figure]
For the RPDD algorithm, the error initially drops more slowly than with the logarithmic discretization, up to approximately 25 intervals of the $A_t$ node. This is because, at each iteration, a subdivision occurs at the middle of each interval, while the logarithmic discretization allocates more effort to small crack sizes, which is closer to the optimum discretization. When the number of intervals of the $A_t$ node is above 25, the RPDD algorithm performs much better and takes only about 50 intervals of $A_t$ to reach the lowest error. For the KL divergence error, shown in Fig. 11, the RPDD algorithm with KL error judgment performs much better than the rest of the methods. The reliability index after 9 iterations for all the algorithms is plotted in Fig. 12. Although all four methods predict generally well during the smoothing stage ($t \le 30$), the prediction stage ($30 < t \le 50$), which reflects the future risk level given the inspection records, shows differences in prediction accuracy. In this region, where $\beta$ approaches values that would typically trigger maintenance or inspection actions, the RPDD method is significantly more accurate. The ability to accurately predict future probabilities in this range with the RPDD method enhances the value of the DBN as a decision-making tool.
5. Conclusions

This paper presented a new iterative discretization algorithm for more accurate and efficient reliability evaluations of deterioration processes modeled with a DBN. A new reliability-purposed dynamic discretization algorithm was developed which separates the error analysis into two stages, smoothing and prediction, and subdivides the intervals iteratively according to the reliability contribution of each interval. The proposed technique was compared with existing discretization approaches in two crack growth examples. Both examples illustrated the robustness and efficiency of the proposed algorithm. The results suggest that methods designed to estimate the entire distribution shape do not perform as well as the RPDD method for low-probability reliability estimations. Conversely, the RPDD method does not estimate the entire distribution as efficiently as existing, more general-purpose algorithms. Overall, this suggests that different algorithmic approaches may be beneficial for different target applications of Bayesian Networks. The possibility of future hybrid methods combining the technique presented here with those developed previously should also not be overlooked. For the reliability-focused problem, the results of this study showed that the proposed algorithm can achieve the same accuracy level as static discretization with less than half the number of intervals. It also avoids the need to manually iterate through different static discretization methods and resolution levels to achieve convergence. Further development of CPT generation, including the method developed in this work that avoids numerical integration, as well as methods for selecting which nodes to query with virtual evidence, could further improve the performance of the proposed method. An exploration of the method's ability to handle other types of structural reliability applications in addition to crack growth should also be considered in future research.

Acknowledgments

The authors would like to acknowledge the support of Dr. Paul Hess of the Office of Naval Research Code 331 under Grant N00014-10-1-0193.

Additional resources: The code used to implement the discretization approach and sample methods is available under the MIT Open Source License at DeepBlue (http://hdl.handle.net/2027.42/110679). Select input scripts and summary data from this study are also available.

References

[1] Straub D. Stochastic modeling of deterioration processes through dynamic Bayesian networks. J Eng Mech 2009;135(10):1089-99.
[2] Okasha NM, Frangopol DM, Decò A. Integration of structural health monitoring in life-cycle performance assessment of ship structures under uncertainty. Mar Struct 2010;23(3):303-21.
[3] Mohanty S, Chattopadhyay A, Peralta P, Das S. Bayesian statistic based multivariate Gaussian process approach for offline/online fatigue crack growth prediction. Exp Mech 2011;51(6):833-43.
[4] Nielsen UD, Jensen J, Pedersen PT, Ito Y. Onboard monitoring of fatigue damage rates in the hull girder. Mar Struct 2011;24(2):182-206.
[5] Friis-Hansen A. Bayesian networks as a decision support tool in marine applications [Ph.D. thesis]. Technical University of Denmark; 2000.
[6] Mahadevan S, Zhang R, Smith N. Bayesian networks for system reliability reassessment. Struct Saf 2001;23(3):231-51.
[7] Straub D, Kiureghian AD. Bayesian network enhanced with structural reliability methods: methodology. J Eng Mech 2010;136(10):1248-58.
[8] Straub D, Kiureghian AD. Bayesian network enhanced with structural reliability methods: application. J Eng Mech 2010;136(10):1259-70.
[9] Sankararaman S, Ling Y, Mahadevan S. Uncertainty quantification and model validation of fatigue crack growth prediction. Eng Fract Mech 2011;78(7):1487-504.
[10] Cohen ML, Kulkarni SS, Achenbach JD. Probabilistic approach to growth and detection of a truncated distribution of initial crack lengths based on Paris' law. Struct Health Monit 2011;11(2):225-36.
[11] Chang KC, Chen HD. Efficient inference for hybrid dynamic Bayesian networks. Opt Eng 2005;44(7):077201.
[12] Cheng J, Druzdzel M. AIS-BN: an adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. J Artif Intell Res 2000;13:155-88.
[13] Gogate V, Dechter R. Approximate inference algorithms for hybrid Bayesian networks with discrete constraints. In: The twenty-first annual conference on uncertainty in artificial intelligence (UAI-05). Arlington, VA: AUAI Press; 2005. p. 209-16.
[14] Langseth H, Nielsen T, Rumí R, Salmerón A. Inference in hybrid Bayesian networks. Reliab Eng Syst Saf 2009;94(10):1499-509.
[15] Neil M, Tailor M, Marquez D. Inference in hybrid Bayesian networks using dynamic discretization. Stat Comput 2007;17:219-33.
[16] Larsen RJ, Marx ML. An introduction to mathematical statistics and its applications. 5th ed. Boston: Prentice Hall; 2012.
[17] Beck JL, Au S-K. Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation. J Eng Mech 2002;128(4):380-91.
[18] Gamerman D, Lopes HF. Markov chain Monte Carlo: stochastic simulation for Bayesian inference. 2nd ed. Boca Raton, FL: Chapman and Hall/CRC; 2006.
[19] Murphy KP. Dynamic Bayesian networks: representation, inference and learning [Ph.D. thesis]. Berkeley: University of California; 2002.
[20] Kozlov A, Koller D. Nonuniform dynamic discretization in hybrid networks. In: The 13th conference on uncertainty in artificial intelligence (UAI-97), Providence, RI; 1997. p. 314-25.
[21] Marquez D, Neil M, Fenton N. Improved reliability modeling using Bayesian networks and dynamic discretization. Reliab Eng Syst Saf 2010;95(4):412-25.
[22] Madsen HO, Krenk S, Lind NC. Methods of structural safety. Englewood Cliffs, NJ: Prentice-Hall; 1986.
[23] Carlin BP, Louis TA. Bayesian methods for data analysis. 3rd ed. Boca Raton, FL: Chapman and Hall/CRC; 2009.
[24] Murphy K. The Bayes Net Toolbox for Matlab. Comput Sci Stat 2001;33:1-20.
[25] Yang J, Manning S. A simple second order approximation for stochastic crack growth analysis. Eng Fract Mech 1996;53(5):677-86.