ARTICLE IN PRESS Journal of Manufacturing Systems xxx (2015) xxx–xxx
Technical Paper
Adaptive resampling-based particle filtering for tool life prediction Peng Wang, Robert X. Gao ∗ Case Western Reserve University
Article history: Received 25 March 2015; Received in revised form 25 March 2015; Accepted 25 March 2015; Available online xxx

Keywords: Tool wear prediction; Particle filter; Bayesian inference; Remaining useful life
Abstract

Trending analysis has been widely investigated for the prediction of tool wear, which affects not only tool life but also the quality of machined products. This paper presents a Bayesian approach to predicting the flank wear rate, linking vibration data measured during machining with the state of tool wear. Variations in the measured data are aggregated based on the Kullback–Leibler divergence, which provides a measure of the distance between the current probability distribution of the measurement and the initial distribution, when no wear is present. Subsequently, state space estimation of the tool wear is realized by particle filtering (PF), a nonlinear, non-Gaussian system estimation technique. To overcome the sample impoverishment problem in sequential importance resampling (SIR), a new resampling scheme is proposed, which has been shown to quantify the confidence interval more reliably and to improve the accuracy of remaining useful life (RUL) prediction for the tool compared to the standard SIR method. The developed method is experimentally evaluated using a set of benchmark data measured from a high speed CNC machine performing milling operations. Good results are confirmed by the comparison between the predicted tool wear state and off-line tool wear measurements. © 2015 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
1. Introduction

When tool wear reaches a certain severity, increased cutting force, vibration, and temperature can cause deteriorated surface integrity and dimensional errors greater than the tolerance, leading to the end of tool life [1]. Consequently, in situ tool wear monitoring and tool life prognosis are important for ensuring machining precision and cost-effective machining operations. Traditionally, quantifying tool wear and predicting the remaining tool life are performed through physics-based models. These models establish empirical relationships between tool wear/life and certain cutting parameters (e.g. cutting speed, surface temperature), but do not fully capture the physical nature of tool wear. Unknown parameters in these empirical models are trained with a large number of experiments under different operating conditions [2]. Paris' law is a common approach to quantifying tool wear or crack growth [3], in which the stress intensity factor at the tip of a crack and the crack length are modeled with respect to two material parameters. Recent research [4] extended Paris' formula with mixed-mode stress intensity factors to develop an analytical model for two-dimensional rolling sliding contact scenarios. Among the physical models describing tool life, an important branch centers on Taylor's
∗ Corresponding author. E-mail addresses: [email protected] (P. Wang), [email protected] (R.X. Gao).
tool life equation [5], which relates tool life to cutting speed through an inverse exponential relationship. Hoffman [6] introduced an extended Taylor's equation and discussed the effects of feed rate and cutting depth on tool life in addition to cutting speed. The latest developments on Taylor's equation can be found in [7]. However, this kind of approach requires detailed and complete knowledge of the manufacturing system being monitored, which is often unavailable in practice. Moreover, the majority of the coefficients involved in the physical models need to be determined experimentally, making the physical models application specific. In contrast to the direct measurement of tool wear discussed above, another approach has arisen with the development of sensing techniques: using the fused information of indirect measurements (e.g. force, vibration, and acoustic emission) generated in the machining process to infer the underlying tool state [8]. Accordingly, the objective of the study becomes one of establishing a quantitative relationship linking the measurements, or features extracted from them, to the system states (i.e. tool wear) based on state and/or parameter estimation methods. Once the tool wear model is determined, the remaining useful life (RUL) of the tool, given a certain wear threshold, can be predicted correspondingly. Techniques that implement this process can be classified into two categories: data-driven and model-based. Typical data-driven methods, such as artificial neural networks and support vector regression (SVR), require a large amount of historical data for training. They are also unable to quantify the uncertainty involved in the machining and data acquisition processes, such as sensor and process noise,
http://dx.doi.org/10.1016/j.jmsy.2015.04.006 0278-6125/© 2015 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
Please cite this article in press as: Wang P, Gao RX. Adaptive resampling-based particle filtering for tool life prediction. J Manuf Syst (2015), http://dx.doi.org/10.1016/j.jmsy.2015.04.006
and the unknown future input trajectory distribution. Model-based techniques take advantage of available knowledge from first principles together with historical data. Compared to other approaches, model-based methods can provide information on all qualification measures of the estimation and prediction: accuracy, precision, and confidence. A model-based approach tries to estimate the posterior probability density function (pdf) of the system states given observable measurements, relying on the Markov assumption and Bayesian inference. RUL in this context is defined as the conditional expected time to final failure, given the current state. Depending on the dynamic system type and the noise assumptions, different methods, including the Kalman filter [9] (for linear systems with Gaussian noise), the extended Kalman filter [10] (for weakly nonlinear systems with Gaussian noise), and the particle filter (for nonlinear systems with non-Gaussian noise), can be applied to implement model-based estimation and prognosis. Particle filtering, also called the sequential Monte-Carlo method [11], estimates the posterior pdf through a set of randomly sampled, weighted particles. Among the different variants of PF, sequential importance resampling (SIR) is the most widely applied. Previous work has incorporated support vector regression (SVR) and auto-regression (AR) into SIR to improve the accuracy of long term prediction when online measurements are not available [17].
It was found that SIR suffers from several problems:

• The resampling stage of SIR, intentionally introduced to overcome the particle degeneracy problem [12], brings about the sample impoverishment problem, because particles sampled from the initial prior distribution do not vary with time [13];
• The estimation process is a discrete process and therefore lacks an optimal solution, for the same reason: particles are resampled from the same set of samples;
• When applying SIR to joint state and parameter estimation, SIR assumes the parameters to be constant or slowly varying [14], and thus cannot handle cases in which the parameters change abruptly during their evolution.

To tackle the problems mentioned above, a local search particle filter (LSPF) is proposed in this paper. LSPF builds on an adaptive resampling strategy that allows particles in the resampling process to be sampled from a range beyond the initial prior distribution, in order to maintain particle diversity. The resampling range is determined by the particles' performance in the last iteration. As a result, the estimation process can progressively reduce the effect of the initial sampling from the prior distribution by maintaining the number of unique particles and, consequently, increase the estimation accuracy and narrow the confidence interval. The accuracy of estimation and prediction relies not only on the estimation method, but also on the measurements/extracted features and the corresponding measurement models. For feature extraction, features in the time domain (e.g. mean, RMS, kurtosis) and frequency domain (e.g. spectral mean, wavelet energy, energy ratio) have been investigated and evaluated in [17]. In this paper, a quantification of distribution distance based on the Kullback–Leibler divergence (also called relative entropy) is introduced as the feature used in the measurement model.
By viewing the force data measured within one machining cut as a distribution, the feature is calculated by comparing the distribution of the new measurement to a reference distribution. The distribution distance is expected to increase as the tool wear progresses. The architecture of the work presented in this paper is shown in Fig. 1. The rest of the paper is organized as follows. After introducing the theoretical background of particle filtering and the proposed local search particle filter in Section 2, details of the particle filter
Fig. 1. Architecture presented in this paper.
based tool life prediction method is discussed in Section 3. The formulation of the state model and the measurement model, based on a tool wear rate model and feature extraction/selection techniques, is also discussed. The effectiveness of the presented technique is experimentally demonstrated in Section 4, based on run-to-failure data acquired using a ball nose tungsten carbide cutter in a CNC milling machine. Finally, conclusions are drawn in Section 5.

2. Particle filtering based parameter estimation

In order to make inferences about underlying states and their evolution in a dynamic system using a model-based approach, Bayesian inference for estimating the posterior probability density function (pdf) plays a significant role. However, the posterior pdf is often difficult to calculate directly through integration; particle filtering provides an efficient way to approximate the posterior pdf using Monte-Carlo sampling. After introducing the fundamental idea of joint state and parameter estimation based on Bayesian inference, this section discusses how to apply PF to approximate the posterior pdf, and proposes an LSPF with adaptive resampling to overcome the sample impoverishment problem in conventional SIR.

2.1. Bayesian inference for joint state and parameter estimation

To perform Bayesian inference, two models derived from physics need to be established first. The state model describes the evolution of the state (i.e. tool wear width in this paper) over time. Sometimes the state evolution is conditioned on parameters; for example, the tool wear rate is often subject to material properties [16]:

x_k = f_k(x_{k−1}, θ, w_{k−1})    (1)

where f_k describes the state transition function from state x_{k−1} to x_k, considering a first-order Markov process. Usually f_k is fixed
through the estimation process, except when system states evolve with different modes over time. θ denotes the unknown parameters to be estimated, which are assumed to be constant in most estimation methods. w_{k−1} denotes the process noise representing uncertainty (e.g. operational condition uncertainty). The measurement model, relating online measurements to the unobservable states, is given by:

z_k = h_k(x_k, v_k)    (2)

where h_k is the measurement function, which is usually nonlinear when describing a complex dynamic system. z_k can be direct measurements or extracted features. v_k is the sequence of measurement noise, representing measurement uncertainty. In the Bayesian framework, estimation is fulfilled by recursively calculating the posterior pdf p(x_k | z_{1:k}) of the state given the noisy measurements z_{1:k}. Considering the first-order Markov process, the pdf can be obtained in two stages, prediction and update, as shown in Eqs. (3) and (4) [17].
p(x_k, θ | z_{1:k−1}) = ∫ p(x_k | x_{k−1}, θ) p(x_{k−1}, θ | z_{1:k−1}) dx_{k−1}    (3)

p(x_k, θ | z_{1:k}) = p(x_k, θ | z_{1:k−1}) p(z_k | x_k, θ) / p(z_k | z_{1:k−1}, θ)    (4)
It can be seen from Eq. (4) that the estimation of the posterior pdf reduces to the calculation of a likelihood function and a prior distribution. p(z_k | z_{1:k−1}, θ) is the normalizing factor, which can be calculated as:
p(z_k | z_{1:k−1}, θ) = ∫ p(x_k, θ | z_{1:k−1}) p(z_k | x_k, θ) dx_k    (5)
Eq. (4) provides an optimal solution for system state estimation through performing the two steps recursively. However, the multidimensional integrals in (3) and (5) are usually intractable, making it difficult to obtain an analytic solution for the posterior distribution. Thus sub-optimal filters or approximations, such as particle filtering [18], are needed to evaluate p(x_k | z_{1:k}).

2.2. Particle filtering

To estimate/approximate the posterior pdf using particle filters, the unknown states and parameters are represented by a set of N random samples, or particles, {x^i_{1:k}, θ^i_{1:k}, i = 1, 2, …, N}, and associated importance weights w^i_k. Here, the unknown parameters θ_{1:k} are assumed to vary over time. The weights, also known as the importance of the particles, are initially assigned equal values and then updated iteratively. The integral operation in Eq. (3) is then approximated as a weighted sum over these random samples:
p(x_k, θ_k | z_{1:k−1}) = ∫ p(x_k | x_{k−1}, θ_k) p(x_{k−1}, θ_k | z_{1:k−1}) dx_{k−1}
    ≈ ∫ Σ_{i=1}^{N} w^i_{k−1} δ((x_{k−1}, θ_{k−1}) − (x^i_{k−1}, θ^i_{k−1})) p(x_k | x_{k−1}, θ_{k−1}) dx_{k−1}
    = Σ_{i=1}^{N} w^i_{k−1} p(x_k | x^i_{k−1}, θ^i_{k−1})    (6)

Eq. (6) indicates that the estimation of the posterior pdf starts with sampling from a prior distribution (e.g. a normal or uniform distribution). In sequential importance resampling (SIR), the selection of the initial prior distribution is critical, as it directly affects the estimation accuracy. When new observations become available, each particle calculates the likelihood of the new measurement given its predicted state, which determines the new weight assigned to that particle:

w^i_k ∝ w^i_{k−1} p(z_k | x^i_k, θ^i_k)    (7)

This process corresponds to calculating the likelihood function p(z_k | x_k, θ_k) in Eq. (4). A degeneracy problem is associated with the above process, in that the weights of most particles, sampled at the edge zone of the posterior distribution, become negligible after several iterations [19]. To avoid computational effort being wasted on the calculation and update of meaningless particles with small weights, SIR was introduced. The key idea of SIR is the resampling process, which removes particles with small weights (identified by comparing the cumulative distribution function to a threshold within 0–1) and repeats particles with large weights. However, this process introduces the particle impoverishment problem, in which the number of unique particles, and hence the global search ability, decreases greatly. This is because particles sampled from the initial prior distribution do not vary with time, and the same set of particles is resampled from the same sample set iteratively. The degeneracy and impoverishment problems are illustrated in Fig. 2(b). Moreover, resampling from the same set of particles also makes the whole estimation process discrete, so that the optimal solution is difficult to find. The gray dashed line in Fig. 2 represents the optimal solution, which is difficult for conventional SIR to find because of the discrete search.

Fig. 2. Sketch of resampling strategy: (a) posterior distribution to be estimated and initial sampling from prior distribution; (b) conventional importance resampling, associated with the degeneracy and impoverishment problem and (c) proposed local search importance resampling.

2.3. Local search particle filter
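The SIR resampling step described above can be sketched as follows. This is an illustrative Python example (not the authors' implementation), using systematic resampling and a synthetic Gaussian likelihood to show how duplication of high-weight particles reduces the number of unique particles, i.e. the impoverishment problem:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: compare the cumulative weight distribution
    against stratified thresholds in [0, 1); particles whose cumulative
    weight never crosses a threshold are dropped, large-weight particles
    are repeated."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # stratified thresholds
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)   # indices of survivors

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, 500)               # sampled once from the prior
weights = np.exp(-0.5 * (particles - 2.0) ** 2)     # likelihood of an observation near 2
weights /= weights.sum()

idx = systematic_resample(weights, rng)
resampled = particles[idx]
# Impoverishment: far fewer unique values survive than the original 500.
print(len(np.unique(resampled)) < len(np.unique(particles)))
```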
To tackle these problems, the resampling strategy needs to be refined to strike a balance between maintaining particle diversity and ensuring the particles' tracking performance. One possible way, as proposed in this paper, is to adaptively adjust the search region of the particles according to their estimation results in the previous iteration. This is achieved by assuming the updated prior distribution to be a normal distribution and adding a perturbation to the original particle value, instead of repeating the particles as-is as in conventional SIR, as shown in Fig. 2(c). The refined resampling is described as:

p(θ^i_{k+1} | θ^i_k) ∝ N(θ^i_{k+1} | θ^i_k, h P^i_k)    (8)

P^i_k = E[(θ^i_k − θ̄_k)(θ^i_k − θ̄_k)^T]    (9)
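The perturbed resampling of Eqs. (8) and (9) can be sketched in Python as follows. The particle values, weights, and the shrinkage coefficient h below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lspf_resample(theta, weights, h, rng):
    """Adaptive resampling per Eqs. (8)-(9): survivors of importance
    resampling are perturbed by a normal kernel whose covariance is the
    weighted sample covariance scaled by the shrinkage coefficient h,
    preserving particle diversity instead of duplicating values."""
    n, d = theta.shape
    mean = weights @ theta                            # weighted mean, Eq. (9)
    centered = theta - mean
    cov = (weights[:, None] * centered).T @ centered  # weighted sample covariance
    idx = rng.choice(n, size=n, p=weights)            # importance resampling
    jitter = rng.multivariate_normal(np.zeros(d), h * cov, size=n)
    return theta[idx] + jitter                        # Eq. (8): local search

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 1.0, size=(300, 2))          # particles for (C, m)
w = rng.random(300)
w /= w.sum()
new_theta = lspf_resample(theta, w, h=0.2, rng=rng)   # h would shrink over iterations
print(len(np.unique(new_theta[:, 0])))                # diversity preserved: 300 unique values
```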
Table 1
Computational complexity of LSPF.

Operation | Equation | Complexity
Initial sampling | {θ^i_{0:k}, w^i_k}_{i=1}^{N_s} | r∗n∗N
State prediction | p(x_k, θ_k | z_{1:k−1}) = Σ_{i=1}^{N} w^i_{k−1} p(x_k | x^i_{k−1}, θ^i_{k−1}) | k∗N∗(n² + n)
Weight update | w^i_k ∝ w^i_{k−1} p(z_k | x^i_k, θ^i_k) | k∗N∗(2m²n + mn + m)
Resampling | p(θ^i_{k+1} | θ^i_k) ∝ N(θ^i_{k+1} | θ^i_k, hP^i_k) | k∗(2anN² + n²N + 2nN + rnN)
where P^i_k in Eq. (9) represents the sample covariance obtained from the estimation result in the previous iteration, and θ̄_k = (1/N) Σ_{i=1}^{N} w^i_k θ^i_k. The key motivating ideas are: (1) particles with large weights, near the peak of the estimated distribution, are assigned a smaller perturbation; (2) particles with small weights, staying at the edge zone of the distribution, are allocated a broader search region. The perturbation, or search region, is determined by hP^i_k, maintaining the diversity of the resampled particles. h denotes the shrinkage coefficient, which decreases over the iterations. Dispersing the samples not only increases the number of distinct particles, but also causes particles in the following iterations to move continuously toward an optimal solution. The decreasing shrinkage coefficient ensures that the samples gradually converge to the optimal location, which narrows the confidence interval and provides more accurate prediction. Meanwhile, resampling in this way iteratively can be seen as an approximation of continuous sampling rather than the conventional discrete sampling. Another benefit is that this kind of resampling reduces the effect of the initial sampling. The computational complexity of LSPF is summarized in Table 1, where n and m denote the dimensions of the unknown parameters and the measurements, respectively, N denotes the number of particles, and k denotes the number of iterations. r represents the coefficient of sampling from a normal distribution, and a varies with the resampling strategy and is always less than 1. It can be seen that the total computational complexity is dominated by the term kanN²; that is, the computation time is determined mainly by the number of particles, the state dimension, and the data length.
3. Formulation of prognosis models

As described in Section 2, when applying Bayesian inference, or specifically PF, to perform system state estimation or prediction, state evolution and measurement models for tool wear prediction are needed. This section introduces how to establish the two models, following the architecture described in Fig. 1. The state model is derived from Paris' law, while the measurement model is established on the basis of the extracted KL-divergence feature.
3.1. Derivation of state model

The tool wear rate model can be seen as a particular type of crack growth model or fatigue spall progression model. Generally, a crack growth model is characterized by the stress intensity factor at the tip of a crack, ΔK = f(a, σ), with a being the half crack size and σ the nominal stress. Theoretically, the crack is assumed not to propagate while ΔK is smaller than a threshold value, beyond which the crack growth rate is governed by a power law, such as Paris' law [3]:

dx/dt = C(ΔK)^m    (10)

and

ΔK = Δσ √x    (11)

where t is the number of cutting cycles, dx/dt denotes the tool wear rate, the parameters C and m are related to material properties, and Δσ is the stress range, which is assumed to be constant. Combining Eqs. (10) and (11) gives:

dx/dt = C(Δσ √x)^m = C(Δσ)^m x^{m/2} = C′ x^{m′}    (12)

The above equation collects all constants independent of the tool wear size x into a single variable C′. Integrating both sides of Eq. (12), it can be rewritten in a form that relates the tool wear severity at the current time to that at the previous time (replacing C′ and m′ with C and m):

x_t = [x_{t−1}^{(1−m)} + C(1 − m)]^{1/(1−m)} + u_{t−1}    (13)
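As a minimal illustration of the recursion in Eq. (13), wear growth over successive cuts can be simulated as follows; the parameter values C and m, the initial wear, and the noise level are hypothetical choices, not fitted values from the paper:

```python
import numpy as np

def wear_step(x_prev, C, m, noise=0.0):
    """One step of the Paris-law-derived state model, Eq. (13):
    x_t = [x_{t-1}^(1-m) + C(1-m)]^(1/(1-m)) + u_{t-1}."""
    return (x_prev ** (1.0 - m) + C * (1.0 - m)) ** (1.0 / (1.0 - m)) + noise

# Illustrative parameters (not fitted values from the paper).
C, m = 0.01, 0.6
x = [0.05]                                  # initial flank wear
rng = np.random.default_rng(2)
for _ in range(300):                        # one state transition per cut
    x.append(wear_step(x[-1], C, m, noise=rng.normal(0.0, 1e-4)))
print(x[-1] > x[0])                         # wear grows over the tool life
```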
Thus, the current tool wear is fully described by the previous state and the unknown parameters C and m, which are determined by material properties. In other words, when applying LSPF to estimate tool wear, the dimension of each particle is two, corresponding to C and m.

3.2. Measurement model definition

Eq. (2) establishes the relationship between measurements and system states. However, online measurements, such as force and vibration, typically have a high sampling rate, and there is no need to estimate the tool wear each time a new measurement becomes available. Moreover, the measurements are usually noisy and cannot be employed directly. Hence, feature extraction is needed to perform the estimation with the cut number as the time unit. Our previous work [17] investigated various features in the time domain and frequency domain. In this paper, feature extraction based on the Kullback–Leibler (KL) divergence is investigated, taking advantage of the fact that all measurements within one cut can be seen as a distribution. It is assumed that this distribution shifts as the tool wear deteriorates, so the distance between two distributions can serve as an indicator of the wear. Let p₁(x) and p₂(x) be two distributions; the KL divergence from p₁ to p₂ is defined as [20]:
KL(p₁, p₂) = ∫ p₁(x) log (p₁(x)/p₂(x)) dx    (14)
A smaller value of KL(p₁, p₂) indicates a smaller distance between the two distributions; conversely, the larger the distance, the larger the difference between them. In this paper, the distribution obtained at the initial time is taken as the reference distribution, and each new distribution is compared to the reference distribution to calculate the KL information, which is subsequently applied
Fig. 3. Schematic diagram of experimental setup.
to estimate the tool wear. To evaluate the quality of the extracted features, a correlation coefficient is applied:

C_c = Σ_i (x_i − x̄)(z_i − z̄) / √( Σ_i (x_i − x̄)² · Σ_i (z_i − z̄)² )    (15)
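A histogram-based sketch of the KL-divergence feature of Eq. (14) is given below; the synthetic "reference" and "worn" force distributions are illustrative assumptions used only to show that the feature grows as the distribution shifts:

```python
import numpy as np

def kl_divergence(sample, reference, bins=50):
    """Histogram estimate of KL(p_sample || p_reference), Eq. (14).
    A small epsilon avoids division by zero and log(0) in empty bins."""
    lo = min(sample.min(), reference.min())
    hi = max(sample.max(), reference.max())
    p, _ = np.histogram(sample, bins=bins, range=(lo, hi))
    q, _ = np.histogram(reference, bins=bins, range=(lo, hi))
    eps = 1e-12
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(3)
reference = rng.normal(0.0, 1.0, 50_000)   # force distribution of the first (no-wear) cut
worn = rng.normal(0.8, 1.3, 50_000)        # shifted, broadened distribution of a later cut
print(kl_divergence(reference, reference) < kl_divergence(worn, reference))  # → True
```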
Due to the complex mechanisms of both tool wear and force or vibration measurement, it is difficult to establish a definite relationship between the wear and the extracted KL information. To tackle this problem, both the wear and the KL indicator are normalized to a common range. Considering that the wear and the KL indicator show a similar development trend over time, the measurement model can be written as:

KL_k = x_k(C_k, m_k) + v_k    (16)
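Under Eq. (16), and assuming the measurement noise v_k to be zero-mean Gaussian, the likelihood used in the weight update of Eq. (7) can be sketched as follows (the noise scale sigma_v and the particle values are illustrative choices):

```python
import numpy as np

def measurement_likelihood(kl_feature, predicted_wear, sigma_v=0.05):
    """Likelihood p(z_k | x_k) under Eq. (16), KL_k = x_k + v_k, with
    v_k assumed zero-mean Gaussian (sigma_v is an illustrative choice)."""
    resid = kl_feature - predicted_wear
    return np.exp(-0.5 * (resid / sigma_v) ** 2) / (np.sqrt(2 * np.pi) * sigma_v)

# Particles whose predicted (normalized) wear is close to the observed
# (normalized) KL feature receive larger weights.
predicted = np.array([0.40, 0.52, 0.70])
lik = measurement_likelihood(0.50, predicted)
print(lik.argmax())   # → 1, the particle nearest the observation
```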
This is a general way to obtain a measurement model when a relationship based on physics is unavailable.

4. Experimental evaluation

To experimentally evaluate the performance of the proposed LSPF-based tool wear prediction framework, a set of experimental data measured from a high speed CNC machine under a dry-milling operation is used [15].

4.1. Experiment setup
A three-flute ball nose tungsten carbide cutter was tested by milling a workpiece (material: stainless steel, HRC52) in a down milling operation. The spindle ran at 10,400 rpm, while the feed rate was set to 1555 mm/min in the x direction. The radial depth of cut was 0.125 mm in the y direction, and the axial depth of cut in the z direction was 0.2 mm. A Kistler quartz 3-component platform dynamometer was mounted between the workpiece and the machining table to measure the cutting forces. The diagram of the experimental setup is shown in Fig. 3 [17]. Online measurements including force and vibration were recorded at a sampling frequency of 50 kHz. Data from a total of around 300 cuts were collected during the tool life test.

4.2. Performance evaluation

For feature extraction, previous work indicates that the ratio of wavelet energy in the x direction to that in the z direction has the highest correlation coefficient, 0.985. The KL divergence applied in this paper has a correlation coefficient of 0.993 with respect to the true wear. The normalized actual tool wear and the calculated KL divergence are shown in Fig. 4, represented by a blue line and black stars, respectively. Initially, the unknown parameter pair m and C is modeled by probability distributions following a certain distribution (a uniform distribution in this paper). The selection of the initial distribution can determine the performance of standard SIR, but not of LSPF. In the learning stage, based on the state model and the measurement model, the unknown parameters, combined with the state transition probability p(x_k | x_{k−1}), can be estimated recursively and obtained a priori. Once the measurements stop, the posterior distribution function p(x_{k+l} | z_k) can be calculated to predict the tool wear one step ahead based on the latest updated parameters. Multi-step ahead prediction can also be achieved by repeating the prediction process in Bayesian inference, and can in turn be used to calculate the remaining useful life when the failure threshold is given. Fig. 4 shows an example of tool wear prediction based on LSPF, using the information of the first 100 cuts as prior knowledge. 500 particles are applied, with gray lines as their prediction paths. The result indicates that the median of the prediction (represented by the red line) can generally track the tool wear. Fig. 5 shows the evolution of the parameter estimation, with the blue lines as median estimates and the red lines as 90% confidence limits. It can be noted that the estimated parameters' distributions continuously concentrate onto smaller ranges (i.e. the width between the two red lines), which is the advantage of LSPF over standard SIR: the resampled particles keep searching the local area instead of remaining the same.
Fig. 4. 200 steps ahead tool wear prediction.
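The multi-step-ahead RUL calculation described above can be sketched as follows; the posterior particles for C and m, the initial wear, and the failure threshold are synthetic stand-ins (not values estimated from the test data), and process noise is omitted for brevity:

```python
import numpy as np

def predict_rul(x0, C_particles, m_particles, threshold, max_steps=1000):
    """Propagate each particle's wear trajectory forward via Eq. (13)
    until it crosses the failure threshold; the crossing times form the
    predicted RUL distribution (illustrative, noise-free sketch)."""
    ruls = np.full(len(C_particles), max_steps)
    for i, (C, m) in enumerate(zip(C_particles, m_particles)):
        x = x0
        for step in range(1, max_steps + 1):
            x = (x ** (1.0 - m) + C * (1.0 - m)) ** (1.0 / (1.0 - m))
            if x >= threshold:
                ruls[i] = step
                break
    return ruls

rng = np.random.default_rng(4)
C = rng.normal(0.01, 0.001, 500)     # hypothetical posterior particles for C
m = rng.normal(0.6, 0.02, 500)       # hypothetical posterior particles for m
ruls = predict_rul(x0=0.10, C_particles=C, m_particles=m, threshold=0.30)
print(np.median(ruls))               # median predicted RUL, in cuts
```

The spread of `ruls` directly yields the confidence interval of the RUL prediction.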
Table 2
Comparison of SIR and LSPF on different steps ahead for RUL prediction (prediction error).

Method | 200 steps ahead prediction | 100 steps ahead prediction
SIR | 15.4% | 11.7%
LSPF | 9.6% | 3.5%
Fig. 5. Evolution of distribution of parameters m and C for 200 steps ahead prediction.
Fig. 8. Distributions of predicted RUL at 100th and 200th cut.
Fig. 6. 100 steps ahead tool wear prediction.
Fig. 6 shows the prediction starting from the 200th cut, together with the evolution of the parameter distributions. It is noted that a longer-step-ahead prediction has a broader distribution of prediction paths or, in terms of RUL prediction, a larger confidence interval, as also shown in Fig. 7. Given the wear prediction at different steps, the RUL can be calculated with respect to a predefined threshold. Fig. 7 presents the RUL estimation result using the α−λ accuracy metric, which determines whether, at a given point in time (specified by λ), the prediction accuracy is within desired accuracy levels (specified by α), expressed as a fraction of the true RUL; β specifies the minimum acceptable portion of the prediction distribution. In this paper, α is set to 0.1 and β to 50%, which means a prediction is accepted if more than half of the prediction distribution falls within 1 ± α of the true RUL. Fig. 7 indicates that the RUL predictions starting from the 50th cut are all accepted. Fig. 8 shows the distributions of the predicted RUL when λ equals the 100th and the 200th cut, respectively. To demonstrate the effectiveness and robustness of the proposed method, Monte-Carlo simulations have been performed on both the data set used for the above analysis and a different data set measured from a separate run-to-failure experiment. Each scenario was run 100 times. The results comparing LSPF to SIR are listed in Table 2, which indicates that the former can effectively reduce the prediction error by over 30%.

5. Conclusion

Particle filtering has been investigated as a prognostic method for joint state and parameter estimation, with the prediction of tool wear severity and remaining useful life as an example. To address the sample impoverishment problem associated with conventional SIR, a new adaptive resampling strategy has been proposed. With the performance evaluated through a tool wear test on a CNC milling machine, the proposed method is demonstrated to have the following benefits: (1) approximating the probability distribution continuously and improving the prediction accuracy; (2) narrowing the confidence interval and providing more accurate information support.
Fig. 7. RUL prediction with respect to α−λ accuracy.
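The α−λ acceptance test used above can be sketched as follows; the RUL samples are synthetic, with α = 0.1 and β = 50% as in the paper:

```python
import numpy as np

def alpha_lambda_accepted(rul_samples, true_rul, alpha=0.1, beta=0.5):
    """Alpha-lambda test: the prediction at a given time is accepted when
    at least a fraction beta of the predicted RUL distribution lies
    within [1 - alpha, 1 + alpha] times the true RUL."""
    lo, hi = (1.0 - alpha) * true_rul, (1.0 + alpha) * true_rul
    inside = np.mean((rul_samples >= lo) & (rul_samples <= hi))
    return bool(inside >= beta)

rng = np.random.default_rng(5)
good = rng.normal(100.0, 3.0, 1000)   # tight prediction around true RUL = 100
bad = rng.normal(130.0, 3.0, 1000)    # biased prediction
print(alpha_lambda_accepted(good, 100.0), alpha_lambda_accepted(bad, 100.0))  # → True False
```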
Through Monte-Carlo simulation tests, LSPF is shown to achieve significantly better prediction accuracy than conventional SIR. The proposed method has been evaluated for tool wear prediction under different operating conditions, and can be extended to trending analysis in other areas, such as the degradation of HVAC system performance or vehicle congestion in city planning and traffic control.
References [1] Li B. A review of tool wear estimation using theoretical analysis and numerical simulation technologies. J Refract Metals Hard Mater 2012;35:143–51. [2] Marksberry PW, Jawahir IS. A comprehensive tool-wear/tool-life performance model in the evaluation of NDM for sustainable manufacturing. Int J Mach Tool Manuf 2008;48:878–86. [3] Paris PC, Gomez MP, Anderson WE. A rational analytic theory of fatigue. Trend Eng 1961;13:9–14. [4] Aslantas K, Tasgetiren S. A study of spur gear pitting formation and life prediction. Wear 2004;257:1167–75. [5] Mills B, Redford A. Machinability of engineering materials. London: Applied Science; 1983. [6] Hoffman E. Fundamentals of tool design. Dearborn: SME; 1984. [7] Karandikar J, Abbas A, Schmitz T. Tool life predictions using a random walk method of Bayesian updating. Mach Sci Technol 2013;17(3):410–42. [8] Teti R, Jemielniak K, O’Donnell G, Dornfeld D. Advanced monitoring of machining operations. CIRP Ann Manuf Technol 2010;59:717–39. [9] Kalman RE. A new approach to linear filtering and prediction problems. Trans ASME J Basic Eng 1960;82:35–45. [10] Julier S, Uhlmann J. A new extension of Kalman filter to nonlinear systems. In: Proc. of 11th International Symposium on Aerospace/Defense sensing, Simulation and Controls, Multi Sensor Fusion, Tracking and Resource Management. 1997. [11] Arulampalam MS, Maskell S, Gordon N, Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 2002;50(2):174–88.
[12] Doucet A, Johansen A. A tutorial on particle filtering and smoothing: fifteen years later. In: Oxford handbook of nonlinear filtering. Oxford: Oxford University Press; 2009. p. 656–704. [13] Li T, Sun S, Sattar T, Corchado J. Fight sample degeneracy and impoverishment in particle filters: a review of intelligent approaches. Expert Syst Appl 2014;51:3944–54. [14] Nemeth C, Fearnhead P, Mihaylova L. Sequential Monte-Carlo methods for state and parameter estimation in abruptly changing environments. IEEE Trans Signal Process 2014;62(5):1245–55. [15] Li X, Lim BS, Zhou JH, Huang S, Phua SJ, Shaw KC, et al. Fuzzy neural network modelling for tool wear estimation in dry milling operation. In: Proc. of Annual Conference of the Prognostics and Health Management Society. 2009. p. 1–11. [16] Wang J, Wang P, Gao R. Tool life prediction for sustainable manufacturing. In: Proc. of 11th global conference on sustainable manufacturing. 2013. p. 230–4. [17] Wang J, Wang P, Gao R. Particle filter for tool wear prediction. In: Proc. of 42nd north american manufacturing research conference. 2014. [18] Liu J, West M. Combined parameter and state estimation in simulation based filtering. Sequential Monte-Carlo methods in practice. NY: Springer New York; 2001. p. 197–223. [19] Li DZ, Wang W, Ismail F. A muted particle filter technique for system state estimation and battery life prediction. IEEE Trans Instrum Meas 2014;63(8):2034–43. [20] Eguchi S, Copas J. Interpreting Kullback–Leibler divergence with the Neyman–Pearson Lemma. J Multivar Anal 2006;97(9):2034–40.