Robotics and Computer-Integrated Manufacturing 28 (2012) 509–516
Bearing condition prediction considering uncertainty: An interval type-2 fuzzy neural network approach

Chaochao Chen, George Vachtsevanos
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
Article history: Received 28 August 2011; received in revised form 30 January 2012; accepted 13 February 2012.

Abstract
Rolling-element bearings are critical components of rotating machinery. It is important to accurately predict the health condition of bearings in real time so that maintenance can be scheduled to avoid malfunctions or even catastrophic failures. In this paper, an Interval Type-2 Fuzzy Neural Network (IT2FNN) is proposed to perform multi-step-ahead condition prediction of faulty bearings. Since the IT2FNN defines an interval type-2 fuzzy logic system in the form of a multi-layer neural network, it integrates the merits of both: fuzzy reasoning to handle uncertainties and neural-network learning from data. The interval type-2 fuzzy linguistic process in the IT2FNN enables the system to handle prediction uncertainties: type-2 fuzzy sets have membership grades that are themselves type-1 fuzzy sets, which suits failure prediction, where an exact membership function is difficult to determine. Noisy data from faulty bearings are used to validate the proposed predictor, whose performance is compared with that of a prevalent type-1 condition predictor, the Adaptive Neuro-Fuzzy Inference System (ANFIS). The results show that better prediction accuracy is achieved by the IT2FNN.
© 2012 Elsevier Ltd. All rights reserved.
Keywords: Bearing condition prediction; Type-1 fuzzy; Type-2 fuzzy; Interval type-2 fuzzy neural network; Adaptive neuro-fuzzy inference system; Uncertainty
1. Introduction

Rolling-element bearings are widely used in mechanical and rotating equipment. Their unexpected failures can result in critical damage to this equipment. Thus, a reliable real-time health condition predictor is required to forecast the future conditions of bearings so that timely maintenance can be performed.

Generally, machine condition prediction can be categorized into two major classes: model-based (or physics-based) and data-driven methods [1]. Given a proper model for a specific system, model-based methods can generate accurate prediction estimates. For instance, Li established a mathematical model to describe bearing fault propagation via Paris' law [2]. In most practical instances, however, it is expensive and/or difficult to develop such accurate models. Moreover, many high-fidelity models are too computationally intensive to run in real time. Without the need to derive complex mathematical models, data-driven methods can employ the collected condition data to estimate the fault evolution. Among the most promising data-driven methods, neural networks and fuzzy systems are widely used to forecast machine
Corresponding author. Tel./fax: +1 404 894 4130. E-mail address: [email protected] (C. Chen).
doi:10.1016/j.rcim.2012.02.005
health conditions. Recently, various neural networks have been successfully applied to the prediction of bearing conditions. Tse and Atherton [3] employed recurrent neural networks to forecast the condition degradation of bearings in a cooling tower fan. Gebraeel et al. [4] used accelerated bearing test data to validate the effectiveness of their neural-network-based predictors. Since fuzzy systems can make use of human domain expertise via a series of IF-THEN rules, they have been integrated with neural networks to carry out machine condition prediction [5–8]. These integrated systems, called neuro-fuzzy systems or fuzzy neural networks, show better prediction performance than purely neural-network-based predictors. For example, using experimental data of faulty bearings, the prediction accuracy of a neuro-fuzzy system was demonstrated to be higher than that of a radial basis function neural network [5]. Moreover, Chen et al. proposed an integrated machine health prediction system that combines a neuro-fuzzy system with Bayesian estimation or particle filtering [9–11], whose prediction performance is superior to that of a recurrent neural network.

Note, however, that prediction uncertainties have not been taken into account in these previous works. For instance, the training and validation data were always obtained in laboratory operating environments, and sometimes accelerated fatigue tests were adopted to reduce experiment time. As a result, these data may not accurately characterize the fault degradation process in real-world applications, e.g., the real-world data are
often much noisier; also, different experts may use various linguistic knowledge to define the fuzzy rules in neuro-fuzzy systems. Therefore, a new condition predictor that can handle uncertainties is needed.

Here, type-2 fuzzy sets are used to handle uncertainties. Zadeh [12] first introduced the concept of type-2 fuzzy sets as an extension of the commonly used type-1 fuzzy set. The membership grades of type-2 fuzzy sets are themselves fuzzy. That is, a type-2 fuzzy set has two memberships, called the primary membership and the secondary membership, which can be any subsets in [0,1]. Corresponding to each primary membership, the secondary membership defines the possibilities for that primary membership. Using the concept of type-2 fuzzy sets, a complete type-2 Fuzzy Logic System (FLS) theory has been established by Karnik and Mendel to handle linguistic and numerical uncertainties [13–15]. A type-2 FLS includes a fuzzifier, a rule base, a fuzzy inference engine, and an output processor. Unlike in a type-1 FLS, the output processor includes a type reducer and a defuzzifier: the type reducer generates a type-1 fuzzy set output and the defuzzifier produces a crisp number. Like a type-1 FLS, a type-2 FLS is characterized by IF-THEN rules, but its antecedent or consequent sets are type-2. Since type reduction is computationally very intensive, the computation of a general type-2 FLS is complicated. To simplify the computation, the interval type-2 FLS has been developed, in which the secondary membership functions are interval sets [16]. Recently, Interval Type-2 Fuzzy Logic Systems (IT2FLS) have been successfully applied in various fields to deal with uncertainties. For instance, a type-2 fuzzy adaptive filter was applied to the equalization of a nonlinear time-varying channel [17], and a type-2 fuzzy rule-based expert system was developed for stock price analysis [18].
In this paper, an Interval Type-2 Fuzzy Neural Network (IT2FNN) is proposed to perform bearing condition prediction. The IT2FNN represents the IT2FLS in the form of a neural network so as to integrate the merits of each: fuzzy reasoning to handle uncertainties and neural-network learning from data. The adaptive neuro-fuzzy inference system (ANFIS), the most widely used neuro-fuzzy system in prediction [20], serves as the baseline for comparison with the proposed IT2FNN. The architecture of ANFIS is determined by domain expertise that contains knowledge of the research subject/process, while its parameters are learned from data. The membership functions in its second layer represent type-1 fuzzy sets, which cannot handle many types of uncertainties, e.g., rule uncertainties. The corresponding membership functions in the proposed IT2FNN, by contrast, are type-2, comprising primary and secondary memberships, which help the IT2FNN deal with various uncertainties.

The remainder of this paper is organized as follows: Section 2 presents the proposed IT2FNN. In Section 3, multi-step-ahead condition prediction for faulty bearings is performed using the IT2FNN, and its performance is compared with that of a typical type-1 condition predictor, ANFIS. Section 4 provides some concluding remarks.
2. IT2FNN predictor

The IT2FNN predictor includes type-2 antecedent and consequent fuzzy sets, which differ from those in conventional type-1 neuro-fuzzy systems, e.g., ANFIS. Since a higher type of fuzzy relation (type-2) is used in the IT2FNN predictor, the fuzziness of a relation is increased. As a consequence, "increased fuzziness in a description means increased ability to handle inexact information in a logically correct manner", according to Hisdal [21].
Fig. 1. Architecture of the IT2FNN predictor; M denotes an interval type-2 Gaussian membership function (MF) with an uncertain mean.
The architecture of the IT2FNN is shown in Fig. 1. It consists of five layers. Here, four inputs are adopted in layer 1 for forecasting, i.e., the current and three previous values of the machine condition or monitoring index, $x_{t-3r}$, $x_{t-2r}$, $x_{t-r}$, $x_t$, where $t$ is the current time instant and $r$ denotes the prediction step; e.g., when $r = 5$, $x_{t+r}$, the only output in layer 5, is a five-step-ahead prediction value. In this paper, 16 fuzzy rules are employed, i.e., there are 16 rule nodes in layer 3. Layer 4 performs the function called "type reduction", which generates a type-1 set from type-2 sets. Afterwards, defuzzification is carried out in layer 5. The fuzzy inference of the IT2FNN depends on a set of fuzzy rules, in the same way as type-1 neuro-fuzzy systems. These fuzzy rules can be expressed as $\mathrm{Rule}_j$:
If $(x_{t-3r}$ is $\tilde{A}_1^j)$ And $(x_{t-2r}$ is $\tilde{A}_2^j)$ And $(x_{t-r}$ is $\tilde{A}_3^j)$ And $(x_t$ is $\tilde{A}_4^j)$, Then $y^j$ is $[w_L^j, w_R^j]$, $\quad j = 1, 2, \ldots, 16,$

where $y^j$ is the prediction result according to the $j$th fuzzy rule, $\tilde{A}_i^j$ is the interval type-2 fuzzy set associated with the $i$th input in the $j$th fuzzy rule ($i = 1, 2, \ldots, 4$), and $[w_L^j, w_R^j]$ is the weighting interval set derived from the interval type-2 fuzzy sets in the consequent part.

The signal propagation in the IT2FNN is illustrated as follows. In the following description, $x_i^{(k)}$ denotes the $i$th node input in the $k$th layer, and $y_i^{(k)}$ denotes the $i$th node output in the $k$th layer.

Layer 1: The input signals are transmitted directly to the next layer without any computation. The outputs of this layer can be expressed as

$y_i^{(1)} = x_i^{(1)}, \quad i = 1, 2, \ldots, 4. \quad (1)$
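The four-input, $r$-step-ahead windowing described above can be sketched as follows. This is a minimal NumPy sketch; the function name and the sine-wave stand-in series are illustrative, not from the paper:

```python
import numpy as np

def make_training_pairs(series, r):
    """Build (input, target) pairs for r-step-ahead prediction.

    Each input is the window [x(t-3r), x(t-2r), x(t-r), x(t)] and the
    target is x(t+r), matching the four-input/one-output layout above.
    """
    X, y = [], []
    for t in range(3 * r, len(series) - r):
        X.append([series[t - 3 * r], series[t - 2 * r],
                  series[t - r], series[t]])
        y.append(series[t + r])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 6, 60))  # stand-in for a bearing feature
X, y = make_training_pairs(series, r=5)
print(X.shape, y.shape)  # (40, 4) (40,)
```

With 60 samples and r = 5, valid window anchors run from t = 15 to t = 54, giving 40 input/target pairs.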
Layer 2: Each node in this layer computes an interval type-2 Gaussian membership function (MF) with an uncertain mean $m_{ij} \in [\underline{m}_{ij}, \overline{m}_{ij}]$ and a standard deviation $\sigma_{ij}$, as shown in Fig. 2. The outputs of this layer are represented as

$y_{ij}^{(2)} = \exp\left[-\frac{1}{2}\left(\frac{x_i^{(1)} - m_{ij}}{\sigma_{ij}}\right)^2\right], \quad m_{ij} \in [\underline{m}_{ij}, \overline{m}_{ij}], \quad i = 1, 2, \ldots, 4, \quad j = 1, 2, \ldots, 16. \quad (2)$

Based on the above equation and Fig. 2, the outputs can be calculated as

$\overline{y}_{ij}^{(2)} = \begin{cases} N(\underline{m}_{ij}, \sigma_{ij}, x_i^{(1)}), & x_i^{(1)} < \underline{m}_{ij} \\ 1, & \underline{m}_{ij} \le x_i^{(1)} \le \overline{m}_{ij} \\ N(\overline{m}_{ij}, \sigma_{ij}, x_i^{(1)}), & x_i^{(1)} > \overline{m}_{ij} \end{cases} \qquad \underline{y}_{ij}^{(2)} = \begin{cases} N(\overline{m}_{ij}, \sigma_{ij}, x_i^{(1)}), & x_i^{(1)} \le \dfrac{\underline{m}_{ij} + \overline{m}_{ij}}{2} \\ N(\underline{m}_{ij}, \sigma_{ij}, x_i^{(1)}), & x_i^{(1)} > \dfrac{\underline{m}_{ij} + \overline{m}_{ij}}{2} \end{cases} \quad (3)$
Fig. 2. Interval type-2 Gaussian membership function (MF) with an uncertain mean $m \in [\underline{m}, \overline{m}]$ and a standard deviation $\sigma$; $\overline{y}^{(2)}$ and $\underline{y}^{(2)}$ are the upper and lower MFs, respectively.

Fig. 3. Flowchart of the computation of $y_R^{(4)}$.
where

$N(\underline{m}_{ij}, \sigma_{ij}, x_i^{(1)}) = \exp\left[-\frac{1}{2}\left(\frac{x_i^{(1)} - \underline{m}_{ij}}{\sigma_{ij}}\right)^2\right], \qquad N(\overline{m}_{ij}, \sigma_{ij}, x_i^{(1)}) = \exp\left[-\frac{1}{2}\left(\frac{x_i^{(1)} - \overline{m}_{ij}}{\sigma_{ij}}\right)^2\right];$

$\overline{y}_{ij}^{(2)}$ and $\underline{y}_{ij}^{(2)}$ are the upper and lower MFs, respectively, as shown in Fig. 2.

Layer 3: A product t-norm operation is performed to obtain the firing strength of each rule in this layer, which is described as

$\overline{y}_j^{(3)} = \prod_i \overline{y}_{ij}^{(2)}, \qquad \underline{y}_j^{(3)} = \prod_i \underline{y}_{ij}^{(2)}, \quad i = 1, 2, \ldots, 4, \quad j = 1, 2, \ldots, 16. \quad (4)$
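The upper/lower membership values of Eq. (3) and the rule firing interval of Eq. (4) can be sketched as follows. This is an illustrative sketch; the function names and parameter tuples are ours, not the paper's:

```python
import numpy as np

def it2_gaussian(x, m_lo, m_hi, sigma):
    """Upper and lower MF values of an interval type-2 Gaussian with
    uncertain mean m in [m_lo, m_hi], following Eqs. (2)-(3)."""
    def N(m):
        return np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: 1 on the mean interval, nearest-mean Gaussian outside it.
    if x < m_lo:
        upper = N(m_lo)
    elif x > m_hi:
        upper = N(m_hi)
    else:
        upper = 1.0
    # Lower MF: Gaussian about the farther of the two extreme means.
    lower = N(m_hi) if x <= (m_lo + m_hi) / 2 else N(m_lo)
    return upper, lower

def firing_interval(xs, params):
    """Product t-norm over the 4 inputs (Eq. (4)); params is a list of
    (m_lo, m_hi, sigma) tuples, one per antecedent MF of the rule."""
    up, lo = 1.0, 1.0
    for x, (m_lo, m_hi, sigma) in zip(xs, params):
        u, l = it2_gaussian(x, m_lo, m_hi, sigma)
        up *= u
        lo *= l
    return lo, up
```

For an input inside the mean interval, e.g. `it2_gaussian(0.5, 0.4, 0.6, 0.1)`, the upper MF is 1 and the lower MF equals the Gaussian evaluated at the farther mean.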
Layer 4: In this layer, a center-of-set type-reduction algorithm is performed. The outputs of this layer are given by

$y_R^{(4)} = \frac{\sum_{k=1}^{n} w_{Rk}\, y_{Rk}^{(3)}}{\sum_{k=1}^{n} y_{Rk}^{(3)}}, \qquad y_L^{(4)} = \frac{\sum_{k=1}^{n} w_{Lk}\, y_{Lk}^{(3)}}{\sum_{k=1}^{n} y_{Lk}^{(3)}}, \quad n = 16, \quad (5)$
where the weighting interval sets $[w_{Lk}, w_{Rk}]$ are the centroids of the type-2 consequent sets. The four-step iterative algorithms proposed by Mendel [19] are adopted to compute $y_R^{(4)}$ and $y_L^{(4)}$, and are briefly described as follows. The algorithm for the computation of $y_R^{(4)}$ is shown in Fig. 3. Without loss of generality, assume that the $w_{Rk}$ are arranged in ascending order, i.e., $w_{R1} \le w_{R2} \le \cdots \le w_{Rn}$. The algorithm for the computation of $y_L^{(4)}$ is shown in Fig. 4, where the $w_{Lk}$ are likewise assumed to be in ascending order, i.e., $w_{L1} \le w_{L2} \le \cdots \le w_{Ln}$.

Layer 5: As a defuzzification process, the output of the IT2FNN is computed in this layer using the average of $y_R^{(4)}$ and $y_L^{(4)}$:

$y^{(5)} = \frac{y_R^{(4)} + y_L^{(4)}}{2} \quad (6)$
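Layers 4 and 5 can be sketched as follows. Rather than reproducing the iterative procedures of Figs. 3 and 4, this sketch enumerates every possible switch point, which yields the same interval endpoints for center-of-set type reduction with interval firing strengths; all names are illustrative:

```python
import numpy as np

def type_reduce(w_L, w_R, f_lo, f_up):
    """Center-of-set type reduction (Eq. (5)) by exhaustive search over
    the switch point; equivalent in result to the Karnik-Mendel
    iterations of Figs. 3-4 for interval firing strengths [f_lo, f_up]."""
    def endpoint(w, maximize):
        order = np.argsort(w)                  # weights in ascending order
        w_s, lo, up = w[order], f_lo[order], f_up[order]
        best = None
        for k in range(len(w_s) + 1):
            # Right endpoint: lower firing on small weights, upper on large;
            # left endpoint: the opposite assignment.
            f = (np.concatenate([lo[:k], up[k:]]) if maximize
                 else np.concatenate([up[:k], lo[k:]]))
            yk = np.dot(f, w_s) / np.sum(f)
            if best is None:
                best = yk
            else:
                best = max(best, yk) if maximize else min(best, yk)
        return best
    yL = endpoint(np.asarray(w_L, float), maximize=False)
    yR = endpoint(np.asarray(w_R, float), maximize=True)
    return yL, yR

# Two-rule toy example; the crisp output follows Eq. (6).
f_lo, f_up = np.array([0.3, 0.5]), np.array([0.9, 0.7])
yL, yR = type_reduce([0.1, 0.4], [0.2, 0.6], f_lo, f_up)
y = (yL + yR) / 2
```

The enumeration is O(n^2) versus the O(n) amortized cost of the iterative procedure, but for n = 16 rules the difference is negligible and the code is easier to verify.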
Fig. 4. Flowchart of the computation of $y_L^{(4)}$.

In the IT2FNN, the antecedent membership function parameters $\underline{m}_{ij}$, $\overline{m}_{ij}$, $\sigma_{ij}$ in layer 2 and the consequent parameters $w_{Rk}$, $w_{Lk}$ need to be optimized via a training process. Here, the gradient descent algorithm is utilized to minimize the following cost function:

$E = \frac{1}{2}\left(y^{(5)} - d\right)^2 \quad (7)$
where $y^{(5)}$ is the output of the IT2FNN and $d$ is the actual future value. The parameters of the IT2FNN are adapted as follows:

$\underline{m}_{ij}(t+1) = \underline{m}_{ij}(t) - \eta_1 \frac{\partial E}{\partial \underline{m}_{ij}}, \quad t = 1, 2, \ldots, p \quad (8)$

$\overline{m}_{ij}(t+1) = \overline{m}_{ij}(t) - \eta_2 \frac{\partial E}{\partial \overline{m}_{ij}}, \quad t = 1, 2, \ldots, p \quad (9)$

$\sigma_{ij}(t+1) = \sigma_{ij}(t) - \eta_3 \frac{\partial E}{\partial \sigma_{ij}}, \quad t = 1, 2, \ldots, p \quad (10)$
$w_{Rk}(t+1) = w_{Rk}(t) - \eta_4 \frac{\partial E}{\partial w_{Rk}}, \quad t = 1, 2, \ldots, p \quad (11)$

$w_{Lk}(t+1) = w_{Lk}(t) - \eta_5 \frac{\partial E}{\partial w_{Lk}}, \quad t = 1, 2, \ldots, p \quad (12)$

where $\eta_1, \eta_2, \eta_3, \eta_4, \eta_5$ are the learning rates and $p$ is the number of training epochs.
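The update rules (8)-(12) share a single gradient descent form, which can be sketched generically as follows. Finite-difference gradients stand in for the paper's analytic derivatives (an assumption for illustration only), checked on the one-parameter version of the cost in Eq. (7):

```python
def gd_step(params, loss, etas, eps=1e-6):
    """One gradient descent update in the spirit of Eqs. (8)-(12).

    Each parameter p is moved by -eta * dE/dp, where dE/dp is
    approximated by a forward finite difference instead of the
    analytic derivatives used in the paper.
    """
    updated = list(params)
    base = loss(params)
    for i, (p, eta) in enumerate(zip(params, etas)):
        bumped = list(params)
        bumped[i] = p + eps
        grad = (loss(bumped) - base) / eps
        updated[i] = p - eta * grad
    return updated

# Toy check on E = 0.5*(y - d)^2 of Eq. (7), with y = p[0] and d = 2:
loss = lambda ps: 0.5 * (ps[0] - 2.0) ** 2
p = [0.0]
for _ in range(200):
    p = gd_step(p, loss, [0.5])
print(round(p[0], 3))  # converges toward d = 2.0
```

In a full implementation, `params` would collect all $\underline{m}_{ij}$, $\overline{m}_{ij}$, $\sigma_{ij}$, $w_{Rk}$, $w_{Lk}$ and `loss` would run the five-layer forward pass of Section 2.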
3. Experimental results

3.1. Experimental setup

Three sets of experimental data representing different types of bearing faults were used to train and test the proposed IT2FNN predictor. The three bearing faults are grease breakdown, spalling, and an unknown fault on a helicopter. For each set of faulty data, an appropriate feature was extracted from the raw vibration signals to characterize the fault propagation process.

The feature for the first fault, grease breakdown, is the energy (the Root Mean Square (RMS) value of the vibration signal) in the 2–4 kHz frequency band. This choice is based on the observation that when grease breaks down, the vibration energy in this frequency band increases. The sampling frequency is 10 kHz. The feature for the second fault, spalling, is the sum of the weighted frequency components related to the frequency of interest (or fault characteristic frequency): a weighting window is defined centered around the frequency of interest, and the fault spectrum multiplied by this weighting window is summed to obtain the feature value. The sampling frequency is 204.8 kHz. The feature for the third fault corresponds to an unknown fault mode; the interpolated feature was obtained directly from the U.S. Army. The exact fault mode is unknown, but the bearing was removed after its condition was deemed suspect.

Here, the feature of the bearing grease breakdown was used as the training data, and the features of the spalling and the unknown fault were employed as the testing data. They are called features f1, f2 and f3, respectively. In most practical instances, only limited sets of data are available, which is the case here. In order to acquire enough data for training and testing, the features were interpolated between the available data points. For feature f1, we have 24 real experimental data points; for feature f2, we have 32 data points; for feature f3, we received interpolated data from the U.S. Army.
The features were also normalized to the range [0,1], where 0 is simply the smallest value in the feature vector and 1 is the largest value, which could correspond to a fault or severe wear-out. This type of normalization for machine condition prognostics can be found in [7].

In this paper, a multi-step-ahead prediction performance comparison was made between the proposed and ANFIS predictors. The Root-Mean-Square Error (RMSE) was chosen as the performance metric:

$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{N}} \quad (13)$

where $N$ is the total number of data points, and $y_i$ and $\hat{y}_i$ are the $i$th actual and predicted values, respectively.

In order to make a fair comparison between the IT2FNN and ANFIS predictors, each of them has four antecedents, namely, $x(t-3r)$, $x(t-2r)$, $x(t-r)$, $x(t)$, to forecast $x(t+r)$. Here, $r$ denotes the prediction step. Also, each antecedent is assigned two fuzzy sets, so that 16 fuzzy rules are used. The training process for the two predictors was terminated when the number of training iterations reached a predefined value, e.g., 100 epochs.

Here, the testing data were corrupted with additive noise to emulate a noisy measurement environment. Then, these noisy
Fig. 5. Noisy testing data with the additive 20 dB noise to feature f2.
Fig. 6. Three-step-ahead prediction comparison results using the testing data with the additive 20 dB noise to feature f2: (a) ANFIS; (b) IT2FNN.
data were used to evaluate the predictors' performance. There is no doubt that such testing data introduce uncertainties into the prediction process. The following expression denotes the noise-corrupted testing data:

$x(t) = s(t) + n(t) \quad (14)$

where $s(t)$ is the original signal and $n(t)$ is the additive noise. Three different noise levels were added to the original signals: 15 dB, 20 dB and 25 dB uniformly distributed noise.

3.2. Performance evaluation

We used feature f1 with additive 20 dB noise to train the predictors, and then employed noisy testing data, e.g., feature f2 with added 20 dB noise, to test the predictors. That is, the predictors were trained with data that contain uncertainties or noise. Fig. 5 shows the original feature f2 and the noisy testing data with additive 20 dB noise. The ANFIS and IT2FNN predictors were tested with this set of noisy data, and the three-step-ahead prediction results are shown in Fig. 6. As can be seen, the IT2FNN prediction results are better than those of the ANFIS predictor. In Fig. 6(a), the ANFIS predictor tends to capture the dynamic responses of the noisy testing data; the IT2FNN predictor, on the contrary, appears to forecast the evolution trend of feature f2 without the added 20 dB noise. This point becomes even clearer when the predictors are tested on noisier data. For example, when 15 dB noise was added to the
Fig. 7. Noisy testing data with the additive 15 dB noise to feature f2.
feature f2 (Fig. 7), the noisier prediction results of the ANFIS are shown in Fig. 8(a), as compared with those of the IT2FNN predictor in Fig. 8(b). Likewise, 15 dB noise was also added to feature f3 to test the two predictors, as shown in Fig. 9. The comparison results are given in Fig. 10. From this figure, we can clearly see that the prediction performance of the IT2FNN is much superior to that of the ANFIS, and that the IT2FNN predictor can capture the propagation trend of feature f3 itself instead of being "confused" by the added noise, as the ANFIS predictor is. The reason for this substantial performance difference between the two predictors is that the IT2FNN is able to incorporate the uncertainties contained in the training data into its fuzzy rules. Fig. 11 shows the membership functions of one of the inputs. We can see that the IT2FNN predictor possesses an interval type-2 Gaussian membership function with an uncertain mean $m \in [\underline{m}, \overline{m}]$, which allows the predictor to memorize the uncertainties in the training data and incorporate them in the fuzzy reasoning through the fuzzy rules to perform the prediction.

Five-step-ahead prediction comparison results for the ANFIS and IT2FNN predictors are also given. Fig. 12 shows the prediction results for feature f2 with added 15 dB noise; Fig. 13 gives the results for feature f3 with 15 dB noise. Both figures clearly demonstrate the better prediction accuracy of the IT2FNN over the ANFIS. Accordingly, the Gaussian membership functions of one input of the predictors are shown in Fig. 14.

Six different sets of testing data were used to evaluate the two predictors' prediction accuracy. The performance comparison results in terms of RMSE are shown in Table 1. We can clearly see
Fig. 9. Noisy testing data with the additive 15 dB noise to feature f3.
Fig. 8. Three-step-ahead prediction comparison results using the testing data with the additive 15 dB noise to feature f2: (a) ANFIS; (b) IT2FNN.
Fig. 10. Three-step-ahead prediction comparison results using the testing data with the additive 15 dB noise to feature f3: (a) ANFIS; (b) IT2FNN.
Fig. 11. Gaussian membership functions of one input for three-step-ahead prediction: (a) ANFIS; (b) IT2FNN.
Fig. 12. Five-step-ahead prediction comparison results using the testing data with the additive 15 dB noise to feature f2: (a) ANFIS; (b) IT2FNN.
Fig. 13. Five-step-ahead prediction comparison results using the testing data with the additive 15 dB noise to feature f3: (a) ANFIS; (b) IT2FNN.
Fig. 14. Gaussian membership functions of one input for five-step-ahead prediction: (a) ANFIS; (b) IT2FNN.

Table 1
Multi-step-ahead prediction RMSE comparison using different testing data (training data are feature f1 with additive 20 dB noise): (a) feature f2 with 25 dB noise; (b) feature f2 with 20 dB noise; (c) feature f2 with 15 dB noise; (d) feature f3 with 25 dB noise; (e) feature f3 with 20 dB noise; (f) feature f3 with 15 dB noise. +r denotes the prediction step.

+r   (a)ANFIS (a)IT2FNN  (b)ANFIS (b)IT2FNN  (c)ANFIS (c)IT2FNN  (d)ANFIS (d)IT2FNN  (e)ANFIS (e)IT2FNN  (f)ANFIS (f)IT2FNN
1    0.0364   0.0354     0.0380   0.0362     0.0608   0.0470     0.0322   0.0311     0.0363   0.0325     0.0542   0.0413
2    0.0364   0.0362     0.0423   0.0393     0.0638   0.0466     0.0378   0.0349     0.0476   0.0386     0.0585   0.0465
3    0.0392   0.0362     0.0456   0.0376     0.0666   0.0482     0.0447   0.0419     0.0518   0.0436     0.0774   0.0537
4    0.0382   0.0347     0.0425   0.0366     0.0622   0.0473     0.0517   0.0461     0.0555   0.0479     0.0765   0.0534
5    0.0465   0.0366     0.0487   0.0391     0.0771   0.0503     0.0745   0.0536     0.0740   0.0560     0.0923   0.0599
6    0.0462   0.0383     0.0503   0.0413     0.0691   0.0486     0.0988   0.0577     0.1047   0.0569     0.1241   0.0617
7    0.0596   0.0377     0.0645   0.0440     0.0930   0.0520     0.1297   0.0599     0.1283   0.0607     0.1374   0.0624
8    0.0462   0.0392     0.0526   0.0439     0.0740   0.0515     0.1138   0.0602     0.1204   0.0630     0.1465   0.0664
Table 2
Multi-step-ahead prediction RMSE comparison using different testing data (training data are feature f3 with additive 20 dB noise): (a) feature f1 with 25 dB noise; (b) feature f1 with 20 dB noise; (c) feature f1 with 15 dB noise; (d) feature f2 with 25 dB noise; (e) feature f2 with 20 dB noise; (f) feature f2 with 15 dB noise. +r denotes the prediction step.

+r   (a)ANFIS (a)IT2FNN  (b)ANFIS (b)IT2FNN  (c)ANFIS (c)IT2FNN  (d)ANFIS (d)IT2FNN  (e)ANFIS (e)IT2FNN  (f)ANFIS (f)IT2FNN
1    0.0382   0.0364     0.0418   0.0408     0.0498   0.0480     0.0541   0.0483     0.0542   0.0484     0.0580   0.0524
2    0.0414   0.0358     0.0450   0.0390     0.0485   0.0442     0.0542   0.0435     0.0574   0.0468     0.0598   0.0516
3    0.0390   0.0339     0.0403   0.0355     0.0459   0.0423     0.0484   0.0472     0.0496   0.0491     0.0602   0.0600
4    0.0375   0.0374     0.0390   0.0382     0.0463   0.0450     0.0517   0.0513     0.0530   0.0530     0.0599   0.0590
5    0.0416   0.0368     0.0431   0.0380     0.0514   0.0471     0.0805   0.0574     0.0811   0.0593     0.0831   0.0625
6    0.0507   0.0360     0.0528   0.0388     0.0563   0.0455     0.0816   0.0606     0.0820   0.0607     0.0887   0.0644
7    0.0478   0.0389     0.0519   0.0458     0.0589   0.0495     0.0636   0.0583     0.0650   0.0595     0.0665   0.0589
8    0.0455   0.0377     0.0530   0.0398     0.0690   0.0489     0.0933   0.0617     0.0963   0.0632     0.1128   0.0639
Table 3
Multi-step-ahead prediction RMSE comparison using different testing data (training data are feature f1 with additive 10 dB noise): (a) feature f2 with 25 dB noise; (b) feature f2 with 20 dB noise; (c) feature f2 with 15 dB noise; (d) feature f3 with 25 dB noise; (e) feature f3 with 20 dB noise; (f) feature f3 with 15 dB noise. +r denotes the prediction step.

+r   (a)ANFIS (a)IT2FNN  (b)ANFIS (b)IT2FNN  (c)ANFIS (c)IT2FNN  (d)ANFIS (d)IT2FNN  (e)ANFIS (e)IT2FNN  (f)ANFIS (f)IT2FNN
1    0.0244   0.0243     0.0283   0.0267     0.0452   0.0372     0.0388   0.0387     0.0404   0.0400     0.0651   0.0500
2    0.0289   0.0262     0.0361   0.0312     0.0497   0.0379     0.0487   0.0406     0.0539   0.0429     0.0691   0.0494
3    0.0346   0.0331     0.0369   0.0364     0.0554   0.0489     0.0480   0.0474     0.0520   0.0497     0.0713   0.0606
4    0.0367   0.0302     0.0423   0.0357     0.0616   0.0465     0.0481   0.0444     0.0513   0.0471     0.0781   0.0604
5    0.0362   0.0357     0.0373   0.0364     0.0522   0.0471     0.0505   0.0490     0.0527   0.0508     0.0699   0.0661
6    0.0355   0.0315     0.0395   0.0381     0.0623   0.0489     0.0541   0.0506     0.0570   0.0527     0.0729   0.0611
7    0.0408   0.0344     0.0484   0.0411     0.0644   0.0504     0.0559   0.0543     0.0630   0.0592     0.0785   0.0668
8    0.0459   0.0402     0.0469   0.0418     0.0703   0.0566     0.0623   0.0609     0.0673   0.0670     0.0779   0.0706
from this table that the IT2FNN predictor yields better prediction results than the ANFIS predictor, particularly when large-step-ahead predictions were performed using noisier testing data; e.g., the RMSE of the ANFIS predictor is 0.1465 in the eight-step-ahead prediction using feature f3 with added 15 dB noise (case (f)), while the RMSE of the IT2FNN predictor is just 0.0664. In order to further verify the effectiveness of the predictor, we swapped the training and testing data, and also used feature f1 with 10 dB added noise as the training data. We came to the same conclusion, namely that the proposed predictor still outperforms the ANFIS predictor, and that the prediction accuracy generally degrades as the prediction step increases. The comparison results are shown in Tables 2 and 3.
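The evaluation pipeline of Section 3, i.e., min-max normalization, noise corruption per Eq. (14), and the RMSE metric of Eq. (13), can be sketched as follows. The SNR-to-amplitude conversion assumes zero-mean uniform noise on [-a, a] with power a^2/3, which is our reading of the uniformly distributed dB noise levels; function names are illustrative:

```python
import numpy as np

def normalize(f):
    """Min-max scaling of a feature vector into [0, 1] (Section 3.1)."""
    f = np.asarray(f, float)
    return (f - f.min()) / (f.max() - f.min())

def add_uniform_noise(s, snr_db, rng=None):
    """Corrupt s(t) with additive uniform noise n(t), as in Eq. (14).

    The amplitude a is chosen so that the noise power a**2/3 of a
    zero-mean uniform [-a, a] variable meets the target SNR in dB.
    """
    rng = rng or np.random.default_rng(0)
    s = np.asarray(s, float)
    p_noise = np.mean(s ** 2) / 10 ** (snr_db / 10)
    a = np.sqrt(3 * p_noise)
    return s + rng.uniform(-a, a, size=s.shape)

def rmse(y_true, y_pred):
    """Root-mean-square error, Eq. (13)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

feature = normalize(np.linspace(3.0, 7.0, 200))  # stand-in feature vector
noisy = add_uniform_noise(feature, snr_db=20)    # emulated 20 dB testing data
```

A lower `snr_db` yields a larger noise amplitude, matching the paper's observation that 15 dB testing data are noisier than 25 dB data.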
4. Conclusions

This paper proposed an Interval Type-2 Fuzzy Neural Network (IT2FNN) for the prediction of bearing health conditions. The IT2FNN predictor employs interval type-2 fuzzy sets to handle the uncertainties in the course of prediction, and utilizes the neural network to optimize its parameters through learning. Different faulty-bearing data sets are used to evaluate the proposed predictor. The prediction results are compared with those of a commonly used predictor, the Adaptive Neuro-Fuzzy Inference System (ANFIS). The comparison results demonstrate that the IT2FNN predictor achieves better prediction accuracy.

References

[1] Jardine AKS, Lin D, Banjevic D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mechanical Systems and Signal Processing 2006;20:1483–510.
[2] Li YW. Dynamic prognostics of rolling element bearing condition. PhD dissertation. Atlanta, GA: Georgia Institute of Technology; 1999.
[3] Tse P, Atherton D. Prediction of machine deterioration using vibration based fault trends and recurrent neural networks. Journal of Vibration and Acoustics 1999;121:355–62.
[4] Gebraeel N, Lawley M, Liu R, Parmeshwaran V. Residual life predictions from vibration-based degradation signals: a neural network approach. IEEE Transactions on Industrial Electronics 2004;51(3):694–700.
[5] Zhao F, Chen J, Guo L, Lin X. Neuro-fuzzy based condition prediction of bearing health. Journal of Vibration and Control 2009;15(7):1079–91.
[6] Wang W, Golnaraghi F, Ismail F. Prognosis of machine health condition using neuro-fuzzy systems. Mechanical Systems and Signal Processing 2004;18:813–31.
[7] Samanta B, Nataraj C. Prognostics of machine condition using soft computing. Robotics and Computer-Integrated Manufacturing 2008;24:816–23.
[8] Liu J, Wang W, Golnaraghi F. An enhanced diagnostic scheme for bearing condition monitoring. IEEE Transactions on Instrumentation and Measurement 2010;59(2):309–21.
[9] Chen C, Zhang B, Vachtsevanos G, Orchard M. Machine condition prediction based on adaptive neuro-fuzzy and high-order particle filtering. IEEE Transactions on Industrial Electronics 2011;58(9):4353–64.
[10] Chen C, Zhang B, Vachtsevanos G. Prediction of machine health condition using neuro-fuzzy and Bayesian algorithms. IEEE Transactions on Instrumentation and Measurement 2012;61(2):297–306.
[11] Chen C, Vachtsevanos G, Orchard M. Machine remaining useful life prediction based on adaptive neuro-fuzzy and high-order particle filtering. In: Annual Conference of the Prognostics and Health Management Society, Portland, OR; 2010.
[12] Zadeh LA. The concept of a linguistic variable and its application to approximate reasoning—I. Information Sciences 1975;8:199–249.
[13] Karnik NN, Mendel JM. Introduction to type-2 fuzzy logic systems. In: Proceedings of the IEEE FUZZ Conference, Anchorage, AK; May 1998.
[14] Karnik NN, Mendel JM. Type-2 fuzzy logic systems: type-reduction. In: Proceedings of the IEEE SMC Conference, San Diego, CA; October 1998.
[15] Karnik NN, Mendel JM, Liang Q. Type-2 fuzzy logic systems. IEEE Transactions on Fuzzy Systems 1999;7(6):643–58.
[16] Liang Q, Mendel JM. Interval type-2 fuzzy logic systems: theory and design. IEEE Transactions on Fuzzy Systems 2000;8(5):535–50.
[17] Liang Q, Mendel JM. Equalization of nonlinear time-varying channels using type-2 fuzzy adaptive filters. IEEE Transactions on Fuzzy Systems 2000;8(5):551–63.
[18] Fazel Zarandi MH, Rezaee B, Turksen IB, Neshat E. A type-2 fuzzy rule-based expert system model for stock price analysis. Expert Systems with Applications 2009;36:139–54.
[19] Mendel J. Uncertain rule-based fuzzy logic systems: introduction and new directions. Upper Saddle River, NJ: Prentice Hall PTR; 2001.
[20] Jang J. ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics 1993;23(3):665–85.
[21] Hisdal E. The IF THEN ELSE statement and interval-valued fuzzy sets of higher type. International Journal of Man-Machine Studies 1981;15:385–455.