Deep belief network-based internal valve leakage rate prediction approach


Measurement 133 (2019) 182–192


Shen-Bin Zhu a, Zhen-Lin Li a,*, Shi-Min Zhang a, Ying-Yu a, Hai-Feng Zhang b

a College of Mechanical and Transportation Engineering, China University of Petroleum-Beijing, 102249 Beijing, China
b PetroChina Pipeline R&D Center, 065000 Langfang, China
* Corresponding author. E-mail address: [email protected] (Z.-L. Li).
https://doi.org/10.1016/j.measurement.2018.10.020; © 2018 Elsevier Ltd. All rights reserved.

Article history: Received 7 June 2018; Received in revised form 27 August 2018; Accepted 6 October 2018; Available online 9 October 2018.
Keywords: Internal valve leakage; Acoustic emission; Deep belief network; Deep learning; Leakage rate prediction

Abstract

A leak in a valve at a natural gas station first causes economic loss. The leaking gas may also pollute other pipeline systems as well as the environment, and under extreme conditions it may even lead to an explosion, endangering the safety of the staff. A means of addressing these problems is therefore urgently needed. At present, acoustic emission (AE) detection is the most widely used method of diagnosing valve leakage. The effects of a gas leak depend mainly on the valve leakage rate; however, internal valve leakage is a multivariable, nonlinear and time-varying process, so accurately predicting the leakage rate is an important challenge. Recognising this challenge, a novel prediction method, a regression-based deep belief network (DBN) that substitutes a linear regression (LR) layer for the softmax classification layer at the top of the general DBN structure, is proposed to predict the internal leakage rates of a valve in a natural gas pipeline. The internal leakage signals of a ball valve and a plug valve were collected using an AE system. The time-frequency features of the signals, the inlet pressure of the pipe and the valve type are used as the input variables of the DBN model. The ball valve leakage data, the plug valve leakage data and the mixed leakage data of both are used to establish and test the proposed models separately. At the same time, a back-propagation neural network (BPNN) and support vector regression with linear (L-SVR), polynomial (P-SVR) and radial basis function (RBF-SVR) kernels were developed and compared with the proposed DBN. Analysis of the prediction results shows that the nonlinear and unstable features of the internal valve leakage signals are well learned by the DBN model and that its performance is superior to that of the traditional prediction models for all three types of data. The proposed model therefore has considerable practical value for predicting the gas leakage rates of valves in natural gas pipeline systems and provides guidance for predicting the leakage rates of other fluids through valves in other pipeline systems.

© 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Internal natural gas leakage of a valve refers to the leakage of natural gas through the closing member of the valve to the downstream side. Valves in natural gas pipelines commonly develop internal leakage because of the complex operating environment. There are many causes of internal valve leakage, including corrosion of the sealing surface when the valve leaves the factory without drying and anticorrosion treatment, impurities entering the valve seat when the valve is not infused with sealing grease, damage to the ball caused by nonstandard installation or welding, scratching of the sealing surface by welding slag and other construction residues, damage to the sealing face caused by improper cleaning, and so on [1]. When there is an internal leak in a valve, whether the natural gas leakage rate is within the permitted range is the key to deciding whether to replace or repair the damaged valve in time. Therefore, it is necessary to predict the internal valve leakage rate accurately. The leakage rate of a valve is affected by many factors, mainly the inlet pressure, the size and shape of the leak hole, the valve size and the valve type. Because the size and shape of the leak hole cannot be measured, many scholars have studied the leakage rate on the basis of the inlet pressure, valve size and valve type [2-4], factors that intuitively reflect the degree of leakage in the valve. At present, many scholars focus on predicting the valve leakage rate from the features of the vibration signals produced by the gas leakage [4-9].


When the valve is leaking, the gas is ejected from the leak hole into the downstream pipeline at high speed, which produces jet noise and causes pipeline vibration. The signals produced during the gas leak process are mostly nonlinear and non-stationary [10]. The key to leakage rate prediction is to establish a model relating the features of the valve leakage signals to the leakage rates, so that the leakage rate can be predicted by detecting the internal leakage signals. In [4], physical theoretical models for predicting liquid leakage rates through ball valves and globe valves were established, and the effects of leakage rate, inlet pressure, valve size and valve type on the root mean square (RMS) value were studied; the results demonstrated that the AE signal power was significantly correlated with the factors affecting the leakage rate. However, although a physical theoretical model can effectively describe simple internal relationships, it does not perform well when dealing with complex nonlinear problems. In [11], the degree of valve leakage was classified from the spectral amplitude of the signal into non-leakage, moderate leakage and severe leakage; this is only a coarse classification of the degree of leakage and is therefore not representative. In recent years, with the development of machine learning, the support vector machine (SVM) model based on statistical theory has been applied to the classification of the natural gas leakage flow level of a ball valve. Nine features, namely the mean, standard deviation, root mean square value, energy and entropy of the time-domain signal and the root variance frequency, peak value and frequency centre of the frequency-domain signal, were used as the input parameters, and the accuracy of the prediction model exceeded 95% [5]. In [12], a model based on factor analysis and k-medoids clustering was used to recognise internal valve leakage rates, with a recognition accuracy of 96.28%. These classification models have a stronger ability to deal with complex nonlinear problems than the physical models, but the prediction result is only a judgement of a specific range of leakage flow levels; they cannot quantify the leakage rates accurately, which reduces their applicability to different detection objects. In addition, these models classify leakage flow levels for a single valve type, and their performance can be greatly reduced when dealing with multi-type and multi-leakage-size valve leakage signals. Therefore, a stable and accurate method for predicting valve leakage rates is urgently needed to solve the abovementioned problems.

Along with the data explosion that resulted from the use of smart metering and various sensors, machine learning techniques have been highlighted in the field of metering-based forecasting. SVR is a commonly used regression forecasting method [13]. It maps the input data from a low-dimensional feature space to a high-dimensional feature space by nonlinear mapping and then performs linear regression in the high-dimensional space [14,15]. In the field of energy, it is often used for regression forecasting.
In [16], an SVR forecasting model using the historical electric load and weather data of four large commercial office buildings was used to forecast the demand-response baseline, showing high prediction accuracy and stability in short-term load forecasting. In [17], an SVR model was also constructed for forecasting crude oil prices. Another well-known prediction method is the neural network, of which the BPNN is the most widely used. In [18], a BPNN was applied and tested to forecast daily air pollutant concentrations, and in [19] a BPNN was applied to predict a ball bearing's remaining useful life. These results all showed that the BPNN exhibits good forecasting performance.

The above learning methods are generally shallow-structure algorithms with one or no hidden layers [20]. Consequently, their limitation lies in the limited ability to represent complex functions with finite samples and computational units, and they cannot effectively explore the regularity of the features [21]. These shortcomings of shallow models encourage us to re-examine the regression prediction problem on the basis of deep learning. The concept of deep learning originates from research on artificial neural networks: by combining lower-level features, a more abstract high-level model is formed to mine deeper feature representations of the data [22-24]. The DBN is an important deep learning model. It not only has the advantages of a traditional neural network but also has a strong ability to fuse information. The DBN model can drive the objective function towards a global optimum through pre-training and fine-tuning, which overcomes the tendency of traditional neural networks to become trapped in local optima [25]. DBNs are mainly used for the modelling, feature extraction and recognition of images, documents, speech and other objects [24]. In recent years, the DBN has attracted increasing attention in the field of regression prediction. In [23], a deep wind speed forecasting (WSF) framework and an intelligent approach based on the DBN were investigated; the DBN model enhanced the WSF performance and the prediction efficiency. In [25], a regression-based DBN approach was applied to predict the sound quality of vehicle interior noise; experimental verification and comparisons demonstrated that the DBN model exhibited better prediction accuracy and stability than the multiple linear regression (MLR), BPNN and SVM models. The literature above shows that the DBN has clear advantages over traditional shallow prediction models. Nevertheless, thus far, the application of the DBN to predicting internal valve leakage rates has not been considered in the published literature.

Therefore, based on the strong feature learning ability of the DBN, we propose a novel regression-based DBN method that replaces the top classification layer of the DBN with a linear regression layer. It not only addresses the poor accuracy of physical models but also compensates for the limited ability of shallow models to deal with complex data. The purpose of this study was to evaluate the performance of the DBN as a predictive tool for valve leakage rates. The performance of a model is directly related to the data structure, so different models should be compared and analysed on different data. Therefore, ball valve leakage data, plug valve leakage data and mixed leakage data of both are used to establish and test the proposed models, and the DBN model is compared with traditional prediction models such as SVR and BPNN. The experimental results show that the proposed method exhibits excellent performance in predicting valve leakage rates on all three types of data and is superior to the traditional methods in all three cases.

2. Methods

2.1. Restricted Boltzmann machine (RBM) architecture

RBMs are undirected probabilistic graphical models based on a bipartite graph, containing a single layer of observable (visible) variables and a single layer of latent (hidden) variables. There are no intra-layer connections among the visible units or among the hidden units. For a given configuration $(v, h)$, the energy function can be defined as follows:

E_\theta(v, h) = -\sum_{i=1}^{n} a_i v_i - \sum_{j=1}^{m} b_j h_j - \sum_{i=1}^{n} \sum_{j=1}^{m} v_i w_{i,j} h_j    (1)


where $v_i \in \{0, 1\}$ denotes the visible units, $h_j \in \{0, 1\}$ denotes the hidden units, $w_{i,j}$ is the weight between visible unit $v_i$ and hidden unit $h_j$, $a_i$ is the bias of $v_i$, $b_j$ is the bias of $h_j$, $\theta = \{w_{i,j}, a_i, b_j\}$ are the model parameters, and $n$ and $m$ are the numbers of visible and hidden units, respectively. The visible units serve as the input and output of the data, and the hidden units can be considered an internal representation of the data. Based on the energy function, the joint probability distribution of a configuration $(v, h)$ can be established as follows:

P_\theta(v, h) = \frac{1}{Z_\theta} e^{-E_\theta(v, h)}    (2)

Z_\theta = \sum_{v, h} e^{-E_\theta(v, h)}    (3)

where $Z_\theta$ is a normalisation factor (the partition function). The marginal distributions of the visible layer and the hidden layer can be defined as follows:

P_\theta(v) = \frac{1}{Z_\theta} \sum_{h} e^{-E_\theta(v, h)}    (4)

P_\theta(h) = \frac{1}{Z_\theta} \sum_{v} e^{-E_\theta(v, h)}    (5)

The conditional probability of $v$ given $h$ and the conditional probability of $h$ given $v$ can then be defined as follows:

P_\theta(v \mid h) = \frac{e^{-E_\theta(v, h)}}{\sum_{v} e^{-E_\theta(v, h)}}    (6)

P_\theta(h \mid v) = \frac{e^{-E_\theta(v, h)}}{\sum_{h} e^{-E_\theta(v, h)}}    (7)

The key step is to determine the marginal distribution $P_\theta(v)$ defined by the RBM model. To determine this distribution, it is necessary to calculate the normalisation factor $Z_\theta$. If we enumerate all possible configurations of $v$ and $h$, $2^{n+m}$ calculations are needed; when $n$ and $m$ are relatively large, the calculation is very time consuming, so the normalisation factor $Z_\theta$ is difficult to calculate [26]. For this reason, an approximate method named Gibbs sampling is used. Given the RBM model, $k$-step Gibbs sampling is executed as follows: initialise the configuration $v^{(0)}$ of the visible layer with a training sample, and then sample alternately:

h^{(0)} \sim P(h \mid v^{(0)}), \quad v^{(1)} \sim P(v \mid h^{(0)})
h^{(1)} \sim P(h \mid v^{(1)}), \quad v^{(2)} \sim P(v \mid h^{(1)})
\ldots, \quad h^{(k+1)} \sim P(h \mid v^{(k)})

The sample distribution defined by the RBM model can be obtained when the sampling number $k$ is sufficiently large. However, a larger sampling number decreases the training efficiency of the entire model, particularly when the dimensions of the visible and hidden layers are high. To improve the training efficiency of the RBM, a fast learning approach termed contrastive divergence (CD) was proposed by Hinton in 2002 [27]. Unlike traditional Gibbs sampling, when the configuration $v^{(0)}$ is initialised with the training data, only one or a few Gibbs sampling steps are needed to obtain a good approximation.

2.2. DBN architecture

An RBM is not itself a deep model, but a DBN (a deep model) can be formed by stacking RBMs in a greedy manner [28]. A DBN is a generative model involving both directed and undirected connections [22]. The DBN model has multiple hidden layers and no connections within the same layer. The training of the DBN parameters is divided into two steps [29,30]:

Step 1: Unsupervised pre-training. Pre-training initialises the network parameters using an unsupervised feature optimisation method (the gradient descent algorithm). The network parameters are the connection weights between layers and the biases of the neurons in each layer.

Step 2: Supervised fine-tuning. The pre-trained DBN framework is used as the initial state of the network, and the parameters obtained by pre-training are used as the initial parameters. A BP network is set up at the last layer of the DBN; it receives the output feature vector of the top RBM as its input feature vector, and supervised training is then carried out. Each RBM layer can only ensure that its own weights map optimally to that layer's feature vector, not to the feature vector of the entire DBN. Therefore, back propagation is used for the supervised learning of the overall DBN weights: the BP network propagates the error from the top down to every RBM layer, so that the DBN can approach the global optimum.
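The paper provides no code, so the following is only a rough illustration of the ideas in Sections 2.1 and 2.2: a Bernoulli RBM pre-trained with CD-1 and stacked greedily into a DBN. All class and function names are ours, the binary-unit assumption and hyperparameters are illustrative, and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # weights w_ij
        self.a = np.zeros(n_visible)   # visible biases a_i
        self.b = np.zeros(n_hidden)    # hidden biases b_j
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b)       # P(h_j = 1 | v)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.a)     # P(v_i = 1 | h)

    def cd1_update(self, v0):
        """One CD-1 parameter update for a mini-batch v0 (rows are samples)."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample h^(0) ~ P(h | v^(0))
        pv1 = self.visible_probs(h0)                      # one-step reconstruction v^(1)
        ph1 = self.hidden_probs(pv1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.a += self.lr * (v0 - pv1).mean(axis=0)
        self.b += self.lr * (ph0 - ph1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=50):
    """Greedy layer-wise pre-training: each RBM is trained on the hidden
    activations of the previous one (Step 1 of Section 2.2)."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # feed activations to the next layer
    return rbms

# toy usage: 22 input features (as for the ball valve data), hidden layers 24-20
features = rng.random((100, 22))
dbn = pretrain_dbn(features, layer_sizes=[24, 20])
```

Supervised fine-tuning (Step 2) would then back-propagate the error of the output layer through the stacked weights, as described above.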

2.3. Regression-based DBN

The general DBN model is mainly used for classification and has few applications in regression. Therefore, in this paper we propose a novel regression-based DBN model, which substitutes a linear regression (LR) layer for the softmax layer on top of the DBN structure. The LR-DBN performs the pre-training phase and the fine-tuning phase in the same manner as the general DBN. The difference is that the weights and biases of the lower layers are learned by back-propagating the gradients from the top LR layer; therefore, the LR objective must be differentiated with respect to the activation of the penultimate layer. Given training data $\{(x_n, y_n)\}$, $n = 1, 2, \ldots, m$, with $x_n, y_n \in \mathbb{R}$, the LR optimisation function can be written as follows:

\min_{w} \; (y - xw)^T (y - xw)    (8)

where $w$ are the parameters of the linear regression model. In Eq. (8), we let the objective be $f(w)$ and replace the input $x$ with the penultimate activation $h$; then, the back-propagation error for the LR layer can be obtained as:

\frac{\partial f(w)}{\partial h_n} = 2 w^T (h_n w - y_n)    (9)

where the superscript $T$ denotes matrix transposition. The learning procedure of the back-propagation algorithm is otherwise the same as that of the general DBN.
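A minimal NumPy sketch of this top-layer computation, restricted to Eqs. (8) and (9), is given below. The function name and shapes are our own illustration; the paper does not specify an implementation.

```python
import numpy as np

def lr_top_layer_grad(h, y, w):
    """Gradients of the LR objective f(w) = (y - h w)^T (y - h w).
    h: (n_samples, n_hidden) penultimate-layer activations
    y: (n_samples,) leakage rates; w: (n_hidden,) LR weights."""
    residual = h @ w - y                  # (n_samples,)
    grad_h = 2.0 * np.outer(residual, w)  # back-propagated error per Eq. (9)
    grad_w = 2.0 * h.T @ residual         # gradient for the LR weights themselves
    return grad_h, grad_w

# toy check with random data
rng = np.random.default_rng(1)
h = rng.random((8, 20)); y = rng.random(8); w = rng.random(20)
grad_h, grad_w = lr_top_layer_grad(h, y, w)
print(grad_h.shape, grad_w.shape)  # (8, 20) (20,)
```

In a full LR-DBN, grad_h would be propagated down through the pre-trained stack during fine-tuning, exactly as in a standard back-propagation pass.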

2.4. Performance criterion

Four evaluation indexes are used to evaluate the overall performance of the DBN prediction model, namely the mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE) and Pearson's correlation coefficient (CORR). MAE is used to evaluate the predictive ability of a model, MAPE is used to quantify the prediction deviation of a model, RMSE is a measure of the degree of change between the predicted and the actual data, and CORR is a measure of the linear correlation between the predicted and the actual data. They are defined as follows:

MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|    (10)

MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100    (11)

RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2 }    (12)

CORR = \frac{ N \sum_{i=1}^{N} y_i \hat{y}_i - \sum_{i=1}^{N} y_i \sum_{i=1}^{N} \hat{y}_i }{ \sqrt{ N \sum_{i=1}^{N} y_i^2 - \left( \sum_{i=1}^{N} y_i \right)^2 } \; \sqrt{ N \sum_{i=1}^{N} \hat{y}_i^2 - \left( \sum_{i=1}^{N} \hat{y}_i \right)^2 } }    (13)

where $N$ is the number of samples, $y_i$ is the actual value of the internal valve leakage rate and $\hat{y}_i$ is the predicted value of the internal valve leakage rate.

3. Experiments

3.1. Experiment platform

The experiment was set up as shown in Fig. 1. It consisted of two parts: a valve leakage generation system and a leakage signal acquisition system. For the valve leakage generation system, ball and plug valves with a diameter of 3 in. were used to produce the gas leakage. A nitrogen gas cylinder provided the inlet pressure; because of the pressure difference between the upstream and downstream pipes, the incompletely closed valves produced leakage. For the leakage signal acquisition system, a resonant AE sensor with a central frequency of 150 kHz was used to collect the leakage signals, and a 40-dB preamplifier amplified the signals collected by the sensor. A four-channel data acquisition card (Measurement Computing Corporation, USA) with a maximum sampling rate of 1 MHz was used to acquire the AE signals, and the single-channel sampling frequency was set to 200 kHz.

Fig. 1. Experiment platform.

Table 1. The time- and frequency-domain features of the leakage signals.
Time domain: entropy; energy; maximum; minimum; mean value; root mean square; effective value; variance; standard deviation; skewness; kurtosis; skewness factor; waveform factor; pulse factor; peak factor; margin factor.
Frequency domain: centre-of-gravity frequency; mean square frequency; root mean square frequency; frequency variance; frequency standard deviation.

Table 2. Effect of the number of hidden units in the first layer on the prediction performance of the DBN model (bold values in the original indicate the best values).
Hidden units   MAE        MAPE       RMSE       CORR
4              0.143657   47.18615   0.165693   0.956392
8              0.211956   86.86329   0.228604   0.921843
12             0.071006   26.86992   0.081794   0.972668
16             0.033564   12.34637   0.047465   0.978277
20             0.03305    10.9837    0.048214   0.981098
24             0.032375   11.06563   0.046241   0.983696
28             0.034761   11.93871   0.048407   0.983473
32             0.033463   11.52919   0.047882   0.981723
36             0.034531   12.02879   0.04927    0.982617
40             0.033447   11.45423   0.048015   0.983207

Table 3. Effect of the number of hidden units in the second layer on the prediction performance of the DBN model (bold values in the original indicate the best values).
Hidden units   MAE        MAPE       RMSE       CORR
4              0.367189   179.3788   0.381017   0.879764
8              0.307199   128.635    0.323292   0.896096
12             0.088365   34.10353   0.098489   0.971114
16             0.028509   11.12111   0.040192   0.981733
20             0.028236   10.14507   0.039311   0.98367
24             0.031277   10.75608   0.042625   0.984273
28             0.029577   10.83843   0.04026    0.984181
32             0.032535   11.54883   0.04368    0.98252
36             0.029568   10.8081    0.040725   0.982787
40             0.034226   12.23025   0.04526    0.982105

Table 4. Effect of the number of hidden units in the third layer on the prediction performance of the DBN model (bold values in the original indicate the best values).
Hidden units   MAE        MAPE       RMSE       CORR
4              0.200437   75.10104   0.218121   0.942118
8              0.146671   54.57143   0.160486   0.959294
12             0.053266   20.87528   0.06571    0.976242
16             0.033628   13.5181    0.044981   0.979543
20             0.029096   12.18059   0.039794   0.981605
24             0.030055   12.19822   0.041641   0.981288
28             0.030877   12.3035    0.041706   0.982181
32             0.03018    12.47949   0.040904   0.981062
36             0.030042   12.68686   0.042819   0.979873
40             0.030534   12.56631   0.042382   0.980675



Fig. 2. Effect of the number of hidden layers on the four evaluation indexes. (a) MAE. (b) MAPE. (c) RMSE. (d) CORR.

[Plot omitted: leakage rate (L/min) versus sample number; curves for BPNN, L-SVR, P-SVR, RBF-SVR, DBN and the actual values.]

Fig. 3. Prediction results of ball valve leakage rates.

3.2. Data acquisition

The signals of the ball and plug valves were collected. For each valve, six sets of internal valve leakage signal acquisition experiments were performed, differing in valve opening and initial inlet pressure. The numbers of signals in the six sets collected from the ball valve were 35, 51, 42, 48, 32 and 32, respectively, and the six sets were mixed to form the ball valve data set of 240 signals. The signal acquisition method for the plug valve was the same as that for the ball valve, and six sets of experiments were likewise performed. The numbers of signals in the six sets collected from the plug valve were 54, 33, 50, 46, 49 and 48, respectively, so the plug valve data set contains 280 signals.
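Each recorded AE signal is later summarised by the time- and frequency-domain features listed in Table 1 (Section 4). The paper does not give explicit formulas, so the sketch below shows plausible NumPy definitions for a few of these features; the function names and exact definitions are ours, not the authors'.

```python
import numpy as np

def time_domain_features(x):
    """A few common time-domain statistics of an AE signal x (1-D array).
    Definitions are illustrative; the paper does not specify its formulas."""
    rms = np.sqrt(np.mean(x ** 2))
    std = np.std(x)
    kurtosis = np.mean((x - x.mean()) ** 4) / std ** 4
    skewness = np.mean((x - x.mean()) ** 3) / std ** 3
    peak_factor = np.max(np.abs(x)) / rms
    return {"rms": rms, "kurtosis": kurtosis,
            "skewness": skewness, "peak_factor": peak_factor}

def frequency_domain_features(x, fs=200_000.0):
    """Spectral statistics from the magnitude spectrum (fs = 200 kHz sampling)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    p = spectrum / spectrum.sum()                  # normalised spectral weights
    cog_freq = np.sum(freqs * p)                   # centre-of-gravity frequency
    mean_square_freq = np.sum(freqs ** 2 * p)
    freq_variance = np.sum((freqs - cog_freq) ** 2 * p)
    return {"cog_freq": cog_freq,
            "rms_freq": np.sqrt(mean_square_freq),
            "freq_std": np.sqrt(freq_variance)}

# toy usage on a synthetic signal standing in for a real AE record
rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)
features = {**time_domain_features(signal), **frequency_domain_features(signal)}
```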

4. DBN-based internal valve leakage rate prediction model

In this study, the inlet pressure, the valve type (only for the mixed data) and the time- and frequency-domain features of the leakage signals were used as the feature set to describe the sample data.



The time- and frequency-domain features of the leakage signals are listed in Table 1. Three types of data, namely ball valve data (240 samples), plug valve data (280 samples) and mixed ball/plug valve data (520 samples), were used to construct the DBN framework. To reduce the effect of dimensionality on the prediction results, the feature set was normalised, and the values of the evaluation indexes mentioned above were computed from the normalised actual and predicted values. The number of input units corresponded to the dimension of the input features: 22 for the ball valve data model (time- and frequency-domain features and inlet pressure), 22 for the plug valve data model (time- and frequency-domain features and inlet pressure) and 23 for the mixed data model (time- and frequency-domain features, inlet pressure and valve type). The number of output units was set to one, which meets the need of predicting the internal valve leakage rate. In addition, the number of hidden layers and the number of units in each hidden layer are two important parameters that must be determined: an insufficient or excessive number of units may result in poor prediction performance, and the number of hidden layers determines the ability of the model to fit complex data.


Previous studies have shown that the performance of a DBN with multiple hidden layers is superior to that of a DBN with one hidden layer in most cases [31]. Therefore, in this study, a trial-and-error method was used to determine these parameters. Ten levels of the number of hidden units, ranging from 4 to 40 in steps of 4, were examined. Six sets of samples were drawn from the six sets of experimental data in the same proportion, and the six sets of samples were then mixed to form the training set and the testing set, which consist of 2/3 and 1/3 of the dataset, respectively. This partition method ensures that the samples of the training set and the testing set are representative and universal. Different data samples correspond to different optimal DBN structures. This study takes the ball valve data as an example to show the selection of the number of units and hidden layers. First, we initialised the DBN model with 22 input units and one output unit and assumed that the model had one hidden layer. Under this assumption, the effect of the number of hidden units on the prediction performance of the DBN model is shown in Table 2; the MAE, MAPE, RMSE and CORR values listed in Table 2 are the averages of 20 runs.


Fig. 4. Percentage error between the predicted and actual values of ball valve leakage rates. Percentage error = (prediction value − actual value)/actual value. (a) BPNN. (b) L-SVR. (c) P-SVR. (d) RBF-SVR. (e) DBN.


Table 5. Evaluation index values of the five methods on the ball valve data (bold values in the original indicate the best values).
Method     MAE        MAPE       RMSE       CORR
L-SVR      0.0369     18.0769    0.0565     0.9716
P-SVR      0.0348     12.2453    0.0476     0.9749
RBF-SVR    0.0325     13.1966    0.0446     0.9824
BPNN       0.0437     19.9675    0.0583     0.9691
DBN        0.028236   10.14507   0.039311   0.98367

[Plot omitted: leakage rate (L/min) versus sample number; curves for BPNN, L-SVR, P-SVR, RBF-SVR, DBN and the actual values.]

Fig. 5. Prediction results of plug valve leakage rates.

This table shows that the values of MAE, RMSE and CORR were optimal at 24 hidden units, with optimal values of 0.032375, 0.046241 and 0.983696, respectively, whereas the optimal MAPE value of 10.9837 occurred at 20 hidden units. With increasing numbers of hidden units, the evaluation indexes did not change in a regular way. Therefore, the number of units in the first hidden layer was set to 24. On the basis of the first hidden layer, the number of units in the second hidden layer was explored using the same method; the results are shown in Table 3. Apart from the optimal CORR at 24 hidden units, the other three indexes were best at 20 hidden units. Therefore, we concluded that the DBN with two hidden layers had the best prediction performance when the number of units in the second hidden layer was 20. At this point, the values of MAE, MAPE, RMSE and CORR were 0.028236, 10.14507, 0.039311 and 0.98367, respectively, which are superior to the index values of the DBN with one hidden layer. We then fixed the number of units in the first hidden layer at 24 and the number of units in the second hidden layer at 20 and continued to explore the number of units in a third hidden layer using the trial-and-error method; the experimental results are shown in Table 4. The optimal number of units in the third hidden layer was 20, with MAE, MAPE, RMSE and CORR values of 0.029096, 12.18059, 0.039794 and 0.981605, respectively, which are weaker than the evaluation index values of the DBN with two hidden layers. To analyse the effect of more hidden layers and hidden units on the performance of the prediction model, experiments with DBN models having 4, 5 and 6 hidden layers were also executed; the results are shown in Fig. 2. The MAE, MAPE and RMSE values reached their minima at the second hidden layer and then rose as the number of hidden layers increased, while the CORR value reached its maximum at the second hidden layer. Therefore, we concluded that the DBN model with two hidden layers had the best prediction performance, and the optimal structure of the prediction model for the ball valve data was 22-24-20-1. For the plug valve data and the mixed data, the same method was used to determine the optimal structure: 22-20-16-1 for predicting the leakage rates of the plug valve and 23-20-20-16-1 for the mixed data. These three frameworks were used as the prediction models in the following analysis.
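The layer-wise trial-and-error search described above could be automated roughly as follows. Here train_and_evaluate is a hypothetical helper, not part of the paper: it should train a DBN with the given layer sizes and return the four indexes averaged over 20 runs on the testing set. The stopping rule (grow until an extra layer stops improving RMSE) is a simplification of the procedure reported by the authors.

```python
# Sketch of the layer-wise trial-and-error search for the DBN structure.
CANDIDATE_UNITS = range(4, 44, 4)        # 4, 8, ..., 40 hidden units per layer
MAX_LAYERS = 6

def grow_dbn_structure(train_and_evaluate, n_inputs=22):
    """Add hidden layers one at a time, choosing the best unit count for each
    new layer, and stop once an additional layer no longer improves RMSE."""
    layers, best_overall = [], None
    for _depth in range(1, MAX_LAYERS + 1):
        # evaluate every candidate size for the newly added hidden layer
        scores = {u: train_and_evaluate([n_inputs] + layers + [u, 1])
                  for u in CANDIDATE_UNITS}
        best_u = min(scores, key=lambda u: scores[u]["RMSE"])  # selection by RMSE here
        candidate = scores[best_u]
        if best_overall is not None and candidate["RMSE"] >= best_overall["RMSE"]:
            break                        # deeper model no longer helps; stop growing
        layers.append(best_u)
        best_overall = candidate
    return [n_inputs] + layers + [1], best_overall
```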

5. Results and discussion

The DBN model was trained and tested using the ball valve data, the plug valve data and a mix of both. Because the internal leakage rate varies considerably under different inlet pressures and valve types, five different prediction models were used to predict the internal leakage rates for each type of data. The data were divided into a training set and a testing set with a ratio of 2/3:1/3. The models were trained on the training set to learn the nonlinear and unstable characteristics hidden in the training data, and the testing set was used to test the performance of the models; the results are presented in tables and figures. To verify the prediction performance of the DBN model, its prediction results


were compared with those of L-SVR, P-SVR, RBF-SVR and a well-tuned BPNN.
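For reference, the 2/3:1/3 split, the SVR baselines and the four evaluation indexes of Section 2.4 could be computed along the following lines. This is a sketch assuming scikit-learn; an MLPRegressor is used only as a stand-in for the paper's BPNN, random data stands in for the real measurements, and the DBN itself is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def evaluate(y_true, y_pred):
    """MAE, MAPE, RMSE and Pearson CORR as defined in Eqs. (10)-(13)."""
    err = y_true - y_pred
    return {
        "MAE": np.mean(np.abs(err)),
        "MAPE": np.mean(np.abs(err / y_true)) * 100.0,
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "CORR": np.corrcoef(y_true, y_pred)[0, 1],
    }

# X: normalised feature matrix (e.g. 22 columns for the ball valve data),
# y: measured leakage rates. Random data is used here purely for illustration.
rng = np.random.default_rng(0)
X = rng.random((240, 22))
y = X @ rng.random(22) + 0.1 * rng.standard_normal(240)

# 2/3 training, 1/3 testing, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

baselines = {
    "L-SVR": SVR(kernel="linear"),
    "P-SVR": SVR(kernel="poly"),
    "RBF-SVR": SVR(kernel="rbf"),
    "BPNN (MLP stand-in)": MLPRegressor(hidden_layer_sizes=(12, 12), max_iter=2000),
}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    print(name, evaluate(y_te, model.predict(X_te)))
```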


5.1. Prediction results of ball valve leakage rates

The parameters of the five models for the ball valve data were optimised. The framework of the DBN was 22-24-20-1, and the framework of the BPNN was 22-12-12-1. The prediction results of the ball valve leakage rates obtained using SVR, BPNN and DBN are shown


in Fig. 3, and the percentage error between the predicted and actual values is presented in Fig. 4. Fig. 3 clearly shows that all five models are capable of predicting the internal valve leakage rates. The prediction results obtained using the DBN and the actual values almost overlap, which implies that the internal leakage rates predicted by the DBN match the actual leakage rates. Fig. 4 shows that the prediction errors of BPNN and L-SVR deviate from the zero line more than those of the other three models, and the errors of many of their prediction points exceed ±20%, which indicates that these two models did not fully learn the hidden features of the data.


Fig. 6. Percentage error between the predicted and actual values of plug valve leakage rates. Percentage error = (prediction value − actual value)/actual value. (a) BPNN. (b) L-SVR. (c) P-SVR. (d) RBF-SVR. (e) DBN.

Table 6. Evaluation index values of the five methods on the plug valve data (bold values in the original indicate the best values).
Method     MAE        MAPE       RMSE       CORR
L-SVR      0.0351     37.5123    0.0489     0.9728
P-SVR      0.0319     23.9937    0.0398     0.9824
RBF-SVR    0.028      17.6709    0.0376     0.9861
BPNN       0.0370     28.4062    0.0531     0.9726
DBN        0.022992   19.91222   0.03195    0.989838


[Plot omitted: leakage rate (L/min) versus sample number; curves for BPNN, L-SVR, P-SVR, RBF-SVR, DBN and the actual values.]

Fig. 7. Prediction results of mixed data.

The prediction errors of P-SVR, RBF-SVR and DBN are basically within ±20%, and most of the prediction errors of the DBN model are distributed around the zero line, which shows that the DBN prediction model predicts the ball valve leakage rate more accurately. In addition, to better demonstrate the superiority of the DBN model, the evaluation index values obtained from the models on the ball valve data are shown in Table 5; the values of the evaluation indexes were computed from the normalised actual and predicted values. Table 5 shows that all five models had the ability to predict the internal leakage rates of the ball valve. The values of MAE, MAPE, RMSE and CORR obtained using the DBN model were 0.028236, 10.14507, 0.039311 and 0.98367, respectively, which are superior to those obtained using the other four models. In addition, the SVR models were superior to the BPNN with two hidden layers; the BPNN easily reached a local optimum, which resulted in the worst prediction performance.

5.2. Prediction results of plug valve leakage rates

The framework of the DBN was 22-20-16-1, and the framework of the BPNN was 22-8-12-4-1. The prediction results and errors for the plug valve are shown in Figs. 5 and 6, respectively. The prediction results of the DBN model deviate at only a very few points, and the prediction errors are almost all within ±20% and distributed around the zero line. The evaluation index values obtained from the five models on the plug valve data are shown in Table 6. From Table 6, we can infer that RBF-SVR had the best MAPE value, showing that its deviation between the predicted and actual results was the least. The DBN was superior to the other models in terms of MAE, RMSE and CORR, which indicates that it has stronger prediction ability than the other models. Therefore, for the plug valve data, the DBN also showed good overall prediction performance.

5.3. Prediction results of mixed data

The mixed data of the ball valve and the plug valve were tested to verify the prediction ability of the models on complex data.

Compared with the data of a single ball valve or plug valve, both the amount and the dimensionality of the mixed data increased; the structure of the mixed data was therefore more diversified, and the prediction results were more difficult to determine. The framework of the DBN was 23-20-20-16-1, and the framework of the BPNN was 23-4-1. The prediction results and errors of the mixed data are shown in Figs. 7 and 8, and the evaluation index values obtained from the five models on the mixed data are shown in Table 7. The values of MAE, MAPE, RMSE and CORR obtained using the DBN model were 0.0211, 22.1201, 0.0316 and 0.9838, respectively, so the DBN prediction model still exhibited the best performance on every evaluation index. In addition, although the complexity of the mixed data increased relative to the ball and plug valve data, the performance of the five prediction models did not decrease overall; on the contrary, it increased slightly. Tables 5-7 show that the five models had the ability to predict the valve leakage rates for the three types of data. Among them, the DBN model showed stronger performance than the traditional prediction models. Among the four traditional prediction algorithms, RBF-SVR showed the best prediction ability, whereas L-SVR and BPNN performed the worst. On the mixed data, the performance of the traditional models increased more than that of the DBN, indicating that the performance of the shallow models depends strongly on the amount of data available.



Fig. 8. Percentage error between the predicted and actual values of mixed data. Percentage error = (prediction value-actual value)/actual value. (a) BPNN. (b) L-SVR. (c) P-SVR. (d) RBF-SVR. (e) DBN.

Table 7. Evaluation index values of the five methods on the mixed valve data (bold values in the original indicate the best values).
Method     MAE        MAPE       RMSE       CORR
L-SVR      0.0309     29.2686    0.0437     0.9674
P-SVR      0.0223     25.6854    0.0338     0.9815
RBF-SVR    0.0228     22.1619    0.0361     0.9797
BPNN       0.0281     25.7316    0.0416     0.9731
DBN        0.0211     22.1201    0.0316     0.9838

6. Conclusions and prospects

Internal valve leakage is a common problem in the petroleum and natural gas industry, and it needs to be detected and evaluated in a timely manner. The diagnosis of internal leakage and the prediction of the internal leakage rate are the two most important means of doing so, and their timely and effective use can reduce the economic and environmental effects of natural gas leakage. In this study, the DBN model was used for the first time to predict the internal leakage rates of valves, which is of direct guiding significance for the maintenance and replacement of valves in natural gas pipelines. The DBN model was trained and tested using ball valve leakage data, plug valve leakage data and a mix of both. The success of the mixed-data model demonstrated that the DBN model not only has better prediction ability for a single valve type but also exhibits universality in predicting internal valve leakage rates. The experimental results demonstrated that the DBN model with multiple hidden layers can deal with deep nonlinear and non-stationary features and that its performance is better than that of the traditional SVR and BPNN models in predicting valve leakage rates in natural gas pipelines. In addition, the proposed method has high potential for practical application in petroleum and natural gas systems. Future work can proceed in the following directions:

• The prediction of the leakage rate in a natural gas valve may be affected by many other uncontrollable factors. Future work should focus on the analysis and processing of the original signal to fully exploit the intrinsic characteristics of the data and improve the performance of the prediction model.
• The performance of the model is strongly related to the proportions of the data types in the original data, so the existing valve leakage database should be improved. When abundant training data are available, a better DBN model for predicting valve leakage rates can be established.


Acknowledgement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.measurement.2018.10.020.

References

[1] J.L. Lyons, Lyons' Valve Designer's Handbook, Van Nostrand Reinhold Co, 1982.
[2] E. Meland, N.F. Thornhill, E. Lunde, et al., Quantification of valve leakage rates, AIChE J. 58 (4) (2012) 1181–1193.
[3] H. Wagner, Innovative techniques to deal with leaking valves, Technical Papers ISA 454 (2004) 105–117.
[4] W. Kaewwaewnoi, A. Prateepasen, P. Kaewtrakulpong, Investigation of the relationship between internal fluid leakage through a valve and the acoustic emission generated from the leakage, Measurement 43 (2) (2010) 274–282.
[5] Z.L. Li, H.F. Zhang, D.J. Tan, et al., A novel acoustic emission detection module for leakage recognition in a gas pipeline valve, Process Saf. Environ. Prot. 105 (2016) 32–40.
[6] H.F. Zhang, Z.L. Li, Z.L. Ji, et al., Intelligent leak level recognition of gas pipeline valve using wavelet packet energy and support vector machine model, Insight - Non-Destr. Test. Condition Monit. 55 (12) (2013) 670–674.
[7] A. Prateepasen, W. Kaewwaewnoi, P. Kaewtrakulpong, Smart portable noninvasive instrument for detection of internal air leakage of a valve using acoustic emission signals, Measurement 44 (2) (2011) 378–384.
[8] E. Meland, V. Henriksen, E. Hennie, et al., Spectral analysis of internally leaking shut-down valves, Measurement 44 (6) (2011) 1059–1072.
[9] E. Meland, N.F. Thornhill, E. Lunde, et al., Quantification of valve leakage rates, AIChE J. 58 (4) (2012) 1181–1193.
[10] M.F. Ghazali, S.B.M. Beck, J.D. Shucksmith, et al., Comparative study of instantaneous frequency based methods for leak detection in pipeline networks, Mech. Syst. Sig. Process. 29 (5) (2012) 187–200.
[11] R.L. Leon, D.Q. Heagerty, Method and apparatus for on-line detection of leaky emergency shut down or other valves, US Patent 6134949, 2000.
[12] S.B. Zhu, Z.L. Li, S.M. Zhang, et al., Natural gas pipeline valve leakage rate estimation via factor and cluster analysis of acoustic emissions, Measurement 125 (2018) 48–55.
[13] J. Liu, E. Zio, An adaptive online learning approach for Support Vector Regression: online-SVR-FID, Mech. Syst. Sig. Process. 76–77 (2016) 796–809.
[14] R.G. Brereton, G.R. Lloyd, Support vector machines for classification and regression, Analyst 135 (2) (2010) 230–267.
[15] F.E.H. Tay, L. Cao, Application of support vector machines in financial time series forecasting, Omega 29 (4) (2001) 309–317.
[16] Y. Chen, P. Xu, Y. Chu, et al., Short-term electrical load forecasting using the Support Vector Regression (SVR) model to calculate the demand response baseline for office buildings, Appl. Energy 195 (2017) 659–670.
[17] L. Fan, S. Pan, Z. Li, et al., An ICA-based support vector regression scheme for forecasting crude oil prices, Technol. Forecast. Soc. Chang. 112 (2016) 245–253.
[18] Y. Bai, Y. Li, X. Wang, et al., Air pollutants concentrations forecasting using back propagation neural network based on wavelet decomposition with meteorological conditions, Atmos. Pollut. Res. 7 (3) (2016) 557–566.
[19] R. Huang, L. Xi, X. Li, et al., Residual life predictions for ball bearings based on self-organizing map and back propagation neural network methods, Mech. Syst. Sig. Process. 21 (1) (2007) 193–207.
[20] D. Yu, L. Deng, Deep learning and its applications to signal and information processing [Exploratory DSP], IEEE Signal Process. Mag. 28 (1) (2011) 145–154.
[21] A. Mohamed, G.E. Dahl, G. Hinton, Acoustic modeling using deep belief networks, IEEE Trans. Audio Speech Lang. Process. 20 (1) (2011) 14–22.
[22] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, Cambridge, 2016.
[23] H.Z. Wang, G.B. Wang, G.Q. Li, et al., Deep belief network based deterministic and probabilistic wind speed forecasting approach, Appl. Energy 182 (2016) 80–93.
[24] Q. Zhang, L.T. Yang, Z. Chen, et al., A survey on deep learning for big data, Inf. Fusion 42 (2018) 146–157.
[25] H.B. Huang, R.X. Li, M.L. Yang, et al., Evaluation of vehicle interior sound quality using a continuous restricted Boltzmann machine-based DBN, Mech. Syst. Sig. Process. 84 (2017) 245–267.
[26] P.M. Long, R. Servedio, Restricted Boltzmann machines are hard to approximately evaluate or simulate, in: Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 703–710.
[27] G.E. Hinton, Training products of experts by minimizing contrastive divergence, Neural Comput. 14 (8) (2002) 1771–1800.
[28] G.E. Hinton, S. Osindero, Y.W. Teh, A fast learning algorithm for deep belief nets, Neural Comput. 18 (7) (2006) 1527–1554.
[29] G.E. Hinton, A practical guide to training restricted Boltzmann machines, in: Neural Networks: Tricks of the Trade, Springer, Berlin, Heidelberg, 2012, pp. 599–619.
[30] G.E. Hinton, R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science 313 (5786) (2006) 504–507.
[31] N. Le Roux, Y. Bengio, Representational power of restricted Boltzmann machines and deep belief networks, Neural Comput. 20 (6) (2008) 1631–1649.