
Chemometrics and Intelligent Laboratory Systems 146 (2015) 232–240

Predicting industrial polymer melt index via incorporating chaotic characters into Chou's general PseAAC

Miao Zhang, Beibei Zhao, Xinggao Liu ⁎

State Key Laboratory of Industrial Control Technology, Control Department, Zhejiang University, Hangzhou 310027, PR China

⁎ Corresponding author. Tel./fax: +86 571 87988336. E-mail address: [email protected] (X. Liu).

Article history: Received 21 April 2015; Received in revised form 27 May 2015; Accepted 28 May 2015; Available online 4 June 2015

Keywords: Melt index prediction; Chaotic theory; Phase space reconstruction; Least squares support vector machines; Particle swarm optimization; Propylene polymerization

Abstract

Melt index (MI) is one of the most important quality variables determining the product's grade and quality control in the practical propylene polymerization process. In this paper, the MI series is proved to be chaotic: its Hurst index is calculated to be 0.9391, and the maximum Lyapunov exponent is 0.1560. Then, the phase space of the MI series is reconstructed through Cao's method based on the obtained chaotic characteristics, and a novel MI prediction model, Chaos–LSSVM, is proposed based on least squares support vector machines (LSSVM) and chaos theory. Furthermore, the particle swarm optimization (PSO) algorithm is introduced to optimize the parameters of the LSSVM, and an optimal prediction model, PSO–Chaos–LSSVM, is thereby developed. Research on these proposed models is carried out with data from a real industrial polypropylene plant, and the prediction results show that the performance of this prediction model is much better than that of the pure LSSVM model without chaotic theory. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

Owing to its good performance, polypropylene (PP) production has a significant influence on industry, the economy, the military, and other fields. In the PP production process, the melt index (MI) is one of the most important quality variables determining the grade of the product. To meet the various and stringent requirements for polymerization products, accurate prediction of MI is essential for efficient monitoring and control of practical PP processes during grade change operations. MI is usually evaluated off-line with an analytical procedure in the laboratory, which takes almost 2–4 h [1], leading to off-specification products and enormous operation cost. On the other hand, considering the relatively high complexity of the kinetic behavior and operation of polymer plants, developing inferential models from the polymerization mechanism [2–7] remains greatly challenging. A number of researchers have therefore attempted to find modeling approaches for the prediction of MI in the industrial process [8–10]. Mogilicharla et al. [11] proposed a kinetic model to describe the propylene polymerization process with long chain branching for a twin catalyst system. A polyethylene melt index inferential model has been proposed by Kim and Yeo [12]. Liu and Gao [13] presented the ensemble anti-outlier just-in-time Gaussian process regression modeling method for industrial melt index prediction. Data-based modeling is now a well-developed area of research that has provided industries with various


soft sensors for MI prediction. Three soft sensor models involving support vector machines (SVM), partial least squares (PLS) and artificial neural networks (NN) have been developed by Han et al. [14] to predict MI for styrene-acrylonitrile (SAN) and PP processes. Zhang et al. [15] presented the inferential estimation of a polymer melt index in an industrial polymerization process using aggregated neural networks. Furthermore, Shi and Liu [16] compared the performance of the standard SVM, the LSSVM, and the weighted least squares support vector machines (WLSSVM) models. An accurate optimal predictive model of MI values with the relevance vector machine (RVM) was proposed by Jiang and coworkers [17]. Ahmed et al. [18] developed statistical data modeling based on partial least squares (PLS) for MI prediction in high density polyethylene (HDPE) processes to achieve energy-saving operation. Recently, Zhang and Liu [19] developed a soft sensor based on an adaptive fuzzy neural network and support vector regression. Fei et al. [20] proposed a novel data-based soft-sensing method with PLS for MI estimation in the propylene polymerization process. Though these works have achieved good MI prediction accuracy, greater prediction performance and better universality of the estimation model remain first-line goals in both academia and industry. However, up to now, the above methods estimate MI in terms of measurable key variables that possess statistical correlations between process and quality variables, but none predicts MI directly from the dynamics of the MI series itself. Understanding the chaotic characteristics of the PP process makes its behavior clearer, and the nature of chaos can then be exploited for better prediction. Chaos is a bounded unstable dynamic behavior that exhibits sensitive dependence on initial conditions and includes infinite unstable periodic motions in nonlinear systems.


Chaotic approaches have been introduced to develop various prediction methods in many fields [21–26]. In this article, an optimal chaotic prediction model based on chaos theory and the optimized LSSVM is proposed. First, the MI series is proved to be chaotic, which is distinct from the previous view that the MI is random. The Hurst index and the maximum Lyapunov exponent of the MI series are calculated to be 0.9391 and 0.1560, respectively. In theory, a Hurst index in the range of 0.5–1 and a positive maximum Lyapunov exponent are enough to prove that the MI series is a chaotic process. Then, the phase space of the MI series is reconstructed through Cao's method based on the obtained chaotic characteristics. Thus the LSSVM based on chaos theory (the Chaos–LSSVM model) is explored to predict the MI in the polypropylene process for the first time. The proposed model not only uses easily measurable variables that can be regarded as external factors, but also utilizes the internal relations of the MI series. Furthermore, the PSO algorithm is introduced to optimize the parameters of the LSSVM, and an optimal prediction model, PSO–Chaos–LSSVM, is thereby developed. Finally, research on these proposed models is carried out with the data from a real industrial PP plant.

According to the general form of Chou's pseudo amino acid composition (PseAAC) [27], as demonstrated in a series of recent publications [28–33] on developing new prediction methods, establishing a really useful statistical predictor usually involves the following procedures: (i) construct or select a valid benchmark dataset to train and test the predictor; (ii) formulate the statistical samples with an effective mathematical expression that can truly reflect their intrinsic correlation with the target to be predicted; (iii) introduce or develop a powerful algorithm (or engine) to operate the prediction; (iv) perform proper tests to objectively evaluate the anticipated accuracy of the predictor; and (v) establish a user-friendly web-server for the predictor that is accessible to the public. Below, these steps are dealt with one by one.

The rest of this paper is organized as follows. Section 2 presents the chaotic analysis of the MI series. Section 3 proposes the chaotic prediction model for MI. Section 4 presents the simulation results and performance analysis of the proposed model, and the conclusion is given in Section 5.

2. Chaotic analysis of melt index

2.1. Propylene polymerization process and experimental dataset

The propylene polymerization process considered here is currently operated for commercial purposes in a real plant in China. Fig. 1 illustrates a highly simplified schematic diagram of the process. The process


consists of four reactors in series; the first two continuous stirred-tank reactors (CSTRs) are liquid phase reactors, and the latter two are fluidized-bed reactors. The polymerization mainly takes place in the first two reactors in the liquid phase, and in the latter two reactors the reactions are completed in the vapor phase. The MI, which depends on the catalyst properties, reactant composition, reactor temperature and so on, determines different brands of products and different grades of product quality. To develop a prediction model for the MI, nine process variables (t, p, l, a, f1, f2, f3, f4, f5) which influence the process greatly have been chosen, where t, p, l, and a stand for the process temperature, pressure, level of liquid, and percentage of hydrogen in the vapor phase in the first CSTR reactor, respectively; f1–f3 are flow rates of three streams of propylene into the reactor, and f4 and f5 are flow rates of catalyst and aid catalyst, respectively. The data used for the prediction model have been acquired from the distributed control system (DCS) historical log recorded in a real propylene polymerization plant. Therefore, according to Chou's general PseAAC [27], the statistical sample which can truly reflect its intrinsic correlation with the target to be predicted is formulated in the current study by a vector in a 9-dimensional space:

P = [ψ_1 ψ_2 ⋯ ψ_u ⋯ ψ_9]^T    (1)

where T is the transpose operator, and ψ_u (u = 1, 2, ⋯, 9) represents the u-th statistical feature, namely t, p, l, a, f1, f2, f3, f4, and f5, respectively. In this study, 170 sets of data are collected from the real plant and the average sample time is about 2 h. Fig. 2 illustrates the melt index time series; the x axis is the sample number and the y axis is the melt index value. The curve shows many irregular troughs and peaks, and complex oscillating, irregular behavior can be observed. All datasets of the process variables and the MI series are divided into training, testing and generalization datasets.

2.2. Hurst index analysis

The Hurst index, originally developed by Harold Edwin Hurst [34], relates to the autocorrelations of a time series. It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction [35]. Different values of the Hurst index, denoted as H, indicate different time series forms: (a) H = 0.5 means a random time series; (b) 0 ≤ H < 0.5 means the time series tends to switch between high and low values in adjacent pairs; and (c) 0.5 < H ≤ 1 indicates a time series with positive autocorrelation, whose trends tend to persist into the future [36,37].

Fig. 1. General scheme of propylene polymerization.




Fig. 2. The time series of the melt index dataset in the PP industry.

The rescaled range (R/S) analysis, which is very effective in researching the correlation of a time series, is a classical method to estimate the Hurst index. Suppose that {x(t)}, t = 1, 2, ⋯, n is a time series; the detailed steps to calculate the Hurst index are as follows:

(1) Calculate the mean value: \bar{x}_n = \frac{1}{n}\sum_{t=1}^{n} x(t);
(2) Create the mean-adjusted series: y_t = x_t - \bar{x}_n, t = 1, 2, ⋯, n;
(3) Calculate the cumulative deviate series: z_t = \sum_{i=1}^{t} y_i, t = 1, 2, ⋯, n;
(4) Compute the range R and the standard deviation S: R(n) = \max_{1 \le t \le n} z_t - \min_{1 \le t \le n} z_t, S(n) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x}_n)^2};
(5) Calculate the rescaled range R(n)/S(n);
(6) The Hurst index H is estimated by H = \frac{\ln(R(n)/S(n))}{\ln(a n)}, as it fits the power law \frac{R(n)}{S(n)} = (a \cdot n)^{H}.
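For illustration, a minimal Python/NumPy sketch of the R/S procedure above is given below. The window sizes, function names and the synthetic example series are our own choices and not part of the original study; as in step (6), the estimate is taken as the slope of ln(R(n)/S(n)) versus ln(n).

import numpy as np

def rescaled_range(x):
    """R/S statistic of one window, following steps (1)-(5)."""
    x = np.asarray(x, dtype=float)
    y = x - x.mean()                  # mean-adjusted series
    z = np.cumsum(y)                  # cumulative deviate series
    return (z.max() - z.min()) / x.std()   # R(n)/S(n)

def hurst_rs(x, min_window=8):
    """Estimate the Hurst index as the slope of ln(R/S) versus ln(n)."""
    x = np.asarray(x, dtype=float)
    sizes = np.unique(np.logspace(np.log10(min_window),
                                  np.log10(len(x) // 2), 10).astype(int))
    log_n, log_rs = [], []
    for n in sizes:
        # average R/S over all non-overlapping windows of length n
        rs = [rescaled_range(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)   # step (6): fitted slope = H
    return slope

# Toy usage with a persistent-looking synthetic series standing in for the MI data
rng = np.random.default_rng(0)
toy_series = np.cumsum(rng.normal(size=170))
print("Estimated Hurst index:", hurst_rs(toy_series))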

According to the R/S method, the fitted curve between ln(R(n)/S(n)) and ln(n) is shown in Fig. 3. As revealed from the figure, the Hurst index of the MI series is 0.9391, which is the slope of the fitted curve. Thus, the MI series is not a random walk series and has positive autocorrelation, as its Hurst index lies in (0.5, 1].

Fig. 3. The Hurst index figure of the melt index series.

2.3. Phase space reconstruction technique

Phase space reconstruction is the first step in nonlinear time series analysis of data from chaotic systems [38]. It is a useful nonlinear/chaotic signal processing technique to characterize a dynamic system, whether low-dimensional or high-dimensional. Reconstructed phase spaces have been proven to be topologically equivalent to the original system and are therefore capable of recovering the nonlinear dynamics of the generating system [39–41]. The time-delay reconstruction technique provides an effective method to diagnose the dynamics of a system from a time series [42,43]. For a set of observable series {x_i}, i = 1, 2, ⋯, N, where N is the length of the series, the phase space can be reconstructed according to Y_j = (x_j, x_{j+τ}, ⋯, x_{j+(m-1)τ}), j = 1, 2, ⋯, N − (m − 1)τ, with the necessary conditions m ≪ N and τ > 0, where m is the embedding dimension of the reconstructed phase space and τ is the delay time. Selecting a proper delay time and embedding dimension is significant in the process of phase space reconstruction, as they reflect important features of the real system.

2.3.1. Calculate the delay time

There exist many methods to estimate the delay time, such as the autocorrelation function [44], mutual information [45], forecast entropy [46], and so on. Among these methods, the mutual information technique is broadly adopted for its easy implementation and superior performance. For a set of observable series {x_i}, i = 1, 2, ⋯, N, the mutual information between x_t and x_{t+τ} is defined as

I(τ) = \sum_{x_t, x_{t+τ}} P(x_t, x_{t+τ}) \ln \frac{P(x_t, x_{t+τ})}{P(x_t) P(x_{t+τ})}    (2)

where P(x_t) and P(x_{t+τ}) represent the probability densities of x_t and x_{t+τ}, respectively, and P(x_t, x_{t+τ}) is the joint probability density function. The first minimum of I(τ) is chosen as the proper delay time.
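As an illustration of Eq. (2) and of the first-minimum rule, the following sketch estimates I(τ) with a simple two-dimensional histogram and returns the first local minimum as the delay. The binning choice (16 bins) and the function names are our own assumptions, not the authors' implementation.

import numpy as np

def mutual_information(x, tau, bins=16):
    """Histogram estimate of I(tau) between x_t and x_{t+tau}, as in Eq. (2)."""
    x = np.asarray(x, dtype=float)
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()                          # joint probabilities P(x_t, x_{t+tau})
    px = pxy.sum(axis=1, keepdims=True)       # marginal P(x_t)
    py = pxy.sum(axis=0, keepdims=True)       # marginal P(x_{t+tau})
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def first_minimum_delay(x, max_tau=20):
    """Return the first local minimum of I(tau), used as the delay time."""
    mi = [mutual_information(x, tau) for tau in range(1, max_tau + 1)]
    for k in range(1, len(mi) - 1):
        if mi[k] < mi[k - 1] and mi[k] <= mi[k + 1]:
            return k + 1                      # +1 because mi[0] corresponds to tau = 1
    return int(np.argmin(mi)) + 1             # fall back to the global minimum

# Usage: tau = first_minimum_delay(mi_series)  (the paper reports tau = 2 for the MI data)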

2.3.2. Calculate the embedding dimension

For calculating the embedding dimension, several quantitative analysis methods have been proposed, including singular-value decomposition [47], the C–C method [48], Cao's method [49], the false nearest neighbor method [50] and so on. Among these, Cao's method, which is a modified false nearest neighbor method, is adopted in this paper. In Cao's method, the delay time τ should be selected in advance, and a(i, m) is first defined by

a(i, m) = \frac{\| Y_i(m+1) - Y_i^{NN}(m+1) \|_{\infty}}{\| Y_i(m) - Y_i^{NN}(m) \|_{\infty}}    (3)

where i = 1, 2, ⋯, N − mτ, Y_i^{NN}(m) is the nearest neighbor of Y_i(m) in the m-dimensional reconstructed phase space, and Y_i(m+1) and Y_i^{NN}(m+1) are the expansions of Y_i(m) and Y_i^{NN}(m) in the reconstructed phase space with m + 1 dimensions. Then, the average value of a(i, m) and its variation from dimension m to m + 1 are defined as follows:

E(m) = \frac{1}{N - mτ} \sum_{i=1}^{N - mτ} a(i, m)    (4)

E1(m) = \frac{E(m+1)}{E(m)}.    (5)

Besides, another parameter is defined to discriminate chaotic series from random series, that is:

E^{*}(m) = \frac{1}{N - mτ} \sum_{i=1}^{N - mτ} \left| x_{i+mτ} - x_{i+mτ}^{NN} \right|    (6)

E2(m) = \frac{E^{*}(m+1)}{E^{*}(m)}.    (7)
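The quantities of Eqs. (3)–(7) can be computed directly from the series once τ is fixed. The sketch below is a minimal reconstruction of Cao's E1(m) and E2(m) using maximum-norm distances; the helper names and the small guard against zero denominators are our own choices.

import numpy as np

def delay_embed(x, m, tau):
    # Y_j = (x_j, x_{j+tau}, ..., x_{j+(m-1)tau})
    return np.array([x[j:j + (m - 1) * tau + 1:tau]
                     for j in range(len(x) - (m - 1) * tau)])

def cao_e1_e2(x, tau, m_max=10):
    """E1(m) and E2(m) of Cao's method, Eqs. (3)-(7)."""
    x = np.asarray(x, dtype=float)
    E, E_star = [], []
    for m in range(1, m_max + 2):
        Y = delay_embed(x, m, tau)
        n = len(x) - m * tau                               # indices i for which Y_i(m+1) exists
        a_vals, d_vals = [], []
        for i in range(n):
            dist = np.max(np.abs(Y[:n] - Y[i]), axis=1)    # max-norm distances in dimension m
            dist[i] = np.inf
            j = int(np.argmin(dist))                       # nearest neighbour of Y_i(m)
            extra = abs(x[i + m * tau] - x[j + m * tau])   # new coordinate in dimension m+1
            denom = dist[j] if dist[j] > 0 else 1e-12
            a_vals.append(max(extra, dist[j]) / denom)     # a(i, m) of Eq. (3)
            d_vals.append(extra)                           # summand of Eq. (6)
        E.append(np.mean(a_vals))                          # Eq. (4)
        E_star.append(np.mean(d_vals))                     # Eq. (6)
    E, E_star = np.array(E), np.array(E_star)
    return E[1:] / E[:-1], E_star[1:] / E_star[:-1]        # E1(m), E2(m) for m = 1..m_max

An E1(m) that saturates with growing m together with an E2(m) that departs from 1 for some m indicates a deterministic, finite-dimensional series, which is the qualitative behaviour reported in Fig. 5 for the MI data.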

For a chaotic time series, if E1(m) becomes saturated once m exceeds some value m_0, then m_0 + 1 is selected as the minimum embedding dimension. However, E1(m) may also saturate with increasing m when the time series is random, so E2(m) is introduced to deal with this situation. For a random time series, E2(m) is always around 1 for all m; for a chaotic time series, E2(m) cannot remain constant, as it is a function of m.

2.3.3. Phase space reconstruction of the MI series

To reconstruct the phase space of the MI series, the proper delay time should be selected first. The function between I(τ) and τ is established according to Eq. (2), and the first minimum of I(τ) is selected as the proper time delay. Fig. 4 depicts the value of I(τ) against τ, from which the proper time delay of the MI training dataset is found to be 2.

The embedding dimension, another essential parameter of phase space reconstruction, is determined by Cao's method. Given possible values of the embedding dimension from 1 to M, the corresponding values of E1(m) and E2(m) are shown in Fig. 5. It is obvious that the MI series is not a random time series, since E2(m) ≠ 1 for some m; in other words, the MI series is deterministic to some extent. According to Cao's method, the minimum m for which E1(m − 1) becomes saturated can be selected as the embedding dimension. However, for actual time series the values of E1(m) often fluctuate rather than truly saturating, so selecting the embedding dimension requires a practical criterion. A commonly used stability criterion is:

\begin{cases} E1(k) \ge 0.9, & k = 1, 2, ⋯, M \\ E1(l) \ge 0.9, & l = k + 1, ⋯, M \end{cases}    (8)

where M is the maximum possible value of the embedding dimension. The minimum k which satisfies Eq. (8) is selected as the minimum embedding dimension. From Fig. 5, the embedding dimension of the MI training dataset is therefore 5.

2.4. Maximum Lyapunov exponent

The Lyapunov exponent reflects the sensitivity of chaotic motion to initial values. In particular, the maximum Lyapunov exponent (MLE, denoted as λ_1), which is connected with the most practical applications, is an important parameter for judging whether the system is chaotic or not [51].

Fig. 5. The embedding dimension figure of the melt index training dataset by Cao's method.

Different values of λ_1 indicate different types of motion of a system [52]: (a) λ_1 < 0 means a stable fixed point; (b) λ_1 = 0 means a stable limit cycle; (c) 0 < λ_1 < ∞ means chaos; and (d) λ_1 = ∞ means a random process. Several methods have been proposed to calculate the MLE, among which the small data sets method [53] is selected in this study. The small data sets method has a relatively short computation time, greater reliability for small time series, and can be applied effectively to data mixed with noise; hence, it is quite suitable for calculating the MLE of the MI series. The detailed procedure of calculating the MLE with the small data sets method is as follows:

(1) For a time series {x_i}, i = 1, 2, ⋯, N, calculate its delay time τ and embedding dimension m;
(2) Reconstruct the phase space as {Y_j, j = 1, 2, ⋯, M}, where M = N − (m − 1)τ;
(3) Find the nearest point Y_{j1} of each Y_j with a brief separation imposed, d_j(0) = \min_{j1} \| Y_j - Y_{j1} \|, subject to |j − j1| > p, where p is the average period;
(4) Calculate the distance between the above nearest points after discrete time i: d_j(i) = \| Y_{j+i} - Y_{j1+i} \|, i = 1, 2, ⋯, \min(M − j, M − j1);
(5) For every i, calculate y(i) = \frac{1}{q \Delta t} \sum_{j=1}^{q} \ln d_j(i), where q is the number of nonzero d_j(i);
(6) Obtain the regression line L of y(i) using the least squares algorithm; the slope of L is the MLE.
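A minimal sketch of this small-data-sets procedure is given below, reusing the delay embedding of Section 2.3. The choice of the average period p, the sampling interval Δt and the linear-fit range are placeholders that would have to be tuned for the actual MI data; this is our own reconstruction, not the authors' code.

import numpy as np

def max_lyapunov_rosenstein(x, m, tau, mean_period=10, dt=1.0, fit_range=(0, 15)):
    """Largest Lyapunov exponent by the small-data-sets method, steps (1)-(6)."""
    x = np.asarray(x, dtype=float)
    Y = np.array([x[j:j + (m - 1) * tau + 1:tau]
                  for j in range(len(x) - (m - 1) * tau)])     # step (2)
    M = len(Y)
    # step (3): nearest neighbour of each Y_j, excluding temporally close points
    dists = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    for j in range(M):
        lo, hi = max(0, j - mean_period), min(M, j + mean_period + 1)
        dists[j, lo:hi] = np.inf
    nn = np.argmin(dists, axis=1)
    # steps (4)-(5): average log-divergence of neighbour pairs after i steps
    y = []
    for i in range(fit_range[1] + 1):
        d = [np.linalg.norm(Y[j + i] - Y[nn[j] + i])
             for j in range(M) if j + i < M and nn[j] + i < M]
        d = [v for v in d if v > 0]
        y.append(np.mean(np.log(d)) / dt if d else np.nan)
    # step (6): slope of the regression line over the chosen fit range
    i_vals = np.arange(fit_range[0], fit_range[1] + 1)
    slope, _ = np.polyfit(i_vals, np.array(y)[i_vals], 1)
    return slope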

After the selection of the time delay τ = 2 and the embedding dimension m = 5, the phase space of the MI series is reconstructed as X_j = (x_j, x_{j+τ}, ⋯, x_{j+(m−1)τ}), where j = 1, 2, ⋯, n − (m − 1)τ, n = 170 is the length of the MI series and x_j is the actual value of the MI series. The MLE of the MI training dataset is then obtained as 0.1560 according to the small data sets method, which means the MI series is chaotic. Table 1 lists the results of the chaotic characteristic analysis of the MI series.

Fig. 4. Mutual information figure of the melt index dataset.

Table 1
Chaotic characteristics of the melt index dataset.

Parameters                   Results
Hurst index                  0.9391
Time delay                   2
Embedding dimension          5
Maximum Lyapunov exponent    0.1560
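Putting the sketches above together, the whole chaotic analysis of Section 2 can be reproduced along the following lines. The data file name is hypothetical, and the embedding-dimension selection is a crude stand-in for criterion (8); the paper reports H = 0.9391, τ = 2, m = 5 and λ_1 = 0.1560 for the MI training data.

import numpy as np

mi = np.loadtxt("mi_series.txt")               # hypothetical file with the 170 MI samples
H = hurst_rs(mi)                               # Hurst index (Section 2.2)
tau = first_minimum_delay(mi)                  # delay time (Section 2.3.1)
E1, E2 = cao_e1_e2(mi, tau)                    # Cao's indicators (Section 2.3.2)
m = int(np.argmax(E1 >= 0.9)) + 1              # simplified version of criterion (8)
lam1 = max_lyapunov_rosenstein(mi, m, tau)     # positive value indicates chaos (Section 2.4)
print(H, tau, m, lam1)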



3. Establishment of the prediction model

After the chaotic analysis of the MI training dataset, in this section an optimal chaotic model based on chaos theory and the PSO-optimized LSSVM is proposed for industrial MI prediction.

3.1. Least squares support vector machines

SVM is a powerful method for classification and regression purposes [54]. Training an SVM is a quadratic programming problem, and the technique was originally developed for binary classification [55]. In LSSVM, however, the solution is obtained by solving a set of linear equations instead of a quadratic program [56], which reduces the complexity significantly. An LSSVM model for function estimation has the following representation in the feature space:

f(x) = ω^T φ(x) + b    (9)

where x ∈ R^n, y ∈ R, φ(·): R^n → R^{n_f} is a nonlinear mapping from the input space to a high- or even infinite-dimensional feature space, depending on the kernel function adopted, and n and n_f are the dimensions of the input space and the feature space, respectively. Given a training set \{x_k, y_k\}_{k=1}^{N} with input data x_k and output data y_k, the optimization problem is defined as

\min_{ω, b, e} J(ω, e) = \frac{1}{2} ω^T ω + γ \frac{1}{2} \sum_{k=1}^{N} e_k^2    (10)

subject to the equality constraints

y_k = ω^T φ(x_k) + b + e_k, \quad k = 1, 2, ⋯, N    (11)

where e_k is the error between the actual output and the predicted output of the k-th data pair, and γ is the regularization factor. For this optimization problem, the Lagrange function is introduced as

L(ω, b, e, α) = J(ω, e) − \sum_{k=1}^{N} α_k \left( ω^T φ(x_k) + b + e_k − y_k \right)    (12)

where α_k (k = 1, ⋯, N) are Lagrange multipliers. The conditions for optimality are given by

\begin{cases} \frac{\partial L}{\partial ω} = 0 \;→\; ω = \sum_{k=1}^{N} α_k φ(x_k) \\ \frac{\partial L}{\partial b} = 0 \;→\; \sum_{k=1}^{N} α_k = 0 \\ \frac{\partial L}{\partial e_k} = 0 \;→\; α_k = γ e_k, \quad k = 1, ⋯, N \\ \frac{\partial L}{\partial α_k} = 0 \;→\; ω^T φ(x_k) + b + e_k − y_k = 0, \quad k = 1, ⋯, N \end{cases}    (13)

By eliminating the variables ω and e, the solution is given by

\begin{bmatrix} 0 & 1_v^T \\ 1_v & K + I/γ \end{bmatrix} \begin{bmatrix} b \\ α \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}    (14)

where y = [y_1\ y_2\ ⋯\ y_N]^T, 1_v = [1\ 1\ ⋯\ 1]^T, α = [α_1\ α_2\ ⋯\ α_N]^T, and I is an identity matrix. Mercer's condition K(x_i, x_j) = φ(x_i)^T φ(x_j) defines the kernel function. The resulting LSSVM model for function estimation becomes

y(x) = \sum_{k=1}^{N} α_k K(x, x_k) + b    (15)

where α_k and b are the solution of the linear system, and K(x, x_k) can be any kernel function satisfying the Mercer condition. Typical kernel functions include the linear, polynomial, and radial basis function (RBF) kernels. In this paper, the RBF kernel is applied, as described in Eq. (16):

K(x, x_k) = \exp\left( − \frac{\| x − x_k \|_2^2}{σ^2} \right)    (16)

where σ is the kernel parameter.
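A compact implementation of Eqs. (13)–(16) only requires solving one linear system. The sketch below is our own minimal LSSVM regressor, not the authors' code; the class name, parameter names and default values are assumptions.

import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x, x') = exp(-||x - x'||^2 / sigma^2), Eq. (16)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

class LSSVM:
    """Least squares SVM regression via the linear system of Eq. (14)."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0                          # [0, 1_v^T]
        A[1:, 0] = 1.0                          # [1_v, K + I/gamma]
        A[1:, 1:] = K + np.eye(n) / self.gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        self.b, self.alpha, self.X_train = sol[0], sol[1:], X
        return self

    def predict(self, X):
        # Eq. (15): y(x) = sum_k alpha_k K(x, x_k) + b
        return rbf_kernel(X, self.X_train, self.sigma) @ self.alpha + self.b

A model would then be used as, for example, LSSVM(gamma=100, sigma=0.5).fit(X_train, y_train).predict(X_test); the two hyperparameter values here are placeholders, since in the proposed method they are left to the PSO search described next.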

3.2. PSO algorithm

PSO is a biologically inspired stochastic search method introduced by Kennedy. Compared with the genetic algorithm and the simulated annealing algorithm, PSO has the advantages of easy implementation and high performance, and is thus widely used [57–59]. Each potential solution in PSO is represented as a particle. The population of particles is randomly initialized in the search space, and each particle has a randomized velocity associated with it, with which it moves through the problem space. The position of a particle is determined by its previous position and its velocity; the velocity determines a particle's direction, while the fitness value, given by the optimization function, is the assessment standard of the particle. Suppose that in a D-dimensional search space the number of particles is N, the particle swarm is \{x_i \mid i = 1, 2, ⋯, N\} with x_i = (x_{i1}, x_{i2}, ⋯, x_{iD}), and each particle has a velocity \{v_i \mid i = 1, 2, ⋯, N\} with v_i = (v_{i1}, v_{i2}, ⋯, v_{iD}), v_i ∈ [−v_{max}, v_{max}], where v_{max} is the designed maximum velocity. The pbest is the best solution that an individual particle has achieved so far, and the gbest is the overall best value among all particles. At each iteration step, the velocity and position of a particle are updated by the following equations:

v_i^{k+1} = w^k v_i^k + c_1 r_1 (pbest_i^k − x_i^k) + c_2 r_2 (gbest^k − x_i^k)    (17)

x_i^{k+1} = x_i^k + v_i^{k+1}    (18)

w^k = w_{max} − (w_{max} − w_{min}) \cdot (k − 1)/iter_{max}    (19)

where w^k is the inertia weight, c_1 and c_2 are acceleration coefficients, r_1 and r_2 are stochastic coefficients uniformly distributed in the interval [0, 1], k is the current iteration number, and iter_{max} is the maximal iteration number. As the iteration number increases, the gbest gets closer to the optimal solution of the optimization problem.
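The update rules of Eqs. (17)–(19) translate directly into the following minimal PSO routine. It uses the parameter values later listed in Step 6 as defaults; the bound handling and the function signature are our own assumptions.

import numpy as np

def pso_minimize(fitness, bounds, n_particles=30, iter_max=100,
                 c1=2.0, c2=2.0, w_max=0.8, w_min=0.2, v_max=2.0, seed=0):
    """Minimal PSO implementing Eqs. (17)-(19); defaults follow Step 6 of Section 3.3."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)        # initial positions
    v = np.zeros((n_particles, dim))                           # initial velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for k in range(1, iter_max + 1):
        w = w_max - (w_max - w_min) * (k - 1) / iter_max       # Eq. (19)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (17)
        v = np.clip(v, -v_max, v_max)                          # velocity limits (Step 9)
        x = np.clip(x + v, lo, hi)                             # Eq. (18), kept inside bounds
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if f.min() < gbest_f:
            gbest, gbest_f = x[int(np.argmin(f))].copy(), f.min()
    return gbest, gbest_f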

3.3. The proposed PSO–Chaos–LSSVM model

In this study, the LSSVM method based on chaotic theory, named the Chaos–LSSVM model, is first established for MI prediction. The Chaos–LSSVM model utilizes not only the nine easily measurable variables, but also the chaotic property of the MI series. Given the MI series \{x_i \mid i = 1, 2, ⋯, n\} and the nine process variables, the input and output datasets of the Chaos–LSSVM model are described as follows:

s_i = (t_{i+(m−1)τ+1},\ p_{i+(m−1)τ+1},\ l_{i+(m−1)τ+1},\ a_{i+(m−1)τ+1},\ f^1_{i+(m−1)τ+1},\ f^2_{i+(m−1)τ+1},\ f^3_{i+(m−1)τ+1},\ f^4_{i+(m−1)τ+1},\ f^5_{i+(m−1)τ+1},\ x_i,\ x_{i+τ},\ ⋯,\ x_{i+(m−1)τ})    (20)

y_i = x_{i+(m−1)τ+1}    (21)

where s_i is the input data, y_i is the target output, i = 1, 2, ⋯, n − (m − 1)τ − 1, n = 170 is the length of the MI series, the delay time is τ = 2 and the embedding dimension is m = 5. It should be noted that the



input and output datasets of the pure LSSVM model are different, which are described as follows:

s_i = (t_i,\ p_i,\ l_i,\ a_i,\ f^1_i,\ f^2_i,\ f^3_i,\ f^4_i,\ f^5_i)    (22)

y_i = x_i.    (23)
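For concreteness, the two dataset constructions of Eqs. (20)–(23) can be sketched as follows, assuming the nine process variables are stored row-wise in an array of shape (170, 9) and the MI series in a vector of length 170; the function and variable names are our own.

import numpy as np

def build_chaos_dataset(process_vars, mi, m=5, tau=2):
    """Eqs. (20)-(21): process variables at time i+(m-1)*tau+1 plus the delayed MI values."""
    process_vars, mi = np.asarray(process_vars, float), np.asarray(mi, float)
    n = len(mi)
    S, Y = [], []
    for i in range(n - (m - 1) * tau - 1):
        t = i + (m - 1) * tau + 1
        delayed_mi = mi[i:i + (m - 1) * tau + 1:tau]       # x_i, x_{i+tau}, ..., x_{i+(m-1)tau}
        S.append(np.concatenate([process_vars[t], delayed_mi]))
        Y.append(mi[t])
    return np.array(S), np.array(Y)

def build_plain_dataset(process_vars, mi):
    """Eqs. (22)-(23): the nine process variables only, targeting the same-time MI."""
    return np.asarray(process_vars, float), np.asarray(mi, float)

# With m = 5 and tau = 2 the chaotic construction yields 161 samples, of which the first
# 111 are used for training and the next 30 for testing, as described in Section 3.3.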

As described in Eq. (20), after phase space reconstruction the number of input vectors s_i is 161, which equals the number of output MI values y_i. The first 111 vectors are used as the training dataset and the next 30 vectors as the testing dataset, leaving the remaining data as the generalization dataset. The model's prediction accuracy is reflected by its performance on the testing dataset, and its universality by its performance on the generalization dataset.

The Chaos–LSSVM model is established once the input vectors and output values are determined as above. To develop a more accurate prediction model, the PSO algorithm is further adopted to optimize the parameters of the Chaos–LSSVM model, namely the regularization factor γ and the kernel parameter σ, giving the PSO–Chaos–LSSVM model. The σ value is related to the distance between the training points and the smoothness of the interpolation of the model, while the γ value governs the trade-off between the smoothness of the model and its accuracy. The flow chart of the PSO–Chaos–LSSVM model is shown in Fig. 6, and the detailed procedure is given as follows:

Step 1 Calculate the Hurst index of the MI series according to the R/S method. If H = 0.5, go to Step 5; else go to Step 2.
Step 2 Perform the chaotic analysis of the MI series: calculate the time delay τ and the embedding dimension m with the mutual information method and Cao's method.
Step 3 Conduct phase space reconstruction, and then calculate the MLE of the MI series. If λ_1 > 0, go to Step 4; else go to Step 5.
Step 4 Obtain the input and output training datasets as in Eqs. (20) and (21) from the original nine easily measurable variables and the MI series with the delay time and the embedding dimension. Then go to Step 6 directly.
Step 5 Obtain the input and output training datasets as in Eqs. (22) and (23).
Step 6 Give the initial positions of all particles, with two dimensions representing the parameters of the LSSVM, i.e., the regularization factor γ and the kernel parameter σ. Initialize the parameters c_1, c_2, r_1, r_2, v_max, v_min, w_max, w_min, iter_max, and the number of particles N, and randomly generate the initial position x of all particles. In this paper, the parameters are given as follows: c_1 = c_2 = 2.0, v_max = 2, v_min = −2, w_max = 0.8, w_min = 0.2, iter_max = 100, and N = 30. Set the iteration number k = 1.
Step 7 Based on the LSSVM model and the training dataset, evaluate the initial fitness of each particle according to Eq. (26). To prevent model overfitting, cross validation is used when calculating the fitness of a particle. The particles with the best fitness are then chosen as the initial pbest, and the initial gbest is obtained from the pbest particles.
Step 8 Calculate the weight w^k according to Eq. (19).
Step 9 Update the velocity and position of each particle x_i according to Eqs. (17) and (18). If v_i > v_max, set v_i = v_max; if v_i < v_min, set v_i = v_min.
Step 10 Based on the LSSVM model and cross validation on the training dataset, calculate the fitness again with the updated x_i.
Step 11 Update the pbest according to the fitness: if the fitness of the current particle is lower (better) than that of the stored pbest, replace the pbest and update the gbest accordingly.
Step 12 Update the iteration number as k = k + 1. If k ≥ iter_max, go to Step 13; else go back to Step 8.
Step 13 Take the updated gbest as the optimal parameters of the LSSVM model. Thus, the final optimal chaotic MI prediction model is established.

Fig. 6. The flow chart of the PSO–Chaos–LSSVM prediction model.
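Steps 6–13 amount to minimizing a cross-validated error over (γ, σ). A sketch of the fitness of Step 7, reusing the LSSVM and pso_minimize sketches given earlier, could look as follows; the number of folds and the search bounds are our own assumptions, since the paper only states that cross validation with the RMSE of Eq. (26) is used.

import numpy as np

def cv_rmse(params, X, y, n_folds=5):
    """Fitness of one particle: k-fold cross-validated RMSE of an LSSVM with
    regularization gamma = params[0] and kernel width sigma = params[1]."""
    gamma, sigma = params
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        model = LSSVM(gamma=gamma, sigma=sigma).fit(X[train], y[train])
        errs.append(np.sqrt(np.mean((y[f] - model.predict(X[f])) ** 2)))
    return float(np.mean(errs))

# Hypothetical usage on the training arrays X_train, y_train built from Eqs. (20)-(21):
# best, _ = pso_minimize(lambda p: cv_rmse(p, X_train, y_train),
#                        bounds=[(1.0, 1000.0), (0.01, 10.0)])
# final_model = LSSVM(gamma=best[0], sigma=best[1]).fit(X_train, y_train)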

4. Results and discussion

The performance of the proposed model is evaluated with several statistical measures: the mean absolute error (MAE), the mean relative error (MRE), the root mean square error (RMSE),



Theil's inequality coefficient (TIC), and the standard deviation of the absolute error (STD). All performance measures are defined as follows:

MAE = \frac{1}{n} \sum_{i=1}^{n} | y_i − \hat{y}_i |    (24)

MRE = \frac{1}{n} \sum_{i=1}^{n} \frac{| y_i − \hat{y}_i |}{y_i} \times 100\%    (25)

RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} ( y_i − \hat{y}_i )^2 }    (26)

TIC = \frac{ \sqrt{ \sum_{i=1}^{n} ( y_i − \hat{y}_i )^2 } }{ \sqrt{ \sum_{i=1}^{n} y_i^2 } + \sqrt{ \sum_{i=1}^{n} \hat{y}_i^2 } }    (27)

STD = \sqrt{ \frac{1}{n−1} \sum_{i=1}^{n} ( e_i − \bar{e} )^2 }    (28)

where \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i, e_i = y_i − \hat{y}_i, \bar{e} = \frac{1}{n} \sum_{i=1}^{n} e_i, and y_i and \hat{y}_i denote the measured value and the predicted result, respectively.
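The five measures translate directly into code; the following sketch is a plain transcription of Eqs. (24)–(28), with our own function name.

import numpy as np

def prediction_metrics(y, y_hat):
    """MAE, MRE, RMSE, TIC and STD of Eqs. (24)-(28)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    e = y - y_hat
    mae = np.mean(np.abs(e))
    mre = np.mean(np.abs(e) / y) * 100.0
    rmse = np.sqrt(np.mean(e ** 2))
    tic = np.sqrt(np.sum(e ** 2)) / (np.sqrt(np.sum(y ** 2)) + np.sqrt(np.sum(y_hat ** 2)))
    std = np.sqrt(np.sum((e - e.mean()) ** 2) / (len(e) - 1))
    return {"MAE": mae, "MRE(%)": mre, "RMSE": rmse, "TIC": tic, "STD": std}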

The MAE, MRE and RMSE reflect the prediction accuracy of the models, the STD indicates the predictive stability, and the TIC indicates the level of agreement between the proposed model and the studied process [60].

In this case, the PSO–Chaos–LSSVM is employed to construct the MI prediction model. At the same time, the Chaos–LSSVM model, the pure LSSVM model [16] and the MPSO–SA–RNN method reported in the open literature [61] have also been considered for comparison with the proposed PSO–Chaos–LSSVM model. The data listed in Table 2 show the performance comparison of the different models on the testing dataset. The proposed PSO–Chaos–LSSVM model gives the best prediction performance for the testing dataset, with an MRE of 0.15%, better than the pure LSSVM, MPSO–SA–RNN and Chaos–LSSVM models, with MREs of 3.66%, 0.83% and 0.20%, respectively. In terms of RMSE, the PSO–Chaos–LSSVM achieves 0.0054 and the Chaos–LSSVM 0.0076, much better than the LSSVM (0.0214), decreases of 74.8% and 64.5%, respectively. Similar results are observed for STD: the PSO–Chaos–LSSVM achieves 0.0648 and the Chaos–LSSVM 0.0733, much better than the LSSVM (0.1116), decreases of 41.9% and 34.3%, respectively. Moreover, the TIC of the PSO–Chaos–LSSVM is quite acceptable when compared with that of the pure LSSVM, with a decrease of 95.8% from 0.0240 to 0.0010. In addition, compared with the MPSO–SA–RNN model from the published literature, the PSO–Chaos–LSSVM model still has a large advantage in the error indicators: its MAE, MRE, RMSE and TIC decrease by 81.2%, 81.9%, 81.2% and 81.8%, respectively. These data show that the proposed PSO–Chaos–LSSVM model has promising potential for online MI prediction in the propylene polymerization process.

The visual comparison in Fig. 7 shows how the Chaos–LSSVM and PSO–Chaos–LSSVM models perform on the testing dataset. The curve marked with crosses is the real MI value obtained from the plant, while the curves marked with circles and asterisks are the MI values predicted by the PSO–Chaos–LSSVM and Chaos–LSSVM models, respectively. As can be seen from the figure, although the Chaos–LSSVM model already predicts the MI close to the actual values, the PSO–Chaos–LSSVM model achieves even better performance on most testing points.

Fig. 7. Performance of the proposed models on the testing dataset.

To illustrate the universality of the proposed models, a detailed comparison on the generalization dataset is listed in Table 3. The PSO–Chaos–LSSVM achieves a decrease of 92.0% in RMSE, from 0.0313 to 0.0025, compared to the pure LSSVM model, and the same holds in terms of MAE, MRE, STD and TIC. In this comparison, although the Chaos–LSSVM model wins over the LSSVM and MPSO–SA–RNN models, it loses to the PSO–Chaos–LSSVM model. Fig. 8 shows another visual comparison of how the models work on the generalization dataset. The result also supports the conclusion that the PSO–Chaos–LSSVM model performs very well, not only on the testing dataset but also on the generalization dataset.

In addition, we believe that there is no model overfitting, judging from the performance on the testing and generalization datasets. The training set is taken from the same batch as the testing set, while the generalization set is obtained from a different batch, so the testing performance reflects the model's prediction accuracy and the generalization performance reflects its universality. When the model is tested on the testing dataset, the prediction results are quite close to the analytic MI values. The model's strong performance on the generalization dataset further confirms the absence of overfitting, because an overfitted model's predictions on the generalization dataset (which comes from a different batch than the testing dataset) would deteriorate compared with those on the testing dataset.

Table 2
Performance of the prediction models on the testing dataset.

Model              MAE     MRE (%)  RMSE    STD     TIC
LSSVM [16]         0.0842  3.66     0.0214  0.1116  0.0240
MPSO–SA–RNN [61]   0.0218  0.83     0.0287  –       0.0055
Chaos–LSSVM        0.0052  0.20     0.0076  0.0733  0.0015
PSO–Chaos–LSSVM    0.0041  0.15     0.0054  0.0648  0.0010

Table 3
Performance of the prediction models on the generalization dataset.

Model              MAE     MRE (%)  RMSE    STD     TIC
LSSVM [16]         0.0662  2.60     0.0313  0.0751  0.0138
MPSO–SA–RNN [61]   0.0176  0.67     0.0246  –       0.0047
Chaos–LSSVM        0.0041  0.16     0.0052  0.0585  9.98 × 10−4
PSO–Chaos–LSSVM    0.0019  0.071    0.0025  0.0418  4.73 × 10−4


Fig. 8. Performance of the proposed models on the generalization dataset.

5. Conclusion

Based on chaos theory and the PSO-optimized LSSVM method, this paper presents an optimal chaotic model for MI prediction. The MI series is proved to be chaotic by its positive maximum Lyapunov exponent and Cao's method, which is distinct from the previous view that the MI is random. Based on these chaotic characteristics, the Chaos–LSSVM model is explored to predict the MI in the polypropylene process. The PSO algorithm is introduced to optimize the parameters of the proposed model, so the PSO–Chaos–LSSVM model is further proposed to infer the MI of polypropylene, and it is compared with other methods reported in the literature. The PSO–Chaos–LSSVM model predicts the MI with an MRE of 0.15% and an RMSE of 0.0054 on the testing dataset, smaller than those of the Chaos–LSSVM model (MRE of 0.20% and RMSE of 0.0076) and much better than those of the pure LSSVM (MRE of 3.66% and RMSE of 0.0214). The results indicate that the proposed PSO–Chaos–LSSVM model provides prediction accuracy and reliability for practical industrial polypropylene manufacturing. According to Chou's 5-step rule [27], and as demonstrated in a series of recent publications (see, e.g., [28–33]) on developing new prediction methods, user-friendly and publicly accessible web-servers significantly enhance the impact of a predictor [62]; we shall therefore make efforts in our future work to provide a web-server for the prediction method presented in this paper.

Acknowledgments

This work is supported by the National Natural Science Joint Funds of NSFC–CNPC of China (Grant U1162130), the National High Technology Research and Development Program (863, Grant 2006AA05Z226) and the Zhejiang Provincial Natural Science Foundation for Distinguished Young Scientists (Grant R4100133), and their support is thereby acknowledged.

References

[1] S.S. Bafna, A.M. Beall, A design of experiments study on the factors affecting variability in the melt index measurement, J. Appl. Polym. Sci. 65 (1997) 277–288. [2] S. Lucia, T. Finkler, S. Engell, Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty, J. Process Control 23 (2013) 1306–1319. [3] A. Shamiri, M.A. Hussain, F.S. Mjalli, M.S. Shafeeyan, N. Mostoufi, Experimental and modeling analysis of propylene polymerization in a pilot-scale fluidized bed reactor, Ind. Eng. Chem. Res. 53 (2014) 8694–8705. [4] Z. Tian, X.P. Gu, L.F. Feng, G.H. Hu, A model for the structures of impact polypropylene copolymers produced by an atmosphere-switching polymerization process, Chem. Eng. Sci. 101 (2013) 686–698. [5] A. Lu, V.G. Magupalli, J. Ruan, Q. Yin, M.K. Atianand, M.R. Vos, G.F. Schröder, K.A. Fitzgerald, H. Wu, E.H. Egelman, Unified polymerization mechanism for the assembly of ASC-dependent inflammasomes, Cell 156 (2014) 1193–1206.


[6] A. Shamiri, M. azlan Hussain, F. sabri Mjalli, N. Mostoufi, S. Hajimolana, Dynamics and predictive control of gas phase propylene polymerization in fluidized bed reactors, Chin. J. Chem. Eng. 21 (2013) 1015–1029. [7] N. Tan, L.Q. Yu, Z. Tan, B.Q. Mao, Kinetics of the propylene polymerization with prepolymerization at high temperature using Ziegler–Natta catalyst, J. Appl. Polym. Sci. 132 (2015). [8] H.C. Lou, H.Y. Su, L. Xie, Y. Gu, G. Rong, Inferential model for industrial polypropylene melt index prediction with embedded priori knowledge and delay estimation, Ind. Eng. Chem. Res. 51 (2012) 8510–8525. [9] F. Ahmed, S. Nazir, Y.K. Yeo, A recursive PLS-based soft sensor for prediction of the melt index during grade change operations in HDPE plant, Korean J. Chem. Eng. 26 (2009) 14–20. [10] S.Y. Zhu, Y.H. Zhang, Q.W. Li, F.G. Ji, S.W. Guan, Mechanical and tribological properties of PEEK coatings with different melt indexes prepared by electrostatic powder spray technique, Chem. J. Chin. Univ. Chin. 35 (2014) 1075–1079. [11] A. Mogilicharla, K. Mitra, S. Majumdar, Modeling of propylene polymerization with long chain branching, Chem. Eng. J. 246 (2014) 175–183. [12] T.Y. Kim, Y.K. Yeo, Development of polyethylene melt index inferential model, Korean J. Chem. Eng. 27 (2010) 1669–1674. [13] Y. Liu, Z.L. Gao, Industrial melt index prediction with the ensemble anti-outlier justin-time Gaussian process regression modeling method, J. Appl. Polym. Sci. 132 (2015). [14] I.S. Han, C. Han, C.B. Chung, Melt index modeling with support vector machines, partial least squares, and artificial neural networks, J. Appl. Polym. Sci. 95 (2005) 967–974. [15] J. Zhang, Q.B. Jin, Y.M. Xu, Inferential estimation of polymer melt index using sequentially trained bootstrap aggregated neural networks, Chem. Eng. Technol. 29 (2006) 442–448. [16] J. Shi, X. Liu, Melt index prediction by weighted least squares support vector machines, J. Appl. Polym. Sci. 101 (2006) 285–289. [17] H. Jiang, Y. Xiao, J. Li, X. Liu, Prediction of the melt index based on the relevance vector machine with modified particle swarm optimization, Chem. Eng. Technol. 35 (2012) 819–826. [18] F. Ahmed, L.H. Kim, Y.K. Yeo, Statistical data modeling based on partial least squares: application to melt index predictions in high density polyethylene processes to achieve energy-saving operation, Korean J. Chem. Eng. 30 (2013) 11–19. [19] M. Zhang, X. Liu, A soft sensor based on adaptive fuzzy neural network and support vector regression for industrial melt index prediction, Chemom. Intell. Lab. Syst. 126 (2013) 83–90. [20] Z.S. Fei, K.L. Liu, J. Liang, B.P. Hou, H. Zheng, Data-based soft-sensing for melt index prediction, Proceedings of the 2014 9th IEEE Conference on Industrial Electronics and Applications (ICTEA) 2014, pp. 1251–1253. [21] Y. Gao, S. Shao, X. Xiao, Y. Ding, Y. Huang, Z. Huang, K.C. Chou, Using pseudo amino acid composition to predict protein subcellular location: approached with Lyapunov index, Bessel function, and Chebyshev filter, Amino Acids 28 (2005) 373–376. [22] J.Y. Yang, Z.L. Peng, Z.G. Yu, R.J. Zhang, V. Anh, D.S. Wang, Prediction of protein structural classes by recurrence quantification analysis based on chaos game representation, J. Theor. Biol. 257 (2009) 618–626. [23] J.B. Xia, S.L. Zhang, F. Shi, H.J. Xiong, X.H. Hu, X.H. Niu, Z. Li, Using the concept of pseudo amino acid composition to predict resistance gene against Xanthomonas oryzae pv. oryzae in rice: an approach from chaos games representation, J. Theor. Biol. 
284 (2011) 16–23. [24] X.L. Liu, J.L. Lu, X.H. Hu, Predicting thermophilic proteins with pseudo amino acid composition: approached from chaos game representation and principal component analysis, Protein Pept. Lett. 18 (2011) 1244–1250. [25] J.L. Lu, X.H. Hu, D.G. Hu, A new hybrid fractal algorithm for predicting thermophilic nucleotide sequences, J. Theor. Biol. 293 (2012) 74–81. [26] X.H. Niu, X.H. Hu, F. Shi, J.B. Xia, Predicting protein solubility by the general form of Chou's pseudo amino acid composition: approached from chaos game representation and fractal dimension, Protein Pept. Lett. 19 (2012) 940–948. [27] K.C. Chou, Some remarks on protein attribute prediction and pseudo amino acid composition, J. Theor. Biol. 273 (2011) 236–247. [28] W. Chen, P.M. Feng, H. Lin, K.C. Chou, iRSpot-PseDNC: identify recombination spots with pseudo dinucleotide composition, Nucleic Acids Res. 41 (2013). [29] H. Lin, E.Z. Deng, H. Ding, W. Chen, K.C. Chou, iPro54-PseKNC: a sequence-based predictor for identifying sigma-54 promoters in prokaryote with pseudo k-tuple nucleotide composition, Nucleic Acids Res. 42 (2014) 12961–12972. [30] W. Chen, P.M. Peng, E.Z. Deng, H. Lin, K.C. Chou, iTIS-PseTNC: a sequence-based predictor for identifying translation initiation site in human genes using pseudo trinucleotide composition, Anal. Biochem. 462 (2014) 76–83. [31] H. Ding, E.Z. Deng, L.F. Yuan, L. Liu, H. Lin, W. Chen, K.C. Chou, iCTX-type: a sequence-based predictor for identifying the types of conotoxins in targeting Ion channels, Biomed. Res. Int. 2014 (2014) (10 pp.). [32] S.H. Guo, E.Z. Deng, L.Q. Xu, H. Ding, H. Lin, W. Chen, K.C. Chou, iNuc-PseKNC: a sequence-based predictor for predicting nucleosome positioning in genomes with pseudo k-tuple nucleotide composition, Bioinformatics 30 (2014) 1522–1529. [33] Z. Liu, X. Xiao, W.R. Qiu, K.C. Chou, iDNA-methyl: identifying DNA methylation sites via pseudo trinucleotide composition, Anal. Biochem. 474 (2015) 69–77. [34] H.E. Hurst, Long-term storage capacity of reservoirs, Trans. Am. Soc. Civ. Eng. 116 (1951) 770–808. [35] A.M. Yaglom, Some classes of random fields in n-dimensional space, related to stationary random processes, Theory Probab. Appl. 2 (1957) 273–320. [36] W.-J. Xie, W.-X. Zhou, Horizontal visibility graphs transformed from fractional Brownian motions: topological properties versus the Hurst index, Physica A 390 (2011) 3592–3601.



[37] Z.-Q. Jiang, W.-J. Xie, W.-X. Zhou, Testing the weak-form efficiency of the WTI crude oil futures market, Physica A 405 (2014) 235–244. [38] D. Kugiumtzis, State space reconstruction parameters in the analysis of chaotic time series—the role of the time window length, Physica D 95 (1996) 13–28. [39] F. Takens, Dynamical systems and turbulence, Lect. Notes Math. 898 (1981) 366. [40] H. Zhao, Y.-Q. Shi, Detecting covert channels in computer networks based on chaos theory, IEEE Trans. Inf. Forensics Secur. 8 (2013) 273–282. [41] L. Zhang, F. Tian, S. Liu, L. Dang, X. Peng, X. Yin, Chaotic time series prediction of Enose sensor drift in embedded phase space, Sensors Actuators B Chem. 182 (2013) 71–79. [42] B. Bezruchko, A. Karavaev, V. Ponomarenko, M. Prokhorov, Reconstruction of timedelay systems from chaotic time series, Phys. Rev. E 64 (2001) 056216. [43] X. Han, E. Fridman, S.K. Spurgeon, Sampled-data sliding mode observer for robust fault reconstruction: a time-delay approach, J. Franklin Inst. 351 (2014) 2125–2142. [44] A.M. Albano, J. Muench, C. Schwartz, A.I. Mees, P.E. Rapp, Singular-value decomposition and the grassberger-procaccia algorithm, Phys. Rev. A 38 (1988) 3017–3026. [45] A.M. Fraser, H.L. Swinney, Independent coordinates for strange attractors from mutual information, Phys. Rev. A 33 (1986) 1134–1140. [46] W.G. Yao, C. Essex, P. Yu, M. Davison, Measure of predictability, Phys. Rev. E 69 (2004). [47] D.S. Broomhead, G.P. King, Extracting qualitative dynamics from experimental-data, Physica D 20 (1986) 217–236. [48] H.S. Kim, R. Eykholt, J.D. Salas, Nonlinear dynamics, delay times, and embedding windows, Physica D 127 (1999) 48–60. [49] L. Cao, Practical method for determining the minimum embedding dimension of a scalar time series, Physica D 110 (1997) 43–50. [50] M.B. Kennel, R. Brown, H.D. Abarbanel, Determining embedding dimension for phase-space reconstruction using a geometrical construction, Phys. Rev. A 45 (1992) 3403.

[51] L.G. de la Fraga, E. Tlelo-Cuautle, Optimizing the maximum Lyapunov exponent and phase space portraits in multi-scroll chaotic oscillators, Nonlinear Dyn. 76 (2014) 1503–1515. [52] H. Kantz, T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, 2004. [53] M.T. Rosenstein, J.J. Collins, C.J. De Luca, A practical method for calculating largest Lyapunov exponents from small data sets, Physica D 65 (1993) 117–134. [54] V.N. Vapnik, V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998. [55] L.Y. Wang, Z. Liu, C.L.P. Chen, Y. Zhang, S. Lee, X. Chen, Energy-efficient SVM learning control system for biped walking robots, IEEE Trans. Neural Netw. Learn. Syst. 24 (2013) 831–837. [56] J.A. Suykens, J. Vandewalle, Least squares support vector machine classifiers, Neural. Process. Lett. 9 (1999) 293–300. [57] K. Vaisakh, L.R. Srinivas, K. Meah, Genetic evolving ant direction particle swarm optimization algorithm for optimal power flow with non-smooth cost functions and statistical analysis, Appl. Soft Comput. 13 (2013) 4579–4593. [58] M.S. Li, X.Y. Huang, H.S. Liu, B.X. Liu, Y. Wu, A.H. Xiong, T.W. Dong, Prediction of gas solubility in polymers by back propagation artificial neural network based on selfadaptive particle swarm optimization algorithm and chaos theory, Fluid Phase Equilib. 356 (2013) 11–17. [59] J. Kennedy, Particle swarm optimization, Encyclopedia of Machine Learning, Springer 2010, pp. 760–766. [60] D. Murray_smith, Methods for the external validation of contiuous system simulation models: a review, Math. Comput. Model. Dyn. Syst. 4 (1998) 5–31. [61] J. Li, X. Liu, Melt index prediction by RBF neural network optimized with an MPSO– SA hybrid algorithm, Neurocomputing 74 (2011) 735–740. [62] K.C. Chou, Impacts of bioinformatics to medicinal chemistry, Med. Chem. 11 (2015) 218–234.