Modified kernel principal component analysis using double-weighted local outlier factor and its application to nonlinear process monitoring




ISA Transactions ∎ (∎∎∎∎) ∎∎∎–∎∎∎

Contents lists available at ScienceDirect

ISA Transactions journal homepage: www.elsevier.com/locate/isatrans

Research article

Xiaogang Deng*, Lei Wang
College of Information and Control Engineering, China University of Petroleum, Qingdao 266580, China

Article info

Abstract

Article history: Received 15 March 2017; Received in revised form 19 September 2017; Accepted 22 September 2017

The traditional kernel principal component analysis (KPCA) based nonlinear process monitoring method may not perform well because its Gaussian distribution assumption is often violated in real industrial processes. To overcome this deficiency, this paper proposes a modified KPCA method based on a double-weighted local outlier factor (DWLOF-KPCA). To avoid the assumption of a specific data distribution, the local outlier factor (LOF) is introduced to construct two LOF-based monitoring statistics, which substitute for the traditional T² and SPE statistics, respectively. To provide better online monitoring performance, a double-weighted LOF method is further designed, which assigns weights to each component to highlight the key components with significant fault information, and uses a moving window to weight the historical statistics to reduce drastic fluctuations in the monitoring results. Finally, simulations on a numerical example and the Tennessee Eastman (TE) benchmark process demonstrate the superiority of the proposed DWLOF-KPCA method. © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

Keywords: Nonlinear process monitoring; Kernel principal component analysis; Local outlier factor; Double weighting strategy

1. Introduction

Process monitoring and fault diagnosis technologies have gained increasing interest because of the rising demands for ensuring process safety and improving product quality. As supervisory control and data acquisition (SCADA) systems have been extensively applied in modern industrial processes, massive data are gathered in industrial databases. Therefore, data-driven process monitoring methods, also called multivariate statistical process monitoring (MSPM) methods, have become a fascinating topic in the process monitoring and fault diagnosis field [1–3]. Typical MSPM methods include principal component analysis (PCA), partial least squares, independent component analysis and Fisher discriminant analysis [4–7]. Among these methods, PCA is the most popular one; it characterizes the process state by extracting the intrinsic latent variables while preserving the main variance information. To deal with complicated process characteristics, many enhanced PCA methods have been proposed. For monitoring dynamic processes, a dynamic PCA method was proposed by Ku et al. [8], which constructs augmented variables for dynamic statistical modeling. To handle missing data, Liu et al. [9] developed a variational Bayesian PCA method and tested its performance on wastewater treatment

Corresponding author. E-mail addresses: [email protected], [email protected] (X. Deng).

plants. To address the large-scale process monitoring problem, Jiang and Yan [10] designed a multi-block PCA method applying mutual information for sub-block division. The basic PCA method is intrinsically a linear transformation, while real process data exhibit nonlinear characteristics. To handle this problem, some nonlinear PCA algorithms have been developed. Kramer [11] first presented a nonlinear PCA method utilizing auto-associative neural networks. Dong and McAvoy [12] proposed a nonlinear PCA approach using principal curves and neural networks. Considering that neural network based nonlinear PCA methods involve complicated nonlinear optimization, a much simpler nonlinear PCA using kernel functions, referred to as kernel PCA (KPCA), was put forward by Schölkopf [13]. In recent years, KPCA has become a state-of-the-art method in the nonlinear process monitoring field. Lee et al. [14] first constructed the KPCA-based nonlinear monitoring method, where two monitoring statistics, T² and SPE, are defined for fault detection. To determine the potential causes of faults, Cho et al. [15] discussed KPCA based fault identification strategies. Considering that different faults may need different optimal kernel parameters, Li and Yang [16] proposed an ensemble KPCA that applies a Bayesian inference strategy to combine kernel models with different kernel parameters. To extract data features better, Deng et al. [17] built a localized KPCA based fault detection method, which applies a modified optimization function involving both global and local data mining. For analyzing the multiscale data characteristics of nonlinear processes, Zhang and Ma [18] proposed a multiscale KPCA method combined with wavelet analysis.

http://dx.doi.org/10.1016/j.isatra.2017.09.015 0019-0578/& 2017 ISA. Published by Elsevier Ltd. All rights reserved.

Please cite this article as: Deng X, Wang L. Modified kernel principal component analysis using double-weighted local outlier factor and its application to nonlinear process monitoring. ISA Transactions (2017), http://dx.doi.org/10.1016/j.isatra.2017.09.015i


To decrease the computational complexity, a reduced KPCA (RKPCA) was presented by Jaffel et al. [19], which selects a reduced set of training samples to construct an approximate kernel model. Other KPCA extensions can be found in [20–24]. Although the KPCA based nonlinear process monitoring method has shown success in many cases, several problems still deserve further investigation. One important problem involves the KPCA data distribution assumption. KPCA assumes that the process latent variables, i.e., the kernel principal components, follow the Gaussian distribution. Based on this assumption, the confidence limits of the KPCA T² and SPE statistics are computed through the F distribution and the χ² distribution [14]. However, the real process data characteristics are complex and usually unknown. The process latent variables may not conform to a strict Gaussian distribution, but may follow a non-Gaussian distribution or a mixture of Gaussian and non-Gaussian distributions. Thus, the confidence limits of the T² and SPE statistics, computed under the Gaussian distribution assumption, are not reasonable for process monitoring and fault detection. To overcome the problem caused by the uncertainty of the data distribution, this paper proposes a modified KPCA method that introduces the local outlier factor (LOF) to construct new monitoring statistics. LOF is a well-known data mining technique [25,26], which can locate the anomalous points of a given dataset without any specific data distribution assumption. LOF has been applied to many different data mining tasks including credit card fraud detection, industrial data analysis, and computer security analysis [27–29]. In particular, some researchers have discussed the application of LOF to process monitoring and fault detection. Lee et al. [30] integrated ICA and LOF to build an ICA-LOF model for real industrial process monitoring. Ma et al. [31] proposed a neighborhood standardized LOF method for fault detection.
Song et al. [32] combined a LOF-based clustering strategy with PCA for multimode process monitoring. It should be noted that all these LOF-related studies focus on linear process monitoring and do not consider the nonlinear monitoring problem. To the best of our knowledge, no work has integrated LOF with KPCA in the nonlinear process monitoring field. Motivated by the above analysis, this paper proposes an enhanced KPCA method using a double-weighted LOF strategy, referred to as DWLOF-KPCA. The contributions of the proposed method include two aspects. Firstly, with the incorporation of the LOF approach, two novel LOF-based monitoring statistics are established to monitor the variation of the principal component subspace (PCS) and the residual component subspace (RCS). These two LOF-based monitoring statistics, which do not need any prior data distribution assumption, replace the traditional T² and SPE statistics. Secondly, a double-weighted strategy is applied to enhance the LOF method for better monitoring performance. On the one hand, the monitored components are weighted to emphasize the importance of key components containing significant fault information. On the other hand, considering the drastic fluctuations of monitoring results, the monitoring statistics in a moving window are weighted to indicate the current process status more clearly. The remainder of this paper is organized as follows. In Section 2, a brief review of the KPCA method is provided, including the analysis of the two monitoring statistics T² and SPE. The modified KPCA scheme is presented in Section 3, where LOF-KPCA is first developed and then DWLOF-KPCA is formulated. Section 4 describes the process monitoring procedure based on the proposed method. In Section 5, the proposed method is tested on a numerical example and the Tennessee Eastman (TE) process. Finally, Section 6 draws the conclusions.

2. KPCA based process monitoring method

KPCA is an effective nonlinear process monitoring method [13,14,17], which first projects the original input data onto a high-dimensional feature space and then performs PCA modeling in that space. Given the training dataset X = [x(1), x(2), …, x(n)]^T ∈ ℝ^{n×m}, where m is the number of measured variables and n is the number of data samples, a nonlinear mapping Φ(·): ℝ^m → F projects the original data onto a feature space F, so that the data matrix X is mapped to Φ(X). Suppose that Φ(X) has been mean centered; then the linear PCA decomposition is performed as

Φ(X) = Σ_{j=1}^{p} t_j v_j^T + E,  (1)

where t_j = Φ(X)v_j ∈ ℝ^n is the score vector, also called the principal component vector, v_j ∈ F is the loading vector, E denotes the residual matrix, and p represents the number of kernel principal components (KPCs) retained in the KPCA model. The loading vector in Eq. (1) is obtained from the eigenvalue decomposition problem

λ_j v_j = C^F v_j,  (2)

where λ_j denotes the j-th eigenvalue corresponding to the loading vector v_j, and C^F is the covariance matrix of Φ(X), computed as

C^F = (1/(n−1)) Φ^T(X) Φ(X).  (3)

It is known that the loading vector vj lies in the space spanned by the training data Φ(X ), expressed by [13]

v_j = Φ^T(X) α_j,  (4)

where α_j ∈ ℝ^n is the coefficient vector. Combining Eqs. (2), (3), and (4) results in the following expression:

λ_j Φ^T(X) α_j = (1/(n−1)) Φ^T(X) Φ(X) Φ^T(X) α_j,  (5)

which, after left-multiplying both sides by Φ(X), becomes

λ_j Φ(X) Φ^T(X) α_j = (1/(n−1)) Φ(X) Φ^T(X) Φ(X) Φ^T(X) α_j.  (6)

To avoid the difficulty of explicitly defining the nonlinear mapping, the well-known kernel trick is employed [13]. A kernel matrix is defined as K = Φ(X) Φ^T(X) ∈ ℝ^{n×n}, whose (i, j)-th element Φ^T(x(i)) Φ(x(j)) is calculated by the kernel function as

Φ^T(x(i)) Φ(x(j)) = ker(x(i), x(j)),  (7)

where ker (·, ·) represents the kernel function. The commonly used kernel functions include the Gaussian kernel function and the polynomial kernel function [13,14,20]. With the use of the kernel trick, the eigenvalue problem in Eq. (6) can be expressed by

λ_j K α_j = (1/(n−1)) K K α_j,  (8)

whose solutions are obtained by solving

λ_j α_j = (1/(n−1)) K α_j.  (9)
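In practice, Eqs. (7)–(9) amount to an eigendecomposition of the (centered) kernel matrix. The following is a minimal NumPy sketch assuming the Gaussian kernel adopted later in the paper; the function and variable names are illustrative, not from the paper, and centering of the test kernel vector is omitted for brevity:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Pairwise Gaussian kernel ker(x, y) = exp(-||x - y||^2 / sigma)."""
    sq_dist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / sigma)

def kpca_train(X, sigma):
    """Solve Eq. (9): lambda_j * alpha_j = (1/(n-1)) * K * alpha_j."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    # Center K, which is equivalent to mean-centering Phi(X).
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    lam, alpha = np.linalg.eigh(Kc / (n - 1))
    order = np.argsort(lam)[::-1]            # descending eigenvalues
    lam, alpha = lam[order], alpha[:, order]
    # Scale alpha_j so the loading vector v_j = Phi(X)^T alpha_j has unit norm.
    pos = lam > 1e-12
    alpha[:, pos] /= np.sqrt((n - 1) * lam[pos])
    return lam, alpha

def kpca_components(X_train, x_new, alpha, sigma):
    """Extract the components t_j(h) = k_x^T alpha_j for a new sample x(h)."""
    k_x = gaussian_kernel(x_new[None, :], X_train, sigma).ravel()
    return k_x @ alpha
```

The leading p columns of `alpha` give the principal components and the remaining columns the residual components used by the statistics below.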

Solving Eq. (9) yields the eigenvectors α_1, α_2, …, α_n corresponding to the nonzero eigenvalues λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n. For a test vector x(h) at the h-th sampling instant, its j-th component t_j(h) is extracted by projecting Φ(x(h)) onto the j-th loading vector v_j as


t_j(h) = Φ^T(x(h)) v_j = k_x^T α_j,  (10)

where k_x = [ker(x(h), x(1)), ker(x(h), x(2)), …, ker(x(h), x(n))]^T ∈ ℝ^n is the kernel vector. The components from Eq. (10) can be divided into two parts: the principal component vector [t_1(h), t_2(h), …, t_p(h)] and the residual component vector [t_{p+1}(h), t_{p+2}(h), …, t_n(h)]. KPCA-based process monitoring usually employs two monitoring statistics, known as the T² and SPE statistics [14]. The T² statistic measures the variation of the principal components, formulated as

T²(h) = [t_1(h), t_2(h), …, t_p(h)] Λ^{−1} [t_1(h), t_2(h), …, t_p(h)]^T,  (11)

where Λ is the p × p diagonal matrix whose diagonal elements are the eigenvalues λ_j (1 ≤ j ≤ p). The SPE statistic monitors the changes of the residual components, defined by

SPE(h) = Σ_{j=1}^{n} t_j²(h) − Σ_{j=1}^{p} t_j²(h).  (12)

As KPCA assumes that the extracted components t_j(h) (1 ≤ j ≤ n) obey the Gaussian distribution, the confidence limit of the T² statistic is calculated according to the F distribution as [14]

T²_lim = (p(n−1)/(n−p)) F_{p, n−p, α},  (13)

where p and α denote the number of principal components and the confidence level, respectively, while the confidence limit for the SPE statistic is approximated through a weighted χ² distribution as [14]

SPE_lim = g χ²_l,  (14)

where g = s/(2a) and l = 2a²/s, with a and s the estimated mean and variance of the SPE statistic, respectively. Under normal operation, the T² and SPE statistics should stay below their confidence limits. If either monitoring statistic exceeds its confidence limit, the occurrence of a fault is indicated. More detailed descriptions of these two statistics can be found in [14,15].

3. Modified KPCA method using local outlier factor

As discussed in Section 1, the traditional KPCA based monitoring scheme relies on the Gaussian distribution assumption. When this assumption does not hold, the monitoring statistics T² and SPE will not give the best monitoring results. To address this problem, we build an improved KPCA method by introducing LOF-based monitoring statistics, which remove the limitation of a specific distribution. Further considering the utilization of fault information, a double-weighted LOF (DWLOF) method is designed to build DWLOF statistics for better online process monitoring.

3.1. LOF-KPCA method

In this part, the local outlier factor (LOF) is introduced to construct the monitoring statistics, and the traditional KPCA method is modified into a LOF-based KPCA method (LOF-KPCA). A brief introduction to LOF is first presented. For a given training matrix Y and a test sample y, the LOF computing procedure is as follows [25,26]:

(1) For the test sample y, find its K nearest neighbor samples {y^1, …, y^k, …, y^K} (1 ≤ k ≤ K) in the training dataset Y and compute the sample y's K-distance, defined by

KD(y) = ∥y − y^K∥,  (15)

which is the Euclidean distance between y and its K-th nearest neighbor.
(2) For each local neighbor sample y^k (1 ≤ k ≤ K), obtain its K-distance KD(y^k).
(3) Compute the reachability distance of the sample y, expressed as

RD(y, y^k) = max{∥y − y^k∥, KD(y^k)}.  (16)

(4) Calculate the local reachability density (LRD) of the sample y as

lrd(y) = K / Σ_{k=1}^{K} RD(y, y^k).  (17)

(5) For each local neighbor sample y^k (1 ≤ k ≤ K), compute its LRD value lrd(y^k).
(6) Obtain the local outlier factor of the sample y, formulated by

LOF(y) = (1/K) Σ_{k=1}^{K} lrd(y^k)/lrd(y).  (18)

When the sample y is not an outlier, the local reachability density lrd(y) is close to lrd(y^k), which means the local outlier factor LOF(y) is approximately equal to 1. Otherwise, if the sample y is an outlier, LOF(y) is larger than 1 because the local reachability density of y is smaller than that of its neighbors. The LOF(y) value thus indicates the relationship between the test sample y and normal operation, so LOF can identify whether a sample is an outlier with respect to the given training dataset without any data distribution assumption. However, it should be noted that LOF(y) is constructed from the local reachability density and does not consider the data covariance, so the local outlier factor cannot indicate changes of the data covariance. Next, we describe how to use LOF to construct the monitoring statistics. For the training dataset X, the KPCA modeling yields the principal component matrix T and the residual component matrix T̃. Given the test vector x(h), its principal component vector and residual component vector are denoted as t(h) = [t_1(h), t_2(h), …, t_p(h)]^T and t̃(h) = [t_{p+1}(h), t_{p+2}(h), …, t_n(h)]^T, respectively. For the matrix T and the vector t(h), a LOF-based PCS monitoring statistic is defined as

LOF_PCS(h) = LOF(t(h)) = (1/K) Σ_{k=1}^{K} lrd(t^k(h))/lrd(t(h)),  (19)

where t^k(h) is the k-th nearest neighbor of t(h). Similarly, for the training data matrix T̃ and the test vector t̃(h), a LOF-based RCS monitoring statistic is defined by

LOF_RCS(h) = LOF(t̃(h)) = (1/K) Σ_{k=1}^{K} lrd(t̃^k(h))/lrd(t̃(h)),  (20)

where t̃^k(h) is the k-th nearest neighbor of t̃(h). Considering that the principal components in the vector t(h) have significantly different variances, they should be scaled before computing the LOF-based PCS statistic. The scaling procedure is formulated as

t̂(h) = Λ^{−1/2} t(h).  (21)

As the residual component vector t̃(h) corresponds to very small eigenvalues, it is not scaled. The confidence limits of LOF_PCS and LOF_RCS can be obtained using the kernel density


estimation (KDE) technique [33,34], which is data-driven and does not require any prior distribution assumption.
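As a concrete illustration, the six LOF steps of Eqs. (15)–(18) can be sketched as follows (a deliberately simple, brute-force NumPy version; names are ours and no neighbor caching or other optimization is attempted):

```python
import numpy as np

def k_distance(Y, y, K):
    """Eq. (15): K-distance of y, plus the indices of its K nearest neighbors in Y."""
    dist = np.linalg.norm(Y - y, axis=1)
    nn = np.argsort(dist)[:K]
    return dist[nn[-1]], nn

def lrd(Y, y, K):
    """Eq. (17): local reachability density of y with respect to Y."""
    _, nn = k_distance(Y, y, K)
    reach = 0.0
    for k in nn:
        # K-distance of the neighbor itself (excluded from its own neighbor search)
        kd_k, _ = k_distance(np.delete(Y, k, axis=0), Y[k], K)
        # Eq. (16): reachability distance RD(y, y_k)
        reach += max(np.linalg.norm(y - Y[k]), kd_k)
    return K / reach

def lof(Y, y, K):
    """Eq. (18): local outlier factor of the test sample y."""
    _, nn = k_distance(Y, y, K)
    neighbor_lrds = [lrd(np.delete(Y, k, axis=0), Y[k], K) for k in nn]
    return np.mean(neighbor_lrds) / lrd(Y, y, K)
```

A sample deep inside the training cloud gives LOF ≈ 1, while an isolated sample gives a value clearly above 1; the statistics of Eqs. (19)–(20) apply this computation to the (scaled) principal and residual component vectors.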

3.2. DWLOF-KPCA method

As an enhanced version of KPCA, the LOF-KPCA method can handle the nonlinear problem and complex data distributions simultaneously. However, the LOF-KPCA based monitoring scheme still has some flaws. Firstly, the LOF statistics treat all components equally. In fact, when a fault occurs, only some specific components show significant fault information while the other components may contain little, so in the LOF statistics the useful information may be submerged by the useless components. Secondly, in real industrial processes, some variables fluctuate so that the monitoring statistic runs around the confidence limit without exceeding it clearly. Both flaws involve the mining of fault information: the first concerns the fault information among the current components, while the second involves the historical fault information. Aiming at these two flaws, a two-level weighting strategy is designed to improve the LOF-KPCA method; the new method is called double-weighted LOF based KPCA (DWLOF-KPCA).

We first discuss the first level of the weighting strategy. To highlight the faulty components, the first weighting is designed to distinguish the roles of different components. For a component t_i(h), its boundary can be obtained as

t_i^max = (|t_i|^max1 + |t_i|^max2)/2, (1 ≤ i ≤ n),  (22)

where |t_i|^max1 and |t_i|^max2 are the two largest absolute values of the i-th component, obtained from the training data component matrix. When the component absolute value |t_i(h)| exceeds the boundary t_i^max, it can be regarded as an abnormal component and should be given a large weight; otherwise, the weight should be relatively small. For each component, a real-time weighting coefficient w_i(h) is designed as

w_i(h) = γ (t_i(h)/t_i^max)²,  if |t_i(h)| ≥ t_i^max and |t_i(h−1)| ≥ t_i^max;
w_i(h) = 1,  otherwise,  (23)

where γ ≥ 1 is a tuning parameter, set to 2 empirically in this paper. Based on Eq. (23), the weighted LOF statistics are defined as

WLOF_PCS(h) = LOF(W(h)t(h)) = (1/K) Σ_{k=1}^{K} lrd(W(h)t^k(h))/lrd(W(h)t(h)),  (24)

WLOF_RCS(h) = LOF(W̃(h)t̃(h)) = (1/K) Σ_{k=1}^{K} lrd(W̃(h)t̃^k(h))/lrd(W̃(h)t̃(h)),  (25)

where W(h) = diag(w_1(h), w_2(h), …, w_p(h)) and W̃(h) = diag(w_{p+1}(h), w_{p+2}(h), …, w_n(h)). With the above weighting strategy, the modified LOF-KPCA method is called WLOF-KPCA.

Furthermore, the second level of the weighting strategy is formulated. In real industrial processes, some faults are fluctuating, so that the monitoring statistics run around the confidence limits without exceeding them clearly. For more obvious monitoring results, a weighted moving window is employed to take the recent process history into account. The double-weighted LOF statistics are developed as

DWLOF_PCS(h) = [WLOF_PCS(h − 2d + 1), WLOF_PCS(h − 2d + 2), …, WLOF_PCS(h)] W_PCS,  (26)

DWLOF_RCS(h) = [WLOF_RCS(h − 2d + 1), WLOF_RCS(h − 2d + 2), …, WLOF_RCS(h)] W_RCS,  (27)

where 2d is the width of the moving window and W_PCS, W_RCS are the second-level weighting vectors, determined according to the current process status indices, defined as

S_PCS(h) = 1 if WLOF_PCS(h) > LOF_PCS,lim, and 0 otherwise;  (28)

S_RCS(h) = 1 if WLOF_RCS(h) > LOF_RCS,lim, and 0 otherwise,  (29)

where LOF_PCS,lim and LOF_RCS,lim are the confidence limits obtained in the LOF-KPCA modeling. According to the status indices, the weighting vectors are expressed by

W_PCS = [w_1, w_2, …, w_2d]^T if Σ_{i=1}^{2d} S_PCS(h + 1 − i) > d, and [0, 0, …, 0, 1]^T otherwise;  (30)

W_RCS = [w_1, w_2, …, w_2d]^T if Σ_{i=1}^{2d} S_RCS(h + 1 − i) > d, and [0, 0, …, 0, 1]^T otherwise,  (31)

where w_i is the weighting parameter defined as

w_i = 1/2^{2d−i}.  (32)

By Eqs. (30) and (31), the non-zero weights w_1, w_2, …, w_2d are applied to the historical statistics if more than half of the sample points in the window are considered abnormal; otherwise only the current statistic is used and the weights of the historical statistics are set to 0. With the two levels of weighting, the modified LOF-KPCA method is called DWLOF-KPCA. The first level emphasizes the key components carrying significant fault information, while the second level reduces the influence of drastic fluctuations of the monitoring statistics by utilizing the historical status information in a moving window.
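The two weighting levels can be sketched as follows (a simplified illustration under the stated choices γ = 2 and window width 2d; the function names are ours, and the LOF evaluation itself is assumed available from Section 3.1):

```python
import numpy as np

def component_weights(t_now, t_prev, t_max, gamma=2.0):
    """First level, Eq. (23): w_i = gamma*(t_i/t_i_max)^2 when |t_i|
    exceeded its boundary t_i_max at both the current and the previous
    instant; otherwise w_i = 1."""
    t_now, t_prev, t_max = (np.asarray(a, float) for a in (t_now, t_prev, t_max))
    hit = (np.abs(t_now) >= t_max) & (np.abs(t_prev) >= t_max)
    w = np.ones_like(t_now)
    w[hit] = gamma * (t_now[hit] / t_max[hit]) ** 2
    return w

def window_weighted_statistic(wlof_history, status_history, d):
    """Second level, Eqs. (26)-(32): combine the last 2d WLOF values.
    If more than d of the last 2d status indices flag a fault, use the
    exponentially decaying weights w_i = 1/2^(2d-i); otherwise keep only
    the current statistic."""
    window = np.asarray(wlof_history[-2 * d:], float)
    if sum(status_history[-2 * d:]) > d:
        w = np.array([1.0 / 2 ** (2 * d - i) for i in range(1, 2 * d + 1)])
    else:
        w = np.zeros(2 * d)
        w[-1] = 1.0
    return float(window @ w)
```

Note that the current sample always carries the largest weight (w_2d = 1), so in the abnormal branch the windowed statistic accumulates recent evidence rather than averaging it away.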

4. DWLOF-KPCA based process monitoring procedure

With the proposed method, the process monitoring framework includes two stages: an offline modeling stage and an online monitoring stage. The flowchart is shown in Fig. 1. During the offline modeling stage, the normal samples are collected and the KPCA decomposition is performed. Then the LOF-KPCA statistics are built and the corresponding confidence limits are computed by the KDE technique. In the online monitoring stage, the new data sample is acquired and scaled by the mean and standard deviation of the normal dataset. Then the new data sample is projected onto the KPCA model and the DWLOF-KPCA statistics are calculated to determine whether a fault has occurred.
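This two-stage procedure can be summarized in a short skeleton (an illustration only: `compute_statistics` stands for the KPCA projection plus DWLOF computation described above, and the confidence limit is taken here as an empirical percentile as a simple stand-in for the KDE estimate used in the paper):

```python
import numpy as np

def offline_stage(X_normal, train_statistics, confidence=99.0):
    """Offline modeling: scaling parameters from the normal data and a
    confidence limit for each monitoring statistic."""
    mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
    limits = {name: np.percentile(values, confidence)
              for name, values in train_statistics.items()}
    return mean, std, limits

def online_stage(x_new, mean, std, limits, compute_statistics):
    """Online monitoring: scale the new sample, compute the monitoring
    statistics, and flag a fault when any statistic exceeds its limit."""
    x_scaled = (x_new - mean) / std
    stats = compute_statistics(x_scaled)   # e.g. {"DWLOF_PCS": ..., "DWLOF_RCS": ...}
    return {name: bool(value > limits[name]) for name, value in stats.items()}
```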


Fig. 1. The DWLOF-KPCA based process monitoring flowchart.

5. Case study

In this section, two case studies, a numerical example and the benchmark Tennessee Eastman process, are used to verify the proposed method. Three methods, KPCA, LOF-KPCA, and DWLOF-KPCA, are applied to process monitoring. For all the monitoring statistics, 99% confidence limits are used to detect faults. In the following monitoring charts, the confidence limit is plotted as a dashed line and the monitoring statistic as a solid curve. To compare the monitoring performance of the different methods, two indices, the false alarm rate (FAR) and the missing detection rate (MDR) [35,36], are used in this work. FAR is the percentage of normal samples identified as fault samples over all normal samples, while MDR is the percentage of missed fault samples over all fault samples. A small FAR indicates good performance on normal samples, while a small MDR indicates good fault detection performance.

5.1. A numerical example

A six-variable numerical system, a modified version of the example developed by Dong and McAvoy [12], is simulated according to the mathematical model

x_1 = r_1 + e_1,
x_2 = r_2 + e_2,
x_3 = 2r_1 + 3r_2 + e_3,
x_4 = 5r_1 − 2r_2 + e_4,
x_5 = r_1² − 3r_2 + e_5,
x_6 = −r_1³ + 3r_2² + e_6,  (33)

where x_1 to x_6 are the six monitored variables, r_1 and r_2 denote two independent sources following the uniform distribution U(0, 2), and e_1 to e_6 are zero-mean independent Gaussian noises with standard deviation 0.01. Based on Eq. (33), a normal operating dataset with 400 samples is simulated as the training dataset for offline modeling. To test the different fault detection methods,


another 400 samples are generated as the faulty testing dataset, where a sine-type fluctuation with amplitude 8 is introduced to x_6 from the 201st sample. When the methods KPCA, LOF-KPCA and DWLOF-KPCA are applied, the feature space dimension n is chosen so that the corresponding cumulative eigenvalue sum exceeds 99.99% of the sum of all eigenvalues; in this case study, n is set to 18. To select the retained KPC number p, two methods are considered: the cumulative percentage of variance (CPV) method and the average eigenvalue method. The p value used is the average of the results obtained by these two methods; in this case study, p is chosen as 4. The Gaussian kernel function ker(x, y) = exp(−∥x − y∥²/σ) is adopted in this work and the kernel width parameter σ is set to 5m, where m is the number of process variables. The neighborhood size K is set to 15 for the LOF calculation, while the moving window parameter d is set to 5 when computing the DWLOF-KPCA monitoring statistics. The normal training data with 400 samples are used to build the KPCA model. Two extracted kernel components, t_1 and t_6, are illustrated in Fig. 2. To test whether they are Gaussian distributed, the corresponding normal probability plots are given in Fig. 3. It is obvious that these two kernel components do not strictly follow the Gaussian distribution. Therefore, if we apply the traditional T² and SPE statistics, which are based on the Gaussian assumption, the monitoring results will deteriorate. The monitoring results of the three methods KPCA, LOF-KPCA and DWLOF-KPCA on the testing fault dataset are presented in Fig. 4. From Fig. 4(a), it is evident that the KPCA T² statistic can hardly detect the fault and the KPCA SPE statistic is also unsatisfactory; their MDRs are 0.98 and 0.82, respectively. By contrast, the two LOF based methods perform significantly better.
As for the monitoring results in the PCS, the two statistics show identical monitoring performance with an MDR of 0.815, which is smaller than the MDR of the KPCA T² statistic. According to Fig. 4(b)–(c), the statistics in the RCS provide better monitoring performance than the KPCA SPE statistic. In detail, the LOF_RCS statistic alarms the fault at the 252nd sample but fluctuates around the confidence limit; accordingly, its MDR is reduced to 0.395. With the double weighting strategy, the DWLOF_RCS statistic detects the fault at the 205th sample and further improves the MDR to 0.010. The detailed FARs and MDRs of the three methods are summarized in Table 1. According to Table 1, the FARs of the three methods are all no larger than 0.01, consistent with the 99% confidence limit. Combining Table 1 and Fig. 4, it is clear that the proposed DWLOF-KPCA method provides the best fault detection performance.

5.2. The Tennessee Eastman process

The Tennessee Eastman (TE) process, proposed by Downs and Vogel [37], is based on a real industrial chemical process. As a benchmark case, it has been widely used to evaluate different control and monitoring strategies [38–40]. The TE process consists of five operation units: a reactor, a vapor-liquid separator, a product condenser, a recycle compressor, and a product


Fig. 2. Two illustrated kernel components t1 (left) and t6 (right).



Fig. 3. Normal probability plots of kernel components t1 (left) and t6 (right).

stripper. The whole process flowchart is shown in Fig. 5. In this paper, a total of 33 variables, including 22 continuous variables and 11 manipulated variables, are collected for process status analysis. Normal operation data including 960 samples are collected as the training dataset, and another 500 normal samples constitute the normal testing dataset. To test the fault detection and process monitoring algorithms, 21 programmed faults, labeled Fault 1 to Fault 21, are simulated to provide the fault testing data. Each fault dataset includes 960 samples, where a fault is introduced at the 161st sample. All datasets can be downloaded from http://web.mit.edu/braatzgroup/links.html. More detailed descriptions of the TE process can be found in the related literature [37,38]. We apply the three methods KPCA, LOF-KPCA and DWLOF-KPCA for process monitoring. Firstly, the statistical models of these

Table 1
FARs and MDRs of KPCA, LOF-KPCA and DWLOF-KPCA methods for the numerical system monitoring.

Method      KPCA            LOF-KPCA               DWLOF-KPCA
Statistic   T²      SPE     LOF_PCS    LOF_RCS     DWLOF_PCS    DWLOF_RCS
FAR         0       0.010   0.005      0.010       0.005        0.010
MDR         0.980   0.820   0.815      0.395       0.815        0.010

methods are developed based on the normal training dataset. In the kernel modeling procedure, the kernel width parameter is set to 500m, where m is the number of monitored variables. The parameters n, K, p and d are determined by the same rules as in

Fig. 4. Monitoring results for the numerical system fault dataset.



Fig. 5. The TE process flowchart.

Fig. 6. Monitoring results for the normal testing data of the TE process.

Table 2. FARs of the KPCA, LOF-KPCA and DWLOF-KPCA methods for monitoring the TE process normal data.

Methods      KPCA             LOF-KPCA            DWLOF-KPCA
Statistics   T²      SPE      LOFPCS    LOFRCS    DWLOFPCS   DWLOFRCS
FAR          0.002   0.028    0.010     0.008     0.010      0.008

the first case study. Then the developed statistical models are applied to monitor the normal testing dataset, and the monitoring results are presented in Fig. 6. According to this figure, most of the monitoring statistics stay below the corresponding confidence limits. The detailed FAR results are given in Table 2, where the FAR of the KPCA SPE statistic is 0.028, a little higher than 1%, while all other FARs are no larger than 1%. This indicates that the LOF-based modified KPCA methods provide better monitoring performance on the normal samples.
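The FAR and MDR figures reported in the tables can be computed from a monitoring statistic and its confidence limit as sketched below. This is a generic evaluation helper, not the authors' code; for the TE fault runs described above, the fault is introduced at the 161st sample, so `fault_start` would be index 160.

```python
import numpy as np

def far_mdr(stat, limit, fault_start):
    """False alarm rate on pre-fault samples and missed detection rate
    on post-fault samples.

    stat:        1-D array of monitoring statistic values for one test run
    limit:       scalar confidence limit for the statistic
    fault_start: 0-based index of the first faulty sample
    """
    alarms = stat > limit
    far = np.mean(alarms[:fault_start])   # alarms raised while still normal
    mdr = np.mean(~alarms[fault_start:])  # faulty samples below the limit
    return far, mdr
```

For a normal-only run such as the 500-sample testing dataset, only the FAR part applies (set `fault_start` to the run length).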



Fig. 7. Monitoring results for the TE process fault 5.

Fig. 8. Monitoring results for the TE process fault 10.


Further, three fault cases, Faults 5, 10 and 19, are used to test the proposed method. Fault 5 is illustrated first; it is caused by a step change of the condenser cooling water inlet temperature. The monitoring results of KPCA, LOF-KPCA, and DWLOF-KPCA are shown in Fig. 7. According to Fig. 7(a), the fault is detected at the 161st sample by the KPCA T² and SPE statistics. However, both statistics fall back below the confidence limits after the 350th sampling point, which may give the mistaken indication that the fault has disappeared. By contrast, although the LOFPCS statistic in Fig. 7(b) obtains no clear improvement, the LOFRCS statistic achieves better performance by indicating that the fault is still present. In Fig. 7(c), the DWLOFRCS monitoring statistic also performs well. In summary, the LOF-based and DWLOF-based modified KPCA methods outperform the traditional KPCA method in detecting Fault 5.

Fault 10 is caused by a random variation of the C feed temperature (stream 4), and the monitoring results of the three methods are shown in Fig. 8. According to Fig. 8(a), the fault is detected at the 192nd sample by the KPCA SPE statistic. However, a large number of fault samples are still regarded as normal operating samples, which means poor monitoring performance: the MDRs of the KPCA T² and SPE statistics are 0.666 and 0.371, respectively. By contrast, the LOF-KPCA monitoring chart in Fig. 8(b) detects the fault at the 185th sample by the LOFRCS statistic, and the MDRs are reduced to 0.619 and 0.169 for LOFPCS and LOFRCS, respectively. With the incorporation of the weighting strategies, DWLOF-KPCA obtains better monitoring performance in Fig. 8(c), with MDRs of 0.504 and 0.092 for DWLOFPCS and DWLOFRCS, respectively. Therefore, DWLOF-KPCA clearly improves the fault detection performance in the case of Fault 10.

The third tested fault is Fault 19, and the corresponding monitoring results are given in Fig. 9. As shown in Fig. 9(a), the KPCA T² statistic misses most of the fault samples with a high MDR of 0.867, while the SPE statistic performs a little better with an MDR of 0.435. By introducing the LOF-based statistics, the LOF-KPCA monitoring chart in Fig. 9(b) reduces the number of missed fault samples, yielding MDRs of 0.805 and 0.217 for the LOFPCS and LOFRCS statistics, respectively. Among the three methods, DWLOF-KPCA gives the best monitoring performance by combining LOF with the double weighting strategy: its DWLOFPCS statistic in Fig. 9(c) has an MDR of 0.792, while the DWLOFRCS statistic detects the fault at the 165th sample with an MDR of only 0.002. Because the historical status information is considered in DWLOF-KPCA, the drastic fluctuations of the monitoring statistics are clearly reduced. The monitoring results on Fault 19 again demonstrate the validity of the proposed method.

For a comprehensive comparison, the monitoring results of KPCA, LOF-KPCA, WLOF-KPCA, and DWLOF-KPCA are listed in Table 3. According to Table 3, traditional KPCA cannot give satisfactory monitoring performance for Faults 5, 10, 16, 19, 20, and 21. By integrating the LOF technique, the proposed LOF-KPCA method obtains significant improvement on these six faults. With a single weighting strategy, WLOF-KPCA performs better than LOF-KPCA. With both weighting strategies, the fault information of the key components is emphasized and the historical status information is utilized to better estimate the current process status, so DWLOF-KPCA further improves the monitoring results on these six faults. According to the average MDRs in Table 3, KPCA has MDRs of 0.366 and 0.290 for T² and SPE, respectively, while DWLOF-KPCA achieves MDRs of 0.318 and 0.192 for DWLOFPCS and DWLOFRCS, respectively. Therefore, the proposed method decreases the MDRs markedly and provides better monitoring performance than the traditional KPCA method.
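The two weighting strategies compared above can be sketched roughly as follows: component weights emphasize the directions that deviate most from normal operation, and a moving window smooths the online statistic over recent samples. The weight formulas here are illustrative placeholders, since the paper's exact weight definitions are not reproduced in this excerpt.

```python
import numpy as np
from collections import deque

def component_weights(scores, train_std):
    """First weighting (sketch): emphasize components whose scores deviate
    most from their normal-operation spread. The deviation-based weight
    used here is an illustrative assumption, not the paper's formula."""
    dev = np.abs(scores) / (train_std + 1e-12)
    return dev / dev.sum()

class MovingWindowStatistic:
    """Second weighting (sketch): combine the current statistic with the
    last few historical values to suppress drastic fluctuations."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def update(self, value):
        self.buf.append(value)
        # A plain moving average; the paper weights historical statistics
        # inside the window, but the exact weights are not reproduced here.
        return float(np.mean(self.buf))
```

Smoothing the statistic this way is what reduces the oscillation around the confidence limit seen in the unweighted LOF charts.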

Fig. 9. Monitoring results for the TE process fault 19.


Table 3. Missed detection rates (MDRs) for the TE process obtained by the KPCA, LOF-KPCA, WLOF-KPCA and DWLOF-KPCA methods.

Fault No.   KPCA            LOF-KPCA            WLOF-KPCA            DWLOF-KPCA
            T²      Q       LOFPCS   LOFRCS     WLOFPCS   WLOFRCS    DWLOFPCS   DWLOFRCS
1           0.001   0.001   0.001    0.002      0.001     0.002      0.001      0.002
2           0.016   0.020   0.014    0.020      0.014     0.016      0.014      0.016
3           0.984   0.972   0.949    0.971      0.939     0.965      0.937      0.965
4           0.015   0.000   0.005    0.000      0.004     0.000      0.000      0.000
5           0.750   0.493   0.724    0.000      0.711     0.000      0.701      0.000
6           0.006   0.000   0.006    0.000      0.006     0.000      0.006      0.000
7           0.000   0.069   0.000    0.000      0.000     0.000      0.000      0.000
8           0.020   0.022   0.022    0.019      0.022     0.019      0.022      0.020
9           0.981   0.962   0.962    0.976      0.956     0.971      0.946      0.966
10          0.666   0.371   0.619    0.169      0.560     0.135      0.504      0.092
11          0.304   0.269   0.271    0.247      0.244     0.222      0.122      0.122
12          0.011   0.022   0.011    0.005      0.011     0.002      0.006      0.002
13          0.050   0.042   0.050    0.047      0.049     0.045      0.049      0.045
14          0.000   0.001   0.000    0.001      0.000     0.001      0.000      0.001
15          0.979   0.964   0.954    0.955      0.944     0.949      0.944      0.946
16          0.835   0.376   0.769    0.136      0.721     0.094      0.681      0.040
17          0.061   0.032   0.056    0.036      0.037     0.034      0.024      0.025
18          0.101   0.096   0.099    0.097      0.099     0.096      0.099      0.096
19          0.867   0.435   0.805    0.217      0.796     0.151      0.792      0.002
20          0.547   0.375   0.504    0.292      0.472     0.227      0.396      0.161
21          0.500   0.575   0.451    0.592      0.447     0.541      0.426      0.531
Average     0.366   0.290   0.346    0.228      0.335     0.213      0.318      0.192

For the four methods KPCA, LOF-KPCA, WLOF-KPCA and DWLOF-KPCA, the computational loads of online sample monitoring are different. The detailed computation tasks and computation times for monitoring a single online sample are listed in Table 4. Among the four methods, KPCA has the lowest computational complexity, while DWLOF-KPCA has the highest. The DWLOF-KPCA online monitoring procedure involves four parts: (1) obtain the kernel scores, (2) search the K nearest neighbors, (3) calculate the two LOF statistics, and (4) employ the two weighting strategies. By contrast, KPCA needs only two steps, while the LOF-KPCA online monitoring procedure includes three parts. To investigate the computation time of the online monitoring algorithms, the four methods are tested on the same computer, configured with a Pentium Dual-Core E5800 3.2 GHz processor and 4 GB of RAM. With the TE process as the monitored object, the online monitoring program of each method is run 20 times, and the average computation times per sample are tabulated in Table 4. KPCA needs only 0.0216 s to monitor a single sample and LOF-KPCA spends 0.0370 s, while the computation times of WLOF-KPCA and DWLOF-KPCA increase to 0.0381 s and 0.0411 s, respectively. Although DWLOF-KPCA has the highest computational complexity, its computation time is still rather short and acceptable for online monitoring.

For the LOF-based monitoring methods, including LOF-KPCA, WLOF-KPCA and DWLOF-KPCA, the local neighbor sample number K is a very important parameter that influences the monitoring results remarkably. If K is small, the LOF computation involves a small number of neighbor samples, and the statistical fluctuations among different neighbors affect the LOF value significantly. Therefore, tiny process changes can be easily detected, which means a low missed detection rate (MDR). However, a small K value also makes the LOF statistics sensitive to process noise, which means a high false alarm rate (FAR). As K is increased gradually, the statistical fluctuations in the LOF value are weakened accordingly, so the FAR decreases while the MDR increases. A large K value also leads to an increasing computational load. To demonstrate the influence of K clearly, different K values are tested, and the average FARs and MDRs of DWLOF-KPCA for monitoring the 21 TE process faults are plotted in


Table 4. Computation complexity analysis for monitoring a single online sample by the KPCA, LOF-KPCA, WLOF-KPCA and DWLOF-KPCA methods (main computation tasks per sample; average computation time per sample in seconds).

KPCA (0.0216 s): obtain the kernel scores; compute the two KPCA statistics.
LOF-KPCA (0.0370 s): obtain the kernel scores; search the K nearest neighbors; calculate the two LOF statistics.
WLOF-KPCA (0.0381 s): obtain the kernel scores; search the K nearest neighbors; calculate the two LOF statistics; employ the single weighting strategy.
DWLOF-KPCA (0.0411 s): obtain the kernel scores; search the K nearest neighbors; calculate the two LOF statistics; employ the two weighting strategies.
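Average per-sample computation times like those in Table 4 can be measured with a simple harness. The 20-run averaging mirrors the protocol described in the text, while `monitor_one_sample` is a hypothetical stand-in for each method's online monitoring step, not a function from the paper.

```python
import time

def average_time_per_sample(monitor_one_sample, samples, runs=20):
    """Average wall-clock time to process one online sample, averaged
    over `runs` full passes through the test samples."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        for x in samples:
            monitor_one_sample(x)  # one online monitoring step
        total += time.perf_counter() - start
    return total / (runs * len(samples))
```

Using `time.perf_counter` rather than `time.time` avoids clock-adjustment artifacts in short timing loops.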

Fig. 10. From this figure, it is clear that the average MDRs of DWLOF-KPCA increase while the average FARs decrease as the K value grows. Considering the influences on both MDRs and FARs, the parameter K is set to 15 in this paper.
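The LOF statistic whose neighbor count K is discussed above follows the definition of Breunig et al. [25]. A plain NumPy sketch is given below to make the role of K explicit; the function names are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def _pairwise_dist(A, B):
    """Euclidean distance matrix between the rows of A and of B."""
    sq_a = np.sum(A**2, axis=1)[:, None]
    sq_b = np.sum(B**2, axis=1)[None, :]
    return np.sqrt(np.maximum(sq_a + sq_b - 2.0 * A @ B.T, 0.0))

def _knn(D, K):
    """Indices and distances of the K nearest columns for each row of D."""
    idx = np.argsort(D, axis=1)[:, :K]
    return idx, np.take_along_axis(D, idx, axis=1)

def lof_statistic(X_train, X_test, K):
    """Local outlier factor of each test sample w.r.t. the training data,
    following Breunig et al. [25]. LOF near 1 means a normal sample;
    large LOF means the sample lies in a sparser region than its neighbors."""
    Dtt = _pairwise_dist(X_train, X_train)
    np.fill_diagonal(Dtt, np.inf)  # a training point is not its own neighbor
    nb_t, d_t = _knn(Dtt, K)
    kdist = d_t[:, -1]  # K-distance of each training point
    # Local reachability density (lrd) of the training points
    reach_t = np.maximum(kdist[nb_t], d_t)
    lrd_t = K / np.sum(reach_t, axis=1)
    # LOF of each test sample: mean neighbor lrd over its own lrd
    Dst = _pairwise_dist(X_test, X_train)
    nb_s, d_s = _knn(Dst, K)
    reach_s = np.maximum(kdist[nb_s], d_s)
    lrd_s = K / np.sum(reach_s, axis=1)
    return np.mean(lrd_t[nb_s], axis=1) / lrd_s
```

Sweeping K in such a routine reproduces the trade-off described above: a small K reacts to tiny changes but also to noise, while a large K smooths the statistic at the cost of sensitivity.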

6. Conclusion

In this paper, a modified KPCA based nonlinear process monitoring method is developed using the double-weighted local outlier factor. To break the limitation of the Gaussian assumption, the LOF approach is first introduced into KPCA to build two LOF-based monitoring statistics. For better online monitoring, two weighting strategies are designed to improve the LOF technique. The first weighting strategy highlights the importance of the key components containing significant fault information, while the second weighting strategy evaluates the current process status


Fig. 10. The average false alarm rates (FARs) and missed detection rates (MDRs) of the DWLOF-KPCA method (DWLOFPCS and DWLOFRCS statistics) for TE process monitoring under different K values.

using historical process information through a moving weighting window. The simulation results on a numerical example and the TE process demonstrate that the proposed DWLOF-KPCA method outperforms the traditional KPCA method in terms of fault detection performance.

Acknowledgement This work was supported by the National Natural Science Foundation of China (No. 61403418, and No. 21606256), the Natural Science Foundation of Shandong Province, China (No. ZR2014FL016, No. ZR2016FQ21, and No. ZR2016BQ14), the Fundamental Research Funds for the Central Universities, China (No. 17CX02054), the Applied Basic Research Programs of Qingdao City, China (No. 16-5-1-10-jch), and the Postgraduate Innovation Project of China University of Petroleum (No. YCX2017058).

References

[1] Yin S, Ding SX, Xie X, Luo H. A review on basic data-driven approaches for industrial process monitoring. IEEE T Ind Electron 2014;61:6418–28.
[2] Ge Z, Song Z, Gao F. Review of recent research on data-based process monitoring. Ind Eng Chem Res 2013;52:3543–62.
[3] Hao H, Zhang K, Ding SX, Chen Z, Lei Y. A data-driven multiplicative fault diagnosis approach for automation processes. ISA T 2014;53:1436–45.
[4] Liu Y, Pan Y, Wang Q, Huang D. Statistical process monitoring with integration of data projection and one-class classification. Chemom Intell Lab 2015;149:1–11.
[5] Jiao J, Zhao N, Wang G, Yin S. A nonlinear quality-related fault detection approach based on modified kernel partial least squares. ISA T 2017;66:275–83.
[6] Cai L, Tian X, Chen S. Monitoring nonlinear and non-Gaussian processes using Gaussian mixture model-based weighted kernel independent component analysis. IEEE T Neur Net Lear 2017;28:122–35.
[7] Jiang B, Zhu X, Huang D, Paulson JA, Braatz RD. A combined canonical variate analysis and Fisher discriminant analysis approach for fault diagnosis. Comput Chem Eng 2015;77:1–9.
[8] Ku W, Storer RH, Georgakis C. Disturbance detection and isolation by dynamic principal component analysis. Chemom Intell Lab 1995;30:179–96.
[9] Liu Y, Pan Y, Sun Z, Huang D. Statistical monitoring of wastewater treatment plants using variational Bayesian PCA. Ind Eng Chem Res 2014;53:3272–82.
[10] Jiang Q, Yan X. Plant-wide process monitoring based on mutual information-multiblock principal component analysis. ISA T 2014;53:1516–27.
[11] Kramer MA. Nonlinear principal component analysis using autoassociative neural networks. AICHE J 1991;37:233–43.
[12] Dong D, Mcavoy TJ. Nonlinear principal component analysis-based on principal curves and neural networks. Comput Chem Eng 1996;20:65–78.
[13] Schölkopf B, Smola A, Müller K. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput 1998;10:1299–319.
[14] Lee JM, Yoo CK, Sang WC, Vanrolleghem PA, Lee IB. Nonlinear process monitoring using kernel principal component analysis. Chem Eng Sci 2004;59:223–34.
[15] Cho JH, Lee JM, Sang WC, Lee D, Lee IB. Fault identification for process monitoring using kernel principal component analysis. Chem Eng Sci 2005;60:279–88.
[16] Li N, Yang Y. Ensemble kernel principal component analysis for improved nonlinear process monitoring. Ind Eng Chem Res 2015;54:318–29.
[17] Deng X, Tian X, Chen S. Modified kernel principal component analysis based on local structure analysis and its application to nonlinear process fault diagnosis. Chemom Intell Lab 2013;127:195–209.
[18] Zhang Y, Ma C. Fault diagnosis of nonlinear processes using multiscale KPCA and multiscale KPLS. Chem Eng Sci 2011;66:64–72.
[19] Jaffel I, Taouali O, Harkat MF, Messaoud H. Moving window KPCA with reduced complexity for nonlinear dynamic monitoring. ISA T 2016;64:184–92.
[20] Tian X, Zhang X, Deng X, Chen S. Multiway kernel independent component analysis based on feature samples for batch process monitoring. Neurocomputing 2009;72:1584–96.
[21] Deng X, Tian X. Nonlinear process fault pattern recognition using statistics kernel PCA similarity factor. Neurocomputing 2013;121:298–308.
[22] Yao M, Wang H. On-line monitoring of batch processes using generalized additive kernel principal component analysis. J Process Contr 2015;28:56–72.
[23] Xie L, Li Z, Zeng J. Block adaptive kernel principal component analysis for nonlinear process monitoring. AICHE J 2016;62:4334–45.
[24] Deng X, Tian X, Chen S, Harris CJ. Fault discriminant enhanced kernel principal component analysis incorporating prior fault information for monitoring nonlinear processes. Chemom Intell Lab 2017;162:21–34.
[25] Breunig MM, Kriegel HP, Ng RT, Sander J. LOF: identifying density-based local outliers. SIGMOD Rec 2000;29:93–104.
[26] Duan L, Xu L, Guo F, Lee J, Yan B. A local density based spatial clustering algorithm with noise. Inform Sys 2007;32:978–86.
[27] Huang T, Zhu Y, Wu Y, Bressan S, Dobbie G. Anomaly detection and identification scheme for VM live migration in cloud infrastructure. Future Gener Comp Sy 2016;56:736–45.
[28] Bai M, Wang X, Xin J, Wang G. An efficient algorithm for distributed density-based outlier detection on big data. Neurocomputing 2016;181:19–28.
[29] Ma Y, Shi H, Ma H, Wang M. Dynamic process monitoring using adaptive local outlier factor. Chemom Intell Lab 2013;127:89–101.
[30] Lee J, Kang B, Kang SH. Integrating independent component analysis and local outlier factor for plant-wide process monitoring. J Process Contr 2011;21:1011–21.
[31] Ma H, Hu Y, Shi H. Fault detection and identification based on the neighborhood standardized local outlier factor method. Ind Eng Chem Res 2013;52:2389–402.
[32] Song B, Shi H, Ma Y, Wang J. Multisubspace principal component analysis with local outlier factor for multimode process monitoring. Ind Eng Chem Res 2014;53:16453–64.
[33] Deng X, Tian X. Multimode process fault detection using local neighborhood similarity analysis. Chin J Chem Eng 2014;22:1260–7.
[34] Zhong N, Deng X. Multimode non-Gaussian process monitoring based on local entropy independent component analysis. Can J Chem Eng 2017;95:319–30.
[35] Jiang Q, Huang B. Distributed monitoring for large-scale processes based on multivariate statistical analysis and Bayesian method. J Process Contr 2014;46:75–83.
[36] Wang B, Yan X, Jiang Q, Lv Z. Generalized Dice's coefficient-based multiblock principal component analysis with Bayesian inference for plant-wide process monitoring. J Chemom 2015;29:165–78.
[37] Downs JJ, Vogel EF. A plant-wide industrial process control problem. Comput Chem Eng 1993;17:245–55.
[38] Ricker NL. Decentralized control of the Tennessee Eastman challenge process. J Process Contr 1996;6:205–21.
[39] Grbovic M, Li W, Xu P, Usadi AK, Song L, Vucetic S. Decentralized fault detection and diagnosis via sparse PCA based decomposition and maximum entropy decision fusion. J Process Contr 2012;22:738–50.
[40] Gao X, Hou J. An improved SVM integrated GS-PCA fault diagnosis approach of Tennessee Eastman process. Neurocomputing 2016;174:906–11.
