Chemical processes monitoring based on weighted principal component analysis and its application

Qingchao Jiang, Xuefeng Yan ⁎

Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai 200237, PR China

Chemometrics and Intelligent Laboratory Systems 119 (2012) 11–20

Article history: Received 24 April 2012; Received in revised form 13 September 2012; Accepted 15 September 2012; Available online 23 September 2012

Keywords: Chemical process monitoring; Fault detection; Fault diagnosis; Weighted principal component

Abstract: Conventional principal component analysis (PCA)-based methods employ the first several principal components (PCs), which capture most of the variance of normal observations, for process monitoring. Nevertheless, fault information has no definite mapping relationship to any particular PC, and useful information may be submerged among the retained PCs. A new version of weighted PCA (WPCA) for process monitoring is proposed to deal with this situation and to reduce the missed detection rate of the T² statistic. The main idea of WPCA is to build a conventional PCA model, use the change rate of the T² statistic along every PC to capture the most useful information in the process, and set different weighting values for the PCs to highlight that information during online monitoring. Case studies on the Tennessee Eastman process demonstrate the effectiveness of the proposed scheme, and the monitoring results are compared with the conventional PCA method. © 2012 Elsevier B.V. All rights reserved.

1. Introduction

Multivariate statistical process monitoring approaches have progressed significantly in recent years, and among them principal component analysis (PCA) is the classical and most widely used method [1–7]. Many extensions of PCA, such as kernel PCA (KPCA), dynamic PCA (DPCA), probabilistic PCA (PPCA) and multiway PCA (MPCA), have been proposed to improve monitoring performance and to address a wider range of problems. KPCA, a promising method for tackling nonlinear systems, can efficiently compute principal components in high-dimensional feature spaces by means of integral operators and nonlinear kernel functions [8–10]. DPCA takes the serial correlations in process data into account in order to deal with processes with fast sampling times [11–13]. PPCA is a probability-based model that defines how the data are generated and can efficiently handle missing data in process monitoring [14–16]. MPCA was proposed for batch process monitoring [17,18]. These extensions of PCA increase the sensitivity and robustness of the monitoring scheme. However, some problems still need to be discussed, and the situation of useful information being submerged is an important one. When using a PCA model, two statistics are constructed to interpret the mean and variance information of the process, known as the T² statistic and the Q statistic (also known as the squared prediction error, SPE) [19,20].

⁎ Corresponding author at: East China University of Science and Technology, P.O. Box 293, MeiLong Road No. 130, Shanghai 200237, PR China. E-mail address: [email protected] (X. Yan). http://dx.doi.org/10.1016/j.chemolab.2012.09.002

Generally, PCA employs the first several principal components (PCs), which represent most of the variance of normal observations. However, the scores of some of these PCs may not change quickly or significantly when a fault occurs, i.e., little fault information is reflected on them. The useful fault information may be submerged by PCs carrying useless information and thus not be reflected in the statistics, leading to poor detection performance. Moreover, the retained PCs should provide sufficient fault information for further diagnosis, especially for the contribution plots method, which is the most widely used in PCA-based monitoring [21,22]; if the useful information is not captured, the diagnosis will be affected. Therefore, it is important to highlight the PCs with useful fault information and to suppress the useless ones.

A novel version of weighted principal component analysis (WPCA) and its application in process monitoring, for both fault detection and diagnosis, is presented in this article. The T² statistic can measure the variation directly along each loading vector, i.e., the direction of each principal component, a property that WPCA exploits for indexing useful information [23]. First, WPCA uses normal operational data to build a conventional PCA model. Second, the change rate of the T² statistic along each principal component is constructed to capture the most useful information in the process and to select the principal components with useful information for online monitoring. Distinct weighting values are then set on different principal components, and the T² and Q statistics are calculated to determine the state of the process.

Concerning weighting in multivariate statistical monitoring, some research has been reported. Wold suggested an exponentially weighted moving PCA (EWM-PCA) method, which models a weighted moving average with recent observations weighted more heavily than


earlier observations [24]. He et al. introduced a variable-weighted kernel Fisher discriminant analysis (VW-KFDA) method, which enhances fault information by weighting the related variables heavily [25]. Ferreira et al. used a sample-wise weighted PCA to estimate the between-campaign submodel for multicampaign process monitoring [26]. Although weighting methods have been studied, the WPCA method proposed in this work, which directly examines the direction of each principal component, has to the best of our knowledge never been studied. The main merit of the proposed WPCA is that it not only uses normal operational process data to build the PCA model, but also takes fault information into consideration. It determines the weighting values objectively according to the importance of each PC, identifying the useful components as well as the useless ones.

This paper is organized as follows. The classical PCA model used in process monitoring is reviewed briefly in Section 2, including its two statistics, T² and Q. A simple simulated process is employed to demonstrate the performance of the T² statistic in PCA, as well as the situation of information being submerged. In Section 3, WPCA for process monitoring is proposed and its details are introduced. Moreover, the contribution plots method for fault diagnosis based on weighted principal components is presented. The Tennessee Eastman process (TEP) is employed to demonstrate the performance of the proposed method in both fault detection and diagnosis. Finally, in Section 5, we present the conclusions of our work.

2. Process monitoring based on PCA

2.1. Principal component analysis

Let X ∈ R^(N×s) denote the scaled data matrix with zero mean and unit variance, where N is the sample number and s is the number of process variables. Based on the SVD algorithm, the matrix X can be decomposed as follows:

X = TP^T + E = X̂ + E  (1)

where T ∈ R^(N×k) and P ∈ R^(s×k) are the score matrix and the loading matrix; k is the number of principal components retained; X̂ ∈ R^(N×s) is the projection of T back into the observation space; and E is the residual matrix. The number of PCs is commonly determined by the cumulative percent variance (CPV) method:

( Σ_{i=1}^{k} λ_i / Σ_{i=1}^{s} λ_i ) × 100% ≥ 95%  (2)

where λ_i is the variance of the ith score vector and λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_s ≥ 0. The smallest k for which the CPV exceeds 95% gives the number of retained PCs. The subspaces spanned by X̂ and E are called the score space and the residual space, respectively, and the T² and Q statistics are constructed to monitor the two spaces [27,28].

The T² statistic is an important index in PCA-based process monitoring. Given an observation vector x ∈ R^(s×1), the T² statistic of the first k PCs is calculated as follows:

T² = x^T P (Λ_k)^(−1) P^T x ≤ δ_T²  (3)

where Λ_k = diag(λ₁, ⋯, λ_k) ∈ R^(k×k) is the estimated covariance matrix of the principal component scores, and

δ_T² = [k(N − 1)(N + 1) / (N(N − k))] F_α(k, N − k)

is the threshold of the T² statistic on condition that the process observations are Gaussian distributed; F_α(k, N − k) is the F-distribution with k and N − k degrees of freedom at significance level α.

The Q statistic is calculated as follows:

Q = e^T e ≤ δ_Q,  e = (I − PP^T) x  (4)

where e is the residual vector, the projection of the observation x into the residual space, and

δ_Q = θ₁ [ C_α h₀ √(2θ₂) / θ₁ + 1 + θ₂ h₀ (h₀ − 1) / θ₁² ]^(1/h₀)

is the threshold of Q, with θ_i = Σ_{j=k+1}^{s} λ_j^i (i = 1, 2, 3), h₀ = 1 − 2θ₁θ₃ / (3θ₂²), and C_α the normal deviate corresponding to the (1 − α) percentile.

2.2. Motivational example

A simple simulated process is employed to illustrate the monitoring performance of PCA. The following simulations are run in the Matlab 7.12.0 (R2011a) environment. Consider a process with Gaussian distributed variables and their linear combinations:

x₁ = 3 + 0.1ε(n) + 0.01θ(n)
x₂ = 5 + 0.15ε(n) + 0.015θ(n)
x₃ = 7 + 0.2ε(n) + 0.02θ(n)
x₄ = 9 + 0.25ε(n) + 0.025θ(n)
x₅ = 11 − 0.3x₁ + 0.8x₂ + 0.9x₃ + 0.3ε(n) + 0.03θ(n)
x₆ = 6 + x₂ − 0.3x₃ + x₅ + 0.35ε(n) + 0.035θ(n)
x₇ = 8 − 0.5x₁ + 0.8x₂ + x₄ + 0.01θ(n)
x₈ = 15 + x₂ + x₃ + 0.01θ(n)

where ε(n) ~ N(0,1), n is the sample time, and θ(n) ~ N(0,1) is Gaussian process noise. We set n = 1000 to obtain 1000 normal observations for offline modeling and generate another 200 points in which a fault occurs at the 151st sample point. Two types of faults are introduced into the process:

Fault 1: a step change in x₂ with amplitude 0.5;
Fault 2: a step change in x₄ with amplitude 0.5.

The PCA monitoring performance on this process is shown in Figs. 1 and 2. Fig. 1 shows that the first 6 principal components account for over 95% CPV; therefore the first 6 components are employed to construct the statistics for process monitoring. Fig. 2(a) shows the monitoring performance for fault 1 and Fig. 2(b) for fault 2, with confidence level 0.99. From Fig. 2(a), we can see that fault 1 is detected successfully by the T² statistic: it changes quickly and significantly when fault 1 occurs and stays above the control limit, indicating the fault. Fig. 2(b) reveals that when fault 2 occurs, the T² statistic changes only a little and does not stay above the control limit, resulting in a high missed detection rate.

Fig. 1. Cumulative percent variance of principal components.

To illustrate the cause of the failure in detecting fault 2, the T² statistic of each principal component is calculated, exploiting the T² statistic's ability to measure the variation directly along the direction of each

principal component. The T² statistic along the mth principal component, T²_m, is defined as follows:

T²_m = x^T p_m (λ_m)^(−1) p_m^T x  (5)

where p_m is the mth loading vector in P, λ_m is the mth eigenvalue of X^T X, and m = 1, …, 6 in this work because the 7th and 8th eigenvalues are much too small. There are therefore 6 subfigures for each fault, and the T²_m monitoring behavior for fault 2 is shown in Fig. 3.

Fig. 2. (a) Monitoring results of fault 1 using the PCA model. (b) Monitoring results of fault 2 using the PCA model.

Fig. 3. T²_m of each principal component when fault 2 occurs.

Fig. 3 reveals that when fault 2 occurs, the 6 principal components are not equally sensitive to the fault, and their T²_m statistics behave differently. In fact, as the analysis in Section 3 shows, the 3rd principal component changes greatly and contains most of the variational information of fault 2. The principal component with useful fault information is thus the 3rd one, yet the fault cannot be detected by the overall T² statistic, probably because the variation caused by fault 2 is inhibited by the other PCs. This is the situation of useful information being submerged. The analysis above reveals that retaining too many principal components leads to information being submerged, whereas retaining too few leads to useful information being missed. Therefore, distinguishing the components with useful information from those with useless, or even negative, information is really important. The main idea of weighted principal component analysis is to highlight the useful information and suppress the useless information by setting different weighting values on different principal components. The change rate of the T² statistic along each principal component, RT²_m, is constructed to evaluate the fault information reflected on each principal component and to determine which components should be given high weighting values.

3. Fault detection based on weighted principal component

3.1. Weighted principal component analysis

In PCA-based process monitoring, the T² statistic is constructed to measure the variation in the score space. Geometrically, the original axes X are rotated to T, which represents the directions of maximum variance of the normal samples. The matrix P rotates the major axes of the covariance matrix of x so that they directly correspond to the elements of T, and the elements of the principal components are scaled to produce a set of variables with unit variance (axes Z). The conversion of the covariance matrix is demonstrated graphically in Fig. 4 for a two-dimensional observation space (s = 2) [23,29]. The idea of WPCA is to adaptively set different weighting values on different principal components, to highlight the importance of the principal components carrying significant information about the process variation.

Fig. 4. A graphical illustration of the covariance conversion for the T² statistic.
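The per-component statistic T²_m of Eq. (5) can be sketched numerically. The following minimal example uses a toy 3-PC model with assumed orthonormal loadings and eigenvalues (not the paper's simulated process):

```python
import numpy as np

def t2_per_component(x, P, lam):
    # T^2_m of Eq. (5): squared score along each loading vector,
    # scaled by that component's eigenvalue lambda_m.
    scores = P.T @ x
    return scores ** 2 / lam

# Toy model with assumed identity loadings and eigenvalues (s = k = 3).
P = np.eye(3)
lam = np.array([4.0, 1.0, 0.25])    # eigenvalues lambda_m
x = np.array([0.5, 0.5, 0.5])       # one scaled observation
t2m = t2_per_component(x, P, lam)   # -> [0.0625, 0.25, 1.0]
```

Note how the same deviation of 0.5 yields a much larger T²_m along the low-variance third component; this is exactly why per-component statistics can expose fault variation that the aggregate T² averages away.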


Suppose the loading matrix P = [p₁, p₂, …, p_k] ∈ R^(s×k), where s is the number of variables and k is the number of principal components retained; p_k ∈ R^(s×1) is the loading vector corresponding to the kth principal component. Set a weighting matrix W = diag(w₁, …, w_k) ∈ R^(k×k) on P; the weighted loading matrix is then

P_W = PW = [p₁, p₂, …, p_k] diag(w₁, …, w_k) = [w₁p₁, w₂p₂, …, w_k p_k].  (6)

The weighted principal components are

T_W = X P_W.  (7)

After weighting, the T² statistic becomes

T² = x^T P_W (Λ_k)^(−1) W^T P^T x = x^T P diag(w₁²/λ₁, …, w_k²/λ_k) P^T x.  (8)

Let Y = diag(η₁, …, η_k) = WW^T, where η_i = w_i²; then

T² = x^T P (Λ_k)^(−1) Y P^T x = Σ_{i=1}^{k} η_i t_i² / λ_i.  (9)

The effect of the weighting matrix is demonstrated graphically in Fig. 5: if the weighting value along component z₂ is large, the information reflected on z₂ will be highlighted.

In conventional PCA monitoring, the threshold of the T² statistic is approximated by a specified distribution, which rests on the presupposition that the process measurements follow a multivariate Gaussian distribution. After weighting, however, the distribution of the T² statistic becomes complicated, and the threshold cannot be determined directly from Eq. (3). In the present work, the limit is determined via kernel density estimation (KDE), because normal observations for training are easy to obtain and KDE is well suited to this situation; with enough training samples, a reliable density fit can be obtained. A univariate kernel estimator is defined as follows:

f(u) = (1/(nd)) Σ_{a=1}^{n} K((u − u(a))/d)  (10)

where u is the data point under consideration, u(a) is an observation from the data set, d is the window width (also known as the smoothing parameter), n is the number of observations, and K is the kernel function. K determines the shape of the smooth curve and satisfies the condition

∫_{−∞}^{+∞} K(u) du = 1.  (11)

There are a number of possible kernel functions, of which the Gaussian kernel is the most commonly used; it is adopted in the present work, and details on kernel density estimation can be found in [30,31]. The control limit of the T² statistic used in weighted PCA monitoring charts is obtained as follows. First, training data from normal operating conditions are required. Assuming the training data X ∈ R^(n×s), where n is the sample number and s is the number of variables, the weighted T² value of the ath normal operating sample, denoted T²(a) (a = 1, …, n), can be calculated through Eq. (8) (the weighting matrix W used in Eq. (8) is described in Section 3.3). Second, the univariate kernel density estimator of Eq. (10) is used to estimate the density function f(T²) of the weighted T² values under normal operating conditions based on T²(a) (a = 1, …, n). The value below which 99% of the area of the density function lies is regarded as the control limit of normal operating data; that is, the control limit T²_CL of the weighted T² statistic is defined by

∫_{−∞}^{T²_CL} f(T²) dT² = 0.99.  (12)
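The weighted statistic of Eq. (9) and the KDE control limit of Eqs. (10)–(12) can be sketched as follows. This is a minimal numeric illustration in NumPy/SciPy rather than the paper's Matlab environment; the toy loadings, eigenvalues, and sample sizes are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def weighted_t2(x, P, lam, eta):
    # Weighted T^2 of Eq. (9): sum_i eta_i * t_i^2 / lambda_i.
    t = P.T @ x
    return float(np.sum(eta * t ** 2 / lam))

def kde_control_limit(values, q=0.99, grid_pts=2000):
    # Control limit in the spirit of Eqs. (10)-(12): the point below
    # which 99% of the KDE-estimated density of the statistic lies.
    kde = gaussian_kde(values)          # Gaussian kernel, auto bandwidth
    grid = np.linspace(values.min(), values.max() * 1.5, grid_pts)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return float(grid[np.searchsorted(cdf, q)])

# Toy 3-PC model (identity loadings, assumed eigenvalues); unit weights
# eta = 1 reduce Eq. (9) to the ordinary T^2.
P = np.eye(3)
lam = np.array([2.0, 1.0, 0.5])
eta = np.ones(3)
train = rng.normal(size=(1000, 3)) * np.sqrt(lam)   # normal operating data
t2_train = np.array([weighted_t2(x, P, lam, eta) for x in train])
limit = kde_control_limit(t2_train)
```

With unit weights, `t2_train` follows roughly a chi-squared distribution with 3 degrees of freedom, so the KDE limit should land near its 99th percentile; a small fraction of normal training points will exceed it, as expected for a 99% limit.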

One major advantage of the confidence region obtained by KDE is that it follows the data closely. More details regarding the determination of control limits via KDE can be found in the work by Lee et al. [4].

3.2. Change rate RT²_m of T²_m

In PCA monitoring, the T² statistic can indicate the variation directly along the directions of the principal components, and this ability is used to capture the most useful information during online monitoring. Two sets of normal operating data are collected as training sets, denoted training set A (N × s) and training set B (N × s); the following analyses are based on these data. To describe the variation along the direction of the mth principal component and to determine which principal components should be emphasized, the change rate RT²_{m,a} of T²_m at the ath sample point is defined as follows:

RT²_{m,a} = T²_{m,a} / ( (1/n) Σ_{j=1}^{n} T²_{m,j} )  (13)

where T²_{m,a} is the T² statistic of the mth principal component at the ath sample point in set B, and n is the number of observations in set B. RT²_{m,a} directly describes the change of the process data along the mth principal component; if the process is in normal condition, RT²_{m,a} should stay under the control limit CL.

Fig. 5. A graphical illustration of the weighted principal components for the T² statistic.

The control limit CL of RT²_m is also determined through KDE, because normal process data are easy to obtain and KDE is an easy and effective approach to nonparametric density estimation. The threshold of RT²_{m,a} is determined in the following steps:

1) Calculate RT²_{m,a} of each principal component at the ath sample point based on normal operational observations, where m = 1, ⋯, k.
2) Use the univariate kernel density estimator to estimate the density function of RT²_m of each principal component under the normal operating condition.
3) Determine the control limit of each component, which bounds 99% of the area of the density function and is denoted CL_m.


4) The threshold of RT²_m is the maximum among the CL_m, represented by CL, i.e. CL = max{CL_m, m = 1, 2, ⋯, k}, where k is the number of principal components.

A principal component is considered a key principal component containing useful information about the fault on the condition that

RT²_{m,a} ≥ CL  (14)

where a is the current sample point. The principal component corresponding to the largest value RT²max_{m,a} among the RT²_{m,a} is considered to contain the most useful information about the process, where

RT²max_{m,a} = max{RT²_{i,a}, i = 1, 2, ⋯, k}.  (15)

Considering the challenge of process noise in practical industry, the mean of the two largest change rates, RT²max_{m,a} = (RT²max_{m1,a} + RT²max_{m2,a})/2 (denoted MRT² for convenience in the following figures), is used instead of RT²max_{m,a} for key principal component selection, where RT²max_{m1,a} and RT²max_{m2,a} are the two largest values among the RT²_{m,a}. The determination of CL and the performance of RT²max_{m,a} are illustrated in Fig. 6(a), (b) and (c). Fig. 6(c) and the analysis above reveal that the 3rd principal component changes significantly when fault 2 occurs. Therefore, if the 3rd principal component is weighted heavily, fault 2 should be detected successfully. Here the weighting matrix Y is set to Y = diag[1,1,2,1,1,1] for illustration, and the monitoring performance for fault 2 using the weighted principal components is shown in Fig. 7. From Fig. 7, we can see that fault 2 is detected successfully by the T² statistic; the missed detection rate is reduced significantly in comparison with Fig. 2(b).

3.3. Fault detection using weighted principal component analysis

When using WPCA to monitor a process, one has to determine the weighting values in the weighting matrix Y. In this work, the change rate RT²_m of T²_m is regarded as an index of the importance of a principal component. Consequently, if the RT²_m of a principal component exceeds the control limit CL, the corresponding value in the weighting matrix Y is set to RT²_m; that is, if RT²_{m,a} ≥ CL,

η_m = RT²_{m,a}.  (16)
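The selection-and-weighting rule of Eqs. (13)–(16) can be sketched numerically. In this illustration the per-component T² values standing in for training set B are synthetic chi-squared draws, and the faulty sample's values are assumed:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
k, n = 3, 800

# Synthetic per-component T^2_m values on normal training set B.
# (T^2 along a single PC of Gaussian data is chi-squared with 1 dof.)
T2_B = rng.chisquare(df=1, size=(n, k))

# Change rates of Eq. (13): T^2_{m,a} over the mean T^2_m on set B.
rates_B = T2_B / T2_B.mean(axis=0)

def kde_q99(v):
    # 99% point of the KDE-estimated density (steps 2-3 of Section 3.2).
    kde = gaussian_kde(v)
    grid = np.linspace(0.0, v.max() * 1.5, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return float(grid[np.searchsorted(cdf, 0.99)])

CL_m = np.array([kde_q99(rates_B[:, m]) for m in range(k)])
CL = CL_m.max()                       # step 4: CL = max_m CL_m

# A faulty sample whose 3rd component jumps strongly (assumed values).
T2_fault = np.array([0.5, 0.8, 40.0])
rates = T2_fault / T2_B.mean(axis=0)

# Eq. (16): eta_m = RT^2_{m,a} where the rate exceeds CL, else 1.
eta = np.ones(k)
over = rates >= CL
eta[over] = rates[over]
```

Only the component whose change rate exceeds CL receives a weight greater than one, which is how WPCA highlights the direction carrying the fault while leaving the other components untouched.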

The formulation of WPCA-based process monitoring is as follows.

Offline modeling:

1) Get training data set A: X ∈ R^(N×s) from the normal operating condition, where N is the sample number and s is the number of variables, as well as a normal training set B.
2) Calculate the mean value and variance of the variables and normalize training sets A and B.
3) Extract the principal components from training set A, so that X = Σ_{i=1}^{s} t_i p_i^T, where t_i is a score vector and p_i a loading vector. Use the CPV method to determine the first k PCs occupying 95% CPV of the normal sample data, and calculate the control limit δ_T² of the T² statistic with k principal components.
4) Initialize the weighting matrix Y, i.e. Y = I ∈ R^(k×k).
5) Based on training set B, calculate the T²_{m,a} statistics and the RT²_m of each component, where m = 1, …, k.
6) Use kernel density estimation to determine the control limit CL_m of each component and the threshold CL of RT²_m. Save the mean values and variances of the variables, the loading vectors and eigenvalues of X^T X, δ_T², and CL for online monitoring.

Online monitoring:

1) Normalize the current sample data using the mean values and variances of the training data.
2) Calculate the T² statistic with the first k principal components retained, as well as T²_{m,a}, RT²_{m,a} and RT²max_{m,a} (m = 1, …, k) of the current data.

Fig. 6. (a) Threshold of the change rate of T²_m. (b) Maximum change rate of T²_m when fault 1 occurs. (c) Maximum change rate of T²_m when fault 2 occurs.

Fig. 7. Monitoring performance of fault 2 using WPCA.


Fig. 8. The steps of WPCA for process monitoring.

Fig. 9. Base control scheme for the Tennessee Eastman process.


3) Determine whether RT²max_{m,a} runs beyond the control limit CL. If not, go to step 5); if RT²max_{m,a} ≥ CL, there may be a fault, so go on to the following steps.
4) Select the principal components with RT²_{m,a} ≥ CL and set the weighting values in matrix Y with η_m = RT²_{m,a}.
5) Calculate the T² and Q statistics of the weighted principal components, and determine the control limits of the statistics.
6) Determine whether the statistics exceed the control limits. If they do, there is a fault in the process.

The steps of WPCA for process monitoring are illustrated in Fig. 8. Because the first several principal components are not sensitive to process noise and the weighting matrix only acts on these PCs, the monitoring performance is not degraded by introducing the WPCA method. However, in practical industries, process noise as well as some harmless disturbances is rather challenging. Therefore, the weighting matrix should be determined prudently and released in a timely manner to increase robustness. In the current work, the following strategy is employed: if there are two successive points with MRT²_m exceeding the control limit, the weighting matrix is determined according to the first point; if there are two successive points with both T² and Q statistics under the control limits, the weighting matrix is reset to the unit matrix. The only aim of the strategy is to


Fig. 10. (a). Monitoring results of fault 4 in TE process using PCA. (b). Monitoring results of fault 4 in TE process using WPCA.


remove the influence of process noise, and it proves effective in the following case studies.
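The determine-and-release strategy above can be sketched as a small state machine. The `update_weighting` helper and its state dictionary are hypothetical illustrations of the two-successive-point rule, not code from the paper:

```python
import numpy as np

def update_weighting(state, mrt2_exceeds, stats_normal, rates, CL):
    # Sketch of the Section 3.3 strategy:
    # - two successive samples with MRT^2 above CL -> set the weights
    #   from the first of the two points (Eq. (16));
    # - two successive samples with T^2 and Q both normal -> release
    #   the weighting matrix back to the identity.
    k = len(rates)
    if mrt2_exceeds:
        state['hits'] += 1
        state['calm'] = 0
        if state['hits'] == 1:
            state['first_rates'] = rates.copy()
        if state['hits'] >= 2:
            eta = np.ones(k)
            sel = state['first_rates'] >= CL
            eta[sel] = state['first_rates'][sel]
            state['eta'] = eta
    else:
        state['hits'] = 0
        if stats_normal:
            state['calm'] += 1
            if state['calm'] >= 2:
                state['eta'] = np.ones(k)   # release to the unit matrix
        else:
            state['calm'] = 0
    return state['eta']

state = {'hits': 0, 'calm': 0, 'eta': np.ones(3), 'first_rates': None}
CL = 5.0
# Two successive exceedances: weights come from the first point's rates.
update_weighting(state, True, False, np.array([1.0, 6.0, 2.0]), CL)
eta_set = update_weighting(state, True, False, np.array([1.0, 9.0, 2.0]), CL)
# Two successive normal points: weighting released to the identity.
update_weighting(state, False, True, np.array([1.0, 1.0, 1.0]), CL)
eta_reset = update_weighting(state, False, True, np.array([1.0, 1.0, 1.0]), CL)
```

Requiring two successive confirmations before either arming or releasing the weights is what filters out isolated noise spikes.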

3.4. Application in TE process

The Tennessee Eastman process is a benchmark problem in process engineering developed by Downs and Vogel. The simulator consists of five major unit operations: a reactor, a product condenser, a vapor–liquid separator, a recycle compressor, and a product stripper. Two products are produced by two simultaneous gas–liquid exothermic reactions, and a byproduct is generated by two additional exothermic reactions. The process has 12 manipulated variables, 22 continuous process measurements, and 19 compositions, and all process measurements include Gaussian noise (default level). Once a fault enters the process, it affects almost all state variables [32,33]. The base control scheme for the TE process is shown in Fig. 9, and the simulation code for the open loop can be downloaded from http://brahms.scs.uiuc.edu. The second plant-wide control structure described in [34] is implemented to simulate realistic conditions. To build the monitoring models, a normal process dataset (500 samples) was collected under base operation. A set of 21 programmed faults (default values) is simulated and the process data are collected for testing.


Fig. 11. (a). Monitoring results of fault 11 in TE process using PCA. (b). Monitoring results of fault 11 in TE process using WPCA.


3.4.1. Case study on fault 4

Fault 4 involves a step change in the reactor cooling water inlet temperature, a significant effect of which is a step change in the reactor cooling water flow rate. When the fault occurs, there is a sudden temperature increase in the reactor, which is compensated by the control loops. The other 50 measured and manipulated variables remain steady after the fault occurs; the mean and standard deviation of each variable differ by less than 2% between fault 4 and the normal operating condition, which makes fault detection and diagnosis rather challenging. The monitoring performance for fault 4 based on PCA and WPCA is shown in Fig. 10. From Fig. 10(a) and (b), we can see that the T² statistic using WPCA performs better when fault 4 occurs than with the classical PCA method; the missed detection rate of the T² statistic is reduced significantly.

3.4.2. Case study on fault 11

Fault 11 induces a fault in the reactor cooling water inlet temperature. The fault is a random variation and induces large oscillations in the reactor cooling water flow rate, which results in fluctuations of the reactor temperature. The other variables remain around their set-points and behave similarly to the normal operating conditions. The monitoring performance for fault 11 using PCA and WPCA is shown in Fig. 11. Fig. 11(a) and (b) show that the missed detection rate of the T² statistic is reduced significantly using WPCA compared with PCA.


Fig. 12. (a). Monitoring results of fault 0 in TE process using PCA. (b). Monitoring results of fault 0 in TE process using WPCA.

3.4.3. Case study on fault 0

Fault 0 is the normal operating condition in the TE process and is used to test the false alarm performance. The monitoring performance of PCA and WPCA for fault 0 is shown in Fig. 12(a) and (b). Both methods give good monitoring results; therefore, the normal monitoring performance of the process is not degraded by introducing the weighted method. In fact, WPCA examines the directions of the first several PCs with larger variances, and these PCs are not sensitive to process noise. Consequently, MRT²_m (Fig. 12(b)) is not easily influenced by the noise and indicates the process condition accurately. The monitoring results for all faults in the TE process using PCA and WPCA are tabulated in Table 1. The monitoring results of the T² statistic using DPCA are also provided [13,23]. Note that faults 3, 9 and 15 are

Table 1. Missed detection rates/detection delays (minutes) for the testing set.

Fault | Disturbance state | PCA T² | DPCA T² | WPCA T² | WPCA Q
1 | A/C feed ratio, B composition constant (stream 4) | 0.008/21 | 0.006/18 | 0.006/18 | 0.003/9
2 | B composition, A/C ratio constant (stream 4) | 0.020/51 | 0.019/48 | 0.016/42 | 0.014/36
3 | D feed temperature (stream 2) | 0.998/– | 0.991/– | 0.968/– | 0.991/–
4 | Reactor cooling water inlet temperature | 0.456/3 | 0.939/453 | 0.084/3 | 0.038/9
5 | Condenser cooling water inlet temperature | 0.775/48 | 0.758/6 | 0.719/3 | 0.746/3
6 | A feed loss (stream 1) | 0.011/30 | 0.013/33 | 0.008/15 | 0/3
7 | C header pressure loss—reduced availability (stream 4) | 0.085/3 | 0.159/3 | 0/3 | 0/3
8 | A, B, C feed composition (stream 4) | 0.034/69 | 0.028/69 | 0.030/60 | 0.024/60
9 | D feed temperature (stream 2) | 0.994/– | 0.995/– | 0.945/– | 0.981/–
10 | C feed temperature (stream 4) | 0.666/288 | 0.580/303 | 0.490/78 | 0.659/147
11 | Reactor cooling water inlet temperature | 0.794/18 | 0.801/585 | 0.349/18 | 0.356/33
12 | Condenser cooling water inlet temperature | 0.029/66 | 0.010/9 | 0.013/24 | 0.025/24
13 | Reaction kinetics | 0.060/147 | 0.049/135 | 0.045/111 | 0.045/111
14 | Reactor cooling water valve | 0.158/12 | 0.061/18 | 0/3 | 0/3
15 | Condenser cooling water valve | 0.988/– | 0.964/– | 0.873/2010 | 0.973/2220
16 | Unknown | 0.834/936 | 0.783/597 | 0.615/606 | 0.755/591
17 | Unknown | 0.259/87 | 0.240/84 | 0.155/66 | 0.108/75
18 | Unknown | 0.113/279 | 0.111/279 | 0.101/36 | 0.101/252
19 | Unknown | 0.996/– | 0.993/– | 0.859/27 | 0.873/–
20 | Unknown | 0.701/261 | 0.644/267 | 0.478/201 | 0.550/261
21 | The valve for stream 4 was fixed at the steady state position | 0.736/1689 | 0.644/1566 | 0.624/1434 | 0.570/855


excluded since the fault magnitude is too small and have been suggested to be difficult to detect [6,23]. From Table 1, we can see that using WPCA can efficiently reduce missed detection rates of T 2 statistic and detection delays, compared with PCA and DPCA methods. It should be pointed out that WPCA seems more complicated than the traditional PCA, however, the added part is just simple operation and it doesn't add much computational complexities when online monitoring. The main complexity of WPCA falls on the determining of the weighting matrix, which is a key step in WPCA and when the matrix is determined, there is not much difference between traditional PCA and WPCA. In TE process, the sample time is determined as 3 min, however, the average computing time for each sample point in PCA and WPCA is 8.2 × 10 −3s and 8.4 × 10 −3s, which can satisfy the online monitoring requirement easily. 4. Contribution plots in WPCA for fault diagnosis Once a fault has been detected, the next step is to determine the root cause of the out-of-control status. Contribution plots method is the most widely used for fault diagnosis in PCA-based process monitoring. The procedure of contribution plots in WPCA is summarized as follows: 1) Select the r scores  that out-of-control status, for instance, the scores ti with T 2i > 1k T 2 2) Calculate the contribution of each variable xj to the out-of-control scores ti

$$\mathrm{cont}_{i,j} = \frac{\eta_i t_i}{\lambda_i}\, p_{i,j}\, x_j \qquad (17)$$

where p_{i,j} is the (i, j)th element of the loading matrix P. 3) When cont_{i,j} is negative, set it equal to zero. 4) Calculate the total contribution of the jth process variable, x_j, as given by Eq. (18).

$$\mathrm{CONT}_j = \sum_{i=1}^{r} \left| \mathrm{cont}_{i,j} \right| \qquad (18)$$

5) Plot CONT_j for all s process variables, x_j, on a single graph.

The contribution plots for fault 4 using PCA and WPCA are shown in Fig. 13. Both the PCA method and the WPCA method indicate the change in condenser cooling water flow and the change in reactor temperature, which are the essential causes of fault 4. Compared with PCA, the fault information is enhanced by WPCA, providing a more informative indication for fault diagnosis.

Fig. 13. Contribution plots for fault 4 using PCA and WPCA methods.

5. Conclusions

This paper focuses on improving the monitoring performance of the conventional PCA-based monitoring scheme. Since the T2 statistic of PCA fails to detect some process faults, the detection behavior of the T2 statistic is illustrated and its monitoring performance is analyzed to unveil the cause. Process monitoring based on WPCA is proposed to highlight the significant information in the online process for both fault detection and diagnosis. In the WPCA method, the T2 statistic of each principal component is examined to find the most useful information for fault detection. Distinct weighting values are set according to the importance of the components in fault detection, which efficiently deals with the situation in which useful information is submerged and reduces the missed detection rates of the T2 statistic. The superiority of the proposed method is demonstrated on the TE process, and the results indicate that the monitoring performance is improved significantly in comparison with the PCA-based method.

The proposed WPCA method addresses the submerging of useful information in PCA monitoring; however, this situation exists widely in multivariate statistical process monitoring. If the problem were also considered in other monitoring schemes, such as nonlinear, non-Gaussian, or multi-level solutions, the monitoring performance could be further improved.

Acknowledgments

The authors gratefully acknowledge the support from the following foundations: the National Natural Science Foundation of China (21176073), the Doctoral Fund of Ministry of Education of China (20090074110005), the Program for New Century Excellent Talents in University (NCET-09-0346), the "Shu Guang" project (09SG29), the 973 project (2012CB721006), and the Fundamental Research Funds for the Central Universities.

References


[1] L.H. Chiang, E.L. Russell, R.D. Braatz, Fault diagnosis in chemical processes using Fisher discriminant analysis, discriminant partial least squares, and principal component analysis, Chemometrics and Intelligent Laboratory Systems 50 (2000) 243–252.
[2] J.V. Kresta, J.F. Macgregor, T.E. Marlin, Multivariate statistical monitoring of process operating performance, Canadian Journal of Chemical Engineering 69 (1991) 35–47.
[3] U. Kruger, G. Dimitriadis, Diagnosis of process faults in chemical systems using a local partial least squares approach, AICHE Journal 54 (2008) 2581–2596.
[4] J.M. Lee, C.K. Yoo, I.B. Lee, Statistical process monitoring with independent component analysis, Journal of Process Control 14 (2004) 467–485.
[5] S.J. Qin, Statistical process monitoring: basics and beyond, Journal of Chemometrics 17 (2003) 480–502.
[6] E.L. Russell, L.H. Chiang, R.D. Braatz, Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis, Chemometrics and Intelligent Laboratory Systems 51 (2000) 81–93.
[7] A. Widodo, B.S. Yang, Application of nonlinear feature extraction and support vector machines for fault diagnosis of induction motors, Expert Systems with Applications 33 (2007) 241–250.
[8] F. Jia, E. Martin, A. Morris, Non-linear principal components analysis with application to process fault detection, International Journal of Systems Science 31 (2000) 1473–1487.
[9] J.M. Lee, C.K. Yoo, S.W. Choi, P.A. Vanrolleghem, I.B. Lee, Nonlinear process monitoring using kernel principal component analysis, Chemical Engineering Science 59 (2004) 223–234.
[10] V.H. Nguyen, J.C. Golinval, Fault detection based on kernel principal component analysis, Engineering Structures 32 (2010) 3683–3691.
[11] S.W. Choi, I.B. Lee, Nonlinear dynamic process monitoring based on dynamic kernel PCA, Chemical Engineering Science 59 (2004) 5897–5908.
[12] R.F. Luo, M. Misra, D.M. Himmelblau, Sensor fault detection via multiscale analysis and dynamic PCA, Industrial and Engineering Chemistry Research 38 (1999) 1489–1495.
[13] F. Tsung, Statistical monitoring and diagnosis of automatic controlled processes using dynamic PCA, International Journal of Production Research 38 (2000) 625–637.
[14] Z.Q. Ge, Z.H. Song, Maximum-likelihood mixture factor analysis model and its application for process monitoring, Chemometrics and Intelligent Laboratory Systems 102 (2010) 53–61.
[15] Z.Q. Ge, Z.H. Song, Kernel generalization of PPCA for nonlinear probabilistic monitoring, Industrial and Engineering Chemistry Research 49 (2010) 11832–11836.


[16] D.S. Kim, I.B. Lee, Process monitoring based on probabilistic PCA, Chemometrics and Intelligent Laboratory Systems 67 (2003) 109–123.
[17] J.M. Lee, C. Yoo, I.B. Lee, Fault detection of batch processes using multiway kernel principal component analysis, Computers and Chemical Engineering 28 (2004) 1837–1847.
[18] P. Nomikos, J.F. MacGregor, Monitoring batch processes using multiway principal component analysis, AICHE Journal 40 (1994) 1361–1375.
[19] Q. Chen, U. Kruger, M. Meronk, A.Y.T. Leung, Synthesis of T-2 and Q statistics for process monitoring, Control Engineering Practice 12 (2004) 745–755.
[20] J.E. Jackson, A User's Guide to Principal Components, Wiley, New York, 1991.
[21] T. Kourti, J.F. MacGregor, Multivariate SPC methods for process and product monitoring, Journal of Quality Technology 28 (1996) 409–428.
[22] P. Miller, R. Swanson, C.E. Heckler, Contribution plots: a missing link in multivariate quality control, Applied Mathematics and Computer Science 8 (1998) 775–792.
[23] L.H. Chiang, E. Russell, R.D. Braatz, Fault Detection and Diagnosis in Industrial Systems, Springer-Verlag, London, 2001.
[24] S. Wold, Exponentially weighted moving principal components analysis and projections to latent structures, Chemometrics and Intelligent Laboratory Systems 23 (1994) 149–161.
[25] X.B. He, Y.P. Yang, Y.H. Yang, Fault diagnosis based on variable-weighted kernel Fisher discriminant analysis, Chemometrics and Intelligent Laboratory Systems 93 (2008) 27–33.

[26] D.L.S. Ferreira, S. Kittiwachana, L.A. Fido, D.R. Thompson, R.E.A. Escott, R.G. Brereton, Multilevel simultaneous component analysis for fault detection in multicampaign process monitoring: application to on-line high performance liquid chromatography of a continuous process, Analyst 134 (2009) 1571–1585.
[27] J.E. Jackson, Quality control methods for several related variables, Technometrics 1 (1959) 359–377.
[28] J.E. Jackson, G.S. Mudholkar, Control procedures for residuals associated with principal component analysis, Technometrics 21 (1979) 341–349.
[29] R.A. Johnson, D.W. Wichern, Applied Multivariate Statistical Analysis, Prentice Hall, Upper Saddle River, NJ, 2002.
[30] D.W. Scott, Multivariate Density Estimation, Wiley, New York, 1992.
[31] A.R. Webb, K.D. Copsey, G. Cawley, Statistical Pattern Recognition, Wiley, 2011.
[32] J.J. Downs, E.F. Vogel, A plant-wide industrial process control problem, Computers and Chemical Engineering 17 (1993) 245–255.
[33] T. McAvoy, N. Ye, Base control for the Tennessee Eastman problem, Computers and Chemical Engineering 18 (1994) 383–413.
[34] P. Lyman, C. Georgakis, Plant-wide control of the Tennessee Eastman problem, Computers and Chemical Engineering 19 (1995) 321–331.