ISA Transactions 49 (2010) 415–432
Multiple sensor fault diagnosis for dynamic processes

Cheng-Chih Li (a), Jyh-Cheng Jeng (b,*)

(a) Department of Chemical Engineering, National Taiwan University, Taipei 106, Taiwan
(b) Department of Chemical Engineering and Biotechnology, National Taipei University of Technology, Taipei 106, Taiwan
(*) Corresponding author. Tel.: +886 2 2771 2171x2540. E-mail address: [email protected] (J.-C. Jeng).

Article history: Received 17 November 2009; Received in revised form 30 April 2010; Accepted 13 May 2010; Available online 12 June 2010.

Keywords: Sensor fault diagnosis; Fault detection; Fault isolation; Fault identification; Fault isolatability
Abstract: Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is therefore crucial for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults in multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses the vector of resulting weights on each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The weight vector is further used to isolate and identify multiple sensor faults, and the associated isolatability and identifiability are discussed. Simulation examples and comparisons with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
1. Introduction

Process monitoring is important in manufacturing. Promptly detecting, diagnosing, and appropriately correcting process abnormalities reduces manufacturing costs and enhances plant safety. Typically, process monitoring involves four major tasks: (1) fault detection, which checks the status of a process; (2) fault isolation, which identifies the variables most relevant to the fault; (3) fault identification, which estimates the size of the fault; and (4) process recovery, which removes the root causes of the fault [1]. To accomplish the aforementioned tasks, sensor reliability becomes very important, especially in modern chemical plants with a large amount of instrumentation and many control loops. Sensor failure or malfunction in such plants is inevitable and propagates through the control loops to disturb the readings of other normally operating sensors, seriously affecting plant operation. Due to this propagation, sensor faults may not be easily identified from the trend plots of the corresponding readings unless those sensors are independent of the others. Such faults may also seriously affect plant safety and performance. Therefore, an effective approach for fault detection, isolation, and identification (FDII) triggered by sensor failures is always desirable in modern industrial processes. Sensor FDII has been an active research topic over the past decades, and many studies have proposed related methods and successfully applied them to industrial processes.
These methods can be classified into at least three major categories: diagnostic model-based, statistical model-free, and knowledge-based FDII methods. Diagnostic model-based methods use relationships between model states and faults to analyze the root causes and the size of a fault. Most methods in this category are based on observers and Kalman filters [2–4], parity equations [5–7], and structured or directional residuals [1,8]. Qin and Li [9,10] and Li and Shah [11] have addressed more recent works in this area. When causal process models exist, the diagnostic models can be built accurately and the corresponding faults identified precisely, but only if the identified diagnostic models correctly characterize all physical and chemical phenomena occurring in the plant. The diagnostic model-based methods, however, often encounter difficulties in identifying good diagnostic models due to the lack of prior process knowledge and heavy modeling costs. To overcome the difficulties encountered by the diagnostic model-based methods, statistical model-free methods for process monitoring were developed. Most of these methods are relatively easy to apply to rather large and complex processes. Traditional statistical model-free methods for monitoring processes originated from the univariate Shewhart chart, the CUSUM (cumulative sum) chart and the EWMA (exponentially weighted moving average) chart. These univariate methods are easy to implement, are well established, and have been shown to be effective in many academic and industrial examples [12]. However, the univariate control charts are not appropriate for diagnosing faults in a multivariate environment, due to the cross-correlations between the variables [13]. To overcome the shortcomings of univariate statistical
process control (SPC) methods, multivariate analysis methods such as principal component analysis (PCA) and partial least squares (PLS) have been applied to monitor and diagnose multivariate processes [14]. In addition, Hotelling's T^2 and Q statistics from PCA, together with their corresponding contribution plots, are usually applied to identify the root causes of such faults [13,15–18]. These contribution plots typically cannot isolate the correct faulty variables, since the conventional multivariate analysis techniques only consider the correlation model among variables. Yue and Qin [19] proposed a combined index of T^2 and Q for monitoring and a reconstruction-based method for fault identification based on that combined index. In order to identify the optimal faulty variables, prior knowledge, i.e. the possible fault scenarios, is required, and the number of calculations equals the number of selected fault scenarios. The conclusion made by such a method may be incorrect if the current faulty condition is not included in the pre-defined scenarios. Research has also proposed different similarity-measure indices for process monitoring and fault diagnosis [20–22]. On the other hand, Fisher discriminant analysis (FDA) is widely used for pattern classification [23]. FDA utilizes within/between-class scattering information to analyze the data from different classes. By exploring the optimal separation directions, the raw data are projected onto a lower-dimensional space. The work of Chiang et al. [24] is perhaps the first to use FDA to classify unidentified chemical process data into proper groups. He et al. [25] recently proposed contribution plots based on pair-wise FDA to analyze the root causes of associated faults. The contribution plots from FDA, however, do not always identify such causes clearly and consistently, because the root variables are related to other variables and the faults may propagate [25,26]. Unlike conventional model-based approaches, the major difficulty of the statistical model-free ones is that these methods do not provide a causal description that can be used to diagnose faults. Knowledge-based expert systems have also been developed as a complementary FDII tool that uses diagnostic rules and qualitative process knowledge models. However, building such models requires considerable effort. The literature reports using neural networks as an alternative approach to build knowledge-based models, but the requirement of comprehensive plant data or detailed simulations limits their applicability. Yoon and MacGregor [26] proposed a fault diagnosis method that can clearly identify the root causes of an abnormality, based on known patterns in the normal and abnormal data sets. Fault signatures are constructed for clearly defined faults in the PCA model and residual spaces. After detecting a new fault in the same space, the method uses the angles between the fault vectors and the fault signatures to identify their root causes. In contrast to the existing PCA and PLS methods for FDII, this paper presents a new method to isolate and identify multiple sensor faults in a process. This study first defines a deviation vector, and then derives the basic fault vector, which is the deviation vector under a specific sensor fault, by experiments or analyses. The current work then constructs a basic sensor fault matrix (BSFM), with these basic fault vectors, as the basis of a fault space.
By projecting a deviation vector onto the BSFM space, this research proposes a novel monitoring index based on the corresponding projection weights to detect process abnormalities. Analyzing these projection weights clearly isolates and identifies the root causes associated with a newly detected fault. The remaining sections of this paper are organized as follows. Section 2 gives a brief review of current fault diagnostic methods using PCA and FDA, and uses a simple numerical example to illustrate the shortcomings of these existing methods. Section 3 deals with constructing the BSFM, which is the kernel of the proposed fault diagnosis. Section 4 presents the computation of weights from projecting a newly observed deviation vector onto the constructed BSFM space
and proposes a novel process monitoring index for fault detection, which is very sensitive to abnormal conditions. The section also gives the corresponding control threshold for this index. Section 5 depicts the methods for isolating and identifying multiple sensor faults in a dynamic system based on the weights developed in Section 4. Section 6 presents industrial processes as examples to demonstrate the effectiveness and practicality of the proposed FDII methodology. Comparisons of fault isolability between the proposed method and conventional PCA-based contribution plots are also given. Concluding remarks based on the results constitute Section 7.

2. Review of conventional methods

Since the proposed method has statistical model-free characteristics, this section briefly reviews some relevant methods in the literature, such as the monitoring statistics and contribution plots in PCA, typical classification with FDA, and the corresponding FDA contribution plots. Then, a simple numerical example is used to illustrate the weaknesses of these methods. This contrast addresses the motivation of this work.

2.1. Principal component analysis (PCA)

Principal component analysis (PCA) decomposes a data matrix into a transformed space defined by the eigenvectors of the covariance matrix associated with the data matrix. PCA maps the data set onto a reduced-dimension subspace defined by the span of a chosen subset of those eigenvectors. Each chosen eigenvector, or principal component (PC), captures the maximum amount of variability in the data in an ordered fashion. In other words, the first PC explains the greatest amount of variation, the second explains the next largest amount after removal of the first, and so on. In this way, PCA forms the basis of multivariate data analysis [27]. Let X ∈ R^{n×m} denote the mean-removed raw data matrix with n observations and m variables, and S_X ∈ R^{m×m} denote the estimate of the covariance matrix of X. Applying eigendecomposition to the square and positive-semidefinite matrix S_X results in

S_X = V Λ V^T.   (1)

Let V = [v_1 · · · v_l | v_{l+1} · · · v_m] = [P  P~] collect the eigenvectors, or loading vectors, ordered by decreasing eigenvalues. The matrix X can thus be decomposed as

X = T P^T + T~ P~^T   (2)

where T = XP and T~ = XP~. When PCA is used for process monitoring, the Hotelling T^2 statistic and the squared prediction error (SPE or Q) are most often used. T^2 is a scaled 2-norm of a score vector, t, in the principal component space, and Q is the 2-norm of a residual vector in the residual space:

T^2 = t^T Λ_l^{-1} t = x^T P Λ_l^{-1} P^T x
Q = x~^T x~,   x~ = P~ P~^T x = (I − P P^T) x   (3)

where Λ_l contains the l largest eigenvalues corresponding to the retained loadings and scores. Scalar thresholds, T_α^2 and Q_α, can then be used to qualify the process status; approximate thresholds can be determined in several ways by applying probability distribution assumptions [14,28–30]. Once a fault has been detected, contribution plots are the common tool used to isolate the variables that most affect the abnormality. Since Q can be broken down as the sum of the squared residuals, i.e. Q = x~_1^2 + · · · + x~_m^2, the element x~_i^2 usually stands for the contribution of the ith variable to Q. Since the definition of contributions based on T^2 is non-unique [17], this article adopts c_i = t^T Λ_l^{-1} p_i^T x_i, where p_i is the ith row of P, as the contribution to T^2 for demonstration.
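To make the statistics in Eqs. (1)–(3) concrete, the following minimal sketch (not part of the original paper) computes T^2, Q and their per-variable contributions with NumPy; the retained dimension l and the function interface are illustrative assumptions.

```python
import numpy as np

def pca_monitoring(X_train, x_new, l=2):
    """T^2 and Q statistics plus simple contribution terms (Eqs. (1)-(3))."""
    mean = X_train.mean(axis=0)
    X = X_train - mean                            # mean-removed data matrix
    S = np.cov(X, rowvar=False)                   # covariance estimate S_X
    eigval, V = np.linalg.eigh(S)                 # eigendecomposition S_X = V Lam V^T
    order = np.argsort(eigval)[::-1]              # sort by decreasing eigenvalue
    eigval, V = eigval[order], V[:, order]
    P, lam_l = V[:, :l], eigval[:l]               # retained loadings and eigenvalues

    x = x_new - mean                              # mean-removed new observation
    t = P.T @ x                                   # score vector
    T2 = t @ np.diag(1.0 / lam_l) @ t             # Hotelling T^2
    x_res = x - P @ t                             # residual (I - P P^T) x
    Q = x_res @ x_res                             # squared prediction error

    q_contrib = x_res ** 2                        # per-variable contribution to Q
    t2_contrib = (t @ np.diag(1.0 / lam_l) @ P.T) * x   # c_i = t^T Lam_l^-1 p_i^T x_i
    return T2, Q, t2_contrib, q_contrib
```

The contribution terms returned here correspond to the (non-unique) T^2 contribution adopted above and to the squared residuals of Q.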
Fig. 1. (a) Scatter plot of X0 and X1 (b) contribution plots for X1 (c) scatter plot of X0 and X2 (d) contribution plots for X2 .
2.2. Fisher discriminant analysis (FDA)

The Fisher discriminant analysis, extensively studied in pattern classification, accounts for the information between classes. FDA basically applies a linear transformation to an original data set, which is then classified optimally so as to maximize the separation among the different classes. Let X = [X_0^T X_1^T · · · X_c^T]^T denote the data set consisting of c + 1 classes. In each class (i.e. X_i), there are n_i observations and m variables. The definitions of the within-scatter, within-class-scatter, and between-class-scatter matrices (S_i, S_w, and S_b) are as follows:

S_i = Σ_{x ∈ X_i} (x − x̄_i)(x − x̄_i)^T
S_w = Σ_{i=0}^{c} S_i   (4)
S_b = Σ_{i=0}^{c} n_i (x̄_i − x̄)(x̄_i − x̄)^T

where x̄_i = (1/n_i) Σ_{x ∈ X_i} x and x̄ = (1/n) Σ_{x ∈ X} x are the weight centers of each class and of the total population, respectively. The optimal discriminant vector, w, based on Fisher's discriminant criteria can be found by solving the following generalized eigenvalue problem:

S_b w = λ S_w w.   (5)

For fault diagnosis, He et al. [25] proposed contribution plots based on the discriminant vectors obtained by applying FDA in a pair-wise way to the normal data, X_0, and to each class of faulty data, X_i (i = 1 · · · c). Let X_pi = [X_0^T  X_i^T]^T denote the pair-wise set of X_0 and X_i. Then, the corresponding S_wi and S_bi for X_pi are given as S_wi = S_0 + S_i and S_bi = Σ_{j=0,i} n_j (x̄_j − x̄_pi)(x̄_j − x̄_pi)^T, respectively. Applying Eq. (5) to S_wi and S_bi, He et al. [25] defined the corresponding discriminant vector, w_i = [w_i^(1) · · · w_i^(m)]^T, with the largest eigenvalue as the fault direction between the normal data set and the faulty set i. The element w_i^(j) is considered as the contribution of the jth variable to the fault i.

2.3. Limitations of conventional FDII methods

This subsection uses a simple illustrative numerical example to test the effectiveness of using contribution plots obtained from PCA and FDA for FDII. The study assesses the effectiveness of each method by the accuracy and consistency of the associated contribution plots. This work generates the data used in this example with a multivariate random number generation engine in Matlab using different seeds. Let X_i ∈ R^{200×2} represent a data set with two hundred observations of two variables. X_0 designates the data set under normal operation, and X_j (j = 1, 2) designates the data sets that have a bias fault in the x_j direction. The current study assumes that the observations x_i ∈ X_i have bivariate normal distributions with the same covariance but different means, i.e.

x_i ~ N(μ_i, Σ),   i = 0, 1, 2   (6)

where Σ = [2 2; 2 3], μ_0 = [0 0]^T, μ_1 = [10 0]^T, μ_2 = [0 15]^T. Fig. 1(a) shows the scatter plot of data sets X_0 and X_1, where the symbol "◦" represents the observations under normal operation and the symbol "×" represents the faulty samples with a mean shift in the x_1 direction. By performing PCA on the normal data set X_0 and pair-wise FDA on the normal and faulty data sets X_0 and X_1, the resulting principal, residual and Fisher discriminant directions are plotted in Fig. 1(a) (shown as solid line, dotted line, and dashed line, respectively). Fig. 1(b) shows the contribution plots for T^2, Q and FDA, where the black bar is the contribution from variable x_1 and the white bar is the contribution from x_2, according to the statistics T^2 and Q and the discriminant vector, respectively. The values of those contributions are given as

[c_1  x~_1  w_1] = [ 1   0.81  −0.84
                     0  −0.59   0.54 ].   (7)
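As an aside, the pair-wise FDA contribution of Eqs. (4)–(5) can be obtained with a few lines of code; the sketch below (an illustration under the stated assumptions, not the authors' implementation) uses SciPy's generalized symmetric eigensolver.

```python
import numpy as np
from scipy.linalg import eigh

def pairwise_fda_direction(X0, Xi):
    """Largest-eigenvalue discriminant vector between normal data X0 and faulty data Xi
    (Eqs. (4)-(5)); its entries serve as the FDA contributions of He et al. [25]."""
    def scatter(X):
        d = X - X.mean(axis=0)
        return d.T @ d                              # within-scatter S_i

    Xp = np.vstack([X0, Xi])                        # pair-wise data set X_pi
    xp_bar = Xp.mean(axis=0)
    Sw = scatter(X0) + scatter(Xi)                  # within-class scatter S_wi
    Sb = sum(len(X) * np.outer(X.mean(axis=0) - xp_bar,
                               X.mean(axis=0) - xp_bar)
             for X in (X0, Xi))                     # between-class scatter S_bi

    # generalized eigenvalue problem S_b w = lambda S_w w; keep the largest eigenvalue
    eigval, W = eigh(Sb, Sw)
    return W[:, np.argmax(eigval)]                  # discriminant vector w_i
```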
For the samples X_2 with a bias fault in the x_2 direction, Fig. 1(c) shows the scatter plot, where the symbol "◦" represents the observations under normal operation and the symbol "+" represents the faulty samples with a mean shift in the x_2 direction. Fig. 1(d) depicts the corresponding contribution plots for T^2, Q and FDA. The corresponding values of those contributions are as follows:

[c_2  x~_2  w_2] = [ 0  −0.81  −0.72
                     1   0.59   0.69 ].   (8)

Note that the contributions for T^2 and Q are the averages of the individual sample contributions based on PCA. The results of this demonstration clearly show that the contribution plots based on Q and pair-wise FDA provide ambiguous or incorrect conclusions regarding the root causes of the corresponding faults. In contrast, the contribution plots based on T^2 explicitly give consistent root causes of the faults. Yoon and MacGregor [26] argue, however, that the contribution plots of the T^2 and Q statistics come from an underlying correlation model, which does not provide causal relationships among the variables, and hence do not provide direct fault isolation. These plots only show which group of variables is highly correlated with the fault; the engineer must use process insight to provide a reasonable interpretation. A simple illustrative example demonstrated by He et al. [25] clearly displays the corresponding shortcomings of the conventional T^2 and Q contribution plots. To summarize, the conventional contribution plots based on multivariate statistics lack the capability to isolate the root causes of a fault, because these methods provide only the correlations among the variables learned from training, not the causal relations in the process. Therefore, a novel approach with a higher capability to deal with such problems is proposed to relax this limitation.

3. Basic fault vectors and basic sensor fault matrix

Fig. 2. Proposed FDII scheme.

Fig. 2 illustrates the basic configuration of a process and the possible entries of faults, where G_p is the plant, G_c designates the controllers, and Ĝ represents the submodel. Variables associated with G_p, G_c, and Ĝ can be classified as open-loop variables (designated as x_m) and closed-loop variables (designated as x_c). Table 1 lists the nomenclature, description and corresponding dimensions of the variables depicted in Fig. 2. Therefore, this research defines the variable vectors x_m, x_c and x̂_c as follows:

x_m = [u_m^T  y_m^T]^T,   x_c = [u_c^T  y_c^T]^T,   x̂_c = [û_c^T  ŷ_c^T]^T.   (9)

Under normal operating condition (NOC), this work assumes the perturbations to u and y (designated as δu and δy) to be independently and identically distributed (i.i.d.) normal variables with zero means and specified variances. Fig. 2 also provides the estimates of x_c (i.e. x̂_c) from the submodel and the controller outputs. Define the vector r as the residual between x_c and x̂_c, i.e. r = x_c − x̂_c. Finally, define the variable vector z as the augmentation of x_m and r, which stands for the current state of the process:

z = [x_m^T  r^T]^T.   (10)

Denote z^0 as the estimate of the state under NOC, which is the average of the collected normal data set, Z_0:

z^0 = mean(Z_0) ≈ E(z|NOC).   (11)

The vector z^0 is then a typical reference vector. Define the deviation vector, d*, as the difference between any state z and the reference z^0, that is:

d* = z − z^0.   (12)

Let δu_m(i), δy_m(j), δu_c(k) and δy_c(l) be the constant biases associated with each of the four variable groups. That is:

δu_m(i) = [0 · · · 0  δ  0 · · · 0]^T ∈ R^{n_um×1}   (δ in the ith position)
δy_m(j) = [0 · · · 0  δ  0 · · · 0]^T ∈ R^{n_ym×1}   (δ in the jth position)
δu_c(k) = [0 · · · 0  δ  0 · · · 0]^T ∈ R^{n_uc×1}   (δ in the kth position)
δy_c(l) = [0 · · · 0  δ  0 · · · 0]^T ∈ R^{n_yc×1}   (δ in the lth position)

where δ ≠ 0 is the magnitude of the constant bias acting as the root cause of the fault. For each of the biases, i.e. δu_m, δy_m, δu_c and δy_c, the resulting basic fault vector for sensor i, d_i*, can be defined, where i = 1 · · · n_um, n_um + 1 · · · n_um + n_ym, n_um + n_ym + 1 · · · n_um + n_ym + n_uc, and n_um + n_ym + n_uc + 1 · · · n for u_m, y_m, u_c and y_c, respectively. Then, a matrix D which collects all these basic fault vectors is defined by Eq. (13) given in Box I, where n = n_um + n_ym + n_uc + n_yc is the total number of sensors and d_i = d_i*/||d_i*|| is the ith unit-length vector in D. This paper refers to the unit-length vector d_i as the normalized basic sensor fault vector, and to the matrix D as the basic sensor fault matrix (abbreviated as BSFM). The first n_um columns are the basic fault vectors associated with the sensors for u_m, and the next n_ym, n_uc, and n_yc columns are associated with the variables y_m, u_c, and y_c.

3.1. Constructing BSFM by the perturbing method

Constructing a basic fault vector (i.e. d) by the perturbing method involves two major steps. The first step collects the normal set Z_0 and estimates the associated reference state z^0 using Eq. (11). The second step introduces a small constant bias as a perturbation to the ith sensor, records the corresponding data set Z_(i), and computes its mean as

z_(i) = mean(Z_(i)) ≈ E(z|ith sensor fault).   (14)

The basic fault vector, d_i*, is then given as

d_i* = z_(i) − z^0.   (15)

The major information contained in d_i* is its direction; its magnitude is relevant only to the size of the introduced bias. For convenience, the vector d_i* is normalized to unit length, i.e.

d_i = d_i*/||d_i*||   (16)

where d_i* is the un-scaled basic fault vector from Eq. (15). Empirically, the magnitude of the bias introduced to each sensor can be set as large as one standard deviation of its normal measurement. Furthermore, if the variable readings are scaled by some scaling matrix S^{-1}, the vector d_i should be corrected as

d_i = S^{-1} d_i*/||S^{-1} d_i*||.   (17)
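The perturbing construction of Eqs. (14)–(17) amounts to differencing sample means and normalizing. A minimal sketch under an assumed data layout (not the paper's code) is given below; Z0 and Z_pert[i] stand for the NOC data set and the data set recorded with a bias on sensor i.

```python
import numpy as np

def build_bsfm_perturbing(Z0, Z_pert, scale=None):
    """Assemble the BSFM column by column from perturbing experiments.

    Z0     : (N0, n) array of state vectors z under NOC.
    Z_pert : list of n arrays, Z_pert[i] collected with a bias on sensor i.
    scale  : optional vector of scale factors (diagonal of S), if variables are scaled.
    """
    z0 = Z0.mean(axis=0)                       # reference state, Eq. (11)
    n = Z0.shape[1]
    D = np.zeros((n, n))
    for i, Zi in enumerate(Z_pert):
        d_star = Zi.mean(axis=0) - z0          # basic fault vector, Eqs. (14)-(15)
        if scale is not None:
            d_star = d_star / scale            # apply S^-1 when variables are scaled, Eq. (17)
        D[:, i] = d_star / np.linalg.norm(d_star)   # normalize to unit length, Eq. (16)
    return D
```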
Table 1
Description of variables.

Variable                                  Description
u_m^0, y_m^0, u_c^0, y_c^0                Actual inputs and outputs of the plant
u_m, y_m, u_c, y_c                        Measurements of the variables
û_c, ŷ_c                                  Estimates of the controlled variables
δu_m^d, δy_m^d, δu_c^d, δy_c^d            Outside disturbances of the process
δu_m, δy_m, δu_c, δy_c                    Measurement errors of the variables
r^T = [(u_c − û_c)^T  (y_c − ŷ_c)^T]      Residual between measurements and estimates

Dimensions: u_m^0, u_m, δu_m^d, δu_m ∈ R^{n_um}; y_m^0, y_m, δy_m^d, δy_m ∈ R^{n_ym}; u_c^0, u_c, δu_c^d, δu_c ∈ R^{n_uc}; y_c^0, y_c, δy_c^d, δy_c ∈ R^{n_yc}.
D = [ d_1 · · · d_{n_um} | d_{n_um+1} · · · d_{n_um+n_ym} | d_{n_um+n_ym+1} · · · d_{n_um+n_ym+n_uc} | d_{n_um+n_ym+n_uc+1} · · · d_n ]
  = [ D_um  D_ym  D_uc  D_yc ],   D ∈ R^{n×n}   (13)

Box I.
Finally, the complete BSFM, D, is obtained by repeating the above perturbing procedure for each of the sensors in the process.
3.2. Constructing BSFM by the analytical method

An alternative way to construct the BSFM is an analytical derivation, provided that the process gain matrices are available. Let K_p and K̂ denote the steady-state gain matrices of the process G_p and the sub-model Ĝ, respectively. These gain matrices can be divided into the blocks given by Eqs. (18) and (19) in Box II. Let δu_m, δy_m, δu_c and δy_c be the sensor faults due to constant biases in the corresponding measured variables (i.e. u_m, y_m, u_c and y_c), respectively. Let d_δum be the basic fault vector associated with a biased u_m (i.e. δu_m) only; similar definitions apply to the other variations δy_m, δu_c and δy_c.

The basic fault vector d_δum associated with δu_m can be derived as follows. When the bias δu_m occurs, it appears as a false reading only and does not affect the actual inputs u_m^0 and u_c^0. Consequently, the measured values of y_m and y_c are not affected. Furthermore, the submodel inputs and outputs also behave normally. Therefore, the following statements hold:

E(u_c) = E(u_c^0) = u_c^0
E(û_c) = E(u_c^0 − δu_c^d) = E(u_c^0) = u_c^0
E(y_m) = E(^m K_{p,m} u_m^0 + ^c K_{p,m} u_c^0 + δy_m^d + δy_m) = E(y_m^0) = y_m^0
E(y_c) = E(^m K_{p,c} u_m^0 + ^c K_{p,c} u_c^0 + δy_c^d + δy_c) = E(y_c^0) = y_c^0.   (20)

The sensor bias δu_m will only appear in the measurement reading of u_m as

E(u_m) = E(u_m^0 + δu_m) = E(u_m^0) + δu_m = u_m^0 + δu_m.   (21)

From the definition of the deviation vector, Eq. (12), the basic fault vector d_δum for δu_m is

d_δum = [ (u_m^0 + δu_m) − u_m^0 ;  y_m^0 − y_m^0 ;  (u_c^0 − û_c^0) − (u_c^0 − û_c^0) ;  (y_c^0 − ŷ_c^0) − (y_c^0 − ŷ_c^0) ] = [ δu_m ; 0 ; 0 ; 0 ].   (22)

By a similar procedure, the basic fault vector associated with δy_m can be derived as

d_δym = [0^T  (δy_m)^T  0^T  0^T]^T.   (23)

When the sensor bias δu_c occurs, the actual inputs u_m^0 and u_c^0 are not affected. This also means that the relevant actual outputs, y_m^0 and y_c^0, behave normally around their corresponding steady states; thus

E(u_m) = u_m^0,   E(û_c) = u_c^0,   E(y_m) = y_m^0,   E(y_c) = y_c^0.   (24)

However, the sensor bias δu_c will contribute to both u_c and the estimated outputs ŷ_c of the submodel. In other words, E(u_c) and E(ŷ_c) will be given as follows:

E(u_c) = E(u_c^0 + δu_c) = u_c^0 + δu_c
E(ŷ_c) = E(K̂ u_c) = E(K̂ u_c^0 + K̂ δu_c) = ŷ_c^0 + K̂ δu_c.   (25)

Applying Eq. (12), the basic fault vector d_δuc can be expressed as the following vector:

d_δuc = [0^T  0^T  (δu_c)^T  (−K̂ δu_c)^T]^T.   (26)

For the analytical derivation, the following relations hold true in case of the occurrence of δy_c:

E(u_m) = u_m^0,   E(u_c) = E(û_c).   (27)

Eq. (27) can be interpreted in the following way: (1) the measured u_m are not affected by δy_c, and (2) the controlled variables, u_c, and their estimates, û_c, should be consistent since no fault exists between them. Notably, E(u_c) = E(û_c) ≠ u_c^0, because the actual closed-loop outputs, y_c*, are not kept at their corresponding set points due to the false output feedback. Instead, the false measurements, y_c, are controlled and stay at the nominal set points:

E(y_c) = E(y_c^0 + δy_c^d + δy_c) = y_c* + δy_c = y_c^0   (28)

where y_c* is the actual output of the process, which is not the same as the corresponding sensor readings.
K_p = [ ^m K_{p,m}   ^c K_{p,m}
        ^m K_{p,c}   ^c K_{p,c} ]   (18)

where ^m K_{p,m} = [k_{p,m}^{(1)} · · · k_{p,m}^{(n_um)}], ^c K_{p,m} = [k_{p,m}^{(n_um+1)} · · · k_{p,m}^{(n_um+n_uc)}], ^m K_{p,c} = [k_{p,c}^{(1)} · · · k_{p,c}^{(n_um)}] and ^c K_{p,c} = [k_{p,c}^{(n_um+1)} · · · k_{p,c}^{(n_um+n_uc)}] collect the steady-state gains from the measured inputs u_m and the controlled inputs u_c to the outputs y_m and y_c, respectively, and

K̂ = [ k̂^{(1)}  k̂^{(2)} · · · k̂^{(n_uc)} ].   (19)

Box II.
Using the relations between the actual outputs, y_c*, and the relevant inputs, u_m and u_c, results in

y_c* = y_c^0 − δy_c = E(^m K_{p,c} u_m + ^c K_{p,c} u_c)
     = E(^m K_{p,c} (u_m^0 + δu_m)) + ^c K_{p,c} E(u_c)
     = ^m y_c^0 + ^c K_{p,c} E(u_c).   (29)

If the feedback control structure is square, the steady-state closed-loop inputs, u_c, associated with δy_c can be computed as

E(u_c) = (^c K_{p,c})^{-1} (y_c^0 − δy_c − ^m y_c^0) = E(û_c).   (30)

Consequently, the corresponding steady states of the submodel outputs ŷ_c and the measured outputs y_m will be

E(ŷ_c) = E(K̂ û_c) = K̂ (^c K_{p,c})^{-1} (y_c^0 − δy_c − ^m y_c^0)
E(y_m) = E(^m K_{p,m} u_m + ^c K_{p,m} u_c) = ^m y_m^0 + ^c K_{p,m} (^c K_{p,c})^{-1} (y_c^0 − δy_c − ^m y_c^0).   (31)

Using Eq. (12), the basic fault vector d_δyc can be written as follows:

d_δyc = [ 0 ;
          ^m y_m^0 + ^c K_{p,m} (^c K_{p,c})^{-1} (y_c^0 − δy_c − ^m y_c^0) − y_m^0 ;
          0 ;
          [y_c^0 − K̂ (^c K_{p,c})^{-1} (y_c^0 − δy_c − ^m y_c^0)] − [y_c^0 − ŷ_c^0] ].   (32)

Note that d_δum, d_δym, d_δuc and d_δyc are the subsets of basic fault vectors in the BSFM defined for u_m, y_m, u_c and y_c. For constructing the BSFM, D, each basic fault vector in d_t (t = δu_m, δy_m, δu_c or δy_c) can be computed using Eqs. (22), (23), (26) and (32). Notice also that each vector is normalized according to Eq. (16) or (17) before being included as a column of the BSFM. From the aforementioned equations, the BSFM, D, can be expressed as follows. From Eqs. (22) and (23), the matrices D_um and D_ym are given as

D_um = [I  0  0  0]^T,   D_ym = [0  I  0  0]^T.   (33)

Using Eqs. (26) and (32), the matrices D_uc and D_yc corresponding to δu_c and δy_c are

D_uc = [0^T  0^T  (^{uc}D_uc)^T  (^{yc}D_uc)^T]^T,   D_yc = [0^T  (^{ym}D_yc)^T  0^T  (^{yc}D_yc)^T]^T.   (34)

The complete BSFM, D, is therefore

D = [ [I ; 0]   D_uc   D_yc ].   (35)
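As an illustration of Eqs. (26) and (33)–(35), the following sketch (an assumption-laden example, not the paper's code) assembles D analytically when only the submodel gain K̂ is available and the block D_yc has been obtained separately, e.g. by perturbing experiments.

```python
import numpy as np

def build_bsfm_analytical(K_hat, D_yc, n_um, n_ym):
    """Assemble D = [D_um D_ym D_uc D_yc] per Eqs. (26) and (33)-(35).

    K_hat : (n_yc, n_uc) steady-state gain of the submodel G_hat.
    D_yc  : (n, n_yc) block for the controlled-output sensors (e.g. from perturbing).
    """
    n_yc, n_uc = K_hat.shape
    n = n_um + n_ym + n_uc + n_yc

    D_um = np.vstack([np.eye(n_um),
                      np.zeros((n - n_um, n_um))])            # Eq. (33)
    D_ym = np.vstack([np.zeros((n_um, n_ym)), np.eye(n_ym),
                      np.zeros((n_uc + n_yc, n_ym))])          # Eq. (33)

    D_uc = np.zeros((n, n_uc))
    for k in range(n_uc):
        delta = np.zeros(n_uc); delta[k] = 1.0                 # unit bias on the kth u_c sensor
        d = np.concatenate([np.zeros(n_um + n_ym), delta, -K_hat @ delta])   # Eq. (26)
        D_uc[:, k] = d / np.linalg.norm(d)                     # normalize, Eq. (16)

    return np.hstack([D_um, D_ym, D_uc, D_yc])                 # Eq. (35)
```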
Under the proposed monitoring architecture, the submodel Ĝ must be identified for the proposed FDII scheme. Therefore, the matrix D_uc can be easily derived with Eq. (26) by introducing δu_c as the single sensor fault pattern. The matrix D_yc, however, can be derived analytically only if the whole process model, G_p, or its corresponding gain matrix, K_p, is available. Consequently, the hybrid method proposed in the following subsection is used to compute D_yc directly without the considerable effort of identifying G_p or K_p. Similarly to the perturbing method, Eq. (17) should be used to adjust the BSFM if the variables are scaled with some scaling matrix, S^{-1}.

3.3. Constructing BSFM by the hybrid method

The advantages and disadvantages of the perturbing and analytical methods for constructing the matrix D, as depicted in the previous subsections, can be summarized as follows. The perturbing method is straightforward and easy to implement in a process. However, it requires a total of n_um + n_ym + n_uc + n_yc perturbing experiments. Such experiments are usually time-consuming and costly, and they seriously interrupt process operations. On the other hand, provided that the process model G_p is available, no experiments are needed to construct the matrix D by the analytical method. Nevertheless, this method requires considerable effort for identifying G_p, especially for a large-scale process. The hybrid method combines the advantages of both and is hence the most recommended.

According to the basic monitoring configuration of a process given in Fig. 2, the models Ĝ and G_c should always be available. Eqs. (22) and (23) show that the single-sensor fault matrices D_um and D_ym can be built without any prior knowledge of the process. From Eq. (26), the matrix D_uc can be constructed directly using Ĝ only. The information of G_p is necessary only for developing D_yc by the analytical method. Hence, the analytical method is most beneficial for developing D_um, D_ym and D_uc, and the perturbing method for D_yc, thereby avoiding the tedious procedure of identifying G_p. With this modification, only n_yc experiments are required to build the matrix D, which significantly reduces the number of experiments.

3.4. Adaptively updating BSFM

Maintenance and revamping are practical issues in a real plant, and disabling, displacing, or plugging sensors during these kinds of operations is common. Effectively adapting the BSFM to satisfy system requirements is one of the beneficial properties of the proposed method. Adding or removing sensors does not require retraining or re-analyzing all sensors to build a new BSFM, but only modifying some related columns. Based on the aforementioned procedures for establishing the BSFM, the current work introduces the following adaptive procedure (a minimal code sketch of these update operations follows the list).

• Disable / remove sensors from the process. Reconstruct the new BSFM by removing the corresponding rows and columns from the original one.
• Plug new sensors, which belong to u_m, to the process. Establish d_δum by Eq. (22) and normalize it to unit length. Append this column to the original BSFM and fill the remaining appended elements with zeros.
• Plug new sensors, which belong to ym , to the process.
Establish dδ ym using Eq. (23) and scale it to unit length as dym . Modify Dyc using Eqs. (32) and (34) or the perturbing method. Append dym to the original BSFM; modify Dyc and fill the remaining appended elements with zeros. • Plug new sensors, which belong to uc , to the process. Establish dδ uc by Eq. (26) with unit length scaling or by the perturbing method as duc . Append duc to the original BSFM, and fill the remaining appended elements with zeros. • Plug new sensors, which belong to yc , to the process. Establish dδ yc by Eq. (32) with unit length scaling or by the perturbing method as dyc . Modify Duc with Eqs. (26) and (34) or by the perturbing method. Append dyc and modify Duc to the original BSFM; fill the remaining appended elements with zeros.
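A minimal sketch of the update operations above (the helper names are assumptions; the new column new_d is taken as already normalized per Eq. (16) or (17), and any additional block modification of D_uc or D_yc described in the list is left to the caller):

```python
import numpy as np

def remove_sensor(D, i):
    """Drop a disabled sensor: delete its row and column from the BSFM."""
    return np.delete(np.delete(D, i, axis=0), i, axis=1)

def append_sensor(D, new_d):
    """Append a new sensor whose normalized basic fault vector new_d has length n+1
    (Eq. (22), (23), (26) or (32) evaluated in the enlarged state space)."""
    n = D.shape[0]
    D_new = np.zeros((n + 1, n + 1))
    D_new[:n, :n] = D            # original BSFM is kept
    D_new[:, n] = new_d          # appended column for the new sensor
    # the new row of the existing columns is filled with zeros (Section 3.4);
    # blocks D_uc / D_yc may additionally need updating as described above
    return D_new
```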
4. Fault detection

Let d_f(k) denote the deviation of z(k) from z^0, that is:

d_f(k) = z(k) − z^0.   (36)

Note that d_f can be a vector under NOC or under faulty conditions. According to the previous analysis, each basic fault vector is independent of the others, and the BSFM serves as a basis for the space. Therefore, d_f can be spanned by this set of bases and expressed as a linear combination of the basic fault vectors:

d_f(k) = φ_1(k) d_1 + φ_2(k) d_2 + · · · + φ_n(k) d_n = D φ(k)   (37)

in which n = n_um + n_ym + n_uc + n_yc. Finding the optimal estimate of φ(k) means projecting d_f(k) onto each of the bases to obtain

φ̂(k) = (D^T D)^{-1} D^T d_f(k).   (38)

Moreover, since the BSFM is a square matrix, as shown in Eq. (13) given in Box I, the projection matrix of Eq. (38) can be replaced as follows:

(D^T D)^{-1} D^T = D^{-1}.   (39)

Under NOC, assume that the vector d_f follows a multivariate normal distribution with zero mean and covariance R, i.e. d_f ~ N(0, R). The weight vector, φ̂, calculated using Eqs. (38) and (39), then follows the multivariate normal distribution [31]

φ̂ ~ N(0, R_φ)   (40)

where R_φ is the covariance of φ̂ under NOC, i.e. R_φ = cov(φ̂(k)|k ∈ NOC). It can be estimated from the training data set or obtained from the algebraic relation

R_φ = D^{-1} cov(d_f) (D^{-1})^T   (41)

where cov(d_f) can be directly estimated from the NOC data set. This paper defines a squared weighted index, η(k), by Eq. (42) as a scalar index for process monitoring:

η(k) = φ̂^T(k) R_φ^{-1} φ̂(k).   (42)

The squared weighted index is a better measure for detecting faults than the widely used squared prediction error (SPE) [11]. Based on Eq. (40), the weight vector φ̂ follows a multivariate normal distribution; consequently, the index η(k) follows a central chi-squared distribution with n degrees of freedom [32], i.e.

η(k) ~ χ^2(n).   (43)

However, if any sensor has a fault, η(k) will no longer be central chi-squared distributed. Therefore, with a pre-selected level of significance α, e.g. α = 95%, fault detection can be carried out by checking η(k) against the control threshold χ_α^2(n). Furthermore, in order to reduce the effects of transient status and abrupt changes of the process, an EWMA filter can also be applied to d_f(k):

d̄_f(k) = λ d̄_f(k − 1) + (1 − λ) d_f(k)   (44)

where 0 ≤ λ ≤ 1 is known as the forgetting factor. Applying Eq. (44) results in a filtered deviation vector with a smaller variance for the current process [12].

5. Fault isolation and identification

5.1. Fault isolation

Based on the general assumption that the deviation vector d_f has a multivariate normal distribution, the corresponding weight estimate φ̂ will also follow a multivariate normal distribution, as shown in Eq. (40). Thus, each element φ̂_i of φ̂ must follow a univariate normal distribution [32]. Let P denote the projection matrix of Eq. (39) and define p_i^T as the ith row vector of P:

P = D^{-1} = [p_1  p_2  · · ·  p_n]^T.   (45)

Therefore, the ith element of φ̂ can be computed as

φ̂_i = p_i^T d_f ~ N(0, r_φ^{ii})   (46)

in which r_φ^{ii} is the element (i, i) of R_φ, i.e., r_φ^{ii} is the variance of φ̂_i. Consequently, with a pre-selected significance level α, the confidence interval of φ̂_i can be easily determined as

−c_α ≤ φ̂_i / sqrt(r_φ^{ii}) ≤ c_α   (47)

where c_α is the upper limit of a (1 − (1 − α)/2) × 100 percentile confidence in terms of the standard normal distribution. Using Eq. (47), a hypothesis test can be applied to each element of φ̂, respectively. From Eq. (37), φ̂_i represents the relative weight contributed by the individual d_i associated with the ith sensor fault. After a fault has been detected using the monitoring index defined in Eq. (42), the ith sensor should be considered faulty if its corresponding weight φ̂_i stays outside the determined confidence limits. Likewise, the weight vector computed from the filtered deviation vector d̄_f(k) of Eq. (44) can also be used for fault isolation. The contribution plots based on φ̂(k) are useful for isolating the faulty sensors, especially when multiple sensor faults occur simultaneously in a process. Let f(φ̂_i) designate the probability density function (PDF) of the random variable φ̂_i. The type I error, i.e. the misclassification rate, for a normal sensor i can be represented as

P( φ̂_i/sqrt(r_φ^{ii}) ≥ c_α  or  φ̂_i/sqrt(r_φ^{ii}) ≤ −c_α ) = 1 − ∫_{−c_α}^{c_α} f( φ̂_i/sqrt(r_φ^{ii}) ) d( φ̂_i/sqrt(r_φ^{ii}) ) = 1 − α   (48)

in which P(•) is the probability of the corresponding event.
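A minimal sketch of the detection and isolation logic in Eqs. (38)–(48) is given below; the SciPy calls and the default 95% level are the only assumptions beyond the text, and an EWMA-filtered deviation vector (Eq. (44)) can be passed in place of the raw d_f.

```python
import numpy as np
from scipy import stats

def fdii_detect_isolate(D, R_phi, d_f, alpha=0.95):
    """Return the monitoring index, its threshold, the weight vector and
    per-sensor isolation flags (Eqs. (38), (42), (43), (46), (47))."""
    P = np.linalg.inv(D)                            # projection matrix, Eq. (39)
    phi = P @ d_f                                   # weight estimate, Eq. (38)
    eta = phi @ np.linalg.solve(R_phi, phi)         # monitoring index, Eq. (42)
    eta_limit = stats.chi2.ppf(alpha, df=len(phi))  # chi-square threshold, Eq. (43)

    c_alpha = stats.norm.ppf(1 - (1 - alpha) / 2)   # two-sided normal limit, Eq. (47)
    phi_scaled = phi / np.sqrt(np.diag(R_phi))      # phi_i / sqrt(r_phi_ii)
    faulty = np.abs(phi_scaled) > c_alpha           # sensors outside confidence limits
    return eta, eta_limit, phi, faulty
```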
5.2. Fault identification

Eqs. (22), (23), (26) and (32) show that the ith element of d_i* equals δ if d_i* is not normalized to unit length, where d_i* is the ith basic fault vector with fault magnitude δ. In practice, the value of δ is usually set to 1 or to σ_i, the standard deviation of sensor i. Let D* be the un-scaled BSFM; thus the ith column of D* is the basic fault vector associated with the ith sensor having fault magnitude σ_i. The algebraic relation between D and D* is then

D* = D diag(σ_1  σ_2  · · ·  σ_n) S^{-1} = D F   (49)

where S^{-1} is the scaling matrix of the variables. Using Eqs. (38) and (39), the weight estimate φ̂*, which corresponds to D*, can be expressed as

φ̂* = (D*)^{-1} d_f.   (50)

The ith element, φ̂_i*, of φ̂* is the estimate of the weight from sensor i, and the estimated magnitude of the bias in sensor i is φ̂_i* σ_i. Furthermore, the algebraic relation between φ̂* and φ̂ is

φ̂* = (D*)^{-1} d_f = (D F)^{-1} d_f = F^{-1} D^{-1} d_f = F^{-1} φ̂.   (51)
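The magnitude estimate of Eqs. (49)–(51) reduces to a diagonal rescaling of the weights. A small sketch under the stated definitions (sigma holds the sensor standard deviations and s the scaling factors already applied to the variables, i.e. S^{-1} = diag(1/s); both are assumptions about the data layout):

```python
import numpy as np

def estimate_fault_magnitudes(phi, sigma, s):
    """Convert weights phi (Eq. (38)) into physical bias estimates.

    F = diag(sigma) S^-1 per Eq. (49); phi* = F^-1 phi per Eq. (51),
    and the estimated bias of sensor i is phi*_i * sigma_i."""
    F = np.diag(sigma / s)               # diag(sigma) @ inv(diag(s))
    phi_star = np.linalg.solve(F, phi)   # Eq. (51)
    return phi_star * sigma              # estimated bias magnitude per sensor
```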
5.3. Analysis of fault isolation sensitivity

From Eq. (47), the confidence interval for φ̂_i with significance level α is

P( −c_α sqrt(r_φ^{ii}) ≤ φ̂_i ≤ c_α sqrt(r_φ^{ii}) ) = α.   (52)

Let d_f* = d_f + Δd_f be a deviation vector with a constant bias Δd_f. Then the corresponding φ̂* can be expressed as

φ̂* = P d_f* = φ̂ + P Δd_f   (53)

or, in terms of its elements,

φ̂_i* = φ̂_i + p_i^T Δd_f.   (54)

Therefore, the confidence interval for φ̂_i* can be expressed as

P( −c_α sqrt(r_φ^{ii}) + p_i^T Δd_f ≤ φ̂_i* ≤ c_α sqrt(r_φ^{ii}) + p_i^T Δd_f ) = α.   (55)

Comparing Eqs. (52) and (55) shows that φ̂_i* is significant with α-level confidence if the following inequalities hold:

−c_α sqrt(r_φ^{ii}) + p_i^T Δd_f ≥ c_α sqrt(r_φ^{ii})   or   c_α sqrt(r_φ^{ii}) + p_i^T Δd_f ≤ −c_α sqrt(r_φ^{ii}).   (56)

This inequality implies

|p_i^T Δd_f| ≥ 2 c_α sqrt(r_φ^{ii}).   (57)

Assume that the root cause of the fault Δd_f is a bias malfunction of the ith sensor, i.e. Δd_f = ω_i d_i. The definition of P gives p_i^T d_i = 1. If the value φ̂_i* is significant, Eq. (57) gives

|ω_i| ≥ 2 c_α sqrt(r_φ^{ii}).   (58)

Designate f(φ̂_i*) as the PDF of the random variable φ̂_i*. The type II error, i.e. the miss-isolation rate, of a faulty sample on the ith sensor is then

P( −c_α ≤ φ̂_i*/sqrt(r_φ^{ii}) ≤ c_α ) = ∫_{−c_α}^{c_α} f( φ̂_i*/sqrt(r_φ^{ii}) ) d( φ̂_i*/sqrt(r_φ^{ii}) ).   (59)

Here, the variances of φ̂_i and φ̂_i* are both equal to r_φ^{ii}, because the additive fault is a constant bias, Δd_f. In summary, the proposed sensor FDII algorithm can be implemented by the following three major steps.

• Preliminary steps.
1. Classify the sensors in the process into the "open-loop" or "closed-loop" category, i.e. x_m or x_c.
2. Identify the sub-model Ĝ of the process for those "closed-loop" variables, and install this model as a diagnostic block.
3. Under NOC, collect the sensor measurements x_m, x_c and x̂_c from the plant. Then, the original reference z^0 is calculated from these collected measurements.

• Implementation steps.
1. Develop the BSFM, D, using the perturbing, analytical or hybrid method introduced in Section 3.
2. Compute the weight vectors, φ̂_0, for the deviation vectors under NOC. The covariance structure of the weight vectors, R_φ, is also estimated. If necessary, an EWMA filter can be applied to these normal vectors to reduce variations; accordingly, the covariance of the filtered weight vectors is estimated.
3. Using Eq. (58), the sensitivity of each sensor can be assessed.

• Diagnosis steps.
1. With sampled data, compute the deviation vectors d_f. If necessary, apply the same EWMA filter used in the previous step to the computed d_f.
2. Generate the monitoring index and the weight vector by Eqs. (38) and (42). Check these values against their corresponding thresholds (Eqs. (43) and (47)) for FDII decision making. If necessary, the magnitudes of the faults can also be estimated using Eq. (50) or (51).

6. Illustrative examples

Before applying the proposed method to industrial dynamic processes, this study first tests it on the simple numerical illustration of Section 2.3. The BSFM here is established using the simplest perturbing method, since there is no prior information about the process model. Constant biases with magnitudes of one standard deviation are introduced to the variables one by one, and 200 observations are collected in each experiment. The BSFM is evaluated as

D = [ 1        0.021
      −0.0054  1     ].   (60)

In Eq. (60), the off-diagonal elements are quite close to zero, implying that the variables in this system do not affect each other and that a fault would not propagate between x_1 and x_2. Utilizing the same data sets as in Section 2.3, Fig. 3(a) is the scatter plot of data sets X_0, X_1 and X_2, where the symbol "◦" represents the observations under normal operating condition, the symbol "×" represents faulty samples with a positive mean bias of magnitude 10 in the x_1-direction (fault scenario 1), and the symbol "+" represents faulty samples with a positive mean bias of magnitude 15 in the x_2-direction (fault scenario 2). Applying the proposed FDII method to these data sets, the elements of the weight vector, φ, can be regarded as the contributions of each variable. Fig. 3(b) is the contribution plot for these fault scenarios, where the black bar is the contribution from variable x_1 and the white bar is the contribution from variable x_2. Note that the plotted weight vectors in Fig. 3(b) are normalized to unit length as φ/||φ||. Fig. 3(b) clearly demonstrates that x_1 is the root cause of fault scenario 1 and x_2 is the root cause of fault scenario 2. Therefore, the proposed method can exactly isolate the faults introduced to this process. The following subsections use two simulated industrial dynamic processes to demonstrate the presented method for diagnosing multiple sensor faults.

6.1. Nonisothermal continuous stirred tank reactor (CSTR)
A nonisothermal CSTR process [26] is considered for the application of the proposed method. The schematic diagram of the process is given in Fig. 4. In the CSTR, reactant A is premixed with a solvent and then converted into product B with a first-order reaction rate

r = k_0 e^{−E/RT} C.   (61)
Fig. 3. (a) Scatter plot of X_0, X_1 and X_2; (b) contribution plots of the proposed method for X_1 (fault 1) and X_2 (fault 2).

Fig. 4. Nonisothermal CSTR process and nine measured variables: (1) coolant temperature, (2) reactant mixture temperature, (3) reactant A concentration, (4) solvent concentration, (5) solvent flow rate, (6) solute flow rate, (7) coolant flow rate, (8) outlet concentration, and (9) outlet temperature.

Table 2
Model and controller parameters used in the CSTR process.

Notation   Parameters and constants                          Value
V          Volume of reaction mixture in the tank            1 m^3
ρ          Density of reaction mixture                       10^6 g/m^3
ρ_c        Density of coolant                                10^6 g/m^3
c_p        Specific heat capacity of the reaction mixture    1 cal/(g K)
c_pc       Specific heat capacity of the coolant             1 cal/(g K)
ΔH_r       Heat of reaction                                  −1.3 × 10^7 cal/kmol
k_0        Preexponential kinetic constant                   10^10 min^−1
E/R        Activation energy / ideal gas constant            8330 K
UA         Heat transfer coefficient                         6.5 × 10^6
K_c(T)     Temperature controller gain                       −1.5
τ_I(T)     Integral time                                     5
K_c(C)     Concentration controller gain                     0.4825
τ_I(C)     Integral time                                     2

Table 3
Mean of process input variables and variance of measurement errors.

Variable       Mean             Variance of measurement error
(1) T_c        365 K            2.5 × 10^−2
(2) T_i        370 K            2.5 × 10^−2
(3) C_a        17.3 kmol/m^3    1.0 × 10^−1
(4) C_s        0.3 kmol/m^3     2.5 × 10^−4
(5) F_s        0.9 m^3/min      4.0 × 10^−5
(6) F_a        –                4.0 × 10^−5
(7) F_c        –                1.0 × 10^−1
(8) C          –                2.5 × 10^−4
(9) T          –                4.0 × 10^−3

The dynamics of the process are described as follows:

V dC/dt = F (C_i − C) − V r   (62)

V ρ c_p dT/dt = ρ c_p F (T_i − T) − [UA / (1 + 0.5 UA/(F_c ρ_c c_pc))] (T − T_c) + (−ΔH_r) V r.   (63)

The concentration of the reactant mixture is calculated by

C_i = (F_a C_a + F_s C_s)/(F_a + F_s).   (64)

The outlet temperature (T) and concentration (C) are controlled with PI controllers by manipulating the inlet coolant flow rate (F_c) and the inlet reactant flow rate (F_a), respectively. The model and controller parameters are given in Table 2.

There are a total of nine measurements in this CSTR process. Five of them (T_c, T_i, C_a, C_s, and F_s) are classified as the measured inputs (i.e. u_m); the variables F_a and F_c are classified as the control inputs u_c; and the controlled outputs y_c are the variables C and T. For simplicity, this work assumes that the submodel used is identical to the process model, and the measured inputs are all set to the constant mean (steady-state) values given in Table 3. In addition, the measurement error of each variable follows an i.i.d. Gaussian distribution with zero mean and the variance given in Table 3 under NOC.
σ = [0.0511  0.0515  0.0993  0.0052  0.0019  0.0034  0.1057  0.0057  0.0213]   (65)

Box III.
Six hundred samples under normal conditions are generated to compute the original reference z^0. The corresponding standard deviations of all the measurements, [u_m  u_c  y_c], are then estimated from this training set as given in Box III. In the remainder of this example, the variables (including the controller outputs û_c and the submodel outputs ŷ_c) are all scaled in accordance with their estimated means and standard deviations, i.e. auto-scaling. The BSFM is developed here using the perturbing method. A constant bias with a magnitude of one standard deviation is introduced as a perturbation into each of the measurements, respectively. Six hundred samples under each of these conditions are generated to estimate D. Finally, the resulting matrix D of this example is constructed as in Box IV. Then, the EWMA-filtered deviation vectors d̄_f (with λ = 0.6) are computed under NOC using Eqs. (36) and (44). From Eq. (38), the weight vectors φ̂ for those normal d̄_f can be computed, and so can the corresponding covariance matrix R_φ. The resulting matrix R_φ is then used for assessing the sensitivities of the sensors and for detecting faults. The significant limits for each sensor can be computed using Eq. (58); in this example, the significant limits of all the sensors are as in Eq. (67) given in Box V. If the magnitude of φ_i is larger than S_i σ_i, the ith sensor is significant and will be isolated as a root cause of d_f. Two fault scenarios are selected to demonstrate the proposed fault diagnosis method, as follows.
Fault scenario 1: biases on sensors 1, 5, 7, 8, 9 with magnitudes [2  3  3  5  −5].
Fault scenario 2: biases on sensors 2, 4 with magnitudes [−2  −2], and drifts on sensors 5, 9 as x_5 = 0.0001(t − t_f) and x_9 = 0.001(t − t_f).

These fault conditions are introduced at the time instant t_f = 200 for each experiment, and 600 samples are generated for each simulation to demonstrate the proposed FDII method. Note that the actual values of the bias faults are the listed magnitudes multiplied by the standard deviations of the considered sensors, and the corresponding fault magnitudes are all significant for isolation (cf. the significant limits in Eq. (67)). The process monitoring plots for these two fault conditions are given in Fig. 5, where the fault detection index is computed without any filtering (see Section 4). For the sake of easy limit checking in this figure, the fault detection index is scaled by its confidence limit, i.e.

η̄ = η / χ_α^2(9),   α = 95% in this example   (68)

to have a common unit confidence limit. Fig. 5 clearly shows that the proposed monitoring index detects the introduced abnormal conditions promptly. Fig. 6 presents the isolation plots for fault scenario 1. Likewise, the indices for isolation (the elements of the weight vector φ) are scaled by their standard deviations for easy limit checking, i.e.

φ̄_i = φ_i / sqrt(r_φ^{ii}),   where sqrt(r_φ^{ii}) is the estimated standard deviation of φ_i.   (69)

The confidence interval of each φ̄_i is ±1.96 at a significance level of 95%. According to these trend plots, it is clearly observed that sensors 1, 5, 7, 8 and 9 are faulty, and that these faulty sensors have bias-type faults. Fig. 7 shows the isolation plots for fault scenario 2, in which the indices for sensors 2, 4, 5 and 9 start to exceed their control thresholds after the faults occur. The possible fault types are constant biases for sensors 2 and 4 and drifts for sensors 5 and 9. Notice that all of these observations are in agreement with the introduced fault scenarios.

6.2. Ternary distillation column

Process setup. Alatiqi and Luyben [33] presented a control system for a complex 4 × 4 multivariable distillation column with a side-stream stripper for separating a ternary mixture (benzene/toluene/o-xylene) into three products. The system includes four controlled variables: the benzene composition in the column distillate (x_D1), the o-xylene composition in the column bottom (x_B3), the toluene composition in the stripper product (x_S2), and the temperature difference between tray 16 and tray 28 of the column (ΔT). The four manipulated variables are the reflux flow rate (R), the heat input to the column reboiler (Q_B), the heat input to the stripper reboiler (Q_Bs), and the side-draw rate (L_S). The transfer function models for this 4 × 4 system [33] (see Eq. (70) in Box VI) are used to produce the simulated data. In order to demonstrate the usage of the proposed FDII method, this work assumes that the variables Q_Bs (u_3), L_S (u_4), x_B3 (y_3), and ΔT (y_4) operate under a feedback control structure, and the multiloop proportional-integral (PI) controller G_c is used to control the outputs y_3 and y_4:

G_c = [ 0.36 + 0.36/(23s)   0
        0                   0.35 + 0.35/(39s) ].   (71)

The remaining variables R (u_1), Q_B (u_2), x_D1 (y_1), and x_S2 (y_2) are open-loop variables without control. Fig. 8 gives the control system for this study (the inventory control is omitted). For the proposed FDII scheme, a submodel should be identified and incorporated into the system to generate residuals for the controlled outputs. Here, the dynamic submodel with respect to the inputs u_3, u_4 and the outputs y_3 and y_4 is identified as Eq. (72) given in Box VII, using the identification algorithm of Jeng and Huang [34]. Its steady-state gain matrix, given in Eq. (73) (see Box VII), is obtained by replacing q^{-1} with one in the submodel. Note that the variables u_3, u_4, y_3 and y_4 are scaled to avoid the ill-conditioning encountered in the identification procedure. This study applies the auto-scaling method, which scales each variable to zero mean and unit variance. Under NOC, this work assumes that the measurement error and the outside disturbance of each variable follow i.i.d. Gaussian distributions with zero mean and variance 0.1, i.e.

δu_m, δy_m, δu_m^d, δy_m^d, δu_c, δy_c, δu_c^d, δy_c^d ~ N(0, 0.1 I).   (74)

Furthermore, this research assumes the variations of the variables to be (i) mutually independent and (ii) independent of the initial conditions. The current study generates 200 samples under normal inputs and outputs as an initial set to compute the original reference z^0. The corresponding standard deviations of all the measurements, [u_1  u_2  y_1  y_2  u_3  u_4  y_3  y_4], are then estimated from this training set as in Box VIII. In the following parts of this example, the variables are all scaled in accordance with their estimated means and standard deviations, i.e. auto-scaling. Note that the estimated inputs (controller outputs, û_c) and outputs (submodel outputs, ŷ_c) should be scaled according to the parameters of their corresponding measurement variables u_c and y_c, since the importance of the estimates and the measurements should be the same.
D =
[  1       −0.0026  −0.0026  −0.0026  −0.0026  −0.0014  −0.0059  −0.0116  −0.0080
   0.0010   1        0.0010   0.0010   0.0010   0.0004   0.0008  −0.0010   0.0044
  −0.0027  −0.0027   1       −0.0027  −0.0027   0.0011  −0.0072  −0.0114  −0.0046
  −0.0001  −0.0001  −0.0001   1       −0.0001   0.0001  −0.0142  −0.0021   0.0003
  −0.0020  −0.0020  −0.0020  −0.0020   1       −0.0004  −0.0100  −0.0038   0.0002
  −0.0016  −0.0016  −0.0016  −0.0016  −0.0016   0.1938   0.0011   0.0062  −0.0012
   0.0010   0.0010   0.0010   0.0010   0.0010  −0.0010   0.9840   0.0064  −0.0021
  −0.0009  −0.0009  −0.0009  −0.0009  −0.0009   0.7116   0.0192   0.9998   0.0076
   0.0024   0.0024   0.0024   0.0024   0.0024   0.6753  −0.1762   0.0045   0.9999 ]   (66)

Box IV.
S = [1.9930  1.9377  1.8371  2.0089  1.9485  1.2501  1.7294  4.7725  4.9513]   (67)

Box V.
Fig. 5. Process monitoring plots for CSTR example: (a) fault 1; (b) fault 2.
Preliminary works. The BSFM is developed here using the hybrid method. A constant bias with a magnitude of one standard deviation is introduced as a perturbation into each of the measurements of the controlled outputs, respectively. Five hundred samples under these conditions are generated to estimate D_yc. Following Section 3.3, the remaining parts, D_um, D_ym and D_uc, are derived analytically using Eqs. (22), (23) and (26). Finally, the resulting matrix D of this example is

D = [ 1  0  0  0   0      0      0     0
      0  1  0  0   0      0      0     0
      0  0  1  0   0      0      0.09  0.19
      0  0  0  1   0      0      0.01  −0.33
      0  0  0  0   0.24   0      0     0
      0  0  0  0   0      0.14   0     0
      0  0  0  0  −0.97   0.74   0.99  0
      0  0  0  0  −0.02  −0.66   0     0.92 ].   (76)
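The closed-loop columns of Eq. (76) follow directly from Eq. (26) with the gain K̂ of Eq. (73); a short numerical check (a sketch, not the authors' code):

```python
import numpy as np

# Submodel steady-state gain from Eq. (73)
K_hat = np.array([[4.05, -5.39],
                  [0.10,  4.76]])

# Basic fault vector for a unit bias on the u3 sensor (Eq. (26));
# state order assumed as [u1, u2, y1, y2, r_u3, r_u4, r_y3, r_y4]
delta = np.array([1.0, 0.0])
d_u3 = np.concatenate([np.zeros(4), delta, -K_hat @ delta])
print(np.round(d_u3 / np.linalg.norm(d_u3), 2))
# -> [0. 0. 0. 0. 0.24 0. -0.97 -0.02], matching the fifth column of Eq. (76)
```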
Then, samples of the EWMA-filtered deviation vector d̄_f (with λ = 0.6) are computed under NOC using Eqs. (36) and (44). From Eq. (38), the weight vectors φ̂ for those normal d̄_f can be computed, and so can the corresponding covariance matrix R_φ. The resulting matrix R_φ is then used for assessing the sensitivities of the sensors and for detecting faults. The significant limits for each sensor can be computed using Eq. (58). In this example, the significant limits of all the sensors are given as follows:

S = [1.89  1.99  3.14  3.99  1.50  1.68  8.26  7.63].   (77)

If the magnitude of φ_i is larger than S_i σ_i, the ith sensor is significant and will be isolated as a root cause of d_f.

Fault detection, isolation and identification. Since there are eight variables in this example process, there are a total of 2^8 − 1 = 255 scenarios of single or multiple sensor faults. In this subsection, the proposed method is applied to four of them for demonstration. In addition, the proposed contribution plots are compared with the conventional T^2 and Q contribution plots from PCA to show the superior fault isolation capability of the proposed method. The PCA model is constructed from the NOC data set described previously, and 6 principal components are selected, explaining about 86% of the total variance. The four fault scenarios are as follows.

Fault scenario 1: biases on sensors 1, 5, 7, 8 with magnitudes [3  −2  9  −8].
Fault scenario 2: biases on sensors 2, 3, 4, 6 with magnitudes [−2  4  −5  2].
Fig. 6. Multiple sensor faults isolation plots for fault scenario 1 in CSTR example (faulty sensors: 1, 5, 7, 8, 9).
Fig. 7. Multiple sensor faults isolation plots for fault scenario 2 in CSTR example (faulty sensors: 2, 4, 5, 9).
Fault scenario 3: biases on sensors 1, 3 with magnitudes [2  4], and drifts on sensors 6, 8 as x_6 = −0.01(t − t_f) and x_8 = 0.05(t − t_f).
Fault scenario 4: biases on sensors 2, 4 with magnitudes [3  −5], and drifts on sensors 5, 7 as x_5 = −0.01(t − t_f) and x_7 = −0.05(t − t_f).
These fault conditions are introduced at the time instant t_f = 200 for each simulation, and 600 samples are generated for each simulation to demonstrate the proposed FDII method. Note that the actual values of the bias faults are the listed magnitudes multiplied by the standard deviations of the considered sensors, and the corresponding fault magnitudes are all significant for isolation (cf. Eq. (77)).
Gp(s) is the 4 × 4 transfer function matrix whose rows are

Row 1: 4.09e−1.3s/[(33s+1)(8.3s+1)],  −6.36e−1.2s/[(31.6s+1)(20s+1)],  −0.25e−1.4s/(21s+1),  −0.49e−6s/(22s+1)²
Row 2: −4.17e−5s/(45s+1),  6.93e−1.02s/(44.6s+1),  −0.05e−6s/(34.5s+1)²,  1.53e−3.8s/(48s+1)
Row 3: 1.73e−18s/(13s+1)²,  5.11e−12s/(13.3s+1)²,  4.61e−1.01s/(18.5s+1),  −5.49e−1.5s/(15s+1)
Row 4: −11.2e−2.6s/[(43s+1)(6.5s+1)],  14(10s+1)e−0.02s/[(45s+1)(17.4s²+3s+1)],  0.1e−0.05s/[(31.6s+1)(5s+1)],  4.49e−0.6s/[(48s+1)(6.3s+1)]
(70)
Box VI.
Fig. 8. Control system of distillation column with a side-stream stripper.
Ĝ(q−1) =
[ 0.4455q−1/(1 − 0.89q−1)                                   −0.35q−2/(1 − 0.935q−1)
  0.1(0.0027 + 0.0025q−1)/[(1 − 0.968q−1)(1 − 0.838q−1)]     4.546(0.0018 + 0.0017q−1)/[(1 − 0.98q−1)(1 − 0.833q−1)] ]
(72)

K̂ = Ĝ(q−1)|q−1=1 =
[ 4.05  −5.39
  0.1    4.76 ]
(73)
Box VII.
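As a quick check, the static gain matrix K̂ of Eq. (73) can be reproduced by evaluating the identified model at q−1 = 1, i.e. by summing the numerator and denominator coefficients of each entry of Eq. (72). A short sketch (helper names are illustrative):

import numpy as np

def dtf_gain(num, den):
    # Steady-state gain of a discrete transfer function written in powers of q^-1:
    # setting q^-1 = 1 reduces each polynomial to the sum of its coefficients.
    return np.sum(num) / np.sum(den)

g11 = dtf_gain([0.0, 0.4455], [1.0, -0.89])
g12 = dtf_gain([0.0, 0.0, -0.35], [1.0, -0.935])
g21 = dtf_gain([0.1 * 0.0027, 0.1 * 0.0025], np.convolve([1.0, -0.968], [1.0, -0.838]))
g22 = dtf_gain([4.546 * 0.0018, 4.546 * 0.0017], np.convolve([1.0, -0.98], [1.0, -0.833]))

K_hat = np.array([[g11, g12], [g21, g22]])
print(np.round(K_hat, 2))   # approximately [[4.05, -5.38], [0.10, 4.76]], cf. Eq. (73)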
σ = [0.4541 0.4559 0.4807 0.4986 0.5425 0.5246 0.7208 0.6053]  (75)
Box VIII.
The process monitoring plots for these four fault conditions are given in Fig. 9, where the fault detection index is computed without any filtering (see Section 4). For the sake of easy limit checking in this figure, the fault detection index is scaled by its confidence limit, i.e.
η=
η , χα2 (8)
α = 95% in this example
(78)
to have a common unit confidence limit. Fig. 9 shows that the proposed monitoring index detects the faulty conditions promptly. Moreover, Fig. 9(c) and (d) show that the fault magnitudes increase with time. Fig. 10 presents the isolation plots for fault scenario 1. Likewise, the isolation indices (the elements of the weight vector ϕ) are scaled by their standard deviations for easy limit checking, i.e.
ϕ̄i = ϕi / √rϕii,  where √rϕii is the estimated standard deviation of ϕi,  (79)
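These two scalings are straightforward to implement; a minimal sketch using SciPy's chi-square quantile is given below (the function names are illustrative).

import numpy as np
from scipy.stats import chi2

eta_limit = chi2.ppf(0.95, df=8)   # chi-square confidence limit with 8 degrees of freedom

def scaled_detection_index(eta):
    # Eq. (78): scale the detection index by its confidence limit, giving a unit limit.
    return eta / eta_limit

def scaled_isolation_indices(phi, R_phi):
    # Eq. (79): scale each weight by its estimated standard deviation, so that the
    # standard-normal band of about +/-1.96 can be used as the control threshold.
    return phi / np.sqrt(np.diag(R_phi))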
The confidence interval of each ϕ̄i is ±1.96 at the 95% confidence level. According to these trend plots, it is clearly observed that sensors 1, 5, 7 and 8 are faulty and that their fault type is a constant bias. Fig. 11 shows the isolation plots for fault scenario 2: the indices for sensors 2, 3, 4 and 6 start to exceed their control thresholds after time instant 201, and the possible fault type is again a constant bias. Fig. 12 shows the isolation plots for fault scenario 3: the indices for sensors 1, 3, 6 and 8 start to exceed their control thresholds after time instant 201, and the possible fault types are constant biases for sensors 1 and 3 and drifts for sensors 6 and 8. Fig. 13 shows the isolation plots for fault scenario 4: the indices for sensors 2, 4, 5 and 7 start to exceed their control thresholds after time instant 201, and the possible fault types are constant biases for sensors 2 and 4 and drifts for sensors 5 and 7. All these observations agree with the introduced fault scenarios. Fig. 14 shows the contribution plots of the proposed method, where the contribution
Fig. 9. Process monitoring plots for distillation column example (a) Fault 1 (b) Fault 2 (c) Fault 3 (d) Fault 4.
Fig. 10. Multiple sensor faults isolation plots for fault scenario 1 in distillation column example (faulty sensors: 1, 5, 7, 8).
is calculated as the average value of ϕ(k)/‖ϕ(k)‖ over time instants k = 301–400. For comparison, the T2 and Q contribution plots obtained from PCA are given in Figs. 15 and 16, where the mean contribution of the faulty data samples from time instants 301–400 is plotted. For fault scenario 1, Fig. 14(a) indicates that sensors 1, 5, 7 and 8 contribute significantly to the fault, which is consistent with the fault scenario; moreover, the fault direction (positive or negative bias) is also correctly indicated. On the other hand, Fig. 15(a) shows that sensors 1, 4, 5 and 6 have significant T2 contributions, and Fig. 16(a) shows that sensors 5, 6, 7 and 8 have significant Q contributions; both PCA-based contribution plots fail to indicate the faulty sensors and the fault directions. Similar results are observed for fault scenarios 2, 3 and 4, as shown in Figs. 14(b)–(d), 15(b)–(d) and 16(b)–(d). The comparison of isolation results is summarized in Table 4. These results clearly demonstrate the superior fault
Fig. 11. Multiple sensor faults isolation plots for fault scenario 2 in distillation column example (faulty sensors: 2, 3, 4, 6).
Fig. 12. Multiple sensor faults isolation plots for fault scenario 3 in distillation column example (faulty sensors: 1, 3, 6, 8).
isolation capability of the proposed method over the conventional PCA-based contribution plots. For fault magnitude identification, the un-scaled weight vector ϕ∗ can be computed using Eq. (51). The estimated magnitudes for the four fault scenarios are as follows.
Fault 1: sensors 1, 5, 7, 8 with magnitudes [2.93 −2.15 7.69 −8.71].
Fault 2: sensors 2, 3, 4, 6 with magnitudes [−2.10 3.89 −4.98 2.15].
Fault 3: sensors 1, 3, 6, 8 with magnitudes [1.88 4.12 −2.91 10.95].
Fault 4: sensors 2, 4, 5, 7 with magnitudes [2.88 −5.16 −2.98 −12.74].
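The contribution averaging and the magnitude estimates reported above can be sketched as follows. The restricted least-squares form is an assumption standing in for Eq. (51), and the magnitudes it returns are expressed in the same (standard-deviation-scaled) units as the deviation vector supplied to it.

import numpy as np

def contribution_plot(phi_window):
    # Proposed contribution: average of phi(k)/||phi(k)|| over a window of faulty
    # samples, e.g. time instants 301-400 as used for Fig. 14.
    normed = phi_window / np.linalg.norm(phi_window, axis=1, keepdims=True)
    return normed.mean(axis=0)

def estimate_fault_magnitudes(d_f, D, faulty):
    # Sketch of magnitude identification: regress the (filtered) deviation vector
    # on the BSFM columns of the isolated sensors only.
    D_F = D[:, faulty]
    f_hat, *_ = np.linalg.lstsq(D_F, d_f, rcond=None)
    return f_hat

# e.g. fault scenario 2 of this example (zero-based indices of sensors 2, 3, 4, 6):
# estimate_fault_magnitudes(d_f, D, faulty=[1, 2, 3, 5])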
Fig. 13. Multiple sensor faults isolation plots for fault scenario 4 in distillation column example (faulty sensors: 2, 4, 5, 7).
Fig. 14. Contribution plots of proposed method (a) Fault 1 (b) Fault 2 (c) Fault 3 (d) Fault 4.
The computed fault magnitudes are very close to the actual values, especially for the constant bias faults.

7. Conclusions

Modern industrial plants typically rely on sensor readings for plant safety and performance; nevertheless, sensor failure is almost inevitable. Prompt and automatic detection and isolation of sensor faults from the collected process operating data is therefore important and beneficial. Contribution plots are currently the best-known fault diagnostic tool in industrial practice. The contributions of variables, however, depend on the corresponding monitoring statistics and often fail to identify the root causes of faults correctly. This paper proposes a novel sensor fault detection and isolation architecture and method based on the BSFM to enhance multiple sensor fault diagnosis in multivariate systems. The previous sections show that most parts of the BSFM can be derived easily by analytical procedures based on a given submodel of the process. A few experiments on
Table 4
Comparison of fault isolation results for proposed and PCA contribution plots.

                           Actual fault condition   Proposed method   T2 from PCA     Q from PCA
Fault scenario 1
    Faulty sensor          [1, 5, 7, 8]             [1, 5, 7, 8]      [2, 4, 5, 6]    [5, 6, 7, 8]
    Fault direction        [+, −, +, −]             [+, −, +, −]      [+, +, +, +]    [−, +, −, +]
Fault scenario 2
    Faulty sensor          [2, 3, 4, 6]             [2, 3, 4, 6]      [2, 3, 4]       [4, 5, 6, 8]
    Fault direction        [−, +, −, +]             [−, +, −, +]      [+, +, +]       [−, +, +, +]
Fault scenario 3
    Faulty sensor          [1, 3, 6, 8]             [1, 3, 6, 8]      [3, 4, 5, 6]    [5, 6, 7, 8]
    Fault direction        [+, +, −, +]             [+, +, −, +]      [+, +, +, +]    [−, −, −, −]
Fault scenario 4
    Faulty sensor          [2, 4, 5, 7]             [2, 4, 5, 7]      [2, 4]          [4, 5, 6, 7, 8]
    Fault direction        [+, −, −, −]             [+, −, −, −]      [+, +]          [−, −, +, −, +]
Fig. 15. T2 Contribution plots from PCA (a) Fault 1 (b) Fault 2 (c) Fault 3 (d) Fault 4.
Fig. 16. Q Contribution plots from PCA (a) Fault 1 (b) Fault 2 (c) Fault 3 (d) Fault 4.
the controlled output variables then complete the BSFM. The newly proposed contribution plots based on the weight vector make it straightforward to isolate multiple sensor faults. This study also develops a novel fault detection statistic, which is very sensitive
to process abnormalities, using a weighted squared sum of the weights for process monitoring. The proposed method has been successfully applied to two multivariate industrial processes under closed-loop operation, and it provides good performance for multiple sensor fault FDII. Such performance is usually beyond the reach of purely statistical methods (e.g. PCA-based methods). Process knowledge must be incorporated to construct the BSFM; once the BSFM is determined, the remaining steps proposed in this article are fairly straightforward. Finally, this novel sensor fault FDII method provides a unified and efficient solution for multiple sensor fault diagnosis in multivariate processes. Extensions of the method to deal with both sensor faults and process faults are under research.