Research article
A novel approach for benchmarking and assessing the performance of state estimators

Laya Das a, Gaurav Kumar a, Raghunathan Rengaswamy b, Babji Srinivasan c,∗

a Department of Electrical Engineering, Indian Institute of Technology Gandhinagar, Gandhinagar, 382355, Gujarat, India
b Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai, 600036, Tamil Nadu, India
c Department of Chemical Engineering, Indian Institute of Technology Gandhinagar, Gandhinagar, 382355, Gujarat, India
∗ Corresponding author. E-mail address: [email protected] (B. Srinivasan).
ARTICLE INFO

Article history: Received 1 February 2018; Received in revised form 27 May 2018; Accepted 11 June 2018
Keywords: Multivariate systems; State estimation; Performance analysis; Hurst exponent; Detrended fluctuation analysis
https://doi.org/10.1016/j.isatra.2018.06.005
0019-0578/© 2018 ISA. Published by Elsevier Ltd. All rights reserved.

ABSTRACT
State estimation is a widely adopted soft sensing technique that incorporates predictions from an accurate model of the process together with measurements to provide reliable estimates of unmeasured variables. The reliability of such estimators is threatened by measurement related challenges and model inaccuracies. In this article, a method for benchmarking state estimation techniques is proposed. This method can be used to quantify the performance, and hence the reliability, of an estimator. The Hurst exponents of a posteriori filtering errors are analyzed to characterize a benchmark (minimum mean squared error) estimator, similar to the minimum variance control benchmark developed for control loops. A distance metric is then used to quantify the extent of deviation of an estimator from the benchmark. The proposed technique is developed for linear systems and extended to nonlinear systems with single as well as multiple measurable variables. Simulation studies are carried out with Kalman based as well as Monte Carlo based estimators, whose computational details are significantly different. Results reveal that the technique serves as a tool that can quantify the performance and assess the reliability of a state estimator. The strengths and limitations of the proposed technique are discussed, with guidelines on application and deployment of the technique in a real life system.
1. Introduction

The state space form of system description expresses the measurements yk obtained from a system Ξ as functions of the external inputs uk and the internal variables xk of the system:

$$\Xi: \quad \begin{aligned} x_{k+1} &= f(x_k, u_k) + \nu_{k+1} \\ y_k &= h(x_k, u_k) + \omega_k \end{aligned} \tag{1}$$
where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ and $y \in \mathbb{R}^p$. The process uncertainty and measurement noise (denoted νk and ωk respectively) are typically assumed to be normally distributed white noise with zero mean and variance-covariance matrices Q and R respectively.
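To make the notation concrete, the sketch below simulates a system of this form. It is a minimal illustration in Python/NumPy; the dynamics f and h and the values of Q and R are placeholders chosen for illustration, not taken from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, p = 2, 1, 1            # dimensions of x, u and y
Q = 1e-4 * np.eye(n)         # process noise covariance (illustrative value)
R = 1e-2 * np.eye(p)         # measurement noise covariance (illustrative value)

def f(x, u):                 # placeholder state transition function
    return np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + 0.1 * u[0]])

def h(x, u):                 # placeholder measurement function
    return np.array([x[0]])

x = np.zeros(n)
for k in range(100):
    u = np.array([1.0])
    x = f(x, u) + rng.multivariate_normal(np.zeros(n), Q)  # x_{k+1} = f(x_k, u_k) + nu_{k+1}
    y = h(x, u) + rng.multivariate_normal(np.zeros(p), R)  # y_k = h(x_k, u_k) + omega_k
```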
The internal variables (xk), called state variables, often represent physical quantities that cannot be measured or are prohibitively expensive to measure, but whose knowledge is valuable in assessing the state of the system (as healthy or faulty, steady or transient, etc.), in monitoring the system and in deciding control actions. At times, these variables provide information about the state of the system that is more critical than what can be obtained from measurements alone. For example, in a reactor with gaseous reactants and products, the partial pressures of the gases (which cannot be measured directly) are more informative about the system's state (say, the extent of reaction) than the measurable total pressure; in an aircraft, the slosh of liquid fuel inside the tank is more critical for maintaining balance of the aircraft [1] than the mass of the tank, which can be measured. Accurate knowledge of state variables is therefore critical for knowing the current state of operation and the margins to operating limits of a system, for deciding the control actions necessary to maintain the plant, and hence for overall system safety.

State estimation is a widely adopted tool that incorporates predictions from an accurate model and measurements from the process to provide accurate estimates of the state variables of the system. Kalman [2] developed a recursive prediction-correction state estimation algorithm which serves as the best unbiased estimator for linear systems with Gaussian noise. The Kalman filter uses a model to predict the states and measurements. It then appropriately weighs the predicted state variables and the plant measurements to minimize the corrected state error covariance, and provides the corrected states. This predictor-corrector algorithm has been shown to be the best linear unbiased estimator (BLUE). These properties of the Kalman filter can be attributed to the assumption of all random variables being Gaussian, along with the linearity of the system.
This results in the Gaussian nature of the state estimates being preserved over all time and the filter being optimal. For nonlinear systems, a best (minimum mean squared error) estimator does not exist, because the nonlinear operations performed on the variables destroy the Gaussian nature of the state estimates. However, extensive research has led to a host of estimators for these systems that make different approximations to the estimation problem. The Kalman filter has been extended to non-linear systems, resulting in the extended Kalman filter (EKF), which approximates the non-linearity by a Taylor series expansion so that the Kalman filter equations can be used for state estimation. Variants of the Kalman filter (particle filters [3,4] and the ensemble Kalman filter [5-7]) have appeared in the literature that provide reliable state estimates for more challenging non-linear systems. These estimators approach state estimation from different vantage points, albeit based on similar fundamental concepts. For instance, the particle filter is a Monte Carlo estimation method that performs a random sampling of states (particles) to propagate through the model, and uses a likelihood function to obtain corrected state estimates. The extended Kalman filter, however, remains the most popular filtering technique for state estimation of non-linear systems and has been applied in several applications such as global positioning systems and aircraft inertial navigation systems. The unscented Kalman filter (UKF) [8,9], developed in the early 2000s, addressed the non-linear state estimation problem as one of approximating the probability density function of the state estimates under a nonlinear transformation, rather than approximating the non-linearity itself, and was shown to outperform the EKF at comparable computational expense (for small values of n). The cubature Kalman filter (CKF) [10], which is similar to the UKF, has been proposed as an alternative method of nonlinear estimation. The EKF and UKF have been modified to handle differential algebraic equation (DAE) systems [11] and systems with constraints on state variables [12,13]. A moving horizon approach to handle constraints in linear state estimation was proposed in Ref. [14] and later extended to non-linear systems [15], with the idea of capturing more information with a window of past estimates than the recursive EKF and UKF. Although this technique does capture more information than the recursive extensions of the Kalman filter, its large computational expense led research attention to focus on recursive estimators. Recently, a recursive state estimator (the receding horizon non-linear Kalman filter) [16] was proposed for non-linear systems and shown to perform as well as MHE at a much lower computational expense.

1.1. Motivation

Despite the diversity and sophistication of state estimators, they invariably require two pieces of information to provide reliable estimates of the state variables: the process model (Equation (1)) and the measurements yk. This renders model based estimators vulnerable to measurement issues (such as delayed measurements and missing data) and model plant mismatch (MPM, i.e., disagreement between the process dynamics and the model description). Measurement related challenges have been addressed in the literature [17-19], resulting in modified versions of the standard estimation algorithms (EKF and UKF). On the other hand, model plant mismatch can occur in the form of parametric deviations of the model (f, h, Q and R) or errors in the assumed structure of the model (e.g., measurement and state equations corrupted with coloured noise instead of an IID process). Such mismatches can be due to changes in plant dynamics over time as a result of changes in operating conditions or wear and tear of the process equipment. When there is a mismatch between the model and the plant dynamics, there is a conflict between the two pieces of information fed to the estimator, i.e., the process model (predictions) and the measurements. This may result in biased or divergent state estimates, which can misguide a plant operator. Furthermore, for nonlinear systems, different estimators adopt different approaches and make different approximations to the estimation problem, which can have a varied impact on their performance and sensitivity to MPM.

The literature has widely acknowledged MPM as a threat to the reliability of estimators. Techniques have been suggested to avoid MPM by cautious selection of parameters and by suitably incorporating unmodelled effects in the noise covariance matrices [20]. The innovation sequence test has been proposed to detect MPM [21]. Error bounds for the state estimates in the presence of non-Gaussian noise corrupting the system have been provided using probability theory in Refs. [22-25]. An alternative version of the extended Kalman filter has also been recently proposed to address modeling uncertainties in Ref. [26]. However, quantification of the impact of MPM on an estimator's performance has not been extensively addressed. The most straightforward approach is to use the mean squared error (MSE) of the filtered measurements as an index of performance. However, this approach does not allow one to benchmark the estimator: the ideal value of MSE for a system is not known beforehand. As a result, one can only detect an increase or decrease in the MSE, preventing MSE from being used as an absolute measure of performance.

1.2. Contributions

As discussed earlier, the performance of an estimator depends on the quality of the model (extent of MPM) and the severity of the approximations made by the (nonlinear) estimator. In this article, we attempt to assess an estimator with respect to a benchmark filter in the presence of both factors affecting performance. However, since the approximations made by any particular estimator are fixed, the present study reduces to quantifying the performance of the estimator under varying levels of model plant mismatch. We therefore refer to this problem as quantifying the impact of MPM on a state estimator, and discuss the influence of the approximations made by the estimator in a separate section. Specifically, we (i) develop a method to characterize a benchmark estimator such as the Kalman filter with measures other than MSE and (ii) quantify the performance of an estimator in the presence of MPM for systems with multiple measurable variables. We first characterize the Kalman filter, which is the optimal estimator for linear systems, in Section 2. These properties are used to derive the conditions on the individual performance metrics of the ideal multivariate estimator (without MPM and approximations) in Section 3, which are then used to devise a performance measure for any estimator. Simulation results are presented in Section 4, followed by a discussion in Section 5 and concluding remarks and directions for future work in Section 6.

2. Preliminaries

Consider a system described by Equation (1). The problem of estimation involves providing reliable estimates of the state variables (xk) from the measurements (yk) in the presence of model uncertainty (νk) and measurement noise (ωk). Here we address estimation in the context of filtering, i.e., obtaining state estimates at time k given measurements till time k ($\hat{x}_{k|k}$), as opposed to prediction and smoothing, which use past and future measurements respectively to estimate the state variables.

2.1. Innovation properties of the Kalman filter

The Kalman filter, which is the best linear unbiased estimator (BLUE), solves the above estimation problem in two steps: (i) prediction, in which the state estimates and their covariances are propagated through the model, and (ii) correction, in which a weighted combination of the predicted and observed measurements is used to correct the state estimates and their covariances. For a linear system, Equation (1) can be written as:
$$x_{k+1} = A x_k + B u_k + \nu_{k+1}$$
$$y_k = C x_k + D u_k + \omega_k$$

The Kalman filter equations can be summarized as:

$$\hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_k$$
$$P_{k|k-1} = A P_{k-1|k-1} A^T + Q$$
$$K_k = P_{k|k-1} C^T (C P_{k|k-1} C^T + R)^{-1}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - C \hat{x}_{k|k-1} - D u_k)$$
$$P_{k|k} = (I - K_k C) P_{k|k-1}$$

where $\hat{x}_{k|k-1}$ and $P_{k|k-1}$ are the predicted state estimate and error covariance matrix respectively, $K_k$ is the Kalman gain that weighs the predictions and measurements, and $\hat{x}_{k|k}$ and $P_{k|k}$ represent the corrected state estimate and error covariance matrix respectively. Since the Kalman filter performs only linear operations on the estimates and assumes the initial state estimate to be a Gaussian random variable, the estimates obtained at all subsequent iterations remain Gaussian random variables. Moreover, it can be shown that the innovation sequences ($\varepsilon_k = y_k - \hat{y}_{k|k-1}$) exhibit Gaussian white noise characteristics at steady state [21]:

$$E[\varepsilon_k \varepsilon_j^T] = 0, \quad j \neq k \tag{2}$$

with $E[\varepsilon_k] = 0$ and $E[\varepsilon_k \varepsilon_k^T] = C P_{k|k-1} C^T + R$, where $C$ is the measurement matrix and $P_{k|k-1}$ is the predicted state error covariance matrix.

For nonlinear systems, let us consider an ideal state estimator that does not make any approximations and exhibits the above properties, namely minimum mean squared error and an innovation sequence with white noise characteristics. This estimator can then be thought of as the equivalent of the Kalman filter for non-linear systems. It must be noted that the above properties are never satisfied by a practical nonlinear estimator; the ideal estimator considered above is hypothetical and is assumed solely to draw parallels with the Kalman filter. We consider this ideal non-linear state estimator as the benchmark for comparing the performance of physically realizable state estimators available in the literature.
2.2. Signatures of optimality, sub-optimality, and deviations therefrom

Equation (2) implies that the benchmark filter does not make any systematic or predictable errors in estimation and is therefore optimal (in estimating up to the second moment of the state variables) when the assumptions made on the model and measurements are satisfied. Violation of the assumptions will therefore lead to loss of optimality of the filter. On the other hand, it is a well accepted fact that, due to several reasons such as randomness in measurements, overlooked environmental factors and assumptions of ideal behaviour, all models are to some extent approximations of real systems. Therefore, elemental levels of mismatch between model and plant dynamics, and hence minimal correlation in the innovation sequences, will be present even under normal operating conditions. With increasing mismatch, the predictions obtained from the model exhibit a temporal behaviour that is distinctly different from that of the measurements, leading to an increase in the mean squared filtering error (MSE) and to more strongly correlated innovation sequences. This indicates that the estimator (the Kalman filter, and both ideal and practical estimators for nonlinear systems) is being driven away from its initial state of (sub)optimality. This can be observed by inspecting the MSE and the autocorrelation function (ACF) of the innovation sequences. However, we demonstrate later in the article that the MSE is (i) sensitive to noise and (ii) does not offer a benchmark for performance assessment, because of which we choose to analyze the ACF of the filtering errors. Quantifying the impact of MPM so as to distinguish minimal mismatches during normal operation from significant mismatches that require an operator's attention calls for the development of a single interpretable measure of performance. The ACF, being a sequence of numbers, does not provide such a measure. Furthermore, a binary whiteness test of the errors cannot quantify the performance of the estimator. We therefore turn to the Hurst exponent, which is related to the ACF and is a single measure quantifying the predictability of a time series [27]. The Hurst exponent of a time series y(t), denoted α, is calculated by solving the following equation:

$$y(t) = a^{\alpha}\, y\!\left(\frac{t}{a}\right)$$

The Hurst exponent is a characteristic property of a time series and is related to self similarity and long range correlations in the data. It has been applied in several areas for diverse applications [28-31]. For stationary data, the value of the Hurst exponent (α) lies between 0 and 1. For a white noise process, α = 0.5. A persistent time series is characterized by α > 0.5, and an anti-persistent time series by α < 0.5. Any deviation of α from 0.5 therefore indicates the presence of a predictable component in the data. The Hurst exponent has been proposed earlier as a metric quantifying the impact of MPM on the performance of linear and nonlinear state estimators [32]. However, that technique is lacking in the following aspects:

1. The technique considers the predictability of the a posteriori filtering errors (the difference between the filtered outputs ŷk|k and the measurements yk) instead of the innovation sequence, without establishing the equivalence between the predictability of the two time series.
2. The deviation of the Hurst exponent from its ideal value of 0.5 is used to detect and quantify the performance of the estimator in the presence of MPM. However, the covariance of the Hurst exponents is not incorporated in the index, which makes it difficult to systematically define bounds for departure from 0.5, rendering the metric, to a certain extent, a heuristic.
3. The technique is limited to systems with a single measurement variable, with no guidelines provided on extending the idea to multivariate systems. Although the technique was applied to a large scale system with four measurable variables in Ref. [33], the development of a single performance index was not addressed.

Furthermore, to the best of the authors' knowledge, there exists no benchmark in the literature for performance assessment of state estimators. In the following we propose a method that allows us to characterize a benchmark estimator such as the Kalman filter, and a metric for assessing performance with respect to that benchmark.

3. The proposed technique

Consider a multivariate system as described in Equation (1) with p measured variables. Here we:

1. Establish the equivalence between the predictability of the innovation sequence and that of the a posteriori errors,
2. Develop a benchmark against which the performance of estimators can be assessed, and
3. Devise a multivariate performance metric that intelligently merges the individual Hurst exponents of the system.
3.1. Properties of a posteriori errors of the benchmark estimator

Let us first consider a linear system, for which Equation (1) can be written as:

$$x_{k+1} = A x_k + B u_k + \nu_{k+1}$$
$$y_k = C x_k + D u_k + \omega_k$$

Let us denote the innovation sequence as $\varepsilon_k = y_k - \hat{y}_{k|k-1}$ and the a posteriori errors as $e_k = y_k - \hat{y}_{k|k}$. According to the state update equation of the Kalman filter, we have $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \varepsilon_k$ and the updated measurements $\hat{y}_{k|k} = \hat{y}_{k|k-1} + C K_k \varepsilon_k$, where $K_k$ is the Kalman gain at time k. The a posteriori filtering errors can then be expressed as:

$$e_k = y_k - \hat{y}_{k|k} = y_k - \hat{y}_{k|k-1} - C K_k \varepsilon_k = (I - C K_k)\,\varepsilon_k$$

The a posteriori filtering errors are therefore linear combinations of the innovation sequences, whose covariance is:

$$E[e_k e_k^T] = E[(I - C K_k)\varepsilon_k \varepsilon_k^T (I - C K_k)^T] = (I - C K_k)\, E[\varepsilon_k \varepsilon_k^T]\, (I - C K_k)^T \tag{3}$$

It can be observed from Equation (3) that the covariance of the a posteriori errors in general depends on the time instant k. However, when the Kalman filter has reached its steady state, i.e., the Kalman gain has reached a steady value that does not depend on k ($K_k = K$), we have from Equation (3):

$$E[e_k e_k^T] = (I - C K)\, E[\varepsilon_k \varepsilon_k^T]\, (I - C K)^T \quad \text{and} \quad E[e_k e_j^T] = (I - C K)\, E[\varepsilon_k \varepsilon_j^T]\, (I - C K)^T \quad \forall\, j, k \tag{4}$$

For the steady state Kalman filter it is also known that the innovation sequences are Gaussian white noise processes with the following properties [21]:

$$E[\varepsilon_k \varepsilon_j^T] = \begin{cases} 0, & j \neq k \\ C P_{k|k-1} C^T + R, & j = k \end{cases} \tag{5}$$

Using Equations (4) and (5) we have

$$E[e_k e_j^T] = \begin{cases} 0, & j \neq k \\ (I - C K)\,\Sigma_\varepsilon\, (I - C K)^T, & j = k \end{cases} \tag{6}$$

where $\Sigma_\varepsilon = C P_{k|k-1} C^T + R$. Equation (6) summarizes the covariance properties of the a posteriori error sequence for the steady state Kalman filter and states that $e_k$ is also a Gaussian white noise process. Note that the steady state conditions of the Kalman filter are required to ensure that the covariance calculated in Equation (6) is independent of the time instants k and j, resulting in a stationary Gaussian white noise process. Equations (5) and (6) together establish the equivalence between the correlation structure of the innovation sequences and that of the a posteriori filtering errors. The above properties are valid for the ideal (BLUE) estimator for linear systems. Under assumptions similar to those made in Section 2, we assume that the same properties will be exhibited by the ideal (minimum mean squared error) estimator that makes no approximations for non-linear systems. We next discuss the computational details of the algorithm used for estimating the Hurst exponent of a time series. We then show how Equation (6) manifests in the statistical properties (such as the correlation structure) of the Hurst exponents of the a posteriori errors, and finally employ these properties to define the benchmark.

3.2. Defining the benchmark for performance assessment: properties of Hurst exponents of the benchmark estimator

In this work, the Hurst exponent is estimated using detrended fluctuation analysis (DFA) [34] for a time series y(k) of length N as follows (a code sketch follows the list):

1. The given time series y(k) is integrated after mean removal: $Y(k) = \sum_{i=1}^{k} (y(i) - \bar{y})$.
2. Y(k) is then divided into windows of length n and a least squares line ŷ is fit in each window.
3. The fluctuation is then defined as $F(n) = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (Y(i) - \hat{y}_i)^2}$, which is a function of the window size n.
4. Steps 2 and 3 are repeated for varying window sizes n. A log-log plot of the fluctuation against the corresponding window size is made, and the slope of this plot provides the estimate of the Hurst exponent α of y(k).
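A minimal implementation of this procedure is sketched below (Python/NumPy). The logarithmically spaced window grid and the minimum window size are illustrative choices not specified in the text.

```python
import numpy as np

def dfa_hurst(y, win_sizes=None):
    """Estimate the Hurst exponent of a 1-D series via detrended fluctuation analysis."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    Y = np.cumsum(y - y.mean())                  # step 1: integrate after mean removal
    if win_sizes is None:                        # illustrative log-spaced window grid
        win_sizes = np.unique(np.logspace(1, np.log10(N // 4), 20).astype(int))
    F = []
    for n in win_sizes:
        t = np.arange(n)
        resid = []
        for i in range(N // n):                  # step 2: least squares line per window
            seg = Y[i * n:(i + 1) * n]
            coef = np.polyfit(t, seg, 1)
            resid.append(seg - np.polyval(coef, t))
        resid = np.concatenate(resid)
        F.append(np.sqrt(np.mean(resid ** 2)))   # step 3: fluctuation F(n)
    # step 4: slope of log F(n) versus log n estimates the Hurst exponent
    return np.polyfit(np.log(win_sizes), np.log(F), 1)[0]
```

For a realization of white noise of length 1000 or more, this returns values scattered around 0.5, consistent with the benchmark properties derived below.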
Let us now denote the Hurst exponent of the ith a posteriori error of an estimator as α(i). We now state the assumptions made in defining the characteristics of the benchmark estimator, and subsequently identify the statistical properties of the estimated Hurst exponents.

Assumptions.

1. The system (denoted Ξ) is assumed to be described by Equation (1), with additive noise corrupting the states and measurements.
2. The noise corrupting the system is assumed to be identically (Gaussian) and independently distributed (IID). If the noise is not IID, then an adequate model of the noise is assumed to be available. The functions f and h are assumed to be smooth.
3. The model available for estimation is assumed to be ideal, i.e., there is no model plant mismatch.
4. There exists an ideal state estimator (ϒ) for the system described in Equation (1) which, given the model and measurements, can obtain the optimal (minimum mean squared error) estimates of the states.
5. The ideal estimator (ϒ) is operating at steady state. For linear systems, this is equivalent to the Kalman filter exhibiting a fixed value of the Kalman gain.

Assumptions 1 and 2 state that the system can be represented in state space form (Equation (1)) with additive noise. The process and measurement noise are assumed to be IID (Gaussian) random variables. If the system is such that the noise does not satisfy this assumption, then an ideal model of the coloured noise process is required. The functions describing the system evolution are assumed to be smooth, i.e., continuous and differentiable. Assumptions 3, 4 and 5 state that there is no model plant mismatch, that there exists an optimal state estimator for the system Ξ, and that the estimator has reached steady state conditions.

Remark 3.1. It must be noted that these assumptions are made solely to derive the properties of the Hurst exponents under ideal (hypothetical for nonlinear systems) conditions. These assumptions, specifically Assumptions 3, 4 and 5, need not be valid in practice for an estimator whose performance we wish to assess.

Lemma 1. Under the above assumptions, the a posteriori filtering errors of the estimator ϒ will exhibit white noise characteristics and hence their Hurst exponents will be equal to 0.5.

Proof. According to Assumption 2, the noise corrupting the system Ξ is IID, and the ideal estimator ϒ (Assumption 4) is provided with the perfect model (Assumption 3). As a result, the errors made by ϒ after filtering the measurements will be uncorrelated at steady state (Assumption 5), as discussed earlier (Equation (6)). Therefore, we have

$$\alpha^{*}_{(i)} = 0.5, \quad i = 1, 2, 3, \ldots, p \tag{7}$$

It must be noted that DFA provides only an estimate of the true Hurst exponent of a time series, and this estimate will always have a standard deviation associated with it. In order to adequately use these estimates for performance assessment of multivariate systems and to systematically provide bounds on the performance, it is necessary to account for this standard deviation.

Observation 1: It has been observed from the simulation studies performed in this work that the standard deviation of estimated Hurst exponents under ideal conditions (i.e., for a white noise process) obtained with DFA is equal to 0.061. A comparative analysis of the properties of Hurst exponents estimated with different techniques [35] has also reported that the standard deviation of Hurst exponents calculated with DFA is ≈ 0.07. Therefore, we take

$$s^{*}_{(i)} = 0.06, \quad i = 1, 2, 3, \ldots, p \tag{8}$$
Table 1
Correlation of Hurst exponents for different time series.

Situation                   E[α̂1]    E[α̂2]    corr(α̂1, α̂2)
Uncorrelated time series    0.49      0.49      0
                            0.17      0.35      0
                            0.78      0.69      0
Correlated time series      0.17      0.35      0.99
                            0.78      0.69      0.99
                            0.35      0.69      0.99

In addition to the individual standard deviation, we also examine the correlation between the Hurst exponents of two time series.

Observation 2: It has been observed from simulation studies that the estimated Hurst exponents of two time series are uncorrelated when the underlying series are themselves uncorrelated. Let us consider two time series y1(k) and y2(k) that are not correlated with each other and have Hurst exponents α1 and α2 respectively. Although the true Hurst exponents of the time series are fixed, DFA provides only estimates of the true values, i.e., α̂1 and α̂2, which are random variables. It has been observed in simulation studies that the estimated Hurst exponents of the two time series are uncorrelated, i.e., corr(α̂1, α̂2) = 0, when E[y1(k) y2(k)] = 0. Table 1 shows the mean and correlation of Hurst exponents for different synthetically generated time series. It can be observed from Table 1 that when the individual time series are not cross-correlated, their estimated Hurst exponents do not exhibit any correlation, irrespective of the individual auto-correlations (a value of zero indicates that the confidence interval of the estimate contains zero). On the other hand, the estimated Hurst exponents of two cross-correlated time series exhibit large correlation.

Now, let e(i)[k] denote the a posteriori error sequence for the ith measurement variable and e(i)[k − l] denote the same error sequence with a lag of l time units. Then, according to Equation (6), we have

$$E\big[e^{(i)}[k]\; e^{(j)}[k-l]\big] = 0, \quad l \neq 0, \quad i \neq j$$

As a result, under the assumptions stated above, the estimated Hurst exponents of time shifted a posteriori filtering errors are uncorrelated, i.e., $\mathrm{corr}(\hat{\alpha}_i, \hat{\alpha}_j) = 0$, $i \neq j$, where α̂i and α̂j are the estimated Hurst exponents of e(i)[k] and e(j)[k − l] for l ≠ 0. This results in the correlation matrix of the Hurst exponents being an identity matrix, i.e.,

$$C^{*}_{\alpha} = I_p \tag{9}$$

The above statistical properties are independent of the system and its nature, and can be used to characterize the benchmark state estimator. Any deviation of the Hurst exponents (mean and/or correlation) will be indicative of model plant mismatch, and this information can be used in a performance measure to assess any given state estimation algorithm.

Remark 3.2. It is noteworthy that the a posteriori errors of different variables do not have zero correlation with each other for l = 0. In order to characterize the ideal estimator, the Hurst exponents of different a posteriori errors should therefore be calculated for the lagged processes, i.e., for e(i)[k] and e(j)[k − l] with l ≠ 0. Hurst exponents calculated even for l = 1 will be able to characterize the benchmark estimator. However, calculation of Hurst exponents through techniques such as DFA requires at least 1000 data points for obtaining reliable values. As a result, the lag required to satisfy the above conditions is very small compared to the length of the actual time series. Therefore, from a computational point of view, the Hurst exponents of the individual errors that are not time shifted can also serve to characterize the ideal estimator.

3.3. Assessing the performance with respect to the benchmark: a Hurst exponent - Mahalanobis distance measure

Now that the properties of the benchmark estimator have been established, a measure quantifying the deviation of the properties of any given estimator from those of the benchmark can be developed to quantify the impact of MPM. An important feature of multivariate systems is the interaction between measurements as well as internal states. In such systems, a mismatch in any parameter can propagate through the system dynamics and affect all states and measurements. From an estimator performance assessment standpoint, this will lead to a rise in the correlations of, and among, the errors made in estimating/filtering the variables (states and measurements) as compared to a system with no mismatch. A desirable feature of a multivariate performance index is the ability to quantitatively account for such interactions among variables. For a multivariate system with p measured variables, there will be p Hurst exponents of the corresponding a posteriori filtering errors, quantifying their respective predictability. The task at hand is to use these p Hurst exponents to devise a performance index that accounts for correlations in the system, i.e., detects and quantifies increments in the correlations introduced by MPM. Distance measures such as the Mahalanobis [36] and Bhattacharyya distances [37], and divergence measures such as the Kullback-Leibler divergence [38], can be employed to handle the correlations in data. In the proposed approach we use the Mahalanobis distance as a tool to adapt the Hurst exponent technique to systems with multiple measurement variables.

The Mahalanobis distance is a generalized distance that provides a single measure of the degree of divergence in the mean values of different characteristics of a population by considering the correlation between variables [39]. The concept can be intuitively explained by first considering univariate data, whose spread can be quantified by the standard deviation (or variance). For multivariate data (k variables), the spread in k dimensional space is characterized by the covariance (or correlation). The Mahalanobis distance employs this correlation structure to quantify the (dis)similarity between a data point and a data set (with said correlation structure). It has been widely used for distinguishing unhealthy from healthy systems, and in fusion with Taguchi methods for fault detection and isolation [40-43]. The calculation of the Mahalanobis distance (MD) requires characterising the aforementioned data set with its mean, standard deviation and correlation structure. These parameters constitute the Mahalanobis space (MS) from which the distance metric is calculated. In applying a Hurst exponent - Mahalanobis distance approach for performance assessment of state estimators, the reference data set corresponds to the Hurst exponents obtained under ideal conditions, which were derived earlier and are summarized in Equations (7)-(9). The MD can then be used to quantify the deviation of an estimator from ideal operation, and therefore the effect of MPM on estimator performance. It is computed as follows (a code sketch follows the list):

1. Collect measurements from the system and calculate the Hurst exponents of the a posteriori errors over time, αij, with i = 1, 2, 3, …, p denoting the index of the variable and j denoting time.
2. Obtain the normalized Hurst exponent matrix Z = (zij) with

$$z_{ij} = \frac{\alpha_{ij} - \alpha^{*}_{(i)}}{s^{*}_{(i)}}$$

where $\alpha^{*}_{(i)}$ and $s^{*}_{(i)}$ are obtained from Equations (7) and (8) respectively.
3. Calculate the performance metric as the (squared) Mahalanobis distance of the normalized Hurst exponents:

$$MD = \frac{1}{p}\, Z^T \left(C^{*}_{\alpha}\right)^{-1} Z \tag{10}$$

where $C^{*}_{\alpha}$ is obtained from Equation (9). The Mahalanobis distance is the square root of the expression on the right hand side of Equation (10).
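The three steps can be condensed into a short routine (Python/NumPy). Here dfa_hurst refers to the DFA sketch in Section 3.2, and the benchmark constants follow Equations (7)-(9).

```python
import numpy as np

ALPHA_STAR = 0.5   # benchmark mean of the Hurst exponents, Equation (7)
S_STAR = 0.06      # benchmark standard deviation, Equation (8)

def alpha_md(a_posteriori_errors):
    """alpha-MD index for one window of p a posteriori error sequences.

    Since C_alpha* = I_p (Equation (9)), the squared Mahalanobis distance
    reduces to a scaled Euclidean distance of the normalized exponents."""
    alphas = np.array([dfa_hurst(e) for e in a_posteriori_errors])  # step 1
    z = (alphas - ALPHA_STAR) / S_STAR                              # step 2
    C_star_inv = np.eye(len(z))                                     # Equation (9)
    return float(z @ C_star_inv @ z) / len(z)                       # Equation (10)
```

An index close to or below 1 then indicates operation in the optimal region, as discussed next.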
The spread (variance) of the MS is equal to 1, as a result of which data with no MPM will have an MD that is ≤ 1. A new data point that does not lie within the spread of this data set is then expected to have been generated by a different system configuration, i.e., with model plant mismatch. This property of the Mahalanobis space enables detection of any deviation from the ideal estimator characterized by Equations (7)-(9). In addition, the larger the extent of MPM in the system, the further the Hurst exponents will lie from the spread of the ideal region. As a result, the magnitude of the MD correlates positively with the severity of MPM. This property of the Mahalanobis distance allows for quantification of model plant mismatch, and hence of the performance of the state estimator. The mean of the MD obtained over a period of time is used to ascertain the state of the system and quantify the extent of mismatch over that period.

Remark 3.3. As mentioned earlier, the performance metric devised for systems with a single measurement variable did not account for the covariances of the Hurst exponents. This work addresses that concern by incorporating the covariance in characterizing the benchmark for performance assessment. In other words, optimality is now characterized by a region (mean and covariance) as opposed to a number (0.5). The performance metric (the MD of new Hurst exponents) is always calculated from the MS, as a result of which MD = 1 marks the boundary between optimality and sub-optimality. A value of MD ≤ 1 therefore signifies operation in the optimal region, which in turn improves the confidence in identifying poor performance from the proposed metric (MD > 1).

4. Results

The proposed technique is demonstrated with one linear system (with the Kalman filter) and two non-linear systems (with the EKF, UKF and PF). In each of the systems, model plant mismatch is introduced by changing the plant parameters while keeping the model used for estimation fixed, and α − MD is used for performance assessment. The parameters involved in the UKF have been set as α = 0.01, β = 2 and κ = 0 for both nonlinear case studies.

4.1. Case study 1: linear system

The first case study is a linear system: a two component spring mass damper (SMD) system described as

$$x_{k+1} = A x_k + \nu_k$$
$$y_k = C x_k + \omega_k$$

with the following system matrices:

$$A_c = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\frac{k_1}{m_1} & -\frac{b_1}{m_1} & \frac{k_1}{m_1} & \frac{b_1}{m_1} \\ 0 & 0 & 0 & 1 \\ \frac{k_1}{m_2} & \frac{b_1}{m_2} & -\frac{k_1+k_2}{m_2} & -\frac{b_1+b_2}{m_2} \end{bmatrix}, \quad A = e^{A_c T_S}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

where mi, ki and bi are respectively the mass, spring constant and damping coefficient of the ith component. The states of the system are the positions and velocities of the two masses; the positions of the masses are measured with a sampling time of TS = 0.25. The system configuration is set to m1 = m2 = 1, k1 = k2 = 0.3 and b1 = b2 = 0.1; the initial condition is x0 = [3; 5; 3; 5]; and noise is generated according to Equation (1) with Q = 0.005 × I4 and R = 0.005 × I2, where Ij is the identity matrix of dimension j.
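As a side note, the discretization A = e^{Ac TS} can be computed directly with the matrix exponential. A sketch with the parameter values listed above, using SciPy's expm:

```python
import numpy as np
from scipy.linalg import expm

m1 = m2 = 1.0
k1 = k2 = 0.3
b1 = b2 = 0.1
Ts = 0.25

Ac = np.array([[0.0,      1.0,     0.0,             0.0],
               [-k1/m1,  -b1/m1,   k1/m1,           b1/m1],
               [0.0,      0.0,     0.0,             1.0],
               [k1/m2,    b1/m2,  -(k1 + k2)/m2,  -(b1 + b2)/m2]])

A = expm(Ac * Ts)                      # A = e^{Ac Ts}
C = np.array([[1.0, 0.0, 0.0, 0.0],    # positions of the two masses are measured
              [0.0, 0.0, 1.0, 0.0]])
```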
System parameters are changed in steps to introduce MPM, and the α − MD technique is applied to quantify the estimator performance. (Small random perturbations were introduced in the parameters so as to avoid cancellation of the effects of different parameters.) The results are summarized in Table 2.

Table 2
α − MD index for state estimation of the two component spring mass damper system with and without MPM.

Situation       α1       α2       c12      MD
Without MPM     0.4937   0.4906   0        0.9990
With MPM        0.5919   0.4992   0        1.9935
                0.5838   0.5279   0        2.0502
                0.6881   0.6418   0.3177   8.7035
                0.7335   0.6721   0.3962   12.8332

It can be observed from Table 2 that without any mismatch, the individual Hurst exponents are very close to the ideal value of 0.5, indicating optimality of the Kalman filter. This is accompanied by the performance measure taking a value of 0.9990. Table 2 also shows that the Hurst exponents do not exhibit any correlation (c12 = corr(α1, α2)) in the absence of MPM (a value of zero indicates that the confidence interval for the correlation coefficient contains zero, implying that the null hypothesis of zero correlation could not be rejected). Furthermore, small levels of mismatch result in deviations of the Hurst exponents from 0.5 while not inducing statistically significant correlations. This leads to small increments in the value of the proposed performance measure. As the severity of mismatch increases, the Hurst exponents deviate further from 0.5 and also exhibit significant correlation, indicating an increase in both the autocorrelation and the cross-correlation of the a posteriori estimation errors. The performance measure also exhibits large increments for these configurations. The proposed metric therefore serves as a reliable measure of overall estimator performance.
4.2. Case study 2: magnetic levitation system

The second system considered is a magnetic levitation system, which consists of an electromagnet (a coil of resistance R and inductance L connected to a voltage source V) that is used to levitate a metallic ball of mass m against gravity. Magnetic levitation is used in applications such as frictionless bearings and high speed trains. The system can be described by the following equations:

$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = g - \frac{C}{m}\left(\frac{x_3}{x_1}\right)^2$$
$$\dot{x}_3 = -\frac{R}{L}\, x_3 + \frac{2C}{L}\,\frac{x_2 x_3}{x_1^2} + \frac{u}{L}$$
$$y = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

The position (x = x1) and velocity (v = x2) of the levitated ball and the current (I = x3) through the coil constitute the state variables; only the current and position are measured. The parameters of the system are R = 1 (resistance of the coil), C = 0.0001, L = 0.01, m = 0.05 and g = 9.8. The initial condition of the system is x0 = [0.012; 0; 0.84], with Q = 10−6 × I3 and R = 10−6 × I2. Model plant mismatch is introduced into the system by changing m, L and R. Table 3 presents the results of estimation obtained with the EKF, UKF and PF (20 particles). It can be seen from Table 3 that with increasing severity of mismatch, the individual Hurst exponents deviate further from 0.5 and the α − MD index increases, providing a quantitative measure of the overall performance of the estimator for non-linear systems. For this system, the correlations between the Hurst exponents were not found to be significant even in the presence of MPM and are hence not listed in Table 3.
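To illustrate how the EKF handles this model, the sketch below gives the continuous-time dynamics and the Jacobian ∂f/∂x obtained by differentiating the equations above (Python/NumPy). The Jacobian is our own derivation for illustration; a practical EKF would additionally discretize the dynamics.

```python
import numpy as np

R_, L_, C_, m, g = 1.0, 0.01, 1e-4, 0.05, 9.8   # parameter values listed above

def f_maglev(x, u):
    """Continuous-time dynamics of the levitated-ball model."""
    x1, x2, x3 = x
    return np.array([
        x2,
        g - (C_ / m) * (x3 / x1) ** 2,
        -(R_ / L_) * x3 + (2 * C_ / L_) * x2 * x3 / x1 ** 2 + u / L_,
    ])

def jacobian_f(x):
    """Jacobian df/dx used by an EKF to linearize about the current estimate."""
    x1, x2, x3 = x
    return np.array([
        [0.0, 1.0, 0.0],
        [2 * (C_ / m) * x3 ** 2 / x1 ** 3, 0.0, -2 * (C_ / m) * x3 / x1 ** 2],
        [-(4 * C_ / L_) * x2 * x3 / x1 ** 3, (2 * C_ / L_) * x3 / x1 ** 2,
         -(R_ / L_) + (2 * C_ / L_) * x2 / x1 ** 2],
    ])

H = np.array([[1.0, 0.0, 0.0],   # position and current are measured
              [0.0, 0.0, 1.0]])
```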
Table 3
α − MD index for state estimation of the magnetic levitation system with and without MPM for the EKF, UKF and PF with 20 particles (i: increase; d: decrease).

                        EKF                  UKF                  PF
Situation               α1    α2    MD       α1    α2    MD       α1    α2    MD
No MPM                  0.49  0.49  1.05     0.49  0.5   1.01     0.49  0.5   1.01
R (10%i), m (20%i)      0.49  0.43  2.7      0.5   0.45  2.02     0.5   0.47  1.3
R (10%i), m (40%i)      0.49  0.4   4.3      0.49  0.44  3.2      0.5   0.46  1.8
L (20%d), m (40%i)      0.5   0.37  6.4      0.49  0.39  5.5      0.49  0.41  4.2
L (30%d), m (40%i)      0.5   0.29  7.7      0.5   0.35  7.1      0.5   0.39  4.9

[Fig. 1. Comparison of α − MD for state estimation of the magnetic levitation system with the EKF, UKF and PF. The indices on the X-axis refer to the row numbers in Table 3.]

In addition to quantifying the performance of a particular estimator with varying levels of MPM, the proposed technique can be used to compare different estimators for a particular system. A comparison of the performance of the EKF, UKF and PF is provided in Table 3 and presented graphically in Fig. 1. It can be observed that the particle filter outperforms the UKF and EKF in the presence of model plant mismatch. This observation can be explained on the basis of the computational details of the techniques. The particle filter performs a random (sequential Monte Carlo) sampling of particles (20 in this case) based on the current state estimate and its error covariance, and updates the states based on the likelihood of the particles given the measurements. The UKF performs a deterministic sampling to generate "sigma points" (9 in this case) based on the estimate and covariance and propagates all sigma points through the nonlinear model, while the extended Kalman filter propagates only the point estimate and its covariance by linearizing the model about the current operating point and using the Kalman filter formulation. As a result, when there is model plant mismatch, linearizing the model can cause large errors in the computation of the state estimates, rendering the EKF more sensitive to model plant mismatch than the UKF, which uses multiple sigma points with the nonlinear model. Finally, the particle filter uses a relatively large number of particles for estimation and hence exhibits resilience to MPM, albeit at the cost of computational expense.
4.3. Case study 3: double pendulum system

The final system considered is the double pendulum system (with distributed masses) described as:

$$\dot{\theta}_1 = \frac{l_2\, p_{\theta_1} - l_1\, p_{\theta_2} \cos(\theta_1 - \theta_2)}{l_1^2\, l_2\, [m_1 + m_2 \sin^2(\theta_1 - \theta_2)]}$$

$$\dot{\theta}_2 = \frac{l_1 (m_1 + m_2)\, p_{\theta_2} - l_2\, m_2\, p_{\theta_1} \cos(\theta_1 - \theta_2)}{l_1\, l_2^2\, m_2\, [m_1 + m_2 \sin^2(\theta_1 - \theta_2)]}$$

$$\dot{p}_{\theta_1} = -(m_1 + m_2)\, g\, l_1 \sin\theta_1 - C_1 + C_2$$

$$\dot{p}_{\theta_2} = -m_2\, g\, l_2 \sin\theta_2 + C_1 - C_2$$

with

$$C_1 \equiv \frac{p_{\theta_1} p_{\theta_2} \sin(\theta_1 - \theta_2)}{l_1 l_2 [m_1 + m_2 \sin^2(\theta_1 - \theta_2)]}$$

$$C_2 \equiv \frac{l_2^2 m_2\, p_{\theta_1}^2 + l_1^2 (m_1 + m_2)\, p_{\theta_2}^2 - l_1 l_2 m_2\, p_{\theta_1} p_{\theta_2} \cos(\theta_1 - \theta_2)}{2\, l_1^2 l_2^2 [m_1 + m_2 \sin^2(\theta_1 - \theta_2)]^2}\, \sin[2(\theta_1 - \theta_2)]$$

The state variables of the system are the angles (θi) and momenta ($p_{\theta_i}$) of the pendulums, with the angles being measured. The parameters of the system are m1 = m2 = 1 and l1 = l2 = 1. The system is simulated with no friction or losses so as to exhibit sustained oscillations when initiated from a non-equilibrium, non-chaotic initial condition. Unlike the first two systems, which operate at steady state, this system oscillates continuously and is representative of a scenario involving dynamic state estimation. The initial condition is x0 = [π∕4; 0; π∕4; 0], and the process and measurement noise are generated according to Equation (1) with Q = 10−6 × I4 and R = 10−6 × I2 respectively. Mismatch is introduced into the system by changing the masses (mi) and lengths (li) of the two pendulums.

Table 4 presents the performance of the EKF, UKF and PF for varying levels of mismatch between model and plant. With increasing severity of MPM, the individual Hurst exponents deviate from 0.5 and the α − MD index increases from 1. It can be observed that for the double pendulum, unlike the magnetic levitation system, the correlation between the Hurst exponents also increases with the severity of mismatch. Moreover, for this system too, the PF outperforms the UKF and EKF. Conclusions similar to those for the magnetic levitation system can be drawn about the performance of each estimator on the basis of its computational details. Note, however, that the PF uses 50 particles for this system, compared to 20 particles for the magnetic levitation system. This can be attributed to the difference in the type of non-linearity of the two systems, which requires a different computational expense for performing state estimation. Therefore, if, for example, bounds on the extent of MPM are known a priori, the proposed technique can be used to decide the appropriate trade-off between the performance desired and the computational expense that can be afforded.
Table 4
α − MD index for state estimation of the double pendulum system with and without MPM for the EKF, UKF and PF (50 particles); i: increase.

                EKF                          UKF                          PF
Situation       α1    α2    c12   MD         α1    α2    c12   MD         α1    α2    c12   MD
No MPM          0.49  0.49  0     1.02       0.49  0.5   0     1.01       0.49  0.5   0     1.01
m1 (2%i)        0.5   0.51  0     1.3        0.5   0.52  0     1.28       0.49  0.52  0     1.2
m1 (5%i)        0.52  0.64  0.23  7.2        0.51  0.6   0.2   3.8        0.51  0.57  0.12  2.6
m1 (8%i)        0.6   0.8   0.56  22.65      0.63  0.69  0.49  15.6       0.55  0.67  0.43  13.8
m2 (2%i)        0.51  0.53  0     1.37       0.51  0.52  0     1.28       0.51  0.51  0     1.2
m2 (5%i)        0.52  0.66  0.32  7.8        0.52  0.6   0.27  3.3        0.52  0.56  0.29  2.8
m2 (8%i)        0.55  0.76  0.46  13         0.53  0.67  0.47  9          0.53  0.61  0.38  8.2
l1 (2%i)        0.51  0.52  0     1.4        0.51  0.51  0     1.3        0.51  0.5   0     1.2
l1 (5%i)        0.67  0.52  0.18  6.6        0.62  0.52  0.13  4.2        0.58  0.52  0.1   3.6
l1 (8%i)        0.76  0.53  0.28  11.03      0.72  0.53  0.2   6.9        0.62  0.53  0.19  5.8
5. Discussion

In addition to the results presented in the previous section, we used the cubature Kalman filter (CKF) for state estimation of the double pendulum and magnetic levitation systems. However, the Hurst exponents of the filtering errors were observed to be greater than 1, indicating non-stationarity, even with no MPM. Upon closely examining the filtering errors, it was also observed that they appeared second-order non-stationary. The proposed method therefore suggests that the CKF does not perform as well as the other filters. However, these observations are limited to the systems under consideration and cannot be generalized to other systems without further analysis.

It was mentioned earlier that the mean squared filtering errors also increase with MPM, but owing to their sensitivity to noise and the lack of a benchmark, the MSE does not lend itself as a favourable choice for performance assessment. This is demonstrated in Table 5, which lists the influence of measurement noise on the sum squared error (SSE) and the Hurst exponents for the double pendulum system. It can be observed that, for fixed process dynamics, a change in measurement noise influences the SSE while the Hurst exponents are not affected. This implies that the variances of the filtering errors have increased because of the increased measurement noise, but there is still no change in the predictability of the errors and hence no change in performance. In other words, the SSE will be influenced by measurement noise even in the absence of any significant model plant mismatch. Moreover, unlike for the Hurst exponents, (i) the expected value of the MSE (or SSE) for the ideal estimator is not known and (ii) the covariance of the MSE (or SSE) will depend on the system dynamics, deriving which might not be tractable for all systems. As a result, it is not feasible to distinguish small deviations in MSE from statistically significant deviations, or to provide systematic bounds on the performance metric. The Hurst exponent therefore provides a better measure for performance assessment of estimators.

Table 5
Sum squared error and Hurst exponents for the double pendulum system for varying levels of measurement noise.

Situation      α1     α2     SSE1     SSE2
Without MPM    0.49   0.49   0.6514   0.6516
R (10×)        0.49   0.49   0.7514   0.7618
R (100×)       0.49   0.49   0.7813   0.7843

It must be noted that while the Kalman filter is the ideal estimator for linear systems, a practically implementable version of the ideal estimator for nonlinear systems does not exist. As a result, for linear systems the proposed index quantifies the performance of an estimator with respect to a practical (and ideal) estimator, while for non-linear estimators the performance is assessed with respect to a hypothetical estimator. The proposed technique is, however, amenable to modification for user-defined benchmarking of estimators by providing the mean (α∗) and correlation matrix (Cα∗) of the estimator configuration identified as giving the best performance from, say, analysis of historical data; this will be explored in further studies.

In the derivation of the performance measure, we mentioned that distance metrics as well as divergence measures can be used to evaluate the performance of an estimator with respect to the benchmark. We chose to use the Mahalanobis distance because it calculates the degree of dissimilarity between a probability density function (that of the ideal estimator), characterized by its mean and covariance, and a new data point. In other words, one needs to calculate only one set of values of the (p) Hurst exponents to quantify the performance of the estimator. The Bhattacharyya distance and Kullback-Leibler divergence, on the other hand, quantify the degree of dissimilarity between two different distributions. As a result, one needs to calculate enough samples of the Hurst exponents to be able to characterize the density function of the operating data in order to calculate the metric. These measures, however, have the advantage of being able to account for the correlations between variables of both density functions, as opposed to the Mahalanobis distance, which can account for the correlation structure of the ideal estimator alone. The choice of metric therefore depends on the amount of data available to the user. The framework of performance assessment and the benchmark, however, remain the same irrespective of the measure used. It is also noteworthy that the correlation matrix of Hurst exponents (Cα∗) of the ideal estimator is an identity matrix (Equation (9)). In other words, there is no correlation between the Hurst exponents and hence between the a posteriori filtering errors. As a result, the Mahalanobis distance measure reduces to a scaled Euclidean distance.

The proposed MD acts as a performance measure of the estimator and can potentially be used as an indicator of the need for a model update. A threshold value of the MD can be set, and the performance measure exceeding this threshold can be used as an indicator of unacceptable performance, triggering a model update. However, as observed in the case studies, the value of MD depends on the system dynamics (type of nonlinearity) and the estimator used. As a result, it is difficult to develop thresholds that can be applied to all systems. In an industrial setting, on the other hand, these thresholds can be elicited by an expert based on their experience with the system. It must be emphasized that the proposed framework and measure detect and quantify the performance of an estimator, and cannot be used to isolate the source of poor performance. As mentioned earlier, the performance can be influenced by the validity of the model as well as by the approximations made by the estimator. While one can use the proposed technique to quantify the impact of both sources on performance, it cannot be used to isolate the source as being the model or the approximations. For a fixed estimator, however, since the approximations are fixed, the performance is assessed with respect to the validity of the model, while across different estimators the technique can be used to compare the influence of the approximations made, as demonstrated in the previous section.

6. Conclusion

This article proposed a technique for quantifying the performance of a state estimator based soft sensor. The first contribution of this article is a benchmark for state estimation, much like the minimum variance controller (MVC) benchmark developed for control loop performance assessment. The second contribution is a measure of the performance of a state estimator assessed with respect to the above benchmark. The Mahalanobis distance was used in a similar scenario for control loop performance assessment of multivariate systems, with the advantage of being able to account for correlations between variables under ideal conditions. It was found that, owing to the lack of correlation between variables under ideal conditions, the Mahalanobis distance reduces to a scaled Euclidean distance for estimators. However, it is also possible to use other metrics, such as the Bhattacharyya distance or the Kullback-Leibler divergence, which can account for correlations under non-ideal conditions and will perhaps result in a more complete performance measure. The proposed benchmark is independent of the nature of the system (linear or non-linear) and of the actual estimator used for state estimation. The technique was demonstrated to perform well for linear and nonlinear systems, with three different types of estimators (EKF, UKF and PF) for the latter.
The technique was able to reliably quantify the estimator's performance for both types of systems and for all types of filters, whose underlying computations are significantly different. It was observed that for the two non-linear systems considered in this study, the particle filter always outperformed the UKF and EKF in the presence of MPM, indicating robustness to such mismatch. The UKF appears to provide a trade-off between computational expense and performance for the two systems. While the particle filter proved to be more robust than the EKF and UKF for the systems considered in this article, a generalized statement on the robustness of state estimators will require addressing all estimation techniques for a large variety of non-linear systems. In each class of systems, robust estimators can be used as soft sensors for control and monitoring, while highly sensitive estimators can be used as techniques for early detection of model plant mismatch. Future work will be directed towards comparing the performance of state estimators for different classes of non-linear systems and using different distance measures. Performance assessment of a larger variety of estimation techniques, such as optimization based estimators (MHE), ensemble Kalman filters (EnKF) and the numerically stable square root estimators, will be explored. The problem of identifying the best performance of an estimator from analysis of historical data for user-defined benchmarking of nonlinear state estimators will also be addressed.
References

[1] Nichkawde C, Harish P, Ananthkrishnan N. Stability analysis of a multibody system model for coupled slosh-vehicle dynamics. J Sound Vib 2004;275(3):1069-83.
[2] Kalman RE. A new approach to linear filtering and prediction problems. J Basic Eng 1960;82(1):35-45.
[3] Handschin J, Mayne DQ. Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering. Int J Contr 1969;9(5):547-59.
[4] Gordon NJ, Salmond DJ, Smith AF. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. Radar and signal processing, IEE Proceedings F, vol. 140. IET; 1993. p. 107-13.
[5] Evensen G. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J Geophys Res: Oceans 1994;99(C5):10143-62.
[6] Evensen G, Van Leeuwen PJ. Assimilation of Geosat altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. Mon Weather Rev 1996;124(1):85-96.
[7] Houtekamer PL, Mitchell HL. Data assimilation using an ensemble Kalman filter technique. Mon Weather Rev 1998;126(3):796-811.
[8] Julier S, Uhlmann J, Durrant-Whyte HF. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans Automat Contr 2000;45(3):477-82.
[9] Julier SJ, Uhlmann JK. Unscented filtering and nonlinear estimation. Proc IEEE 2004;92(3):401-22.
[10] Arasaratnam I, Haykin S. Cubature Kalman filters. IEEE Trans Automat Contr 2009;54(6):1254-69.
[11] Mandela RK, Rengaswamy R, Narasimhan S, Sridhar LN. Recursive state estimation techniques for nonlinear differential algebraic systems. Chem Eng Sci 2010;65(16):4548-56.
[12] Vachhani P, Narasimhan S, Rengaswamy R. Robust and reliable estimation via unscented recursive nonlinear dynamic data reconciliation. J Process Contr 2006;16(10):1075-86.
[13] Vachhani P, Rengaswamy R, Gangwal V, Narasimhan S. Recursive estimation in constrained nonlinear dynamical systems. AIChE J 2005;51(3):946-59.
[14] Rao CV, Rawlings JB, Lee JH. Constrained linear state estimation - a moving horizon approach. Automatica 2001;37(10):1619-28.
[15] Rao CV, Rawlings JB, Mayne DQ. Constrained state estimation for nonlinear discrete-time systems: stability and moving horizon approximations. IEEE Trans Automat Contr 2003;48(2):246-58.
[16] Rengaswamy R, Narasimhan S, Kuppuraj V. Receding-horizon nonlinear Kalman (RNK) filter for state estimation. IEEE Trans Automat Contr 2013;58(8):2054-9.
[17] Gopalakrishnan A, Kaisare NS, Narasimhan S. Incorporating delayed and infrequent measurements in extended Kalman filter based nonlinear state estimation. J Process Contr 2011;21(1):119-29.
[18] Ray A, Liou L, Shen J. State estimation using randomly delayed measurements. J Dyn Syst Meas Contr 1993;115(1):19-26.
[19] Bavdekar VA, Prakash J, Patwardhan SC, Shah SL. Moving window ensemble Kalman filter for delayed and multi-rate measurements. Proceedings of the 18th IFAC World Congress; 2011.
[20] Schneider R, Georgakis C. How to not make the extended Kalman filter fail. Ind Eng Chem Res 2013;52(9):3354-62.
[21] Mehra R. On the identification of variances and adaptive Kalman filtering. IEEE Trans Automat Contr 1970;15(2):175-84.
[22] Maryak JL, Spall JC, Silberman GL. Uncertainties for recursive estimators in nonlinear state-space models, with applications to epidemiology. Automatica 1995;31(12):1889-92.
[23] Maryak JL, Spall JC, Heydon BD. Use of the Kalman filter for inference in state-space models with unknown noise distributions. IEEE Trans Automat Contr 2004;49(1):87-90.
[24] Spall JC. The Kantorovich inequality for error analysis of the Kalman filter with unknown noise distributions. Automatica 1995;31(10):1513-7.
[25] Aliev FA, Ozbek L. Evaluation of convergence rate in the central limit theorem for the Kalman filter. IEEE Trans Automat Contr 1999;44(10):1905-9.
[26] Zhao J. Dynamic state estimation with model uncertainties using H-∞ extended Kalman filter. IEEE Trans Power Syst 2018;33(1):1099-100.
[27] Hurst H. Long-term storage of reservoirs: an experimental study. Trans Am Soc Civ Eng 1951;116:770-99.
[28] Alvarez-Ramirez J, Alvarez J, Rodriguez E, Fernandez-Anaya G. Time-varying Hurst exponent for US stock markets. Phys A Stat Mech Appl 2008;387(24):6159-69.
[29] Ivanov PC, Amaral LAN, Goldberger AL, Havlin S, Rosenblum MG, Struzik ZR, Stanley HE. Multifractality in human heartbeat dynamics. Nature 1999;399(6735):461-5.
[30] Kang Y, Ko MH, Woo KJ, Kim SD, Park SH, Yashima M, Fan LT. Mixing of particles in gas-liquid-solid fluidized beds containing a binary mixture of particles. Ind Eng Chem Res 1998;37(10):4167-74.
[31] Srinivasan B, Spinner T, Rengaswamy R. Control loop performance assessment using detrended fluctuation analysis (DFA). Automatica 2012;48(7):1359-63.
[32] Das L, Srinivasan B, Rengaswamy R. Data driven approach for performance assessment of linear and nonlinear Kalman filters. American Control Conference (ACC), 2014. IEEE; 2014. p. 4127-32.
[33] Das L, Kumar G, Rani MD, Srinivasan B. A novel approach to evaluate state estimation approaches for anaerobic digester units under modeling uncertainties: application to an industrial dairy unit. J Environ Chem Eng 2017;5(4):4004-13.
[34] Peng C-K, Havlin S, Stanley HE, Goldberger AL. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos: Interdiscipl J Nonlinear Sci 1995;5(1):82-7.
[35] Kirichenko L, Radivilova T, Deineko Z. Comparative analysis for estimating of the Hurst exponent for stationary and nonstationary time series. Inf Technol Knowl 2011;5(1):371-88.
[36] Mahalanobis PC. On the generalized distance in statistics. Proceedings of the National Institute of Sciences (Calcutta), vol. 2; 1936. p. 49-55.
[37] Bhattacharyya A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull Calcutta Math Soc 1943.
[38] Kullback S, Leibler RA. On information and sufficiency. Ann Math Stat 1951;22(1):79-86.
[39] Taguchi G, Jugulum R. The Mahalanobis-Taguchi strategy: a pattern technology system. John Wiley & Sons; 2002.
[40] Egan WJ, Morgan SL. Outlier detection in multivariate analytical chemical data. Anal Chem 1998;70(11):2372-9.
[41] Jin X, Chow TW. Anomaly detection of cooling fan and fault classification of induction motor using Mahalanobis-Taguchi system. Expert Syst Appl 2013;40(15):5787-95.
[42] Kumar S, Chow TW, Pecht M. Approach to fault identification for electronic products using Mahalanobis distance. IEEE Trans Instrum Meas 2010;59(8):2055-64.
[43] Tong C, Song Y, Yan X. Distributed statistical process monitoring based on four-subspace construction and Bayesian inference. Ind Eng Chem Res 2013;52(29):9897-907.