Adaptive Extended Kalman Filter using Exponential Moving Average⁎


9th IFAC Symposium on Robust Control Design, Florianopolis, Brazil, September 3-5, 2018



IFAC PapersOnLine 51-25 (2018) 208–211

Jean Gonzalez Silva ∗, José Oniram De Aquino Limaverde Filho ∗, Eugênio Liborio Feitosa Fortaleza ∗

∗ Universidade de Brasília, Brasília, Brazil (e-mail: [email protected]).

⁎ The authors would like to thank the Brazilian institutions CAPES and PETROBRAS for financial support.

Abstract: This paper presents a novel adaptive extended Kalman filter for nonlinear discrete-time systems that deals with variations in the process noise covariance. The process noise covariance is estimated at each sample time by calculating the innovation term covariance through an exponential moving average. Numerical simulations were carried out to illustrate the proposed algorithm on a closed-loop nonlinear mass-spring-damper system.

Keywords: Estimators. Adaptive filters. Covariance matrices. Nonlinear systems. Extended Kalman filters.

2405-8963 © 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2018.11.106

1. INTRODUCTION

Since its introduction by Kalman (1960), the Kalman filter (KF) has been intensively used in different domains. It has been extended and modified for application to distinct signal prediction, filtering, and smoothing problems. The initial concept provides the best unbiased state estimate for linear systems with known process and measurement noise covariances.

As presented in the works of Grewal and Andrews (2014), Simon (2006), and Chui and Chen (2017), one of the main modifications is the extended Kalman filter (EKF) for nonlinear system state estimation. However, the EKF has some limitations, such as sensitivity to the initial conditions and to the tuning of the process noise covariance, which can lead to divergence or even instability (Reif et al. (1999)).

The present paper focuses on providing a real-time adaptation of the process noise covariance based on the exponential moving average. By following the changes of the process noise covariance, the system state can be better estimated, resulting in better state feedback control. In practice, this improves the actuators by reducing their wear over time and improves the system dynamic performance. Among works using adaptive Kalman filters, those by Tannuri et al. (2006), Evensen (2009), and Popov et al. (2017) are emphasized.

1.1 Kalman Filter

Dynamic systems can be linearly represented by the stochastic state-space model

x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + ω_{k-1},    (1)
z_k = C_k x_k + v_k,    (2)

where A_{k-1} ∈ R^{n×n}, B_{k-1} ∈ R^{n×p}, and C_k ∈ R^{m×n} are known. ω_k ∈ R^{n×1} and v_k ∈ R^{m×1} are Gaussian white noise vectors (vectors of zero-mean independent random variables), with probability density functions (PDFs) p_{ω_{k-1}}(ω_{k-1}) and p_{v_k}(v_k) and covariance matrices (second-order raw moments) Q_{k-1} and R_k, respectively. The state estimation problem is defined for k ∈ Z*_+, where the initial state estimate PDF p_{x_0}(x_0) is considered a known Gaussian distribution with mean x̂_{0(+)} and covariance P_{0(+)}. Also, measurements z_k and signal inputs u_{k-1} are available for every k. Based on these hypotheses, the solution of the state estimation problem, proposed by Kalman (1960), is to find the x̂_{k(+)} that maximizes the conditional PDF p(x_k | (z_1, ..., z_k)). The obtained equations are:

x̂_{k(-)} = A_{k-1} x̂_{k-1(+)} + B_{k-1} u_{k-1},    (3)
P_{k(-)} = A_{k-1} P_{k-1(+)} A_{k-1}^T + Q_{k-1},    (4)
K_k = P_{k(-)} C_k^T [C_k P_{k(-)} C_k^T + R_k]^{-1},    (5)
x̂_{k(+)} = x̂_{k(-)} + K_k [z_k - C_k x̂_{k(-)}],    (6)
P_{k(+)} = P_{k(-)} - K_k C_k P_{k(-)},    (7)

where x̂_{k(+)} and P_{k(+)} = E[x_k - x̂_{k(+)}][x_k - x̂_{k(+)}]^T are the posterior values of the state and covariance estimates, and K_k ∈ R^{n×m} is the Kalman gain. Equations (3)-(7) can be combined into the following predicting form of the Kalman filter:

K_k = P_k C_k^T [C_k P_k C_k^T + R_k]^{-1},    (8)
P_k = A_{k-1} (P_{k-1} - K_{k-1} C_{k-1} P_{k-1}) A_{k-1}^T + Q_{k-1},    (9)
x̂_k = A_{k-1} x̂_{k-1} + B_{k-1} u_{k-1} + A_{k-1} K_{k-1} [z_{k-1} - C_{k-1} x̂_{k-1}].    (10)

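As an illustration of the update cycle (3)-(7), one KF iteration can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function name and interface are ours, not part of the original formulation.

```python
import numpy as np

def kf_step(x_prev, P_prev, u_prev, z, A, B, C, Q, R):
    """One Kalman filter iteration following (3)-(7)."""
    # Prediction, (3)-(4)
    x_pred = A @ x_prev + B @ u_prev
    P_pred = A @ P_prev @ A.T + Q
    # Kalman gain, (5)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    # Measurement update, (6)-(7)
    x_post = x_pred + K @ (z - C @ x_pred)
    P_post = P_pred - K @ C @ P_pred
    return x_post, P_post
```

With Q = 0 and equal prior and measurement variances, the posterior falls halfway between prediction and measurement, as expected from (5)-(6).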

1.2 Extended Kalman Filter

The EKF applies analytical or numerical linearization of the nonlinear model

x_k = f(x_{k-1}, u_{k-1}, ω_{k-1}, k-1),    (11)
z_k = h(x_k, v_k, k),    (12)

where f : R^n × R^p × R^n × N → R^n and h : R^n × R^m × N → R^m, in order to implement the KF equations. The Jacobian matrices (first terms of the Taylor series expansion) of f and h, evaluated at the most recent state estimate, are used as

Â_{k-1} = ∂f/∂x_{k-1} |_{x̂_{k-1(+)}, u_{k-1}, 0_{n×1}, k-1},
Ĉ_k = ∂h/∂x_k |_{x̂_{k(-)}, 0_{m×1}, k}.    (13)

Then, the EKF equations become

x̂_{k(-)} = f(x̂_{k-1(+)}, u_{k-1}, 0_{n×1}, k-1),    (14)
P_{k(-)} = Â_{k-1} P_{k-1(+)} Â_{k-1}^T + Q_{k-1},    (15)
K_k = P_{k(-)} Ĉ_k^T [Ĉ_k P_{k(-)} Ĉ_k^T + R_k]^{-1},    (16)
x̂_{k(+)} = x̂_{k(-)} + K_k [z_k - h(x̂_{k(-)})],    (17)
P_{k(+)} = P_{k(-)} - K_k Ĉ_k P_{k(-)}.    (18)

1.3 Exponential Moving Average

The exponential moving average has a simple implementation and can be more responsive to new information than the simple moving average for the same amount of noise attenuation (Shumway (2015); Smith (1999) presents exponential moving averages as single-pole recursive filters). For any signal s_k, define T_k = E[(s_k - E s_k)(s_k - E s_k)^T] and S_k = (s_k - E s_k)(s_k - E s_k)^T. The exponential moving average can then be expressed by its definition as

T_k = (1 - α) T_{k-1} + α S_k,    (19)

where α is a weighting coefficient whose value determines the smoothing intensity of S_k. Since the covariance matrix of s_k is defined as P_{s_k} = E[(s_k - E s_k)(s_k - E s_k)^T], an approximation of P_{s_k} can be obtained recursively using (19).

2. KALMAN FILTER ADAPTATION

Knowledge of the covariance matrices Q_{k-1} and R_k directly impacts the Kalman filter performance. A strategy for determining R_k is to collect a series of measurements while keeping the system in steady state; after removing the mean, an order of magnitude of the measurement noise covariance can be obtained. The determination of Q_{k-1}, on the other hand, is a far more arduous task. Assuming R_k is known, we propose an adaptation based on the innovation term covariance to estimate Q_{k-1} at each sample time.

Adaptation methods involving Bayesian, maximum likelihood (ML), correlation, and covariance-matching approaches based on the simple moving average were presented by Mehra (1972) and improved by various authors. A more intuitive covariance-matching approach was considered by Myers and Tapley (1976). Later, Maybeck (1982) developed an ML estimator under the assumption that the process noise is unbiased, with further simplifications to attain real-time applicability. Also based on the ML approach, Dee (1995) presented a simpler adaptive scheme for operational weather prediction. In the cited approaches, the adaptive filter must store the last N sets of measurements and batch-process the data at each time step k ≥ N. Moreover, the process noise is considered fully stationary over any given interval of N sample periods, and tuning the window size N to achieve convergence in closed loop requires effort. The proposed adaptation requires only the current data to determine the estimate of Q_{k-1}.
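The exponential moving average (19) is the core recursion of the proposed adaptation. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def ema_update(T_prev, S_k, alpha):
    """Exponential moving average, (19): T_k = (1 - alpha) T_{k-1} + alpha S_k.
    Works for scalars or matrices (e.g. outer products of innovations)."""
    return (1.0 - alpha) * np.asarray(T_prev) + alpha * np.asarray(S_k)
```

A small α smooths heavily and reacts slowly to changes in S_k; α close to 1 tracks S_k almost directly.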

2.1 New Process Noise Covariance Estimation Method

The innovation term ν_k is an independent zero-mean Gaussian sequence:

ν_k = z_k - C_k x̂_k = v_k + C_k e_k,    (20)

where e_k = x_k - x̂_k. The innovation term is used because it is less correlated than the output z_k, according to Mehra (1972). Subtracting (1) and (10) and rearranging the terms using (20), we obtain

e_k = (A_{k-1} - A_{k-1} K_{k-1} C_k) e_{k-1} + ω_{k-1} - A_{k-1} K_{k-1} v_{k-1}.    (21)

Since v_k and e_k are independent of each other, and so are e_{k-1}, ω_{k-1}, and v_{k-1}, applying the covariance operator to (20) and (21) yields

cov(ν_k) = R_k + C_k cov(e_k) C_k^T,    (22)
cov(e_k) = (A_{k-1} - A_{k-1} K_{k-1} C_k) cov(e_{k-1}) (A_{k-1} - A_{k-1} K_{k-1} C_k)^T + Q_{k-1} + A_{k-1} K_{k-1} R_{k-1} (A_{k-1} K_{k-1})^T.    (23)

Using the exponential moving average, cov(ν_k) can be estimated:

cov(ν_k) = (1 - α) cov(ν_{k-1}) + α ν_{k-1} ν_{k-1}^T.    (24)

Then, the error covariance term is determined as

cov(e_k) = C_k^† (cov(ν_k) - R_k) (C_k^T)^†,    (25)

where (†) denotes the Moore-Penrose inverse.

For steady-state performance, we assume cov(e_k) changes very slowly relative to the sample time; mathematically, cov(e_k) ≅ cov(e_{k-1}), which yields

Q_{k-1} = cov(e_{k-1}) - (A_{k-1} - A_{k-1} K_{k-1} C_k) cov(e_{k-1}) (A_{k-1} - A_{k-1} K_{k-1} C_k)^T - A_{k-1} K_{k-1} R_{k-1} (A_{k-1} K_{k-1})^T.    (26)

If m ≥ n, i.e., the dimension of the observation vector is greater than or equal to that of the state vector, a unique solution is obtained for Q_{k-1}. Otherwise, additional restrictions (besides positive semidefiniteness and symmetry) must be imposed on Q_{k-1} to obtain a unique solution from the resulting system of equations.
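Putting the derivation together, one adaptation step combines the EMA estimate of the innovation covariance (24), the pseudo-inverse recovery of the error covariance (25), and the steady-state solve for the process noise covariance (26). The sketch below is illustrative, not the authors' code; the names and interface are ours.

```python
import numpy as np

def adapt_Q(nu_prev, cov_nu_prev, A, C, K, R, alpha):
    """One step of the process noise covariance adaptation, per (24)-(26)."""
    # (24): exponential moving average of the innovation covariance
    cov_nu = (1.0 - alpha) * cov_nu_prev + alpha * np.outer(nu_prev, nu_prev)
    # (25): error covariance via the Moore-Penrose pseudo-inverse of C
    cov_e = np.linalg.pinv(C) @ (cov_nu - R) @ np.linalg.pinv(C.T)
    # (26): steady-state assumption cov(e_k) ~= cov(e_{k-1});
    # the measurement-noise term enters with a minus sign, following (21)
    F = A - A @ K @ C
    AK = A @ K
    Q = cov_e - F @ cov_e @ F.T - AK @ R @ AK.T
    return Q, cov_nu
```

When m < n, as in the simulation of Section 3, the result must additionally be projected onto the known structure of Q_{k-1}.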


3. NUMERICAL SIMULATIONS

In order to simulate the proposed adaptive extended Kalman filter, the following scenario was assumed: a nonlinear mass-spring-damper system initialized with nonzero initial conditions, with a controller aiming to bring the mass position to the equilibrium point.

3.1 Mass-spring-damper System

A nonlinear mass-spring-damper model is given by the following equations:

ẋ_1 = x_2
m ẋ_2 = -k_1 x_1 - k_2 x_1^3 - c x_2 + u    (27)
z = x_1

where x_1 and x_2 are the position and velocity, respectively, of a body with mass m. The mass is held by a damper with damping coefficient c and a nonlinear spring with parameters k_1 and k_2 (Khalil (2002)). The external force input and the position measurement are u and z, respectively. Discretizing (27), we obtain:

x_1(k) = x_1(k-1) + T x_2(k-1)
x_2(k) = x_2(k-1) + T [-(k_1/m) x_1(k-1) - (k_2/m) x_1^3(k-1) - (c/m) x_2(k-1) + (1/m) u(k-1)]
z(k) = x_1(k),

where T is the sampling period and k is the current iteration. Note that zero-mean independent random variables have to be added to the model in order to obtain a stochastic model as in (11) and (12).

Table 1. Simulation Parameters

Mass (m): 2 [kg]
Spring constant 1 (k_1): 1 [kg/s^2]
Spring constant 2 (k_2): 0.5 [kg/s^2]
Damping coefficient (c): 0.6 [kg/s]
Sampling period (T): 0.01 [s]
Initial condition of position: 2 [m]
Initial condition of velocity: 0 [m/s]
Measurement noise covariance (r): 10^-4
Process noise covariance (q): 10^-4
Initial condition of AEKF and EKF position: 0 [m]
Initial condition of AEKF and EKF velocity: 2 [m/s]
AEKF and EKF initial measurement noise covariance (r_f): 10^-4
AEKF and EKF initial process noise covariance (q_f): 10^-2
AEKF and EKF initial state covariance (P_0): 100·I
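With the Table 1 parameters, the discretized model above can be stepped forward directly. The sketch below is illustrative (noise terms omitted; the function name is ours):

```python
# Parameters from Table 1
m, k1, k2, c, T = 2.0, 1.0, 0.5, 0.6, 0.01

def msd_step(x1, x2, u):
    """One step of the discretized mass-spring-damper model (Section 3.1).
    The measurement is z = x1."""
    x1_next = x1 + T * x2
    x2_next = x2 + T * (-(k1 / m) * x1 - (k2 / m) * x1**3
                        - (c / m) * x2 + u / m)
    return x1_next, x2_next
```

Zero-mean process and measurement noise samples would be added at each step to obtain the stochastic model of (11)-(12).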


3.2 Control and Estimation

The control was designed based on full state feedback. However, not all states are available; therefore, an estimator is required, and the control law is

u_k = -K_{c,k} x̂_k,    (28)

where K_{c,k} is the control gain matrix obtained by pole placement with Ackermann's formula (detailed by Ogata (2010)). Because the system is nonlinear, the Jacobian matrices are used and the control gain matrix is calculated at each sample time. Knowing that the model is completely state controllable, we assumed the desired closed-loop poles to be at s = 0.1 and s = 1.

In the AEKF implementation, the rank of C_k is less than n, so its pseudo-inverse has indeterminate terms. However, since the process noise enters through the input of the model (27), Q_{k-1} has a known structure that makes its estimation possible:

Q_{k-1} = [ 0  0
            0  q ]

The system parameters and constants used in the simulation are given in Table 1.

3.3 Numerical Simulation Results

The state estimation using the AEKF presented fewer oscillations than the EKF. As a consequence, the control signal using the AEKF also presented fewer oscillations.
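The gain K_{c,k} in (28) can be recomputed at each sample time with Ackermann's formula. A single-input sketch, assuming (A, B) controllable (illustrative, not the authors' implementation):

```python
import numpy as np

def ackermann_gain(A, B, poles):
    """Pole-placement gain via Ackermann's formula (single-input systems)."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^{n-1}B]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial, evaluated at A: phi(A)
    coeffs = np.poly(poles)  # [1, a_1, ..., a_n], highest power first
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    # K = [0 ... 0 1] ctrb^{-1} phi(A)
    row = np.zeros((1, n))
    row[0, -1] = 1.0
    return row @ np.linalg.inv(ctrb) @ phi
```

For the nonlinear plant, A and B would be the Jacobians at the current estimate, so the gain varies with the operating point.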

Fig. 1. Velocity signals (EKF and AEKF estimated velocities)


Fig. 2. Control signals

In the EKF, the process noise covariance is constant and equal to its initial value, which is two orders of magnitude higher than the simulated process noise covariance. The AEKF, on the other hand, also started at the same initial process noise covariance, but it tends to reach the simulated value. Analysing a steady-state period (30 s to 50 s), the ratios between the standard deviations of the signals were calculated with the following formula, which gives a quantitative measure of the reduction in oscillations. With the AEKF we obtained 56.97% fewer oscillations in the state estimation and 57.02% less use of the actuator.




Fig. 3. Process noise covariance terms (q and its estimate q_e)

SD_ratio = sqrt(var(AEKF signals)) / sqrt(var(EKF signals))
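The ratio above is a direct variance computation; a one-line NumPy sketch (name illustrative):

```python
import numpy as np

def sd_ratio(aekf_signal, ekf_signal):
    """SD_ratio = sqrt(var(AEKF signal)) / sqrt(var(EKF signal))."""
    return np.sqrt(np.var(aekf_signal)) / np.sqrt(np.var(ekf_signal))
```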

Additionally, the system was simulated with the process noise covariance set to zero during part of the simulation, i.e., the process model is considered perfect in that period. By doing this, the AEKF process noise covariance should also reach zero.


Fig. 4. Process noise covariance terms with dropping of q


Fig. 5. Velocity signals with dropping of q

CONCLUSION

The proposed AEKF algorithm exhibits satisfactory convergence of the process noise covariance, and the state estimation shows a significant reduction of oscillations compared to the EKF. For control purposes, the numerical simulations show that the adaptation increases actuator efficiency by drastically reducing actuator use. In future work, we expect to compare this adaptation technique with others in the literature and to extend its use to further applications, comparing it with the unscented Kalman filter and the particle filter.

REFERENCES

Chui, C. and Chen, G. (2017). Kalman Filtering: with Real-Time Applications. Springer International Publishing.
Dee, D.P. (1995). On-line estimation of error covariance parameters for atmospheric data assimilation. Monthly Weather Review, 123(4), 1128–1145.
Evensen, G. (2009). Data Assimilation. Springer.
Grewal, M. and Andrews, A. (2014). Kalman Filtering: Theory and Practice with MATLAB. Wiley.
Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. ASME Journal of Basic Engineering.
Khalil, H.K. (2002). Nonlinear Systems. Prentice-Hall, 3rd edition.
Maybeck, P.S. (1982). Stochastic Models, Estimation, and Control, Volume 2. Mathematics in Science and Engineering, 141-2, 1–289.
Mehra, R.K. (1972). Approaches to adaptive filtering. IEEE Transactions on Automatic Control, 17(5), 693–698.
Myers, K.A. and Tapley, B.D. (1976). Adaptive sequential estimation with unknown noise statistics. IEEE Transactions on Automatic Control, 21(4), 520–523.
Ogata, K. (2010). Modern Control Engineering. Prentice Hall.
Popov, I., Koschorrek, P., Haghani, A., and Jeinsch, T. (2017). Adaptive Kalman filtering for dynamic positioning of marine vessels. IFAC-PapersOnLine, 50(1), 1121–1126.
Reif, K., Günther, S., Yaz, E., and Unbehauen, R. (1999). Stochastic stability of the discrete-time extended Kalman filter. IEEE Transactions on Automatic Control, 44(4), 714–728.
Shumway, R.H. (2015). Time Series Analysis and Its Applications: With R Examples, volume 1.
Simon, D.J. (2006). Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches. John Wiley and Sons.
Smith, S. (1999). The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Pub.
Tannuri, E., Kubota, L., and Pesce, C. (2006). Adaptive techniques applied to offshore dynamic positioning systems. Journal of the Brazilian Society of Mechanical Sciences and Engineering, XXVIII(3), 323–330.