Recursive online IV method for identification of continuous-time slowly time-varying models in closed loop






Proceedings of the 20th IFAC World Congress, Toulouse, France, July 9-14, 2017
IFAC PapersOnLine 50-1 (2017) 4008–4013, doi: 10.1016/j.ifacol.2017.08.715

A. Padilla ∗,∗∗, H. Garnier ∗,∗∗, P. C. Young ∗∗∗, J. Yuz ∗∗∗∗

∗ University of Lorraine, Centre de Recherche en Automatique de Nancy, France
∗∗ CNRS, Centre de Recherche en Automatique de Nancy, UMR 7039, France
∗∗∗ Lancaster University, Lancaster Environment Centre, Systems and Control Group, UK
∗∗∗∗ Universidad Técnica Federico Santa María, Department of Electronic Engineering, Chile

Abstract: Model estimation of industrial processes is often done in closed loop due, for instance, to production constraints or safety reasons. On the other hand, many processes are time-varying because of aging effects or changes in the environmental conditions. In this study, a recursive estimation algorithm for linear, continuous-time, slowly time-varying systems operating in closed loop is developed. The proposed method consists in coupling linear filter approaches to handle the time-derivative, with closed-loop instrumental variable (IV) techniques to deal with measurement noise. Simulations show the advantages of using this IV-based method.

© 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Keywords: recursive estimation, instrumental variable method, continuous-time models, closed loop identification

1. INTRODUCTION

Recursive on-line identification, also known as real-time identification, is a well studied subject. Standard literature on the topic includes (Ljung and Söderström, 1983; Goodwin and Sin, 1984; Ljung and Gunnarsson, 1990; Ljung, 1999). The estimation is usually performed considering discrete-time (DT) models instead of continuous-time (CT) models, because the latter case is more involved due to the need of computing the time-derivatives. For linear time-invariant (LTI) models, linear filter methods can be used. Such an approach consists in applying a low-pass filter to the model to obtain prefiltered time-derivatives. Nevertheless, in the linear time-varying (LTV) case, multiplication with the differentiation operator is not commutative with the time-dependent variables (see e.g. (Laurain et al., 2011)). Thus, it would appear at first that we are no longer able to use linear filter methods as in the LTI case. One solution would be to model the time-varying parameters in a deterministic way such that the new parameters are constant or nearly constant, as in (Jiang and Schaufelberger, 1991). For on-line estimation, the difficulty of such an approach lies in finding suitable deterministic models. Moreover, extra parameters are introduced and, as a consequence, a larger variance of the estimates is obtained. In this study, assuming that the parameters are slowly varying, we are able to apply a linear filter method to circumvent the time-derivative issue.

A standard way to deal with the on-line estimation of slowly time-varying parameters is to use the forgetting factor approach, which consists in discarding old data by weighting the prediction error accordingly (Ljung and Söderström, 1983). However, if the parameters vary at different rates, other approaches are more suitable. One of them consists in modeling the parameter variations by a stochastic model; the simplest one is the generalized random walk class of models (see chapters 4 and 5 in Young, 2011), of which the simplest is the random walk itself. Due to its flexibility, a simple random walk-based method is proposed in this study.

In real life, experiments for model identification are often done in closed loop because there are economic or safety constraints, or the system is unstable. The issue with closed-loop identification is that the feedback mechanism introduces correlation between the disturbances and the input signal, yielding biased estimates if no special strategy is used. For LTI systems, instrumental variable (IV) techniques have been proposed to solve this problem, both for DT models (Gilson and Van den Hof, 2005; Gilson et al., 2011) and CT models (Gilson et al., 2008); see also Chapter 9 in (Young, 2011). These ideas have also been extended to the identification of DT linear parameter-varying models in (Toth et al., 2012). On the other hand, recursive identification of nonlinear CT systems in closed loop has been considered in (Landau et al., 2001). In (Padilla et al., 2016), CT LTV systems operating in open loop are identified in a recursive fashion using an IV-based method. That study is extended here for the recursive identification of linear CT slowly time-varying systems operating in closed loop.




Fig. 1. Closed-loop system.

The paper is organized as follows: In Section 2, the identification problem is formulated. Then, in Section 3, the closed-loop IV identification method for LTI models is reviewed, recalling first the optimal (unbiased and minimum variance) result for off-line estimation and then considering the recursive algorithm. The proposed closed-loop IV-based identification method for linear CT slowly time-varying systems is given in Section 4 and afterwards a numerical example is presented in Section 5. Finally, the conclusions are drawn in Section 6.

2. PROBLEM FORMULATION

Let us consider a CT LTV system $\mathcal{S}$ with plant $G_o$, input $u$ and output $y$
$$\mathcal{S}: \begin{cases} A_o(p,t)\,x(t) = B_o(p,t)\,u(t) \\ y(t_k) = x(t_k) + e_o(t_k) \end{cases} \qquad (1)$$

where $p$ is the differentiation operator; $x(t)$ is the noise-free output; and the additive term $e_o(t_k)$ is a zero-mean DT white noise sequence. The argument $t_k$ in the second equation denotes the sampled value of the associated variable at the $k$th sampling instant. In the closed-loop configuration in Figure 1, the CT controller $C$ can be any nonlinear and/or time-varying controller. Knowing $C$, we can compute $u(t)$ as follows
$$u(t_k) = r_1(t_k) + C\left(r_2(t_k) - y(t_k)\right) \qquad (2)$$
where $C$ is the operator form of the controller and $r_1(t_k)$, $r_2(t_k)$ are external signals. Assuming that the system (1) belongs to the model set $\mathcal{M}$, i.e. $\mathcal{S} \in \mathcal{M}$, we have
$$\mathcal{M}: \begin{cases} A(p,t,\theta)\,x(t) = B(p,t,\theta)\,u(t) \\ y(t_k) = x(t_k) + e(t_k) \end{cases} \qquad (3)$$

$A(p,t,\theta)$ and $B(p,t,\theta)$ are the following polynomials with time-varying parameters:
$$B(p,t,\theta) = b_0(t)p^{n_b} + b_1(t)p^{n_b-1} + \ldots + b_{n_b}(t) \qquad (4)$$
$$A(p,t,\theta) = p^{n_a} + a_1(t)p^{n_a-1} + \ldots + a_{n_a}(t) \qquad (5)$$
where $n_a \ge n_b$ and $e(t_k)$ is a zero-mean DT white noise. The time-varying parameters can be gathered in the parameter vector $\theta(t)$, i.e.
$$\theta(t) = \left[a_1(t)\ \ldots\ a_{n_a}(t)\ \ b_0(t)\ \ldots\ b_{n_b}(t)\right]^T \qquad (6)$$
which contains $d = n_a + n_b + 1$ elements.

Suppose additionally that the following assumptions are satisfied:

A1. The polynomial degrees $n_a$ and $n_b$ are a priori known.
A2. The true time-varying parameter vector is bounded. Moreover it is slowly varying, i.e. $\|\dot{\theta}_o(t)\| \le \epsilon_\theta$, where $\epsilon_\theta$ is a small number.
A3. The controller $C$ is known.
A4. The controller $C$ ensures BIBO stability of the closed-loop system defined by (1) and (2).
A5. The reference signal $r(t_k) = r_1(t_k) + C r_2(t_k)$ is persistently exciting.

Then, the identification problem is to recursively estimate the time-varying parameters that characterize the model structure given by (3), based on the sequences $\{r(t_k), u(t_k), y(t_k)\}_{k=1}^{N'}$, where $N'$ is the number of samples, which increases by one with every recursion.

3. IV ESTIMATION OF LTI SYSTEMS IN CLOSED LOOP

In this section we assume that both the plant and the controller are LTI systems, i.e.
$$G_o : \ G_o(p) = \frac{B_o(p)}{A_o(p)} \qquad (7)$$
$$C : \ C_c(p) = \frac{Q_c(p)}{P_c(p)} \qquad (8)$$
with the pairs $(A_o, B_o)$ and $(P_c, Q_c)$ assumed to be coprime. First we address the off-line estimation problem and afterwards its recursive counterpart.

3.1 Optimal off-line estimation

Informally, when the parameters in (3) are constant, a snapshot of the model at the $k$th sampling instant can be written as
$$y(t_k) = G(p, \theta)\,u(t_k) + e(t_k) \qquad (9)$$
This can also be written as the following linear regression
$$y^{(n_a)}(t_k) = \varphi^T(t_k)\theta + v(t_k) \qquad (10)$$
where
$$\varphi^T(t_k) = \left[-y^{(n_a-1)}(t_k)\ \cdots\ -y(t_k)\ \ u^{(n_b)}(t_k)\ \cdots\ u(t_k)\right] \qquad (11)$$
and
$$v(t_k) = A(p)e(t_k) \qquad (12)$$
If the time-derivatives were available, the parameters $\theta$ could be estimated by minimizing the sum of squares of
$$\epsilon(t_k) = y^{(n_a)}(t_k) - \varphi^T(t_k)\theta \qquad (13)$$
i.e.,
$$\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^d} \frac{1}{N'}\sum_{k=1}^{N'} \|\epsilon(t_k)\|_2^2 \qquad (14)$$
However, due to the correlation between $\varphi(t_k)$ and $v(t_k)$ introduced by the feedback mechanism, the parameter estimates will be biased. A solution is to use the closed-loop IV method given by (see (Gilson et al., 2008))
$$\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^d} \left\| \left[\frac{1}{N'}\sum_{k=1}^{N'} F(p)\zeta(t_k)F(p)\varphi^T(t_k)\right]\theta - \frac{1}{N'}\sum_{k=1}^{N'} F(p)\zeta(t_k)F(p)y^{(n_a)}(t_k) \right\|_W^2 \qquad (15)$$
where $F(p)$ is a stable prefilter and $\|x\|_W^2 = x^T W x$, with $W$ a positive definite weighting matrix. $\zeta(t_k)$ is the instrument vector that can be computed in different ways.
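For illustration, the difference between the least squares criterion (14) and the closed-loop IV estimate (15) can be sketched numerically. The following minimal sketch assumes $W = I$ and that the prefiltered regressors, outputs and instruments are already available as arrays; the array and function names are illustrative, not part of the method description.

```python
import numpy as np

def batch_estimates(Phi_f, Y_f, Z_f):
    """Batch least squares (14) and closed-loop IV (15) estimates with W = I.

    Phi_f : (N, d) prefiltered regressors, rows F(p)phi^T(t_k)
    Y_f   : (N,)   prefiltered outputs, F(p)y^(na)(t_k)
    Z_f   : (N, d) prefiltered instruments, rows F(p)zeta^T(t_k)
    """
    # Least squares: biased in closed loop, since phi_f and v_f are correlated
    theta_ls = np.linalg.lstsq(Phi_f, Y_f, rcond=None)[0]
    # IV: solve the d x d system (Z_f^T Phi_f) theta = Z_f^T Y_f
    theta_iv = np.linalg.solve(Z_f.T @ Phi_f, Z_f.T @ Y_f)
    return theta_ls, theta_iv
```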


Fig. 2. Auxiliary model.

If $\mathcal{S} \in \mathcal{M}$, the estimates (15) are consistent under the following conditions¹:

C1. $\bar{E}\{F(p)\zeta(t_k)F(p)\zeta^T(t_k)\}$ is full column rank.
C2. $\bar{E}\{F(p)\zeta(t_k)F(p)v_o(t_k)\} = 0$.

Optimal estimates, i.e. unbiased and minimum variance estimates, can be obtained if the following additional conditions are satisfied (Gilson et al., 2008):

C3. $W = I$
C4. $F(p) = \dfrac{1}{A_o(p)}$
C5. The instrument vector is computed using the auxiliary model from Figure 2 as follows
$$\zeta(t_k) = \left[ -\mathring{x}^{(n_a-1)}(t_k)\ \ldots\ -\mathring{x}(t_k)\ \ \mathring{u}^{(n_b)}(t_k)\ \ldots\ \mathring{u}(t_k) \right]^T \qquad (16)$$
where
$$A_o(p)\,\mathring{x}(t_k) = B_o(p)\,\mathring{u}(t_k) \qquad (17a)$$
$$\mathring{u}(t_k) = r_1(t_k) + C\left(r_2(t_k) - \mathring{x}(t_k)\right) \qquad (17b)$$

¹ The notation $\bar{E}[\cdot] = \lim_{N' \to \infty} \frac{1}{N'} \sum_{k=1}^{N'} E[\cdot]$ is adopted from the prediction error framework of (Ljung, 1999).
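As a reading aid, the instrument generation of C5 can be viewed as a closed-loop simulation of the noise-free auxiliary model (17). The sketch below is a simplified discrete-time version, assuming a strictly proper plant discretized with a zero-order hold and a discrete-time controller passed as a callable; the names and the discretization choice are illustrative assumptions, since the paper works with CT operators.

```python
import numpy as np
from scipy.signal import tf2ss, cont2discrete

def auxiliary_signals(num, den, controller, r1, r2, Ts):
    """Simulate the noise-free loop (17): Ao(p)x0 = Bo(p)u0, u0 = r1 + C(r2 - x0)."""
    A, B, C, D = tf2ss(num, den)              # plant Bo(p)/Ao(p), assumed strictly proper (D = 0)
    Ad, Bd, Cd, _, _ = cont2discrete((A, B, C, D), Ts, method='zoh')
    x_state = np.zeros((A.shape[0], 1))
    x0 = np.zeros(len(r1))
    u0 = np.zeros(len(r1))
    for k in range(len(r1)):
        x0[k] = (Cd @ x_state).item()                  # noise-free output x0(t_k)
        u0[k] = r1[k] + controller(r2[k] - x0[k])      # plant input from the known controller
        x_state = Ad @ x_state + Bd * u0[k]            # advance the plant state
    return x0, u0
```

The prefiltered derivatives of these signals then form the instrument vector (16).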

3.2 CLSRIVC algorithm

Conditions C1-C5 have to be fulfilled to obtain the optimal IV solution. The first three conditions can be readily satisfied (Söderström and Stoica, 1983). However, C4-C5 require knowledge of the true unknown system, which is the usual dilemma with optimal estimation. To circumvent this problem, different estimation algorithms have been proposed, which vary depending on the choice of the instruments $\zeta(t_k)$, the filter $F(p)$ and the model structure of the true system (Gilson et al., 2008). In the following we recall the closed-loop simplified refined instrumental variable method for continuous-time model estimation (CLSRIVC). This is an optimal method for COE models in closed loop and it was proposed in (Gilson et al., 2008). CLSRIVC is an iterative method, which is based on a linear filter approach and the IV technique. The former allows us to compute prefiltered time-derivatives using a filter, and the latter considers instruments that are used to reduce bias. Both the filter and the instruments are updated in each iteration based on the parameter estimates obtained at the previous iteration. The filter is defined as follows
$$F(p, \hat{\theta}^{\,i-1}) = \frac{1}{\hat{A}(p, \hat{\theta}^{\,i-1})} \qquad (18)$$
Then, the signals $y_f^{(i)}(t_k)$ and $u_f^{(i)}(t_k)$ correspond to the prefiltered time-derivatives of the output and input in the bandwidth of interest, i.e.
$$y_f^{(i)}(t_k) = p^i F(p, \hat{\theta}^{\,i-1})\,y(t_k), \quad i = 0, \ldots, n_a \qquad (19a)$$
$$u_f^{(i)}(t_k) = p^i F(p, \hat{\theta}^{\,i-1})\,u(t_k), \quad i = 0, \ldots, n_b \qquad (19b)$$
Applying the filter (18) to (9) yields (except for transients)
$$y_f^{(n_a)}(t_k) + a_1 y_f^{(n_a-1)}(t_k) + \ldots + a_{n_a} y_f(t_k) = b_0 u_f^{(n_b)}(t_k) + \ldots + b_{n_b} u_f(t_k) + v_f(t_k) \qquad (20)$$
with
$$v_f(t_k) = F(p)v(t_k) = F(p)A(p)e(t_k) \qquad (21)$$
Equation (20) can be rewritten as a linear regression,
$$y_f^{(n_a)}(t_k) = \varphi_f^T(t_k)\theta + v_f(t_k) \qquad (22)$$
where
$$y_f^{(n_a)}(t_k) = p^{n_a} F(p, \hat{\theta}^{\,i-1})\,y(t_k) \qquad (23a)$$
$$\varphi_f^T(t_k) = F(p, \hat{\theta}^{\,i-1})\,\varphi^T(t_k) \qquad (23b)$$
$\varphi(t_k)$ is defined in (11) and its filtered version is given by
$$\varphi_f^T(t_k) = \left[ -y_f^{(n_a-1)}(t_k)\ \cdots\ -y_f(t_k)\ \ u_f^{(n_b)}(t_k)\ \cdots\ u_f(t_k) \right] \qquad (24)$$
The filtered instrument is computed as follows
$$\zeta_f(t_k, \hat{\theta}^{\,i-1}) = F(p, \hat{\theta}^{\,i-1})\,\zeta(t_k, \hat{\theta}^{\,i-1}) = \left[ -\hat{\mathring{x}}_f^{(n_a-1)}(t_k)\ \ldots\ -\hat{\mathring{x}}_f(t_k)\ \ \hat{\mathring{u}}_f^{(n_b)}(t_k)\ \ldots\ \hat{\mathring{u}}_f(t_k) \right]^T \qquad (25)$$
with
$$\hat{A}(p, \hat{\theta}^{\,i-1})\,\hat{\mathring{x}}(t_k) = \hat{B}(p, \hat{\theta}^{\,i-1})\,\hat{\mathring{u}}(t_k) \qquad (26a)$$
$$\hat{\mathring{u}}(t_k) = r_1(t_k) + C\left(r_2(t_k) - \hat{\mathring{x}}(t_k)\right) \qquad (26b)$$
Then, the CLSRIVC estimates are
$$\hat{\theta}^{\,i} = \left[ \sum_{k=1}^{N'} \zeta_f(t_k, \hat{\theta}^{\,i-1})\,\varphi_f^T(t_k, \hat{\theta}^{\,i-1}) \right]^{-1} \left[ \sum_{k=1}^{N'} \zeta_f(t_k, \hat{\theta}^{\,i-1})\,y_f^{(n_a)}(t_k, \hat{\theta}^{\,i-1}) \right]$$

3.3 Recursive estimation algorithm

The algorithm presented previously can be adapted for recursive estimation. The recursive version of CLSRIVC is given by the following algorithm, which is based on the methodology used in the DT case (see e.g. (Young, 2011)):
$$\hat{\theta}(t_k) = \hat{\theta}(t_{k-1}) + L(t_k)\varepsilon(t_k) \qquad (27a)$$
$$\varepsilon(t_k) = y_f^{(n_a)}(t_k) - \varphi_f^T(t_k)\,\hat{\theta}(t_{k-1}) \qquad (27b)$$
$$L(t_k) = \frac{P(t_{k-1})\,\zeta_f(t_k)}{1 + \varphi_f^T(t_k)\,P(t_{k-1})\,\zeta_f(t_k)} \qquad (27c)$$
$$P(t_k) = P(t_{k-1}) - L(t_k)\,\varphi_f^T(t_k)\,P(t_{k-1}) \qquad (27d)$$
with
$$y_f^{(n_a)}(t_k) = p^{n_a} F(p, \hat{\theta}(t_{k-1}))\,y(t_k) \qquad (28a)$$
$$\varphi_f(t_k) = F(p, \hat{\theta}(t_{k-1}))\,\varphi(t_k) \qquad (28b)$$
$\varphi(t_k)$ is defined in (11), and the adaptive prefilter is
$$F(p, \hat{\theta}(t_{k-1})) = \frac{1}{\hat{A}(p, \hat{\theta}(t_{k-1}))} \qquad (29)$$
The instrument is given by
$$\zeta_f(t_k) = F(p, \hat{\theta}(t_{k-1}))\,\zeta(t_k, \hat{\theta}(t_{k-1})) = \left[ -\hat{\mathring{x}}_f^{(n_a-1)}(t_k)\ \cdots\ -\hat{\mathring{x}}_f(t_k)\ \ \hat{\mathring{u}}_f^{(n_b)}(t_k)\ \cdots\ \hat{\mathring{u}}_f(t_k) \right]^T \qquad (30)$$
This algorithm will be called the CLRSRIVC method, where the additional 'R' stands for recursive. Note that unlike CLSRIVC for the off-line estimation case, where the filter and instrument are defined using estimates from a previous iteration, here they are computed from the estimates obtained at the previous recursion time $t_{k-1}$.
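To make the recursion concrete, one step of the CLRSRIVC update (27) can be written compactly as below. This is a minimal sketch in which the prefiltered derivative, regressor and instrument at $t_k$ are assumed to have been computed already with the adaptive prefilter (29) and the instrument (30); the names are illustrative.

```python
import numpy as np

def clrsrivc_update(theta, P, y_f_na, phi_f, zeta_f):
    """One recursion of (27a)-(27d).

    theta  : (d,) parameter estimate at t_{k-1}
    P      : (d, d) matrix P(t_{k-1})
    y_f_na : scalar, prefiltered na-th output derivative at t_k
    phi_f  : (d,) prefiltered regressor at t_k
    zeta_f : (d,) prefiltered instrument at t_k
    """
    eps = y_f_na - phi_f @ theta                     # prediction error (27b)
    L = P @ zeta_f / (1.0 + phi_f @ P @ zeta_f)      # gain vector (27c)
    theta = theta + L * eps                          # parameter update (27a)
    P = P - np.outer(L, phi_f @ P)                   # update of P (27d)
    return theta, P
```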


4. IV ESTIMATION OF LTV SYSTEMS IN CLOSED LOOP

We assume now that the CT parameters of the plant are slowly time-varying.

4.1 Handling of the time-derivative issue

In theoretical terms, multiplication with the differentiation operator $p$ is not commutative with the time-dependent variables (see e.g. (Laurain et al., 2011)). Thus, it would appear at first that we are no longer able to filter (3) and obtain a linear regression, as in the LTI case (see (22)). A solution is to assume that the parameters are slowly varying (see Assumption (A2) in Section 2) in the sense specified by Lemma 1 in (Dimogianopoulos and Lozano, 2001), which states that for a certain time interval, under slow parameter variations, an LTV model can be locally approximated by an LTI model. In this study, under Assumption (A2) in the sense of Lemma 1 in (Dimogianopoulos and Lozano, 2001), we neglect the non-commutativity aspect and apply a filter to (3), which yields
$$y_f^{(n_a)}(t_k) = \varphi_f^T(t_k)\,\theta(t_k) + v_f(t_k) \qquad (31)$$
where $y_f^{(n_a)}(t_k)$ and $\varphi_f^T(t_k)$ are defined in (28).

4.2 Random walk-based CLRSRIVC algorithm

A solution to the LTV estimation problem when the parameters vary at different rates is to represent them through a simple random walk model, so the full model becomes:
$$\mathcal{M}: \begin{cases} \theta(t_k) = \theta(t_{k-1}) + w(t_k) \\ A(p,t,\theta)\,x(t) = B(p,t,\theta)\,u(t) \\ y(t_k) = x(t_k) + e(t_k) \end{cases} \qquad (32)$$
Here, $w(t_k)$ and $e(t_k)$ are independent zero-mean DT Gaussian noise processes with covariance matrix $Q_w$ and variance $\sigma_e^2$, respectively.

Although the comments about bias in Section 3 apply to LTI systems, we should expect them to be valid when an LTV system approaches an LTI one, i.e. when the parameters vary slowly. A suitable solution is to consider the CLRSRIVC algorithm that can be adapted for LTV systems. In this time-varying case, the estimates are obtained by means of the following algorithm.

Prediction step:
$$\hat{\theta}(t_k|t_{k-1}) = \hat{\theta}(t_{k-1}) \qquad (33a)$$
$$P(t_k|t_{k-1}) = P(t_{k-1}) + Q_n \qquad (33b)$$
Correction step:
$$\hat{\theta}(t_k) = \hat{\theta}(t_k|t_{k-1}) + L(t_k)\varepsilon(t_k) \qquad (33c)$$
$$\varepsilon(t_k) = y_f^{(n_a)}(t_k) - \varphi_f^T(t_k)\,\hat{\theta}(t_k|t_{k-1}) \qquad (33d)$$
$$L(t_k) = \frac{P(t_k|t_{k-1})\,\zeta_f(t_k)}{1 + \varphi_f^T(t_k)\,P(t_k|t_{k-1})\,\zeta_f(t_k)} \qquad (33e)$$
$$P(t_k) = P(t_k|t_{k-1}) - L(t_k)\,\varphi_f^T(t_k)\,P(t_k|t_{k-1}) \qquad (33f)$$
where $y_f^{(n_a)}(t_k)$ and $\varphi_f(t_k)$ are defined in (28). The normalized covariance matrix $Q_n$, also called the noise-variance ratio matrix, is defined by
$$Q_n = Q_w / \sigma_e^2 \qquad (34)$$
The instrument is computed as follows
$$\zeta_f(t_k) = F(p, \hat{\theta}(t_{k-1}))\,\zeta(t_k, \hat{\theta}(t_{k-1})) = \left[ -\hat{\mathring{x}}_f^{(n_a-1)}(t_k)\ \ldots\ -\hat{\mathring{x}}_f(t_k)\ \ \hat{\mathring{u}}_f^{(n_b)}(t_k)\ \ldots\ \hat{\mathring{u}}_f(t_k) \right]^T \qquad (35)$$
with $\hat{\mathring{x}}(t_k)$ and $\hat{\mathring{u}}(t_k)$ generated from
$$\hat{A}(p, \hat{\theta}(t_{k-1}), t)\,\hat{\mathring{x}}(t_k) = \hat{B}(p, \hat{\theta}(t_{k-1}), t)\,\hat{\mathring{u}}(t_k) \qquad (36a)$$
$$\hat{\mathring{u}}(t_k) = r_1(t_k) + C\left(r_2(t_k) - \hat{\mathring{x}}(t_k)\right) \qquad (36b)$$
where $C$ is the controller.
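A minimal sketch of one prediction/correction cycle (33) is given below, assuming again that the prefiltered signals at $t_k$ are available; $Q_n$ is the normalized covariance matrix (34). Since the prediction (33a) leaves the parameter estimate unchanged, only $P$ is modified before the correction. Setting $\zeta_f(t_k) = \varphi_f(t_k)$ recovers the RLSSVF algorithm used for initialization (see the implementation issues below).

```python
import numpy as np

def rw_clrsrivc_update(theta, P, Qn, y_f_na, phi_f, zeta_f):
    """One prediction/correction cycle of (33a)-(33f) with random walk parameters."""
    # Prediction step (33a)-(33b): theta is unchanged, P is inflated by Qn
    P = P + Qn
    # Correction step (33c)-(33f): same structure as (27) with P(t_k|t_{k-1})
    eps = y_f_na - phi_f @ theta
    L = P @ zeta_f / (1.0 + phi_f @ P @ zeta_f)
    theta = theta + L * eps
    P = P - np.outer(L, phi_f @ P)
    return theta, P
```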

4.3 Implementation issues

To apply the proposed estimation algorithm in practice, the following implementation issues have to be considered:

• Initialization of the IV-based methods. The initialization is done using the recursive least squares state-variable filter (RLSSVF) approach (Padilla et al., 2016), which is the algorithm (33) but with $\zeta_f(t_k) = \varphi_f(t_k)$. The estimation with RLSSVF is performed using only the measurements of the input and output $\{u(t_k), y(t_k)\}_{k=1}^{N'}$, as for open-loop operation. This corresponds to a direct estimation approach (Ljung, 1999).
• Choice of $\lambda_{svf}$ in the SVF. RLSSVF requires the use of a state-variable filter (SVF) of the form
$$F(p) = \frac{1}{(p + \lambda_{svf})^{n_a}} \qquad (37)$$
with $\lambda_{svf}$ the cut-off frequency of the SVF. $\lambda_{svf}$ is a user parameter that should be chosen somewhat larger than the system bandwidth. In the LTV case, and especially for systems with large variations in the bandwidth, the specification of $\lambda_{svf}$ can be critical since the system bandwidth is time-varying.
• Specification of the normalized covariance matrix $Q_n$. The random walk model requires the specification of $Q_n$. The performance of the algorithm in terms of tracking ability and noise sensitivity depends on $Q_n$, which can be estimated, for instance, by the maximum likelihood method (Young, 2011). So far this method has not been extended to be used with the approaches proposed in this study.

5. NUMERICAL EXAMPLE

Two recursive algorithms are evaluated: CLRSRIVC and RLSSVF.



The example presented in (Padilla et al., 2016) is adapted here to the closed-loop configuration shown in Figure 1, where $\mathcal{S}$ is the following second order system
$$\mathcal{S}: \begin{cases} \left(p^2 + a_{o1}(t)\,p + a_{o2}(t)\right) x(t) = b_{o0}(t)\,u(t) \\ y(t_k) = x(t_k) + e(t_k) \end{cases} \qquad (38)$$
The parameters are slowly varying as follows: $a_{o1}(t)$ varies between 5 and 45 in a linear fashion, $b_{o0}(t)$ remains constant at 200 and $a_{o2}(t)$ is given by
$$a_{o2}(t) = 160 - 90\cos(2\pi t/1000)$$
The sampling time is set to 0.01 s and the total simulation time is 1000 s. Regarding the external signals (see Figure 1), $r_1(t)$ is a PRBS and $r_2(t) = 0$. $e(t_k)$ is a zero-mean DT Gaussian noise with constant variance 0.1. In this example, as a consequence of the time-varying parameters, the DC gain is decreasing towards half of the simulation time; and since the noise variance is kept constant, the signal-to-noise ratio (SNR) is decreasing around half of the simulation time. Notice that the ratio between the maximum and minimum bandwidths is nearly 10, i.e. the bandwidth variation is relatively large over the total simulation time.
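For reference, the simulated parameter trajectories and disturbance described above can be generated as follows. The direction of the linear variation of $a_{o1}(t)$ (taken here as increasing from 5 to 45) and the exact PRBS design are not detailed in the text, so the reference signal below is only a crude binary stand-in.

```python
import numpy as np

Ts, T = 0.01, 1000.0                          # sampling period and total duration (s)
t = np.arange(0.0, T, Ts)

a_o1 = 5.0 + (45.0 - 5.0) * t / T             # linear variation between 5 and 45 (direction assumed)
a_o2 = 160.0 - 90.0 * np.cos(2 * np.pi * t / 1000.0)
b_o0 = 200.0 * np.ones_like(t)                # constant gain parameter

rng = np.random.default_rng(0)
e = np.sqrt(0.1) * rng.standard_normal(t.size)   # zero-mean DT Gaussian noise, variance 0.1
r1 = np.sign(rng.standard_normal(t.size))        # binary stand-in for the PRBS excitation
r2 = np.zeros_like(t)
```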

CT filtering operations for the computation of prefiltered time-derivatives and instruments are implemented from their discretized counterparts. The discretized version of the PID controller is given by
$$C: \ C_c(q^{-1}) = k_p + \frac{k_i}{1 - q^{-1}} + k_d(1 - q^{-1}) \qquad (39)$$
with $q^{-1}$ the backward shift operator and $k_p = 1.79$, $k_i = 13.8$, $k_d = 5.83 \cdot 10^{-2}$. Considering (38) and (39), and assuming that the given parameters are slowly varying, the closed-loop system is exponentially stable according to the frozen-time approach (Ilchmann et al., 1987). Moreover, since the parameters are bounded, the closed-loop system is BIBO stable (Rugh, 1996). For all the simulations, we use the same value for the hyperparameters $Q_n$ and $\lambda_{svf}$. The former is a diagonal matrix and its diagonal elements are determined by means of a numerical search that yields $10^{-5}$, $10^{-4}$ and $10^{-10}$ for $a_{o1}$, $a_{o2}$ and $b_{o0}$, respectively. The value corresponding to $b_{o0}$ has been chosen assuming that it is known that this parameter is constant. For the SVF, $\lambda_{svf} = 16$ rad/s is chosen, i.e. a value larger than the maximum bandwidth.
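The discrete PID (39) with the gains above can be implemented directly in summation form, as sketched below; an instance of this class can also serve as the controller callable in the auxiliary-model sketch of Section 3. Whether the controller used for the published results includes any additional filtering or saturation is not stated, so none is included here.

```python
class DiscretePID:
    """Discrete PID controller (39): C(q^-1) = kp + ki/(1 - q^-1) + kd*(1 - q^-1)."""

    def __init__(self, kp=1.79, ki=13.8, kd=5.83e-2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_sum = 0.0        # running sum implementing ki/(1 - q^-1)
        self.e_prev = 0.0       # previous error implementing kd*(1 - q^-1)

    def __call__(self, e):
        self.i_sum += e
        u = self.kp * e + self.ki * self.i_sum + self.kd * (e - self.e_prev)
        self.e_prev = e
        return u
```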

We present next some results for a single experiment run out of the 100 that are considered later in a Monte Carlo simulation analysis.

5.1 Single experiment analysis

In the LTI open-loop case, it is known that the RLSSVF estimates are always biased due to the noise. Even if the bias cannot be removed, it can be reduced by a proper choice of the cut-off frequency $\lambda_{svf}$. In the open-loop LTV case, this is more difficult since the system bandwidth is varying, while the SVF bandwidth is constant (Padilla et al., 2016). When the system operates in closed loop there is an additional issue, namely the correlation between the input $u(t_k)$ and the noise $e_o(t_k)$ due to the feedback mechanism. That might result in a larger bias on the estimates. In order to illustrate the impact of the noise on RLSSVF in the closed-loop LTV situation, we compare the deterministic case with the noisy case.

Fig. 3. True parameters and RLSSVF estimates for the deterministic (det.) case and noisy case. (The estimates for the deterministic case are matching the true values.)

From Figure 3, we can see that RLSSVF is able to track the parameters only in the noise-free situation. It is important to notice that the value used for $\lambda_{svf}$ is a very good choice since it is slightly higher than the maximum system bandwidth. To circumvent this problem, the closed-loop IV approach introduced in Section 4 can be used. From Figure 4 we can see that CLRSRIVC is able to track the parameters in this situation where the parameters vary at different rates.

Fig. 4. True parameters and CLRSRIVC estimates.

5.2 Monte-Carlo simulation analysis

To complete the analysis, a Monte-Carlo simulation with 100 experiments is run. The parameter relative error given by
$$\tilde{\theta}_i(t_k) = \frac{|\theta_i(t_k) - \hat{\theta}_i(t_k)|}{|\theta_i(t_k)|} \times 100 \qquad (40)$$
with
$$\begin{bmatrix} \hat{\theta}_1(t_k) \\ \hat{\theta}_2(t_k) \\ \hat{\theta}_3(t_k) \end{bmatrix} = \begin{bmatrix} \hat{a}_1(t_k) \\ \hat{a}_2(t_k) \\ \hat{b}_0(t_k) \end{bmatrix}$$
is used for comparison purposes. Averaging the relative errors of the 100 experiments, we obtain the results presented in Figure 5 for RLSSVF and in Figure 6 for CLRSRIVC. It can be seen that the results for a single experiment hold for the Monte Carlo simulations.
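A small sketch of the comparison metric: the relative error (40) evaluated per parameter and per time instant, then averaged over the Monte Carlo runs (array shapes and names are illustrative).

```python
import numpy as np

def relative_error_percent(theta_true, theta_hat):
    """Element-wise relative error (40) in percent.

    theta_true, theta_hat : (N, 3) arrays with columns [a1, a2, b0] over time.
    """
    return 100.0 * np.abs(theta_true - theta_hat) / np.abs(theta_true)

# With errors of shape (n_runs, N, 3), the curves of Figures 5 and 6 are
# mean_error = errors.mean(axis=0)
```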




Fig. 5. Average of the relative errors for RLSSVF for a Monte-Carlo simulation with 100 runs.

Fig. 6. Average of the relative errors for CLRSRIVC for a Monte-Carlo simulation with 100 runs.

6. CONCLUSIONS

In this study, the CLRSRIVC algorithm has been developed for the recursive on-line identification of linear, continuous-time, slowly time-varying systems operating in closed loop. With this approach we are able to track time-varying parameters that vary at different rates by considering a stochastic model. For the estimation of CT models, prefiltered time-derivatives of the input and output are computed using an adaptive low-pass prefilter. The advantage of using CLRSRIVC over a direct approach like RLSSVF is illustrated by means of a Monte-Carlo simulation analysis considering a system whose bandwidth variation is large.

ACKNOWLEDGEMENTS

This work was supported by the Advanced Center for Electrical and Electronic Engineering, AC3E, Basal Project FB0008, CONICYT.

REFERENCES

Dimogianopoulos, D. and Lozano, R. (2001). Adaptive control for linear slowly time-varying systems using direct least-squares estimation. Automatica, 37, 251–256.
Gilson, M., Garnier, H., Young, P.C., and Van den Hof, P. (2008). Instrumental variable methods for closed-loop continuous-time model identification. In H. Garnier and L. Wang (Eds.), Identification of Continuous-time Models from Sampled Data, 133–160. Springer.
Gilson, M., Garnier, H., Young, P.C., and Van den Hof, P. (2011). Optimal instrumental variable method for closed-loop identification. IET Control Theory and Applications, 5(10), 1147–1154.
Gilson, M. and Van den Hof, P. (2005). Instrumental variable methods for closed-loop system identification. Automatica, 41, 241–249.
Goodwin, G.C. and Sin, K.S. (1984). Adaptive Filtering Prediction and Control. Dover Publications Inc.
Ilchmann, A., Owens, D., and Prätzel-Wolters, D. (1987). Sufficient conditions for stability of linear time-varying systems. Systems & Control Letters, 9, 157–163.
Jiang, Z. and Schaufelberger, W. (1991). A recursive identification method for continuous time-varying linear systems. In 34th Midwest Symposium on Circuits and Systems, volume 1, 436–439. Monterey, CA, USA.
Landau, I., Anderson, B.D.O., and De Bruyne, F. (2001). Recursive identification algorithms for continuous-time nonlinear plants operating in closed loop. Automatica, 37, 469–475.
Laurain, V., Tóth, R., Gilson, M., and Garnier, H. (2011). Direct identification of continuous-time linear parameter-varying input/output models. IET Control Theory and Applications, 5(7), 878–888.
Ljung, L. (1999). System Identification. Theory for the User. Prentice Hall, Upper Saddle River, 2nd edition.
Ljung, L. and Gunnarsson, S. (1990). Adaptation and tracking in system identification - a survey. Automatica, 26(1), 7–21.
Ljung, L. and Söderström, T. (1983). Theory and Practice of Recursive Identification. The MIT Press, Cambridge, MA.
Padilla, A., Garnier, H., Young, P.C., and Yuz, J. (2016). Real-time identification of continuous-time linear time-varying systems. In IEEE Conference on Decision and Control. Las Vegas, US.
Rugh, W.J. (1996). Linear System Theory. Prentice Hall.
Söderström, T. and Stoica, P. (1983). Instrumental Variable Methods for System Identification. Springer-Verlag.
Toth, R., Laurain, V., Gilson, M., and Garnier, H. (2012). Instrumental variable scheme for closed-loop LPV model identification. Automatica, 48, 2314–2320.
Young, P. (2011). Recursive Estimation and Time-Series Analysis: An Introduction for the Student and Practitioner. Springer.
