Persistently-exciting signal generation for Optimal Parameter Estimation of constrained nonlinear dynamical systems


ISA Transactions xxx (2018) 1–11


Practice article

Leonardo M. Honório a,*, Exuperry Barros Costa a, Edimar J. Oliveira a, Daniel de Almeida Fernandes a, Antonio Paulo G.M. Moreira b

a Department of Energy, Federal University of Juiz de Fora, Brazil
b Faculty of Engineering, University of Porto, Porto, Portugal

Article history: Received 27 October 2017; Revised 15 March 2018; Accepted 29 March 2018; Available online XXX

Abstract

This work presents a novel methodology for Sub-Optimal Excitation Signal Generation and Optimal Parameter Estimation of constrained nonlinear systems. It is proposed that the evaluation of each signal must also account for the difference between the real and estimated system parameters. However, this metric cannot be obtained directly, since the real parameter values are not known. The alternative presented here is to adopt the hypothesis that, if a system can be approximated by a white box model, this model can be used as a benchmark to indicate the impact of a signal on the parametric estimation. The proposed method therefore uses a dual-layer optimization methodology: (i) at the inner level, for a given excitation signal, a nonlinear optimization method searches for the optimal set of parameters that minimizes the error between the outputs of the optimized and benchmark models; (ii) at the outer level, a metaheuristic optimization method constructs the best excitation signal, considering the fitness coming from the inner level, the quadratic difference between its parameters, and the cost related to the time and space required to execute the experiment. © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

Keywords: Optimal signal generation; Optimal Parameter Estimation; Optimization in parameter estimation; Constrained systems parameter estimation; Optimal Input Design; Non-linear systems

1. Introduction

Obtaining a suitable mathematical model of a Nonlinear Dynamic System (NDS) — the plant model — is of fundamental importance to both the synthesis and the tuning of any robust model-based observer or controller [1]. Furthermore, the model is necessary for carrying out both the (analytical) stability assessment and the (numerical) system performance assessment [2–5]. Such a light-gray-box model — differential equations with parameter estimation — has to satisfactorily reproduce the dynamical behavior of the plant. Accordingly, parameter estimation of nonlinear systems has been extensively investigated in the literature, as described in Refs. [6–10]. Several methods have been proposed, such as optimization techniques, neural networks [11], fuzzy systems [12], and others [13]. A full description of these methods, as well as their limitations and advantages, can be found in Ref. [14]. However, one common drawback of any Optimal Parameter Estimation (OPE) methodology is the need for an input signal that presents certain properties: (i) its generation must be trivial,

(ii) it must consider boundaries and safety, and (iii) it must provide a rich enough excitation to estimate the system's dynamics [8,14–16]. Due to these facts, OPE techniques are commonly coupled with Optimal Input Design (OID). As for the problem of combining OID with OPE, several contributions have been made to the literature since the early 1960s [15,17,18]. It has been shown that step-like signals such as (Amplitude-modulated) Pseudo-Random Binary Signals ((A)PRBSs) [4,5] are much richer than the sum of a multitude of sinusoidal signals, and are therefore more likely to be persistently exciting [5,17]. Sufficiently informative data can be raised through the use of persistently exciting signals of appropriate order [19]. It has also been shown that it is always better to excite all the inputs simultaneously [19,20], in either open- or closed-loop parameter estimation frameworks. Reference [21] presents a Lagrangian-based optimization methodology, but it only directly applies constraints to the input variables; the output is constrained by predicting the maximum input space envelope without violations. In rigid body dynamics, the computa-

* Corresponding author. E-mail address: [email protected] (L.M. Honório).

https://doi.org/10.1016/j.isatra.2018.03.024 0019-0578/© 2018 ISA. Published by Elsevier Ltd. All rights reserved.

Please cite this article in press as: Honório LM, et al. Persistently-exciting signal generation for Optimal Parameter Estimation of constrained nonlinear dynamical systems, ISA Transactions (2018), https://doi.org/10.1016/j.isatra.2018.03.024


L.M. Honório et al. / ISA Transactions xxx (2018) 1–11

tional complexity of such an approach would be inviable. Other references deal with Optimal Input Design [22–24] with constraints only over the body frame. Their focus is the system's operational limits; they do not consider conditions such as the minimum space and time needed to run the experiment. In fact, to the best of our knowledge, there is no work in the literature for such a purpose. However, this is an important aspect in situations where tests must be carried out in restricted spaces, or with plants that demand a considerable amount of time and money to be deployed and tested, such as deep water Remotely Operated Vehicles (ROVs) [25]. Another important observation is that those works approximate the non-differentiable input signals by a series of differentiable functions in order to use gradient-based optimization techniques. However, from an optimization perspective [26,27], turning discrete variables into continuous ones tends to provide poor results.

Under the aforementioned background, this work presents a new approach for generating persistently-exciting signals for Optimal Parameter Estimation (OPE) of constrained NDSs. It is based on the fact that, ideally, a methodology should be able to start by finding the best parameter set that minimizes the output error between a parametric optimized model and the real system. This result would determine a Light-Gray Box Model (LGBM) of the real system. However, as stated before, although the excitation signal has a major impact on correct parameter estimation, it is not possible to directly evaluate how well a resulting LGBM represents the real system. Therefore, an important contribution of this work is the formulation of the following hypothesis: "if it is possible to generate a White Box Model (WBM) of a nonlinear system, it is also possible to use this WBM as the benchmark and evaluate whether a given signal is rich enough to estimate the desired parameters. If so, the same signal may be used on the real system to estimate its parameters". This hypothesis assumes that if a system can just be reasonably modeled as a WBM, its behavior — even if not perfectly — may represent the real system dynamics. By adopting this hypothesis, it is now possible to use a WBM, which has well-known parameters, as a benchmark to emulate the real system. It would then also be possible to use an optimized LGBM to evaluate a given signal by comparing the resulting parameters with those of the benchmark system. This approach would provide both the excitation signal and the system parameters. Furthermore, it would also be possible to include constraints in the optimization formulation that keep the system under desirable limits during the real-world experiment.

The solution consists of a dual-layer optimization strategy where the inner layer is responsible for finding the best parameter set that minimizes the output error between the optimized LGBM and the benchmark WBM system. The outer layer is responsible for evaluating the effectiveness of each signal used at the inner layer through three metrics: a) the error fitness provided by the inner layer, b) the space constraint, safety, and cost to execute the signal, and c) the parametric error between the optimized and real systems. Due to the problem characteristics, the solution adopts two different optimization algorithms. Regarding the inner layer, although the literature shows several works using Least Squares [28–31] to estimate the parameters, this work uses the Safety Barrier Interior Point algorithm [32] due to its superior mathematical stability and faster convergence time. As any other Lagrangian-based method, it enables the analysis of both Lagrange multipliers and slack variables. As for the outer layer, it is responsible for generating and evaluating the input signal through a multiple-criteria penalty function composed of the metrics mentioned above. For this stage, the literature shows dynamic optimization [33] and population-based methods such as particle swarm optimization (PSO) [34]. This paper also uses a PSO-like algorithm, adding three different evolutionary

operators in order to avoid local optima. This strategy is based on [35]. To demonstrate the proposed framework, this work is divided into two parts. The first part is dedicated to the problem formulation and the presentation of a practical tutorial case. The second part focuses on sensitivity, convergence, and an interactivity study through the verification and discussion of several scenarios. This paper consists of the first part, and it is organized as follows: the concepts, ideas, and propositions are presented in Section 2; the metrics and the inner layer are shown in Section 3; the outer layer optimization methodology is shown in Section 4; the results in Section 5; and the conclusions in Section 6.

2. Suboptimal Excitation Signal Generation and Optimal Parameter Estimation (SESGOPE)

2.1. Preliminary discussion

Consider that the real system (𝚪) is an underwater robot such as [25,36], and must be approximated by a non-linear parametric model (𝚪̂), where 𝚪 is the set of real and unknown parameters and 𝚪̂ its estimation. Consider also that yr and ym are the output signal histories of the real and modeled systems, respectively. The estimation of 𝚪̂ can be done by applying an excitation signal u to (𝚪) and trying to minimize ∥yr − ym∥. However, several considerations must be made. For example, if the experiment must be carried out in a closed environment, then u must be carefully designed in order to keep the desired operating limits such as velocity, position, depth, etc. Moreover, u must also encode the correct frequency and amplitude response to estimate 𝚪̂. For instance, if the frequency of u is much higher or lower than the one that drives the dynamics of (𝚪), the parameters will not be well estimated [8,14]. Similarly, if the signal amplitude is not properly chosen, non-linearities such as thruster saturation, viscous friction, and added mass could not be captured.
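To make the amplitude and duration choices above concrete, the sketch below builds a stage-wise, piecewise-constant excitation signal of the step-like kind discussed in this section. The stage durations, amplitudes, and sampling step are illustrative values only, not taken from the paper.

```python
import numpy as np

def staged_signal(durations, amplitudes, dt=0.01):
    """Build a piecewise-constant, step-like excitation signal from per-stage
    durations (s) and amplitudes, sampled every dt seconds."""
    samples = []
    for T, a in zip(durations, amplitudes):
        samples.extend([a] * int(round(T / dt)))
    return np.asarray(samples)

# Hypothetical 5-stage signal; durations and amplitudes are illustrative only.
u = staged_signal(durations=[1.0, 0.5, 2.0, 0.5, 1.0],
                  amplitudes=[0.0, 0.8, -0.4, 0.8, 0.0])
print(u.size)  # 500 samples at dt = 0.01 s
```

Changing the stage durations shifts the signal's dominant frequency content, while the amplitude levels decide which non-linearities (e.g. thruster saturation) are actually exercised.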
The problem, then, is to find a signal u that is able to correctly identify all specified parameters while keeping the experiment under desirable operating limits. However, even though it is possible to find a signal ui that allows a parameter estimate 𝚪̂i to be found while keeping all desired limits, it is not possible to ensure that (𝚪) ≡ (𝚪̂i) for other signals. Moreover, the only way to test it would be to use several other signals, which is inviable in most real cases.

Alternatively, it is possible to design a white box model (𝚪̂−) from (𝚪), where 𝚪̂− is an initial parameter estimation, and use this model as a benchmark system. Thus, it is also possible to use a signal ui and obtain an estimate (𝚪̂i) that minimizes the error between the outputs of the estimated and benchmark systems. Moreover, it is also possible to design an optimization problem where the objective function is, for instance, to minimize ‖𝚪̂i − 𝚪̂−‖, and the constraints are time and desirable operational limits. Finally, if ui can be used to correctly estimate 𝚪̂−, then it should also be good for estimating the parameters of (𝚪).

2.2. Mathematical presentation

Consider an NDS (𝚪) that can be satisfactorily approximated by a nonlinear parametric model (𝚪̂) with n states, p inputs, m outputs, and r parameters, which is defined as

(𝚪̂) ≔ { ẋ(t) = f(x(t), u(t), 𝚪̂)
       { y(t) = h(x(t), u(t), 𝚪̂)                                   (1)

with initial state x0 = x(0), where x(t) ∈ ℝn ∣ x(t) = [x1(t), x2(t), …, xn(t)]T is the state vector, u(t) ∈ ℝp ∣ u(t) = [u1(t), u2(t), …, up(t)]T is the input vector, y(t) ∈ ℝm ∣ y(t) =


[y1(t), y2(t), …, ym(t)]T is the output vector, and 𝚪̂ ∈ ℝr ∣ 𝚪̂ = [𝛾̂1, 𝛾̂2, …, 𝛾̂r]T is the vector that represents the set of model parameters. Consider x as being the state trajectory, i.e. the history of the states in the time interval T = [t0, tf], u as being the control history, and y as being the output history [37]. Thus, by considering that T is divided into tu ∈ ℤ≥0 simulation samples, the proper dimensions are given by x ∈ ℝn×tu, u ∈ ℝp×tu and y ∈ ℝm×tu. The vector 𝚪 ∈ ℝr ∣ 𝚪 = [𝛾1, 𝛾2, …, 𝛾r]T is the best, although unknown,¹ set of parameter values in order for the model to satisfactorily represent (𝚪), and 𝚪̂− is a sufficiently good "a priori" estimate of 𝚪. The input signals ui, i ∈ {1, 2, …, p}, are discrete.

As already stated, the main goal of this work is to generate a suboptimal signal u⊕ to be used for carrying out a real-world experiment with (𝚪). The signal u⊕ has to be persistently exciting in order for an "a posteriori" optimal estimate 𝚪̂+ to be obtained such that (𝚪̂+) ≈ (𝚪). The signal u⊕ also has to fulfill the other criteria. Towards that end, the following hypothesis is formulated:

Hypothesis 1. Assuming that the set of parameters 𝚪̂− is a rough, although valid, approximation of 𝚪, then, if a suboptimal input signal u⊕ can be used with (𝚪̂−) in order to estimate 𝚪̂−, then u⊕ is suitable for exciting (𝚪) such that a proper "a posteriori" estimation 𝚪̂+ can be obtained.

If Hypothesis 1 holds true, then the result of a simulation performed with (𝚪̂−) is an acceptable approximation of (𝚪). By using (𝚪̂−) as a starting point of the optimization process, the search space is reduced around it, decreasing the possibility of finding a local minimum. A perturbed model (𝚪̃p) is used as the real system, and 𝚪̂− as the starting point of the optimization process. Consider the perturbed model (𝚪̃p), where

𝚪̃p = 𝚪̂− + 𝜹̃p                                                      (2)

and 𝜹̃p ∈ ℝr ∣ 𝜹̃p = [𝛿̃p1, 𝛿̃p2, …, 𝛿̃pr]T is a set of Gaussian perturbations with zero mean and variance q ∈ ℝr>0 ∣ q = [q1, q2, …, qr]T. The value of each qi, i ∈ {1, 2, …, r}, is determined empirically and is proportional to the inverse of the knowledge one has on the corresponding parameter.

Considering (𝚪̃p), the objective now is, for a given signal u, to find (𝚪̂+), where 𝚪̂+ is the "a posteriori" estimation that minimizes

Σ𝜀 ≔ ∑_{k=0}^{tu} 𝜀(k)                                             (3)

where Σ𝜀 ∈ ℝ≥0, 𝜀 ∈ ℝtu and

𝜀(k) ≔ ‖x̂+(k) − x̃p(k)‖ + ‖ŷ+(k) − ỹp(k)‖                           (4)

The pairs x̃p and ỹp, and x̂+ and ŷ+, are the state and output vectors of the perturbed (𝚪̃p) and estimated (𝚪̂− + 𝜹̂+) models, respectively (cf. Eq. (1)). Even if Σ𝜀 = 0, it would not guarantee that the parameters were correctly estimated. However, considering

𝚪̂+ = 𝚪̂− + 𝜹̂+                                                      (5)

with 𝜹̂+ as the outcome of the output minimization process, if

𝜹̂+ ≈ 𝜹̃p                                                           (6)

the signal u has enough persistence of excitation to correctly estimate the parameters. This two-step approach, which is sketched in Fig. 1a and b, starts by evaluating a fitness that compares both the state and the output errors between (𝚪̃p) and (𝚪̂− + 𝜹̂+), given the same excitation

¹ Notice that, as for a real system, 𝚪 can never be exactly known.


signal u. An important observation is that, during the final stage (the parameter estimation of the real system), the same diagram shown in Fig. 1a is valid; only the perturbed model (𝚪̃p) is replaced by the real system (𝚪). It is required that both systems have the same initial condition x0. This approach enables the evaluation of the state space boundary by just analyzing x̃p and ỹp, without the necessity of searching for 𝜹̂+. The importance of doing so will be explained later in this section. Let f_tilde and f_plus be evaluation functions respectively defined as

{x̃p, ỹp, tu} ≔ f_tilde(x0, 𝚪̃p ∣ u)                                 (7)

{Σ𝜀, x̂+, ŷ+} ≔ f_plus(x0, 𝚪̂−, 𝜹̂+, x̃p, ỹp ∣ f_tilde, u)            (8)

and

fo(·) = minimize_{𝜹̂+}  ∑_{k=0}^{tu} 𝜀(k) = f𝜀_plus(x0, 𝚪̂−, 𝜹̂+, x̃p, ỹp ∣ f_tilde, u)
        subject to     −𝚫 ≤ 𝜹̂+ ≤ 𝚫                                 (9)

where f𝜀_plus regards only the output Σ𝜀 of equation (8). Therefore, fo(·) is the optimal value of f𝜀_plus considering 𝜹̂+ as the control variable. The limits over 𝜹̂+ are required to keep the parameters under acceptable physical limits; otherwise the result could be an optimal, but unfeasible, solution. The representation f(a ∣ b) means that f is a function of a and depends on the previous evaluation of b. For example, considering f𝜀_plus(x0, 𝚪̂−, 𝜹̂+, x̃p, ỹp ∣ f_tilde, u), the values of x̃p and ỹp are previously evaluated from f_tilde which, by definition (7), also depends on a given signal u.

The task of finding a (sub)optimal excitation signal u⊕, considering parameter and output fitness along with time, can be transformed into the multi-parametric sequential unconstrained optimization problem defined as

minimize_{u, 𝜹̂+}  𝜅1 𝚯(x̃p, ỹp, tu ∣ f_tilde, u) + 𝜅2 fo(x0, 𝚪̂−, 𝜹̂+, x̃p, ỹp ∣ 𝚯) + 𝜅3 f𝜹̂(𝜹̂+ ∣ fo)
where 𝜅1 ⋙ 𝜅2, 𝜅3                                                  (10)

where 𝜅1, 𝜅2, 𝜅3 ∈ ℝ≥0 are weighting constants, and 𝚯(·), fo(·), and f𝜹̂(·) respectively represent the fitness regarding state space limits, the output accuracy, and the parameter recoverability. Furthermore, the problem defined in (10) is a mixed integer nonlinear problem whose objective function is not differentiable with respect to u. Although it is possible to transform the input signal u into a continuous one by using sigmoid or exponential-like transformations, the literature shows that a change like this may produce poor results [26]. This is especially true for a highly nonlinear system with a high number of variables; in this situation the optimization process could easily get stuck in a local optimum. In order to achieve better results, problem (10) is broken into two-layer sequential evaluation functions that use both discrete and continuous optimization algorithms. The outer layer consists of a mixed integer dynamic programming formulation whose optimization control variable is the input signal u. The inner layer consists of a Continuous Nonlinear Problem (CNP) with a multi-parametric objective function; its optimization control variable is the set 𝜹̂+, which is related to the currently analyzed input signal u. In other words, for each candidate input signal u, it is necessary to solve a CNP in order to find the optimal parameter set 𝜹̂+. This point can be better understood through an example with the generic problem presented


Fig. 1. Optimal estimation framework of 𝜹̂+, given 𝚪̂−, x̃p, ỹp, and u.

below.

Example 1. Consider the following generic two-layer optimization problem:

minimize_u ( f(u) + minimize_{𝜹̂+} g(𝜹̂+ ∣ u) )                      (11)

where g(𝜹̂+ ∣ u) is the inner layer function, which is a function of the variable 𝜹̂+ and depends on the previously known signal u. The result of the inner layer is a part of the fitness function of the outer layer, min(f(·) + min(g(·))). Therefore, each change in u demands the re-evaluation of min(g(·)).

3. Excitation Signal evaluation metrics

The inner layer is responsible for, given an Excitation Signal (ES), finding the best set of parameters that can represent a benchmark system. However, depending on the system's dynamics, this process can consume considerable computational resources. Moreover, it is not possible to ensure that the provided ES is a valid signal considering space and time constraints. Thus, to avoid unnecessary computations, it is first analyzed whether or not the current ES satisfies the initial boundary constraints, as shown in Section 3.1. Topologically, as this first step is responsible for evaluating the signal and not the parameters, it is related to the outer layer rather than the inner one. However, by performing this procedure first it is possible to discard undesirable signals beforehand.
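The two-layer scheme of Example 1 can be sketched as follows. This is a toy illustration of the outer (discrete signal search) and inner (continuous parameter fit) loops only: the single-parameter model, the cost terms, and all numeric values are invented stand-ins for the benchmark and estimated systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the benchmark (perturbed) and estimated models: one unknown
# parameter scales an integrator's output. Invented for illustration.
TRUE_DELTA = 0.3

def simulate(u, delta):
    y_est = np.cumsum(u) * (1.0 + delta)          # "estimated" model output
    y_bench = np.cumsum(u) * (1.0 + TRUE_DELTA)   # benchmark output
    return y_est, y_bench

def inner_layer(u):
    """Inner layer: continuous search for the correction minimizing the output error."""
    deltas = np.linspace(-1.0, 1.0, 201)
    errs = []
    for d in deltas:
        y_est, y_bench = simulate(u, d)
        errs.append(float(np.sum((y_est - y_bench) ** 2)))
    i = int(np.argmin(errs))
    return deltas[i], errs[i]

def outer_layer(n_candidates=20, n_stages=8):
    """Outer layer: discrete search over candidate staged signals (cf. Example 1)."""
    best = None
    for _ in range(n_candidates):
        u = rng.choice([-1.0, 1.0], size=n_stages)   # candidate excitation stages
        d, err = inner_layer(u)
        fitness = err + 0.01 * np.abs(u).sum()       # output error + effort cost
        if best is None or fitness < best[0]:
            best = (fitness, u, d)
    return best

fit, u_best, d_best = outer_layer()
print(round(d_best, 3))  # recovers the benchmark perturbation, ~0.3
```

Note how every candidate u triggers a full inner minimization, which is exactly why the method discards infeasible signals before running the inner layer.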

3.1. State space, output, and time constraints – 𝚯(·)

The state variable constraints are responsible for keeping x and y inside admissible bounds. Apart from these mandatory constraints, it is also desirable, for economic, physical, and/or safety reasons, to minimize the vector subspace needed to carry out a real-world experiment. Therefore, the state space occupancy also contributes to the fitness. If any system state is outside the admissible bounds, a severe penalty is imposed; otherwise, the part of the fitness that regards the occupancy is normally accounted. The evaluation process starts at equation (7) for a given signal u. As the evaluation process is sequential, if (7) returns any state out of the admissible subspace of ℝn, then the penalty imposed on the objective function becomes so considerable that no further evaluation of any other objective function is necessary. When it is possible to obtain (𝚪̂+) ≈ (𝚪̃p), then x̂+ ≈ x̃p and ŷ+ ≈ ỹp are reached as a consequence. Therefore, excitation signals driving the states of (𝚪̃p) outside the admissible subspace of ℝn do likewise with those of (𝚪̂+), which would penalize the objective function in such a manner that the evaluation of 𝚪̂+ would just compromise the computational performance. When it comes to the optimization process, the signal u is the variable to be optimized, and the fitness function is defined as 𝚯(·),

which is an 𝛼-constrained-based function [38,39]. Originally, the 𝛼-constrained method binds the objective function f(·) and the penalty 𝚿(·) into a pair P1 ≔ (f(·), 𝚿(·)). A lexicographic order of priority, adjusted by 𝛼, chooses one element of this pair (fitness or penalty) to compare P1 with a different instance, e.g. P2, returning the better one. Here, the 𝛼-constrained method is transformed into an 𝛼-function 𝚯(·) — x̃p, ỹp, and tu are (previously) computed in (7) — defined as

𝚯(x̃p, ỹp, tu ∣ f_tilde, u) ≔
  { (1/𝜅1)·tu + (𝜅t/𝜅1)·fV(x̃p) + (𝜅f/𝜅1)·fV(ỹp),   if 𝚿(·) ≤ 𝛼 for x̃p, ỹp, and tu
  { 𝚿(x̃p) + 𝚿(ỹp) + 𝚿(tu) + 1,                     otherwise       (12)

where

𝚿(tu) ≔ |tu − Tmax|,  if tu > Tmax;    𝚿(tu) ≔ 0,  if tu ≤ Tmax

𝚿(x̃p) ≔ ∑_{k=0}^{tu} ∑_{i=1}^{n} |x̃p,i(k) − xi,max|,  ∀ x̃p,i(k) > xi,max
       + ∑_{k=0}^{tu} ∑_{i=1}^{n} |x̃p,i(k) − xi,min|,  ∀ x̃p,i(k) < xi,min
       + 0,                                            for xi,min ≤ x̃p,i(k) ≤ xi,max       (13)

where 𝛼 ∈ ℝ is a relaxation index. It starts with 𝛼 = 100 and, in each iteration, it is assumed that 𝛼 = 0.6𝛼, such that 𝛼 → 0 at the end of the optimization process. Tmax ∈ ℝ>0 is the maximum allowed experiment time, k ∈ ℤ≥0 ∣ k ≤ tu, and xmin, xmax ∈ ℝn are respectively the minimum and the maximum allowed operational bounds of the state variables. fV(·) is the hyper-volume of the hyper-dimensional Axis-Aligned Bounding Box (AABB) [40], which is defined as

fV(x̃p) ≔ (x̄ − x̲)T W (x̄ − x̲)                                       (14)

where the diagonal weighting matrix W ∈ ℝn×n ∣ W ≥ 0 is defined as W ≔ diag(w1, w2, …, wn), and lastly

x̄ ≔ [max_k(x̃p1(k)), max_k(x̃p2(k)), …, max_k(x̃pn(k))]T
x̲ ≔ [min_k(x̃p1(k)), min_k(x̃p2(k)), …, min_k(x̃pn(k))]T             (15)

Note that x̃pi(k) represents the k-th element of the (temporal) history of the i-th component of the state vector x̃p, that x̄ and x̲ respectively represent the element-wise maximum and minimum values reached by x̃p(k) during the experiment time, and also that xmax and xmin are two "hard" operating bounds imposed on the system. Note also that the constant 𝜅1 must be assigned a value much higher


than any possible value of fV(x̃p). The statements related to equations (13)–(15) apply to ỹp likewise; they were omitted here for the sake of brevity.

3.2. Output signal accuracy – fo(·) – inner layer level

The output accuracy is an index that measures the difference between the outputs of two systems, given the same input signal u. It is necessary to find the optimal set 𝜹̂+ that minimizes (9). As this demands a dedicated optimization algorithm, it is also referred to as the inner layer level. Using the Safety Barrier Interior Point (SBIP) approach described in Ref. [32], the inequalities in (9) can be eliminated by adding a logarithmic barrier function, whereupon the following optimization function is obtained

minimize_{𝜹̂+}  f_plus(x0, 𝚪̂−, 𝜹̂+, x̃p, ỹp ∣ f_tilde, u) − 𝜇 ∑_{i=1}^{r} ln(s1,i + 𝜏) − 𝜇 ∑_{i=1}^{r} ln(s2,i + 𝜏)
subject to     𝜹̂+ − 𝚫 + s1 = 0
               𝜹̂+ + 𝚫 − s2 = 0
               s1 ≥ 0,  s2 ≥ 0,  𝜏 > 0                             (16)

where s1, s2 ∈ ℝr≥0 are vectors of slack variables, 𝜇 ∈ ℝ is the barrier parameter such that 𝜇 → 0 at the end of the optimization process, and 𝜏 ∈ ℝ>0 is the safety barrier parameter. The result of (16) is the optimal set 𝜹̂+, which is used in (5) to evaluate 𝚪̂+. It is recommended to use the same optimization method found in Ref. [32] because it is faster and more efficient; however, any traditional Interior Point (IP) algorithm can provide acceptable results. The main difference between (16) and any other IP algorithm is that s1 and s2 are allowed to be zero by adding the safety barrier parameter 𝜏. In conditions where the optimum solution is outside the feasible region, this is a very important feature.

3.3. Parameter recoverability – f𝜹̂(·)

Considering the possibility that a given input signal u does not have enough persistence of excitation [2], even if Σ𝜀 = 0 is achieved, it cannot be guaranteed that 𝚪̃p ≈ 𝚪̂− + 𝜹̂+ is satisfied. In such a case, a different metric must account for the persistence of excitation. To measure this performance, the parameter recoverability is a metric designed to evaluate the signal excitation. A given signal is considered rich if, apart from minimizing the output error, the parameters 𝚪̂+ converge to 𝚪̃p, thus satisfying (6). To ensure that, it is necessary to minimize the following (normalized) metric

f𝜹̂(𝚪̂+ ∣ fo) = ∑_{i=1}^{r} ( (𝛾̂+i − 𝛾̃pi) / 𝛾̂−i )²                  (17)

where each 𝛾̂ is a scalar in the parameter vector 𝚪̂ = [𝛾̂1, 𝛾̂2, …, 𝛾̂r]T. This individual normalization is necessary to keep each parameter under its own scale.

4. Probabilistic Optimal Input Design and optimization

Section 2 covered the general architecture of the proposed Optimal Input Design (OID) and, more specifically, it stated that the optimization problem is broken into two layers. The inner layer formulation was presented in Subsection 3.2. This section addresses the outer layer optimization algorithm, which is responsible for achieving the ultimate goal, i.e. the design of an optimal input signal. It also uses the metrics defined in Section 3.

4.1. Signal representation

As for finding an optimal input signal u⊕, it is first necessary to define a mathematical representation of u in order to be able to operate with it. Consider the matrix 𝚵 ∈ ℝ2pk, which represents the components of u divided into k stages, such that

𝚵 ≔ [ t_1^{u1}, …, t_k^{u1}, t_1^{u2}, …, t_k^{u2}, …, t_1^{up}, …, t_k^{up},
      a_1^{u1}, …, a_k^{u1}, a_1^{u2}, …, a_k^{u2}, …, a_1^{up}, …, a_k^{up} ]      (18)

with

(tmin)_i^{uj} ≤ 𝚵_d = t_i^{uj} ≤ (tmax)_i^{uj},   1 ≤ d = (j − 1)·k + i ≤ pk
(amin)_i^{uj} ≤ 𝚵_d = a_i^{uj} ≤ (amax)_i^{uj},   pk < d = (j − 1)·k + i + pk ≤ 2pk      (19)

where 1 ≤ i ≤ k and 1 ≤ j ≤ p represent the i-th stage of the j-th input signal component, t_i^{uj} and a_i^{uj} are respectively the execution time and the amplitude, (tmin)_i^{uj} and (tmax)_i^{uj} are respectively the minimum and the maximum execution time, and (amin)_i^{uj} and (amax)_i^{uj} are the minimum and the maximum signal amplitude, respectively. Finally, consider f𝚵 ∈ ℝ≥0 the fitness of the signal 𝚵, where

f𝚵 = minimize_𝚵 ( 𝜅1 𝚯(· ∣ ·, 𝚵) + 𝜅2 fo(· ∣ 𝚯) + 𝜅3 f𝜹̂(· ∣ fo) )   (20)

It is possible that each input signal component demands a different time length. Even though an input with these characteristics would be poorly evaluated during the optimization process, it would unnecessarily spend more computational time and increase the solution space. The time length of each component is therefore equalized according to the highest time length among all the signal components. Toward that end, each component repeats its initial portion until they all present the same duration. This point can be better seen through the example provided in Fig. 2, where the component u1 presents the longest time length, i.e. 20 time units (tu), whereas the signal components u2 and u3 present only 15 tu and 14 tu, respectively. Therefore, it is necessary to repeat the initial part of each of the shorter signals until both reach 20 tu. One of the advantages of adopting this approach is that, instead of simply normalizing the time length of each signal component, it is possible to have different stage numbers with simple and efficient mathematical operations among instances. Therefore, operations such as scalar multiplication, sum, and subtraction are performed element-wise, and the final result is forced into the feasibility region according to (19).

Fig. 2. Example: discrete input signal designed through 7 stages.
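The stage-wise encoding of Eq. (18) and the repeat-the-initial-portion equalization illustrated in Fig. 2 can be sketched as below. The helper names, dimensions, and stage values are hypothetical, not from the paper.

```python
import numpy as np

def decode(xi, k, p):
    """Split a flat vector xi = [stage times | stage amplitudes] (cf. Eq. (18))
    into p-by-k arrays; component j owns stages (j-1)*k .. j*k - 1."""
    times = xi[:p * k].reshape(p, k)
    amps = xi[p * k:].reshape(p, k)
    return times, amps

def equalize(times, amps, dt=0.1):
    """Sample each component as a piecewise-constant signal, then extend the
    shorter components by repeating their initial portion until all of them
    match the longest duration (cf. Fig. 2)."""
    comps = [np.concatenate([np.full(max(int(round(t / dt)), 1), a)
                             for t, a in zip(tj, aj)])
             for tj, aj in zip(times, amps)]
    n = max(len(c) for c in comps)
    return np.vstack([np.concatenate([c, c[:n - len(c)]]) if len(c) < n else c
                      for c in comps])

# Hypothetical signal with p = 2 components and k = 3 stages (values invented).
xi = np.array([1.0, 0.5, 0.5,  0.4, 0.6, 0.5,     # stage times (s)
               0.0, 1.0, -1.0, 0.5, -0.5, 0.0])   # stage amplitudes
times, amps = decode(xi, k=3, p=2)
u = equalize(times, amps)
print(u.shape)  # (2, 20): the 1.5 s component was padded up to 2.0 s
```

Because each individual is a flat vector, the element-wise population operations of Section 4.2 (sums, scalar multiplications) apply directly, with the result clipped back into the bounds of Eq. (19).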


Fig. 3. Optimal estimation framework of 𝜹̂+, given 𝚪̂−, x̃p, ỹp, and u.

4.2. Optimization algorithm

It is not in the scope of this work to discuss which optimization methodology would be best suited to solve equation (20). However, a new particle swarm-like optimization algorithm (PSLO), based on [41] and [35], is presented here that is capable of solving the problem satisfactorily. The population of the PSLO consists of z ∈ ℕ signals 𝚵, as such:

Pop = [𝚵1, 𝚵2, …, 𝚵z]                                              (21)

where the fitness of a given signal id is represented by fΞid, as shown in equation (20). Consider Ξid,best ∈ ℝ^(2pk) the historical value of the id-th individual of Pop that presents the best fitness with respect to (20), and ΞBEST the best evaluated fitness among all individuals Ξid,best. The search process is a mix of global and local operators aimed at convergence and/or divergence of the individuals. While convergence operators produce fast solutions, they may be sensitive to local minima. Divergence operators usually enlarge the search space, improving the chances of finding several local minima; however, they are time consuming. To deal with this problem, reference [35] presented a probabilistic multi-operator methodology. Based on that, this work adopts a simpler approach where four operators able to pursue both local and global minima are used. The convergence criterion is also multi-parametric: each fitness term has its own user-defined index, i.e. ε1, ε2, ε3 ∈ ℝ, and there is also a maximum number of iterations Imax ∈ ℕ. The algorithm is then defined to have converged by the flag

fPop = 0   (22)

which happens when the following condition is satisfied

((κ1 Θ ≤ ε1) ∪ (κ2 f_o ≤ ε2) ∪ (κ3 f_δ̂ ≤ ε3)) ∪ (Ii ≥ Imax)   (23)

where Ii is the current iteration. Thus, at each iteration Ii, until the convergence criterion is met, one of the following operators is chosen to evolve the population.

4.2.1. Particle swarm operator

Consider the displacement of a given individual Ξid over the solution space with velocity V ∈ ℝ^(2pk) defined as

Vid = cin Vid + c1 rnd(0, 1) · (Ξid,best − Ξid) + c2 rnd(0, 1) · (ΞBEST − Ξid)   (24)

where cin ∈ ℝ is the inertia parameter, c1, c2 ∈ ℝ are the step-size coefficients of the respective terms, and rnd(0, 1) ∈ [0, 1] is a random number. The population update is given by

Ξid = Ξid + wid · Vid, ∀id ∈ {1, 2, …, z}   (25)

where wid ∈ ℝ is an innovation parameter. For all simulations in this work it is assumed that cin = 0.2, c1 = 0.5, and c2 = 0.3.
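A minimal NumPy sketch of this operator (names are illustrative, not taken from the paper's code; the innovation weight wid is kept as an explicit argument):

```python
import numpy as np

def pso_operator(xi, xi_best, xi_global_best, v,
                 c_in=0.2, c1=0.5, c2=0.3, w=1.0,
                 rng=np.random.default_rng()):
    """Particle-swarm update of one individual, following Eqs. (24)-(25).

    All arrays have shape (2*p*k,): the flattened signal parameters.
    Returns the displaced individual and its updated velocity.
    """
    v_new = (c_in * v
             + c1 * rng.random() * (xi_best - xi)          # cognitive pull
             + c2 * rng.random() * (xi_global_best - xi))  # social pull
    return xi + w * v_new, v_new
```

Note that a single rnd(0, 1) multiplies each difference term, as in equation (24), rather than a per-component random vector.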

4.2.2. Uniform crossover

From a given parent Ξid,best, a second one, Ξn,best with id ≠ n, is randomly selected. They generate a new individual Ξ0 from a uniform mixture of parameters, such that for each parameter d

Ξ0_d := Ξ_d^(id,best) if rnd(0, 1) ≤ 0.5; Ξ_d^(n,best) otherwise   (26)

where a new rnd(0, 1) is drawn for each parameter d.

4.2.3. Random mutation

This operator generates a new individual Ξ0 from a random change in the parameters of the worst individual Ξid,best present in Pop:

Ξ0_d := rnd(0.9, 1.1) × Ξ_d^(id,best)   (27)

where a new random number is drawn for each parameter d.

4.2.4. Line recombination

From a given parent Ξid,best, a second one, Ξn,best with id ≠ n, is randomly selected. This operator evaluates a new individual Ξ0 as a linear combination of the parents:

Ξ0 := Ξid,best + rnd(−1.5, 1.5) × (Ξn,best − Ξid,best)   (28)

4.3. Final algorithm

The methodology for finding a persistently-exciting signal for Optimal Parameter Estimation is divided into two procedures, as generally shown in Fig. 3. In the outer layer, represented by (1), an optimization algorithm is responsible for creating and evolving the signal-based population. The evaluation of a given signal Ξ starts by presenting it to the perturbed system with parameters Γ̃p, yielding x̃p and ỹp. The output is then presented to the penalty evaluation function Θ(·). If the signal is unfeasible, i.e. Ψ(·) > 0, the current Ξ is heavily penalized and no further action is necessary. Otherwise, the inner-loop optimization process, represented by f_o(·), returns the best set of parameters Γ̂+ that reproduces Γ̃p, starting from the benchmark set of parameters Γ̂−. Finally, the parameter recoverability metric f_δ̂(·) is evaluated and the final fitness fΞ is computed. This process finishes according to the convergence criteria defined by equation (23) and is controlled by a flag inside block 1.

This process is shown in detail in Algorithms 1, 2 and 3. The most important part of the proposed approach is the signal evaluation procedure depicted in Algorithm 1. Given a signal representation Ξid, it starts at line 2 by evaluating f̃, and then Θ at line 3, in order to check the operational limits. If the system is operating inside the desirable region, the inner optimization at line 5 searches for the best parameter set Γ̂+ that minimizes the output error, using the algorithm given by equation (16). Line 6 evaluates the difference between the estimated parameters Γ̂+ and Γ̃p in order to check whether the signal has enough persistence of excitation. Finally, at line 10, the total fitness regarding all desirable criteria is computed.

Algorithm 1 Signal Evaluation Procedure.
1:  function EvaluateIndividual(Ξ)
2:    Evaluate: f̃(· ∣ Ξ)
3:    Evaluate: Θ(· ∣ f̃, Ξ)
4:    if Θ ⊂ limits then
5:      Evaluate: f_o(· ∣ Θ)
6:      Evaluate: f_δ̂(· ∣ f_o)
7:    else
8:      Set: κ2 f_o + κ3 f_δ̂ = 0
9:    end if
10:   Evaluate: fΞ = κ1 Θ + κ2 f_o + κ3 f_δ̂
11:   return: fΞ
12: end function

This evaluation procedure is used throughout the optimal persistently-exciting signal generation depicted in Algorithm 2. It is first called at line 3, just after the population is initialized, and again at line 9 to evaluate the new individuals generated by the current evolutionary operator. Line 5 shows the flag that controls the convergence process given in equation (23); this flag is updated at line 15. A final observation is that only one operator, chosen at line 6, is used per generation (lines 7–14). The result of this algorithm is a persistently-exciting optimal signal Ξ⊕ that is capable of keeping the system inside the desirable constraints while allowing the system parameters to be correctly estimated.

Algorithm 2 Optimal Persistently-Exciting Signal Generation.
Input: x0, Γ̂−, q.
1:  Initialization: initialize Pop with z signals and set fPop = 1
2:  for each Ξid ∈ Pop do
3:    fΞid = EvaluateIndividual(Ξid)
4:  end for
5:  while fPop == 1 do
6:    Select Evolutionary Operator OP
7:    for each Ξid ∈ Pop do
8:      Ξ0 = OP(Ξid)
9:      fΞ0 = EvaluateIndividual(Ξ0)
10:     if fΞ0 < fΞid then
11:       Ξid = Ξ0
12:       fΞid = fΞ0
13:     end if
14:   end for
15:   Update fPop
16: end while
17: return: Ξ⊕ = ΞBEST

The utilization of Ξ⊕ over an unknown system with parameters Γ̃p is done through Algorithm 3. It starts by applying the sub-optimal signal Ξ⊕ to the system whose parameters are to be estimated. The result of this simulation is used at line 2 by the inner optimization algorithm, whose outcome is the optimal parameter set Γ̂+.

Algorithm 3 Optimal Parameter Estimation.
Input: x0, Ξ⊕, Γ.
1: Initialization: evaluate f̃(x0, Γ ∣ ΞBEST)
2: Evaluate: f_o(· ∣ f̃, ΞBEST)
3: return: Γ̂+
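Algorithms 1 and 2 reduce to a penalty-gated fitness and a greedy replacement loop. A sketch follows; the callables `f_tilde`, `theta`, `f_o` and `f_delta` are hypothetical stand-ins for the paper's simulation, penalty, inner-optimization and recoverability metrics, and the convergence flag is simplified to a fixed iteration budget:

```python
import random

def evaluate_individual(xi, f_tilde, theta, f_o, f_delta,
                        k1=100.0, k2=10.0, k3=1.0):
    """Algorithm 1: fitness of one signal, gated by the constraint penalty."""
    sim = f_tilde(xi)            # simulate the perturbed system under signal xi
    pen = theta(sim, xi)         # constraint-violation penalty
    if pen > 0:                  # unfeasible: the inner terms are set to zero
        return k1 * pen
    inner = f_o(sim)             # inner-layer output-error optimization
    recov = f_delta(inner)       # parameter-recoverability metric
    return k1 * pen + k2 * inner + k3 * recov

def evolve(pop, operators, evaluate, max_iter=100):
    """Algorithm 2: one randomly chosen operator per generation, greedy replacement."""
    fit = [evaluate(xi) for xi in pop]
    for _ in range(max_iter):
        op = random.choice(operators)
        for i, xi in enumerate(pop):
            child = op(xi)
            f_child = evaluate(child)
            if f_child < fit[i]:            # keep the child only if it improves
                pop[i], fit[i] = child, f_child
    best = min(range(len(pop)), key=fit.__getitem__)
    return pop[best], fit[best]
```

The greedy per-individual replacement mirrors lines 10–13 of Algorithm 2: a child only displaces its parent when it improves the fitness, so the best individual is monotonically non-increasing in cost.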

5. Results using a differential mobile robot

5.1. System description

For a tutorial Case, an example using the Very Small Size Soccer (VSSS) differential drive mobile robot shown in Fig. 4 is used as the real system with parameters Γ. The robot has an ATMEGA328P microcontroller, two DC motors, an SN754410 driver which delivers −7.4 V to 7.4 V through a PWM controller, and a single-chip 2.4 GHz transceiver nRF2401 to communicate with the PC. Two rectangular markers are used by the visual processing system. The computational vision system uses a Logitech HD PRO Webcam and OpenCV 3.2 to evaluate the robot's linear and angular positions and velocities.

5.2. Mathematical model

Considering the traditional dynamical modeling of a differential drive robot [42] along with the motor dynamics, the state variables are represented by

x(t) = [px(t), py(t), θ(t), v(t), ω(t), φ̇r(t), φ̇l(t)]T   (29)

where px(t), py(t) and θ(t) are the north, west and angular positions, v(t) and ω(t) are the robot's linear and angular velocities, and φ̇r(t) and φ̇l(t) are the angular velocities of the right and left wheels. The input variable is u = [Er(t), El(t)]T, where Er(t) and El(t) are the voltage levels applied to the motors that drive the right and left wheels, respectively. Since the dynamics of the robot and motors are coupled, the input variables are the voltage levels over the system rather than direct torques. Therefore, the parametric model is given by

\dot{x}(t) =
\begin{bmatrix}
v(t)\cos(\theta(t)) \\
v(t)\sin(\theta(t)) \\
\omega(t) \\
\left(\ddot{\phi}_r(t) + \ddot{\phi}_l(t)\right) r/2 \\
\left(\ddot{\phi}_r(t) - \ddot{\phi}_l(t)\right) r/b \\
p_2 \ddot{\phi}_l(t) + p_3 \dot{\phi}_r(t) \\
p_2 \ddot{\phi}_r(t) + p_3 \dot{\phi}_l(t)
\end{bmatrix}
+
\begin{bmatrix}
0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ p_1 & 0 \\ 0 & p_1
\end{bmatrix}
\begin{bmatrix} E_r(t) \\ E_l(t) \end{bmatrix}
+ \vartheta(t)   (30)

y(t) = x(t)

with

p_1 = \frac{K_t}{R(2a_1 + I_s)}, \quad
p_2 = -\frac{a_2}{2a_1 + I_s}, \quad
p_3 = -\frac{K_t K_w / R + \nu}{2a_1 + I_s}   (31)

and

a_1 = \frac{m r^2}{8} + \frac{I_{zz} r^2}{2 b^2} + \frac{I_w}{2}, \quad
a_2 = \frac{m r^2}{4} - \frac{I_{zz} r^2}{b^2}   (32)

where Izz is the moment of inertia around the z axis, Iw is the wheel moment of inertia, Is is the shaft moment of inertia, Kt and Kw are the torque and voltage constants, R is the armature resistance, and ν is the viscous friction coefficient. Finally, r is the wheel radius and b is the distance between each wheel and the center of gravity. As for the Very Small Size Soccer (VSSS) robot, the values of Iw and Is are negligible (<10⁻⁸) and are therefore neglected.
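The model (30)–(32) can be simulated directly; a sketch, assuming Iw = Is = 0 as stated above and the nominal values of Table 1. Since the wheel accelerations appear on both sides of the last two rows, they are first recovered from a 2×2 linear system (the names and the explicit solve are illustrative choices, not taken from the paper):

```python
import numpy as np

def lumped_params(m=0.450, r=0.021, b=0.078, R=1.500,
                  nu=0.005, Kt=0.0005, Izz=0.0005, Kw=5.00):
    """p1, p2, p3 of Eqs. (31)-(32) with the wheel and shaft inertias neglected."""
    a1 = m * r**2 / 8 + Izz * r**2 / (2 * b**2)
    a2 = m * r**2 / 4 - Izz * r**2 / b**2
    return (Kt / (R * 2 * a1),
            -a2 / (2 * a1),
            -(Kt * Kw / R + nu) / (2 * a1))

def xdot(x, u, p, r=0.021, b=0.078):
    """State derivative of Eq. (30); x = [px, py, th, v, w, dphi_r, dphi_l]."""
    p1, p2, p3 = p
    _, _, th, v, w, dpr, dpl = x
    Er, El = u
    # last two rows are implicit: ddphi_r - p2*ddphi_l = p3*dphi_r + p1*Er, etc.
    A = np.array([[1.0, -p2], [-p2, 1.0]])
    rhs = np.array([p3 * dpr + p1 * Er, p3 * dpl + p1 * El])
    ddpr, ddpl = np.linalg.solve(A, rhs)
    return np.array([v * np.cos(th), v * np.sin(th), w,
                     (ddpr + ddpl) * r / 2, (ddpr - ddpl) * r / b,
                     ddpr, ddpl])
```

A forward-Euler step, x += dt * xdot(x, u, p), is then enough to reproduce open-loop trajectories of the kind shown later in Fig. 7.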


Fig. 4. Very Small Size Soccer differential mobile robot and its schematic diagram.

Table 1
Initial a priori parameter estimation Γ̂− for the real system.

Parameter | Value  | Unit
m         | 0.450  | kg
r         | 0.021  | m
b         | 0.078  | m
R         | 1.500  | Ω
ν         | 0.005  | kg/ms
Kt        | 0.0005 | –
Izz       | 0.0005 | kgm²
Kw        | 5.00   | –

Fig. 5. Details of the fitness function (fΞ) optimization process.

Fig. 6. Final optimal signal u⊕.

5.3. Optimal Excitation Signal Generation

The ultimate goal of the proposed methodology is to estimate a set of parameters Γ̂+ that best approximates a gray box model to the constrained real system, whose parameter set Γ is, for this sample Case, given by

Γ = [m, r, b, R, ν, Kt, Izz, Kw]T   (33)

Fig. 7. Real path and velocity direction using u⊕ over (̂ 𝚪+ ).

Finding a set of parameters for one signal does not ensure that the estimated model will match the real system for different signals. Hence, it is necessary to have a benchmark system with parameters Γ̃p where, for any given signal, it is possible to compare Γ̃p with Γ̂+. The design of the benchmark starts by creating a white box model with parameters Γ̂−. By considering real measurements (m, r, b), manufacturer data (R), physical analysis (Izz) [36] and heuristics (ν, Kt, Kw), the a priori set of parameters Γ̂− was evaluated as shown in Table 1.

Using Γ̂− it is now possible to create the benchmark system by considering Γ̃p = Γ̂− + δ̃p, where δ̃p is a random variable that, for this tutorial Case, is given by a uniform distribution ranging over 40% of the maximum and minimum limits around Γ̂−. Using this heuristic, Γ̃p was given by

Γ̃p = [0.5251, 0.0163, 0.0917, 1.4155, 0.0003, 0.0004, 0.0004, 3.6145]   (34)

where the maximum and minimum values of each parameter are given by

Γ̄ = [0.6, 0.04, 0.20, 3.0, 0.01, 0.001, 0.001, 6.00]
Γ̲ = [0.1, 0.01, 0.01, 0.5, 0.00, 0.000, 0.000, 2.00]

The superior and inferior limits over the state variables are given by

x̄ = [0.5, 0.5, π, 1, 20, 20, 20]
x̲ = [−0.5, −0.5, −π, −1, −20, −20, −20]

This assumes that the space and velocity available to perform the experiment are bounded. The objective is, given Γ̂−, to find an optimal signal u⊕ capable of producing an a posteriori estimate Γ̂+ that matches the values shown in equation (34), i.e. Γ̃p = Γ̂+.

The optimization process assumes that the initial states of all variables are zero and that the weighting constants are given by κ1 = 100, κ2 = 10, κ3 = 1, κt = 10000, and W = I7×7. The convergence of the optimization process is shown in Fig. 5, where it is possible to see that, until around iteration 18, the best available signal was still not feasible (Ψ > 0), meaning that some constraint was being violated. After that point, the fitness function started to consider the error between parameters (f_δ̂).
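The benchmark-generation heuristic can be realized as below. This is one plausible reading of the 40% rule, with each component of δ̃p drawn uniformly within 40% of the distance from Γ̂− to the corresponding bound; the paper does not spell the sampling out, so treat the details as assumptions:

```python
import numpy as np

def make_benchmark(gamma_minus, gamma_max, gamma_min, frac=0.4,
                   rng=np.random.default_rng()):
    """Gamma_p = Gamma_minus + delta_p, with delta_p uniform inside
    +/- frac of the distance to the respective bound, clipped to the limits."""
    up = frac * (gamma_max - gamma_minus)    # allowed upward excursion
    down = frac * (gamma_minus - gamma_min)  # allowed downward excursion
    delta = rng.uniform(-down, up)           # element-wise bounds
    return np.clip(gamma_minus + delta, gamma_min, gamma_max)
```

Any such draw stays inside the global parameter limits, which is what the feasibility constraints of the formulation require.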


Fig. 8. Real VSSS at 1, 3, 5 and 9.24 s.

Table 2
A priori Γ̂− and a posteriori Γ̂+ estimations using u⊕ over the real system.

Parameter | m     | r     | b     | R    | ν     | Kt     | Izz    | Kw
Γ̂−        | 0.450 | 0.021 | 0.078 | 1.50 | 0.005 | 0.0005 | 0.0005 | 5.00
Γ̂+        | 0.450 | 0.019 | 0.061 | 1.55 | 0.003 | 0.0008 | 0.0000 | 5.02

A deeper analysis indicates that, by using randomly generated signals (the initial population of 20 individuals), it was not possible to find any solution within the desired operating limits. Another indication is that, even for a signal that keeps the robot inside the desirable operating limits, i.e. the one given by the best individual after the 20th iteration, the normalized error between the perturbed Γ̃p and estimated Γ̂+ parameters was 12%. At iteration 100, the same error was 0.11%. This demonstrates that different signals provide different excitations and estimation results. Finally, the time and volume used during the experiment decreased by 29.4% and 38.4% from their initially valid values, ending at 12 s and 0.016 m², respectively. Therefore, it is possible to note that each individual multi-parametric fitness term was optimized by the proposed methodology. The final optimal excitation signal u⊕ and its realization over the benchmark system are shown in Fig. 6.

5.4. Parameter estimation using a real Case scenario

The signal u⊕ shown in Fig. 6 was used to excite the VSSS robot. Its position realization is shown in Fig. 7, and Fig. 8 shows the robot at 1, 3, 5 and 9.24 s. This part of the experiment corresponds to the diagram presented in Fig. 1a. The output of this step is a vector composed by (ỹp, x̃p).

Using u⊕ along with (ỹp, x̃p) and Γ̂−, the OPE algorithm shown in section 3.2 is used to find the best a posteriori parameter set Γ̂+. The initial estimation and the final optimal set are shown in Table 2, and Fig. 9 shows the output histories of both the estimated (Γ+) and real (Γ) systems. An important remark is the quality of the results, considering that the experiment was performed by exciting the system in open loop.

5.5. Comparative performance

In order to evaluate the proposed approach, four tests were carried out. For all cases, the nonlinear differential mobile robot model presented in equation (30) was used, with Γ̂+ assuming the values shown in Table 2. This strategy was adopted because this model is a very good approximation of the real system, making it possible to compare the current parameter estimation to Γ̂+. For the initial estimation, the same Γ̃p shown in equation (34) and the same 20 signals adopted as the initial population in the tutorial Case of section 5.3 were used (considering that the initial signals influence the final result).

The results are validated by 5 metrics. Penalty is the integral of the state variables that violated any of the formulation constraints. E. Penalty is the integral of the state variables that violated the spatial constraints; although this limit was not considered in some of the cases' formulations, it is an important metric and demonstrates the originality and importance of the proposed SESGOPE methodology. P. Error is the quadratic error between the currently estimated set of parameters and Γ̂+. S. Time is the signal time, i.e. the time necessary to execute the experiment. Finally, C. Time is the algorithm's convergence time to generate the signal and estimate the parameters.

Case 1. For the first comparison, an approach similar to the one presented in Refs. [15,23,24] was used to evaluate the optimal input signal and parameter estimation. A random APRBS Ξ is generated and converted to a differentiable signal using the strategy adopted in Ref. [23]. The elements of Ξ, along with the parameter set Γ, are used together to minimize the fitness function. In this scenario, the optimization problem was formulated in accordance with [24], considering constraints only over body-frame related variables, i.e. angular and linear velocities and accelerations. Time and space constraints were neglected. The optimization algorithm, however, is the same used in section 3.2, from Ref. [32]. The best result is shown in Table 3 where, as expected, the optimization was able to find a feasible solution considering the body-frame variables; however, it did not result in a good parameter estimation.

Case 2. As it may be considered unfair to compare time and spatial violations when those variables were neglected in the former Case, a second simulation case was generated adopting the same methodology described in Case 1 but including space as a constraint and time as part of the objective function. The results of this Case 2 scenario are also shown in Table 3. The optimization was not able to find a feasible solution considering the inertial-frame variables; however, the simulation time and parameter error were lower than in the previous case.


Fig. 9. Output history of both the estimated (Γ+) and real (Γ) systems in open loop.

Table 3
Comparison among the Optimal Input Design and Parameter Estimation methodologies.

Method  | Penalty | E. Penalty | P. Error | C. Time (s) | S. Time (s)
Case 1  | 0       | 903.36     | 49.53    | 0.02        | 39.4
Case 2  | 547.54  | 547.54     | 5.33     | 0.18        | 23.99
Case 3  | 295.32  | 295.32     | 0.25     | 20.57       | 15.23
SESGOPE | 0       | 0          | 1e−2     | 48.89       | 9.25

Case 3. Considering that solving both the input signal design and the parameter estimation in a single optimization layer is not a proper approach, the double-layer strategy proposed in this work was used. However, as a way to demonstrate the efficiency of having a different optimization strategy at each level, the non-differentiable signal was also converted using the same methodology presented in Case 1, and the optimization algorithm of Ref. [32] was used at both the inner and outer layers. Therefore, each change in the signal demands running an entire parameter estimation. The results of this Case 3 scenario are also shown in Table 3. While this strategy reduced the spatial penalty, parameter error and simulation time, it is clear that converting discrete variables into continuous ones is not a good optimization approach, especially for highly nonlinear systems.

Case SESGOPE. The SESGOPE methodology was also tested and is shown in Table 3. Although the SESGOPE approach had the highest convergence time, it presented better performance indexes and was capable of correctly estimating the parameters. This corroborates several reports asserting that turning discrete variables into continuous ones generates fast but inaccurate results. Using probabilistic optimization methods, it is possible to better explore the solution space and find more efficient solutions.

Finally, Case 1 is the worst scenario in terms of parameter estimation. The modifications proposed in Cases 2 and 3 also increased the estimation performance; however, using the proposed hypothesis along with the most suitable optimization methodology at each layer showed superior results. Cases 1 to 3 presented difficulties in finding a feasible signal considering spatial constraints: as they use a gradient-based approach, they are all subject to local minima.

6. Conclusion

This paper presented a dual-layer optimization approach to deal with Optimal Input Design (OID) for Optimal Parameter Estimation (OPE) regarding a multi-parametric fitness function. The objective was to determine the best signal Ξ⊕ capable of finding the optimal parameter estimate Γ̂+ while keeping the experiment within a restricted time and state-space region. Any other constraint, e.g. energetic or economical, could also be easily incorporated into the problem. Two methodologies were applied: one to generate the optimal excitation signal and another to perform the Optimal Parameter Estimation. The SBIP and PSO-like algorithms were used with highly


effective results. As for the parameter estimation, a hypothesis was proposed stating that a perturbed set of the initial parameters preserves the system dynamics. Hence, if a given signal minimizes the error between the initial and the perturbed parameter sets, the same signal can also be used to estimate the parameters of a real system. A practical tutorial Case was used to demonstrate the method and check the validity of the hypothesis; a larger set of results, demonstrating the method's potential, will be shown in part 2 of this work. The results show that the method is able to deal with constraints such as time and space which, to the best of our knowledge, is a first in the literature. Moreover, it was also shown that the method is able to estimate parameters of nonlinear systems with precision. Thus, one major impact of this work is that it is now possible to design reliable excitation signals for a series of experiments that demand space and time constraints. The methodology presented here fits MIMO linear and nonlinear parametric models of asymptotically stable open-loop systems well; however, it can easily be adapted to unstable systems inside a closed control loop. As future work, the following studies should be interesting:

• Analyze the algorithm's sensitivity to changes in the initial White Box Model (WBM) parameter estimation and perturbation deviation.
• Convergence analysis and the use of the Lagrange multipliers from the interior-point algorithm to mitigate poor initial estimation.
• Changes in the methodology to work in closed-loop systems.
• Development of a probabilistic method considering uncertainties and time-variant parameters.
• Analysis of systems with larger sets of parameters.

References [1] Ren X, Rad AB, Chan P, Lo WL. Identification and control of continuous-time nonlinear systems via dynamic neural networks. IEEE Trans Ind Electron 2003;50(3):478–86. [2] Åström KJ, Wittenmark B. Adaptive control. Courier Corporation; 2013. [3] Zhang D, Shi P, Wang Q-G, Yu L. Analysis and synthesis of networked control systems: a survey of recent advances and challenges. ISA Trans 2017;66: 376–92. [4] Isermann R. Identification of dynamic systems, mechatronic systems: fundamentals. 2005. p. 293–332. [5] Ljung L. System identification. In: Signal analysis and prediction. Springer; 1998. p. 163–73. [6] Elenchezhiyan JM. State estimation of stochastic non-linear hybrid dynamic system using an interacting multiple model algorithm. ISA Trans 2015;58: 520–32. [7] Unbehauen H, Rao G. Continuous-time approaches to system identification—a survey. Automatica 1990;26(1):23–35. [8] Chen L, Han L, Huang B, Liu F. Parameter estimation for a dual-rate system with time delay. ISA Trans 2014;53(5):1368–76. [9] Yu F, Mao Z, Yuan P, He D, Jia M. Recursive parameter estimation for hammerstein-wiener systems using modified ekf algorithm. ISA Trans 2017;70:104–15. [10] R. Cui, Y. Wei, S. Cheng, Y. Wang, An innovative parameter estimation for fractional order systems with impulse noise, ISA Trans. [11] Isermann R, Münchhof M. Neural networks and lookup tables for identification. Ident Dyn Syst 2011:501–37. [12] Nagy-Kiss A, Schutz G, Ragot J. Parameter estimation for uncertain systems based on fault diagnosis using takagi–sugeno model. ISA Trans 2015;56:65–74. [13] Liu Y, Ren X-M, Rad A. Robust estimator based on information potential and its application to simultaneous localization and mapping. Control Theory & Appl 2011;28(7):901–6.


[14] Isermann R, Münchhof M. Practical aspects of parameter estimation. Ident Dyn Syst 2011:565–602. [15] Goodwin G. Optimal input signals for nonlinear-system identification. In: Proceedings of the institution of electrical engineers. vol. 118. IET; 1971. p. 922–6. [16] Bombois X, Scorletti G, Gevers M, Van den Hof PM, Hildebrand R. Least costly identification experiment for control. Automatica 2006;42(10):1651–62. [17] Mehra R. Optimal input signals for parameter estimation in dynamic systems–survey and new results. IEEE Trans Automat Control 1974;19(6):753–68. [18] Deflorian M, Zaglauer S. Design of experiments for nonlinear dynamic system identification. IFAC Proc Vol 2011;44(1):13179–84. [19] Gevers M, Miš kovic´ L, Bonvin D, Karimi A. Identification of multi-input systems: variance analysis and input design issues. Automatica 2006;42(4):559–72. [20] Miš kovic´ L, Karimi A, Bonvin D, Gevers M. Closed-loop identification of multivariable systems: with or without excitation of all references? Automatica 2008;44(8):2048–56. [21] Fang K, Shenton A. Constrained optimal test signal design for improved prediction error. IEEE Trans Automat Sci Eng 2014;11(4):1191–202. [22] Klein V. Optimal Input Design for aircraft parameter estimation using dynamicprogramming principles. In: 17th Atmospheric flight mechanics conference. 1990. p. 2801. [23] Jauberthie C, Bournonville F, Coton P, Rendell F. Optimal Input Design for aircraft parameter estimation. Aero Sci Technol 2006;10(4):331–7. [24] G. Licitra, A. Bürger, P. Williams, R. Ruiterkamp, M. Diehl, Optimal Input Design for autonomous aircraft, arXiv preprint arXiv:1711.10007. [25] Fernandes D d A, Sørensen AJ, Pettersen KY, Donha DC. Output feedback motion control system for observation class rovs based on a high-gain state observer: theoretical and experimental results. Control Eng Pract 2015;39:90–102. [26] Liu M, Tso S, Cheng Y. 
An extended nonlinear primal-dual interior-point algorithm for reactive-power optimization of large-scale power systems with discrete control variables. IEEE Trans Power Syst 2002;17(4):982–91. [27] Poubel R, De Oliveira E, Manso L, Honório L, Oliveira L. Tree searching heuristic algorithm for multi-stage transmission planning considering security constraints via genetic algorithm. Electr Power Syst Res 2017;142:290–7. [28] Wang C, Tang T. Recursive least squares estimation algorithm applied to a class of linear-in-parameters output error moving average systems. Appl Math Lett 2014;29:36–41. [29] Ding F. Coupled-least-squares identification for multivariable systems. IET Control Theory & Appl 2013;7(1):68–79. [30] Hu H, Ding F. An iterative least squares estimation algorithm for controlled moving average systems based on matrix decomposition. Appl Math Lett 2012;25(12):2332–8. [31] Mouni E, Tnani S, Champenois G. Synchronous generator modelling and parameters estimation using least squares method. Simulat Model Pract Theory 2008;16(6):678–89. [32] Oliveira EJ, Oliveira LW, Pereira J, Honório LM, Silva IC, Marcato A. An optimal power flow based on safety barrier interior point method. Int J Electr Power Energy Syst 2015;64:977–85. [33] Chianeh HA, Stigter J, Keesman KJ. Optimal Input Design for parameter estimation in a single and double tank system through direct control of parametric output sensitivities. J Process Control 2011;21(1):111–8. [34] Kou P, Zhou J, Wang C, Xiao H, Zhang H, Li C. Parameters identification of nonlinear state space model of synchronous generator. Eng Appl Artif Intell 2011;24(7):1227–37. [35] da Silva AML, Freire MR, Honório LM. Transmission expansion planning optimization by adaptive multi-operator evolutionary algorithms. Electr Power Syst Res 2016;133:173–81. [36] Ferreira BM, Matos AC, Cruz NA. Modeling and control of trimares auv. In: 12th international conference on Autonomous robot systems and competitions. 2012. p. 57–62. 
[37] Kirk DE. Optimal control theory: an introduction. Courier Corporation; 2012. [38] Honorio L, da Silva AL, Barbosa D, Delboni L. Solving optimal power flow problems using a probabilistic 𝛼 -constrained evolutionary approach. IET Gener, Transm Distrib 2010;4(6):674–82. [39] Takahama T, Sakai S. Constrained optimization by the 𝜀 constrained differential evolution with an archive and gradient-based mutation. In: Evolutionary computation (CEC), 2010 IEEE congress on. IEEE; 2010. p. 1–9. [40] Klosowski JT, Held M, Mitchell JS, Sowizral H, Zikan K. Efficient collision detection using bounding volume hierarchies of k-dops. IEEE Trans Visual Comput Graph 1998;4(1):21–36. [41] Shi Y, et al. Particle swarm optimization: developments, applications and resources. In: Evolutionary computation, 2001. Proceedings of the 2001 Congress on, vol. 1. IEEE; 2001. p. 81–6. [42] Fahimi F. Autonomous robots: modeling, path planning, and control, vol. 107. Springer Science & Business Media; 2008.
