Parameter identification of conservative Hamiltonian systems using first integrals

Roger Miranda-Colorado

CONACyT-Instituto Politécnico Nacional-CITEDI, Av. Instituto Politécnico Nacional No. 1310, Nueva Tijuana, Tijuana, Baja California 22435, Mexico

Article history: Received 5 June 2019; Revised 1 October 2019; Accepted 20 October 2019.

Keywords: Parameter identification; Hamiltonian system; First integral; Cuckoo search algorithm; Dirty derivative; Sliding modes

Abstract: This paper presents a methodology for nonlinear parameter identification of conservative Hamiltonian systems. In the proposed approach, the system's Hamiltonian is used as a first integral. The time derivative of this first integral is utilized to construct a signal, termed the surface variable, which depends on the system's parameters. Parameter convergence is then ensured by driving this surface variable towards zero, employing the parameter estimates as control inputs. This procedure is approached by treating the parameter identification problem as an optimization one; hence, different cost functions are defined to obtain various parameter updating laws. In addition, an automatic tuning methodology based on a meta-heuristic algorithm is proposed for tuning the adaptation gains of the new parameter updating laws. The proposed scheme shows that, when the surface variable reaches zero, the parameter estimates converge to the real ones. Furthermore, better estimation results are obtained when applying the automatic tuning scheme. Numerous numerical simulations validate the proposed parameter identification methodology, including the cases where the unknown system variables are estimated through the dirty derivative and a sliding-mode differentiator.

1. Introduction

Parameter identification methodologies are essential techniques used to obtain a mathematical model of a dynamic system. These methodologies use input-output data to construct a parameter identification law that allows computing a set of estimates of the system's parameters. In the field of control theory, the data provided by a parameter identification technique may help to analyze or improve the performance of a given system. However, the efficiency of a parameter identification methodology may degrade when affected by external factors, including noisy measurements, modeling simplifications, and state estimation errors, among others.

Recent works on parameter identification focus on identifying nonlinear systems. The information provided by the parameter identification technique can then be used in different areas involving model-based control, fault detection, filtering, and high-performance tasks, among others [1-4].

Usually, a parameter identification technique assumes knowledge of the system's dynamical model. This mathematical model, also known as the dynamical equations of motion, can be obtained through the Lagrangian formulation or by using the Hamiltonian formalism [7-9]. A particular class of dynamical system is the conservative one, i.e., a system whose equations of motion admit an autonomous single-valued first integral [10,11]. Problems related to this type of system include



applications in electrical engineering, nonlinear mechanics, quantum mechanics, perturbation theory, chemistry, biology, and other fields. This fact highlights the importance of computing precise models for this type of dynamic system, which is the aim of the present work.

Different approaches are utilized for applying a parameter identification technique. These approaches include on-line or off-line parameter identification [2,5] and open-loop or closed-loop parameter identification [6]. If the process to be identified cannot be stopped, an on-line methodology is an appropriate choice. Otherwise, off-line techniques are preferable because they are more robust against disturbances, which allows them to generate a better set of parameter estimates. There are fundamental differences between off-line and on-line parameter identification techniques. The on-line techniques use the data provided by the system at each time instant to generate a new set of parameter estimates, while the off-line methodologies utilize a batch of data to compute the parameter estimates. Typically, the off-line techniques contain a pre-processing stage that makes them more robust against disturbances. Besides, the on-line techniques are affected by the correlation between the input and the output of the system [12].

Existing works on parameter identification report the use of different mathematical tools to construct a parameter updating law. Reference [13] applies a batch least-squares (LS) algorithm to estimate the inertial and friction coefficients of a mechatronic system. Other methodologies include the use of an off-line LS algorithm [12,14,15], recursive LS algorithms [16], the Kalman filter [17], optimization algorithms [18], sliding modes [19], support vector machines [20], adaptive techniques [21], algebraic techniques [2,22], and gradient-based techniques [23], among others [10,24–26].

Most of the existing parameter identification techniques are applied by assuming the system to be linearly parametrizable. This assumption implies that the system can be expressed in the form z(t) = θ^T φ(t), with φ(t) being the so-called regression vector, z(t) a known signal, and θ the vector of the system's parameters. Hence, if the linear parametrization assumption is not satisfied, the performance of the parameter identification technique may degrade or even lead to parameter drift [27]. In such cases, a multi-extremal optimization problem must be solved to compute the parameter estimates [28,29].

Another critical step in parameter identification concerns the correct tuning of the parameter adaptation gains [23]. If these gains are not correctly chosen, the algorithm may have a slow convergence rate, and the parameter estimates may exhibit oscillatory behavior or even diverge. The adaptation gains can be tuned heuristically, or an optimization technique can be used to obtain the optimal adaptation gain values [30,31]. In an optimization technique, the system's inputs are adjusted in order to minimize or maximize a given performance index [32]. Some examples of gain tuning using optimization-based methods include the use of fuzzy logic or neural networks [33,34], genetic algorithms [35], particle swarm optimization [36], and the cuckoo search algorithm (CSA) [37,38], among others. However, the procedure for tuning most of the algorithms mentioned above may be cumbersome.

The previous literature review shows that most of the existing works on parameter identification require the system to be linearly parametrizable.
Moreover, these methodologies do not include a tuning procedure for the adaptation gains that is able to generate an optimal set of gain values, which would improve the performance of the parameter identification technique.

1.1. Contribution of the paper

This paper presents a new methodology for nonlinear parameter identification of conservative Hamiltonian systems. The proposed parameter identification algorithms allow computing the parameter estimates in an on-line or off-line manner. In the proposed methodology, the vector of generalized coordinates q(t), the generalized momentum vector p(t), and their time derivatives are used to construct a first integral function. Then, a surface variable depending on the structure of the first integral and the system parameters is defined. The information from this new function, together with a proper definition of a cost function, is used to obtain a parameter identification algorithm that generates the estimates of the system parameters.

The proposed parameter identification scheme requires knowledge of the generalized momentum vector p(t), as well as the time derivatives of the generalized coordinates and generalized momentum. This requirement is usual in most parameter identification schemes, which need the time derivative of some signals to construct the regressor vector. In this work, we analyze the behavior of the parameter identification methodology when the dirty derivative and a sliding-mode differentiator are used. Besides, we also include a methodology for tuning the gains of the proposed parameter identification algorithms. This tuning procedure is based on a meta-heuristic algorithm, namely the CSA. The main contributions of this work are:



• providing three new algorithms for parameter identification of conservative Hamiltonian systems;
• analyzing the performance of each parameter identification law when asymptotic and finite-time differentiators are used for computing p(t) and the time derivatives of q(t) and p(t);
• developing a procedure for tuning the gains of the proposed parameter identification methodologies using a meta-heuristic algorithm, which yields an optimal set of gains that allows the parameter identification algorithm to generate a precise set of parameter estimates.

Numerous numerical simulations show that precise parameter estimates are obtained when using the proposed parameter updating laws. It is also shown that the performance of the parameter identification algorithms is enhanced when the adaptation-gain tuning procedure is applied.


1.2. Organization

The rest of the paper is organized as follows. Section 2 gives some preliminaries related to the mathematical description of a dynamical system using the Lagrangian and Hamiltonian formalisms, and states the problem formulation. Section 3 presents the proposed parameter identification methodology; three different parameter updating laws are provided, and the convergence of the proposed schemes is proved using a Lyapunov-based approach. Some issues related to the algorithm implementation are stated in Section 4, where the procedures for tuning the parameter adaptation gains and for time-derivative estimation are provided. The numerical simulations validating the proposed methodology are presented in Section 5. Some comments analyzing the simulation results are provided in Section 6. Finally, some concluding remarks and comments about future research directions wrap up the paper in Section 7.

1.3. Notation and definitions

In this paper, ℝ denotes the set of real numbers and ℝ_{>0} the set of positive real numbers. Scalar variables are denoted by lowercase letters, and vectors are represented by uppercase or lowercase bold letters. For a given time-varying signal ν(t), the term ν̇(t) corresponds to the first time derivative of ν(t). The notation ∂ν/∂t denotes the partial derivative of ν(t) with respect to t. Also, ∇_x denotes the gradient of a given function with respect to x. The element 0_n ∈ ℝ^n denotes the n-dimensional vector of zeros; |·| denotes the absolute value, and x^T is the transpose of vector x.

2. Preliminaries and problem formulation

In this paper, inspired by Hernandez and Poznyak [29], we use so-called first integral functions as a tool to obtain a procedure for parameter identification of conservative Hamiltonian systems. In the proposed methodology, the parameter estimates are utilized as control inputs that stabilize a nonlinear system obtained from a first integral of the Hamiltonian system.

2.1. Euler-Lagrange and Hamiltonian equations

The dynamic equations of an Euler-Lagrange (EL) system can be obtained by using Lagrange's equations [7,39]

$$\frac{d}{dt}\frac{\partial}{\partial \dot{q}} L(q,\dot{q}) - \frac{\partial}{\partial q} L(q,\dot{q}) = Q_{np}, \tag{1}$$

where q = [q_1, ..., q_n]^T ∈ ℝ^n is the vector of generalized coordinates and L(q, q̇) = T(q, q̇) − U(q) is the Lagrangian function, with T(q, q̇) and U(q) being the kinetic and potential energies, respectively. Finally, the vector Q_np ∈ ℝ^n contains the generalized non-potential forces acting on the system. In a conservative system, the non-potential forces are not present, i.e., Q_np = 0_n. Hence, the system's dynamic equations are described by

$$\frac{d}{dt}\frac{\partial}{\partial \dot{q}} L(q,\dot{q}) - \frac{\partial}{\partial q} L(q,\dot{q}) = 0_n. \tag{2}$$

From the theory of Hamiltonian systems, the generalized momentum p(t) is defined as

$$p = \nabla_{\dot{q}} L(q,\dot{q}), \tag{3}$$

where ∇_q̇ denotes the gradient of a given function with respect to q̇(t). The Hamiltonian H(q, p) of the corresponding system is computed as

$$H(q,p) = p^{T}\dot{q} - L(q,\dot{q}), \tag{4}$$

where q̇ is expressed as a function of (q, p) through (3).

Then, the Hamiltonian equations describing the system consist of the following set of 2n first-order differential equations

$$\dot{q} = \frac{\partial H(q,p)}{\partial p}, \qquad \dot{p} = -\frac{\partial H(q,p)}{\partial q}. \tag{5}$$
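To make the passage from the Lagrangian description (1) to the Hamiltonian equations (5) concrete, the short symbolic sketch below carries out steps (3)-(5) for a one-degree-of-freedom mass-spring system. The example system and the use of the sympy library are assumptions made only for illustration; they are not taken from the paper.

```python
# A minimal symbolic sketch of Eqs. (3)-(5) for a 1-DOF mass-spring system
# (hypothetical illustration; requires sympy).
import sympy as sp

m, k, q, qd, p = sp.symbols('m k q qdot p', real=True)

# Lagrangian L(q, qdot) = T - U of a mass-spring system
L = sp.Rational(1, 2) * m * qd**2 - sp.Rational(1, 2) * k * q**2

# Eq. (3): generalized momentum p = dL/d(qdot)
p_expr = sp.diff(L, qd)                          # m*qdot

# Express qdot in terms of p and form the Hamiltonian, Eq. (4): H = p*qdot - L
qd_of_p = sp.solve(sp.Eq(p, p_expr), qd)[0]      # p/m
H = sp.simplify(p * qd_of_p - L.subs(qd, qd_of_p))
print("H =", H)                                  # p**2/(2*m) + k*q**2/2

# Eq. (5): Hamilton's equations
q_dot = sp.diff(H, p)                            # p/m
p_dot = -sp.diff(H, q)                           # -k*q
print("qdot =", q_dot, ", pdot =", p_dot)
```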

Let us define the vector θ ∈ ℝ^p, which collects the actual parameters of the Hamiltonian system to be identified. The kinetic and potential energies of the system then depend on the unknown parameters contained in θ, i.e., these energies can be written as T(q, q̇|θ) and U(q|θ). The fact that T(q, q̇|θ) and U(q|θ) depend on θ implies that the Lagrangian and the Hamiltonian also depend on it, i.e., L = L(q, q̇|θ) and H = H(q, p|θ). Hence, the vectors of generalized coordinates and momentum, as well as their time derivatives, also depend on θ, i.e., q = q(t|θ), p = p(t|θ), q̇ = q̇(t|θ), and ṗ = ṗ(t|θ).


The previous analysis allows rewriting the Hamiltonian equations (5) as

$$\dot{q}(t|\theta) = \frac{\partial H(q,p|\theta)}{\partial p}, \qquad \dot{p}(t|\theta) = -\frac{\partial H(q,p|\theta)}{\partial q}. \tag{6}$$

2.2. First integrals

Given the Hamiltonian equations (6), a first integral is a continuously-differentiable function f(t, q, p) that is constant along any solution of (6), i.e., the following condition holds [29,40]

$$\frac{\partial f}{\partial t} + [f, H] = 0, \tag{7}$$

where [f, H] denotes the Lie-Poisson bracket, which is defined as [41]

$$[f,H] = \sum_{i=1}^{n}\left(\frac{\partial f}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial H}{\partial q_i}\right) = \left(\frac{\partial f}{\partial q}\right)^{T}\dot{q} + \left(\frac{\partial f}{\partial p}\right)^{T}\dot{p}. \tag{8}$$

Eq. (7) indicates that, along every solution of the Hamiltonian system (6), the condition f(t, q, p) = k holds for some constant k.

2.3. Problem statement

Before presenting the proposed parameter identification methodology, we complete this section by formulating the parameter identification problem.

Problem formulation. The identification problem consists of designing a parameter identification algorithm that allows obtaining a vector of parameter estimates θ̂(t) ∈ ℝ^p of a given conservative Hamiltonian system with the structure (6), based on on-line measurements of the generalized coordinates q(t|θ), such that the following condition is fulfilled

$$\lim_{t\to\infty} \tilde{\theta}(t) = 0_p, \tag{9}$$

where θ̃ = θ̂ − θ is the parameter estimation error vector.

Remark 1. In the following development, the values of q̇(t|θ), ṗ(t|θ), and p(t|θ) are required for computing θ̂(t). Usually, these values are not available; hence, different procedures for estimating them will be analyzed. We will also study the influence of the type of differentiator on the parameter identification methodology.

Remark 2. The proposed parameter identification methodology allows estimating the parameters of the Hamiltonian system (6). Note that there is no restriction requiring the system to be linearly parametrizable. This freedom is an essential advantage of the proposed scheme over other existing methodologies where linearity in the parameters is required.

3. Parameter identification methodology

Fig. 1 depicts the proposed parameter identification methodology, which is described in the following. First, the vector of generalized coordinates q(t), the generalized momentum p(t), and their time derivatives q̇(t), ṗ(t), obtained from the Hamiltonian system, are used to construct a first integral function f(t). Then, a surface variable s(t), which depends on the vector of parameter estimates θ̂(t), is computed. The information of this surface variable, together with the current parameter estimates, is used to design a cost function J(q, p|θ̂). The cost function and surface variable are then used to design a parameter updating law that estimates the values of the parameter vector θ. In the proposed methodology, we assume that only the values of the vector q(t) are available; a differentiator is therefore used to compute the signals q̇(t), p(t), and ṗ(t).

As indicated in Fig. 1, the adaptation gains required by the parameter updating law can be tuned manually. However, optimal gain values can also be computed through a meta-heuristic algorithm. In this work, we present a methodology that utilizes the CSA for computing the optimal value of the adaptation gains. This alternative procedure enhances the performance of the parameter identification algorithm.

The proposed parameter identification scheme is based on the concept of first integrals. Let us assume that the first integral f does not depend explicitly on t, but does depend on the parameter vector θ, i.e., f(q, p) = f(q, p|θ). Besides, definition (7) implies that f(q, p|θ) is constant along the trajectories of the Hamiltonian system (6), i.e.,

$$f(q,p|\theta) = k, \tag{10}$$



Fig. 1. Block-diagram of the proposed parameter identification methodology using first integrals.

with k ∈ ℝ being some constant. Condition (10) holds when f(q, p|·) is evaluated at the actual parameter vector θ. Let θ̂ ∈ ℝ^p be the vector corresponding to the estimated value of θ. Then, we can rewrite condition (10) as follows

$$f(q,p|\hat{\theta}) = k \quad \text{if } \hat{\theta} = \theta. \tag{11}$$

Besides, condition (10) implies that the following equation holds

$$\frac{d}{dt} f(q,p|\theta) = 0. \tag{12}$$

Condition (12) allows utilizing the vector θ̂(t) as a control variable for estimating θ through the stabilization of the term df(q, p|θ̂)/dt. Let us define the surface variable s(t) as

$$s(q,p|\hat{\theta}) = [\,f(q,p|\hat{\theta}),\, H(q,p|\hat{\theta})\,], \tag{13}$$

where the dependence on θ has been replaced by the control parameter θ̂. Thus, the parameter identification problem (9) becomes a control problem that consists in designing a parameter updating law for θ̂ that drives the surface variable s(q, p|θ̂) to zero. If this stabilization problem is solved, the convergence of s(q, p|θ̂) to zero implies that θ̂(t) converges to θ, i.e.,

$$s(q,p|\hat{\theta}) \to 0 \;\Rightarrow\; \hat{\theta} \to \theta. \tag{14}$$

3.1. Parameter updating law

In order to obtain the parameter updating law, let us consider a cost function J(q, p|θ̂) that we want to minimize by a proper choice of the vector of parameter estimates θ̂(t). Hence, we want to solve the optimization problem

$$\min_{\hat{\theta}} J(q,p|\hat{\theta}). \tag{15}$$

Many different choices can be made for solving the optimization problem (15). For instance, we may define a cost function J(q, p|θ̂) and obtain the parameter updating law by using the steepest descent approach [42]. In this work, we propose to solve the optimization problem (15) through the following parameter updating law

$$\dot{\hat{\theta}}(t) = -\gamma\, \frac{\partial J(q,p|\hat{\theta})}{\partial \hat{\theta}}, \tag{16}$$

with γ ∈ ℝ a constant corresponding to the parameter adaptation gain. Each specific selection of the cost function J(q, p|θ̂) yields a different structure that can be used to estimate θ̂(t). In what follows, two different procedures for computing the parameter estimates are provided:

1. Let us define the cost function J₁(q, p|θ̂) as follows

$$J_1(q,p|\hat{\theta}) = \frac{1}{2}\left(s(q,p|\hat{\theta})\right)^{2}. \tag{17}$$



By using the proposed solution (16), we obtain the following parameter updating law

$$\dot{\hat{\theta}}(t) = -\gamma\, s(q,p|\hat{\theta})\, \frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}}. \tag{18}$$

2. Let us consider the cost function J₂(q, p|θ̂) = |s(q, p|θ̂)|. If Eq. (16) were used, we would obtain a parameter updating law proportional to the term sign(s(q, p|θ̂)) ∂s(q, p|θ̂)/∂θ̂. Now, let us include an additional factor proportional to the absolute value of the surface variable s(q, p|θ̂). Hence, the proposed parameter updating law has the following structure

$$\dot{\hat{\theta}}(t) = -\kappa\, |s(q,p|\hat{\theta})|^{\beta}\, \mathrm{sign}(s(q,p|\hat{\theta}))\, \frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}}, \tag{19}$$

with κ ∈ ℝ_{>0} and β ∈ ℝ_{≥0}.

Either of the previous parameter updating laws may be selected for estimating θ̂(t). However, both require computing the value of ∂s(q, p|θ̂)/∂θ̂. Note that

$$\frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}} = \frac{\partial}{\partial \hat{\theta}}\left[\left(\frac{\partial f(q,p|\hat{\theta})}{\partial q}\right)^{T}\dot{q} + \left(\frac{\partial f(q,p|\hat{\theta})}{\partial p}\right)^{T}\dot{p}\right]. \tag{20}$$
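For illustration only, the continuous-time updating laws (18) and (19) can be realized numerically as in the sketch below. The callables s_fun and ds_dtheta_fun are assumptions: they must be supplied by the user and return s(q, p|θ̂) and ∂s(q, p|θ̂)/∂θ̂ for the system at hand, with q̇ and ṗ already available (exact or estimated).

```python
# Sketch of the parameter updating laws M1 (18) and M2 (19); illustrative only.
# s_fun and ds_dtheta_fun are user-supplied callables (assumptions): they return the
# scalar s(q, p | theta_hat) and its gradient with respect to theta_hat, respectively.
import numpy as np

def theta_dot_M1(q, p, qd, pd, theta_hat, gamma, s_fun, ds_dtheta_fun):
    """Right-hand side of Eq. (18): theta_hat_dot = -gamma * s * ds/dtheta_hat."""
    s = s_fun(q, p, qd, pd, theta_hat)
    return -gamma * s * ds_dtheta_fun(q, p, qd, pd, theta_hat)

def theta_dot_M2(q, p, qd, pd, theta_hat, kappa, beta, s_fun, ds_dtheta_fun):
    """Right-hand side of Eq. (19): theta_hat_dot = -kappa*|s|**beta*sign(s)*ds/dtheta_hat."""
    s = s_fun(q, p, qd, pd, theta_hat)
    return -kappa * abs(s) ** beta * np.sign(s) * ds_dtheta_fun(q, p, qd, pd, theta_hat)

# Either right-hand side can be integrated with any ODE solver; e.g., one forward-Euler
# step with sampling period T reads: theta_hat += T * theta_dot_M1(...)
```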

Hence, computing θ̂(t) through (18) or (19) requires knowledge of q̇(t), p(t), and ṗ(t). Different approaches can be used to estimate these values, such as the dirty derivative, asymptotic estimators, or finite-time estimators; the differentiators considered in this work are described in Section 4, and their influence on the identification results is shown in Section 5. For the forthcoming analysis, let us consider the following assumptions:

A1. For all t ≥ 0 and ∀θ̂ ∈ ℝ^p, the following condition holds

$$s(q,p|\hat{\theta})\,\tilde{\theta}(t)^{T}\,\frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}} > 0. \tag{21}$$

A2. For all t ≥ 0 and ∀θ̂ ∈ ℝ^p, the following condition holds

$$|s(q,p|\hat{\theta})|^{\beta}\,\mathrm{sign}(s(q,p|\hat{\theta}))\,\tilde{\theta}(t)^{T}\,\frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}} \geq \rho\, V^{1/2}(\tilde{\theta}), \tag{22}$$

with V(θ̃) = ½ θ̃^T θ̃ and ρ ∈ ℝ_{>0}.

Now, we wrap up this section by presenting our main result, which states that the parameter identification algorithms (18) and (19) allow estimating the parameters of the conservative Hamiltonian system (6).

Theorem 1. Consider the Hamiltonian system (6) and the positive definite function V(θ̃) = ½ θ̃^T θ̃. Then:

1. If Assumption A1 holds, the parameter updating law (18) ensures that θ̃(t) → 0 as time tends to infinity.
2. If Assumption A2 holds, the parameter updating law (19) ensures that θ̃(t) converges to zero in a finite time t_r, with

$$t_r \leq \frac{2}{\kappa\rho}\, V^{1/2}(\tilde{\theta}(0)). \tag{23}$$

Proof. Consider the positive definite function

$$V(\tilde{\theta}) = \frac{1}{2}\tilde{\theta}^{T}\tilde{\theta}. \tag{24}$$

The time derivative of (24) along the trajectories of (18) is given by

$$\dot{V}(\tilde{\theta}) = \tilde{\theta}^{T}\dot{\tilde{\theta}} = -\gamma\, s(q,p|\hat{\theta})\, \tilde{\theta}^{T}\, \frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}}. \tag{25}$$

Then, if Assumption A1 holds, we conclude that V̇(θ̃) < 0, i.e., θ̃ tends to zero as time tends to infinity, which proves the first part of the theorem. Now, the time derivative of V(θ̃) along the trajectories of (19) is

$$\dot{V}(\tilde{\theta}) = -\kappa\,|s(q,p|\hat{\theta})|^{\beta}\,\mathrm{sign}(s(q,p|\hat{\theta}))\,\tilde{\theta}^{T}\,\frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}}.$$

Hence, if Assumption A2 holds, we conclude that

$$\dot{V} \leq -\kappa\rho\, V^{1/2}(\tilde{\theta}),$$


which, after integrating this differential inequality over the interval [0, t], is equivalent to the relation

$$V^{1/2}(\tilde{\theta}(t)) + \frac{\kappa\rho}{2}\,t \leq V^{1/2}(\tilde{\theta}_0), \tag{26}$$

with θ̃₀ = θ̃(0) being the initial value of the estimation error vector. Finally, from Eq. (26), we infer that the time t_r at which V(θ̃) reaches zero is upper bounded by

$$t_r \leq \frac{2\,V^{1/2}(\tilde{\theta}_0)}{\kappa\rho}. \tag{27}$$

Then, θ̃(t) tends to zero in a finite time bounded by t_r, which completes the proof. □

Remark 3. The parameter updating law (19) is a generalization of that reported in [29]. If we select the value β = 0 in (19), we recover Eq. (24) in [29].

Remark 4. The cost function J₁(q, p|θ̂) can also be used to obtain an iterative procedure for computing the parameter estimates θ̂(t). The corresponding parameter updating law is given by

$$\hat{\theta}_i((k+1)T) = \hat{\theta}_i(kT) - \gamma_i\, s(q,p|\hat{\theta})\, \frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}_i}, \tag{28}$$

where θ̂ᵢ(kT) is the value of the i-th parameter estimate at the time instant kT, k = 0, 1, 2, ..., with T being the sampling period, and γᵢ ∈ ℝ, i = 1, 2, ..., p, are the parameter adaptation gains. In what follows, we refer to the parameter updating laws (18), (19), and (28) as M1, M2, and M3, respectively.

4. Algorithm implementation

This section addresses some issues that are relevant for implementing the proposed parameter identification methodology. Specifically, we develop a procedure that allows obtaining an optimal set of parameter adaptation gains for each parameter updating law. Then, we present two alternatives that can be used to estimate p(t), as well as the time derivatives of the generalized coordinates q(t) and momenta p(t).

4.1. Tuning the parameter adaptation gains

The convergence time and accuracy of any parameter identification technique depend on a proper selection of the parameter adaptation gains. Although the parameter identification algorithm can be tuned manually, a better set of parameter estimates may be obtained through a proper tuning procedure. An appealing choice for tuning the parameter identification algorithm is to use an optimization-based methodology. With this aim, we selected a nature-inspired meta-heuristic algorithm, namely the cuckoo search algorithm (CSA) [37,45]. This approach has appealing features, such as its ability to search and select the current best solution among all the existing ones (intensification), and an efficient exploration of the search space (diversification).

The CSA requires selecting a performance index. Some possible choices encompass the integral of the square error (ISE), the integral of the time-weighted absolute error (ITAE), or the integral of the time-weighted square error (ITSE). Recall that a good set of parameter estimates is obtained if we ensure that the surface variable s(q, p|θ̂) tends to zero. Hence, in this work, we employed the ISE to penalize significant errors in the surface variable s(q, p|θ̂). The performance index J_CSA used by the CSA is therefore described as

$$J_{CSA} = \int_{0}^{T_s} s^{2}(\tau)\, d\tau, \tag{29}$$

which is evaluated over the time interval [0, T_s]. A detailed explanation of how to implement the CSA is given in [45]. The summarized steps required to implement it are listed below (a simplified sketch of the resulting tuning loop is given after the list):

1. Select the desired performance index.
2. Initialize the CSA by selecting the number of nests, a threshold value, and a high initial value for the performance index.
3. Select a set of possible values for the adaptation gains.
4. Perform a simulation with the CSA to obtain a new set of adaptation gains.
5. Repeat step 4 until the performance index reaches a value below the selected threshold.
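A minimal, simplified sketch of such a tuning loop is shown below. It is not the implementation used in the paper: the Lévy-flight step, the bounds on the gains, the default nest count, and the routine ise_of_gains (which must simulate the identification scheme with the candidate gains and return the ISE index (29)) are assumptions introduced only to illustrate the idea.

```python
# Simplified cuckoo-search-style tuning loop for the adaptation gains (illustrative only).
# ise_of_gains(gains) is an assumed user-supplied routine that simulates the identification
# scheme with the candidate gains and returns the ISE performance index of Eq. (29).
import numpy as np
from math import gamma as gamma_fn

def levy_step(dim, beta=1.5, rng=None):
    """Levy-flight step generated with Mantegna's algorithm."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma_u = (gamma_fn(1 + beta) * np.sin(np.pi * beta / 2) /
               (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(ise_of_gains, lower, upper, n_nests=25, pa=0.25, alpha=1.0,
                  n_iter=100, seed=0):
    """Returns the gain vector with the lowest ISE found (the paper's setting uses 200 nests)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    nests = rng.uniform(lower, upper, size=(n_nests, dim))      # candidate gain vectors
    fitness = np.array([ise_of_gains(n) for n in nests])

    for _ in range(n_iter):
        best = nests[np.argmin(fitness)]
        # New solutions via Levy flights around the current nests (intensification)
        for i in range(n_nests):
            step = alpha * levy_step(dim, rng=rng) * (nests[i] - best)
            candidate = np.clip(nests[i] + step * rng.normal(size=dim), lower, upper)
            f_new = ise_of_gains(candidate)
            if f_new < fitness[i]:
                nests[i], fitness[i] = candidate, f_new
        # Abandon a fraction pa of the worst nests and re-initialize them (diversification)
        n_abandon = max(1, int(pa * n_nests))
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = rng.uniform(lower, upper, size=(n_abandon, dim))
        fitness[worst] = [ise_of_gains(n) for n in nests[worst]]

    k = np.argmin(fitness)
    return nests[k], fitness[k]

# Usage sketch (hypothetical bounds for the gains kappa and beta of law M2):
# best_gains, best_ise = cuckoo_search(ise_of_gains, lower=[1e-3, 0.0], upper=[1.0, 3.0])
```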

After finishing these steps, the CSA provides an optimal set of adaptation gains. We can repeat this procedure ϑ times and select the adaptation gains corresponding to the run that provided the lowest performance index value.

Remark 5. The selection of the CSA is due to its simplicity when compared to other methodologies that make use of fuzzy logic, neural networks, and genetic algorithms, among others [45]. The CSA can be tuned and implemented by selecting the


desired performance index and setting the values of the step size α, the probability p_a, and the number of nests. It is usual to select α = 1 and p_a = 0.25; hence, we are only left to select the number of nests. In this work, all the numerical simulations with the CSA employed the performance index (29), α = 1, p_a = 0.25, and 200 nests.

4.2. Time derivative estimation

Usually, in real systems, only measurements of the generalized coordinates q(t) are available. However, in order to compute the parameter updating laws (18), (19), and (28), we require the values of the signals p(t), q̇(t), and ṗ(t). Different procedures can be used to estimate these time derivatives; among them, the dirty derivative or finite-time exact differentiators [43]. In this study, we selected the dirty derivative and a robust second-order sliding-mode (SM) differentiator. The dirty derivative was selected because of its simplicity, although its convergence is asymptotic; the SM differentiator was chosen due to its robustness and finite-time convergence properties.

Let ζ₁(t) be a known differentiable signal. The estimate ζ̂₂(t) of the time derivative ζ₂(t) = ζ̇₁(t) is obtained with the dirty derivative approach through the filter

$$\hat{\zeta}_2(t) = \left(\frac{\eta s}{s+\eta}\right) \zeta_1(t), \tag{30}$$

where η corresponds to the differentiator bandwidth, while s denotes the Laplace operator. On the other hand, the corresponding time-derivative estimate using a second-order SM differentiator is computed through the following equations [43,44]

$$\dot{\hat{\zeta}}_1(t) = \hat{\zeta}_2(t) + z_1(t), \qquad \dot{\hat{\zeta}}_2(t) = z_2(t), \tag{31}$$

with

$$z_1(t) = k_1\,|\tilde{\zeta}_1(t)|^{1/2}\,\mathrm{sign}(\tilde{\zeta}_1(t)), \qquad z_2(t) = k_0\,\mathrm{sign}(\tilde{\zeta}_1(t)), \tag{32}$$

where k₀, k₁ ∈ ℝ are constant gains and ζ̃₁(t) = ζ₁(t) − ζ̂₁(t). The differentiator (31)-(32) enforces ζ̂₁(t) = ζ₁(t) and ζ̂₂(t) = ζ₂(t) in finite time if k₀ = 1.1L and k₁ = 1.5√L, with L = ess sup |ζ̇₂(t)| < ∞.

5. Simulation results

This section presents the numerical results obtained when applying the proposed parameter identification methodology. The proposed scheme is validated on a mechanical system described by the following Hamiltonian equations [29]

$$\begin{aligned} \dot{q}_1 &= \frac{p_1}{m}, & \dot{q}_2 &= \frac{p_2}{m},\\ \dot{p}_1 &= m\omega^2 q_1 + \theta_1\,[q_2 - q_1 - \theta_2], & \dot{p}_2 &= m\omega^2 q_2 - \theta_1\,[q_2 - q_1 - \theta_2], \end{aligned} \tag{33}$$

with the Hamiltonian given by

$$H(q,p|\theta) = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} - \frac{m\omega^2}{2}\left(q_1^2 + q_2^2\right) + \frac{\theta_1}{2}\,[q_2 - q_1 - \theta_2]^2, \tag{34}$$

where the parameters θ = [θ₁, θ₂]^T appear in a nonlinear way in (34). All the forthcoming numerical simulations were performed using Matlab–Simulink with a fixed step of 1 ms and the ode3 (Bogacki–Shampine) solver. Two scenarios were considered:

C1. The adaptation gains are manually tuned.
C2. The adaptation gains are tuned through the CSA.

Both cases, C1 and C2, include the results obtained with the adaptation algorithms M1, M2, and M3. The system parameters were set to θ₁ = 3 and θ₂ = 2. To obtain the explicit expression of the parameter updating laws, we selected the system's Hamiltonian as the first integral, i.e., f(q, p|θ̂) = H(q, p|θ̂). Then, from Eq. (13), the surface variable is given by

$$s(t) = \frac{p_1\dot{p}_1}{m} + \frac{p_2\dot{p}_2}{m} - m\omega^2\left[q_1\dot{q}_1 + q_2\dot{q}_2\right] + \hat{\theta}_1\left[q_2 - q_1 - \hat{\theta}_2\right]\left[\dot{q}_2 - \dot{q}_1\right]. \tag{35}$$

Besides, the partial derivative ∂s(q, p|θ̂)/∂θ̂ is computed as follows

$$\frac{\partial s(q,p|\hat{\theta})}{\partial \hat{\theta}} = \begin{bmatrix} \left[q_2 - q_1 - \hat{\theta}_2\right]\left[\dot{q}_2 - \dot{q}_1\right] \\[4pt] -\hat{\theta}_1\left[\dot{q}_2 - \dot{q}_1\right] \end{bmatrix}. \tag{36}$$
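To illustrate how Eqs. (33)-(36) and the updating law M1 fit together, the following sketch integrates the system and the estimator with a simple forward-Euler loop using the exact values of p, q̇, and ṗ. It is not the author's Matlab–Simulink implementation; the values of m, ω, the initial conditions, and the integration scheme are assumptions chosen only for illustration.

```python
# Illustrative sketch: identify theta_1, theta_2 of system (33) with law M1 (18),
# using the exact values of p, qdot, pdot.  The constants m, omega, gamma and the
# initial conditions below are assumed for illustration; they are not from the paper.
import numpy as np

m, omega = 1.0, 1.0                  # assumed mass and frequency
theta = np.array([3.0, 2.0])         # true parameters [theta_1, theta_2]
gamma = 0.08                         # adaptation gain of law M1
T, t_final = 1e-3, 200.0             # step size and simulation horizon

q = np.array([-0.7, 0.7])            # assumed initial generalized coordinates
p = np.array([0.0, 0.0])             # assumed initial generalized momenta
theta_hat = np.array([1.0, 1.0])     # initial parameter estimates

def plant(q, p):
    """Right-hand side of the Hamiltonian equations (33) with the true parameters."""
    qd = p / m
    e = q[1] - q[0] - theta[1]
    pd = np.array([m * omega**2 * q[0] + theta[0] * e,
                   m * omega**2 * q[1] - theta[0] * e])
    return qd, pd

for _ in range(int(t_final / T)):
    qd, pd = plant(q, p)                               # exact qdot and pdot
    # Surface variable (35) and its gradient (36), evaluated at theta_hat
    e_hat = q[1] - q[0] - theta_hat[1]
    s = (p @ pd) / m - m * omega**2 * (q @ qd) + theta_hat[0] * e_hat * (qd[1] - qd[0])
    ds = np.array([e_hat * (qd[1] - qd[0]),
                   -theta_hat[0] * (qd[1] - qd[0])])
    # Updating law M1, Eq. (18), integrated with a forward-Euler step
    theta_hat = theta_hat - T * gamma * s * ds
    # Advance the plant one step (forward Euler)
    q, p = q + T * qd, p + T * pd

print("theta_hat =", theta_hat)      # should drift toward [3, 2] under Assumption A1
```

Replacing the exact values of q̇ and ṗ with the outputs of a differentiator reproduces, under these assumptions, the second and third scenarios considered in Section 5.1.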




Fig. 2. Case C1. Time evolution of the parameter estimates when using the parameter identification methodologies M1 (18), M2 (19), and M3 (28). The first, second, and third columns represent the cases when p(t) and the time derivatives q̇(t) and ṗ(t) correspond to their exact values, the values obtained using the dirty derivative, and those obtained using an SM differentiator, respectively.

Table 1. Case C1. Numerical values of the estimated parameters generated by methods M1, M2, and M3 when p(t), q̇(t), and ṗ(t) correspond to their exact values, the values computed by the dirty derivative, and the values provided by an SM differentiator, respectively.

        Nominal   M1 Exact   M1 Dirty   M1 SM    M2 Exact   M2 Dirty   M2 SM    M3 Exact   M3 Dirty   M3 SM
θ̂1      3         3.04       3.09       3.02     3.0        3.02       2.98     3.04       2.99       1.78
θ̂2      2         2.01       2.02       2.01     2.0        2.0        2.0      2.01       2.02       1.97

5.1. Numerical simulations for case C1

The parameter estimates were computed through Eqs. (18), (19), and (28), which correspond to methods M1, M2, and M3, respectively. For method M1, we employed the adaptation gain γ = 0.08. For method M2, the values κ = 0.055 and β = 1.48 were utilized. Finally, method M3 employed the values γ₁ = 0.23 and γ₂ = 0.09.

Three different sets of simulations were performed. In the first one, methods M1, M2, and M3 were implemented assuming exact knowledge of the signals p(t), q̇(t), and ṗ(t). The second and third sets were the same as the first one, but the values of p(t), q̇(t), and ṗ(t) were estimated using the dirty derivative (30) and the SM differentiator (31)-(32), respectively.

Fig. 2 depicts the time evolution of the parameter estimates when using the algorithms M1, M2, and M3. The first column corresponds to the parameter estimates obtained when using the exact values of p(t), q̇(t), and ṗ(t); the second column depicts the parameter estimates when using the dirty derivative; and the third column shows the time evolution of the parameter estimates when using the second-order SM differentiator. The red-dotted lines correspond to the nominal parameter values, which were set to θ₁ = 3 and θ₂ = 2. The numerical values of the parameter estimates are also given in Table 1.

Fig. 3 shows the time evolution of the surface variable s(q, p|θ̂). The first column corresponds to the case when each parameter identification algorithm uses the exact values of p(t), q̇(t), and ṗ(t); the second and third columns show the behavior of the surface variable when using the dirty derivative and the SM differentiator, respectively. Note that, when using the SM estimator, the surface variable exhibits the chattering phenomenon. However, the average value, depicted by the black line in Fig. 3, is always close to zero.
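For reference, a minimal discrete-time sketch of the two derivative estimators used in these simulations is given below: the dirty derivative (30), realized through an auxiliary filter state, and the SM differentiator (31)-(32), integrated with a forward-Euler step. The sampling period, bandwidth, and gain values are assumptions chosen for illustration only.

```python
# Minimal discrete-time sketches of the dirty derivative (30) and the second-order
# SM differentiator (31)-(32).  Forward-Euler discretization with an assumed sampling
# period T; the bandwidth and gain values below are illustrative, not the paper's.
import numpy as np

class DirtyDerivative:
    """Realizes y = (eta*s/(s + eta)) u as y = eta*u - w, with wdot = -eta*w + eta**2*u."""
    def __init__(self, eta, T):
        self.eta, self.T, self.w = eta, T, 0.0
    def update(self, u):
        y = self.eta * u - self.w                    # current derivative estimate
        self.w += self.T * (-self.eta * self.w + self.eta**2 * u)
        return y

class SMDifferentiator:
    """Second-order sliding-mode differentiator, Eqs. (31)-(32), Euler-integrated."""
    def __init__(self, k0, k1, T):
        self.k0, self.k1, self.T = k0, k1, T
        self.z1_hat, self.z2_hat = 0.0, 0.0          # estimates of zeta_1 and zeta_2
    def update(self, zeta1):
        e = zeta1 - self.z1_hat                      # zeta_1 tilde
        z1 = self.k1 * np.sqrt(abs(e)) * np.sign(e)
        z2 = self.k0 * np.sign(e)
        self.z1_hat += self.T * (self.z2_hat + z1)
        self.z2_hat += self.T * z2
        return self.z2_hat                           # estimate of the time derivative

# Usage sketch (illustrative gains): estimate qdot from samples of q
# dd = DirtyDerivative(eta=50.0, T=1e-3);  qdot_hat = dd.update(q_sample)
# smd = SMDifferentiator(k0=1.1 * L, k1=1.5 * np.sqrt(L), T=1e-3)  # L bounds |d(zeta_2)/dt|
```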


Table 2. Case C2. Numerical values of the estimated parameters obtained by methods M1, M2, and M3 when p(t), q̇(t), and ṗ(t) correspond to their exact values, the values computed by the dirty derivative, and the values provided by an SM differentiator, respectively.

        Nominal   M1 Exact   M1 Dirty   M1 SM    M2 Exact   M2 Dirty   M2 SM    M3 Exact   M3 Dirty   M3 SM
θ̂1      3         3.05       3.10       3.02     2.98       3.00       2.98     3.06       3.02       0.63
θ̂2      2         2.01       2.03       2.00     2.0        2.0        2.0      2.02       2.02       2.81


Fig. 3. Case C1. The behavior of the surface variable s(q, p|θ̂) when using the proposed parameter identification methodologies M1 (18), M2 (19), and M3 (28). The first, second, and third columns indicate the time evolution of s(q, p|θ̂) when the values of p(t), q̇(t), and ṗ(t) correspond to the exact values, estimations from the dirty derivative, and estimations from the SM differentiator, respectively.

5.2. Numerical simulations for case C2

Now, we present the results obtained when the adaptation gains are tuned through the CSA. To this end, the CSA was implemented by selecting the performance index (29), 200 nests, and p_a = 0.25. The CSA was run 20 times, and the best values from these runs were selected. The optimal gain obtained with the CSA for method M1 was γ = 0.0954. For method M2, the gain values κ = 0.0353 and β = 1.776 were computed. Finally, the CSA produced the values γ₁ = 0.001 and γ₂ = 0.0864 for method M3.

As in case C1, the parameter identification methodologies M1, M2, and M3 were applied by assuming exact knowledge of p(t), q̇(t), and ṗ(t), and by estimating their values through the dirty derivative (30) and the SM differentiator (31)-(32). The time evolution of the estimated parameters is depicted in Fig. 4, and the numerical values of the parameter estimates are given in Table 2. Finally, the time evolution of the surface variable s(q, p|θ̂) is shown in Fig. 5.

6. Discussion

For case C1, Figs. 2-3 and Table 1 indicate that, when using the exact values of p(t), q̇(t), and ṗ(t), or when using the dirty derivative, all the algorithms reach a final set of parameter estimates close to the nominal ones. However, the convergence time of method M3 is larger than that required by the other algorithms. Also, method M3 exhibits an oscillatory behavior when using the dirty derivative estimator. In contrast, for case C2, Figs. 4-5 and Table 2 show that all the algorithms exhibit excellent performance and almost reach the exact value of each parameter when using the exact values of p(t), q̇(t), and ṗ(t), or the dirty derivative estimator. The oscillatory behavior of method M3 is absent in this second case. This indicates that the proposed adaptation-gain tuning procedure enhances the performance of the parameter identification algorithm.



Fig. 4. Case C2. Time evolution of the parameter estimates when using the parameter identification methodologies M1 (18), M2 (19), and M3 (28) with the parameter adaptation gains tuned using the CSA. The first, second, and third columns represent the cases when the signals p(t), q̇(t), and ṗ(t) correspond to their exact values, the values obtained using the dirty derivative, and those computed using an SM differentiator, respectively.


Fig. 5. Case C2. The behavior of the surface variable s(q, p|θ̂) when using the proposed parameter identification methodologies M1 (18), M2 (19), and M3 (28) and the gains provided by the CSA. The first, second, and third columns indicate the time evolution of s(q, p|θ̂) when the values of p(t), q̇(t), and ṗ(t) correspond to the exact values, estimations from the dirty derivative, and estimations from the SM differentiator, respectively.




Fig. 6. Case C1. The behavior of Assumptions A1 and A2 when using the proposed parameter identification methodologies M1 (18), M2 (19), and M3 (28).

When the SM differentiator is employed in case C1, the performance of methods M1 and M2 is enhanced. Specifically, the oscillations presented by method M1 are almost eliminated, and the convergence time is reduced for both methods M1 and M2. However, the performance of algorithm M3 is considerably degraded, exhibiting an oscillatory behavior, and its parameter estimates are not precise. This implies that method M3 is prone to lose performance when an SM differentiator generates the time-derivative signals, although it provides a good set of parameter estimates when using the dirty derivative.

The data provided by Fig. 2 and Table 1 also show that method M1 is only slightly affected when using the dirty derivative, and it continues to have an excellent performance when the SM differentiator is applied. Besides, method M2 is faster than M1 and M3 and exhibits small-amplitude oscillations in comparison with those presented by the other schemes when using the dirty derivative or the SM differentiator. For method M1, the amplitude of the oscillations increases slightly when using the dirty derivative and is reduced when the SM differentiator is applied. This indicates that a simple differentiator such as the dirty derivative can be used, and a good set of parameter estimates may still be expected when using either M1 or M2.

Recall that the expected value of s(q, p|θ) is zero and, although the three methodologies almost achieve this zero value for case C1, methods M1 and M2 exhibit a more oscillatory behavior when using the dirty derivative or the SM differentiator, as indicated in Fig. 3. For method M2, this may be due to the presence of the discontinuous sign function in the parameter updating law (19). Note also that the amplitude of the oscillations of the surface variable s(q, p|θ) increases when using the dirty derivative and becomes large when the SM differentiator is employed. However, in this last case, the average value of s(q, p|θ) is almost zero for all the methodologies; the black line in the third column of Fig. 3 displays the average value for each scheme.

Finally, Figs. 6 and 7 show the behavior of Assumptions A1 and A2 for cases C1 and C2. For method M1, when using the dirty derivative or the SM estimator, there are some instants where A1 does not hold. This behavior implies a momentary divergence of the parameter estimates; however, after a short time, condition A1 becomes valid again, which makes the parameter estimates move towards their desired values. This behavior is consistent with the oscillations depicted in Fig. 2. The same reasoning can be applied to methods M2 and M3. Note also, from Fig. 7, that the condition for method M3 exhibits a divergent behavior when using the SM differentiator in case C2, implying that the corresponding assumption is no longer valid; then, as indicated by Fig. 4, the parameter estimates diverge.

The previous results indicate that any of the proposed parameter identification methodologies can be trusted, and that a reliable set of parameter estimates may be expected even with a simple differentiator such as the dirty derivative, i.e., more complex algorithms such as the finite-time SM differentiator (31)-(32) are not required. Besides, a more precise set of estimated values can be obtained if the parameter adaptation gains are tuned employing a meta-heuristic algorithm such as the CSA. Also, the proposed parameter identification technique can be used even when the system's parameters do not appear linearly, which is an advantage over other existing techniques that require



Fig. 7. Case C2. The behavior of Assumptions A1 and A2 when using the proposed parameter identification methodologies M1 (18), M2 (19), and M3 (28) and the gains provided by the CSA.

linearity in the parameters. Finally, note that the proposed parameter identification methodology can be applied by using only the information from the vector of generalized coordinates q(t). Hence, this data can also be used off-line to apply the proposed parameter identification scheme, i.e., the proposed methodology can be applied in an on-line or an off-line manner.

7. Conclusion

This paper presented a methodology for parameter identification of conservative Hamiltonian systems based on the use of first integrals. Three different adaptation laws were proposed, and the parameter convergence was theoretically demonstrated. Besides, the use of a meta-heuristic algorithm, namely the CSA, was proposed for tuning the parameter adaptation gains of each methodology. Numerical results showed that any of the proposed methodologies can obtain a good set of estimated parameters, and that such results can be obtained even if the required signals p(t), q̇(t), and ṗ(t) are estimated through the simple dirty derivative. The precision of the parameter estimates provided by each methodology confirms the validity and reliability of the proposed scheme. Future research directions include the extension of the proposed parameter identification methodology to non-conservative Hamiltonian systems, and the analysis of the performance of the proposed scheme when other types of first integral functions are selected. Also, it is essential to verify whether the proposed methodology can be further simplified and improved when considering Hamiltonian systems with cyclic coordinates.

Acknowledgment

The author gratefully acknowledges the financial support from CONACyT (Consejo Nacional de Ciencia y Tecnología) under the Project Cátedras 1537. Also, the author acknowledges the valuable support provided by Lorena Ledón during the writing of this manuscript.

References

[1] Y. Miao, M. Zhao, J. Lin, Identification of mechanical compound-fault based on the improved parameter-adaptive variational mode decomposition, ISA Trans. 84 (2019) 82–95.
[2] R. Miranda-Colorado, Closed-loop parameter identification of second-order non-linear systems: a distributional approach using delayed reference signals, IET Control Theory Appl. (2018), doi:10.1049/iet-cta.2018.5457.
[3] H. Zhang, X. Tian, X. Deng, Y. Cao, Batch process fault detection and identification based on discriminant global preserving kernel slow feature analysis, ISA Trans. 79 (2018) 108–126.
[4] G. Fedele, L. D'Alfonso, G. Pin, T. Parisini, Volterra's kernels-based finite-time parameters estimation of the Chua system, Appl. Math. Comput. 318 (2018) 121–130.



[5] Z. Hafezi, M.M. Arefi, Recursive generalized extended least squares and RML algorithms for identification of bilinear systems with ARMA noise, ISA Trans. (2018), doi:10.1016/j.isatra.2018.12.015.
[6] R. Miranda-Colorado, A new parameter identification algorithm for a class of second order nonlinear systems: an on-line closed-loop approach, Int. J. Control Autom. Syst. 16 (3) (2018) 1142–1155.
[7] J. Craig, Introduction to Robotics: Mechanics and Control, third ed., Pearson, 2004. ISBN: 978-0201543612.
[8] A. van der Schaft, L2-Gain and Passivity Techniques in Nonlinear Control, Springer, Communications and Control Engineering, 1996. ISBN: 978-3-319-49991-8.
[9] C.J. Song, Y. Zhang, Conserved quantities for Hamiltonian systems on time scales, Appl. Math. Comput. 313 (2017) 24–36.
[10] R.F. Nagaev, Dynamics of Synchronising Systems, Springer-Verlag, Berlin Heidelberg, 2003. ISBN: 978-3-642-53655-7.
[11] Z. Zhou, On the first integral and equivalence of nonlinear differential equations, Appl. Math. Comput. 268 (2015) 295–302.
[12] L. Ljung, T. Söderström, Theory and Practice of Recursive Identification, MIT Press, Cambridge, MA, 1983.
[13] T. Iwasaki, T. Sato, A. Morita, Auto-tuning of two-degree-of-freedom motor control for high-accuracy trajectory motion, Control Eng. Pract. 4 (4) (1996) 537–544.
[14] J. Chen, C. Richard, J.C.M. Bermudez, Reweighted nonnegative least-mean-square algorithm, Signal Process. 128 (2016) 131–141.
[15] K. Xiong, S. Wang, Robust least mean logarithmic square adaptive filtering algorithms, J. Frankl. Inst. 356 (1) (2019) 654–674.
[16] Y. Zhou, A. Han, S. Yan, et al., A fast method for on-line closed-loop system identification, Int. J. Adv. Manuf. Technol. 31 (1–2) (2006) 78–84.
[17] P. Huang, Z. Lu, Z. Liu, State estimation and parameter identification method for dual-rate system based on improved Kalman prediction, Int. J. Control Autom. Syst. 14 (4) (2016) 998–1004.
[18] Q. Zhang, Q. Wang, G. Li, Switched system identification based on the constrained multi-objective optimization problem with application to the servo turntable, Int. J. Control Autom. Syst. 14 (5) (2016) 1153–1159.
[19] J. Davila, L. Fridman, A. Poznyak, Observation and identification of mechanical systems via second order sliding modes, Int. J. Control 79 (10) (2006) 1251–1262.
[20] H. Xu, C.G. Soares, Vector field path following for surface marine vessel and parameter identification based on LS-SVM, Ocean Eng. 113 (2016) 151–161.
[21] H. Thabet, M. Ayadi, F. Rotella, Experimental comparison of new adaptive PI controllers based on the ultra local model parameter identification, Int. J. Control Autom. Syst. 14 (6) (2016) 1520–1527.
[22] J. Becedas, M. Mamani, V. Feliu, Algebraic parameters identification of DC motors: methodology and analysis, Int. J. Syst. Sci. 41 (10) (2010) 1241–1255.
[23] R. Miranda-Colorado, G.C. Castro, Closed-loop identification applied to a DC servomechanism: controller gains analysis, Math. Probl. Eng. 2013 (2013) 1–10.
[24] R. Miranda-Colorado, J.M. Valenzuela, An efficient on-line parameter identification algorithm for nonlinear servomechanisms with an algebraic technique for state estimation, Asian J. Control 19 (6) (2017) 2127–2142.
[25] Q. Wu, M. Saif, Robust fault diagnosis of a satellite system using a learning strategy and second order sliding mode observer, IEEE Syst. J. 4 (1) (2010) 112–121.
[26] X. Cheng, Y. Kawano, J.M.A. Scherpen, Reduction of second-order network systems with structure preservation, IEEE Trans. Autom. Control 62 (10) (2017) 5026–5038.
[27] J.M. Nealis, R.C. Smith, Nonlinear adaptive parameter estimation techniques for magnetic transducers operating in hysteretic regimes, Proc. Conf. Model. Signal Process. Control 4693 (2002) 25–36, San Diego.
[28] J.H. Seinfeld, Nonlinear estimation theory, Ind. Eng. Chem. 62 (1) (1970) 32–42.
[29] A. Hernandez, A. Poznyak, Parametric estimation in Hamiltonian systems, in: Proceedings of the 15th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, 2018, pp. 5–7.
[30] E. Korovessi, A.A. Linninger, Batch Processes, CRC Press, Taylor & Francis, 2005. ISBN: 978-0824725228.
[31] D. Xue, Y. Chen, System Identification Techniques with Matlab and Simulink, Wiley, 2013. ISBN: 978-1-118-64792-9.
[32] R. Sethi, S. Panda, B.P. Sahoo, Cuckoo search algorithm based optimal tuning of PID structured TCSC controller, in: L. Jain, H. Behera, J. Mandal, D. Mohapatra (Eds.), Computational Intelligence in Data Mining, Vol. 1, Smart Innovation, Systems and Technologies, vol. 31, Springer, New Delhi, 2015.
[33] Y. Pattanapong, C. Deelertpaiboon, Fuzzy-tuned PI controller for plate balancing on a mobile vehicle system, in: Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, Aug. 6–9, 2017.
[34] Y. Abe, M. Konishi, J. Imai, R. Hasegawa, M. Watanabe, H. Kamijo, Neural network-based PID gain tuning of chemical plant controller, Electr. Eng. Jpn. 171 (4) (2010) 940–947.
[35] R. Martínez-Soto, O. Castillo, L.T. Aguilar, A. Rodríguez, A hybrid optimization method with PSO and GA to automatically design type-1 and type-2 fuzzy logic controllers, Int. J. Mach. Learn. Cybern. 6 (2015) 175–196.
[36] H. Babaee, A. Khosravi, Multi-objective COA for design robust iterative learning control via second order sliding mode, Int. J. Control Sci. Eng. 2 (6) (2012) 143–149.
[37] X.S. Yang, S. Deb, Engineering optimization by cuckoo search, Int. J. Math. Model. Numer. Optim. 1 (4) (2010) 330–343.
[38] X. Liu, M. Fu, Cuckoo search algorithm based on frog leaping local search and chaos theory, Appl. Math. Comput. 266 (2015) 1083–1092.
[39] R. Miranda-Colorado, Kinematics and Dynamics of Robotic Manipulators (in Spanish), Alfaomega, 2016. ISBN: 978-607-622-048-1.
[40] A.S. Poznyak, Advanced Mathematical Tools for Automatic Control Engineers, Volume 1: Deterministic Techniques, Elsevier, 2008. ISBN: 978-0-08-044674-5.
[41] H. Goldstein, C. Poole, J. Safko, Classical Mechanics, third ed., Addison Wesley, 2001. ISBN: 9780201657029.
[42] S. Sastry, M. Bodson, Adaptive Control: Stability, Convergence, and Robustness, Prentice Hall, Englewood Cliffs, New Jersey, 1989.
[43] A. Levant, Robust exact differentiation via sliding mode technique, Automatica 34 (3) (1998) 379–384.
[44] R. Seeber, M. Horn, Stability proof for a well established super-twisting parameter setting, Automatica 84 (2017) 241–243.
[45] R. Miranda-Colorado, L.T. Aguilar, J. Herrero, Reduction of power consumption on quadrotor vehicles via trajectory design and a controller-gains tuning stage, Aerosp. Sci. Technol. 78 (2018) 280–296.
