Journal of Computational and Applied Mathematics
Gradient estimation algorithms for the parameter identification of bilinear systems using the auxiliary model ✩
Feng Ding a,b,c,∗, Ling Xu c, Dandan Meng c, Xue-Bo Jin d, Ahmed Alsaedi e, Tasawar Hayat e

a School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, PR China
b College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, PR China
c School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, PR China
d School of Computer and Information Engineering, Beijing Technology and Business University, Beijing 100048, PR China
e Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Article history: Received 16 August 2016; Received in revised form 20 July 2019.

Keywords: Parameter estimation; Gradient search; Iterative algorithm; Measurement information; Bilinear system; State space system.
Abstract

For bilinear systems with white noise, the difficulty of identification is that the state equation contains the product of the state and the input. To overcome this difficulty, we derive the input–output representation of a class of special bilinear systems by means of a transformation, and present a stochastic gradient (SG) algorithm and a gradient-based iterative algorithm for estimating the parameters of these systems from the measured input–output data by means of the auxiliary model. The proposed gradient-based iterative algorithm generates more accurate parameter estimates than the auxiliary model based SG algorithm. The performance of the proposed algorithms is tested by two numerical examples.

© 2019 Elsevier B.V. All rights reserved.
✩ This work was supported by the National Natural Science Foundation of China (Nos. 61873111 and 61571182) and the 111 Project (B12018).
∗ Corresponding author at: School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, PR China. E-mail address: [email protected] (F. Ding).

1. Introduction

For years, system modeling and parameter identification have been fundamental to system analysis and control design [1–3], and have become an active topic in the fields of control and signal modeling [4,5]. Owing to their simple structures, linear systems have been studied extensively in identification [6–8]; examples include the partially-coupled least squares based iterative parameter estimation for multivariable output-error-like autoregressive moving average systems [9], the maximum likelihood recursive identification for multivariate equation-error autoregressive moving average systems using data filtering [10], and the recursive parameter estimation algorithm for multivariate output-error systems [11]. However, almost all practical industrial systems exhibit nonlinear characteristics, which has promoted research on the identification of nonlinear systems. Many identification methods have been reported for linear stochastic systems [12–15] and nonlinear stochastic systems. In order to describe actual nonlinear systems with relatively simple structures, nonlinear systems are often divided into block-oriented models, which include bilinear-in-parameter systems, input nonlinear systems, output nonlinear systems and input–output nonlinear systems. As a special class of nonlinear systems, bilinear systems are the focus of this paper.

Many parameter estimation methods have been developed for linear systems, bilinear systems and nonlinear systems.
For system modeling and identification, a number of parameter estimation methods exist, such as the recursive least squares (RLS) method, the over-parameterization method and the iterative identification method. The RLS method is simple and easy to implement, but it generates biased parameter estimates for stochastic systems with correlated or colored noise. On the identification of bilinear systems, Li et al. presented a filtering-based maximum likelihood gradient iterative estimation algorithm [16] and a filtering-based maximum likelihood iterative estimation algorithm based on the hierarchical identification principle [17]. State and parameter estimation algorithms for bilinear stochastic systems can be found in [18,19].

On the basis of the work on the multi-innovation stochastic gradient method and the recursive least squares method for bilinear systems [20], this paper presents an auxiliary model based stochastic gradient (AM-SG) algorithm and an auxiliary model gradient-based iterative (AM-GI) algorithm for bilinear output-error systems. The main contributions are to derive the input–output representation of a class of special bilinear systems by means of a transformation, and to derive new identification algorithms that estimate the parameters of these systems from the measured input–output data by means of the auxiliary model. The proposed gradient-based iterative algorithm achieves higher estimation accuracy than the AM-SG algorithm by repeatedly utilizing the input and output data. The proposed iterative identification algorithm can also be applied to other fields, such as iterative learning control design for linear discrete-time systems with multiple high-order internal models and low-cost lateral active suspension of high-speed trains for ride quality based on the resonant control method [21,22].

The paper is organized as follows. Section 2 formulates the identification problem for bilinear systems. Section 3 presents an auxiliary model based stochastic gradient identification algorithm. Section 4 proposes the gradient-based iterative method for the identification of bilinear systems. Section 5 illustrates the proposed methods with two numerical examples and verifies their feasibility. Section 6 concludes the paper.

2. System description and identification model

Throughout this paper, z represents the unit forward shift operator: zx(t) = x(t + 1), z^-1 x(t) = x(t − 1); the superscript T stands for the matrix/vector transpose. Consider the single-input single-output bilinear discrete-time system in the observability canonical form [20,23]:

x(t + 1) = Ax(t) + Bx(t)u(t) + f u(t),    (1)
y(t) = hx(t) + v(t),    (2)
where x(t) := [x1(t), x2(t), ..., xn(t)]^T ∈ R^n is the state vector, u(t) ∈ R and y(t) ∈ R are the system input and output, the sequence {v(t)} is a stochastic white noise with zero mean, and A ∈ R^{n×n}, B ∈ R^{n×n}, f ∈ R^n and h ∈ R^{1×n} are the system parameter matrices and vectors:
     ⎡  0      1      0     ···    0  ⎤
     ⎢  0      0      1     ···    0  ⎥
A := ⎢  ⋮      ⋮      ⋮      ⋱     ⋮  ⎥ ∈ R^{n×n},    f := [f1, f2, ..., fn]^T,    h := [1, 0, ..., 0, 0],
     ⎢  0      0      0     ···    1  ⎥
     ⎣ −an   −an−1  −an−2   ···  −a1  ⎦

     ⎡ 0 ⎤
B := ⎢   ⎥ ∈ R^{n×n},    g = [−bn, −bn−1, ..., −b1] ∈ R^n.
     ⎣ g ⎦
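For illustration, the canonical structure of A, B, f and h can be assembled as in the following sketch; the helper name canonical_form and the NumPy usage are illustrative choices, not part of the paper.

```python
# Illustrative sketch: assemble the canonical-form matrices of (1)-(2)
# from coefficient vectors a = [a1,...,an], b = [b1,...,bn], f = [f1,...,fn].
import numpy as np

def canonical_form(a, b, f):
    """Return A, B, f, h of the bilinear observability canonical form."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)               # superdiagonal of ones
    A[-1, :] = -np.asarray(a, float)[::-1]   # last row [-an, ..., -a1]
    B = np.zeros((n, n))
    B[-1, :] = -np.asarray(b, float)[::-1]   # last row g = [-bn, ..., -b1]
    fv = np.asarray(f, float)
    h = np.zeros(n); h[0] = 1.0              # h = [1, 0, ..., 0]
    return A, B, fv, h

# One step of the state recursion x(t+1) = A x(t) + B x(t) u(t) + f u(t):
A, B, fv, h = canonical_form([0.65, -0.30], [-0.32, -0.30], [-0.80, 0.82])
x, u = np.zeros(2), 1.0
x_next = A @ x + (B @ x) * u + fv * u
y = h @ x                                    # noise-free output
```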
As can be seen from (1), the state equation contains the product of the state vector x(t) and the system input u(t), which is the main difficulty in deriving an input–output representation for bilinear systems; the special structure of the matrix B is therefore exploited. From the state equation in (1), we obtain the following relations:

xi(t + 1) = xi+1(t) + fi u(t),  i = 1, 2, ..., n − 1,
(3)
xn (t + 1) = −an x1 (t) − an−1 x2 (t) − an−2 x3 (t) − · · · − a1 xn (t)
− [bn x1 (t) + bn−1 x2 (t) + bn−2 x3 (t) + · · · + b1 xn (t)]u(t) + fn u(t).
(4)
From the above recursive relations, it is easy to obtain xn (t) = xn−1 (t + 1) − fn−1 u(t)
= x1 (t + n − 1) − f1 u(t + n − 2) − f2 u(t + n − 3) − · · · − fn−1 u(t).
(5)
Multiplying both sides of (5) by z and combining the resulting equation with (4) gives
−[an , an−1 , an−2 , . . . , a1 ]x(t) − [bn , bn−1 , bn−2 , . . . , b1 ]x(t)u(t) + fn u(t) = x1 (t + n) − f1 u(t + n − 1) − f2 u(t + n − 2) − · · · − fn−1 u(t + 1).
(6)
Using (3) and (6), we have

(1 + a1 z^-1 + a2 z^-2 + ··· + an z^-n)z^n x1(t) + [(b1 z^-1 + b2 z^-2 + ··· + bn z^-n)z^n x1(t)]u(t)
  = [c1 z^-1 + c2 z^-2 + ··· + cn z^-n]z^n u(t) + u(t){[d2 z^-2 + d3 z^-3 + ··· + dn z^-n]z^n u(t)},    (7)
where

c1 = f1,
c2 = −a1 f1 + f2,
c3 = −a2 f1 − a1 f2 + f3,
⋮
cn = −an−1 f1 − an−2 f2 − ··· − a1 fn−1 + fn,    (8)

d2 = −b1 f1,
d3 = −b2 f1 − b1 f2,
d4 = −b3 f1 − b2 f2 − b1 f3,
⋮
dn = −bn−1 f1 − bn−2 f2 − ··· − b1 fn−1.    (9)
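As a concrete check of these relations, the following sketch evaluates (8)–(9) for a given order n; the helper name is illustrative. With the second-order coefficients a = [0.65, −0.30], b = [−0.32, −0.30] and f = [−0.80, 0.82] used later in Example 1, it gives c1 = −0.80, c2 = 1.34 and d2 = −0.256.

```python
# Illustrative sketch: evaluate the triangular relations (8)-(9).
import numpy as np

def cd_from_abf(a, b, f):
    """Return c = [c1, ..., cn] and d = [d2, ..., dn] from (8)-(9)."""
    n = len(a)
    a, b, f = (np.asarray(v, float) for v in (a, b, f))
    Ta = np.eye(n)                 # (8): ones on the diagonal, -a terms below
    for i in range(1, n):
        for j in range(i):
            Ta[i, j] = -a[i - 1 - j]
    c = Ta @ f
    # (9): d_{i+1} = -(b_i f_1 + b_{i-1} f_2 + ... + b_1 f_i), i = 1, ..., n-1
    d = np.array([-(b[:i][::-1] @ f[:i]) for i in range(1, n)])
    return c, d

# Second-order illustration (the coefficients used later in Example 1):
c, d = cd_from_abf([0.65, -0.30], [-0.32, -0.30], [-0.80, 0.82])
# c = [-0.80, 1.34], d = [-0.256]
```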
Hence, Eq. (7) can be rewritten as

A(z)z^n x1(t) + u(t)[B(z)z^n x1(t)] = C(z)z^n u(t) + u(t)[D(z)z^n u(t)],

where

A(z) := 1 + a1 z^-1 + a2 z^-2 + ··· + an z^-n,
B(z) := b1 z^-1 + b2 z^-2 + ··· + bn z^-n,
C(z) := c1 z^-1 + c2 z^-2 + ··· + cn z^-n,
D(z) := d2 z^-2 + d3 z^-3 + ··· + dn z^-n.

Replacing t with t − n gives
[A(z) + u(t − n)B(z)]x1 (t) = [C (z) + u(t − n)D(z)]u(t).
(10)
Then we have

x1(t) = {[C(z) + u(t − n)D(z)]/[A(z) + u(t − n)B(z)]} u(t).    (11)
Substituting A(z), B(z), C(z) and D(z) into (11) leads to

x1(t) = [1 − A(z)]x1(t) − u(t − n)B(z)x1(t) + C(z)u(t) + u(t − n)D(z)u(t)
      = (−a1 z^-1 − a2 z^-2 − ··· − an z^-n)x1(t) − u(t − n)(b1 z^-1 + b2 z^-2 + ··· + bn z^-n)x1(t)
        + (c1 z^-1 + c2 z^-2 + ··· + cn z^-n)u(t) + u(t − n)(d2 z^-2 + d3 z^-3 + ··· + dn z^-n)u(t)
      = [−x1(t − 1), −x1(t − 2), ..., −x1(t − n), −u(t − n)x1(t − 1), −u(t − n)x1(t − 2), ..., −u(t − n)x1(t − n),
         u(t − 1), u(t − 2), ..., u(t − n), u(t − n)u(t − 2), u(t − n)u(t − 3), ..., u(t − n)u(t − n)]
        × [a1, a2, ..., an, b1, b2, ..., bn, c1, c2, ..., cn, d2, d3, ..., dn]^T
      = [ϕx^T(t), u(t − n)ϕx^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)] [a^T, b^T, c^T, d^T]^T,    (12)

where
ϕx(t) := [−x1(t − 1), −x1(t − 2), ..., −x1(t − n)]^T ∈ R^n,
ϕu(t) := [u(t − 2), u(t − 3), ..., u(t − n)]^T ∈ R^{n−1},
a := [a1, a2, ..., an]^T ∈ R^n,    b := [b1, b2, ..., bn]^T ∈ R^n,
c := [c1, c2, ..., cn]^T ∈ R^n,    d := [d2, d3, ..., dn]^T ∈ R^{n−1}.

Substituting (12) into (2) gives
y(t) = [ϕx^T(t), u(t − n)ϕx^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)] [a^T, b^T, c^T, d^T]^T + v(t).    (13)
Define the information vector ϕ(t) and the parameter vector θ as
ϕ(t) := [ϕx^T(t), u(t − n)ϕx^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T ∈ R^{4n−1},
θ := [a^T, b^T, c^T, d^T]^T ∈ R^{4n−1}.

Then Eq. (13) can be rewritten compactly as

y(t) = ϕ^T(t)θ + v(t).    (14)
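To make the structure of the information vector in (12)–(14) concrete, a short sketch follows; the helper name build_regressor and the NumPy usage are illustrative, and the noise-free output x1 is assumed available here, whereas the algorithms in Sections 3 and 4 replace it with an auxiliary model estimate.

```python
# Illustrative sketch of the regressor structure in (12)-(14).
import numpy as np

def build_regressor(x1, u, t, n):
    """Information vector phi(t) of dimension 4n-1 for a system of order n."""
    phi_x = np.array([-x1[t - i] for i in range(1, n + 1)])   # [-x1(t-1), ...]
    phi_u = np.array([u[t - i] for i in range(2, n + 1)])     # [u(t-2), ...]
    return np.concatenate((phi_x, u[t - n] * phi_x, [u[t - 1]],
                           phi_u, u[t - n] * phi_u))

# With theta = [a; b; c; d] stacked in the same order, (14) reads
# y(t) = build_regressor(x1, u, t, n) @ theta + v(t).
```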
Eq. (14) is the identification model of the bilinear stochastic system. The objective is to propose new identification methods for estimating the unknown parameters ai, bi, ci and di, and to evaluate the accuracy of the parameter estimates through computer simulations.

3. The auxiliary model based stochastic gradient algorithm

In this section, an auxiliary model based stochastic gradient (AM-SG) algorithm is derived for bilinear systems; it serves as a baseline for illustrating the advantages of the iterative algorithm proposed in Section 4. According to the identification model in (14), introduce the quadratic criterion function

J1(θ) = (1/2)[y(t) − ϕ^T(t)θ]^2.

Let θˆ(t) be the estimate of θ at time t. Minimizing the criterion function J1(θ) by the negative gradient search gives the following recursive relations [24]:

θˆ(t) = θˆ(t − 1) − (1/r(t)) grad[J1(θˆ(t − 1))]
      = θˆ(t − 1) + (ϕ(t)/r(t)) [y(t) − ϕ^T(t)θˆ(t − 1)],    (15)
r(t) = r(t − 1) + ∥ϕ(t)∥^2,    r(0) = 1.    (16)
From the definition of the information vector ϕ(t), it can be observed that ϕ(t) consists of the measurable input data and the unknown noise-free outputs x1(t − i), so Eqs. (15)–(16) cannot compute the parameter estimate θˆ(t) recursively. The solution here is to adopt the auxiliary model method [25]: replace the unknown terms x1(t − i) in ϕx(t) with the auxiliary-model variables xˆ1(t − i) and define
ϕˆ x (t) := [−ˆx1 (t − 1), −ˆx1 (t − 2), . . . , −ˆx1 (t − n)]T ∈ Rn .
(17)
Then the estimate of ϕ(t) can be expressed as

ϕˆ(t) = [ϕˆx^T(t), u(t − n)ϕˆx^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T ∈ R^{4n−1}.

Replacing ϕ(t) in (15)–(16) with ϕˆ(t), we can summarize the AM-SG algorithm for bilinear systems as

θˆ(t) = θˆ(t − 1) + (ϕˆ(t)/r(t)) [y(t) − ϕˆ^T(t)θˆ(t − 1)],    (18)
r(t) = r(t − 1) + ∥ϕˆ(t)∥^2,    r(0) = 1,    (19)
ϕˆ(t) = [ϕˆx^T(t), u(t − n)ϕˆx^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T,    (20)
ϕˆx(t) = [−xˆ1(t − 1), −xˆ1(t − 2), ..., −xˆ1(t − n)]^T,    (21)
ϕu(t) = [u(t − 2), u(t − 3), ..., u(t − n)]^T,    (22)
xˆ1(t) = ϕˆ^T(t)θˆ(t),    (23)
θˆ(t) = [aˆ^T(t), bˆ^T(t), cˆ^T(t), dˆ^T(t)]^T,    (24)
aˆ(t) = [aˆ1(t), aˆ2(t), ..., aˆn(t)]^T,
bˆ(t) = [bˆ1(t), bˆ2(t), ..., bˆn(t)]^T,
cˆ(t) = [cˆ1(t), cˆ2(t), ..., cˆn(t)]^T,
dˆ(t) = [dˆ2(t), dˆ3(t), ..., dˆn(t)]^T.
The computation procedure of the AM-SG algorithm in (18)–(24) is as follows.

1. Initialize: let t = n + 1, θˆ(0) = 1_{4n−1}/p0 (1_{4n−1} is a (4n−1)-dimensional vector of ones and p0 is a large constant), r(0) = 1, xˆ1(t − i) = 0 for i = 1, 2, ..., n.
2. Collect the input–output data u(t) and y(t). Form ϕu(t), ϕˆx(t) and ϕˆ(t) by (22), (21) and (20), respectively.
3. Compute r(t) by (19).
4. Update the parameter estimate θˆ(t) by (18).
5. Compute xˆ1(t) by (23).
6. Increase t by 1 and go to Step 2.
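For illustration, a minimal sketch of the AM-SG recursion (18)–(24) is given below; the initialization constant p0, the data containers and the handling of the first n samples are implementation choices rather than part of the algorithm.

```python
# Minimal illustrative sketch of the AM-SG recursion (18)-(24).
import numpy as np

def am_sg(u, y, n, p0=1e6):
    """Return the AM-SG parameter estimate theta_hat of dimension 4n-1."""
    L = len(y)
    theta = np.ones(4 * n - 1) / p0            # theta_hat(0) = 1/p0 * ones
    r = 1.0                                    # r(0) = 1
    x1_hat = np.zeros(L)                       # auxiliary-model outputs
    for t in range(n + 1, L):
        phi_x = np.array([-x1_hat[t - i] for i in range(1, n + 1)])   # (21)
        phi_u = np.array([u[t - i] for i in range(2, n + 1)])         # (22)
        phi = np.concatenate((phi_x, u[t - n] * phi_x, [u[t - 1]],
                              phi_u, u[t - n] * phi_u))               # (20)
        r += phi @ phi                                                # (19)
        theta = theta + phi / r * (y[t] - phi @ theta)                # (18)
        x1_hat[t] = phi @ theta                                       # (23)
    return theta
```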
The flowchart of computing θˆ(t) in the AM-SG algorithm is shown in Fig. 1.
Fig. 1. The flowchart of the AM-SG algorithm.
4. The gradient-based iterative algorithm

From the above section, we know that the AM-SG algorithm can identify bilinear systems with white noise. However, its main drawback is that when computing the parameter estimate vector θˆ(t) at instant t (1 < t < L), the algorithm uses only the measured data {u(i), y(i): i = 0, 1, 2, ..., t} up to and including time t, and does not use the data {u(i), y(i): i = t + 1, t + 2, ..., L} after time t. This motivates new iterative algorithms that use all the measured data {u(i), y(i): i = 0, 1, 2, ..., t, ..., L} at each iteration, so that the parameter estimation accuracy can be greatly improved. Next, we investigate the auxiliary model gradient-based iterative (AM-GI) identification approach for bilinear systems.

Consider the newest p data and define the stacked output vector Y(t) and the stacked information matrix Φ(t) as
Y(t) := [y(t), y(t − 1), ..., y(t − p + 1)]^T ∈ R^p,
Φ(t) := [ϕ(t), ϕ(t − 1), ..., ϕ(t − p + 1)]^T ∈ R^{p×(4n−1)}.    (25)
According to (14) and (25), define the quadratic criterion function

J2(θ) := (1/2)∥Y(t) − Φ(t)θ∥^2.

Let k = 1, 2, 3, ... be an iteration index and θˆk(t) be the iterative estimate of θ. Minimizing J2(θ) by the negative gradient search yields the following iterative algorithm for computing θˆk(t):

θˆk(t) = θˆk−1(t) − µk(t) grad[J2(θˆk−1(t))] = θˆk−1(t) + µk(t)Φ^T(t)[Y(t) − Φ(t)θˆk−1(t)],    (26)
where µk(t) ⩾ 0 is the iterative step-size or convergence factor. A difficulty arises here because Φ(t) in (26) contains the unknown inner variables x1(t − i), so the iterative estimates of θ cannot be computed directly. The solution is based on the hierarchical identification principle. Let xˆ1,k(t − i) be the estimate of x1(t − i) at iteration k, and let ϕˆk(t) denote the information vector obtained by replacing x1(t − i) in ϕ(t) with xˆ1,k−1(t − i), i.e.,

ϕˆk(t) := [ϕˆx,k^T(t), u(t − n)ϕˆx,k^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T ∈ R^{4n−1},
ϕˆx,k(t) := [−xˆ1,k−1(t − 1), −xˆ1,k−1(t − 2), ..., −xˆ1,k−1(t − n)]^T ∈ R^n,
ϕu(t) := [u(t − 2), u(t − 3), ..., u(t − n)]^T ∈ R^{n−1}.
Let θˆk(t) be the estimate of θ at iteration k. From (12), we have x1(t − i) = ϕ^T(t − i)θ. Replacing ϕ(t − i) and θ in this equation with their estimates ϕˆk(t − i) and θˆk(t), respectively, we can compute the estimate of x1(t − i) at iteration k by

xˆ1,k(t − i) = ϕˆk^T(t − i)θˆk(t).    (27)
Define

Φˆk(t) := [ϕˆk(t), ϕˆk(t − 1), ..., ϕˆk(t − p + 1)]^T ∈ R^{p×(4n−1)}.

Replacing Φ(t) in (26) with Φˆk(t), we have

θˆk(t) = θˆk−1(t) + µk(t)Φˆk^T(t)[Y(t) − Φˆk(t)θˆk−1(t)]    (28)
       = [I − µk(t)Φˆk^T(t)Φˆk(t)]θˆk−1(t) + µk(t)Φˆk^T(t)Y(t),    (29)

where I represents an identity matrix of appropriate size. Eq. (29) can be regarded as a dynamical system with state θˆk(t) for fixed t. In order to guarantee the convergence of θˆk(t), the system matrix I − µk(t)Φˆk^T(t)Φˆk(t) must have all its eigenvalues inside the unit circle, that is,

−I ⩽ I − µk(t)Φˆk^T(t)Φˆk(t) ⩽ I.

Thus, one safe choice of µk(t) is to satisfy

µk(t) ⩽ 2/λmax[Φˆk^T(t)Φˆk(t)],    (30)

where λmax[X] denotes the largest eigenvalue of the symmetric matrix X. Hence, the AM-GI identification algorithm for bilinear systems can be summarized as

θˆk(t) = θˆk−1(t) + µk(t)Φˆk^T(t)[Y(t) − Φˆk(t)θˆk−1(t)],    (31)
µk(t) ⩽ 2λmax^{-1}[Φˆk^T(t)Φˆk(t)],    (32)
Y(t) = [y(t), y(t − 1), ..., y(t − p + 1)]^T,    (33)
Φˆk(t) = [ϕˆk(t), ϕˆk(t − 1), ..., ϕˆk(t − p + 1)]^T,    (34)
ϕˆk(t) = [ϕˆx,k^T(t), u(t − n)ϕˆx,k^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T,    (35)
ϕˆx,k(t) = [−xˆ1,k−1(t − 1), −xˆ1,k−1(t − 2), ..., −xˆ1,k−1(t − n)]^T,    (36)
ϕu(t) = [u(t − 2), u(t − 3), ..., u(t − n)]^T,    (37)
xˆ1,k(t − i) = ϕˆk^T(t − i)θˆk(t),    i = 0, 1, ..., t,    (38)
θˆk(t) = [aˆk^T(t), bˆk^T(t), cˆk^T(t), dˆk^T(t)]^T.    (39)

If we take t = p = L (L is the data length), we have

θˆk = θˆk−1 + µk Φˆk^T(L)[Y(L) − Φˆk(L)θˆk−1],    (40)
µk = 2λmax^{-1}[Φˆk^T(L)Φˆk(L)],    (41)
Y(L) = [y(1), y(2), ..., y(L)]^T,    (42)
Φˆk(L) = [ϕˆk(1), ϕˆk(2), ..., ϕˆk(L)]^T,    (43)
ϕˆk(t) = [ϕˆx,k^T(t), u(t − n)ϕˆx,k^T(t), u(t − 1), ϕu^T(t), u(t − n)ϕu^T(t)]^T,    t = 1, 2, ..., L,    (44)
ϕˆx,k(t) = [−xˆ1,k−1(t − 1), −xˆ1,k−1(t − 2), ..., −xˆ1,k−1(t − n)]^T,    (45)
ϕu(t) = [u(t − 2), u(t − 3), ..., u(t − n)]^T,    (46)
xˆ1,k(t) = ϕˆk^T(t)θˆk,    (47)
θˆk = [aˆk^T, bˆk^T, cˆk^T, dˆk^T]^T.    (48)
Fig. 2. The flowchart of the AM-GI algorithm.
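For illustration, a minimal sketch of the AM-GI iteration (40)–(48) with t = p = L is given below; the first n samples are skipped to avoid undefined past data, the step size is taken at the bound (41), and the stopping rule follows Step 7 of the procedure described next. These implementation details are illustrative choices rather than part of the algorithm.

```python
# Minimal illustrative sketch of the AM-GI iteration (40)-(48) with t = p = L.
import numpy as np

def am_gi(u, y, n, k_max=50, tol=1e-6, p0=1e6):
    """Return the AM-GI parameter estimate theta_hat_k of dimension 4n-1."""
    L = len(y)
    theta = np.ones(4 * n - 1) / p0          # theta_hat_0
    x1_hat = np.zeros(L)                     # x1_hat_{k-1}(t), initially 0

    def regressor(t):                        # (44)-(46)
        phi_x = np.array([-x1_hat[t - i] for i in range(1, n + 1)])
        phi_u = np.array([u[t - i] for i in range(2, n + 1)])
        return np.concatenate((phi_x, u[t - n] * phi_x, [u[t - 1]],
                               phi_u, u[t - n] * phi_u))

    Y = np.asarray(y, float)[n:]             # stacked output vector (42)
    for k in range(1, k_max + 1):
        Phi = np.array([regressor(t) for t in range(n, L)])           # (43)
        mu = 2.0 / np.linalg.eigvalsh(Phi.T @ Phi)[-1]                # (41)
        theta_new = theta + mu * Phi.T @ (Y - Phi @ theta)            # (40)
        x1_hat[n:] = Phi @ theta_new                                  # (47)
        if np.linalg.norm(theta_new - theta) < tol:                   # Step 7
            theta = theta_new
            break
        theta = theta_new
    return theta
```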
The computation procedure of the AM-GI algorithm in (40)–(48) is as follows.

1. Give a small ε > 0 and a data length L, and collect the input–output data u(t) and y(t). Form the stacked output vector Y(L) by (42).
2. Let k = 1 and set the initial values: θˆ0 = 1_{4n−1}/p0, xˆ1,0(t) = 0 for t = 1, 2, ..., L.
3. Form ϕu(t), ϕˆx,k(t), ϕˆk(t) and Φˆk(L) by (46), (45), (44) and (43), respectively.
4. Compute µk by (41).
5. Update the parameter estimate θˆk by (40).
6. Compute xˆ1,k(t) by (47).
7. Compare θˆk and θˆk−1: if ∥θˆk − θˆk−1∥ < ε, terminate the procedure and obtain the iteration number k and the estimate θˆk; otherwise, increase k by 1 and go to Step 3.

The flowchart of computing θˆk in the AM-GI algorithm is shown in Fig. 2.

The proposed auxiliary model based stochastic gradient algorithm and the auxiliary model gradient-based iterative algorithm for bilinear systems can be combined with other estimation algorithms to explore new identification methods, and can be applied to other fields [26–33] such as engineering systems and applications [34–37].

5. Examples

Example 1. Consider the following bilinear system:
         ⎡  0      1   ⎤        ⎡  0      0   ⎤            ⎡ −0.80 ⎤
x(t+1) = ⎢             ⎥ x(t) + ⎢             ⎥ x(t)u(t) +  ⎢       ⎥ u(t),
         ⎣ 0.30  −0.65 ⎦        ⎣ 0.30   0.32 ⎦            ⎣  0.82 ⎦
y(t) = [1, 0]x(t) + v(t).

Its input–output representation is given by

y(t) = {[C(z) + u(t − n)D(z)]/[A(z) + u(t − n)B(z)]} u(t) + v(t),
A(z) = 1 + a1 z^-1 + a2 z^-2 = 1 + 0.65z^-1 − 0.30z^-2,
B(z) = b1 z^-1 + b2 z^-2 = −0.32z^-1 − 0.30z^-2,
C(z) = c1 z^-1 + c2 z^-2 = −0.80z^-1 + 1.34z^-2,
D(z) = d2 z^-2 = −0.256z^-2.
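For illustration, a data-generation sketch for this example is given below; the Gaussian input, the noise standard deviation σ = 0.50 and the random seed are illustrative choices consistent with the simulation conditions described in this section.

```python
# Illustrative data generation for Example 1: simulate (1)-(2) with the
# matrices above, a zero-mean unit-variance input and white measurement noise.
import numpy as np

rng = np.random.default_rng(0)               # arbitrary seed
A = np.array([[0.0, 1.0], [0.30, -0.65]])
B = np.array([[0.0, 0.0], [0.30, 0.32]])
f = np.array([-0.80, 0.82])
h = np.array([1.0, 0.0])
L, sigma = 3000, 0.50

u = rng.standard_normal(L)
v = sigma * rng.standard_normal(L)
x = np.zeros(2)
y = np.zeros(L)
for t in range(L):
    y[t] = h @ x + v[t]                      # y(t) = h x(t) + v(t)
    x = A @ x + (B @ x) * u[t] + f * u[t]    # x(t+1)
# The pair (u, y) can now be fed to the AM-SG or AM-GI algorithms and the
# error delta = ||theta_hat - theta|| / ||theta|| evaluated against theta.
```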
Table 1
The AM-SG estimates and their errors with σ² = 0.50² for Example 1.

t            −a1        −a2       −b1       −b2        c1         c2        d2        δ (%)
100          −0.16550   0.22634   0.06269   −0.24555   −0.49304   0.57259   0.26027   69.61362
200          −0.18209   0.24019   0.07233   −0.25967   −0.53611   0.61011   0.22639   66.83602
500          −0.20964   0.25162   0.08216   −0.27297   −0.57247   0.65838   0.19136   63.61980
1000         −0.23685   0.25921   0.07306   −0.28838   −0.59464   0.68798   0.16806   61.85544
1500         −0.24889   0.26274   0.06965   −0.29644   −0.60174   0.70478   0.15289   60.94166
2000         −0.25719   0.26610   0.06764   −0.30288   −0.60926   0.71561   0.14749   60.45244
2500         −0.26459   0.27009   0.06160   −0.30614   −0.61660   0.72302   0.14341   60.08489
3000         −0.27017   0.27099   0.05866   −0.30845   −0.62158   0.72748   0.13933   59.81132
True values  −0.65000   0.30000   0.32000   0.30000    −0.80000   1.34000   −0.25600
Table 2
The AM-SG estimates and their errors with σ² = 1.00² for Example 1.

t            −a1        −a2       −b1       −b2        c1         c2        d2        δ (%)
100          −0.17610   0.26793   0.03414   −0.31191   −0.41762   0.61462   0.39093   74.35821
200          −0.19958   0.27821   0.06174   −0.33198   −0.46536   0.64706   0.33586   70.90599
500          −0.22939   0.27902   0.08198   −0.32464   −0.51058   0.69785   0.28183   66.30672
1000         −0.25865   0.27940   0.08092   −0.33369   −0.53810   0.72595   0.24467   63.88873
1500         −0.27230   0.28099   0.07892   −0.34069   −0.54740   0.74110   0.22350   62.79066
2000         −0.28084   0.28249   0.08000   −0.34685   −0.55665   0.75046   0.21384   62.19403
2500         −0.28912   0.28529   0.07614   −0.34905   −0.56582   0.75712   0.20546   61.65481
3000         −0.29482   0.28542   0.07366   −0.35111   −0.57176   0.76043   0.19912   61.33553
True values  −0.65000   0.30000   0.32000   0.30000    −0.80000   1.34000   −0.25600
Fig. 3. The AM-SG estimation errors δ versus t for Example 1.
Then the parameter vector to be identified is

θ = [a1, a2, b1, b2, c1, c2, d2]^T = [0.65, −0.30, −0.32, −0.30, −0.80, 1.34, −0.256]^T.

In the simulation, the input {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} is a stochastic white noise sequence with zero mean and variance σ² = 0.50² or σ² = 1.00². Applying the AM-SG algorithm and the AM-GI algorithm to estimate the parameters of this system, the parameter estimates and their errors are shown in Tables 1–4. For the AM-SG algorithm, the parameter estimation error is δ := ∥θˆ(t) − θ∥/∥θ∥ and the data length is t = 3000; for the AM-GI algorithm, the parameter estimation error is δ := ∥θˆk − θ∥/∥θ∥ and the data length is L = 1000. The parameter estimation errors δ versus t or k are shown in Figs. 3–4.

Example 2. Consider the following bilinear system:
         ⎡  0      1   ⎤        ⎡  0      0   ⎤            ⎡ −0.60 ⎤
x(t+1) = ⎢             ⎥ x(t) + ⎢             ⎥ x(t)u(t) +  ⎢       ⎥ u(t),
         ⎣ 0.23  −0.58 ⎦        ⎣ 0.38   0.45 ⎦            ⎣  0.92 ⎦

y(t) = [1, 0]x(t) + v(t).
Table 3
The AM-GI estimates and their errors with σ² = 0.50² for Example 1.

k            −a1        −a2       −b1       −b2        c1         c2        d2         δ (%)
1            −0.02698   0.10600   0.00421   0.10670    −0.31801   0.32688   0.00729    80.39733
2            −0.07092   0.17246   0.01005   0.19176    −0.56141   0.58270   0.00370    65.48369
5            −0.37953   0.27102   0.15937   0.40537    −0.73937   0.88119   −0.04399   37.78508
10           −0.61780   0.29655   0.29689   0.34770    −0.79126   1.22189   −0.19957   9.04488
20           −0.65288   0.29911   0.31410   0.29027    −0.79908   1.35471   −0.24664   1.31327
50           −0.65384   0.29765   0.30964   0.28944    −0.79852   1.36560   −0.24248   2.04901
True values  −0.65000   0.30000   0.32000   0.30000    −0.80000   1.34000   −0.25600
Table 4
The AM-GI estimates and their errors with σ² = 1.00² for Example 1.

k            −a1        −a2       −b1        −b2        c1         c2        d2         δ (%)
1            −0.07153   0.07385   −0.02880   0.07647    −0.32177   0.33793   0.01197    79.78274
2            −0.10698   0.14235   −0.01771   0.16241    −0.56241   0.60353   0.01953    64.60480
5            −0.40071   0.25303   0.13296    0.38308    −0.73852   0.91340   −0.02125   36.57876
10           −0.62975   0.29633   0.28146    0.33435    −0.79023   1.25670   −0.18101   7.80953
20           −0.65716   0.29682   0.30065    0.27921    −0.79748   1.38245   −0.23048   3.59760
50           −0.65774   0.29539   0.29671    0.27868    −0.79699   1.39122   −0.22686   4.21143
True values  −0.65000   0.30000   0.32000    0.30000    −0.80000   1.34000   −0.25600
Fig. 4. The AM-GI parameter estimation errors δ versus k for Example 1.
Its input–output representation is given by

y(t) = {[C(z) + u(t − n)D(z)]/[A(z) + u(t − n)B(z)]} u(t) + v(t),
A(z) = 1 + a1 z^-1 + a2 z^-2 = 1 + 0.58z^-1 − 0.23z^-2,
B(z) = b1 z^-1 + b2 z^-2 = −0.45z^-1 − 0.38z^-2,
C(z) = c1 z^-1 + c2 z^-2 = −0.60z^-1 + 1.268z^-2,
D(z) = d2 z^-2 = −0.27z^-2.

Then the parameter vector to be identified is

θ = [a1, a2, b1, b2, c1, c2, d2]^T = [0.58, −0.23, −0.45, −0.38, −0.60, 1.268, −0.27]^T.

In this example, we use the same simulation conditions as in Example 1 and apply the AM-SG algorithm and the AM-GI algorithm to estimate the parameters of this system. The parameter estimates and their errors are shown in Tables 5–8, and the parameter estimation errors δ versus t or k are shown in Figs. 5–6.
Table 5
The AM-SG estimates and their errors with σ² = 0.50² for Example 2.

t            −a1        −a2       −b1        −b2        c1         c2        d2        δ (%)
100          −0.15327   0.19010   0.02112    −0.21625   −0.37678   0.59036   0.25493   73.68485
200          −0.17419   0.19680   0.01389    −0.23384   −0.42027   0.62801   0.23410   71.73003
500          −0.19475   0.20788   0.01514    −0.25408   −0.44825   0.68935   0.19109   68.69744
1000         −0.21978   0.21076   0.00128    −0.28533   −0.46278   0.73042   0.15734   67.48240
1500         −0.23437   0.20882   −0.00712   −0.30178   −0.46541   0.75120   0.13614   66.92696
2000         −0.24284   0.21149   −0.01310   −0.31302   −0.47076   0.76310   0.12988   66.83138
2500         −0.25026   0.21370   −0.02425   −0.32204   −0.47680   0.77092   0.12613   66.98370
3000         −0.25597   0.21349   −0.02909   −0.32706   −0.48133   0.77553   0.11998   66.91573
True values  −0.58000   0.23000   0.45000    0.38000    −0.60000   1.26800   −0.27000
Table 6
The AM-SG estimates and their errors with σ² = 1.00² for Example 2.

t            −a1        −a2       −b1        −b2        c1         c2        d2        δ (%)
100          −0.17767   0.24173   0.00169    −0.28244   −0.30800   0.62289   0.38369   78.94251
200          −0.20699   0.24401   0.01336    −0.30587   −0.35532   0.65663   0.34204   76.11102
500          −0.22809   0.24717   0.02172    −0.30608   −0.39177   0.72174   0.28186   71.62430
1000         −0.25312   0.24365   0.01460    −0.33027   −0.41153   0.76208   0.23473   69.65507
1500         −0.26804   0.23993   0.00634    −0.34547   −0.41628   0.78159   0.20777   68.92386
2000         −0.27565   0.24124   0.00270    −0.35644   −0.42327   0.79238   0.19735   68.71114
2500         −0.28296   0.24248   −0.00682   −0.36394   −0.43095   0.79980   0.18922   68.66257
3000         −0.28805   0.24184   −0.01157   −0.36893   −0.43635   0.80348   0.18087   68.55695
True values  −0.58000   0.23000   0.45000    0.38000    −0.60000   1.26800   −0.27000
Table 7
The AM-GI estimates and their errors with σ² = 0.50² for Example 2.

k            −a1        −a2       −b1       −b2        c1         c2        d2         δ (%)
1            −0.03417   0.07331   0.03507   0.11887    −0.26577   0.41261   0.02427    75.59348
2            −0.09888   0.11735   0.06563   0.23584    −0.43906   0.69576   0.01457    58.28737
5            −0.50405   0.18777   0.39144   0.58506    −0.56545   1.06305   −0.10785   22.40529
10           −0.51334   0.23200   0.36640   0.31518    −0.59306   1.24301   −0.21494   8.93476
20           −0.56240   0.22841   0.42326   0.34693    −0.59778   1.29346   −0.25753   3.47853
50           −0.57762   0.22724   0.44591   0.37210    −0.59797   1.29129   −0.26266   1.68793
True values  −0.58000   0.23000   0.45000   0.38000    −0.60000   1.26800   −0.27000
Fig. 5. The AM-SG parameter estimation errors δ versus t for Example 2.
Table 8
The AM-GI estimates and their errors with σ² = 1.00² for Example 2.

k            −a1        −a2       −b1        −b2        c1         c2        d2         δ (%)
1            −0.08280   0.04181   −0.00047   0.08704    −0.26502   0.41936   0.02922    75.59307
2            −0.13827   0.08618   0.03438    0.20197    −0.43838   0.71428   0.02904    58.11482
5            −0.51559   0.17967   0.36835    0.56271    −0.56444   1.09228   −0.08955   21.34980
10           −0.54773   0.22768   0.40114    0.35252    −0.59159   1.27552   −0.20716   5.82291
20           −0.57315   0.22568   0.43639    0.35967    −0.59578   1.31699   −0.25291   3.72659
50           −0.57600   0.22479   0.44152    0.36544    −0.59593   1.31444   −0.25423   3.36210
True values  −0.58000   0.23000   0.45000    0.38000    −0.60000   1.26800   −0.27000
Fig. 6. The AM-GI parameter estimation errors δ versus k for Example 2.
From Tables 1 to 8 and Figs. 3 to 6, we can draw the following conclusions.
• As the data length t (for the AM-SG algorithm) or the iteration number k (for the AM-GI algorithm) increases, the parameter estimation errors become smaller and the parameter estimates approach their true values.
• The parameter estimation errors of the AM-GI algorithm are smaller than those of the AM-SG algorithm. In other words, the parameter estimates given by the AM-GI algorithm have higher accuracy than those given by the AM-SG algorithm for bilinear systems.
• Because it repeatedly uses the measured data, the AM-GI algorithm has a faster convergence rate than the AM-SG algorithm.

6. Conclusions

This paper has proposed an auxiliary model based stochastic gradient (AM-SG) algorithm and an auxiliary model gradient-based iterative (AM-GI) algorithm for bilinear systems with white noise. Only the system input and output data are required by these methods to identify the bilinear system. The proposed AM-GI algorithm makes full use of all the measured input–output data at each iteration, which accounts for its advantage over the AM-SG algorithm. The simulation tests indicate that the AM-GI algorithm is effective for bilinear systems in a white noise environment. Although the methods in this paper are proposed for bilinear systems with white noise, the idea can be extended to the parameter identification of other linear, bilinear and nonlinear systems with colored noise [38–42], and can be applied to other fields [43–48] such as engineering applications [49–52] and information processing and mathematical systems [53–56].

References

[1] N. Li, S. Guo, Y. Wang, Weighted preliminary-summation-based principal component analysis for non-Gaussian processes, Control Eng. Pract. 87 (2019) 122–132.
[2] J.X. Ma, W.L. Xiong, J. Chen, et al., Hierarchical identification for multivariate Hammerstein systems by using the modified Kalman filter, IET Control Theory Appl. 11 (6) (2017) 857–869.
[3] J.X. Ma, F. Ding, Filtering-based multistage recursive identification algorithm for an input nonlinear output-error autoregressive system by using the key term separation technique, Circ. Syst. Signal Process. 36 (2) (2017) 577–599. [4] P. Ma, F. Ding, New gradient based identification methods for multivariate pseudo-linear systems using the multi-innovation and the data filtering, J. Franklin Inst. 354 (3) (2017) 1568–1583. [5] S.Y. Liu, F. Ding, L. Xu, et al., Hierarchical principle-based iterative parameter estimation algorithm for dual-frequency signals, Circ. Syst. Signal Process. 38 (7) (2019) 3251–3268. [6] J. Ding, J.Z. Chen, J.X. Lin, L.J. Wan, Particle filtering based parameter estimation for systems with output-error type model structures, J. Franklin Inst. 356 (10) (2019) 5521–5540. [7] F. Ding, F.F. Wang, L. Xu, M.H. Wu, Decomposition based least squares iterative identification algorithm for multivariate pseudo-linear ARMA systems using the data filtering, J. Franklin Inst. 354 (3) (2017) 1321–1339. [8] L.J. Wan, F. Ding, Decomposition- and gradient-based iterative identification algorithms for multivariable systems using the multi-innovation theory, Circ. Syst. Signal Process. 38 (7) (2019) 2971–2991. [9] H. Ma, J. Pan, F. Ding, et al., Partially-coupled least squares based iterative parameter estimation for multi-variable output-error-like autoregressive moving average systems, IET Control Theory Appl. 13 (2019) http://dx.doi.org/10.1049/iet-cta.2019.0112. [10] L.J. Liu, F. Ding, L. Xu, et al., Maximum likelihood recursive identification for the multivariate equation-error autoregressive moving average systems using the data filtering, IEEE Access 7 (2019) 41154–41163. [11] Y.J. Wang, F. Ding, M.H. Wu, Recursive parameter estimation algorithm for multivariate output-error systems, J. Franklin Inst. 355 (12) (2018) 5163–5181. [12] J. Ding, J.Z. Chen, J.X. Lin, G.P. Jiang, Particle filtering-based recursive identification for controlled auto-regressive systems with quantised output, IET Control Theory Appl. 13 (14) (2019) 2181–2187. [13] F. Ding, L. Lv, J. Pan, et al., Two-stage gradient-based iterative estimation methods for controlled autoregressive systems using the measurement data, Int. J. Control Autom. Syst. 18 (2020) http://dx.doi.org/10.1007/s12555-019-0140-3. [14] T. Cui, F. Ding, A. Alsaadi, et al., Joint multi-innovation recursive extended least squares parameter and state estimation for a class of state-space systems, Int. J. Control Autom. Syst. 18 (2020). [15] H. Ma, J. Pan, L. Lv, et al., Recursive algorithms for multivariable output-error-like ARMA systems, Mathematics 7 (6) (2019) Article Number: 558. 2019, http://dx.doi.org/10.3390/math7060558. [16] M.H. Li, X.M. Liu, et al., Filtering-based maximum likelihood gradient iterative estimation algorithm for bilinear systems with autoregressive moving average noise, Circ. Syst. Signal Process. 37 (11) (2018) 5023–5048. [17] M.H. Li, X.M. Liu, F. Ding, The filtering-based maximum likelihood iterative estimation algorithms for a special class of nonlinear systems with autoregressive moving average noise using the hierarchical identification principle, Int. J. Adapt. Control Signal Process. 33 (7) (2019) 1189–1211. [18] X. Zhang, F. Ding, L. Xu, E.F. Yang, State filtering-based least squares parameter estimation for bilinear systems using the hierarchical identification principle, IET Control Theory Appl. 12 (12) (2018) 1704–1713. [19] X. Zhang, F. Ding, E.F. 
Yang, State estimation for bilinear systems through minimizing the covariance matrix of the state estimation errors, Int. J. Adapt. Control Signal Process. 33 (7) (2019) 1157–1173. [20] D.D. Meng, Recursive least squares and multi-innovation gradient estimation algorithms for bilinear stochastic systems, Circ. Syst. Signal Process. 36 (3) (2017) 1052–1065. [21] Q. Zhu, J.X. Xu, D.Q. Huang, G.D. Hu, Iterative learning control design for linear discrete-time systems with multiple high-order internal models, Automatica 62 (12) (2015) 65–76. [22] Q. Zhu, L. Li, C.J. Chen, C.Z. Liu, G.D. Hu, A low-cost lateral active suspension system of the high-speed train for ride quality based on the resonant control method, IEEE Trans. Ind. Electron. 65 (5) (2018) 4187–4196. [23] H. Dai, N.K. Sinha, Robust recursive least-squares method with modified weights for bilinear system identification, IEE Proc. D 136 (3) (1989) 122–126. [24] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice Hall, Englewood Cliffs, New Jersey, 1984. [25] Y.J. Wang, F. Ding, Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model, Automatica 71 (2016) 308–313. [26] X.B. Jin, N. Yang, X. Wang, Y. Bai, T. Su, J. Kong, Integrated predictor based on decomposition mechanism for PM2.5 long-term prediction, Appl. Sci.-Basel 9 (21) (2019) Article Number: 4533, http://dx.doi.org/10.3390/app9214533. [27] B. Fu, C.X. Ouyang, C.S. Li, J.W. Wang, E. Gul, An improved mixed integer linear programming approach based on symmetry diminishing for unit commitment of hybrid power system, Energies 12 (5) (2019) 1–14, Article Number: 833, http://dx.doi.org/10.3390/en12050833. [28] W.X. Shi, N. Liu, Y.M. Zhou, X.A. Cao, Effects of postannealing on the characteristics and reliability of polyfluorene organic light-emitting diodes, IEEE Trans. Electron Dev. 66 (2) (2019) 1057–1062. [29] T.Z. Wu, X. Shi, L. Liao, C.J. Zhou, H. Zhou, Y.H. Su, A capacity configuration control strategy to alleviate power fluctuation of hybrid energy storage system based on improved particle swarm optimization, Energies 12 (4) (2019) 1–11, Article Number: 642. http://dx.doi.org/10.3390/ en12040642. [30] F.Y. Ma, Y.K. Yin, M. Li, Start-up process modelling of sediment microbial fuel cells based on data driven, Math. Probl. Eng. (2019) Article Number: 7403732, http://dx.doi.org/10.1155/2019/7403732. [31] W. Wei, W.C. Xue, D.H. Li, On disturbance rejection in magnetic levitation, Control Eng. Pract. 82 (2019) 24–35. [32] J.C. Liu, Y. Gu, Y.X. Chou, Seismic data reconstruction via complex shearlet transform and block coordinate relaxation, J. Seism. Explor. 28 (4) (2019) 307–332. [33] X.L. Zhao, Z.Y. Lin, B. Fu, L. He, C.S. Li, Research on the predictive optimal PID plus second order derivative method for AGC of power system with high penetration of photovoltaic and wind power, J. Electr. Eng. Technol. 14 (3) (2019) 1075–1086. [34] D.C. Chen, X.X. Zhang, H. Xiong, Y. Li, J. Tang, S. Xiao, D.Z. Zhang, A first-principles study of the SF6 decomposed products adsorbed over defective WS2 monolayer as promising gas sensing device, IEEE Trans. Device Mater. Reliab. 19 (3) (2019) 473–483. [35] Y.L. Li, Y. Zhang, Y. Li, F. Tang, Q.S. Lv, J. Zhang, S. Xiao, J. Tang, X.X. Zhang, Experimental study on compatibility of eco-friendly insulating medium C5F10O/CO2 gas mixture with copper and aluminum, IEEE Access 7 (2019) 83994–84002. [36] Y. Zhang, X.X. Zhang, Y. Li, Y.L. Li, Q. Chen, G.Z. Zhang, S. Xiao, J. 
Tang, AC breakdown and decomposition characteristics of environmental friendly gas C5F10O/Air and C5F10O/N-2, IEEE Access 7 (2019) 73954–73960. [37] Z.W. Chen, X.X. Zhang, H. Xiong, D.C. Chen, H.T. Cheng, J. Tang, Y. Tian, S. Xiao, Dissolved gas analysis in transformer oil using pt-doped wse2 monolayer based on first principles method, IEEE Access 7 (2019) 72012–72019. [38] P.C. Gong, W.Q. Wang, X.R. Wan, Adaptive weight matrix design and parameter estimation via sparse modeling for MIMO radar, Signal Process. 139 (2017) 1–11. [39] P.C. Gong, W.Q. Wang, F.C. Li, H. Cheung, Sparsity-aware transmit beamspace design for FDA-MIMO radar, Signal Process. 144 (2018) 99–103. [40] J. Pan, X. Jiang, X.K. Wan, W. Ding, A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems, Int. J. Control Autom. Syst. 15 (3) (2017) 1189–1197.
[41] J. Pan, W. Li, H.P. Zhang, Control algorithms of magnetic suspension systems based on the improved double exponential reaching law of sliding mode control, Int. J. Control Autom. Syst. 16 (6) (2018) 2878–2887. [42] X.K. Wan, Y. Li, C. Xia, M.H. Wu, J. Liang, N. Wang, A T-wave alternans assessment method based on least squares curve fitting technique, Measurement 86 (2016) 93–100. [43] N. Zhao, Joint optimization of cooperative spectrum sensing and resource allocation in multi-channel cognitive radio sensor networks, Circ. Syst. Signal Process. 35 (7) (2016) 2563–2583. [44] N. Zhao, M.H. Wu, J.J. Chen, Android-based mobile educational platform for speech signal processing, Int. J. Electr. Eng. Educ. 54 (1) (2017) 3–16. [45] N. Zhao, Y. Liang, Y. Pei, Dynamic contract incentive mechanism for cooperative wireless networks, IEEE Trans. Veh. Technol. 67 (11) (2018) 10970–10982. [46] X.L. Zhao, F. Liu, B. Fu, F. Na, Reliability analysis of hybrid multi-carrier energy systems based on entropy-based Markov model, Proc. Inst. Mech. Eng. O-J. Risk Reliab. 230 (6) (2016) 561–569. [47] X.L. Zhao, Z.Y. Lin, B. Fu, L. He, F. Na, Research on automatic generation control with wind power participation based on predictive optimal 2-degree-of-freedom PID strategy for multi-area interconnected power system, Energies 11 (12) (2018) Article Number: 3325, http://dx.doi.org/10.3390/en11123325. [48] L. Wang, H. Liu, L.V. Dai, Y.W. Liu, Novel method for identifying fault location of mixed lines, Energies 11 (6) (2018) Article Number: 1529, http://dx.doi.org/10.3390/en11061529. [49] F. Ding, J. Pan, A. Alsaedi, T. Hayat, Gradient-based iterative parameter estimation algorithms for dynamical systems from observation data, Mathematics 7 (5) (2019) Article Number: 428. https://doi.org/10.3390/math7050428. [50] C.C. Yin, C.W. Wang, The perturbed compound poisson risk process with investment and debit interest, Methodol. Comput. Appl. Probab. 12 (3) (2010) 391–413. [51] C.C. Yin, K.C. Yuen, Optimality of the threshold dividend strategy for the compound poisson model, Statist. Probab. Lett. 81 (12) (2011) 1841–1846. [52] C.C. Yin, Y.Z. Wen, Optimal dividend problem with a terminal value for spectrally positive levy processes, Insurance Math. Econom. 53 (3) (2013) 769–773. [53] C.C. Yin, Y.Z. Wen, An extension of paulsen-gjessing’s risk model with stochastic return on investments, Insurance Math. Econom. 53 (3) (2013) 469–476. [54] C.C. Yin, Y.Z. Wen, Y.X. Zhao, On the optimal dividend problem for a spectrally positive levy process, Astin Bull. 44 (3) (2014) 635–651. [55] C.C. Yin, K.C. Yuen, Exact joint laws associated with spectrally negative levy processes and applications to insurance risk theory, Frontiers of Mathematics in China 9 (6) (2014) 1453–1471. [56] C.C. Yin, K.C. Yuen, Optimal dividend problems for a jump-diffusion model with capital injections and proportional transaction costs, J. Indus. Manag. Optim. 11 (4) (2015) 1247–1262.