The optimal state estimation for competitive neural network with time-varying delay using Local Search Algorithm

Zhicheng Shi a, Yongqing Yang a,b,∗, Qi Chang b, Xianyun Xu a

a School of Science, Wuxi Engineering Research Center for Biocomputing, Jiangnan University, Wuxi 214122, PR China
b School of IoT Engineering, Jiangnan University, Wuxi 214122, PR China
∗ Corresponding author at: School of Science, Wuxi Engineering Research Center for Biocomputing, Jiangnan University, Wuxi 214122, PR China. E-mail address: [email protected] (Y. Yang).

Article history: Received 27 May 2019; Received in revised form 12 August 2019; Available online 18 October 2019.

Keywords: Competitive neural network; Sampling state estimation; LMI; Optimal state estimator

Physica A 540 (2020) 123102. https://doi.org/10.1016/j.physa.2019.123102

Abstract. In this paper, the optimal state estimation of a competitive neural network with time-varying delay is investigated. A linear matrix inequality (LMI) method for neuron state estimation is proposed, and sufficient conditions for the asymptotic stability of the error system are obtained. Once the error system is stable, the estimator is studied further using ideas from optimal control. In particular, the Local Search Algorithm is used to optimize the parameters of the state estimator, and the optimal state estimator is obtained by minimizing a preset energy function. Numerical examples are included to illustrate the applicability of the proposed design method. © 2019 Elsevier B.V. All rights reserved.

1. Introduction

Over the past few decades, neural networks have become increasingly popular because of their potential and broad application value; for example, they have found applications in image processing, associative memory, pattern recognition and optimization problems [1–4]. Time-delay neural networks in particular have received many in-depth investigations, because time delays are ubiquitous in both biological and artificial neural networks and can cause oscillations, instability, and other undesirable behaviour [5–8]. By now, many interesting results on dynamic neural networks with various delays are known [9–15]. Liu [16] studied the global exponential stability of BAM neural networks with time-varying delay. Li et al. [17] studied the global asymptotic stability of neutral-type hybrid time-delay neural networks using the Lyapunov method and LMIs.

A state estimator reconstructs the states of the neurons from the measured output of the network model, which is of great significance whenever the neuron states are used to achieve a given objective. The state estimation problem has therefore remained an active research topic, and many earlier results are available. In Huang's research [18], a novel delay-partition method was used to study the state estimator of recurrent neural networks with time-varying delay. Fu [19] designed an exponential state estimator for impulsive neural networks with time-varying delay. Zhang [20] designed an exponential state estimator for Markovian jumping neural networks with discrete time-varying delay. The authors of [21] discussed the state estimation of a complex network system with stochastic nonlinearities and randomly missing measurements. In [22], asymptotically stable high-order neutral cellular neural networks with proportional delays and operators were studied. Liu [23] studied state estimation for complex systems with randomly occurring nonlinearities and randomly missing measurements.

In addition, sampled-data control theory has gained much attention over the past decade, because modern control systems are usually implemented with digital controllers. Through the discontinuous Lyapunov method, Lakshmanan [24] discussed the state estimation of neural networks from sampled data. Li et al. [25] studied the exponential asymptotic stability of BAM neural networks with sampled-data state feedback input. Huang [26] studied the exponential stability of inertial neural networks involving proportional delays via a non-reduced order method. Xu [27] studied finite-time stabilization of switched dynamical networks. However, the estimator is not optimized in the above literature, so this paper makes a further study in this direction.

Based on the results above, this paper explores the optimal state estimation of the competitive neural network with time-varying delay. By constructing an appropriate Lyapunov function and using the free-weighting-matrix method, an LMI condition is derived that ensures the existence of the state estimator for the sampled-data system. A Local Search Algorithm is then designed, so that a better state estimator can be found within a bounded range.

The remainder of this paper is organized as follows. In Section 2, some lemmas, definitions and the model description are introduced. In Section 3, a linear control scheme is designed and sufficient conditions for global asymptotic stability are obtained. In Section 4, numerical examples are given to illustrate the feasibility of the proposed method. In Section 5, the Local Search Algorithm is proposed for optimizing the controllers. Finally, some conclusions are drawn in Section 6.

Notations: R^n denotes the n-dimensional Euclidean space and R^{n×m} is the set of n×m real matrices. For real symmetric matrices X and Y, X ≥ Y (respectively, X > Y) means that X − Y is positive semi-definite (respectively, positive definite). M^T represents the transpose of the matrix M, and I denotes the identity matrix of compatible dimension. diag(...) stands for a block-diagonal matrix. The symbol ∗ in a matrix denotes a term induced by symmetry. Matrices whose dimensions are not explicitly specified are assumed to have appropriate dimensions.

2. Problem modelling and preliminary knowledge

Consider the following competitive neural network with time-varying delay:

$$\begin{aligned}
\text{STM:}\quad \dot{x}(t) &= -Ax(t) + Dg(x(t)) + D_{\tau}g(x(t-\tau(t))) + BS(t)\\
\text{LTM:}\quad \dot{S}(t) &= -CS(t) + Wg(x(t))
\end{aligned}\tag{1}$$

where x(t) = (x_1(t), x_2(t), ..., x_n(t))^T ∈ R^n is the state vector of the competitive neural network; g(x(t)) = (g_1(x_1(t)), g_2(x_2(t)), ..., g_n(x_n(t)))^T ∈ R^n denotes the activation function of the neurons; A = diag(a_1, a_2, ..., a_n) with a_i > 0 represents the time constants of the neurons; D = (d_{ij})_{n×n} and D_τ = (d^τ_{ij})_{n×n} are the connection weight matrix and the time-delayed weight matrix, respectively; B = (b_{ij})_{n×n} is the strength of the external stimulus; S(t) = (S_1(t), S_2(t), ..., S_n(t))^T ∈ R^n is the dynamic variable describing the synaptic efficiency; C = diag(c_1, c_2, ..., c_n) with c_i > 0 represents a linear scaling constant; W = (w_{ij})_{n×n} is the constant external stimulus; and τ(t) is the time-varying delay, satisfying

$$0 < \tau(t) \le \bar{\tau},\qquad \dot{\tau}(t) \le \mu,$$

where τ̄ is the upper bound of τ(t) and µ is the upper bound of τ̇(t).

In biological neural networks there are generally two kinds of state variables: the short-term memory (STM) of the neurons and the long-term synaptic memory (LTM). The STM variable describes the instantaneous dynamic behaviour of the neurons, while the LTM variable describes the slow, unsupervised memory behaviour of the synapses. This type of neural network model therefore evolves on two time scales, one fast and one slow. The network measurements are given by

$$y_x(t) = Mx(t),\qquad y_S(t) = NS(t)\tag{2}$$

where y_x(t) ∈ R^n, y_S(t) ∈ R^n are the measurement outputs and M, N are known constant matrices with appropriate dimensions.

Assumption 1. There exist constants l_j > 0 such that each activation function g_j(·) satisfies

$$0 \le \frac{g_j(u) - g_j(v)}{u - v} \le l_j$$

for every u, v ∈ R (u ≠ v), j = 1, 2, ..., n. For instance, the activation g(x) = tanh(x) used in Section 4 satisfies this sector condition with l_j = 1.

In this paper, the measurement output is sampled before it enters the estimator. The actual (sampled) output can be described as

$$\bar{y}_x(t) = y_x(t_k) = Mx(t_k),\qquad \bar{y}_S(t) = y_S(t_k) = NS(t_k),\qquad t \in [t_k, t_{k+1})\tag{3}$$

where ȳ_x(t) ∈ R^n, ȳ_S(t) ∈ R^n are the actual outputs and t_k denotes the sampling instants, satisfying lim_{k→∞} t_k = ∞.


Based on the available sampled measurements (3), the following state estimator is constructed for the delayed neural network (1):

$$\begin{aligned}
\text{STM:}\quad \dot{\hat{x}}(t) &= -A\hat{x}(t) + Dg(\hat{x}(t)) + D_{\tau}g(\hat{x}(t-\tau(t))) + B\hat{S}(t) + K_1[\bar{y}_x(t) - \hat{y}_x(t_k)]\\
\text{LTM:}\quad \dot{\hat{S}}(t) &= -C\hat{S}(t) + Wg(\hat{x}(t)) + K_2[\bar{y}_S(t) - \hat{y}_S(t_k)]
\end{aligned}\tag{4}$$

where x̂(t), Ŝ(t) are the estimates of the neuron state variables x(t) and S(t), respectively, ŷ_x(t_k) = Mx̂(t_k) and ŷ_S(t_k) = NŜ(t_k) are the estimated output vectors, and K_1, K_2 are the estimator gain matrices to be designed.
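To make the sampled-data estimator structure concrete, the following minimal Python sketch integrates the plant (1) and the estimator (4) by the forward-Euler method with zero-order-hold sampling of the outputs. The parameter values are the illustrative ones from Section 4; the step size, horizon, uniform sampling period and initial conditions are assumptions made here purely for illustration.

```python
import numpy as np

# Parameters of the two-neuron example in Section 4.
A  = np.diag([0.3, 0.2]);  C = np.diag([1.25, 2.0])
B  = np.array([[2.4, -0.3], [0.45, 0.6]])
D  = np.array([[2.5, -0.1], [-0.15, 3.5]])
Dt = np.array([[-2.0, -0.5], [-0.3, -2.0]])
W  = np.array([[0.2, -0.0015], [-0.01, 0.15]])
M = N = np.eye(2)
K1 = np.array([[0.0733, -0.1179], [-0.1135, 0.5272]])   # gains from Theorem 1
K2 = np.array([[0.4804,  0.0033], [ 0.0126, 0.5812]])
g, tau = np.tanh, lambda t: 0.1 * (1.0 + np.sin(t))     # activation, delay
h, T, d_bar = 1e-3, 20.0, 0.5          # Euler step, horizon, sampling period

steps = int(T / h)
x, S, xh, Sh = (np.zeros((steps + 1, 2)) for _ in range(4))
x[0], S[0] = [1.0, -1.0], [0.5, -0.5]  # estimator states start from zero
y_s, yh_s   = M @ x[0], M @ xh[0]      # zero-order-hold samples of y_x, y_hat_x
yS_s, ySh_s = N @ S[0], N @ Sh[0]

for k in range(steps):
    t = k * h
    if k % int(d_bar / h) == 0:        # sampling instant t_k (uniform, <= d_bar)
        y_s, yh_s   = M @ x[k],  M @ xh[k]
        yS_s, ySh_s = N @ S[k],  N @ Sh[k]
    kd = max(0, k - int(round(tau(t) / h)))  # delayed index; constant history x[0]
    x[k+1]  = x[k]  + h * (-A @ x[k]  + D @ g(x[k])  + Dt @ g(x[kd])  + B @ S[k])
    S[k+1]  = S[k]  + h * (-C @ S[k]  + W @ g(x[k]))
    xh[k+1] = xh[k] + h * (-A @ xh[k] + D @ g(xh[k]) + Dt @ g(xh[kd]) + B @ Sh[k]
                           + K1 @ (y_s - yh_s))
    Sh[k+1] = Sh[k] + h * (-C @ Sh[k] + W @ g(xh[k]) + K2 @ (yS_s - ySh_s))

# With stabilizing gains, the estimation errors are expected to decay toward zero.
print("final errors:", x[-1] - xh[-1], S[-1] - Sh[-1])
```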

Assumption 2. For every k ≥ 0, there exists a positive constant d̄ such that the sampling instants satisfy t_{k+1} − t_k ≤ d̄.

Define the error vectors as e_x(t) = x(t) − x̂(t), e_S(t) = S(t) − Ŝ(t), and let d(t) = t − t_k for t_k ≤ t < t_{k+1}. From (1) and (4), the error dynamical system can be expressed as

$$\begin{aligned}
\dot{e}_x(t) &= -Ae_x(t) + Df(e_x(t)) + D_{\tau}f(e_x(t-\tau(t))) + Be_S(t) - K_1Me_x(t-d(t))\\
\dot{e}_S(t) &= -Ce_S(t) + Wf(e_x(t)) - K_2Ne_S(t-d(t))
\end{aligned}\tag{5}$$

where f(e_x(t)) = g(x(t)) − g(x̂(t)), f(e_x(t − τ(t))) = g(x(t − τ(t))) − g(x̂(t − τ(t))), and 0 ≤ d(t) < d̄.

The following lemma will be used in deriving the main results.

Lemma 1 (Jensen's Inequality [21]). For any constant matrix Q ∈ R^{n×n} with Q = Q^T > 0, any scalar b > 0, and any vector function ẋ : [0, b] → R^n such that the integrals below are well defined,

$$-\int_0^b \dot{x}^T(s)Q\dot{x}(s)\,ds \le -\frac{1}{b}\Big[\int_0^b \dot{x}(s)\,ds\Big]^T Q\,\Big[\int_0^b \dot{x}(s)\,ds\Big].$$
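As a quick numerical sanity check of Lemma 1 (not part of the original derivation), the sketch below evaluates both sides of the inequality by trapezoidal quadrature for a randomly generated positive definite Q and a smooth sampled ẋ; all names and numeric values here are illustrative assumptions.

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz   # NumPy 2.x renamed trapz
rng = np.random.default_rng(0)

n, b, m = 3, 2.0, 2001                    # dimension, interval length, grid points
s = np.linspace(0.0, b, m)
G = rng.standard_normal((n, n))
Q = G @ G.T + n * np.eye(n)               # symmetric positive definite Q

# A smooth vector function xdot(s) sampled on the grid (rows = components).
xdot = np.stack([np.sin((i + 1) * s) + 0.5 * i for i in range(n)])

lhs = -trapz(np.einsum('it,ij,jt->t', xdot, Q, xdot), s)   # -integral of xdot^T Q xdot
v = np.array([trapz(xdot[i], s) for i in range(n)])        # integral of xdot
rhs = -(v @ Q @ v) / b
print(lhs <= rhs + 1e-9)                  # Jensen's inequality holds: True
```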

3. Main results

In this section, we derive a new delay-dependent criterion for the asymptotic stability of the error system (5), using the Lyapunov method combined with the LMI framework.

Theorem 1. For the given matrix L = diag(l_1, l_2, ..., l_n) and the bounds τ̄, d̄, µ defined above, the error system (5) is globally asymptotically stable if there exist positive definite matrices P_1, P_2, Q_i, R_i (i = 1, 2, 3, 4), two positive scalars δ_1, δ_2 and matrices Y_1, Y_2 such that

$$\Sigma=\begin{pmatrix}
\Lambda_{11} & \Lambda_{12} & \Lambda_{13} & 0 & \Lambda_{15} & 0 & \Lambda_{17} & 0 & 0 & 0 & \Lambda_{111} & \Lambda_{112}\\
* & \Lambda_{22} & \tfrac{P_1B}{2} & 0 & 0 & 0 & -\tfrac{Y_1M}{2} & 0 & 0 & 0 & \tfrac{P_1D}{2} & \tfrac{P_1D_{\tau}}{2}\\
* & * & \Lambda_{33} & \Lambda_{34} & 0 & 0 & 0 & 0 & \Lambda_{39} & 0 & \Lambda_{311} & 0\\
* & * & * & \Lambda_{44} & 0 & 0 & 0 & 0 & -\tfrac{Y_2N}{2} & 0 & \tfrac{P_2W}{2} & 0\\
* & * & * & * & \Lambda_{55} & \tfrac{R_1}{\bar{\tau}} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & \Lambda_{66} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & -\tfrac{2R_2}{\bar{d}} & \tfrac{R_2}{\bar{d}} & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & \Lambda_{88} & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & * & -\tfrac{2R_3}{\bar{d}} & \tfrac{R_3}{\bar{d}} & 0 & 0\\
* & * & * & * & * & * & * & * & * & \Lambda_{1010} & 0 & 0\\
* & * & * & * & * & * & * & * & * & * & -\delta_1 I & 0\\
* & * & * & * & * & * & * & * & * & * & * & -\delta_2 I
\end{pmatrix}<0\tag{6}$$

where

$$\begin{aligned}
&\Lambda_{11} = -2P_1A + Q_1 + Q_2 + Q_3 - \tfrac{1}{\bar{\tau}}R_1 - \tfrac{1}{\bar{d}}R_2 - \tfrac{1-\mu}{\bar{\tau}}R_4 + \delta_1 L^TL - P_1A;\qquad
\Lambda_{12} = -\tfrac{1}{2}P_1 - \tfrac{1}{2}P_1A;\\
&\Lambda_{13} = P_1B + \tfrac{1}{2}P_1B;\qquad
\Lambda_{15} = \tfrac{1}{\bar{\tau}}R_1 + \tfrac{1-\mu}{\bar{\tau}}R_4;\qquad
\Lambda_{17} = \tfrac{1}{\bar{d}}R_2 - Y_1M - \tfrac{1}{2}Y_1M;\qquad
\Lambda_{111} = P_1D + \tfrac{1}{2}P_1D;\\
&\Lambda_{112} = P_1D_{\tau} + \tfrac{1}{2}P_1D_{\tau};\qquad
\Lambda_{22} = -P_1 + \bar{\tau}R_1 + \bar{d}R_2 + \bar{\tau}R_4;\qquad
\Lambda_{33} = -2P_2C + Q_4 - \tfrac{1}{\bar{d}}R_3 - P_2C;\\
&\Lambda_{34} = -\tfrac{1}{2}P_2 - \tfrac{1}{2}P_2C;\qquad
\Lambda_{39} = \tfrac{1}{\bar{d}}R_3 - Y_2N - \tfrac{1}{2}Y_2N;\qquad
\Lambda_{311} = P_2W + \tfrac{1}{2}P_2W;\qquad
\Lambda_{44} = \bar{d}R_3 - P_2;\\
&\Lambda_{55} = -\tfrac{2}{\bar{\tau}}R_1 - \tfrac{1-\mu}{\bar{\tau}}R_4 + \delta_2 L^TL - (1-\mu)Q_1;\qquad
\Lambda_{66} = -Q_2 - \tfrac{1}{\bar{\tau}}R_1;\qquad
\Lambda_{88} = -Q_3 - \tfrac{1}{\bar{d}}R_2;\qquad
\Lambda_{1010} = -Q_4 - \tfrac{1}{\bar{d}}R_3.
\end{aligned}$$

Furthermore, if the LMI given above is solvable, the desired estimator gains are given by K_1 = P_1^{-1}Y_1, K_2 = P_2^{-1}Y_2.
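The gain recovery K_i = P_i^{-1}Y_i rests on the standard linearizing change of variables Y_i = P_iK_i. The sketch below illustrates that device on a deliberately reduced problem — a delay-free error system ė(t) = (A_0 − KM)e(t) — rather than on the full 12-block LMI (6); the use of cvxpy, the matrix A_0 and the tolerances are assumptions made for illustration only.

```python
import numpy as np
import cvxpy as cp

# Reduced illustration of the variable change Y = P K:
# find K such that e_dot = (A0 - K M) e is stable, i.e. there exists P > 0 with
# (A0 - K M)^T P + P (A0 - K M) < 0.  Substituting Y = P K makes this LMI
# linear in (P, Y):  A0^T P + P A0 - M^T Y^T - Y M < 0,  then  K = P^{-1} Y.
A0 = np.array([[0.3, 0.0], [0.0, 0.2]])   # illustrative (unstable) matrix
M  = np.eye(2)
n, eps = A0.shape[0], 1e-3

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n))
S = cp.Variable((n, n), symmetric=True)   # slack certifying negative definiteness
lmi = A0.T @ P + P @ A0 - M.T @ Y.T - Y @ M
constraints = [P >> eps * np.eye(n), S >> eps * np.eye(n), lmi + S == 0]
cp.Problem(cp.Minimize(0), constraints).solve()   # needs an SDP solver, e.g. SCS

K = np.linalg.solve(P.value, Y.value)     # recover the gain K = P^{-1} Y
print("closed-loop eigenvalues:", np.linalg.eigvals(A0 - K @ M))  # negative real parts
```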

Proof. Consider the following Lyapunov function:

$$V(t) = V_1(t) + V_2(t) + V_3(t)\tag{7}$$

where

$$V_1(t) = e_x^T(t)P_1e_x(t) + e_S^T(t)P_2e_S(t),$$

$$V_2(t) = \int_{t-\tau(t)}^{t} e_x^T(s)Q_1e_x(s)\,ds + \int_{t-\bar{\tau}}^{t} e_x^T(s)Q_2e_x(s)\,ds + \int_{t-\bar{d}}^{t} e_x^T(s)Q_3e_x(s)\,ds + \int_{t-\bar{d}}^{t} e_S^T(s)Q_4e_S(s)\,ds,$$

$$V_3(t) = \int_{-\bar{\tau}}^{0}\!\int_{t+s}^{t} \dot{e}_x^T(\theta)R_1\dot{e}_x(\theta)\,d\theta\,ds + \int_{-\bar{d}}^{0}\!\int_{t+s}^{t} \dot{e}_x^T(\theta)R_2\dot{e}_x(\theta)\,d\theta\,ds + \int_{-\bar{d}}^{0}\!\int_{t+s}^{t} \dot{e}_S^T(\theta)R_3\dot{e}_S(\theta)\,d\theta\,ds + \int_{-\tau(t)}^{0}\!\int_{t+s}^{t} \dot{e}_x^T(\theta)R_4\dot{e}_x(\theta)\,d\theta\,ds.$$

Calculating the time-derivative of V_i (i = 1, 2, 3) along the solutions of system (5), one can deduce that:

$$\begin{aligned}
\dot{V}_1(t) &= 2e_x^T(t)P_1\dot{e}_x(t) + 2e_S^T(t)P_2\dot{e}_S(t)\\
&= 2e_x^T(t)P_1[-Ae_x(t) + Df(e_x(t)) + D_{\tau}f(e_x(t-\tau(t))) + Be_S(t) - K_1Me_x(t-d(t))]\\
&\quad + 2e_S^T(t)P_2[-Ce_S(t) + Wf(e_x(t)) - K_2Ne_S(t-d(t))]\\
&= -2e_x^T(t)P_1Ae_x(t) + 2e_x^T(t)P_1Df(e_x(t)) + 2e_x^T(t)P_1D_{\tau}f(e_x(t-\tau(t))) + 2e_x^T(t)P_1Be_S(t)\\
&\quad - 2e_x^T(t)P_1K_1Me_x(t-d(t)) - 2e_S^T(t)P_2Ce_S(t) + 2e_S^T(t)P_2Wf(e_x(t)) - 2e_S^T(t)P_2K_2Ne_S(t-d(t))
\end{aligned}\tag{8}$$

$$\begin{aligned}
\dot{V}_2(t) &= e_x^T(t)[Q_1+Q_2+Q_3]e_x(t) - (1-\dot{\tau}(t))e_x^T(t-\tau(t))Q_1e_x(t-\tau(t)) - e_x^T(t-\bar{\tau})Q_2e_x(t-\bar{\tau})\\
&\quad - e_x^T(t-\bar{d})Q_3e_x(t-\bar{d}) + e_S^T(t)Q_4e_S(t) - e_S^T(t-\bar{d})Q_4e_S(t-\bar{d})\\
&\le e_x^T(t)[Q_1+Q_2+Q_3]e_x(t) - (1-\mu)e_x^T(t-\tau(t))Q_1e_x(t-\tau(t)) - e_x^T(t-\bar{\tau})Q_2e_x(t-\bar{\tau})\\
&\quad - e_x^T(t-\bar{d})Q_3e_x(t-\bar{d}) + e_S^T(t)Q_4e_S(t) - e_S^T(t-\bar{d})Q_4e_S(t-\bar{d})
\end{aligned}\tag{9}$$

$$\begin{aligned}
\dot{V}_3(t) &= \int_{-\bar{\tau}}^{0}[\dot{e}_x^T(t)R_1\dot{e}_x(t) - \dot{e}_x^T(t+s)R_1\dot{e}_x(t+s)]\,ds + \int_{-\bar{d}}^{0}[\dot{e}_x^T(t)R_2\dot{e}_x(t) - \dot{e}_x^T(t+s)R_2\dot{e}_x(t+s)]\,ds\\
&\quad + \int_{-\bar{d}}^{0}[\dot{e}_S^T(t)R_3\dot{e}_S(t) - \dot{e}_S^T(t+s)R_3\dot{e}_S(t+s)]\,ds + \int_{-\tau(t)}^{0}[\dot{e}_x^T(t)R_4\dot{e}_x(t) - \dot{e}_x^T(t+s)R_4\dot{e}_x(t+s)]\,ds\\
&\quad + \dot{\tau}(t)\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds\\
&= \bar{\tau}\dot{e}_x^T(t)R_1\dot{e}_x(t) - \int_{t-\bar{\tau}}^{t}\dot{e}_x^T(s)R_1\dot{e}_x(s)\,ds + \bar{d}\dot{e}_x^T(t)R_2\dot{e}_x(t) - \int_{t-\bar{d}}^{t}\dot{e}_x^T(s)R_2\dot{e}_x(s)\,ds + \bar{d}\dot{e}_S^T(t)R_3\dot{e}_S(t)\\
&\quad - \int_{t-\bar{d}}^{t}\dot{e}_S^T(s)R_3\dot{e}_S(s)\,ds + \tau(t)\dot{e}_x^T(t)R_4\dot{e}_x(t) - \int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds + \dot{\tau}(t)\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds\\
&\le -\int_{t-\bar{\tau}}^{t}\dot{e}_x^T(s)R_1\dot{e}_x(s)\,ds - \int_{t-\bar{d}}^{t}\dot{e}_x^T(s)R_2\dot{e}_x(s)\,ds - \int_{t-\bar{d}}^{t}\dot{e}_S^T(s)R_3\dot{e}_S(s)\,ds - \int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds\\
&\quad + \mu\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds + \dot{e}_x^T(t)[\bar{\tau}R_1 + \bar{d}R_2 + \bar{\tau}R_4]\dot{e}_x(t) + \dot{e}_S^T(t)[\bar{d}R_3]\dot{e}_S(t)
\end{aligned}\tag{10}$$


By Lemma 1, one has that:

$$\begin{aligned}
-\int_{t-\bar{\tau}}^{t}\dot{e}_x^T(s)R_1\dot{e}_x(s)\,ds &= -\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_1\dot{e}_x(s)\,ds - \int_{t-\bar{\tau}}^{t-\tau(t)}\dot{e}_x^T(s)R_1\dot{e}_x(s)\,ds\\
&\le -\frac{1}{\tau(t)}\Big[\int_{t-\tau(t)}^{t}\dot{e}_x(s)\,ds\Big]^T R_1\Big[\int_{t-\tau(t)}^{t}\dot{e}_x(s)\,ds\Big] - \frac{1}{\bar{\tau}-\tau(t)}\Big[\int_{t-\bar{\tau}}^{t-\tau(t)}\dot{e}_x(s)\,ds\Big]^T R_1\Big[\int_{t-\bar{\tau}}^{t-\tau(t)}\dot{e}_x(s)\,ds\Big]\\
&\le -\frac{1}{\bar{\tau}}[e_x(t) - e_x(t-\tau(t))]^T R_1[e_x(t) - e_x(t-\tau(t))] - \frac{1}{\bar{\tau}}[e_x(t-\tau(t)) - e_x(t-\bar{\tau})]^T R_1[e_x(t-\tau(t)) - e_x(t-\bar{\tau})]\\
&= -\frac{1}{\bar{\tau}}e_x^T(t)R_1e_x(t) + \frac{2}{\bar{\tau}}e_x^T(t)R_1e_x(t-\tau(t)) - \frac{1}{\bar{\tau}}e_x^T(t-\tau(t))R_1e_x(t-\tau(t))\\
&\quad - \frac{1}{\bar{\tau}}e_x^T(t-\tau(t))R_1e_x(t-\tau(t)) + \frac{2}{\bar{\tau}}e_x^T(t-\tau(t))R_1e_x(t-\bar{\tau}) - \frac{1}{\bar{\tau}}e_x^T(t-\bar{\tau})R_1e_x(t-\bar{\tau})
\end{aligned}\tag{11}$$

The same procedure, again by Lemma 1, yields the following results:

$$\begin{aligned}
-\int_{t-\bar{d}}^{t}\dot{e}_x^T(s)R_2\dot{e}_x(s)\,ds &= -\int_{t-d(t)}^{t}\dot{e}_x^T(s)R_2\dot{e}_x(s)\,ds - \int_{t-\bar{d}}^{t-d(t)}\dot{e}_x^T(s)R_2\dot{e}_x(s)\,ds\\
&\le -\frac{1}{\bar{d}}e_x^T(t)R_2e_x(t) + \frac{2}{\bar{d}}e_x^T(t)R_2e_x(t-d(t)) - \frac{1}{\bar{d}}e_x^T(t-d(t))R_2e_x(t-d(t))\\
&\quad - \frac{1}{\bar{d}}e_x^T(t-d(t))R_2e_x(t-d(t)) + \frac{2}{\bar{d}}e_x^T(t-d(t))R_2e_x(t-\bar{d}) - \frac{1}{\bar{d}}e_x^T(t-\bar{d})R_2e_x(t-\bar{d})
\end{aligned}\tag{12}$$

$$\begin{aligned}
-\int_{t-\bar{d}}^{t}\dot{e}_S^T(s)R_3\dot{e}_S(s)\,ds &= -\int_{t-d(t)}^{t}\dot{e}_S^T(s)R_3\dot{e}_S(s)\,ds - \int_{t-\bar{d}}^{t-d(t)}\dot{e}_S^T(s)R_3\dot{e}_S(s)\,ds\\
&\le -\frac{1}{\bar{d}}e_S^T(t)R_3e_S(t) + \frac{2}{\bar{d}}e_S^T(t)R_3e_S(t-d(t)) - \frac{1}{\bar{d}}e_S^T(t-d(t))R_3e_S(t-d(t))\\
&\quad - \frac{1}{\bar{d}}e_S^T(t-d(t))R_3e_S(t-d(t)) + \frac{2}{\bar{d}}e_S^T(t-d(t))R_3e_S(t-\bar{d}) - \frac{1}{\bar{d}}e_S^T(t-\bar{d})R_3e_S(t-\bar{d})
\end{aligned}\tag{13}$$

$$\begin{aligned}
-\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds + \mu\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds &= -(1-\mu)\int_{t-\tau(t)}^{t}\dot{e}_x^T(s)R_4\dot{e}_x(s)\,ds\\
&\le -\frac{1-\mu}{\bar{\tau}}e_x^T(t)R_4e_x(t) + \frac{2(1-\mu)}{\bar{\tau}}e_x^T(t)R_4e_x(t-\tau(t))\\
&\quad - \frac{1-\mu}{\bar{\tau}}e_x^T(t-\tau(t))R_4e_x(t-\tau(t))
\end{aligned}\tag{14}$$

Combining (7)–(14), one obtains:

$$\begin{aligned}
\dot{V}(t) = \sum_i \dot{V}_i(t) \le{}& e_x^T(t)\Big[-2P_1A + Q_1+Q_2+Q_3 - \tfrac{1}{\bar{\tau}}R_1 - \tfrac{1}{\bar{d}}R_2 - \tfrac{1-\mu}{\bar{\tau}}R_4\Big]e_x(t) + \dot{e}_x^T(t)[\bar{\tau}R_1 + \bar{d}R_2 + \bar{\tau}R_4]\dot{e}_x(t)\\
&+ e_S^T(t)\Big[-2P_2C + Q_4 - \tfrac{1}{\bar{d}}R_3\Big]e_S(t) + 2e_x^T(t)[P_1B]e_S(t) + \dot{e}_S^T(t)[\bar{d}R_3]\dot{e}_S(t)\\
&+ 2e_x^T(t)\Big[\tfrac{1}{\bar{\tau}}R_1 + \tfrac{1-\mu}{\bar{\tau}}R_4\Big]e_x(t-\tau(t)) + e_x^T(t-\tau(t))\Big[-\tfrac{2}{\bar{\tau}}R_1 - \tfrac{1-\mu}{\bar{\tau}}R_4 - (1-\mu)Q_1\Big]e_x(t-\tau(t))\\
&+ e_x^T(t-\bar{\tau})\Big[-\tfrac{1}{\bar{\tau}}R_1 - Q_2\Big]e_x(t-\bar{\tau}) + 2e_x^T(t-\tau(t))\Big[\tfrac{1}{\bar{\tau}}R_1\Big]e_x(t-\bar{\tau}) + e_x^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_2\Big]e_x(t-d(t))\\
&+ e_x^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_2 - Q_3\Big]e_x(t-\bar{d}) + 2e_x^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_2\Big]e_x(t-\bar{d}) + 2e_x^T(t)\Big[\tfrac{1}{\bar{d}}R_2 - P_1K_1M\Big]e_x(t-d(t))\\
&+ e_S^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_3\Big]e_S(t-d(t)) + 2e_S^T(t)\Big[\tfrac{1}{\bar{d}}R_3 - P_2K_2N\Big]e_S(t-d(t))\\
&+ 2e_S^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_3\Big]e_S(t-\bar{d}) + e_S^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_3 - Q_4\Big]e_S(t-\bar{d}) + 2e_x^T(t)[P_1D_{\tau}]f(e_x(t-\tau(t)))\\
&+ 2e_x^T(t)[P_1D]f(e_x(t)) + 2e_S^T(t)[P_2W]f(e_x(t))
\end{aligned}\tag{15}$$


From Assumption 1, one can obtain the following inequalities:

$$\begin{aligned}
f^T(e_x(t))f(e_x(t)) - e_x^T(t)L^TLe_x(t) &\le 0\\
f^T(e_x(t-\tau(t)))f(e_x(t-\tau(t))) - e_x^T(t-\tau(t))L^TLe_x(t-\tau(t)) &\le 0
\end{aligned}\tag{16}$$

where L = diag(l_1, l_2, l_3, ..., l_n). Consequently, for any positive scalars δ_1, δ_2 ≥ 0,

$$\begin{aligned}
-\delta_1[f^T(e_x(t))f(e_x(t)) - e_x^T(t)L^TLe_x(t)] &\ge 0\\
-\delta_2[f^T(e_x(t-\tau(t)))f(e_x(t-\tau(t))) - e_x^T(t-\tau(t))L^TLe_x(t-\tau(t))] &\ge 0
\end{aligned}\tag{17}$$

According to the error system (5), the second bracketed factor in each product below is the right-hand side of (5) minus the corresponding derivative, and is therefore identically zero; hence, for any appropriately dimensioned P_1, P_2, the following (free-weighting) identities hold:

$$\begin{aligned}
0 &= [e_x^T(t)P_1 + \dot{e}_x^T(t)P_1][-\dot{e}_x(t) - Ae_x(t) + Df(e_x(t)) + D_{\tau}f(e_x(t-\tau(t))) + Be_S(t) - K_1Me_x(t-d(t))]\\
0 &= [e_S^T(t)P_2 + \dot{e}_S^T(t)P_2][-\dot{e}_S(t) - Ce_S(t) + Wf(e_x(t)) - K_2Ne_S(t-d(t))]
\end{aligned}\tag{18}$$

Combining (15)–(18) yields:

$$\begin{aligned}
\dot{V}(t) = \sum_i \dot{V}_i(t) \le{}& e_x^T(t)\Big[-2P_1A + Q_1+Q_2+Q_3 - \tfrac{1}{\bar{\tau}}R_1 - \tfrac{1}{\bar{d}}R_2 - \tfrac{1-\mu}{\bar{\tau}}R_4\Big]e_x(t) + \dot{e}_x^T(t)[\bar{\tau}R_1 + \bar{d}R_2 + \bar{\tau}R_4]\dot{e}_x(t)\\
&+ e_S^T(t)\Big[-2P_2C + Q_4 - \tfrac{1}{\bar{d}}R_3\Big]e_S(t) + 2e_x^T(t)[P_1B]e_S(t) + \dot{e}_S^T(t)[\bar{d}R_3]\dot{e}_S(t)\\
&+ 2e_x^T(t)\Big[\tfrac{1}{\bar{\tau}}R_1 + \tfrac{1-\mu}{\bar{\tau}}R_4\Big]e_x(t-\tau(t)) + e_x^T(t-\tau(t))\Big[-\tfrac{2}{\bar{\tau}}R_1 - \tfrac{1-\mu}{\bar{\tau}}R_4 - (1-\mu)Q_1\Big]e_x(t-\tau(t))\\
&+ e_x^T(t-\bar{\tau})\Big[-\tfrac{1}{\bar{\tau}}R_1 - Q_2\Big]e_x(t-\bar{\tau}) + 2e_x^T(t-\tau(t))\Big[\tfrac{1}{\bar{\tau}}R_1\Big]e_x(t-\bar{\tau}) + e_x^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_2\Big]e_x(t-d(t))\\
&+ e_x^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_2 - Q_3\Big]e_x(t-\bar{d}) + 2e_x^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_2\Big]e_x(t-\bar{d}) + 2e_x^T(t)\Big[\tfrac{1}{\bar{d}}R_2 - P_1K_1M\Big]e_x(t-d(t))\\
&+ e_S^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_3\Big]e_S(t-d(t)) + 2e_S^T(t)\Big[\tfrac{1}{\bar{d}}R_3 - P_2K_2N\Big]e_S(t-d(t))\\
&+ 2e_S^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_3\Big]e_S(t-\bar{d}) + e_S^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_3 - Q_4\Big]e_S(t-\bar{d}) + 2e_x^T(t)[P_1D_{\tau}]f(e_x(t-\tau(t)))\\
&+ 2e_x^T(t)[P_1D]f(e_x(t)) + 2e_S^T(t)[P_2W]f(e_x(t))\\
&- \delta_1 f^T(e_x(t))f(e_x(t)) + \delta_1 e_x^T(t)L^TLe_x(t) - \delta_2 f^T(e_x(t-\tau(t)))f(e_x(t-\tau(t))) + \delta_2 e_x^T(t-\tau(t))L^TLe_x(t-\tau(t))\\
&- e_x^T(t)P_1\dot{e}_x(t) - e_x^T(t)P_1Ae_x(t) + e_x^T(t)P_1Df(e_x(t)) + e_x^T(t)P_1D_{\tau}f(e_x(t-\tau(t))) + e_x^T(t)P_1Be_S(t)\\
&- e_x^T(t)P_1K_1Me_x(t-d(t)) - \dot{e}_x^T(t)P_1\dot{e}_x(t) - \dot{e}_x^T(t)P_1Ae_x(t) + \dot{e}_x^T(t)P_1Df(e_x(t)) + \dot{e}_x^T(t)P_1D_{\tau}f(e_x(t-\tau(t)))\\
&+ \dot{e}_x^T(t)P_1Be_S(t) - \dot{e}_x^T(t)P_1K_1Me_x(t-d(t)) - e_S^T(t)P_2\dot{e}_S(t) - e_S^T(t)P_2Ce_S(t) + e_S^T(t)P_2Wf(e_x(t))\\
&- e_S^T(t)P_2K_2Ne_S(t-d(t)) - \dot{e}_S^T(t)P_2\dot{e}_S(t) - \dot{e}_S^T(t)P_2Ce_S(t) + \dot{e}_S^T(t)P_2Wf(e_x(t)) - \dot{e}_S^T(t)P_2K_2Ne_S(t-d(t))
\end{aligned}\tag{19}$$


Simplifying (19) yields:

$$\begin{aligned}
\dot{V}(t) = \sum_i \dot{V}_i(t) \le{}& e_x^T(t)\Big[-2P_1A + Q_1+Q_2+Q_3 - \tfrac{1}{\bar{\tau}}R_1 - \tfrac{1}{\bar{d}}R_2 - \tfrac{1-\mu}{\bar{\tau}}R_4 + \delta_1 L^TL - P_1A\Big]e_x(t)\\
&+ 2e_x^T(t)\Big[-\tfrac{1}{2}P_1 - \tfrac{1}{2}P_1A\Big]\dot{e}_x(t) + \dot{e}_x^T(t)[-P_1 + \bar{\tau}R_1 + \bar{d}R_2 + \bar{\tau}R_4]\dot{e}_x(t)\\
&+ e_S^T(t)\Big[-2P_2C + Q_4 - \tfrac{1}{\bar{d}}R_3 - P_2C\Big]e_S(t) + 2e_x^T(t)\Big[P_1B + \tfrac{1}{2}P_1B\Big]e_S(t) + 2\dot{e}_x^T(t)\Big[\tfrac{1}{2}P_1B\Big]e_S(t)\\
&+ \dot{e}_S^T(t)[\bar{d}R_3 - P_2]\dot{e}_S(t) + 2e_S^T(t)\Big[-\tfrac{1}{2}P_2 - \tfrac{1}{2}P_2C\Big]\dot{e}_S(t) + 2e_x^T(t)\Big[\tfrac{1}{\bar{\tau}}R_1 + \tfrac{1-\mu}{\bar{\tau}}R_4\Big]e_x(t-\tau(t))\\
&+ e_x^T(t-\tau(t))\Big[-\tfrac{2}{\bar{\tau}}R_1 + \delta_2 L^TL - \tfrac{1-\mu}{\bar{\tau}}R_4 - (1-\mu)Q_1\Big]e_x(t-\tau(t))\\
&+ e_x^T(t-\bar{\tau})\Big[-\tfrac{1}{\bar{\tau}}R_1 - Q_2\Big]e_x(t-\bar{\tau}) + 2e_x^T(t-\tau(t))\Big[\tfrac{1}{\bar{\tau}}R_1\Big]e_x(t-\bar{\tau}) + e_x^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_2\Big]e_x(t-d(t))\\
&+ 2e_x^T(t)\Big[\tfrac{1}{\bar{d}}R_2 - P_1K_1M - \tfrac{1}{2}P_1K_1M\Big]e_x(t-d(t)) + 2\dot{e}_x^T(t)\Big[-\tfrac{1}{2}P_1K_1M\Big]e_x(t-d(t))\\
&+ e_x^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_2 - Q_3\Big]e_x(t-\bar{d}) + 2e_x^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_2\Big]e_x(t-\bar{d}) + e_S^T(t-d(t))\Big[-\tfrac{2}{\bar{d}}R_3\Big]e_S(t-d(t))\\
&+ 2e_S^T(t)\Big[\tfrac{1}{\bar{d}}R_3 - P_2K_2N - \tfrac{1}{2}P_2K_2N\Big]e_S(t-d(t)) + 2\dot{e}_S^T(t)\Big[-\tfrac{1}{2}P_2K_2N\Big]e_S(t-d(t))\\
&+ 2e_S^T(t-d(t))\Big[\tfrac{1}{\bar{d}}R_3\Big]e_S(t-\bar{d}) + e_S^T(t-\bar{d})\Big[-\tfrac{1}{\bar{d}}R_3 - Q_4\Big]e_S(t-\bar{d})\\
&+ f^T(e_x(t))[-\delta_1 I]f(e_x(t)) + f^T(e_x(t-\tau(t)))[-\delta_2 I]f(e_x(t-\tau(t)))\\
&+ 2e_x^T(t)\Big[P_1D_{\tau} + \tfrac{1}{2}P_1D_{\tau}\Big]f(e_x(t-\tau(t))) + 2e_x^T(t)\Big[P_1D + \tfrac{1}{2}P_1D\Big]f(e_x(t)) + 2\dot{e}_x^T(t)\Big[\tfrac{1}{2}P_1D\Big]f(e_x(t))\\
&+ 2e_S^T(t)\Big[P_2W + \tfrac{1}{2}P_2W\Big]f(e_x(t)) + 2\dot{e}_S^T(t)\Big[\tfrac{1}{2}P_2W\Big]f(e_x(t)) + 2\dot{e}_x^T(t)\Big[\tfrac{1}{2}P_1D_{\tau}\Big]f(e_x(t-\tau(t)))\\
={}& \xi^T(t)\Sigma_{\xi}\,\xi(t)
\end{aligned}\tag{20}$$

where ξ(t) = [e_x^T(t), ė_x^T(t), e_S^T(t), ė_S^T(t), e_x^T(t−τ(t)), e_x^T(t−τ̄), e_x^T(t−d(t)), e_x^T(t−d̄), e_S^T(t−d(t)), e_S^T(t−d̄), f^T(e_x(t)), f^T(e_x(t−τ(t)))]^T and

$$\Sigma_{\xi}=\begin{pmatrix}
\Lambda_{11} & \Lambda_{12} & \Lambda_{13} & 0 & \Lambda_{15} & 0 & \bar{\Lambda}_{17} & 0 & 0 & 0 & \Lambda_{111} & \Lambda_{112}\\
* & \Lambda_{22} & \tfrac{P_1B}{2} & 0 & 0 & 0 & -\tfrac{P_1K_1M}{2} & 0 & 0 & 0 & \tfrac{P_1D}{2} & \tfrac{P_1D_{\tau}}{2}\\
* & * & \Lambda_{33} & \Lambda_{34} & 0 & 0 & 0 & 0 & \bar{\Lambda}_{39} & 0 & \Lambda_{311} & 0\\
* & * & * & \Lambda_{44} & 0 & 0 & 0 & 0 & -\tfrac{P_2K_2N}{2} & 0 & \tfrac{P_2W}{2} & 0\\
* & * & * & * & \Lambda_{55} & \tfrac{R_1}{\bar{\tau}} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & \Lambda_{66} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & -\tfrac{2R_2}{\bar{d}} & \tfrac{R_2}{\bar{d}} & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & \Lambda_{88} & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & * & -\tfrac{2R_3}{\bar{d}} & \tfrac{R_3}{\bar{d}} & 0 & 0\\
* & * & * & * & * & * & * & * & * & \Lambda_{1010} & 0 & 0\\
* & * & * & * & * & * & * & * & * & * & -\delta_1 I & 0\\
* & * & * & * & * & * & * & * & * & * & * & -\delta_2 I
\end{pmatrix}\tag{21}$$


where Λ̄_{17} = (1/d̄)R_2 − P_1K_1M − ½P_1K_1M and Λ̄_{39} = (1/d̄)R_3 − P_2K_2N − ½P_2K_2N.

Thus, with the change of variables Y_1 = P_1K_1, Y_2 = P_2K_2, Eq. (20) can be rewritten as

$$\dot{V}(t) = \sum_i \dot{V}_i(t) \le \xi^T(t)\Sigma_{\xi}\,\xi(t) = \xi^T(t)\Sigma\,\xi(t).$$

From (6), it follows that V̇(t) ≤ ξ^T(t)Σξ(t) < 0. By Lyapunov theory, this implies that the error dynamics (5) are globally asymptotically stable. Theorem 1 is proved. □

4. The numerical simulation

In this section, a numerical example with simulation results is employed to demonstrate the effectiveness of the proposed method. Consider the neural network (1) with the following parameters:

$$A=\begin{bmatrix}0.3&0\\0&0.2\end{bmatrix},\quad B=\begin{bmatrix}2.4&-0.3\\0.45&0.6\end{bmatrix},\quad C=\begin{bmatrix}1.25&0\\0&2\end{bmatrix},\quad D=\begin{bmatrix}2.5&-0.1\\-0.15&3.5\end{bmatrix},\quad D_{\tau}=\begin{bmatrix}-2&-0.5\\-0.3&-2\end{bmatrix},\quad W=\begin{bmatrix}0.2&-0.0015\\-0.01&0.15\end{bmatrix},$$

M = N = I, g(x) = tanh(x), τ(t) = 0.1(1 + sin(t)), d̄ = 0.5. With these parameters, the LMI condition (6) yields the following feasible solution:

$$Q_1=Q_2=\begin{bmatrix}1.3061&-0.9135\\-0.9135&4.9050\end{bmatrix},\quad Q_3=\begin{bmatrix}6.9486&-1.9056\\-1.9056&13.6265\end{bmatrix},\quad Q_4=\begin{bmatrix}1.9402&1.4936\\1.4936&15.8139\end{bmatrix},$$

$$R_1=\begin{bmatrix}18.8865&-1.2856\\-1.2856&22.4278\end{bmatrix},\quad R_2=\begin{bmatrix}0.2608&-0.5528\\-0.5528&2.4127\end{bmatrix},\quad R_3=\begin{bmatrix}4.0177&-0.1573\\-0.1573&2.8562\end{bmatrix},\quad R_4=\begin{bmatrix}0.4722&-0.6552\\-0.6552&3.0300\end{bmatrix},$$

$$P_1=\begin{bmatrix}3.0939&-0.4306\\-0.4306&4.9373\end{bmatrix},\quad P_2=\begin{bmatrix}11.6978&0.4127\\0.4127&6.3331\end{bmatrix},\quad Y_1=\begin{bmatrix}0.2758&-0.5919\\-0.5919&2.6536\end{bmatrix},\quad Y_2=\begin{bmatrix}5.6250&0.2781\\0.2781&3.6824\end{bmatrix},$$

with δ_1 = 58.4724, δ_2 = 52.2519. Further, from K_1 = P_1^{-1}Y_1, K_2 = P_2^{-1}Y_2, the gain matrices of the sampled-data state estimator are obtained:

$$K_1=\begin{bmatrix}0.0733&-0.1179\\-0.1135&0.5272\end{bmatrix},\quad K_2=\begin{bmatrix}0.4804&0.0033\\0.0126&0.5812\end{bmatrix}.$$

The simulation results are shown in Figs. 1–4.

Fig. 1. State X. Fig. 2. State S. Fig. 3. Error X. Fig. 4. Error S.
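As a quick consistency check (not part of the original paper), the following NumPy snippet recomputes the gains from the listed P and Y matrices:

```python
import numpy as np

P1 = np.array([[3.0939, -0.4306], [-0.4306, 4.9373]])
P2 = np.array([[11.6978, 0.4127], [0.4127, 6.3331]])
Y1 = np.array([[0.2758, -0.5919], [-0.5919, 2.6536]])
Y2 = np.array([[5.6250, 0.2781], [0.2781, 3.6824]])

# Gain recovery from the variable change Y = P K  =>  K = P^{-1} Y.
K1 = np.linalg.solve(P1, Y1)   # numerically safer than forming the inverse
K2 = np.linalg.solve(P2, Y2)
print(np.round(K1, 4))         # approximately [[0.0733, -0.1179], [-0.1135, 0.5272]]
print(np.round(K2, 4))         # approximately [[0.4804,  0.0033], [ 0.0126, 0.5812]]
```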

Remark 1. In order to reduce the conservatism of Theorem 1, the free-weighting-matrix method is used. To avoid logical errors when solving the LMI, the free matrices in this paper are fixed as P_1, P_2.

5. Controller optimization

In this part, starting from the known initial solution, the Local Search Algorithm is used to find a better controller and thereby improve the control efficiency.

5.1. The optimization design

Optimal control is a method for designing a system; the problem it studies is how to select the controller so as to guarantee the performance of the control system in a specified sense. In this paper, we optimize the controller obtained above [28].



According to the general optimal control theory, the controller can be designed as follows:

$$\mu_1(t) = K_1e_x(t),\qquad \mu_2(t) = K_2e_S(t)$$

The following performance (objective) function is designed:

$$J = \frac{1}{2}\int_0^T\big[e_x^T(t)Q(t)e_x(t) + e_S^T(t)Q(t)e_S(t)\big]\,dt + \frac{1}{2}\int_0^T\big[\mu_1^T(t)R(t)\mu_1(t) + \mu_2^T(t)R(t)\mu_2(t)\big]\,dt$$

where the weighting matrices R and Q are usually taken to be identity matrices of appropriate dimensions, so the performance index function becomes

$$J = \frac{1}{2}\int_0^T\big[\|e_x(t)\|^2 + \|e_S(t)\|^2\big]\,dt + \frac{1}{2}\int_0^T\big[\|\mu_1(t)\|^2 + \|\mu_2(t)\|^2\big]\,dt$$

Next, the controller is optimized; the Local Search Algorithm is proposed to deal with this problem.
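For a concrete discretization of this energy function, the sketch below approximates J from simulated error trajectories by trapezoidal quadrature. It is an illustrative assumption, not code from the paper: `ex` and `eS` are arrays with one row per time step (for instance from the Euler sketch in Section 2), and `h` is the integration step.

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz   # NumPy 2.x renamed trapz

def energy_J(ex, eS, K1, K2, h):
    """Approximate J = 0.5*int(||ex||^2 + ||eS||^2) dt
                     + 0.5*int(||mu1||^2 + ||mu2||^2) dt
    by the trapezoidal rule; ex and eS have one row per time step."""
    mu1 = ex @ K1.T                        # mu1(t) = K1 ex(t), row-wise
    mu2 = eS @ K2.T                        # mu2(t) = K2 eS(t), row-wise
    state_term = np.sum(ex**2, axis=1) + np.sum(eS**2, axis=1)
    ctrl_term  = np.sum(mu1**2, axis=1) + np.sum(mu2**2, axis=1)
    return 0.5 * trapz(state_term + ctrl_term, dx=h)
```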


5.2. Local search algorithm

Optimization idea. First, a controller that stabilizes the system is obtained; it consumes a certain amount of energy. The controller is then expanded locally, so that an improved controller consumes less energy. If an improved controller can still stabilize the system, the optimization step is successful. The optimization efficiency then follows by a simple calculation. Experimental results show that the algorithm is effective and produces better controllers.

Basic variables. In this section, a concrete example is used to show how the algorithm works; it is assumed that an initial feasible solution has been obtained. Before introducing the algorithm, some basic variables are defined: initial controller parameters K1, K2; initial energy function value = s; minimum energy required for control = t; step length = h; potential range = [−m, m]; number of candidate controller potentials = p; number of cycles = n; maximum optimization efficiency = f, where f = (s − t)/s and p = 2m/h.

Optimization code

BEGIN
Step 1: Divide the potential range evenly according to the step length, so that there are p candidate controller potentials.
Step 2: Select one of the p candidate controllers at random and verify that the LMI has a solution; if so, compute the energy function and save its value.
Step 3: Compare the energy value of the feasible solution obtained in Step 2 with the best value so far; if it is smaller, record the controller and save the energy value; else continue.
Step 4: Repeat Steps 2–3 n times, until the maximum optimization efficiency f is found; print f.
END

Remark 2. The step size and potential range of the algorithm are sensitive to the initial solution. Moreover, after many experiments, the suitable ranges of the parameters h and m are relatively small and can be chosen according to the initial solution. To improve the optimization efficiency, the number of cycles should be increased as much as possible, although this raises the computational cost; even for the same number of cycles, a good suboptimal solution can still be obtained. Since the example in this paper has small dimension, the parameter n is made as large as possible so as to obtain the maximum optimization efficiency f.
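A runnable sketch of this loop is given below, assuming the gains are perturbed entrywise on the discretized potential grid. The helper `lmi_energy` is a hypothetical placeholder standing in for the LMI feasibility test of Theorem 1 together with the evaluation of the energy function J; its implementation and the random sampling scheme are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def lmi_energy(K1, K2):
    """Placeholder: return the energy J if the LMI of Theorem 1 is feasible
    for the candidate gains (K1, K2), and None otherwise."""
    raise NotImplementedError

def local_search(K1_0, K2_0, s, h=0.05, m=0.5, n=80):
    """Local search around an initial feasible controller (K1_0, K2_0).
    s is the energy of the initial controller; h is the step length and
    [-m, m] the potential range, so the grid has about p = 2m/h offsets."""
    offsets = np.arange(-m, m + h / 2, h)          # discretized potential range
    best_K, best_J = (K1_0, K2_0), s
    for _ in range(n):
        # Perturb each gain entry by a randomly chosen grid offset.
        K1 = K1_0 + rng.choice(offsets, size=K1_0.shape)
        K2 = K2_0 + rng.choice(offsets, size=K2_0.shape)
        J = lmi_energy(K1, K2)                     # None if LMI infeasible
        if J is not None and J < best_J:
            best_K, best_J = (K1, K2), J
    f = (s - best_J) / s                           # optimization efficiency
    return best_K, best_J, f
```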

5.3. Optimization simulating

Consider again the competitive neural network with time-varying delay:

$$\begin{aligned}
\text{STM:}\quad \dot{x}(t) &= -Ax(t) + Dg(x(t)) + D_{\tau}g(x(t-\tau(t))) + BS(t)\\
\text{LTM:}\quad \dot{S}(t) &= -CS(t) + Wg(x(t))
\end{aligned}$$

with the notation of Section 2, and take the same parameters as in Section 4: A = diag(0.3, 0.2), B = [2.4, −0.3; 0.45, 0.6], C = diag(1.25, 2), D = [2.5, −0.1; −0.15, 3.5], D_τ = [−2, −0.5; −0.3, −2], W = [0.2, −0.0015; −0.01, 0.15], M = N = I, g(x) = tanh(x), τ(t) = 0.1(1 + sin(t)), d̄ = 0.5.

Through calculation, the energy function value corresponding to the initial controller is 1.836133 × 10^5, and the optimization efficiency is 13.57%. The specific optimization results are shown in Figs. 5 and 6 and in Table 1.


Table 1
The optimization results.

Group           1     2     3     4     5     19    20    Initial  22    32    34    Optimization
Results (10^5)  2.81  2.72  2.64  2.56  2.48  1.88  1.86  1.83     1.81  1.60  1.59  1.58

Fig. 5. The optimization results.

Fig. 6. The Boolean value of the controllers.

Analysing the table and figures above: 80 groups of controllers were obtained by the Local Search Algorithm, among which 36 groups make the system stable after the LMI test. In Fig. 6, the controllers with Boolean value 1 are the feasible solutions. In Fig. 5, the feasible solutions are renumbered and the corresponding bar chart of energy consumption is drawn. The performance index value corresponding to the initial controller is 1.83 × 10^5, obtained as part of the initial group of feasible solutions. The performance index of the optimal controller (Group 36), however, is only 1.58 × 10^5, so the optimization efficiency reaches 13.57%.

Remark 3. Different from previous research results, this paper not only samples and analyses the time-varying-delay competitive neural network, but also optimizes the controller and simulates the optimization process. Unlike other optimization papers, this paper not only proposes the optimization idea but also gives a specific algorithm: the Local Search Algorithm is adopted, its optimization steps are introduced in detail with an example, and the final optimization efficiency reaches a considerable 13.57%, which is the main innovation of this paper.

Remark 4. This paper presents a Local Search Algorithm based on known feasible solutions. Because of the sensitivity of the LMI algorithm, the number of feasible solutions is not large. Therefore, 80 groups of potential feasible solutions were selected for expansion in this paper, and only 35 groups could finally be solved; together with the initial solution, there are 36 feasible solutions.


The initial solution ranks 21st among the 36 feasible solutions, and the optimal solution ranks 36th. In the remaining cases, no solution was found.

Remark 5. According to Figs. 5 and 6 and Table 1, when the control strength is too strong or too weak, the whole system is over-controlled or under-controlled, which may make it unstable; in other words, the LMI has no solution. This explains why only 36 of the 80 controller sets obtained by the Local Search Algorithm can make the system stable.

6. Conclusion

In this paper, the optimal state estimation for a competitive neural network with time-varying delay was studied. Based on stability theory and sampled-data control theory, a new Lyapunov function was proposed. First, a solution guaranteeing the robust stability of the system was obtained. Then, drawing on control theory, an objective function describing energy with a physical meaning was modelled. On this basis, the obtained controller was optimized, and the optimal controller within a certain range was found using the Local Search Algorithm. The numerical simulations demonstrate the efficiency of the theoretical results.

Funding

This work was jointly supported by the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20161126.

References

[1] H. Bao, J. Cao, J. Kurths, A. Alsaedi, B. Ahmad, H∞ state estimation of stochastic memristor-based neural networks with time-varying delays, Neural Netw. 99 (2018) 79–91.
[2] C. Huang, R. Su, J. Cao, S. Xiao, Asymptotically stable high-order neutral cellular neural networks with proportional delays and operators, Int. J. Biomath. 12 (2019) 1950016.
[3] H. Zhang, M. Ye, R. Ye, J. Cao, Synchronization stability of Riemann–Liouville fractional delay-coupled complex neural networks, Physica A 508 (2018) 155–165.
[4] C. Huang, Z. Yang, T. Yi, X. Zou, On the basins of attraction for a class of delay differential equations with non-monotone bistable nonlinearities, J. Differ. Equ. 256 (2014) 2101–2114.
[5] H. Zhang, J. Cao, L. Xiong, Novel synchronization conditions for time-varying delayed Lur'e system with parametric uncertainty, Appl. Math. Comput. 350 (2019) 224–236.
[6] D. Zhang, J. Cheng, J. Cao, Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties, Appl. Math. Comput. 344 (2019) 230–242.
[7] H. Bao, J. Cao, J. Kurths, State estimation of fractional-order delayed memristive neural networks, Nonlinear Dynam. 94 (2018) 1215–1225.
[8] Y. Tan, C. Huang, B. Sun, T. Wang, Dynamics of a class of delayed reaction–diffusion systems with Neumann boundary condition, J. Math. Anal. Appl. 458 (2018) 1115–1130.
[9] H. Bao, J. Cao, Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay, Neural Netw. 24 (2011) 19–28.
[10] C. Huang, B. Liu, New studies on dynamic analysis of inertial neural networks involving non-reduced order method, Neurocomputing 325 (2019) 283–287.
[11] J. Wang, C. Huang, L. Huang, Discontinuity-induced limit cycles in a general planar piecewise linear system of saddle–focus type, Nonlinear Anal. Hybrid Syst. 33 (2019) 162–178.
[12] J. Wang, X. Cheng, L. Huang, The number and stability of limit cycles for planar piecewise linear systems of node–saddle type, J. Math. Anal. Appl. 469 (2019) 405–427.
[13] C. Huang, Y. Qiao, L. Huang, R. Agarwal, Dynamical behaviors of a food-chain model with stage structure and time delays, Adv. Differential Equations 186 (2018) 1–26.
[14] C. Huang, B. Liu, X. Tian, L. Yang, X. Zhang, Global convergence on asymptotically almost periodic SICNNs with nonlinear decay functions, Neural Process. Lett. 49 (2019) 625–641.
[15] L. Jiang, J. Cao, L. Xiong, Generalized multiobjective robustness and relations to set-valued optimization, Appl. Math. Comput. 361 (2019) 599–608.
[16] B. Liu, Global exponential stability for BAM neural networks with time-varying delays in the leakage terms, Nonlinear Anal. RWA 14 (2013) 559–566.
[17] X. Li, J. Cao, Delay-dependent stability of neural networks of neutral type with time delay in the leakage term, Nonlinearity 23 (2010) 1709–1726.
[18] H. Huang, G. Feng, State estimation of recurrent neural networks with time-varying delay: a novel delay partition approach, Neurocomputing 74 (2011) 792–796.
[19] X. Fu, X. Li, H. Akca, Exponential state estimation for impulsive neural networks with time delay in the leakage term, Arab. J. Math. 2 (2013) 33–49.
[20] D. Zhang, L. Yu, Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays, Neural Netw. 35 (2012) 103–111.
[21] J. Liu, J. Cao, Z. Wu, State estimation for complex systems with randomly occurring nonlinearities and randomly missing measurements, Internat. J. Systems Sci. 45 (2014) 1–11.
[22] C. Huang, R. Su, J. Cao, S. Xiao, Asymptotically stable high-order neutral cellular neural networks with proportional delays and operators, Int. J. Bifurcation Chaos 29 (2019) 1–25.
[23] J. Liu, J. Cao, Z. Wu, Q. Qi, State estimation for complex systems with randomly occurring nonlinearities and randomly missing measurements, Internat. J. Systems Sci. 45 (2014) 1364–1374.
[24] S. Lakshmanan, J. Park, R. Rakkiyappan, State estimator for neural networks with sampled data using discontinuous Lyapunov functional approach, Nonlinear Dynam. 73 (2013) 509–520.


[25] L. Li, Y. Yang, T. Liang, The exponential stability of BAM neural networks with leakage time-varying delays and sampled-data state feedback input, Adv. Differential Equations 39 (2014).
[26] C. Huang, Exponential stability of inertial neural networks involving proportional delays and non-reduced order method, Pure Appl. Anal. 18 (2019) 3337–3349.
[27] C. Xu, X. Yang, Finite-time synchronization of networks via quantized intermittent pinning control, IEEE Trans. Cybern. 48 (2018) 3021–3027.
[28] Q. Chang, Y. Yang, X. Sui, Z. Shi, The optimal control synchronization of complex dynamical networks with time-varying delay using PSO, Neurocomputing 333 (2019) 1–10.