Neurocomputing 177 (2016) 120–129
Finite-time recurrent neural networks for solving nonlinear optimization problems and their application

Peng Miao a,b, Yanjun Shen a,*, Yujiao Li c, Lei Bao a
a Hubei Provincial Collaborative Innovation Center for New Energy Microgrid, China Three Gorges University, 443002, China
b Department of Basic Courses, Zhengzhou Science and Technology University, Zhengzhou 450064, Henan, China
c College of Science, China Three Gorges University, 443002, China
Article info

Article history: Received 16 June 2015; Received in revised form 3 November 2015; Accepted 3 November 2015; Available online 21 November 2015. Communicated by Long Cheng.

Abstract
This paper focuses on finite-time recurrent neural networks with a continuous but non-smooth activation function for solving nonlinearly constrained optimization problems. Firstly, the definition of finite-time stability and finite-time convergence criteria are reviewed. Secondly, a finite-time recurrent neural network is proposed to solve the nonlinear optimization problem. It is shown that the proposed recurrent neural network is globally finite-time stable under the condition that the Hessian matrix of the associated Lagrangian function is positive definite. Its output converges to a minimum solution globally and in finite time, which means that the actual minimum solution can be obtained within a finite time period. In addition, our recurrent neural network is applied to a hydrothermal scheduling problem; compared with other methods, a lower-consumption scheme is obtained within a finite time interval. Finally, numerical simulations demonstrate the superiority and effectiveness of the proposed neural networks in solving nonlinear optimization problems with inequality constraints. © 2015 Elsevier B.V. All rights reserved.
Keywords: Finite-time stable; Recurrent neural network; Nonlinear optimization problems; Continuous but non-smooth; Hydrothermal scheduling problem
1. Introduction

Nonlinear constrained optimization problems are widely applied in science and engineering [1], for example, in electrical network planning, optimal control, mechanical design, hydrothermal scheduling, structure design and so on. In the past decades, numerical methods have been proposed to solve nonlinear convex optimization problems, by which approximate optimal solutions can be derived [2–4]. However, real-time solutions are usually imperative in many applications. Because of their parallel-distributed nature and amenability to hardware realization, recurrent neural networks are capable of real-time computing and have been applied to solving nonlinear optimization problems with inequality constraints. Different from traditional numerical optimization algorithms, recurrent neural networks are governed by dynamic equations, so the techniques of numerical ODEs and dynamical systems can be applied directly to solve constrained optimization problems effectively. In what follows, we give a detailed introduction to the development of recurrent neural networks. In 1986, Hopfield first presented a neural network to solve linear programming problems [5]. In order to solve nonlinear convex programming problems, Kennedy
* Corresponding author. E-mail address: [email protected] (Y. Shen).
http://dx.doi.org/10.1016/j.neucom.2015.11.014
proposed a neural network with a finite penalty parameter [6]. Although the Kuhn–Tucker optimality conditions hold for Kennedy's and Hopfield's networks, a neural network with a finite penalty parameter cannot find the exact optimal solution, and it is difficult to implement a neural network with a very large penalty parameter [7]. In order to overcome this imperfection, some scholars attempted to design neural networks without penalty parameters. For example, Rodriguez-Vazquez et al. designed a switched-capacitor neural network to solve a class of optimization problems with nonlinear convex constraints [8]. For convex quadratic optimization problems with bound constraints, Bouzerdoum and Pattison presented a recurrent neural network [9]. Zhang and Constantinides proposed a Lagrange neural network to deal with nonlinear programming problems with equality constraints [10]. In particular, a projection neural network was developed to solve monotone variational inequality problems with limit constraints [11,12]. By extending the projection neural network, two recurrent neural networks for solving strictly convex programs with nonlinear inequality constraints and with linear constraints were presented in [13,14], respectively. The projection neural network was also used to solve pseudomonotone variational inequality problems in [15]. The above recurrent neural networks were proposed to solve linear or nonlinear convex problems. However, many optimization problems in engineering applications may be nonconvex or nonsmooth, so it is interesting to study recurrent neural networks for nonconvex or nonsmooth
optimization problems. Recently, some neural network models dealing with these optimization problems have been proposed in [16–21]. It should be noted that all of the above-mentioned neural networks are asymptotically or exponentially stable, which means that the exact solutions are obtained only as time goes to infinity. In order to accelerate convergence, some neural networks with finite-time convergence properties have been explored. For example, Cheng et al. presented a recurrent neural network with a specific nonlinear unit to deal with convex optimization problems [22]; the proposed neural network converges to the optimal solution in a finite time interval, and the computational efficiency is increased dramatically as well. A finite-time convergent neural network has also been proposed to solve quadratic programming problems [23], for which the actual optimal solutions can be obtained in a finite time interval. However, the neural networks proposed in [23] only deal with convex optimization problems. In this paper, in order to overcome these limitations, we design a finite-time recurrent neural network, based on the authors' previous work [24–26], to solve nonlinear nonconvex optimization problems. Under the condition that the Hessian matrix of the associated Lagrangian function is positive definite, we prove that the proposed recurrent neural network is globally finite-time stable. Then, the actual minimum solution can be obtained within a finite time period. In addition, our recurrent neural network is applied to a hydrothermal scheduling problem. The superiority and effectiveness of the proposed neural networks are shown by solving nonlinear optimization problems with inequality constraints.

This paper is organized as follows. In Section 2, the definition of finite-time stability and finite-time convergence criteria are reviewed. In Section 3, we present a finite-time recurrent neural network to solve nonlinearly constrained optimization problems. In Section 4, our method is applied to a short-term hydrothermal scheduling problem. Numerical simulations and a test hydrothermal system are given to show the effectiveness of our methods in Section 5. Section 6 concludes the paper.
2. Preliminaries

Consider the system
$$\dot{x}(t)=f(x(t)),\quad f(0)=0,\quad x\in\mathbb{R}^n,\quad x(0)=x_0, \qquad (1)$$
where $f:D\to\mathbb{R}^n$ denotes a continuous function defined on an open neighborhood $D$ with $0\in D$. We give the following definition of finite-time stability and a finite-time stability criterion.

Definition 1 (Bhat and Bernstein [27], Shen and Xia [28]). If there exist an open neighborhood $U$ with $0\in U$ and a function $T_x:U\setminus\{0\}\to(0,\infty)$ such that every solution $x(t,x_0)$ of (1) with initial point $x_0\in U\setminus\{0\}$ is well defined and unique in forward time for $t\in[0,T_x(x_0))$, $\lim_{t\to T_x(x_0)}x(t,x_0)=0$, and $x(t,x_0)=0$ for $t\ge T_x(x_0)$, then the system (1) converges to the equilibrium $x=0$ in finite time, and $T_x(x_0)$ denotes the convergence time. The equilibrium of (1) is finite-time stable if it is Lyapunov stable and finite-time convergent. In addition, the origin is a globally finite-time stable equilibrium when $U=D=\mathbb{R}^n$.

Lemma 1 (Bhat and Bernstein [27]). If there exist a positive definite function $V(x)$ defined on a neighborhood $U\subseteq\mathbb{R}^n$ of the origin and two real numbers $k>0$ and $0<r<1$ satisfying
$$\dot{V}(x)\le -kV(x)^{r},\qquad \forall x\in U, \qquad (2)$$
then the origin of system (1) is finite-time stable. Moreover, the upper bound of the convergence time $T_1$ satisfies
$$T_1\le \frac{V(x_0)^{1-r}}{k(1-r)},\qquad \forall x_0\in U. \qquad (3)$$
If $U=\mathbb{R}^n$ and $V(x)$ is radially unbounded, the origin of system (1) is globally finite-time stable.

The following lemma will be used in the proof of our main results.

Lemma 2 (Clarke [29]). Let $f:\mathbb{R}^n\to\mathbb{R}$ be continuously differentiable. Then $\max\{0,f(x)\}$ is a regular function, and its Clarke generalized gradient is given by
$$\partial\max\{0,f(x)\}=\begin{cases}\nabla f(x) & \text{if } f(x)>0,\\ [0,1]\,\nabla f(x) & \text{if } f(x)=0,\\ 0 & \text{if } f(x)<0.\end{cases} \qquad (4)$$
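To make the role of the exponent $0<r<1$ in Lemma 1 concrete, the following short script (our own illustration, not taken from the paper; the values of $x_0$, $k$ and $r$ are arbitrary) integrates the scalar system $\dot{x}=-k\,\mathrm{sign}(x)|x|^{r}$ with forward Euler and compares the observed settling time with the bound (3), using $V(x)=|x|$ so that $\dot{V}=-kV^{r}$.

```python
import numpy as np

def settle_time(x0, k=1.0, r=0.5, dt=1e-4, t_max=10.0, tol=1e-6):
    """Integrate dx/dt = -k*sign(x)*|x|^r with forward Euler and
    return the first time |x| drops below tol."""
    x, t = float(x0), 0.0
    while t < t_max and abs(x) > tol:
        x -= k * np.sign(x) * abs(x) ** r * dt
        t += dt
    return t

x0, k, r = 2.0, 1.0, 0.5
observed = settle_time(x0, k, r)
bound = abs(x0) ** (1 - r) / (k * (1 - r))  # right-hand side of (3) with V(x) = |x|
print(f"observed settling time ~ {observed:.3f} s, bound (3) = {bound:.3f} s")
```

For these values the bound $|x_0|^{1-r}/(k(1-r))\approx 2.83$ is essentially tight, whereas an exponentially stable system (the case $r=1$) would only reach the tolerance asymptotically.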
3. Finite-time recurrent neural network

In this section, we consider the following nonlinear optimization problem (NOP):
$$\text{minimize}\quad f(x), \qquad (5a)$$
$$\text{subject to}\quad c(x)\le 0,\quad x\ge 0, \qquad (5b)$$
where $f:\mathbb{R}^n\to\mathbb{R}$, $c(x)=[c_1(x),\ldots,c_m(x)]^T$ and $x\in\mathbb{R}^n$. In this paper, we assume that $f$ and the $c_i$ are twice differentiable and that the NOP has at least one local optimal solution. Clearly, the NOP is a convex programming problem (CPP) if $f(x)$ and the $c_i(x)$ are all convex; otherwise, it is a nonconvex programming problem (NCPP). From [9], if $x^*$ is a local optimal solution to the NOP, then there exists $y^*\in\mathbb{R}^m$ such that $(x^*,y^*)$ is a Karush–Kuhn–Tucker (KKT) point satisfying
$$\begin{cases} y\ge 0,\; c(x)\le 0,\; x\ge 0,\\ \nabla f(x)+\nabla c(x)y\ge 0,\; y^{T}c(x)=0,\end{cases} \qquad (6)$$
where $\nabla f(x)$ and $\nabla c(x)$ denote the gradients at $x$. By the projection theorem [30], the following projection equations can be used to replace (6):
$$\begin{cases}(x-(\nabla f(x)+\nabla c(x)y))^{+}-x=0,\\ (y+c(x))^{+}-y=0,\end{cases} \qquad (7)$$
where $x^{+}=\max\{0,x\}$ componentwise. Let $z=(x,y)^T$, $z^{+}=(x^{+},y^{+})^T$ and $\tilde{z}=(\tilde{x},\tilde{y})^T=(\nabla f(x^{+})+\nabla c(x^{+})y^{+},\;-c(x^{+}))^T$. Then $z^*=(x^*,y^*)$ is a KKT point of the NOP if and only if $z^*$ satisfies (7). Furthermore, $z^*$ is a KKT point if $z^*$ is also a solution to the variational inequality problem
$$(z-z^*)^{T}F(z^*)\ge 0,\qquad \forall z\ge 0, \qquad (8)$$
where $F(z^*)=[\nabla f(x^*)+\nabla c(x^*)y^*;\;-c(x^*)]$.

Next, we design the following finite-time recurrent neural network to solve (5) by extending the ODE method in [31]:
state equation
$$\dot{z}(t)=-\varepsilon\,\mathcal{F}(z-z^{+}+\tilde{z}), \qquad (9a)$$
output equation
$$x(t)=x^{+}, \qquad (9b)$$
where $\varepsilon>0$ is a parameter and the activation function $\mathcal{F}(x)$ is defined as
$$\mathcal{F}(x)=\mathrm{sign}(x)\,|x|^{r}, \qquad (10)$$
where $0<r<1$ is a real number and, for $x=(x_1,x_2,\ldots,x_n)^T$, $\mathcal{F}(x)=(\mathcal{F}(x_1),\mathcal{F}(x_2),\ldots,\mathcal{F}(x_n))^T$ and $\mathrm{sign}(x)=(\mathrm{sign}(x_1),\mathrm{sign}(x_2),\ldots,\mathrm{sign}(x_n))^T$. We can use a recurrent neural network with a one-layer structure to realize the dynamical state equation (9). The circuit used to realize the neural network has $n+m$ integrator processors, $n+m$ continuous but non-smooth activation function processors for $z$, $n+m$ processors for $z^{+}$ and $n+m$
processors for $\tilde{z}$. Its complexity depends on the mapping $z$ in the original problem. Fig. 1 shows the structure of the recurrent neural network (9).

Fig. 1. The structure of the recurrent neural network (9).

The following lemmas are also useful for our main results.

Lemma 3. There exists a solution to the recurrent neural network (9) on $[0,+\infty)$.

Proof. Note that the function (10) and $z-z^{+}+\tilde{z}$ are continuous on $[0,+\infty)$. Then $\mathcal{F}(z-z^{+}+\tilde{z})$ is a continuous function on $[0,+\infty)$. By [32], there exists a solution to the recurrent neural network (9) on $[0,+\infty)$. □

In what follows, let $y_0(t)$ denote the second state trajectory of (9) with zero initial point, and let
$$S_0=\{y\in\mathbb{R}^m\mid y=(y_0(t)+e^{t_0-t}(y(t_0)-y_0(t_0)))^{+},\;t\in[t_0,\infty)\}$$
denote an associated set, where $y(t_0)\in\mathbb{R}^m$ is a nonzero vector. The Lagrangian function associated with the NOP is
$$L(x,y)=f(x)+y^{T}c(x).$$
Now, we give our main results.

Theorem 1. If the Hessian matrix $\nabla_x^2L(x,y)$ is positive definite on $\mathbb{R}_+^n\times S_0$ for any initial point $z_0=(x_0,y_0)$, that is, there exists a constant $\delta>0$ such that $\nabla_x^2L(x,y)>\delta I$, then the recurrent neural network (9) is Lyapunov stable at a KKT point, it converges to its equilibrium point in finite time, and the upper bound of the convergence time $t_1$ is
$$t_1\le\frac{V(0)^{(1-r)/(1+r)}}{K(1-r)(1+r)^{(r-1)/(1+r)}}, \qquad (11)$$
where $V(0)=V_1(0)+V_2(0)=\frac{1}{r+1}\|x_0-x_0^{+}+\tilde{x}_0\|_{r+1}^{r+1}+\frac{1}{r+1}\|y_0-y_0^{+}-c(x_0^{+})\|_{r+1}^{r+1}$ is the initial value of $V(t)$ defined by (12), $K=\min(\delta\varepsilon,\lambda\varepsilon)$, $0\le\lambda\le1$, $0<r<1$, and $\varepsilon$ is a positive real number.

Proof. Define the following Lyapunov function:
$$V(t)=V_1(t)+V_2(t), \qquad (12)$$
where
$$V_1(t)=\frac{1}{r+1}\|x-x^{+}+\tilde{x}\|_{r+1}^{r+1},\qquad V_2(t)=\frac{1}{r+1}\|y-y^{+}-c(x^{+})\|_{r+1}^{r+1},$$
and $\|x\|_{r+1}$ denotes the $(r+1)$-norm of a vector, that is, $\|x\|_{r+1}=(\sum_{i=1}^{n}|x_i|^{r+1})^{1/(r+1)}$ for $x=[x_1,x_2,\ldots,x_n]^T$. We calculate the time derivative of $V(t)$ along the trajectories of system (9):
$$\dot{V}(t)=\dot{V}_1(t)+\dot{V}_2(t). \qquad (13)$$
Note that
$$\dot{V}_1(t)=\Big(\frac{dx}{dt}\Big)^{T}\Big(\frac{d(x-x^{+}+\tilde{x})}{dx}\Big)^{T}\mathrm{sign}(x-x^{+}+\tilde{x})\,|x-x^{+}+\tilde{x}|^{r}. \qquad (14)$$
By Lemma 2, we have
$$\frac{dx_i^{+}}{dx_i}=\begin{cases}1 & \text{if } x_i>0,\\ [0,1] & \text{if } x_i=0,\\ 0 & \text{if } x_i<0.\end{cases} \qquad (15)$$
Therefore,
$$\frac{d(x-x^{+}+\tilde{x})}{dx}=I-\frac{dx^{+}}{dx}+\frac{d\tilde{x}}{dx}\ge\nabla_x^2L(x,y). \qquad (16)$$
From (14)–(16), we have
$$\dot{V}_1(t)\le\dot{x}^{T}\nabla_x^2L(x,y)\,\mathrm{sign}(x-x^{+}+\tilde{x})|x-x^{+}+\tilde{x}|^{r}=-\varepsilon\,\mathcal{F}(x-x^{+}+\tilde{x})^{T}\nabla_x^2L(x,y)\,\mathrm{sign}(x-x^{+}+\tilde{x})|x-x^{+}+\tilde{x}|^{r}$$
$$\le-\varepsilon\delta\,\big\|\mathrm{sign}(x-x^{+}+\tilde{x})|x-x^{+}+\tilde{x}|^{r}\big\|^{2}=-\delta\varepsilon\|x-x^{+}+\tilde{x}\|_{2r}^{2r}\le-\delta\varepsilon(r+1)^{2r/(r+1)}V_1(t)^{2r/(r+1)}.$$
From (9), it also follows that
$$\dot{V}_2(t)=\Big(\frac{dy}{dt}\Big)^{T}\Big(\frac{d(y-y^{+}-c(x^{+}))}{dy}\Big)^{T}\mathrm{sign}(y-y^{+}-c(x^{+}))|y-y^{+}-c(x^{+})|^{r}=\lambda\,\dot{y}^{T}\mathrm{sign}(y-y^{+}-c(x^{+}))|y-y^{+}-c(x^{+})|^{r}$$
$$=-\lambda\varepsilon\,\mathcal{F}(y-y^{+}-c(x^{+}))^{T}\mathrm{sign}(y-y^{+}-c(x^{+}))|y-y^{+}-c(x^{+})|^{r}=-\lambda\varepsilon\,\big\|\mathrm{sign}(y-y^{+}-c(x^{+}))|y-y^{+}-c(x^{+})|^{r}\big\|^{2}$$
$$=-\lambda\varepsilon\|y-y^{+}-c(x^{+})\|_{2r}^{2r}\le-\lambda\varepsilon(r+1)^{2r/(r+1)}V_2(t)^{2r/(r+1)}.$$
Let $K=\min\{\delta\varepsilon,\lambda\varepsilon\}$; then
$$\dot{V}(t)=\dot{V}_1(t)+\dot{V}_2(t)\le-K(r+1)^{2r/(r+1)}\big[V_1(t)^{2r/(r+1)}+V_2(t)^{2r/(r+1)}\big]\le-K(r+1)^{2r/(r+1)}V(t)^{2r/(r+1)}. \qquad (17)$$
By Lemma 1, the recurrent neural network (9) is finite-time stable and the upper bound of the convergence time is given by (11). The proof is completed. □

Theorem 2. $x(t)$ is an optimal solution to the nonlinear optimization problem (5) if $z^*$ is an equilibrium point of (9).

Proof. Since $z^*$ is an equilibrium point of (9), $z^*$ satisfies $-\varepsilon\mathcal{F}(z^*-(z^*)^{+}+\tilde{z}^*)=0$. Then
$$z^*-(z^*)^{+}+\tilde{z}^*=0,$$
Table 1. Comparison of (9) with other neural network models in solving problem (5).

Network   Error      Time       CPP   NCPP
[6]       Non-zero   Infinite   √
[8]       Zero       Infinite   √
[11]      Zero       Infinite   √
[33]      Zero       Infinite   √
[4]       Zero       Infinite   √
[16]      Zero       Infinite   √     √
(9)       Zero       Finite     √     √
and
$$\begin{cases}y^*\ge 0,\;c(x^*)\le 0,\;x^*\ge 0,\\ \nabla f(x^*)+\nabla c(x^*)y^*\ge 0,\;(y^*)^{T}c(x^*)=0.\end{cases}$$
Therefore,
$$\begin{cases}(x^*-(\nabla f(x^*)+\nabla c(x^*)y^*))^{+}-x^*=0,\\ (y^*+c(x^*))^{+}-y^*=0.\end{cases}$$
Thus, $z^*$ is a KKT point of the NOP (5), and therefore $x^*$ is a local optimal solution to the NOP [1]. Note that the set $S=\{z\in\mathbb{R}^{n+m}\mid V(z)\le V(z_0)\}$ is bounded and the solution trajectory $\{z(t)\}\subset S$ is also bounded. By LaSalle's invariance principle, $z(t)$ converges to $\Theta$, the largest invariant subset of
$$\Theta=\Big\{\hat{z}\in S\;\Big|\;\frac{dV}{dt}=0\Big\}.$$
That is to say, $\lim_{t\to\infty}\mathrm{dist}(z(t),\Theta)=0$. Thus,
$$\lim_{t\to\infty}\mathrm{dist}(x^{+},X^*)=0,$$
where $X^*=\{x^*\}$. Therefore, the output trajectory of the proposed neural network globally converges to the set of minimizers of the NOP. The proof is completed. □

Fig. 2. System representation.

Remark 1. Compared with other methods, the proposed neural network is finite-time stable at a KKT point for any initial point $z_0$. Its output globally converges to the optimal solution of the NOP (5) in a finite time interval if $\nabla_x^2L(x,y)>\delta I$ on $\mathbb{R}_+^n\times S_0$ for a positive real number $\delta$. Moreover, from the inequality (11), the convergence time depends on the parameters $r$ and $\varepsilon$.

Remark 2. Several other neural network models can be used to solve problem (5) [6,8,11,33,4,16]. Table 1 summarizes the comparison of these models with our model (9).

Remark 3. In [19–21], neural networks have been presented to solve nonsmooth optimization problems. In this paper, the objective function is assumed to be twice differentiable; therefore, it is intractable to extend the proposed neural network to those problems. Future work should investigate the design of finite-time stable neural networks for nonsmooth problems.
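The following sketch shows one way the dynamics (9)–(10) could be simulated with a simple forward-Euler scheme. It is our own illustration rather than the authors' code: the step size, iteration count and helper names (`ft_activation`, `solve_nop`) are placeholders, and the toy problem at the end is introduced only to exercise the routine.

```python
import numpy as np

def ft_activation(v, r=0.5):
    # Activation (10): F(v) = sign(v)*|v|^r, applied elementwise.
    return np.sign(v) * np.abs(v) ** r

def solve_nop(grad_f, c, grad_c, n, m, eps=50.0, r=0.5, z0=None, dt=1e-4, steps=20000):
    """Forward-Euler integration of the state equation (9a),
    z' = -eps*F(z - z^+ + z~), with output (9b) given by x(t) = x^+.

    grad_f(x): gradient of the objective, shape (n,)
    c(x):      constraint values c_1(x)..c_m(x), shape (m,)
    grad_c(x): matrix whose columns are the gradients of c_i, shape (n, m)
    """
    z = np.zeros(n + m) if z0 is None else np.asarray(z0, dtype=float)
    for _ in range(steps):
        zp = np.maximum(z, 0.0)                       # z^+ = max{0, z}
        xp, yp = zp[:n], zp[n:]
        z_tilde = np.concatenate((grad_f(xp) + grad_c(xp) @ yp,   # x~
                                  -c(xp)))                        # y~
        z -= eps * ft_activation(z - zp + z_tilde, r) * dt        # state equation (9a)
    return np.maximum(z[:n], 0.0)                                 # output equation (9b)

# Toy convex problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 <= 0,  x >= 0.
grad_f = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
c      = lambda x: np.array([x[0] + x[1] - 2.0])
grad_c = lambda x: np.array([[1.0], [1.0]])
print(solve_nop(grad_f, c, grad_c, n=2, m=1, z0=np.ones(3)))  # should approach [0.5, 1.5]
```

At an equilibrium the projected state $x^{+}$ satisfies the projection equations (7), so the printed output should approximate the KKT point of the toy problem.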
4. Application to a hydrothermal scheduling problem

In this section, our recurrent neural network is applied to a classic hydrothermal scheduling problem. The objective of short-term hydrothermal scheduling is to find the optimal water volume of each reservoir and the water discharge of each hydro plant in a given time interval such that the total cost of thermal generation is minimized by making full use of the hydro resource, while diverse hydraulic and thermal constraints are satisfied. To better describe the problem, a test system is used. We set up a multi-reservoir cascaded hydraulic power system. The hydropower generation is a function of the water discharge rate and the storage volume, and the water transportation delay between reservoirs is also taken into account. In a hydrothermal power system the marginal cost of hydroelectric generation is negligible. The short-term scheduler therefore allocates water for hydro generation among the time intervals of the scheduling horizon so as to minimize the fuel cost of thermal generation while trying to satisfy the constraints. The scheduling time interval is one hour. A general representation of the ith reservoir is shown in Fig. 2. The classic model of the hydrothermal scheduling problem, taken from [34], can be written as the following nonlinear optimization problem:
$$\text{minimize}\quad f(x)=\sum_{t=1}^{T_h}\Big[\alpha_i+\beta_i\Big(P_D^{t}-\sum_{i=1}^{N_h}P_{hi}^{t}\Big)+\gamma_i\Big(P_D^{t}-\sum_{i=1}^{N_h}P_{hi}^{t}\Big)^{2}\Big], \qquad (18a)$$
subject to
$$P_s^{\min}\le P_D^{t}-\sum_{i=1}^{N_h}P_{hi}^{t}\le P_s^{\max}, \qquad (18b)$$
$$P_{hi}^{\min}\le P_{hi}^{t}\le P_{hi}^{\max}, \qquad (18c)$$
$$Q_i^{\min}\le Q_i^{t}\le Q_i^{\max}, \qquad (18d)$$
$$V_i^{\min}\le V_i^{t}\le V_i^{\max}, \qquad (18e)$$
$$V_i^{t}=V_i^{t-1}+M\Big[I_i^{t}-Q_i^{t}-S_i^{t}+\sum_{m=1,\;t-\tau_{m,i}\ge 0}^{N_u}\big(Q_m^{t-\tau_{m,i}}+S_m^{t-\tau_{m,i}}\big)\Big], \qquad (18f)$$
$$V_i^{0}=V_i^{B},\qquad V_i^{T_h}=V_i^{E}, \qquad (18g)$$
where $f(x)$ denotes the total fuel cost of the thermal plant, $\alpha_i$, $\beta_i$ and $\gamma_i$ are the generation cost coefficients of the ith thermal plant, $P_{si}^{t}$ and $P_{hi}^{t}$ are the power generation of thermal plant $i$ and the power
generation of hydro plant $i$ at time interval $t$, respectively, $P_D^{t}$ is the system load demand, $V_i^{t}$ and $Q_i^{t}$ denote the water volume of reservoir $i$ at the end of time interval $t$ and the water discharge of hydro plant $i$ at time interval $t$, respectively, $S_i^{t}$ is the water spillage of hydro plant $i$ at time interval $t$, $I_i^{t}$ is the natural inflow into reservoir $i$ at time interval $t$, $N_u$ is the number of upstream hydro plants directly above the ith hydro plant, $\tau_{m,i}$ is the water transport delay from reservoir $m$ to reservoir $i$, and $M$ is the conversion factor of water discharge into stored water. For this problem, $N_s$ and $N_h$ denote the numbers of thermal and hydro plants, respectively; $T_h$ and $t$ denote the total time horizon and the time index, respectively.

Note that (18b)–(18g) can also be expressed as follows:
$$P_s^{\min}-\Big(P_D^{t}-\sum_{i=1}^{N_h}P_{hi}^{t}\Big)\le 0,\qquad \Big(P_D^{t}-\sum_{i=1}^{N_h}P_{hi}^{t}\Big)-P_s^{\max}\le 0,$$
$$P_{hi}^{\min}-P_{hi}^{t}\le 0,\qquad P_{hi}^{t}-P_{hi}^{\max}\le 0,$$
$$Q_i^{\min}-Q_i^{t}\le 0,\qquad Q_i^{t}-Q_i^{\max}\le 0,$$
$$V_i^{\min}-V_i^{t}\le 0,\qquad V_i^{t}-V_i^{\max}\le 0,$$
$$V_i^{t}-\Big(V_i^{t-1}+M\Big[I_i^{t}-Q_i^{t}-S_i^{t}+\sum_{m=1,\;t-\tau_{m,i}\ge 0}^{N_u}\big(Q_m^{t-\tau_{m,i}}+S_m^{t-\tau_{m,i}}\big)\Big]\Big)\le 0,$$
$$\Big(V_i^{t-1}+M\Big[I_i^{t}-Q_i^{t}-S_i^{t}+\sum_{m=1,\;t-\tau_{m,i}\ge 0}^{N_u}\big(Q_m^{t-\tau_{m,i}}+S_m^{t-\tau_{m,i}}\big)\Big]\Big)-V_i^{t}\le 0.$$
Then $c(x)$ in (19) collects all of these constraint functions over $t=1,\ldots,T_h$ and $i=1,\ldots,N_h$:
$$c(x)=\Big[P_s^{\min}-\Big(P_D^{1}-\sum_{i=1}^{N_h}P_{hi}^{1}\Big),\ldots,P_s^{\min}-\Big(P_D^{T_h}-\sum_{i=1}^{N_h}P_{hi}^{T_h}\Big),\;\Big(P_D^{1}-\sum_{i=1}^{N_h}P_{hi}^{1}\Big)-P_s^{\max},\ldots,\;P_{h1}^{\min}-P_{h1}^{1},\ldots,\;Q_1^{\min}-Q_1^{1},\ldots,\;V_1^{\min}-V_1^{1},\ldots,$$
$$V_1^{1}-\Big(V_1^{0}+M\Big[I_1^{1}-Q_1^{1}-S_1^{1}+\sum_{m}\big(Q_m^{1-\tau_{m,1}}+S_m^{1-\tau_{m,1}}\big)\Big]\Big),\ldots,\;\Big(V_{N_h}^{T_h-1}+M\Big[I_{N_h}^{T_h}-Q_{N_h}^{T_h}-S_{N_h}^{T_h}+\sum_{m}\big(Q_m^{T_h-\tau_{m,N_h}}+S_m^{T_h-\tau_{m,N_h}}\big)\Big]\Big)-V_{N_h}^{T_h}\Big]^{T}. \qquad (19)$$
In addition, we have
$$P_{hi}^{t}=C_{1i}(V_i^{t})^{2}+C_{2i}(Q_i^{t})^{2}+C_{3i}V_i^{t}Q_i^{t}+C_{4i}V_i^{t}+C_{5i}Q_i^{t}+C_{6i},$$
where $C_{1i}$, $C_{2i}$, $C_{3i}$, $C_{4i}$, $C_{5i}$ and $C_{6i}$ are the hydro plant power generation coefficients [34]. Then the objective of the optimization problem (18) is a nonlinear function of $V_i^{t}$ and $Q_i^{t}$. We can use the recurrent neural network proposed in Section 3 to solve the optimization problem (18), with
$$a_{ii}=2\gamma\big(2C_{1j}V_j^{t}+C_{3j}Q_j^{t}+C_{4j}\big)^{2}-2C_{1j}\Big[\beta+2\gamma\Big(P_D^{t}-\sum_{k=1}^{4}P_{hk}^{t}\Big)\Big], \qquad (20)$$
$$b_{ii}=2\gamma\big(2C_{2j}Q_j^{t}+C_{3j}V_j^{t}+C_{5j}\big)^{2}-2C_{2j}\Big[\beta+2\gamma\Big(P_D^{t}-\sum_{k=1}^{4}P_{hk}^{t}\Big)\Big], \qquad (21)$$
for $i=24(j-1)+t$, $t=1,\ldots,24$, $j=1,\ldots,N_h$ (so that $i$ runs over $1,\ldots,24N_h$),
and
$$d_{ii}=2\gamma\big(2C_{1j}V_j^{t}+C_{3j}Q_j^{t}+C_{4j}\big)\big(2C_{2j}Q_j^{t}+C_{3j}V_j^{t}+C_{5j}\big)-C_{3j}\Big[\beta+2\gamma\Big(P_D^{t}-\sum_{k=1}^{4}P_{hk}^{t}\Big)\Big], \qquad (22)$$
with the same indexing $i=24(j-1)+t$, $t=1,\ldots,24$, $j=1,\ldots,N_h$.

Theorem 3. The optimal solution to the hydrothermal scheduling problem (18) can be obtained in finite time by using the proposed neural network (9) with any initial point $z_0$ if $\nabla_x^2L(x,y)$ is positive definite, that is, there exists a constant $\delta>0$ such that $\nabla_x^2L(x,y)>\delta I$, where
$$\nabla_x^2L(x,y)=\nabla^2f(x)+y^{T}\nabla^2c(x)=\begin{pmatrix}A_1 & D_1\\ D_1^{T} & B_1\end{pmatrix}+\begin{pmatrix}A_2 & D_2\\ D_2^{T} & B_2\end{pmatrix}y, \qquad (23)$$
$f(x)$ is given by (18a),
$$A_1=\mathrm{diag}(a_{11},a_{22},\ldots,a_{24N_h,24N_h}),\quad B_1=\mathrm{diag}(b_{11},\ldots,b_{24N_h,24N_h}),\quad D_1=\mathrm{diag}(d_{11},\ldots,d_{24N_h,24N_h}),$$
$$A_2=2\,\mathrm{diag}(C_{11}I_{24\times24},C_{12}I_{24\times24},\ldots,C_{1N_h}I_{24\times24}),\quad B_2=2\,\mathrm{diag}(C_{21}I_{24\times24},\ldots,C_{2N_h}I_{24\times24}),\quad D_2=\mathrm{diag}(C_{31}I_{24\times24},\ldots,C_{3N_h}I_{24\times24}),$$
and $c(x)$, $a_{ii}$, $b_{ii}$ and $d_{ii}$ are given by (19)–(22), respectively.

Proof. From (19), it follows that $y^{T}\nabla^2c(x)=\begin{pmatrix}A_2 & D_2\\ D_2^{T} & B_2\end{pmatrix}y$, and from (18a) that $\nabla^2f(x)=\begin{pmatrix}A_1 & D_1\\ D_1^{T} & B_1\end{pmatrix}$, where $a_{ii}$, $b_{ii}$ and $d_{ii}$ are given by (20)–(22), respectively. Since $\nabla_x^2L(x,y)>\delta I$, the problem (18) can be solved by the proposed neural network (9), and by Theorem 1 the optimal solution to (18) is obtained in finite time. □

Table 2. Comparison of the performance of different methods with ours (ε = 50, r = 0.5) in solving problem (24).

Method   z0            Solution              Time (s)   Error
[4]      30·1_{5×1}    (2.0191, 3.9538)      1.046      0.05
[12]     30·1_{5×1}    (1.9294, 3.8999)      4.953      0.1225
[16]     30·1_{5×1}    (2.0008, 4.0003)      0.172      8.54e−5
Ours     30·1_{5×1}    (2.00000, 4.00000)    0.108      <1e−5
Num      30·1_{5×1}    (2.0187, 3.9537)      0.146      0.0499
[12]     30·1_{5×1}    (1.9982, 3.9993)      0.156      0.0019
[16]     30·1_{5×1}    (1.99997, 3.99999)    0.141      3.16e−5
Ours     30·1_{5×1}    (2.00000, 4.00000)    0.121      <1e−5
Num      300·1_{5×1}   (2.0187, 3.9537)      1.953      0.0499
[12]     300·1_{5×1}   (1.9994, 3.9994)      0.452      8.4e−4
[16]     300·1_{5×1}   (2.0000, 4.0003)      0.14       3e−4
Ours     300·1_{5×1}   (2.00000, 4.00000)    0.134      <1e−5

Fig. 3. The output state trajectory of (9) by solving Example 1 (ε = 50, r = 0.5, z0 = 30·1_{5×1}).

Fig. 4. The trajectory of y in Example 2.
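To make the structure of (19)–(23) more tangible, the sketch below (our own illustration; the helper names are hypothetical and the numerical values are only examples taken from Tables 4 and 5 in Section 5.2) evaluates the hydro generation function $P_{hi}^{t}$ and the water-balance residual of (18f), the two ingredients from which the entries of $c(x)$ are built.

```python
# Generation coefficients C1..C6 of hydro plant 1 (values as in Table 4).
C1, C2, C3, C4, C5, C6 = 0.001, 0.1, 0.01, 0.40, 4.0, 30.0

def hydro_power(V, Q):
    """P_hi^t = C1*V^2 + C2*Q^2 + C3*V*Q + C4*V + C5*Q + C6."""
    return C1 * V**2 + C2 * Q**2 + C3 * V * Q + C4 * V + C5 * Q + C6

def water_balance_residual(V_t, V_prev, I_t, Q_t, S_t, upstream, M):
    """Residual of (18f): V_i^t - (V_i^{t-1} + M*(I - Q - S + delayed upstream releases)).
    `upstream` lists (Q_m, S_m) values already shifted by the delays tau_{m,i}."""
    return V_t - (V_prev + M * (I_t - Q_t - S_t + sum(q + s for q, s in upstream)))

# Illustrative evaluation for plant 1 at one hour (numbers are placeholders, not the test data).
print(hydro_power(V=100.0, Q=10.0))
print(water_balance_residual(V_t=101.0, V_prev=100.0, I_t=2.0, Q_t=10.0, S_t=0.0,
                             upstream=[(8.0, 0.0)], M=1.0))
```

Both quantities appear (with the appropriate sign) as components of $c(x)$ in (19), and differentiating $P_{hi}^{t}$ twice with respect to $V_i^{t}$ and $Q_i^{t}$ gives exactly the constant blocks $A_2$, $B_2$ and $D_2$ of (23).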
Remark 4. By Theorem 3, our method obtains a minimum solution to (18) in finite time, whereas other methods obtain only an approximate solution. Consequently, the thermal plant cost obtained by the proposed neural network is smaller than that obtained by other methods.

5. Numerical simulations

In this part, three illustrative examples and an engineering project for hydrothermal scheduling are used to show the
effectiveness of our methods. The simulations are performed with the programming language Matlab 7.10.0 on a desktop computer with the Intel(R) Core(TM) G640 Duo CPU at 2.80 GHz, 1.59 GHz and 1.90 GB of RAM.

5.1. Illustrative examples

Example 1. A convex programming problem:
$$\text{minimize}\quad f(x)=2x_1^{2}+x_2^{2}+x_1x_2-12x_1-10x_2, \qquad (24a)$$
$$\text{subject to}\quad c_1(x)=x_1^{2}+(x_2-5)^{2}\le 64, \qquad (24b)$$
$$c_2(x)=(x_1+3)^{2}+(x_2-1)^{2}\le 36, \qquad (24c)$$
$$c_3(x)=(x_1-3)^{2}+(x_2-1)^{2}\le 36,\qquad x_1,x_2\ge 0. \qquad (24d)$$
It is clear that $f(x)$ and the $c_i(x)$ are all convex and the problem (24) has a unique solution $x^*=[2,4]^T$. When the proposed network (9) is used to solve the problem (24), the output state trajectory of (9) always converges to $x^*$ from any initial point. Let ε = 50, r = 0.5. Table 2 shows the simulation results obtained by different neural networks or numerical (Num) methods with different initial points. From Table 2, it is observed that our proposed neural network not only converges to the exact solution in finite time, but also has a smaller error norm. Furthermore, for $z_0=30\cdot 1_{5\times1}$ and K = 1, we have V(0) = V1(0) + V2(0) = 39.89 + 311.97 = 351.86, and by computing (11) we obtain t1 ≤ 1.233. Fig. 3 gives the output state trajectory of (9) by solving Example 1; the convergence time is about 0.108 s.

Example 2. A non-convex constraint programming problem:
$$\text{minimize}\quad f(x)=x_1^{2}+x_2^{2}, \qquad (25a)$$
$$\text{subject to}\quad c(x)=1-0.3x_1x_2\le 0, \qquad (25b)$$
where $2\le x_1\le 5$ and $1\le x_2\le 2$.

The objective function is convex but $c(x)$ is non-convex. $x^*=[2,1.6666]^T$ is the optimal solution to this problem. Let ε = 50, r = 0.5, $z_0=1_{3\times1}$. From Fig. 4, we have that
$$\nabla_x^2L(x,y)=\begin{pmatrix}2 & 0\\ 0 & 2\end{pmatrix}+\begin{pmatrix}0 & -0.3\\ -0.3 & 0\end{pmatrix}y>0.$$
So, the proposed neural network (9) can be used to solve this problem by Theorems 1 and 2. Fig. 5 shows the simulation results; the convergence time of 0.18 s obtained by our method is smaller than that obtained by the method in [16]. Since $c(x)$ is non-convex, the optimal solution $x^*$ cannot be obtained by the methods in [4,12]. However, the neural network we present converges to the optimal solution in finite time.

Fig. 5. The error trajectory ‖x − x*‖ by solving Example 2 (ε = 50, r = 0.5, z0 = 1_{3×1}).

Example 3. A non-convex objective function programming problem:
$$\text{minimize}\quad f(x)=-x_1x_2, \qquad (26a)$$
$$\text{subject to}\quad x_1+4x_2-1\le 0, \qquad (26b)$$
$$20(4x_1^{2}+5x_2^{2}+1.5x_1x_2)-23\le 0, \qquad (26c)$$
$$x\ge 0.$$
For this problem, $f(x)$ is non-convex and the optimal solution is $x^*=[0.51741,0.0702]^T$. We select ε = 50, r = 0.5, $z_0=1_{4\times1}$. From Fig. 6, we have that
$$\nabla_x^2L(x,y)=\begin{pmatrix}0 & -1\\ -1 & 0\end{pmatrix}+20\begin{pmatrix}8 & 1.5\\ 1.5 & 10\end{pmatrix}y>0.$$
Therefore, our neural network (9) can be used to solve this problem (26) by Theorems 1 and 2. Fig. 7 shows the simulation results.

Fig. 6. The trajectory of y in Example 3.

Fig. 7. The error trajectory ‖x − x*‖ by solving Example 3 (ε = 50, r = 0.5, z0 = 1_{4×1}).
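As an independent sanity check of the reported optima (our own verification sketch, not part of the original experiments), the examples can be cross-checked with a conventional constrained solver; the snippet below does this for Example 1, and the same pattern applies to Examples 2 and 3.

```python
import numpy as np
from scipy.optimize import minimize

# Example 1, problem (24): convex objective with three quadratic inequality constraints.
f = lambda x: 2*x[0]**2 + x[1]**2 + x[0]*x[1] - 12*x[0] - 10*x[1]
cons = [  # SLSQP expects g(x) >= 0, so each c_i(x) <= b becomes b - c_i(x) >= 0
    {"type": "ineq", "fun": lambda x: 64 - (x[0]**2 + (x[1] - 5)**2)},
    {"type": "ineq", "fun": lambda x: 36 - ((x[0] + 3)**2 + (x[1] - 1)**2)},
    {"type": "ineq", "fun": lambda x: 36 - ((x[0] - 3)**2 + (x[1] - 1)**2)},
]
res = minimize(f, x0=np.array([1.0, 1.0]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=cons)
print(res.x)  # expected to be close to the reported optimum x* = [2, 4]
```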
Fig. 8. The schematic diagram of a classic hydrothermal scheduling problem.

Table 3. Load demand (unit: MW).

Time  1    2    3    4    5    6    7    8    9    10   11   12
Load  190  170  170  190  190  210  230  250  270  310  350  310
Time  13   14   15   16   17   18   19   20   21   22   23   24
Load  350  350  310  290  270  250  230  210  210  210  190  190

Table 4. Hydro plant power generation coefficients.

Plant i   C1i     C2i   C3i    C4i    C5i   C6i
1         0.001   0.1   0.01   0.40   4.0   30
2         0.001   0.1   0.01   0.38   3.5   30
3         0.001   0.1   0.01   0.30   3.0   30
4         0.001   0.1   0.01   0.38   3.8   30

Table 5. Characteristics of the hydro plants.

Plant i   Vi_min   Vi_max   Vi_1   Vi_24   Qi_min   Qi_max
1         80       150      100    120     5        15
2         60       120      80     70      6        15
3         100      240      170    170     10       30
4         70       160      120    120     13       25

5.2. A classic hydrothermal scheduling problem in an engineering project

In this section, our method is applied to a classic hydrothermal scheduling problem. Its schematic diagram is shown in Fig. 8. There are four cascaded hydro plants along the river and an equivalent thermal plant. The scheduling period is one day with 1 h intervals. We select the same parameters as in [34], which are given in Tables 3–5. The other parameters are $T_h=24$, $N_h=4$, $\alpha_i=1000$, $\beta_i=10$, $\gamma_i=0.5$, $M=[10{,}000,\;8000,\;1000,\;0]^{T}\,\mathrm{m}^3/\mathrm{h}$, $\tau_{1,3}=1$ h, $\tau_{2,3}=2$ h and $\tau_{3,4}=2$ h. In the following simulation, we set ε = 50, r = 0.5 and $z_0=1_{384\times1}$. For this problem, we have $A_2=0.002I_{96\times96}$, $B_2=0.2I_{96\times96}$ and $D_2=0.01I_{96\times96}$ in (23), and the smallest eigenvalue of $\begin{pmatrix}A_2 & D_2\\ D_2^{T} & B_2\end{pmatrix}$ is 0.001496.

Fig. 9. The trajectory of y_min in the hydrothermal scheduling problem.
From Fig. 9, we can obtain that
$$\nabla_x^2L(x,y)=\begin{pmatrix}A_1 & D_1\\ D_1^{T} & B_1\end{pmatrix}+\begin{pmatrix}A_2 & D_2\\ D_2^{T} & B_2\end{pmatrix}y>0.$$
Then the proposed recurrent neural network (9) can be used to solve problem (18) by Theorem 3. From the simulation results, we obtain the optimal $V_i^{t}$ and $Q_i^{t}$, which are shown in Figs. 10 and 11, respectively. In addition, Table 6 shows that our method achieves a better performance than the others with respect to the total thermal plant cost.

Fig. 10. Hourly hydro plant storage.

Fig. 11. Hourly hydro plant discharge.

Table 6. Comparison of the thermal plant cost with other methods.

Method                                          Cost ($)
Augmented Lagrange in [35]                      154739.0
Two-phase neural network in [36]                154808.5
Chaotic hybrid differential evolution in [34]   154338.1
Our method (9)                                  152635.7

6. Conclusion

In this paper, finite-time recurrent neural networks were proposed and used to solve nonlinearly constrained optimization problems. Firstly, the definition of finite-time stability and finite-time convergence criteria were reviewed. Secondly, a finite-time recurrent neural network was proposed to solve the nonlinear optimization problem. It was shown that the proposed recurrent neural network is globally finite-time stable under the condition that the Hessian matrix of the associated Lagrangian function is positive definite, and that its output converges to a minimum solution globally and in finite time, which means that the actual minimum solution can be obtained within a finite time period. In addition, our neural network was applied to a hydrothermal scheduling problem; compared with other methods, a lower-consumption scheme was obtained within a finite time period. Finally, numerical simulations demonstrated the superiority and effectiveness of the proposed neural networks in solving nonlinear optimization problems with inequality constraints. Recently, research on memristor-based recurrent neural networks and complex dynamical networks has attracted considerable attention [37–40]. It is worth studying how such recurrent neural networks can be used to solve nonlinear optimization problems.

Acknowledgments

This work was supported by the National Science Foundation of China (Nos. 61374028, 51177088, 61273183), the National Science Foundation of Hubei Province (2013CFA050), and the Scientific Innovation Team Project of Hubei Provincial Department of Education (T201504).

References
[1] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd ed., Wiley, New York, 1993. [2] F. Facchinei, A. Fischer, C. Kanzow, A simply constrained optimization reformulation of KKT systems arising from variational inequalities, Appl. Math. Optim. 40 (1) (1999) 19–37. [3] A. Wachter, L.T. Biegler, Line search filter methods for nonlinear programming: motivation and global convergence, SIAM J. Optim. 16 (2005) 1–31. [4] M.V. Solodov, P. Tseng, Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim. 2 (1996) 1814–1830. [5] D.W. Tank, J.J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. CAS-33 (5) (1986) 533–541. [6] M.P. Kennedy, L.O. Chua, Neural networks for nonlinear programming, IEEE Trans. Circuits Syst. CAS-35 (5) (1988) 554–562. [7] W.E. Lillo, M.H. Loh, S. Hui, S.H. Zak, On solving constrained optimization problems with neural networks: a penalty method approach, IEEE Trans. Neural Netw. 4 (6) (1993) 931–939. [8] A. Rodriguez-Vazquez, R. Dominguez-Castro, A. Rueda, J.L. Huertas, E. SanchezSinencio, Nonlinear switched-capacitor neural networks for optimization problems, IEEE Trans. Circuits Syst. 37 (3) (1990) 384–397. [9] A. Bouzerdoum, T.R. Pattison, Neural network for quadratic optimization with bound constraints, IEEE Trans. Neural Netw. 4 (2) (1993) 293–304. [10] S. Zhang, A.G. Constantinides, Lagrange programming neural networks, IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 39 (7) (1992) 441–452. [11] Y.S. Xia, H. Leung, J. Wang, A projection neural network and its application to constrained optimization problems, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 49 (4) (2002) 447–458. [12] Y.S. Xia, An extended projection neural network for constrained optimization, Neural Comput. 16 (4) (2004) 863–883. [13] Y.S. Xia, J. Wang, A recurrent neural network for solving nonlinear optimization subject to nonlinear inequality constraints, IEEE Trans. Circuits Syst. I: Regul. Pap. 51 (7) (2004) 1385–1394. [14] Y.S. Xia, J. Wang, A recurrent neural network for solving nonlinear convex programs subject to linear constraints, IEEE Trans. Neural Netw. 16 (2) (2005) 379–386. [15] X. Hu, J. Wang, Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network, IEEE Trans. Neural Netw. 6 (2006) 1487–1499. [16] Y.S. Xia, G. Feng, J. Wang, A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints, IEEE Trans. Neural Netw. 19 (8) (2008) 1340–1353. [17] D. Beyer, R. Ogier, Tabu learning: a neural network search method for solving nonconvex optimization problems, in: IEEE International Joint Conference on Neural Networks, vol. 2, 2000, pp. 953–961.
[18] C.Y. Sun, C.B. Feng, Neural networks for nonconvex nonlinear programming problems: a switching control approach, in: Lecture Notes in Computer Science, vol. 3496, 2005, pp. 694–699. [19] L. Cheng, Z.G. Hou, Y. Lin, M. Tan, W.C. Zhang, F.X. Wu, Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks, IEEE Trans. Neural Netw. 22 (5) (2011) 714–726. [20] M. Forti, P. Nistri, M. Quincampoix, Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality, IEEE Trans. Neural Netw. 17 (6) (2006) 1471–1486. [21] W. Bian, X. Xue, Subgradient-based neural networks for nonsmooth nonconvex optimization problems, IEEE Trans. Neural Netw. 20 (6) (2009) 1024–1038. [22] L. Cheng, Z.G. Hou, N. Homma, M. Tan, M.M. Gupta, Solving convex optimization problems using recurrent neural networks in finite time, in: International Joint Conference on Neural Networks, IJCNN 2009, Atlanta, Georgia, USA, 2009, pp. 539–543. [23] S. Li, Y.M. Li, Z. Wang, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application, Neural Netw. 39 (2013) 27–39. [24] P. Miao, Y.J. Shen, X.H. Xia, Finite time dual neural networks with a tunable activation function for solving quadratic programming problems and its application, Neurocomputing 143 (2014) 80–89. [25] Y.J. Shen, P. Miao, Y.H. Huang, Y. Shen, Finite-time stability and its application for solving time-varying Sylvester equation by recurrent neural network, Neural Process. Lett. 42 (2015) 763–784. [26] P. Miao, Y.J. Shen, Y.H. Huang, Y.W. Wang, Solving time-varying quadratic programs based on finite-time Zhang neural networks and their application to robot tracking, Neural Comput. Appl. 26 (3) (2015) 693–703. [27] S. Bhat, D. Bernstein, Finite-time stability of continuous autonomous systems, SIAM J. Control Optim. 38 (2000) 751–766. [28] Y.J. Shen, X.H. Xia, Semi-global finite-time observers for nonlinear systems, Automatica 44 (2008) 3152–3156. [29] F.H. Clarke, Optimization and Non-Smooth Analysis, Wiley, New York, 1969. [30] D. Kinzderlehrer, G. Stampcchia, An Introduction to Variational Inequalities and Their Applications, Academic, New York, 1980. [31] Y.S. Xia, ODE methods for solving convex programming problems with bounded variables, Chin. J. Numer. Math. Appl. 4 (1996) 402–408. [32] J.K. Hale, Ordinary Differential Equations, Pure and Applied Mathematics XXI, second ed., Krieger, Malabar, FL, 1980. [33] X.B. Gao, A neural network for solving nonlinear convex programming problems, IEEE Trans. Neural Netw. 15 (3) (2003) 613–621. [34] X.H. Yuan, B. Cao, B. Yang, Y.B. Yuan, Hydrothermal scheduling using chaotic hybrid differential evolution, Energy Convers. Manag. 49 (2008) 3627–3633. [35] X. Guan, B. Peter, Nonlinear approximation method in Lagrangian relaxation based algorithms for hydrothermal scheduling, IEEE Trans. Power Syst. 10 (2) (1995) 772–778. [36] R. Naresh, J. Sharma, Two-phase neural network based solution technique for short term hydrothermal scheduling, IEE Proc. Gener. Transm. Distrib. 146 (6) (1999) 657–663. [37] G. Zhang, Y. Shen, Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control, IEEE Trans. Neural Netw. Learn. Syst. 26 (7) (2015) 1431–1441. [38] Y.W. Wang, T. Bian, J. Xiao, C. Wen, Global synchronization of complex dynamical networks through digital communication with limited data rate, IEEE Trans. 
Neural Netw. Learn. Syst. 26 (10) (2015) 2487–2499. [39] Y. Zhang, Y. Shen, X. Wang, L. Cao, A novel design for memristor-based logic switch and crossbar circuits, IEEE Trans. Circuits Syst. I, Regul. Pap. 62 (5) (2015) 1402–1411. [40] G. Zhang, Y. Shen, Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control, Neural Netw. 55 (2014) 1–10.
Peng Miao received the bachelor's degree from the Department of Mathematics, Nanyang Normal University, China, in 2012, and the master's degree from the College of Science, China Three Gorges University, in 2015. He is now an assistant in the Department of Basic Courses, Zhengzhou Science and Technology University. His research interests include nonlinear systems and neural networks.
Yanjun Shen received the bachelor's degree from the Department of Mathematics, Huazhong Normal University, China, in 1992, the master's degree from the Department of Mathematics, Wuhan University, in 2001, and the Ph.D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, in 2004. He is currently a professor in the College of Electrical Engineering and New Energy, China Three Gorges University. His research interests include robust control, nonlinear systems and neural networks.

Yujiao Li received the bachelor's degree from the Department of Mathematics, Zhoukou Normal University, China, in 2013. She is now a postgraduate student in the College of Science, China Three Gorges University. Her research interests include convex analysis, nonlinear systems and neural networks.

Lei Bao received the bachelor's degree from the Department of Mechanical and Electrical Engineering, Wuhan Normal University, China, in 2013. He is now a postgraduate student in the College of Electrical Engineering and New Energy, China Three Gorges University. His research interests include hydrothermal scheduling and neural networks.