A novel neurodynamic reaction-diffusion model for solving linear variational inequality problems and its application


Applied Mathematics and Computation 346 (2019) 57–75

Contents lists available at ScienceDirect

Applied Mathematics and Computation journal homepage: www.elsevier.com/locate/amc

Chunlin Sha, Hongyong Zhao*
Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People's Republic of China

Article info

Keywords: Projection neural network; Delays; Diffusions; Globally exponential stability; Image fusion

Abstract: In this paper, we present a new delayed projection neural network with reaction-diffusion terms for solving linear variational inequality problems. The proposed neural network possesses a simple one-layer structure. By employing the differential inequality technique and constructing a new Lyapunov–Krasovskii functional, we derive some novel sufficient conditions ensuring globally exponential stability. These conditions depend on the diffusions, and no monotonicity assumption is required. Furthermore, the considered neural network can solve quadratic programming problems. Finally, several applicable examples are provided to illustrate the satisfactory performance of the proposed neural network. © 2018 Elsevier Inc. All rights reserved.

1. Introduction

The linear variational inequality (LVI) problem is to find a vector x* ∈ Ω such that

(x − x*)ᵀ(Mx* + q) ≥ 0,  ∀x ∈ Ω,   (1)

where M ∈ R^{n×n}, q ∈ Rⁿ, x = (x₁, x₂, ..., x_n)ᵀ ∈ Rⁿ, and the feasible domain Ω = {x ∈ Rⁿ | l_i ≤ x_i ≤ h_i, i = 1, 2, ..., n} is a nonempty convex subset of Rⁿ. If M is positive definite or positive semi-definite, problem (1) is called a strictly monotone or monotone LVI problem, respectively. The LVI problem is crucial in mathematical programming and is widely applied in scientific and engineering areas such as signal and image processing, resistive piecewise-linear circuits, automatic control, pattern recognition, military scheduling, and optimal control [1–4]. In recent years, owing to real-time applications and inherent massively parallel computation, one promising approach for handling LVI and related optimization problems is to employ artificial neural networks based on circuit implementation [5–10]. The idea of artificial neural networks that can be realized as a closed-loop circuit perhaps stems from Tank and Hopfield's work [11] in 1986. Their seminal work inspired other researchers to develop neural networks for optimization problems. Xia [5] proposed a linear projection neural network for solving linear projection equations. Tang [6] presented a discrete-time recurrent network for solving LVI problems and analyzed its globally exponential stability. Xia [8] developed a projection neural network for variational inequality problems with box or sphere constraints. Hu and Wang [9] studied a general projection neural network for variational inequality problems, which contains previous neural networks as special cases. It is worth noting that the above neural networks rest on three stringent assumptions: (i) LVI problems must be strictly monotone or monotone to ensure


This work was supported by the National Natural Science Foundation of China under Grant nos. 11571170, 61174155 and 61403193.
*Corresponding author. E-mail address: [email protected] (H. Zhao).

https://doi.org/10.1016/j.amc.2018.10.023
0096-3003/© 2018 Elsevier Inc. All rights reserved.


the stability and convergence of the neural network; (ii) neurons are assumed to communicate and respond without any delay in the practical circuit; (iii) diffusions are not taken into account. It is well known that, during hardware circuit implementation, delays inevitably occur in the signal transmission of neurons [12,13] on account of the finite switching speed of amplifiers and the communication time. This may affect the dynamical behavior and lead to oscillation or instability of the network [14,15]. Thus delays should be introduced into neural networks. It is inspiring that delayed neural networks for solving optimization problems have become an active research topic and received considerable attention [12,13,16–22]. Based on the penalty method, Chen et al. [19] solved convex quadratic programming by using a delayed neural network and gave the delay margin by analyzing the characteristic equation of the network. Liu et al. [20] presented a delayed neural network for a class of linear projection equations and obtained delay-dependent exponential criteria by the LMI method. Cheng et al. [17] proposed a delayed projection neural network for solving linear variational inequalities. Recently, Niu et al. [21] designed a new delayed projection neural network with two different delays and analyzed its globally exponential stability based on the theory of functional differential equations. By employing the differential inequality technique, Sha et al. [13] proposed a delayed projection neural network and solved quadratic programming problems with equality and inequality constraints; the proposed network has fewer neurons and a one-layer structure. It is noted that almost all of the above neural networks for solving optimization problems are described by ordinary differential equations (ODEs).
However, neural processing involves an ensemble of mutually connected neurons and takes place both in time and space; its dynamics are governed by laws of nonlinear diffusion [23,24]. Therefore, ODE neural networks ignore the spatial evolution at the level of neuron assemblies and cannot account for the diffusion and reaction of neurons in biological systems. In order to achieve a good approximation of the spatiotemporal actions and interactions of real-world neurons, it is desirable to introduce reaction-diffusion terms into neural networks [25,26]. Up to now, several authors have extensively investigated neural networks with reaction-diffusion terms and obtained useful stability criteria [23,24,26–28]. By constructing an appropriate Lyapunov functional and employing the linear matrix inequality method, sufficient criteria [26] for globally delay-dependent stability were derived; these conditions depend on the size of the delays and the measure of the space. In Ref. [23], globally exponential synchronization of reaction-diffusion networks with mixed delays was studied, and useful synchronization criteria in terms of the p-norm were obtained by using Lyapunov functional theory and introducing multiple parameters. Since delays, diffusions and the absence of a monotonicity assumption are significant for neural networks solving the LVI problem, the following questions naturally arise: Can delays affect stability, and are they beneficial to it? Does diffusion speed up convergence? Can a class of non-monotone LVI problems be solved by the proposed neural network? Does the neural network have practical applications? Evidently, these questions are important for the application of neural networks. Moreover, to the best of the authors' knowledge, neural networks with reaction-diffusion terms have not yet been applied to LVI problems or related optimization problems.
Inspired by the above discussion, our objective in this paper is to design and study a new delayed projection neural network with reaction-diffusion terms for solving LVI problem (1). The remainder of this paper is organized as follows. In Section 2, a delayed projection neural network with reaction-diffusion terms is presented for solving LVI problem (1). The globally exponential stability of the proposed neural network is established under suitable conditions in Section 3. In Section 4, we use the proposed neural network to solve the quadratic programming problem with equality and inequality constraints. Next, in Section 5, numerical simulations are given to demonstrate the effectiveness of the proposed neural network. Finally, Section 6 concludes this paper.

2. A projection neural network

According to the well-known projection theorem [29], x* ∈ Ω is the solution of LVI (1) if and only if it satisfies the following projection equation:

x = P_Ω[x − α(Mx + q)],   (2)

where α > 0 is a constant and P_Ω : Rⁿ → Ω is the projection operator defined by

P_Ω(x) = arg min_{v∈Ω} ‖x − v‖₂,

where ‖·‖₂ denotes the Euclidean (l₂) norm on Rⁿ. In 2013, Huang et al. [32] analyzed a projection neural network as follows:

dx/dt = P_Ω(Ax(t) + b) − x(t),  t ∈ [t₀, +∞),   (3)

where A = (a_ij)_{n×n} = I − αM and b = (b₁, b₂, ..., b_n)ᵀ = −αq. It is shown that if there exist a positive definite symmetric matrix P ∈ R^{n×n}, positive diagonal matrices D = diag{d₁, ..., d_n}, L = diag{l₁, ..., l_n} ∈ R^{n×n} and a constant k > 0 such that

[ (2k − 2)P    (2k − 1)AᵀD + P + AᵀL ]
[     ∗              DA + AᵀD − 2L   ]  < 0,


then the equilibrium point of neural network (3) is globally exponentially stable and converges to the solution of LVI problem (1); moreover, neural network (3) can solve a class of non-monotone LVI problems. Based on the discussion above, we present an alternative delayed projection neural network with reaction-diffusion terms for solving a class of more general LVI problems:

∂x(t,ξ)/∂t = Σ_{k=1}^m ∂/∂ξ_k ( D_k ∂x(t,ξ)/∂ξ_k ) − 2x(t,ξ) + x(t − τ̄, ξ) + P_Ω(Ax(t,ξ) + b),  t ∈ [t₀, +∞), ξ ∈ Ω₁,
∂x(t,ξ)/∂n = 0,  t ∈ [t₀, +∞), ξ ∈ ∂Ω₁,
x(t,ξ) = φ(t,ξ),  t ∈ [t₀ − τ, t₀], ξ ∈ Ω₁,   (4)

where Ω₁ denotes a compact set in R^m with a smooth boundary ∂Ω₁, ξ = (ξ₁, ξ₂, ..., ξ_m)ᵀ ∈ Ω₁, x(t,ξ) = (x₁(t,ξ), ..., x_n(t,ξ))ᵀ represents the average membrane potential of the neurons at time t and position ξ, D_k = diag(d_{1k}, ..., d_{nk}) > 0 denotes the transmission diffusion coefficients along the axons of the neurons, τ̄ = (τ₁, ..., τ_n)ᵀ > 0 stands for the transmission delays of the neurons and corresponds to the finite speed of axonal signal transmission, the weight matrix A gives the strength of the synaptic connections among the circuit neurons, τ = max{τ₁, ..., τ_n}, C ≜ C([t₀ − τ, t₀] × R^m, Rⁿ) is the Banach space of continuous functions mapping [t₀ − τ, t₀] × R^m into Rⁿ with the topology of uniform convergence, φ = (φ₁, ..., φ_n)ᵀ ∈ C is the initial state of the network, and P_Ω : Rⁿ → Ω is defined by (2) and can be implemented by a piecewise-linear function. Neural network (4) can be written componentwise as

∂x_i(t,ξ)/∂t = Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂x_i(t,ξ)/∂ξ_k ) − 2x_i(t,ξ) + x_i(t − τ_i, ξ) + P_Ω( Σ_{j=1}^n a_ij x_j(t,ξ) + b_i ),  t ∈ [t₀, +∞), ξ ∈ Ω₁,
∂x_i(t,ξ)/∂n = 0,  t ∈ [t₀, +∞), ξ ∈ ∂Ω₁,
x_i(t,ξ) = φ_i(t,ξ),  t ∈ [t₀ − τ, t₀], ξ ∈ Ω₁,   (5)

where i = 1, 2, ..., n and n is the number of formal neurons in neural network (5). Neural network (5) is composed of formal neurons connected by resistances that simulate synapses. A formal neuron is modeled by a subcircuit consisting of a capacitor, an amplifier, a reverse amplifier and a resistance [30]. From Fig. 1, it is easy to see that the architecture of neural network (5) can be implemented by integrated circuits with a one-layer structure, where x = (x₁, ..., x_n)ᵀ is the output variable and b = (b₁, ..., b_n)ᵀ is the input variable.

Remark 1. If D_k = 0, the proposed neural network has the same structure as that of [31]; if D_k = 0 and τ = 0, it reduces to those proposed in [5,32]. That is to say, neural network (5) described in this paper is more general than those of [5,31,32].

For the further discussion of the globally exponential stability of neural network (5), we introduce the following notation, definitions and lemmas.

Notation. Set x(t,ξ) = (x₁(t,ξ), ..., x_n(t,ξ))ᵀ, [x]⁺ = (max(0, x₁), max(0, x₂), ..., max(0, x_n))ᵀ, and write L²(Ω₁) for the space of real Lebesgue measurable functions on Ω₁; it is a Banach space with the L²-norm

‖x(t)‖₂ = ( Σ_{i=1}^n ‖x_i(t)‖₂² )^{1/2},  where ‖x_i(t)‖₂ = ( ∫_{Ω₁} |x_i(t,ξ)|² dξ )^{1/2}.

For any φ_i(t,ξ) ∈ C, define

‖φ‖_τ = ( Σ_{i=1}^n ‖φ_i‖_τ² )^{1/2},  where ‖φ_i‖_τ = ( ∫_{Ω₁} |φ_i(t,ξ)|_τ² dξ )^{1/2}  and  |φ_i(t,ξ)|_τ = sup_{t₀−τ ≤ t ≤ t₀} |φ_i(t,ξ)|.

Definition 1. A point x* is said to be an equilibrium point of neural network (5) if x* satisfies projection equation (2).

Definition 2. The equilibrium point x* of neural network (5) is said to be globally exponentially stable if there exist constants K > 0 and λ > 0 such that the output trajectory of the network satisfies

‖x(t) − x*‖₂ ≤ K ‖φ − x*‖_τ e^{−λ(t−t₀)}  for all t ≥ t₀,

where x(t,ξ) is the solution of neural network (5) and λ is the exponential convergence rate [25].
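Definition 1 characterizes an equilibrium point through projection equation (2). For a box feasible set Ω, the projection operator is a componentwise clip, so the fixed-point residual of (2) can be evaluated directly. A minimal sketch in NumPy (the matrix, vector and box below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection P_Omega onto the box {lo <= x <= hi}: a componentwise clip."""
    return np.clip(x, lo, hi)

def residual(x, M, q, lo, hi, alpha=0.7):
    """Norm of P_Omega[x - alpha(Mx + q)] - x; zero iff x solves LVI (1)."""
    return np.linalg.norm(proj_box(x - alpha * (M @ x + q), lo, hi) - x)

# Illustrative 2-D LVI whose solution is known: Mx* + q = 0 holds inside the box.
M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-2.0, -1.0])
lo, hi = np.full(2, -10.0), np.full(2, 10.0)
x_star = np.array([1.0, 1.0])
print(residual(x_star, M, q, lo, hi))   # ~0: x_star is an equilibrium point
```

The same residual is what stability theory below drives to zero along the network trajectories.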


Fig. 1. Block diagram of neural network (5).
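Independently of the circuit realization in Fig. 1, an equilibrium of the network (Definition 1) can be located offline by simply iterating projection equation (2). A sketch under assumed illustrative data (the step size and problem below are not taken from the paper):

```python
import numpy as np

def solve_lvi(M, q, lo, hi, alpha=0.1, tol=1e-10, max_iter=20000):
    """Locate an equilibrium point (Definition 1) by iterating projection
    equation (2): x <- P_Omega[x - alpha(Mx + q)]."""
    x = np.zeros(M.shape[0])
    for _ in range(max_iter):
        x_new = np.clip(x - alpha * (M @ x + q), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative strictly monotone LVI (assumed data); solution of Mx + q = 0
# lies inside the box, so it is the unique LVI solution.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
x = solve_lvi(M, q, np.full(2, -5.0), np.full(2, 5.0))
print(np.round(x, 6))
```

For positive definite M and small α, the map is a contraction, so the iteration converges; the neural network realizes this fixed-point search in continuous time.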

Lemma 1 ([26], Poincaré integral inequality). Let Ω₁ be a bounded domain of R^m with a smooth boundary ∂Ω₁. Let u(ξ): Ω₁ → R be a real-valued function belonging to H₀¹(Ω₁) with ∂u(ξ)/∂n |_{∂Ω₁} = 0. Then

∫_{Ω₁} |u(ξ)|² dξ ≤ (1/r₁) ∫_{Ω₁} |∇u(ξ)|² dξ,

where r₁ is the smallest positive eigenvalue of the Neumann boundary problem

−Δψ(ξ) = rψ(ξ), ξ ∈ Ω₁;  ∂ψ(ξ)/∂n = 0, ξ ∈ ∂Ω₁.   (6)

Lemma 2 ([32]). Assume the set Ω ⊂ Rⁿ is a closed convex set. Then for any x, y ∈ Rⁿ, P_Ω satisfies

‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖.

3. Stability analysis

Let x* = (x₁*, x₂*, ..., x_n*)ᵀ be the equilibrium point of neural network (5) and denote y_i(t,ξ) = x_i(t,ξ) − x_i* (i = 1, ..., n). Then neural network (5) can be reformulated as

∂y_i(t,ξ)/∂t = Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) − 2y_i(t,ξ) + y_i(t − τ_i, ξ) + f_i(y(t,ξ)),  t ∈ [t₀, +∞), ξ ∈ Ω₁,   (7)

with the Neumann boundary condition and initial value

∂y_i(t,ξ)/∂n = 0,  t ∈ [t₀, +∞), ξ ∈ ∂Ω₁,
y_i(t,ξ) = ψ_i(t,ξ),  t ∈ [t₀ − τ, t₀], ξ ∈ Ω₁,

where y(t,ξ) = (y₁(t,ξ), ..., y_n(t,ξ))ᵀ, f_i(y(t,ξ)) = P_Ω( Σ_{j=1}^n a_ij (y_j(t,ξ) + x_j*) + b_i ) − P_Ω( Σ_{j=1}^n a_ij x_j* + b_i ), and ψ_i(t,ξ) = φ_i(t,ξ) − x_i*, i = 1, 2, ..., n.


Clearly, the equilibrium point x* of neural network (5) is globally exponentially stable if and only if the zero equilibrium point of neural network (7) is globally exponentially stable. Thus, in the following we only consider the globally exponential stability of the zero equilibrium point of neural network (7).

Theorem 1. If there exists a positive constant λ < min_{1≤i≤n} ( Σ_{k=1}^m r₁ d_{ik}/2 + 1 ) such that

0 < τ_i < ln( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ − Σ_{j=1}^n |a_ji| ) / (2λ),  i = 1, 2, ..., n,

then the zero equilibrium point of neural network (7) is globally exponentially stable.

Proof. Multiplying both sides of (7) by y_i(t,ξ) and integrating with respect to ξ over Ω₁, we obtain

(1/2) d‖y_i(t)‖₂²/dt = ∫_{Ω₁} y_i(t,ξ) Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) dξ + ∫_{Ω₁} [ −2 y_i²(t,ξ) + y_i(t,ξ) y_i(t − τ_i, ξ) ] dξ + ∫_{Ω₁} y_i(t,ξ) f_i(y(t,ξ)) dξ.   (8)

By a simple calculation, we get

d‖y_i(t)‖₂/dt = (1/‖y_i(t)‖₂) { ∫_{Ω₁} y_i(t,ξ) Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) dξ + ∫_{Ω₁} [ −2 y_i²(t,ξ) + y_i(t,ξ) y_i(t − τ_i, ξ) ] dξ + ∫_{Ω₁} y_i(t,ξ) f_i(y(t,ξ)) dξ }.   (9)

It follows from the Green formula, the Neumann boundary condition and Lemma 1 that

∫_{Ω₁} y_i(t,ξ) Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) dξ
 = ∫_{∂Ω₁} Σ_{k=1}^m y_i(t,ξ) d_{ik} ( ∂y_i(t,ξ)/∂ξ_k ) ds − ∫_{Ω₁} Σ_{k=1}^m d_{ik} ( ∂y_i(t,ξ)/∂ξ_k )² dξ
 ≤ −r₁ ∫_{Ω₁} Σ_{k=1}^m d_{ik} y_i²(t,ξ) dξ,   (10)

where ∇ = (∂/∂ξ₁, ∂/∂ξ₂, ..., ∂/∂ξ_m)ᵀ is the gradient operator. Substituting (10) into (9), the following inequality holds:

d‖y_i(t)‖₂/dt ≤ −( Σ_{k=1}^m r₁ d_{ik} + 2 ) ‖y_i(t)‖₂ + (1/‖y_i(t)‖₂) ∫_{Ω₁} y_i(t,ξ) [ y_i(t − τ_i, ξ) + f_i(y(t,ξ)) ] dξ.   (11)


Based on the variation-of-constants formula, Lemma 2 and the Hölder inequality, we obtain

‖y_i(t)‖₂ ≤ e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−t₀)} ‖y_i(t₀)‖₂ + ∫_{t₀}^t e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−s)} (1/‖y_i(s)‖₂) ∫_{Ω₁} |y_i(s,ξ)| |y_i(s − τ_i, ξ) + f_i(y(s,ξ))| dξ ds
 ≤ e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−t₀)} ‖y_i(t₀)‖₂ + ∫_{t₀}^t e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−s)} (1/‖y_i(s)‖₂) ( ∫_{Ω₁} |y_i(s,ξ)| |y_i(s − τ_i, ξ)| dξ + ∫_{Ω₁} Σ_{j=1}^n |a_ij| |y_i(s,ξ)| |y_j(s,ξ)| dξ ) ds
 ≤ e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−t₀)} ‖y_i(t₀)‖₂ + ∫_{t₀}^t e^{−(Σ_{k=1}^m r₁ d_{ik} + 2)(t−s)} ( ‖y_i(s − τ_i)‖₂ + Σ_{j=1}^n |a_ij| ‖y_j(s)‖₂ ) ds.   (12)

According to the condition of Theorem 1, we have

Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ > 0

and

Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ − e^{2λτ_i} − Σ_{j=1}^n |a_ji| > 0,

where i = 1, 2, ..., n. Then, for all t ≥ t₀, we can obtain

sup_{t₀≤θ≤t} ( ‖y_i(θ)‖₂ e^{2λ(θ−t₀)} )
 ≤ sup_{t₀≤θ≤t} e^{(2λ − Σ_{k=1}^m r₁ d_{ik} − 2)(θ−t₀)} ‖y_i(t₀)‖₂ + sup_{t₀≤θ≤t} ∫_{t₀}^θ e^{2λ(θ−t₀) − Σ_{k=1}^m (r₁ d_{ik} + 2)(θ−s)/m·m} ( ‖y_i(s − τ_i)‖₂ + Σ_{j=1}^n |a_ij| ‖y_j(s)‖₂ ) ds
 ≤ ‖y_i(t₀)‖₂ + sup_{t₀≤θ≤t} ∫_{t₀}^θ e^{[2λ − Σ_{k=1}^m r₁ d_{ik} − 2](θ−s)} ( e^{2λτ_i} e^{2λ(s−τ_i−t₀)} ‖y_i(s − τ_i)‖₂ + Σ_{j=1}^n |a_ij| e^{2λ(s−t₀)} ‖y_j(s)‖₂ ) ds
 ≤ ‖y_i(t₀)‖₂ + ( 1 / ( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ ) ) ( e^{2λτ_i} sup_{t₀−τ_i≤θ≤t} e^{2λ(θ−t₀)} ‖y_i(θ)‖₂ + Σ_{j=1}^n |a_ij| sup_{t₀−τ_i≤θ≤t} e^{2λ(θ−t₀)} ‖y_j(θ)‖₂ ).   (13)

Then

sup_{t₀−τ_i≤θ≤t} ( ‖y_i(θ)‖₂ e^{2λ(θ−t₀)} ) ≤ sup_{t₀−τ_i≤θ≤t₀} ( ‖y_i(θ)‖₂ e^{2λ(θ−t₀)} ) + sup_{t₀≤θ≤t} ( ‖y_i(θ)‖₂ e^{2λ(θ−t₀)} )
 ≤ 2 sup_{t₀−τ_i≤θ≤t₀} ‖y_i(θ)‖₂ + ( 1 / ( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ ) ) ( e^{2λτ_i} sup_{t₀−τ_i≤θ≤t} e^{2λ(θ−t₀)} ‖y_i(θ)‖₂ + Σ_{j=1}^n |a_ij| sup_{t₀−τ_i≤θ≤t} e^{2λ(θ−t₀)} ‖y_j(θ)‖₂ ).   (14)

Summing inequality (14) from i = 1 to n, we get

Σ_{i=1}^n ( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ − e^{2λτ_i} − Σ_{j=1}^n |a_ji| ) sup_{t₀−τ_i≤θ≤t} ( ‖y_i(θ)‖₂ e^{2λ(θ−t₀)} ) ≤ 2 max_{1≤i≤n} ( Σ_{k=1}^m r₁ d_{ik} + 2 ) Σ_{i=1}^n sup_{t₀−τ_i≤θ≤t₀} ‖y_i(θ)‖₂.   (15)

Denote N = min_{1≤i≤n} ( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ − e^{2λτ_i} − Σ_{j=1}^n |a_ji| ) and M = 2N⁻¹ max_{1≤i≤n} ( Σ_{k=1}^m r₁ d_{ik} + 2 ). Then N > 0, M ≥ 1, and we can easily obtain

‖y(t)‖₂ ≤ M ‖ψ‖_τ e^{−λ(t−t₀)}.

Combining this with the definition of exponential stability, we conclude that the zero equilibrium point of neural network (7) is globally exponentially stable. This completes the proof.

Remark 2. It is observed from the proof of Theorem 1 that the exponential convergence rate λ depends on the delays and diffusions of the projection neural network; Theorem 1 shows that the delays and the reaction-diffusion terms do contribute to the exponential stability criterion. Moreover, for any given error precision ε > 0, there exists T = t₀ + (1/λ) ln( M‖ψ‖_τ / ε ) such that ‖x_i(t,ξ) − x_i*‖ < ε for t ≥ T. This means that the proposed neural network globally converges to the equilibrium point within a given error precision in finite time.

Corollary 1. If the condition

−Σ_{k=1}^m r₁ d_{ik} − 1 + Σ_{j=1}^n |a_ji| < 0,  i = 1, 2, ..., n,   (16)

holds, then the zero equilibrium point of neural network (7) is globally exponentially stable.

Next, by constructing a new Lyapunov–Krasovskii functional, we obtain an alternative criterion for the globally exponential stability of neural network (7).

Theorem 2. If there exist constants λ > 0, l_i > 0, β_i > 0, γ_i > 0 (i = 1, 2, ..., n) such that

−2 Σ_{k=1}^m l_i r₁ d_{ik} + (2λ − 4) l_i + l_i/β_i + β_i l_i e^{2λτ_i} + γ_i l_i τ_i + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j < 0,  i = 1, 2, ..., n,   (17)

then the zero equilibrium point of neural network (7) is globally exponentially stable.

Proof. Consider the Lyapunov–Krasovskii functional

V(t) = Σ_{i=1}^n ∫_{Ω₁} l_i [ y_i²(t,ξ) e^{2λ(t−t₀)} + β_i ∫_{t−τ_i}^t y_i²(s,ξ) e^{2λ(s−t₀+τ_i)} ds + γ_i ∫_{−τ_i}^0 ∫_{t+θ}^t y_i²(s,ξ) e^{2λ(s−t₀)} ds dθ ] dξ.   (18)

Calculating the upper-right Dini derivative along the solutions of (7), we obtain

D⁺V(t) = Σ_{i=1}^n ∫_{Ω₁} l_i [ 2 y_i(t,ξ) (∂y_i(t,ξ)/∂t) e^{2λ(t−t₀)} + 2λ y_i²(t,ξ) e^{2λ(t−t₀)} + β_i y_i²(t,ξ) e^{2λ(t−t₀+τ_i)} − β_i y_i²(t − τ_i, ξ) e^{2λ(t−t₀)} + γ_i τ_i y_i²(t,ξ) e^{2λ(t−t₀)} − γ_i ∫_{t−τ_i}^t y_i²(s,ξ) e^{2λ(s−t₀)} ds ] dξ.

Substituting ∂y_i/∂t from (7), dropping the nonpositive last term, and using the estimates 2 y_i(t,ξ) y_i(t − τ_i, ξ) ≤ (1/β_i) y_i²(t,ξ) + β_i y_i²(t − τ_i, ξ) and 2 |y_i(t,ξ)| |f_i(y(t,ξ))| ≤ Σ_{j=1}^n |a_ij| ( y_i²(t,ξ) + y_j²(t,ξ) ) (the latter by Lemma 2), we obtain

D⁺V(t) ≤ 2 e^{2λ(t−t₀)} Σ_{i=1}^n ∫_{Ω₁} l_i y_i(t,ξ) Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) dξ + e^{2λ(t−t₀)} Σ_{i=1}^n ∫_{Ω₁} [ (2λ − 4) l_i + l_i/β_i + β_i l_i e^{2λτ_i} + γ_i l_i τ_i + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j ] y_i²(t,ξ) dξ.   (19)

Furthermore, it follows from the Green formula and the Neumann boundary condition that

2 e^{2λ(t−t₀)} Σ_{i=1}^n ∫_{Ω₁} l_i y_i(t,ξ) Σ_{k=1}^m ∂/∂ξ_k ( d_{ik} ∂y_i(t,ξ)/∂ξ_k ) dξ ≤ −2 e^{2λ(t−t₀)} Σ_{i=1}^n l_i r₁ ∫_{Ω₁} Σ_{k=1}^m d_{ik} y_i²(t,ξ) dξ.   (20)

Therefore, substituting (20) into (19), we obtain

D⁺V(t) ≤ e^{2λ(t−t₀)} Σ_{i=1}^n ∫_{Ω₁} [ −2 Σ_{k=1}^m l_i r₁ d_{ik} + (2λ − 4) l_i + l_i/β_i + β_i l_i e^{2λτ_i} + γ_i l_i τ_i + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j ] y_i²(t,ξ) dξ.   (21)

From (17), we have

D⁺V(t) ≤ 0, and so V(t) ≤ V(t₀) for all t ≥ t₀. Since

V(t₀) = Σ_{i=1}^n ∫_{Ω₁} l_i [ y_i²(t₀,ξ) + β_i ∫_{t₀−τ_i}^{t₀} y_i²(s,ξ) e^{2λ(s−t₀+τ_i)} ds + γ_i ∫_{−τ_i}^0 ∫_{t₀+θ}^{t₀} y_i²(s,ξ) e^{2λ(s−t₀)} ds dθ ] dξ
 ≤ max_{1≤i≤n} ( l_i + l_i β_i τ_i e^{2λτ_i} + l_i γ_i τ_i² ) ‖ψ‖_τ²   (22)

and

V(t) ≥ Σ_{i=1}^n ∫_{Ω₁} l_i y_i²(t,ξ) e^{2λ(t−t₀)} dξ ≥ e^{2λ(t−t₀)} min_{1≤i≤n}(l_i) ‖y(t)‖₂²,

we can get

e^{2λ(t−t₀)} min_{1≤i≤n}(l_i) ‖y(t)‖₂² ≤ max_{1≤i≤n} ( l_i + l_i β_i τ_i e^{2λτ_i} + l_i γ_i τ_i² ) ‖ψ‖_τ².

Denote M = [ max_{1≤i≤n} ( l_i + l_i β_i τ_i e^{2λτ_i} + l_i γ_i τ_i² ) / min_{1≤i≤n}(l_i) ]^{1/2}. Then M ≥ 1 and we easily obtain

‖y(t)‖₂ ≤ M ‖ψ‖_τ e^{−λ(t−t₀)}.

Combining this with the definition of exponential stability, we conclude that the zero equilibrium point of neural network (7) is globally exponentially stable. This completes the proof.

Remark 3. It is worth pointing out that existing neural networks [7,8,33] for solving linear variational inequality problems require a monotonicity assumption (i.e., M is positive semi-definite or positive definite) to ensure stability. Nevertheless, M need not be positive semi-definite or positive definite in Theorem 1 (or Theorem 2). That is to say, neural network (5) can solve a class of non-monotone linear variational inequality problems. Thus, we extend and improve the results of Refs. [7,8,33].
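Corollary 1's condition (16) is easy to test numerically: with A = I − αM, it only asks that each column sum Σ_j |a_ji| stay below 1 + r₁ Σ_k d_ik, and it never requires M to be positive semi-definite. A small checker (NumPy; M is the Example 1 matrix from Section 5, while the diffusion coefficients here are illustrative assumptions):

```python
import numpy as np

def corollary1_holds(M, alpha, D, r1):
    """Condition (16): sum_j |a_ji| < 1 + r1 * sum_k d_ik for every i,
    where A = (a_ij) = I - alpha * M and D[i, k] = d_ik."""
    A = np.eye(M.shape[0]) - alpha * M
    col_sums = np.abs(A).sum(axis=0)       # sum over j of |a_ji|, one value per i
    thresholds = 1.0 + r1 * D.sum(axis=1)  # 1 + r1 * sum over k of d_ik
    return bool(np.all(col_sums < thresholds))

M = np.array([[0.1, 0.1, -0.5],
              [0.1, 0.1,  0.5],
              [0.5, -0.5, 0.0]])
D = np.full((3, 1), 1.5)                   # m = 1 spatial dimension, d_i1 = 1.5 (assumed)
print(corollary1_holds(M, alpha=0.7, D=D, r1=1.0))       # True: diffusion is strong enough
print(corollary1_holds(M, alpha=0.7, D=0 * D, r1=1.0))   # False: without diffusion
```

The contrast between the two calls illustrates the role of the reaction-diffusion terms: the same connection matrix passes the criterion only when the diffusion contribution r₁ Σ_k d_ik is present.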


Corollary 2. If there exist constants l_i > 0, β_i > 0, γ_i > 0 (i = 1, 2, ..., n) such that

−2 Σ_{k=1}^m l_i r₁ d_{ik} − 4 l_i + l_i/β_i + β_i l_i + γ_i l_i τ_i + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j < 0,  i = 1, 2, ..., n,   (23)

then the zero equilibrium point of neural network (7) is globally exponentially stable.

Corollary 3. If there exist constants l_i > 0 and β_i > 0 (i = 1, 2, ..., n) such that

−2 Σ_{k=1}^m l_i r₁ d_{ik} − 4 l_i + l_i/β_i + β_i l_i + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j < 0,  i = 1, 2, ..., n,   (24)

then the zero equilibrium point of neural network (7) is globally exponentially stable.

Proof. By (24), we can choose a constant λ > 0 (possibly very small) such that

−2 Σ_{k=1}^m l_i r₁ d_{ik} + (2λ − 4) l_i + l_i/β_i + β_i l_i e^{2λτ_i} + Σ_{j=1}^n |a_ij| l_i + Σ_{j=1}^n |a_ji| l_j < 0,  i = 1, 2, ..., n.   (25)

Consider the Lyapunov–Krasovskii functional

V(t) = Σ_{i=1}^n ∫_{Ω₁} l_i [ y_i²(t,ξ) e^{2λ(t−t₀)} + β_i ∫_{t−τ_i}^t y_i²(s,ξ) e^{2λ(s−t₀+τ_i)} ds ] dξ.   (26)

As the remainder of the proof is similar to that of Theorem 2, it is omitted here.

Remark 4. Let us compare Corollary 1 and Corollary 3 under the assumption that M is a symmetric matrix. With parameters l_i = 1, β_i = 1 (i = 1, 2, ..., n), the condition of Corollary 3 reduces to −Σ_{k=1}^m r₁ d_{ik} − 1 + Σ_{j=1}^n |a_ji| < 0, i = 1, 2, ..., n, which is exactly the condition of Corollary 1. Thus, Corollary 3 extends Corollary 1 in some way.

4. Solving convex optimization problems by the proposed approach

Consider the following quadratic programming problem with equality and inequality constraints:

minimize (1/2) xᵀQx + cᵀx
subject to Ex = e, Gx ≤ g,   (27)

where x = (x₁, ..., x_n)ᵀ ∈ Rⁿ, Q ∈ R^{n×n} is a symmetric positive semi-definite matrix, c ∈ Rⁿ, E ∈ R^{m×n} with rank(E) = m (0 < m < n), G ∈ R^{p×n}, e ∈ R^m, and g ∈ R^p. Using λ and μ to denote the Lagrange multipliers for the constraints Ex = e and Gx ≤ g, respectively, we construct the Lagrange function [34,35]

L(x, λ, μ) = (1/2) xᵀQx + cᵀx − λᵀ(Ex − e) − μᵀ(g − Gx),

where λ ∈ R^m and μ ∈ R^p. To simplify the architecture of neural network (5), we give the following theorems via the well-known Karush–Kuhn–Tucker conditions.

Theorem 3. x* is an optimal solution of (27) if and only if there exists μ* such that (x*, μ*) satisfies the following system:

(I − U)(Qx + c + Gᵀμ) + V(Ex − e) = 0,   (28)

[μ + α(Gx − g)]⁺ − μ = 0,   (29)

where α is a positive constant, U = Eᵀ(EEᵀ)⁻¹E, and V = Eᵀ(EEᵀ)⁻¹.

Proof. The proof is similar to that of Theorem 1 in [13].
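The reduction above is mechanical: form U = Eᵀ(EEᵀ)⁻¹E and V = Eᵀ(EEᵀ)⁻¹ from the constraint data and assemble the matrix M and vector q used in the projection reformulation of the KKT system. A sketch in NumPy (the toy QP data are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def assemble_projection_data(Q, c, E, e, G, g):
    """Build M, q of the projection reformulation from the QP data of (27),
    using U = E^T (E E^T)^{-1} E and V = E^T (E E^T)^{-1}."""
    n, p = Q.shape[0], G.shape[0]
    V = E.T @ np.linalg.inv(E @ E.T)
    U = V @ E
    I = np.eye(n)
    M = np.block([[(I - U) @ Q + U, (I - U) @ G.T],
                  [-G, np.zeros((p, p))]])
    q = np.concatenate([(I - U) @ c - V @ e, g])
    return M, q

# Toy QP (assumed data): minimize x1^2 + x2^2 - 2 x1 - 4 x2  s.t.  x1 + x2 = 2, x1 <= 10.
Q = np.diag([2.0, 2.0]); c = np.array([-2.0, -4.0])
E = np.array([[1.0, 1.0]]); e = np.array([2.0])
G = np.array([[1.0, 0.0]]); g = np.array([10.0])
M, q = assemble_projection_data(Q, c, E, e, G, g)

# z* = (x*, mu*) with x* = (0.5, 1.5), mu* = 0 is a fixed point of z = P_X(z - alpha(Mz + q)):
z = np.array([0.5, 1.5, 0.0])
step = z - 0.7 * (M @ z + q)
z_new = np.concatenate([step[:2], np.maximum(step[2:], 0.0)])  # P_X: clip mu at 0
print(np.allclose(z_new, z))   # True
```

The fixed-point check mirrors (28)–(29): the x-block is unconstrained while the multiplier block is projected onto the nonnegative orthant.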

Theorem 4. x* is an optimal solution of (27) if and only if there exists μ* such that z* = (x*ᵀ, μ*ᵀ)ᵀ satisfies the projection equation

z = P_X(z − α(Mz + q)),   (30)

where

z = [ x_{n×1} ; μ_{p×1} ],
M = [ (I_n − U)Q + U   (I_n − U)Gᵀ ; −G   0_{p×p} ],
q = [ (I_n − U)c − Ve ; g ].

Proof. Denote X₁ = {x ∈ Rⁿ | −∞ ≤ x ≤ +∞}. By the projection method, equation (28) can be reformulated as the linear projection equation

x = P_{X₁}{ x − α[ (I − U)(Qx + c + Gᵀμ) + V(Ex − e) ] }.

That is, since VE = U,

x = P_{X₁}{ x − α[ ((I_n − U)Q + U) x + (I_n − U)Gᵀ μ + (I_n − U)c − Ve ] }.   (31)

Denote X₂ = {x ∈ R^p | 0 ≤ x ≤ +∞}. By the projection method, (29) can be equivalently represented as the linear projection equation

μ = P_{X₂}{ μ + α(Gx − g) }.   (32)

Denote X = {x ∈ R^{n+p} | l ≤ x ≤ h}, where l = [ −∞_{n×1} ; 0_{p×1} ] and h = [ +∞_{n×1} ; +∞_{p×1} ]. Then Eqs. (31) and (32) can be rewritten together as

[ x ; μ ] = P_X { [ x ; μ ] − α ( [ (I_n − U)Q + U   (I_n − U)Gᵀ ; −G   0_{p×p} ] [ x ; μ ] + [ (I_n − U)c − Ve ; g ] ) }.

From the definitions of z, q and M, we obtain the projection equation

z = P_X(z − α(Mz + q)).

This completes the proof.

Corollary 4. x* is an optimal solution of the quadratic programming problem with linear equality constraints (i.e., minimize (1/2)xᵀQx + cᵀx subject to Ex = e) if and only if x* satisfies the projection equation

x = P_Ω(x − α(Mx + q)),   (33)

where α > 0 is a constant, M = (I − U)Q + VE, q = (I − U)c − Ve, U = Eᵀ(EEᵀ)⁻¹E and V = Eᵀ(EEᵀ)⁻¹.

Remark 5. If the condition of Theorem 1 (or Theorem 2) holds, then, by the relationship between the projection equation and the variational inequality problem, neural network (5) can be applied to solve the above linear projection equations and quadratic programming problems.

Remark 6. In [36], a neural network for solving quadratic programming problem (27) was presented as follows:

dx/dt = −k(Qx + c + Eᵀλ + Gᵀμ),
dμ/dt = k(−μ + [μ + Gx − g]⁺),
dλ/dt = k(Ex − e).   (34)

Effati and Nazemi [36] proved that the state trajectory of neural network (34) converges to the optimal solution under the hypothesis that Q is positive definite; in addition, that network has n + p + m neurons. By contrast, neural network (5) has a simpler structure with n + p neurons, and its state trajectory is globally exponentially convergent to the equilibrium point under the weaker condition that Q is positive semi-definite. Therefore, we improve the partial results of [36].

5. Numerical simulation

To demonstrate the effectiveness of the proposed neural network, four illustrative examples are given in this section. The computer simulations are conducted in Matlab R2013b. In order to solve the differential equations numerically, we discretize the model in the time and space domains. The continuous model (5), which is defined by the


Fig. 2. The output variable of neural network (5) with τi = 1(i = 1, 2, 3 ), d11 = 1.45, d21 = 1.48, d31 = 1.92 in Example 1(i).

reaction-diffusion neural network in one-dimensional space, is resolved on a discrete space domain with N lattice sites. The time evolution is discretized with time step Δt, and the time derivatives are approximated by differences over Δt using the Euler method. Meanwhile, the space step is given by the lattice constant h, and the Laplacian describing diffusion is computed by the finite difference method. We consider the following applications solved by our neural network with space domain Ω₁ = {ξ₁ | 0 ≤ ξ₁ ≤ l}, time step Δt = 0.01 and space step h = 0.2. Obviously, the smallest positive eigenvalue is r₁ = (π/l)².

5.1. Application in monotone LVI problem

Example 1. Consider the linear variational inequality problem [31] defined by (1), where

M = [ 0.1   0.1  −0.5 ; 0.1   0.1   0.5 ; 0.5  −0.5   0.0 ],  q = [ −1.0 ; 1.0 ; −0.5 ],

Ω = {x ∈ R³ | −10 ≤ x_i ≤ 10, i = 1, 2, 3}. It is easy to see that M is an asymmetric positive semi-definite matrix with minimal eigenvalue 0 and maximal eigenvalue 0.2. This linear variational inequality problem has the unique solution x* = (0.5, −0.5, −2)ᵀ.

(i) Global convergence to the optimal solution. For neural network (5), we fix l = π, α = 0.7, m = 1, d₁₁ = 1.45, d₂₁ = 1.48, d₃₁ = 1.92, λ = 1, τ_i = 1 (i = 1, 2, 3). By a simple calculation, we obtain r₁ = 1, λ < min_{1≤i≤n}( Σ_{k=1}^m r₁ d_{ik}/2 + 1 ) = 1.7250, and τ_i < min_{1≤i≤n} ln( Σ_{k=1}^m r₁ d_{ik} + 2 − 2λ − Σ_{j=1}^n |a_ji| )/(2λ) = 2.1913. Based on Theorem 1, the equilibrium point (0.5, −0.5, −2)ᵀ is globally exponentially stable.

n

T
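As a quick sanity check on this example (a minimal sketch assuming numpy, separate from the paper's Matlab simulations): since x∗ lies in the interior of the box, the LVI reduces to Mx∗ + c = 0, and M, although positive semi-definite, has a purely imaginary eigenvalue pair ±i/√2, which is consistent with the oscillation of the undelayed network reported in Example 1(ii) below.

```python
import numpy as np

# Data of Example 1: asymmetric, positive semi-definite M.
M = np.array([[0.1,  0.1, -0.5],
              [0.1,  0.1,  0.5],
              [0.5, -0.5,  0.0]])
c = np.array([-1.0, 1.0, -0.5])
x_star = np.array([0.5, -0.5, -2.0])

# x* is interior to [-10, 10]^3, so the variational inequality reduces to M x* + c = 0.
print(M @ x_star + c)            # all zeros

# Spectrum of M: one real eigenvalue 0.2 and a purely imaginary pair +/- i/sqrt(2).
eigs = np.linalg.eigvals(M)
print(np.sort_complex(eigs))
```

The imaginary pair makes the undelayed, diffusion-free dynamics rotate around the equilibrium rather than decay toward it, matching the behavior shown in Fig. 3(a).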

As observed in Fig. 2, the output variable globally converges to (0.5, −0.5, −2)ᵀ, which corresponds to the optimal solution.

(ii) Impact of delays on the stability of neural network

To observe the impact of delays on the stability of our neural network, we keep α = 0.7, m = 1, di1 = 0 (i = 1, 2, 3) unchanged and take τi = 0 and τi = 1 (i = 1, 2, 3), respectively. First of all, we consider our neural network (5) without delays. As seen from Fig. 3(a), the output variable of the network without delays oscillates and does not converge to x∗. Then we add the delay term and take τi = 1 (i = 1, 2, 3); it is easy to see that the output variable of neural network


Fig. 3. The output variables of neural network (5) with τ = 0, 1, respectively.

(5) is globally convergent to x∗ from Fig. 3(b). Therefore, appropriate transmission delays of the circuit neurons can change the stability of the neural network.

Remark 7. When m = 1 and τi = di1 = 0 (i = 1, 2, 3), neural network (5) becomes neural network (3) analyzed by Huang et al. [32]. As seen in Example 1(ii), compared with the existing neural network (4) proposed by Huang et al. [32], our neural network (5) can solve more general variational inequalities by introducing appropriate time delays. Thus, we extend and improve partial results of [32].

(iii) Impact of diffusions on neural network

When the output variable of the neural network oscillates and does not converge to x∗ as shown in Example 1(ii), we observe the impact of diffusions on the amplitude of the neural network. Let parameters τ₁ = τ₂ = τ₃ = 0, d11 = d21 = d31 = d, α = 0.7, and assign 0 and 0.6 to d, respectively. The simulation results are shown in Fig. 4, from which it can be concluded that the amplitude of neural network (5) without diffusions is greater than that of neural network (5) with diffusions. This means that diffusions can inhibit the oscillation of the neural network.

5.2. Application in non-monotone LVI problem

Example 2. Consider the following linear variational inequality problem [21] defined by (1), where



$$M = \begin{pmatrix} 0.5 & 1 & -0.1 \\ -0.1 & 0.5 & 0.1 \\ 1 & -0.1 & 0.5 \end{pmatrix}, \qquad c = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},$$

Ω = {x ∈ R³ | −5 ≤ xᵢ ≤ 5, i = 1, 2, 3}.

It is easy to see that M is an asymmetric and indefinite matrix, which implies that this is a non-monotone linear variational inequality problem; it has a unique optimal solution x∗ = (−0.3343, −0.5775, 2.5532)ᵀ.

(i) Globally converge to the optimal solution

Consider neural network (5) with the following parameters: l = π, α = 0.7, m = 1, d11 = 0.615, d21 = 0.930, d31 = 0.60, βi = 1, λ = 0.1, γi = 0.01, li = 1, τi = 0.5 (i = 1, 2, 3). By a simple calculation, we obtain r₁ = 1 and $\max_{1\le i\le n}\bigl[-2\sum_{k=1}^{m} l_i r_1 d_{ik} + (2\lambda-4)l_i + \beta_i + \beta_i l_i e^{2\lambda\tau_i} + \gamma_i l_i \tau_i + \sum_{j=1}^{n} |a_{ij}| l_i + \sum_{j=1}^{n} |a_{ji}| l_j\bigr] = -0.6098 < 0$, which implies that these parameters satisfy the condition of Theorem 2. Thus, the equilibrium point (−0.3343, −0.5775, 2.5532)ᵀ is globally exponentially stable. The simulation results are shown in Fig. 5, which depicts the transient behavior of the proposed neural network (5) with a random initial function. As can be seen from Fig. 5, the output variable globally converges to the optimal solution (−0.3343, −0.5775, 2.5532)ᵀ.

Remark 8. We make a comparison between Corollary 1 and Corollary 2. Firstly, we take the parameters α = 0.7, m = 1, d11 = 0.415, d21 = 0.50, d31 = 0.60, r₁ = 1 in Corollary 1. By a simple calculation, we have $\max_{1\le i\le 3}\bigl(-\sum_{k=1}^{m} r_1 d_{ik} - 1 + \sum_{j=1}^{n} |a_{ji}|\bigr) = 0.0050 > 0$. Clearly, the condition of Corollary 1 is not satisfied. However, we add the parameters βi = 1, l₁ = 2, l₂ = l₃ = 1 in Corollary 2. After calculating, we obtain $\max_{1\le i\le 3}\bigl(-2\sum_{k=1}^{m} l_i r_1 d_{ik} - 4l_i + \beta_i + \beta_i l_i + \sum_{j=1}^{n} |a_{ij}| l_i + \sum_{j=1}^{n} |a_{ji}| l_j\bigr) = -0.0900 < 0$, which implies that the condition of Corollary 2 holds. This means that the stability criterion in Corollary 2 is more effective and less conservative than that in Corollary 1.
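As an independent cross-check on the solution reported in Example 2 (a minimal numpy sketch of the underlying projection equation, not of the delayed reaction-diffusion network itself; the step size and iteration count are illustrative choices): although M is not monotone, its eigenvalues (approximately 0.82 and 0.34 ± 0.53i) all have positive real part, so the discrete projection dynamics converge for a small step size.

```python
import numpy as np

# Data of Example 2 (non-monotone LVI) with box constraint set Omega = [-5, 5]^3.
M = np.array([[ 0.5,  1.0, -0.1],
              [-0.1,  0.5,  0.1],
              [ 1.0, -0.1,  0.5]])
c = np.array([1.0, 0.0, -1.0])

def project(x):
    """Projection onto the box [-5, 5]^3 (componentwise clamping)."""
    return np.clip(x, -5.0, 5.0)

# Fixed-point iteration of the projection equation x = P(x - alpha * (M x + c)).
alpha = 0.5                       # illustrative step size
x = np.zeros(3)
for _ in range(5000):
    x = project(x - alpha * (M @ x + c))

print(np.round(x, 4))             # close to the reported x* = (-0.3343, -0.5775, 2.5532)
```

Since x∗ is interior to the box, the limit also satisfies Mx∗ + c = 0, which is what the final assertion of convergence rests on.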


Fig. 4. The output variables of neural network (5) with and without diffusions.

(ii) Impact of diffusions on neural network

To observe the impact of diffusions on the convergence rate of the neural network, we let α = 2, τi = 5, and take d11 = d21 = d31 = 0 and d11 = 2, d21 = 3, d31 = 4, respectively. The simulation results are shown in Fig. 6, from which we can observe that the output variables of the temporal network, although converging to the optimal solution, have higher volatility than those of the spatio-temporal network. This means that the diffusion terms restrain the volatility and accelerate convergence.

5.3. Application in quadratic program with equality and inequality constraints problem

Example 3. Consider the following quadratic program with equality and inequality constraints:

$$\begin{aligned} \text{minimize}\quad & f(x) = x_1^2 + x_2^2 + x_1 x_2 - 11x_1 + 2x_2 - 5x_3,\\ \text{subject to}\quad & x_1 - x_2 + x_3 = 4,\\ & x_1 - x_2 + 2x_3 \le 2. \end{aligned}$$

Let
$$Q = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad c = \begin{pmatrix} -11 \\ 2 \\ -5 \end{pmatrix}, \quad E = (1, -1, 1), \quad e = 4, \quad G = (1, -1, 2), \quad g = 2.$$
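Since the inequality constraint turns out to be active at the optimum of this program, the solution can be cross-checked by solving the KKT system with both constraints treated as equalities (a minimal numpy sketch; the active-set choice is confirmed a posteriori by the nonnegative inequality multiplier μ):

```python
import numpy as np

# Example 3 data: minimize (1/2) x^T Q x + c^T x  s.t.  E x = e,  G x <= g.
Q = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
c = np.array([-11.0, 2.0, -5.0])
E = np.array([[1.0, -1.0, 1.0]]); e = np.array([4.0])
G = np.array([[1.0, -1.0, 2.0]]); g = np.array([2.0])

# KKT system with the inequality active:
#   Q x + c + E^T lam + G^T mu = 0,   E x = e,   G x = g.
Z = np.zeros((1, 1))
K = np.block([[Q, E.T, G.T],
              [E, Z,   Z  ],
              [G, Z,   Z  ]])
sol = np.linalg.solve(K, np.concatenate([-c, e, g]))
x, lam, mu = sol[:3], sol[3], sol[4]

print(np.round(x, 4), round(mu, 4))  # x = (4.5, -1.5, -2); mu = 1.5 >= 0 confirms the active set
```

The KKT matrix is nonsingular here because the reduced Hessian of Q on the null space of the stacked constraints is positive definite, even though Q itself is singular.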


Fig. 5. The output variable of neural network (5) with τi = 1(i = 1, 2, 3 ), d11 = 1.2, d21 = 0.92, d31 = 0.9 in Example 2.

Then this quadratic programming problem with equality and inequality constraints can be rewritten as the quadratic programming problem of the form (27). It is obvious that Q is a symmetric and positive semi-definite matrix with the minimal eigenvalue 0 and the maximal eigenvalue 3. This quadratic program has a unique solution x∗ = (4.5, −1.5, −2)ᵀ.

(i) Globally converge to the optimal solution

Firstly, we consider neural network (5) without diffusions. Keep l = π/2, α = 1, m = 1, βi = 1.5, λ = 0.3, γi = 0.2, li = 1, τi = 0.4 (i = 1, ..., 4). By calculating, we find that r₁ = 4 and $\max_{1\le i\le 4}\bigl[-2\sum_{k=1}^{m} l_i r_1 d_{ik} + (2\lambda-4)l_i + \beta_i + \beta_i l_i e^{2\lambda\tau_i} + \gamma_i l_i \tau_i + \sum_{j=1}^{4} |a_{ij}| l_i + \sum_{j=1}^{4} |a_{ji}| l_j\bigr] = 6.5869 > 0$, which implies that the condition of Theorem 2 fails. However, we add the diffusion terms and take d11 = 1.832, d21 = 1.643, d31 = 1.754, d41 = 1.953. After a calculation, we have r₁ = 4 and $\max_{1\le i\le 4}\bigl[-2\sum_{k=1}^{m} l_i r_1 d_{ik} + (2\lambda-4)l_i + \beta_i + \beta_i l_i e^{2\lambda\tau_i} + \gamma_i l_i \tau_i + \sum_{j=1}^{4} |a_{ij}| l_i + \sum_{j=1}^{4} |a_{ji}| l_j\bigr] = -8.2238 < 0$. Clearly, these parameters satisfy the condition of Theorem 2, and the simulation results shown in Fig. 7 verify the theory. As can be seen from Fig. 7, the output variable globally converges to the optimal solution (4.5, −1.5, −2)ᵀ.

(ii) Impact of parameter α on the convergence of neural network

To observe the impact of the parameter α on the convergence of our neural network (5), we assign 1, 3, 5 to α, respectively, and choose the other parameters the same as in Example 3(i). After calculating, the condition of Theorem 2 holds. As seen from Fig. 8, we notice that a larger α yields a shorter convergence time. This observation means that the globally exponential convergence rate of neural network (5) can be improved by increasing the parameter α.

5.4. Application in image fusion

Data fusion is a critical problem in image analysis and processing, and it is widely used in medical imaging, computer vision, remote sensing, wireless sensor networks, etc. [37]. The image information consists of signals collected by different sensors and images from various modalities. The aim of data fusion is to integrate complementary information from different sources to improve the system performance [38]. Fusion concepts and techniques draw on tools including the signal-to-noise ratio, weighted averaging, minimal mean square error, neural networks, sub-band filtering, and rule-based knowledge [39].


Fig. 6. The output variables of temporal network (5) and spatio-temporal network (5) in Example 2(ii).

More recently, fuzzy logic, graph pyramids, the L₁-estimation-based fusion method [38], and the L₂-norm-estimation-based fusion method [40] have been used. Consider the fusion of K images at the pixel level and let the ith image measurement be expressed as

mᵢ(t) = aᵢ s(t) + nᵢ(t), (i = 1, ..., K; t = 1, ..., N),

where mᵢ(t) is the ith noisy image datum measured by the ith sensor, aᵢ denotes a scaling coefficient, N is the number of sensor measurements, s(t) represents the original image data, and n(t) = [n₁(t), ..., n_K(t)]ᵀ, where nᵢ(t) is the additive Gaussian noise at the ith sensor; in addition, s(t) and nᵢ(t) are mutually independent random processes, i = 1, ..., K. The main goal of image fusion is to find an optimal fusion such that the uncertainty of the fused information is minimized and the corresponding signal-to-noise ratio is improved. That is, there exists a set of optimal weights {ω₁∗, ..., ω_K∗} minimizing the following objective function:

$$\text{minimize}\quad f_1(\omega) = E[(z(t) - s(t))^2],$$

where $z(t) = \sum_{i=1}^{K} \omega_i m_i(t)$ is the fused image. According to the linearly constrained least squares method proposed in [2], the image fusion problem can be reformulated as the constrained quadratic programming problem

$$\begin{aligned} \text{minimize}\quad & f_2(\omega) = \omega^T R \omega,\\ \text{subject to}\quad & a^T \omega = 1, \end{aligned}$$

where $a = (1, \ldots, 1)^T$ and the available average estimate is given by $R = \frac{1}{N}\sum_{t=1}^{N} m(t) m(t)^T$.


Fig. 7. The output variable of neural network (5) with τ = 0.5, d11 = 1.715, d21 = 1.830, d31 = 1.72, d41 = 2.58 in Example 3(i).

Fig. 8. The output variables for neural network (5) with α = 1, 3, 5 in Example 3(ii).


Fig. 9. (a)–(f) are the output variables of neural network (5) with K = 6, (g) is a random noisy Lena image, (h) is the fused Lena image with K = 6.

So the optimal image fusion can be formulated as

$$z(t) = \sum_{i=1}^{K} \omega_i^* m_i(t).$$
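The pipeline just described (measure, estimate R, solve the constrained weight problem, fuse) can be sketched end-to-end on synthetic data; a hypothetical 1-D signal stands in for the image pixels, numpy is assumed, and the closed form ω∗ = R⁻¹a / (aᵀR⁻¹a) is the Lagrangian solution of the constrained weight problem:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 6, 10_000                      # sensors, samples (hypothetical sizes)
s = rng.uniform(0.0, 1.0, size=N)     # stand-in for the original image data s(t)
a_scale = np.ones(K)                  # scaling coefficients a_i (taken as 1 here)
noise = 0.3 * rng.standard_normal((K, N))
m = a_scale[:, None] * s + noise      # m_i(t) = a_i s(t) + n_i(t)

# Sample average estimate R = (1/N) sum_t m(t) m(t)^T.
R = (m @ m.T) / N

# Optimal weights of  min w^T R w  s.t.  1^T w = 1  (Lagrangian closed form).
ones = np.ones(K)
w = np.linalg.solve(R, ones)
w /= ones @ w

z = w @ m                             # fused signal z(t) = sum_i w_i m_i(t)

mse_fused = np.mean((z - s) ** 2)
mse_single = np.mean((m[0] - s) ** 2)
print(mse_fused < mse_single)         # fusion reduces the mean squared error
```

With Σᵢwᵢ = 1 and aᵢ = 1, the fusion error is exactly the weighted noise average Σᵢwᵢnᵢ(t), whose variance is roughly 1/K of a single sensor's, which is why the fused MSE drops.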

Example 4. This example illustrates the effectiveness of the proposed projection neural fusion method. The proposed fusion method is applied to the Lena image, which is an eight-bit gray-level image with 256 × 256 pixels.

(i) Globally exponentially converge to the optimal solution and image fusion

We consider that 6 different Lena images at the pixel level are used for the Lena image collection. Fig. 9(g) is a noisy Lena image measured by one random sensor, where the signal-to-noise ratio is 11 dB. According to $R = \frac{1}{N}\sum_{t=1}^{N} m(t)m(t)^T$,


Fig. 10. (a)–(c) are the fused Lena images using K = 2, 20, 60, respectively.

we can obtain

$$R = \begin{pmatrix}
0.3371 & 0.2731 & 0.2728 & 0.2734 & 0.2732 & 0.2719 \\
0.2731 & 0.3360 & 0.2723 & 0.2730 & 0.2730 & 0.2716 \\
0.2728 & 0.2723 & 0.3356 & 0.2727 & 0.2724 & 0.2711 \\
0.2734 & 0.2730 & 0.2727 & 0.3364 & 0.2728 & 0.2718 \\
0.2732 & 0.2730 & 0.2724 & 0.2728 & 0.3360 & 0.2715 \\
0.2719 & 0.2716 & 0.2711 & 0.2718 & 0.2715 & 0.3333
\end{pmatrix}.$$
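Given this R, the weight problem min ωᵀRω subject to aᵀω = 1 has the closed-form solution ω∗ = R⁻¹a/(aᵀR⁻¹a), so the weights can be checked directly (a minimal numpy sketch, independent of the neural network); consistent with the reported w∗, the sixth sensor, whose diagonal entry of R is smallest, receives the largest weight:

```python
import numpy as np

# Estimated correlation matrix R from Example 4 (symmetric, 6 sensors).
R = np.array([
    [0.3371, 0.2731, 0.2728, 0.2734, 0.2732, 0.2719],
    [0.2731, 0.3360, 0.2723, 0.2730, 0.2730, 0.2716],
    [0.2728, 0.2723, 0.3356, 0.2727, 0.2724, 0.2711],
    [0.2734, 0.2730, 0.2727, 0.3364, 0.2728, 0.2718],
    [0.2732, 0.2730, 0.2724, 0.2728, 0.3360, 0.2715],
    [0.2719, 0.2716, 0.2711, 0.2718, 0.2715, 0.3333],
])

# Closed-form solution of  min w^T R w  s.t.  1^T w = 1.
ones = np.ones(6)
w = np.linalg.solve(R, ones)
w /= ones @ w

print(np.round(w, 4))   # weights sum to 1; the largest weight belongs to sensor 6
```

This is the same stationary point the projection neural network converges to in Fig. 9(a)–(f).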

We choose the parameters as follows: l = π, α = 1, m = 1, d11 = 1.30, d21 = 1.25, d31 = 1.23, βi = 1, λ = 0.2, γi = 0.01, li = 1, τi = 1.1 (i = 1, 2, 3). By a simple calculation, we have r₁ = 1 and $\max_{1\le i\le n}\bigl[-2\sum_{k=1}^{m} l_i r_1 d_{ik} + (2\lambda-4)l_i + \beta_i + \beta_i l_i e^{2\lambda\tau_i} + \gamma_i l_i \tau_i + \sum_{j=1}^{n} |a_{ij}| l_i + \sum_{j=1}^{n} |a_{ji}| l_j\bigr] = -1.0773 < 0$, so the above parameters satisfy the conditions of Theorem 2. According to Theorem 2, the equilibrium point of the projection neural network is globally exponentially stable, which is consistent with Fig. 9(a)–(f). As seen from Fig. 9(a)–(f), the simulation result shows that the output variable of our neural network converges to the optimal solution w∗ = (0.1580, 0.1628, 0.1700, 0.1606, 0.1645, 0.1841)ᵀ. Fig. 9(h) is the fused image obtained by the projection neural fusion method with 6 sensors. It is obvious that the fused image in Fig. 9(h) has better quality than the noisy image in Fig. 9(g). This means that image fusion can be applied to enhance the quality of an image so that more reliable segmentations are obtained and more discriminating features are extracted for image processing.

(ii) Impact of the number of collected pictures on the quality of the fused image

To observe the impact of the number of collected pictures on the quality of the fused image, we use 2, 20, and 60 different images at the pixel level, respectively, and choose the same parameters as those of Example 4(i). After calculating, we find that these parameters satisfy the stability condition of Theorem 2. Using the proposed neural network (5), Fig. 10(a)–(c) illustrate that the quality of the fused image improves as the number of collected pictures increases.

6. Conclusions

In this paper, we introduced delays and diffusions into neural network (5), which is an improvement and extension of previous networks proposed in [32]. The proposed neural network can be implemented by a circuit with a one-layer structure and is amenable to parallel implementations.
By utilizing the differential inequality technique and constructing the Lyapunov–Krasovskii functional, we presented some verifiable conditions for the globally exponential stability of neural


network (5). In addition, we also obtained the influence of the parameters τi, dik, and α on the convergence behavior or convergence rate through simulations. Finally, examples and applications were provided to illustrate the performance of the proposed neural network.

Acknowledgement

The work was supported by the National Natural Science Foundation of China under Grant nos. 11571170 and 11501290. The authors would like to express their gratitude to the Editor and the anonymous referees for their valuable comments and suggestions that led to a truly significant improvement of the manuscript.

References

[1] F. Facchinei, J.S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Science & Business Media, Berlin, Germany, 2003.
[2] Y. Xia, H. Leung, E. Boss, Neural data fusion algorithms based on a linearly constrained least square method, IEEE Trans. Neural Netw. 13 (2) (2002) 320–329.
[3] K. Sun, Y. Li, S. Tong, Fuzzy adaptive output feedback optimal control design for strict-feedback nonlinear systems, IEEE Trans. Syst. Man Cybern. Syst. 47 (1) (2017) 33–44.
[4] K. Sun, S. Sui, S. Tong, Fuzzy adaptive decentralized optimal control for strict feedback nonlinear large-scale systems, IEEE Trans. Cybern. 48 (4) (2018) 1326–1339.
[5] Y. Xia, J. Wang, A recurrent neural network for solving linear projection equations, Neural Netw. 13 (2000) 337–350.
[6] H. Tang, K. Tan, Y. Zhang, Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem, in: Proceedings of the International Joint Conference on Neural Networks 3 (2002) 2470–2475.
[7] Y. Xia, H. Leung, J. Wang, A projection neural network and its application to constrained optimization problems, IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 49 (4) (2002) 447–458.
[8] Y. Xia, Further results on global convergence and stability of globally projected dynamical systems, J. Optim. Theory Appl. 122 (3) (2004) 627–649.
[9] X. Hu, J. Wang, Solving generally constrained generalized linear variational inequalities using the general projection neural networks, IEEE Trans. Neural Netw. 18 (6) (2007) 1697–1708.
[10] Y. Yang, J. Cao, X. Xu, M. Hu, Y. Gao, A new neural network for solving quadratic programming problems with equality and inequality constraints, Math. Comput. Simul. 101 (2014) 103–112.
[11] D.W. Tank, J.J. Hopfield, Simple neural optimization networks: an a/d converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (5) (1986) 533–541.
[12] C. Sha, H. Zhao, T. Huang, W. Hu, A projection neural network with time delays for solving linear variational inequality problems and its applications, Circuits Syst. Signal Process. 168 (2016) 1164–1172.
[13] C. Sha, H. Zhao, F. Ren, A new delayed projection neural network for solving quadratic programming problems with equality and inequality constraints, Neurocomputing 168 (2015) 1164–1172.
[14] H. Zhao, N. Ding, Dynamic analysis of stochastic Cohen–Grossberg neural networks with time delays, Appl. Math. Comput. 183 (1) (2006) 464–470.
[15] Q. Fang, J. Cao, T. Hayat, Delay-dependent stability of neutral system with mixed time-varying delays and nonlinear perturbations using delay-dividing approach, Cognit. Neurodyn. 9 (1) (2015) 75–83.
[16] L. Cheng, Z. Hou, M. Tan, Solving linear variational inequalities by projection neural network with time-varying delays, Phys. Lett. A 373 (20) (2009) 1739–1743.
[17] L. Cheng, Z. Hou, M. Tan, A delayed projection neural network for solving linear variational inequalities, IEEE Trans. Neural Netw. 20 (6) (2009) 915–925.
[18] B. Huang, G. Hui, D. Gong, et al., A projection neural network with mixed delays for solving linear variational inequality, Neurocomputing 125 (2014) 28–32.
[19] Y. Chen, S. Fang, Neurocomputing with time delay analysis for solving convex quadratic programming problems, IEEE Trans. Neural Netw. 11 (1) (1999) 230–240.
[20] Q. Liu, J. Cao, Y. Xia, A delayed neural network for solving linear projection equations and its analysis, IEEE Trans. Neural Netw. 16 (4) (2005) 834–843.
[21] J. Niu, D. Liu, A new delayed projection neural network for solving quadratic programming problems subject to linear constraints, Appl. Math. Comput. 21 (6) (2012) 3139–3146.
[22] A. Nazemi, A neural network model for solving convex quadratic programming problems with some applications, Eng. Appl. Artif. Intell. 32 (2014) 54–62.
[23] C. Hu, J. Yu, H. Jiang, Z. Teng, Exponential synchronization for reaction–diffusion networks with mixed delays in terms of p-norm via intermittent driving, IEEE Trans. Neural Netw. 31 (7) (2012) 1–11.
[24] L. Wang, H. Zhao, Synchronized stability in a reaction–diffusion neural network model, Phys. Lett. A 378 (48) (2014) 3586–3599.
[25] W. Chen, S. Luo, W. Zheng, Impulsive synchronization of reaction–diffusion neural networks with mixed delays and its application to image encryption, IEEE Trans. Neural Netw. Learn. Syst. 27 (12) (2016) 2696–2710.
[26] Q. Zhu, J. Cao, Delay-dependent exponential stability for a class of neural networks with time delays and reaction–diffusion terms, J. Frankl. Inst. 346 (4) (2009) 301–314.
[27] K. Wang, Z. Teng, H. Jiang, Global exponential synchronization in delayed reaction–diffusion cellular neural networks with the Dirichlet boundary conditions, Math. Comput. Model. 52 (1) (2010) 12–24.
[28] J. Tan, C. Li, T. Huang, Exponential stability of delayed fuzzy cellular neural networks with diffusion, Chaos Solitons Fractals 31 (3) (2007) 658–664.
[29] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, SIAM, New York, USA, 1980.
[30] H. Yang, T. Dillon, Exponential stability and oscillation of Hopfield graded response neural network, IEEE Trans. Neural Netw. 5 (5) (1994) 719–729.
[31] Q. Liu, J. Cao, Y. Xia, A delayed neural network for solving linear projection equations and its analysis, IEEE Trans. Neural Netw. 16 (4) (2005) 834–843.
[32] B. Huang, H. Zhang, D. Gong, Z. Wang, A new result for projection neural networks to solve linear variational inequalities and related optimization problems, Neural Comput. Appl. 23 (2) (2013) 357–362.
[33] Y. Xia, J. Wang, A recurrent neural network for solving linear projection equations, IEEE Trans. Neural Netw. 13 (3) (2000) 337–350.
[34] M. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New Jersey, 2013.
[35] I. Griva, S. Nash, A. Sofer, Linear and Nonlinear Optimization, SIAM, 2009.
[36] S. Effati, A. Nazemi, Neural network models and its application for solving linear and quadratic programming problems, Appl. Math. Comput. 172 (1) (2006) 305–331.
[37] R.C. Luo, C.C. Chang, C.C. Lai, Multisensor fusion and integration: theories, applications, and its perspectives, IEEE Sens. J. 11 (12) (2011) 3122–3138.
[38] Y. Xia, H. Leung, Performance analysis of statistical optimal data fusion algorithms, Inf. Sci. 277 (2014) 808–824.
[39] F. Li, Delayed Lagrangian neural networks for solving convex programming problems, Neurocomputing 73 (10) (2010) 2266–2273.
[40] Y. Xia, H. Leung, A fast learning algorithm for blind data fusion using a novel L₂-norm estimation, IEEE Sens. J. 14 (3) (2014) 666–672.