An artificial neural network for solving quadratic zero-one programming problems

M. Ranjbar, S. Effati, S.M. Miri

Author's Accepted Manuscript, to appear in: Neurocomputing
www.elsevier.com/locate/neucom

PII: S0925-2312(17)30028-0
DOI: http://dx.doi.org/10.1016/j.neucom.2016.12.064
Reference: NEUCOM17906

Received date: 18 May 2016
Revised date: 27 September 2016
Accepted date: 29 December 2016

Cite this article as: M. Ranjbar, S. Effati and S.M. Miri, An artificial neural network for solving quadratic zero-one programming problems, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2016.12.064

An artificial neural network for solving quadratic zero-one programming problems

M. Ranjbar¹, S. Effati², S. M. Miri³

Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran.

² Corresponding author: S. Effati.

Abstract

This paper presents an artificial neural network for solving quadratic zero-one programming problems under linear constraints. By using the connection between integer and nonlinear programming, the quadratic zero-one programming problem is first transformed into a quadratic programming problem with nonlinear constraints. Then, by using a nonlinear complementarity problem (NCP) function and the penalty method, this problem is transformed into an unconstrained optimization problem. It is shown that the Hessian matrix of the associated function in the unconstrained optimization problem is positive definite at the optimal point. An artificial neural network is used to solve the unconstrained optimization problem. The proposed neural network has a simple structure and a low implementation complexity. It is shown that the proposed artificial neural network is stable in the sense of Lyapunov. Finally, some numerical examples are given to show that the proposed model finds the optimal solution of this problem with low convergence time.

Keywords: Neural networks, Quadratic zero-one programming, Nonlinear complementarity problem function.

1. Introduction

The quadratic zero-one programming problem is considered to be one of the basic programming problems, widely encountered in science, engineering and management. It can be stated as follows:

Minimize   $g(x) = \frac{1}{2} x^T Q x + c^T x$
subject to $Ax = a$,
           $x \in \{0, 1\}^n$,          (1)

where $c$ is an $n$-dimensional real vector, $a$ is an $m$-dimensional real vector, $Q$ is a symmetric positive semi-definite $n \times n$ real matrix and $A$ is an $m \times n$ real matrix. Quadratic zero-one programming with linear constraints is a general model that allows one to formulate numerous important problems in combinatorial optimization, including, for example, quadratic assignment [12], graph partitioning [27], location decision [4, 18], and other applications. Various heuristics and exact methods have been proposed to solve quadratic zero-one programming problems. Many methods for solving such problems have been of the branch-and-bound type [22]. Several methods have been developed to solve the problem exactly through 0-1 linear reformulation [1, 15, 26] or 0-1 convex quadratic reformulation [19, 5]. Aourid et al. [2, 3] proposed neural networks for solving linear and quadratic 0-1 programming problems. In general, this problem is known to be NP-complete, and traditional algorithms for digital computers may not be efficient, since the computing time required for a solution depends heavily on the dimension and the structure of the problem.


One possible and very promising approach to real-time optimization is to apply artificial neural networks. Because of their inherent massive parallelism, the neural network approach can solve optimization problems in running times orders of magnitude faster than the most popular optimization algorithms executed on general-purpose digital computers [9].

In this paper, we propose an artificial neural network to solve the quadratic zero-one programming problem. The motivation for using an artificial neural network model for solving this problem is its inherent simplicity of implementation and its parallel processing nature. Neural networks are computing systems composed of a number of highly interconnected simple information processing units, and thus can usually solve optimization problems faster than most popular optimization algorithms. Also, numerical ordinary differential equation techniques can be applied directly to the continuous-time neural network for solving constrained optimization problems effectively.

Tank and Hopfield [29] first proposed a neural network for linear programming in 1986. Kennedy and Chua [17] then extended the Tank-Hopfield network by developing a neural network with a finite penalty parameter for solving nonlinear programming problems. Rodriguez-Vazquez et al. [24] proposed a switched-capacitor neural network for solving a class of constrained nonlinear convex optimization problems. Wang, Xia et al. [31, 32, 33, 34, 35] presented several neural network models for solving linear and nonlinear optimization problems, monotone variational inequality problems and monotone projection equations. Effati et al. [8, 9, 10, 11] proposed neural network models for solving linear, nonlinear and bilinear programming problems, and Nazemi [21] proposed a dynamic system model for solving convex nonlinear optimization problems. A neural network for solving NCPs based on the generalized Fischer-Burmeister function is developed in [6]. A popular NCP function is the Fischer-Burmeister function [13, 14]. In [7], a family of NCP functions that subsumes the Fischer-Burmeister function as a special case is proposed. A common way to solve a constrained optimization problem is to convert it into an associated unconstrained problem and apply gradient methods. Gradient descent is commonly used in algorithms to find the minimum of unconstrained functions [23, 25]. In this paper, we reformulate the quadratic zero-one programming problem as an unconstrained nonlinear programming problem by using an NCP function and a penalty function, and then introduce an artificial neural network that can solve this problem.

The rest of the paper is organized as follows. In Section 2 some preliminary results are provided. In Section 3 the problem formulation and the neural network model are proposed. In Section 4 convergence and stability results are discussed. In Section 5 simulation results of the new method on several numerical examples are reported. Finally, some concluding remarks are drawn in Section 6.

2. Preliminaries

This section provides the necessary mathematical background, which is used to propose the desired neural network and to study its stability.

Definition 2.1. A function $F : \mathbb{R}^m \to \mathbb{R}^m$ is said to be Lipschitz continuous with constant $L$ on a set $E \subset \mathbb{R}^m$ if for each pair of points $x, y \in E$,
$$\|F(x) - F(y)\| \le L \|x - y\|,$$
where $\|\cdot\|$ denotes the $l_2$-norm on $\mathbb{R}^m$.

Definition 2.2. A function $\varphi : \mathbb{R}^2 \to \mathbb{R}$ is called an NCP-function if it satisfies
$$\varphi(u, v) = 0 \iff u \ge 0, \; v \ge 0, \; uv = 0. \tag{2}$$

Definition 2.3. The Fischer-Burmeister function $\varphi : \mathbb{R}^2 \to \mathbb{R}$ is defined as
$$\varphi(u, v) = \sqrt{u^2 + v^2} - (u + v). \tag{3}$$

This function was used by Fischer [13] to construct a Newton-type method for constrained optimization problems.

Definition 2.4. We define $\Psi : \mathbb{R}^{2n} \to \mathbb{R}$ by
$$\Psi(z) = \sum_{i=1}^{n} \psi(u_i, v_i), \tag{4}$$
where $\psi(u_i, v_i) = \frac{1}{2}(\varphi(u_i, v_i))^2$ and $z = (u_1, v_1, \ldots, u_n, v_n)$. The function $\psi$ is continuously differentiable and has some favorable properties, summarized in the following lemma.

Lemma 2.1. Let $\psi(u, v) = \frac{1}{2}(\varphi(u, v))^2$. Then
i) $\psi(u, v) \ge 0$ for all $u, v \in \mathbb{R}$ and $\psi$ is an NCP-function.
ii) $\psi$ is continuously differentiable everywhere. Moreover, $\nabla_u \psi(u, v) = \nabla_v \psi(u, v) = 0$ if $(u, v) = (0, 0)$; otherwise,
$$\nabla_u \psi(u, v) = \Big(\frac{u}{\sqrt{u^2 + v^2}} - 1\Big)\varphi(u, v), \qquad \nabla_v \psi(u, v) = \Big(\frac{v}{\sqrt{u^2 + v^2}} - 1\Big)\varphi(u, v).$$
iii) The gradient of $\psi$ is Lipschitz continuous, i.e., there exists $L_1 > 0$ such that $\|\nabla\psi(u, v) - \nabla\psi(s, t)\| \le L_1 \|(u, v) - (s, t)\|$ for all $(u, v), (s, t) \in \mathbb{R}^2$.

Proof: See [7].

Theorem 2.1. Let $z = (u_1, v_1, \ldots, u_n, v_n)$. Then $\Psi(z) \ge 0$ for all $z \in \mathbb{R}^{2n}$, and $\Psi(z) = 0$ if and only if $z \ge 0$ and $u_i v_i = 0$ for all $i = 1, 2, \ldots, n$.

Proof: The results follow directly from Lemma 2.1.

Corollary 2.1. The gradient of $\Psi$ is Lipschitz continuous, i.e., there exists $L_2 > 0$ such that $\|\nabla\Psi(z_1) - \nabla\Psi(z_2)\| \le L_2 \|z_1 - z_2\|$ for all $z_1, z_2 \in \mathbb{R}^{2n}$.

Proof. The proof follows from Definition 2.4 and part (iii) of Lemma 2.1.
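A minimal numerical sketch of $\varphi$, $\psi$ and $\nabla\psi$ from Definitions 2.3-2.4 and Lemma 2.1 is given below (the function names are illustrative choices, not from the paper, and the snippet is only an aid for reading the formulas):

```python
import numpy as np

def phi_fb(u, v):
    """Fischer-Burmeister NCP-function (3): phi(u, v) = sqrt(u^2 + v^2) - (u + v)."""
    return np.sqrt(u**2 + v**2) - (u + v)

def psi(u, v):
    """psi(u, v) = 0.5 * phi(u, v)^2, the smooth merit term used in (4)."""
    return 0.5 * phi_fb(u, v) ** 2

def grad_psi(u, v):
    """Gradient of psi from Lemma 2.1(ii); equal to (0, 0) at the origin."""
    r = np.hypot(u, v)
    if r == 0.0:
        return 0.0, 0.0
    f = phi_fb(u, v)
    return (u / r - 1.0) * f, (v / r - 1.0) * f

# psi vanishes exactly on the complementarity set u, v >= 0, u*v = 0.
assert abs(psi(0.5, 0.0)) < 1e-15 and psi(0.5, 0.2) > 0
```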

We also recall some material about first-order differential equations and Lyapunov functions; this material can be found in ordinary differential equation and nonlinear control textbooks; see [20, 28].

Definition 2.5. Consider the dynamical system
$$\dot{x} = f(x(t)), \qquad x(t_0) = x_0 \in \mathbb{R}^n.$$
$x^*$ is called an equilibrium point of this system if $f(x^*) = 0$. If there is a neighborhood $K \subseteq \mathbb{R}^n$ of $x^*$ such that $f(x^*) = 0$ and $f(x) \ne 0$ for all $x \in K \setminus \{x^*\}$, then $x^*$ is called an isolated equilibrium point.

Definition 2.6. Let $K \subseteq \mathbb{R}^n$ be an open neighborhood of $x^*$. A continuously differentiable function $E : \mathbb{R}^n \to \mathbb{R}$ is said to be a Lyapunov function at the state $x^*$ over the set $K$ for $\dot{x} = f(x(t))$ if
i) $E(x^*) = 0$ and $E(x) > 0$ for all $x \in K$, $x \ne x^*$;
ii) $\dfrac{dE}{dt} = (\nabla_x E(x))^T \dot{x} = (\nabla_x E(x))^T f(x(t)) \le 0$.

Lemma 2.2.
i) An isolated equilibrium point $x^*$ is Lyapunov stable if there exists a Lyapunov function over some neighborhood $K$ of $x^*$.
ii) An isolated equilibrium point $x^*$ is asymptotically stable if there is a Lyapunov function over some neighborhood $K$ of $x^*$ such that $\frac{dE}{dt} < 0$ for all $x \in K$, $x \ne x^*$.

3. Problem formulation and artificial neural network model

Consider the following quadratic zero-one programming problem:

Minimize   $g(x) = \frac{1}{2} x^T Q x + c^T x$          (5)
subject to $Ax = a$,          (6)
           $x = (x_1, x_2, \ldots, x_n)^T \in \{0, 1\}^n$.          (7)

For this problem we consider the following equations:
$$\Big| x_i - \frac{1}{2} \Big| = \frac{1}{2}, \qquad i = 1, 2, \ldots, n, \tag{8}$$
which are equivalent to condition (7) (see Fig. 1).

Figure 1: Plot of equation (8).

We write $x_i - \frac{1}{2} = u_i - v_i$, where $u_i, v_i \ge 0$ and $u_i v_i = 0$ for $i = 1, 2, \ldots, n$. Thus we have $|x_i - \frac{1}{2}| = u_i + v_i$, where $u_i v_i = 0$ and $u_i, v_i \ge 0$ for $i = 1, 2, \ldots, n$. Then problem (5)-(7) is equivalent to the following problem:

Minimize   $g(x) = \frac{1}{2} x^T Q x + c^T x$
subject to $Ax = a$,
           $u_i + v_i = \frac{1}{2}$,
           $x_i - u_i + v_i = \frac{1}{2}$,
           $u_i v_i = 0$,
           $u_i, v_i \ge 0, \qquad i = 1, 2, \ldots, n$.          (9)
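For instance, under the constraints of (9) the binary values of $x_i$ correspond to exactly two admissible pairs $(u_i, v_i)$: the conditions $u_i + v_i = \frac{1}{2}$, $u_i v_i = 0$ and $u_i, v_i \ge 0$ force either $(u_i, v_i) = (\frac{1}{2}, 0)$ or $(u_i, v_i) = (0, \frac{1}{2})$, and $x_i = \frac{1}{2} + u_i - v_i$ then gives $x_i = 1$ in the first case and $x_i = 0$ in the second. This is the pattern that reappears in the $u_i^*, v_i^*$ columns of Tables 1-3.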

By using (4), problem (9) is equivalent to the following problem:

minimize   $h(w) = \frac{1}{2} w^T H w + d^T w + \lambda_1 \Psi(w)$
subject to $Bw = b$,          (10)

where $w = [x_1, \ldots, x_n, u_1, v_1, \ldots, u_n, v_n]^T$ and
$$d = \begin{bmatrix} c_1 \\ \vdots \\ c_n \\ 0 \\ \vdots \\ 0 \end{bmatrix}_{3n \times 1}, \qquad
b = \begin{bmatrix} a_1 \\ \vdots \\ a_m \\ \tfrac{1}{2} \\ \vdots \\ \tfrac{1}{2} \end{bmatrix}_{(2n+m) \times 1}, \qquad
H = \begin{bmatrix} Q_{n \times n} & 0_{n \times 2n} \\ 0_{2n \times n} & 0_{2n \times 2n} \end{bmatrix}_{3n \times 3n}, \qquad
B = \begin{bmatrix} A_{m \times n} & 0_{m \times 2n} \\ 0_{n \times n} & C_{n \times 2n} \\ I_{n \times n} & D_{n \times 2n} \end{bmatrix}_{(2n+m) \times 3n},$$
with
$$C = \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \end{bmatrix}_{n \times 2n}, \qquad
D = \begin{bmatrix} -1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -1 & 1 \end{bmatrix}_{n \times 2n}.$$
Here $\lambda_1$ is a large positive number and $\Psi(w) = \sum_{i=1}^{n} \psi(u_i, v_i)$.
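The block structure above translates directly into code. The following sketch assembles $H$, $d$, $B$, $b$ from the problem data $(Q, c, A, a)$; the function and variable names are illustrative choices, not taken from the paper:

```python
import numpy as np

def build_blocks(Q, c, A, a):
    """Assemble H, d, B, b of problem (10) for w = [x; u1, v1, ..., un, vn]."""
    Q, c = np.asarray(Q, float), np.asarray(c, float)
    A, a = np.atleast_2d(np.asarray(A, float)), np.atleast_1d(np.asarray(a, float))
    n, m = len(c), len(a)

    H = np.zeros((3 * n, 3 * n)); H[:n, :n] = Q
    d = np.concatenate([c, np.zeros(2 * n)])

    C = np.kron(np.eye(n), [[1.0, 1.0]])    # row i picks u_i + v_i
    D = np.kron(np.eye(n), [[-1.0, 1.0]])   # row i picks -u_i + v_i

    B = np.zeros((2 * n + m, 3 * n))
    B[:m, :n] = A
    B[m:m + n, n:] = C
    B[m + n:, :n] = np.eye(n)
    B[m + n:, n:] = D

    b = np.concatenate([a, 0.5 * np.ones(2 * n)])
    return H, d, B, b
```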

The mapping of a constrained optimization problem into an appropriate energy cost function is a commonly applied approach to the design of neural network models. Accordingly, we transform problem (10) into an unconstrained problem as follows:

minimize   $f(w) = \frac{1}{2} w^T H w + d^T w + \lambda_1 \Psi(w) + \frac{\lambda_2}{2} \|Bw - b\|^2$,          (11)

where the coefficients $\lambda_1$ and $\lambda_2$ are large positive numbers. The necessary condition for optimality of (11) is $\nabla f(w) = 0$, i.e.,
$$\nabla f(w) = Hw + d + \lambda_1 \nabla\Psi(w) + \lambda_2 B^T (Bw - b) = 0,$$
where $\nabla\Psi(w) = [0, \ldots, 0, \nabla_{u_1}\psi(u_1, v_1), \nabla_{v_1}\psi(u_1, v_1), \ldots, \nabla_{u_n}\psi(u_n, v_n), \nabla_{v_n}\psi(u_n, v_n)]^T \in \mathbb{R}^{3n}$ with
$$\nabla_{u_i}\psi(u_i, v_i) = \Big(\frac{u_i}{\sqrt{u_i^2 + v_i^2}} - 1\Big)\varphi(u_i, v_i), \qquad \nabla_{v_i}\psi(u_i, v_i) = \Big(\frac{v_i}{\sqrt{u_i^2 + v_i^2}} - 1\Big)\varphi(u_i, v_i) \tag{12}$$

for $(u_i, v_i) \ne (0, 0)$. Moreover,
$$\nabla_{u_i}\psi(u_i, v_i) = \nabla_{v_i}\psi(u_i, v_i) = 0$$
if $(u_i, v_i) = (0, 0)$.

The artificial neural network model, as a dynamic system model for the quadratic zero-one problem, can be described according to problem (11) by the following nonlinear dynamical system:
$$\frac{dw}{dt} = -\eta \nabla f(w), \tag{13}$$
where $\eta > 0$ is a scaling factor.

Lemma 3.1. $\nabla f(w)$ is Lipschitz continuous with constant $L = \|H\| + \lambda_1 L_2 + \lambda_2 \|B\|^2$ on $\mathbb{R}^{3n}$.

Proof. For any $w_1, w_2 \in \mathbb{R}^{3n}$ we have
$$\|\nabla f(w_1) - \nabla f(w_2)\| = \|Hw_1 + d + \lambda_1 \nabla\Psi(w_1) + \lambda_2 B^T(Bw_1 - b) - Hw_2 - d - \lambda_1 \nabla\Psi(w_2) - \lambda_2 B^T(Bw_2 - b)\|$$
$$\le \|Hw_1 - Hw_2\| + \lambda_1 \|\nabla\Psi(w_1) - \nabla\Psi(w_2)\| + \lambda_2 \|B^T B(w_1 - w_2)\| \le \big(\|H\| + \lambda_1 L_2 + \lambda_2 \|B\|^2\big)\|w_1 - w_2\|. \qquad \Box$$

Lemma 3.2. Assume that $f : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous mapping. Then, for arbitrary $t_0 \ge 0$ and $x_0 \in \mathbb{R}^n$ there is a local solution $x(t)$ of $\dot{x} = f(x(t))$, $x(t_0) = x_0 \in \mathbb{R}^n$, for $t \in [t_0, \tau)$ and some $\tau \ge t_0$. Furthermore, if $f$ is locally Lipschitz continuous at $x_0$ then the solution is unique, and if $f$ is Lipschitz continuous on $\mathbb{R}^n$ then $\tau$ can be extended to $\infty$.

Proof. See [20].

Theorem 3.1. For any initial state $w_0 = w(t_0)$, there exists exactly one solution $w(t)$, $t \in [t_0, \infty)$, of the neural network (13).

Proof. Since $\nabla f(w)$ is continuous, there exists a local solution $w(t)$ of (13) for $t \in [t_0, \tau)$ and some $\tau \ge t_0$. By Lemma 3.1, $\nabla f(w)$ is Lipschitz continuous, so by Lemma 3.2 we have $\tau(w_0) = \infty$. $\Box$
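Given Theorem 3.1, trajectories of (13) can be computed with any standard ODE integrator. A minimal sketch is given below, reusing grad_psi and build_blocks from the earlier sketches; the function names, tolerances and time horizon are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_Psi(w, n):
    """Gradient of Psi(w): zero on the x-block, grad_psi on each (u_i, v_i) pair."""
    g = np.zeros_like(w)
    for i in range(n):
        g[n + 2*i], g[n + 2*i + 1] = grad_psi(w[n + 2*i], w[n + 2*i + 1])
    return g

def grad_f(w, H, d, B, b, n, lam1, lam2):
    """Gradient of the penalized objective f(w) in (11)."""
    return H @ w + d + lam1 * grad_Psi(w, n) + lam2 * (B.T @ (B @ w - b))

def run_network(w0, H, d, B, b, n, lam1=1e6, lam2=1e6, eta=1.0, t_final=1e-4):
    """Integrate the network dynamics dw/dt = -eta * grad_f(w) of (13).

    A stiff method (BDF) is chosen because the large penalty weights make the
    system stiff; this mirrors the role of MATLAB's ode15s cited in Remark 4.1.
    """
    rhs = lambda t, w: -eta * grad_f(w, H, d, B, b, n, lam1, lam2)
    sol = solve_ivp(rhs, (0.0, t_final), np.asarray(w0, float),
                    method="BDF", rtol=1e-8, atol=1e-10)
    return sol.t, sol.y
```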

4. Stability analysis

In this section we study the stability of the equilibrium point and the convergence to the optimal solution of the neural network (13). We first provide the necessary mathematical background and then analyze the stability of the proposed neural network for solving problem (1). If $f$ is a twice continuously differentiable (not necessarily convex) function, then the second-order sufficient conditions for a local minimum are as follows.

Lemma 4.1. Let $f$ be a twice continuously differentiable function and let $\nabla^2 f(x)$ denote the Hessian matrix of $f$ at the point $x \in \mathbb{R}^n$. If $\nabla f(x) = 0$ and $\nabla^2 f(x)$ is positive definite, then the point $x$ is a strict local minimum of the function $f$.

Lemma 4.2. If $w^* \in \mathbb{R}^{3n}$ is an optimal solution of problem (11), then the Hessian matrix $\nabla^2 \Psi(w^*)$ is positive semi-definite.

Proof. Since $\Psi(w) = \sum_{i=1}^{n} \psi(u_i, v_i) = \sum_{i=1}^{n} \frac{1}{2}\big(\sqrt{u_i^2 + v_i^2} - (u_i + v_i)\big)^2$, we have
$$\frac{\partial \Psi(w)}{\partial u_i} = \Big(\frac{u_i}{\sqrt{u_i^2 + v_i^2}} - 1\Big)\big(\sqrt{u_i^2 + v_i^2} - (u_i + v_i)\big), \qquad
\frac{\partial \Psi(w)}{\partial v_i} = \Big(\frac{v_i}{\sqrt{u_i^2 + v_i^2}} - 1\Big)\big(\sqrt{u_i^2 + v_i^2} - (u_i + v_i)\big), \qquad
\frac{\partial \Psi(w)}{\partial x_i} = 0.$$
Therefore, we get
$$\frac{\partial^2 \Psi(w^*)}{\partial u_i \partial u_j} = \begin{cases} 0 & \text{if } i \ne j, \\ 0 & \text{if } i = j,\ u_i^* = \tfrac{1}{2},\ v_i^* = 0, \\ 1 & \text{if } i = j,\ u_i^* = 0,\ v_i^* = \tfrac{1}{2}, \end{cases} \qquad
\frac{\partial^2 \Psi(w^*)}{\partial v_i \partial v_j} = \begin{cases} 0 & \text{if } i \ne j, \\ 1 & \text{if } i = j,\ u_i^* = \tfrac{1}{2},\ v_i^* = 0, \\ 0 & \text{if } i = j,\ u_i^* = 0,\ v_i^* = \tfrac{1}{2}, \end{cases} \qquad
\frac{\partial^2 \Psi(w^*)}{\partial u_i \partial v_j} = 0.$$
Since the function $\Psi(w)$ is independent of $x_i$ for $i = 1, 2, \ldots, n$, $\nabla^2 \Psi(w^*)$ is a diagonal matrix whose diagonal entries are 0 or 1; hence the Hessian matrix $\nabla^2 \Psi(w^*)$ is positive semi-definite. $\Box$

Lemma 4.3. Let $P \in \mathbb{R}^{n \times n}$ be a symmetric and positive semi-definite matrix and $x \in \mathbb{R}^n$. Then $Px = 0$ if and only if $x^T P x = 0$.

Proof. If $Px = 0$, it is clear that $x^T P x = 0$. To prove the converse, since $P$ is positive semi-definite it can be written as $P = U^T U$, where $U \in \mathbb{R}^{n \times n}$ is an upper triangular matrix (see [30]). So $x^T P x = 0$ implies $(Ux)^T(Ux) = 0$, i.e., $\|Ux\|_2^2 = 0$. Therefore $Ux = 0$, and thus $U^T U x = 0$, i.e., $Px = 0$. $\Box$

Proposition 4.1. $\nabla^2 f(w^*)$ is positive definite.

Proof. By using (12), we have

$$\nabla^2 f(w^*) = H + \lambda_1 \nabla^2 \Psi(w^*) + \lambda_2 B^T B.$$
By Lemma 4.2 and the primal assumption, $\nabla^2 \Psi(w^*)$ and $H$ are positive semi-definite; $B^T B$ is also positive semi-definite and $\lambda_1, \lambda_2 > 0$, so $\nabla^2 f(w^*)$ is positive semi-definite. Since every positive semi-definite and nonsingular matrix is positive definite, it is sufficient to show that $\nabla^2 f(w^*)$ is nonsingular. For this purpose, let $\nabla^2 f(w^*) w = 0$, where $w = [x_1, \ldots, x_n, u_1, v_1, \ldots, u_n, v_n]^T$. By Lemma 4.3 we have $w^T \nabla^2 f(w^*) w = 0$, i.e., $w^T (H + \lambda_1 \nabla^2 \Psi(w^*) + \lambda_2 B^T B) w = 0$. Since $H$, $\nabla^2 \Psi(w^*)$ and $B^T B$ are positive semi-definite, we get $w^T H w = 0$, $w^T \nabla^2 \Psi(w^*) w = 0$ and $w^T B^T B w = 0$. By Lemma 4.3 we then have
$$Hw = 0, \tag{14}$$
$$\nabla^2 \Psi(w^*)\, w = 0, \tag{15}$$
$$B^T B\, w = 0. \tag{16}$$
Also, by Lemma 4.2 we know that $\nabla^2 \Psi(w^*)$ is a diagonal matrix, so by (15), for $i = 1, 2, \ldots, n$ we have
$$\frac{\partial^2 \Psi(w^*)}{\partial u_i^2}\, u_i = 0, \qquad \frac{\partial^2 \Psi(w^*)}{\partial v_i^2}\, v_i = 0, \tag{17}$$
and also
$$\frac{\partial^2 \Psi(w^*)}{\partial u_i^2} = 1 \quad \text{if } u_i^* = 0,\ v_i^* = \tfrac{1}{2}, \qquad
\frac{\partial^2 \Psi(w^*)}{\partial v_i^2} = 1 \quad \text{if } u_i^* = \tfrac{1}{2},\ v_i^* = 0. \tag{18}$$
Then we obtain from (17) and (18)
$$u_i = 0 \quad \text{if } u_i^* = 0,\ v_i^* = \tfrac{1}{2}, \qquad v_i = 0 \quad \text{if } u_i^* = \tfrac{1}{2},\ v_i^* = 0.$$
Consequently, for $i = 1, 2, \ldots, n$, we have
$$u_i v_i = 0. \tag{19}$$
Since $B^T B$ is positive semi-definite, (16) gives $Bw = 0$, and by the definition of the matrix $B$ in (10) this implies
$$[A \;\; 0]\, w = 0, \tag{20}$$
$$[0 \;\; C]\, w = 0, \tag{21}$$
$$[I \;\; D]\, w = 0. \tag{22}$$
Also, by the definition of the matrix $C$ and (21), for $i = 1, 2, \ldots, n$ we get
$$u_i + v_i = 0. \tag{23}$$
Hence, by (19) and (23), we have
$$u_i = v_i = 0, \qquad i = 1, 2, \ldots, n, \tag{24}$$
which implies $w = [x_1, \ldots, x_n, 0, 0, \ldots, 0, 0]^T$. Consequently, we obtain from (22)
$$x_i = 0, \qquad i = 1, 2, \ldots, n. \tag{25}$$

From (24) and (25) we see that $w = 0$, which completes the proof. $\Box$

Lemma 4.4. $w^* \in W_f = \{w \mid w \text{ is a local solution of (11)}\}$ if and only if $w^*$ is an equilibrium point of (13).

Proof. First, let $w^* \in W_f$. The necessary condition for optimality is $\nabla f(w^*) = 0$; then, by (13), $\frac{dw^*}{dt} = 0$, and Definition 2.5 shows that $w^*$ is an equilibrium point of (13). Next, assume that $w^*$ is an equilibrium point of (13), i.e.,
$$\frac{dw^*}{dt} = -\eta \nabla f(w^*) = 0;$$
therefore, by Proposition 4.1 and Lemma 4.1, we get $w^* \in W_f$. $\Box$

Lemma 4.5. Let $w^* = (x^*, z^*)$, where $z^* = (u_1^*, v_1^*, \ldots, u_n^*, v_n^*)$, be an optimal solution of problem (11) and let $K \subseteq \mathbb{R}^{3n}$ be a neighborhood of $w^*$. Then
$$\Big(\frac{1}{2} w^T H w + d^T w\Big) - \Big(\frac{1}{2} w^{*T} H w^* + d^T w^*\Big) > 0$$
for all $w \in K \setminus \{w^*\}$.

Proof. Since the elements of the matrix $H$ and of the vector $d$ corresponding to the variables of $z$ are equal to zero and $x^*$ is an optimal solution of (1), the proof is complete. $\Box$

Theorem 4.1. Let $w^*$ be an isolated equilibrium point of the neural network (13). Then $w^*$ is Lyapunov stable and asymptotically stable for (13).

Proof. First, by Lemma 4.4 we know that $w^*$ is an isolated equilibrium point of (13), so there is a neighborhood $K \subseteq \mathbb{R}^{3n}$ of $w^*$ such that
$$Hw^* + d + \lambda_1 \nabla\Psi(w^*) + \lambda_2 B^T(Bw^* - b) = 0, \qquad Hw + d + \lambda_1 \nabla\Psi(w) + \lambda_2 B^T(Bw - b) \ne 0 \quad \forall w \in K \setminus \{w^*\}.$$
We consider the following energy function:
$$E(w) = \frac{1}{2} w^T H w + d^T w + \lambda_1 \Psi(w) + \frac{\lambda_2}{2}\|Bw - b\|^2 - \frac{1}{2} w^{*T} H w^* - d^T w^*. \tag{26}$$
We show that $E(w)$ is a Lyapunov function for system (13). Note that $E(w)$ equals zero at the equilibrium point $w^*$; also, by Theorem 2.1 and Lemma 4.5, for all $w \in K \setminus \{w^*\}$ we get $E(w) > 0$, so $E(w)$ is positive definite on $K$. It remains to show that $\frac{dE(w(t))}{dt} < 0$ for all $w \in K \setminus \{w^*\}$. Taking the derivative of $E(w)$ with respect to time $t$, since $w^*$ is an isolated equilibrium point, for all $w(t) \in K \setminus \{w^*\}$ we have
$$\frac{dE(w)}{dt} = \frac{dE(w)}{dw} \cdot \frac{dw}{dt} = \Big(-\frac{1}{\eta}\frac{dw}{dt}\Big)^T \frac{dw}{dt} = -\frac{1}{\eta}\Big\|\frac{dw}{dt}\Big\|^2 < 0.$$
Thus, according to part (ii) of Lemma 2.2, $w^*$ is asymptotically stable, and the neural network (13) converges to a stable state in a neighborhood $K \subseteq \mathbb{R}^{3n}$ of $w^*$. $\Box$

Remark 4.1. In the next section, the numerical implementation is coded in MATLAB 2013 (R2013a) and the ordinary differential equation solver adopted is ode15s.

Remark 4.2. In the quadratic zero-one programming problem, if $Q = 0$, then problem (1) reduces to a zero-one linear programming problem, so the proposed artificial neural network can also be used to solve this problem.
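A quick numerical counterpart of Theorem 4.1 is to evaluate the energy function (26) along a computed trajectory and check that it is nonincreasing. The sketch below reuses psi and run_network from the earlier sketches; the tolerance in the check and the choice of the trajectory's final state as a stand-in for $w^*$ are assumptions for illustration only:

```python
import numpy as np

def energy(w, w_star, H, d, B, b, n, lam1, lam2):
    """Energy function E(w) of (26)."""
    Psi = sum(psi(w[n + 2*i], w[n + 2*i + 1]) for i in range(n))
    r = B @ w - b
    return (0.5 * w @ H @ w + d @ w + lam1 * Psi + 0.5 * lam2 * (r @ r)
            - 0.5 * w_star @ H @ w_star - d @ w_star)

# t, W = run_network(w0, H, d, B, b, n)        # columns of W are states along the trajectory
# E_vals = [energy(W[:, k], W[:, -1], H, d, B, b, n, 1e6, 1e6) for k in range(W.shape[1])]
# assert all(e2 <= e1 + 1e-6 for e1, e2 in zip(E_vals, E_vals[1:]))   # E is nonincreasing
```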

5. Simulation results

In order to demonstrate the effectiveness and performance of the proposed artificial neural network model, we discuss three illustrative examples. In all the examples we let $\eta = 1$ and $\lambda_1 = \lambda_2 = 10^6$. The stopping criterion in all the examples is $\|w(t_f) - w^*\| \le 10^{-4}$, where $t_f$ represents the final time at which the stopping criterion is met.

Example 5.1: Consider the following zero-one quadratic programming problem under linear constraints:

Minimize   $-9x_1 - 7x_2 + 2x_3 + 23x_4 + 12x_5 - 48x_1x_2 + 4x_1x_3 + 36x_1x_4 - 24x_1x_5 - 7x_2x_3 + 36x_2x_4 - 84x_2x_5 + 40x_3x_4 + 4x_3x_5 - 88x_4x_5$
subject to $x_1 + x_2 + x_4 + x_5 = 2$, $x_i \in \{0, 1\}$, $i = 1, 2, \ldots, 5$.

$x^* = (0, 1, 1, 0, 1)^T$ is an optimal solution of this problem (see [5]). By using (11), we have:

Minimize   $f(w) = \frac{1}{2} w^T H w + d^T w + \lambda_1 \Psi(w) + \frac{\lambda_2}{2}\|Bw - b\|^2$,

where $d \in \mathbb{R}^{15}$, $w \in \mathbb{R}^{15}$, $b \in \mathbb{R}^{11}$, $H$ is a $15 \times 15$ real matrix, $B$ is an $11 \times 15$ real matrix and
$$\Psi(w) = \frac{1}{2}\big(\sqrt{u_1^2 + v_1^2} - (u_1 + v_1)\big)^2 + \cdots + \frac{1}{2}\big(\sqrt{u_5^2 + v_5^2} - (u_5 + v_5)\big)^2.$$
We use the proposed artificial neural network (13) to solve this problem with initial points $w_i = 0.5$ for $i = 1, 2, \ldots, 5$ and $w_j = 0.25$ for $j = 6, 7, \ldots, 15$. The optimal solutions of Example 5.1 are shown in Table 1, with final time $t_f = 3 \times 10^{-5}$.

Table 1. Optimal solutions of the zero-one quadratic problem in Example 5.1 by the proposed artificial neural network (13).

i     x_i^*      u_i^*      v_i^*
1     0.000      0.0000     0.5000
2     1.0000     0.5000     0.0000
3     1.0001     0.5000     0.0000
4     0.0000     0.0000     0.5000
5     1.0000     0.5000     0.0000
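As an independent sanity check (not part of the proposed method), the reported optimum of Example 5.1 can be confirmed by enumerating the $2^5$ binary vectors that satisfy the linear constraint:

```python
import itertools

# Objective of Example 5.1, written directly from the stated coefficients.
def g(x):
    x1, x2, x3, x4, x5 = x
    return (-9*x1 - 7*x2 + 2*x3 + 23*x4 + 12*x5
            - 48*x1*x2 + 4*x1*x3 + 36*x1*x4 - 24*x1*x5
            - 7*x2*x3 + 36*x2*x4 - 84*x2*x5
            + 40*x3*x4 + 4*x3*x5 - 88*x4*x5)

best = min((x for x in itertools.product((0, 1), repeat=5)
            if x[0] + x[1] + x[3] + x[4] == 2), key=g)
print(best, g(best))   # expected minimizer: (0, 1, 1, 0, 1)
```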


Figure 2: Transient behavior of the continuous-time neural network in Example 5.1.

The solutions in Table 1 correspond to the optimal solution of this problem. Fig. 2 displays the transient behavior of the continuous-time neural network in Example 5.1.

Example 5.2: The following zero-one quadratic programming problem models a quadratic assignment problem (QAP):

Minimize   $\frac{1}{2} x^T Q x$
subject to $x_1 + x_2 + x_3 = 1$, $x_4 + x_5 + x_6 = 1$, $x_7 + x_8 + x_9 = 1$,
           $x_1 + x_4 + x_7 = 1$, $x_2 + x_5 + x_8 = 1$, $x_3 + x_6 + x_9 = 1$,
           $x_i \in \{0, 1\}$, $i = 1, 2, \ldots, 9$,

with the following $Q$-matrix:
$$Q = \begin{bmatrix}
175 & 4 & 11 & 10 & 9 & 27 & 18 & 17 & 49 \\
4 & 175 & 13 & 11 & 20 & 32 & 19 & 36 & 58 \\
11 & 13 & 175 & 28 & 33 & 10 & 50 & 59 & 18 \\
10 & 11 & 28 & 178 & 16 & 44 & 22 & 21 & 60 \\
9 & 20 & 33 & 16 & 185 & 52 & 23 & 44 & 71 \\
27 & 32 & 10 & 44 & 52 & 179 & 61 & 72 & 22 \\
18 & 19 & 50 & 22 & 23 & 61 & 174 & 4 & 11 \\
17 & 36 & 59 & 21 & 44 & 72 & 4 & 177 & 13 \\
49 & 58 & 18 & 60 & 71 & 22 & 11 & 13 & 174
\end{bmatrix}.$$
Then, by using (11), we have:

Minimize   $f(w) = \frac{1}{2} w^T H w + \lambda_1 \Psi(w) + \frac{\lambda_2}{2}\|Bw - b\|^2$,

where $w \in \mathbb{R}^{27}$, $b \in \mathbb{R}^{24}$, $H$ is a $27 \times 27$ real matrix, $B$ is a $24 \times 27$ real matrix and
$$\Psi(w) = \frac{1}{2}\big(\sqrt{u_1^2 + v_1^2} - (u_1 + v_1)\big)^2 + \cdots + \frac{1}{2}\big(\sqrt{u_9^2 + v_9^2} - (u_9 + v_9)\big)^2.$$
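A small sketch that checks feasibility of the optimal solution reported below, $x^* = (0, 0, 1, 1, 0, 0, 0, 1, 0)^T$, against the six assignment constraints; the constraint matrix is written out from the equalities above, and the objective $\frac{1}{2}x^TQx$ can be evaluated once $Q$ is entered as given:

```python
import numpy as np

# Row/column constraints of the 3x3 assignment in Example 5.2.
A = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
], dtype=float)
a = np.ones(6)

x_star = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0], dtype=float)
print(np.allclose(A @ x_star, a))   # True: the reported solution is feasible
```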


Figure 3: Transient behavior of the continuous-time neural network in Example 5.2.

This problem has a unique optimal solution $x^* = (0, 0, 1, 1, 0, 0, 0, 1, 0)^T$. We use the proposed artificial neural network (13) to solve this problem with initial points $w_i = 0.5$ for $i = 1, 2, \ldots, 9$ and $w_j = 0.25$ for $j = 10, 11, \ldots, 27$. The optimal solutions of Example 5.2 are shown in Table 2, with final time $t_f = 6 \times 10^{-4}$.

Table 2. Optimal solutions of the zero-one quadratic problem in Example 5.2 by the proposed artificial neural network (13).

i     x_i^*      u_i^*      v_i^*
1     0.0003     0.0001     0.4998
2     0.0003     0.0001     0.4998
3     0.9993     0.4996     0.0002
4     0.9994     0.4997     0.0002
5     0.0003     0.0001     0.4998
6     0.0002     0.0000     0.4999
7     0.0002     0.0000     0.4999
8     0.9993     0.4996     0.0002
9     0.0004     0.0001     0.4998

The solutions in Table 2 correspond to the optimal solution of this problem. Fig. 3 displays the transient behavior of the continuous-time neural network in Example 5.2.

Example 5.3: The following zero-one linear programming problem (see [16]) illustrates a class of problems called set partitioning problems.


Minimize   $z = 2x_1 + 3x_2 + 4x_3 + 6x_4 + 7x_5 + 5x_6 + 7x_7 + 8x_8 + 9x_9 + 9x_{10} + 8x_{11} + 9x_{12}$
subject to $x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 + x_9 + x_{10} + x_{11} + x_{12} = 3$
           $x_1 + x_4 + x_7 + x_{10} = 1$
           $x_2 + x_5 + x_8 + x_{11} = 1$
           $x_3 + x_6 + x_9 + x_{12} = 1$
           $x_4 + x_7 + x_9 + x_{10} + x_{12} = 1$
           $x_1 + x_6 + x_{10} + x_{11} = 1$
           $x_4 + x_5 + x_9 = 1$
           $x_7 + x_8 + x_{10} + x_{11} + x_{12} = 1$
           $x_2 + x_4 + x_5 + x_9 = 1$
           $x_5 + x_8 + x_{11} = 1$
           $x_3 + x_7 + x_8 + x_{12} = 1$
           $x_6 + x_9 + x_{10} + x_{11} + x_{12} = 1$
           $x_i \in \{0, 1\}$, $i = 1, 2, \ldots, 12$.

Then, since $H = 0$, by using (11) we have:

Minimize   $f(w) = d^T w + \lambda_1 \Psi(w) + \frac{\lambda_2}{2}\|Bw - b\|^2$,

where $d \in \mathbb{R}^{36}$, $w \in \mathbb{R}^{36}$, $b \in \mathbb{R}^{36}$, $B$ is a $36 \times 36$ real matrix, and
$$\Psi(w) = \frac{1}{2}\big(\sqrt{u_1^2 + v_1^2} - (u_1 + v_1)\big)^2 + \cdots + \frac{1}{2}\big(\sqrt{u_{12}^2 + v_{12}^2} - (u_{12} + v_{12})\big)^2.$$
This problem has the following optimal solution (see [16]): $x^* = (1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1)^T$. We use the proposed artificial neural network (13) to solve this problem with initial points $w_i = 0.5$ for $i = 1, 2, \ldots, 12$ and $w_j = 0.25$ for $j = 13, 14, \ldots, 36$. The optimal solutions of Example 5.3 are shown in Table 3, with final time $t_f = 4 \times 10^{-3}$.

Table 3. Optimal solutions of the zero-one linear programming problem in Example 5.3 by the proposed artificial neural network (13).

i      x_i^*      u_i^*      v_i^*
1      1.0000     0.5000     0.0000
2      0.0000     0.0000     0.5000
3      0.0000     0.0000     0.5000
4      0.0000     0.0000     0.5000
5      1.0000     0.5000     0.0000
6      0.0000     0.0000     0.5000
7      0.0000     0.0000     0.5000
8      0.0000     0.0000     0.5000
9      0.0000     0.0000     0.5000
10     0.0000     0.0000     0.5000
11     0.0000     0.0000     0.5000
12     1.0000     0.5000     0.0000
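Since $H = 0$ here (the linear case of Remark 4.2), the reported solution can be checked against the set partitioning constraints and the linear cost with a few lines; the encoding of the constraints as index lists is an illustrative choice:

```python
import numpy as np

cost = np.array([2, 3, 4, 6, 7, 5, 7, 8, 9, 9, 8, 9], dtype=float)
# Each entry lists the variables (1-based) appearing in one equality constraint.
rows = [list(range(1, 13)), [1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12],
        [4, 7, 9, 10, 12], [1, 6, 10, 11], [4, 5, 9], [7, 8, 10, 11, 12],
        [2, 4, 5, 9], [5, 8, 11], [3, 7, 8, 12], [6, 9, 10, 11, 12]]
rhs = np.array([3] + [1] * 11, dtype=float)

A = np.zeros((12, 12))
for i, idx in enumerate(rows):
    A[i, [j - 1 for j in idx]] = 1.0

x_star = np.zeros(12); x_star[[0, 4, 11]] = 1.0      # x1 = x5 = x12 = 1
print(np.allclose(A @ x_star, rhs), cost @ x_star)   # True, objective z = 18
```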

The solutions in Table 3 correspond to the optimal solution of this problem. Fig. 4 displays the transient behavior of the continuous-time neural network in Example 5.3.



Figure 4: Transient behavior of the continuous-time neural network in Example 5.3.

6. Conclusions

In this paper, we have presented an artificial neural network for solving zero-one quadratic programming problems. The problem is first reformulated, and then, by using an NCP function and the penalty method, an artificial neural network model is proposed for it. We have proved that the proposed neural network is locally asymptotically stable. On the application side, the proposed neural network can be used directly to solve zero-one linear and quadratic programming problems under linear constraints. The motivation for using artificial neural networks to solve zero-one quadratic problems stems from their inherent simplicity and their parallel processing nature, and we believe that devising a method tailored to this special class of problems is worthwhile. We have also shown that the proposed neural network has a simple form and a low convergence time, and we have studied its stability. Selecting a region of initial points that guarantees convergence to a global minimum is a topic for future research.

References

[1] W. P. Adams, R. Forrester, F. Glover, Comparisons and enhancement strategies for linearizing mixed 0-1 quadratic programs. Discrete Optimization 1 (2004) 99-120.
[2] M. Aourid, X. D. Do, B. Kaminska, Penalty formulation for 0-1 linear programming problem: a neural network approach. Neural Networks, IEEE 2 (1993) 1092-1095.
[3] M. Aourid, B. Kaminska, Neural networks for solving the quadratic 0-1 programming problem under linear constraints. Neural Networks, IEEE 4 (1995) 1690-1693.
[4] A. Billionnet, M. C. Costa, A. Sutter, An efficient algorithm for a task allocation problem. Journal of the Association for Computing Machinery 39 (1992) 502-518.
[5] A. Billionnet, S. Elloumi, M. C. Plateau, Improving the performance of standard solvers for quadratic 0-1 programs by a tight convex reformulation: the QCR method. Discrete Applied Mathematics 157 (2009) 1185-1197.
[6] J. S. Chen, C. H. Ko, S. Pan, A neural network based on the generalized Fischer-Burmeister function for nonlinear complementarity problems. Information Sciences 180 (2010) 697-711.
[7] J. S. Chen, S. Pan, A family of NCP functions and a descent method for the nonlinear complementarity problem. Computational Optimization and Applications 40 (2008) 389-404.
[8] S. Effati, M. Baymani, A new nonlinear neural network for solving convex nonlinear programming problems. Applied Mathematics and Computation 168 (2005) 1370-1379.
[9] S. Effati, A. R. Nazemi, Neural network and its application for solving linear and quadratic programming problems. Applied Mathematics and Computation 172 (2006) 305-331.
[10] S. Effati, M. Ranjbar, A novel recurrent nonlinear neural network for solving quadratic programming problems. Applied Mathematical Modelling 35 (2011) 1688-1695.
[11] S. Effati, A. Mansoori, M. Eshaghnezhad, An efficient projection neural network for solving bilinear programming problems. Neurocomputing 168 (2015) 1188-1197.
[12] G. Finke, R. E. Burkard, F. Rendl, Quadratic assignment problems. Annals of Discrete Mathematics 31 (1987) 61-82.
[13] A. Fischer, A special Newton-type optimization method. Optimization 24 (1992) 269-284.
[14] A. Fischer, Solution of the monotone complementarity problem with locally Lipschitzian functions. Mathematical Programming 76 (1997) 513-532.
[15] X. He, A. Chen, W. A. Chaovalitwongse, H. X. Liu, An improved linearization technique for a class of quadratic 0-1 programming problems. Optimization Letters 6 (1) (2012) 31-41.
[16] F. S. Hillier, G. J. Lieberman, Introduction to Operations Research. McGraw-Hill, 9th ed., 2008.
[17] M. P. Kennedy, L. O. Chua, Neural networks for nonlinear programming. IEEE Trans. Circuits Syst. 35 (1988) 554-562.
[18] J. Krarup, D. Pisinger, F. Plastria, Discrete location problems with push-pull objectives. Discrete Applied Mathematics 123 (2002) 365-378.
[19] C. Lu, X. Guo, Convex reformulation for binary quadratic programming problems via average objective value maximization. Optimization Letters 9 (3) (2015) 523-535.
[20] R. K. Miller, A. N. Michel, Ordinary Differential Equations. Academic Press, 1982.
[21] A. R. Nazemi, A dynamic system model for solving convex nonlinear optimization problems. Communications in Nonlinear Science and Numerical Simulation 17 (2012) 1696-1705.
[22] P. M. Pardalos, G. P. Rodgers, Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 45 (1990) 131-144.
[23] F. A. Pazos, A. Bhaya, Control Liapunov function design of neural networks that solve convex optimization and variational inequality problems. Neurocomputing 72 (2009) 3863-3872.
[24] A. Rodriguez-Vazquez, R. Dominguez-Castro, A. Rueda, J. L. Huertas, E. Sanchez-Sinencio, Nonlinear switched-capacitor neural networks for optimization problems. IEEE Trans. Circuits Syst. 37 (3) (1990) 384-397.
[25] M. V. Rybashov, The gradient method of solving convex programming problems on electronic analog computers. Automation and Remote Control 26 (1965) 1886-1898.
[26] H. D. Sherali, W. P. Adams, A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems. Kluwer Academic Publishers, Norwell, MA, 1999.
[27] B. Simeone, Quadratic 0-1 programming, boolean functions and graphs. Waterloo, Ontario, 1979.
[28] J. J. E. Slotine, W. Li, Applied Nonlinear Control. Prentice Hall, 1991.
[29] D. W. Tank, J. J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit. IEEE Trans. Circuits Syst. 33 (1986) 533-541.
[30] L. N. Trefethen, D. Bau, Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997.
[31] J. Wang, A deterministic annealing neural network for convex programming. Neural Networks 7 (4) (1994) 629-641.
[32] Y. Xia, G. Feng, J. Wang, A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. IEEE Trans. Neural Networks 19 (2008) 1340-1353.
[33] Y. Xia, J. Wang, A recurrent neural network for solving linear projection equations. Neural Networks 13 (2000) 337-350.
[34] Y. Xia, J. Wang, A general projection neural network for solving monotone variational inequality and related optimization problems. IEEE Trans. Neural Networks 15 (2) (2004) 318-328.
[35] Y. Xia, J. Wang, A recurrent neural network for solving nonlinear convex programs subject to linear constraints. IEEE Trans. Neural Networks 16 (2) (2005) 1045-1053.
