GenConstraint: A programming tool for constraint optimization problems




SoftwareX 10 (2019) 100355



Original software publication


Ioannis G. Tsoulos a,∗, Vasileios Stavrou b, Nikolaos E. Mastorakis b, Dimitrios Tsalikakis c

a Department of Informatics and Telecommunications, University of Ioannina, Greece
b Hellenic Naval Academy, Department of Computer Science, Military Institutions of University Education, 18539 Piraeus, Greece
c University of Western Macedonia, Department of Engineering Informatics and Telecommunications, Greece

Article info

Article history: Received 22 June 2019; Received in revised form 25 October 2019; Accepted 25 October 2019

Keywords: Genetic algorithm; Constrained optimization; Stochastic methods

Abstract

This article presents a software package for solving constrained optimization problems with a modified genetic algorithm, which utilizes a series of modified genetic operators to preserve the feasibility of trial solutions and terminates using a stochastic stopping rule. The software is written entirely in ANSI C++ and the user can prepare the objective function either in C++ or in Fortran. The article presents the genetic algorithm and the accompanying software, as well as some experiments on a series of optimization problems. In addition, the proposed software was tested on the design of a two-dimensional filter. The results are compared against the results of the DONLP2 algorithm.

© 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Code metadata

Current code version: 1.0
Permanent link to code/repository used for this code version: https://github.com/ElsevierSoftwareX/SOFTX_2019_222
Legal Code License: GNU General Public License (GPL)
Code versioning system used: git
Software code languages, tools, and services used: C++
Compilation requirements, operating environments & dependencies: Linux
If available, link to developer documentation/manual: https://github.com/itsoulos/GenConstraint/wiki
Support email for questions: [email protected]

1. Introduction

In optimization problems the main objective is to locate a minimum of the objective function, which is usually defined as f(x) : x ∈ S ⊂ Rn. The function f(x) may be minimized with or without constraints; in the constrained case, a combination of linear and nonlinear constraints may be imposed. The constrained optimization problem can be formulated as

min x f(x)
subject to
gi(x) ≤ 0, i = 1, . . . , m
hj(x) = 0, j = 1, . . . , p    (1)

where xi ∈ [ai, bi], i = 1, . . . , n. The inequalities gi(x) ≤ 0 and the equations hj(x) = 0 stand for the inequality and equality constraints of the objective function, respectively. This problem finds many applications in a series of fields such as medicine [1,2], physics [3,4], economics [5,6], etc. During the past years many methods have been developed to solve constrained optimization problems, such as interval methods [7,8], genetic algorithm methods [9,10], particle swarm optimization methods [11,12], differential evolution methods [13,14], etc. The proposed software implements and uses a modified version of the genetic algorithm introduced in [15] for solving constrained optimization problems. The method incorporates:

1. Modified versions of the genetic operators, aimed to preserve the feasibility of the trial solutions.

∗ Corresponding author. E-mail address: [email protected] (I.G. Tsoulos).

https://doi.org/10.1016/j.softx.2019.100355 2352-7110/© 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


I.G. Tsoulos, V. Stavrou, N.E. Mastorakis et al. / SoftwareX 10 (2019) 100355

2. Application of a local search procedure to randomly selected chromosomes.
3. A stochastic stopping rule.

The original method has been enhanced by the periodical application of a global search procedure which preserves the feasibility of the chromosomes. The rest of this article is organized as follows: in Section 2 the proposed software is described in detail, in Section 3 the results from a series of experiments with a variety of constrained optimization problems are outlined, in Section 4 the impact of the software is discussed, and finally in Section 5 some conclusions and some guidelines for future research are presented.

2. Software description

The proposed algorithm is given in Algorithm 1 (see [16]). The genetic algorithm constructs a set of M chromosomes inside the bounds [a, b] of the objective function and, at every generation, the modified genetic operators are applied to the chromosomes until the stochastic criterion is met. The original algorithm has been enhanced by the addition of a periodical application of a global search procedure.

Algorithm 1. The main steps of the proposed algorithm:

1. Set the parameters of the algorithm:
   (a) the number of chromosomes M;
   (b) the selection rate ps;
   (c) the mutation rate pm;
   (d) the local search rate pl;
   (e) gI, the number of generations that must pass before the global search procedure is applied;
   (f) gc, the number of chromosomes that will be used in the global search procedure.
2. Set k = 0.
3. Initialize the M chromosomes inside the feasible region [a, b].
4. Evaluate the fitness of every chromosome. This evaluation is performed using a penalty technique in order to preserve the feasibility of the trial solutions given by the chromosomes.
5. Apply the modified genetic operations of crossover and mutation to the population. These operators are designed to maintain the feasibility of the trial solutions.
6. Select randomly some of the chromosomes from the population with rate pl and apply to them the local search procedure of Cobyla [16].
7. Set k = k + 1.
8. If k mod gI = 0 then
   (a) Select randomly gc chromosomes and add them to the set LS.
   (b) For every chromosome Xi in LS:
       i. Select randomly a chromosome Y.
       ii. Create an offspring of Xi and Y using the proposed crossover method. Denote the offspring as Z.
       iii. If f(Z) < f(Xi) then set Xi = Z.
9. End if.
10. If the termination criteria hold, terminate; else go to step 4.

At every iteration k the variance of f(x∗) is measured. Denote this variance with σ(k). If no new minimum has been found for a number of generations, then it is highly probable that the algorithm has located the global minimum and hence it should terminate. The algorithm terminates when

σ(k) ≤ σ(klast) / 2    (2)

where klast is the last iteration at which a new minimum was found.

2.1. Distribution

The software can be downloaded from the relevant url: https://github.com/itsoulos/GenConstraint/. The software has been written in ANSI C++ for Unix-based environments. After unzipping, the user should change only the variable ROOTDIR in the Makefile.inc file. This variable points to the current installation directory. The final outcome of the compilation is the utility make_program, which is installed under the bin subdirectory. The make_program utility has the following command line options:

• -h Displays a help screen and the program terminates.
• -p filename. The parameter filename specifies the name of the file with the objective function. The function can be written either in C++ or in Fortran.
• -o filename. The output of the compilation is named filename. This parameter is optional and the default name used is constraint.

2.2. Problem formulation

An example of coding the objective problem in ANSI C++ is presented in the relevant wiki page for the problem chootinan1. The following functions are used:

• int getdimension(): This function returns the dimension of the objective problem.
• int geteq(): This function returns the number of equality constraints.
• int getineq(): This function returns the number of inequality constraints.
• void getleftmargin(double *x): This function returns in the array x the lower bounds of the objective problem.
• void getrightmargin(double *x): This function returns in the array x the upper bounds of the objective problem.
• double funmin(double *x): This function returns the objective function evaluated at the point x.
• void feq(double *x, double *eq): This function returns in the array eq the equality constraints evaluated at the point x.
• void fineq(double *x, double *ineq): This function returns in the array ineq the inequality constraints evaluated at the point x.
• void done(double *x): This method is called after the termination of the genetic algorithm. The parameter x is the best point discovered by the genetic algorithm.
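The interface above can be illustrated with a minimal sketch of an objective file. The toy problem used below (minimize x1² + x2² subject to x1 + x2 − 1 = 0 and −x1 ≤ 0, with bounds [−5, 5]²) is chosen only for demonstration; the function names and signatures come from the list above, while everything else is an assumption and not code from the distribution.

```cpp
#include <cmath>
#include <cstdio>

// Toy problem (assumed, for illustration): minimize f(x) = x1^2 + x2^2
// subject to h(x) = x1 + x2 - 1 = 0 and g(x) = -x1 <= 0,
// with x1, x2 in [-5, 5].

int getdimension() { return 2; }   // dimension n of the objective problem
int geteq()        { return 1; }   // number of equality constraints
int getineq()      { return 1; }   // number of inequality constraints

void getleftmargin(double *x)  { x[0] = -5.0; x[1] = -5.0; }  // lower bounds a
void getrightmargin(double *x) { x[0] =  5.0; x[1] =  5.0; }  // upper bounds b

// Objective function evaluated at the point x.
double funmin(double *x) { return x[0] * x[0] + x[1] * x[1]; }

// Equality constraints h_j(x) = 0, returned in the array eq.
void feq(double *x, double *eq) { eq[0] = x[0] + x[1] - 1.0; }

// Inequality constraints g_i(x) <= 0, returned in the array ineq.
void fineq(double *x, double *ineq) { ineq[0] = -x[0]; }

// Called after termination; x holds the best point discovered.
void done(double *x) {
    printf("best point: (%f, %f), value %f\n", x[0], x[1], funmin(x));
}
```

Such a file would then be compiled with make_program -p as shown in Section 2.1, yielding the constraint executable.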

3. Experiments

3.1. A typical run

Let us consider the problem Chootinan1. The problem is coded in C++ and the corresponding file is chootinan1.cc, located under the examples subdirectory of the distribution. The following steps are required to apply the proposed method to this function: cd examples; ./bin/make_program -p chootinan1.cc; ./constraint. The constraint executable has a series of command line options given in the relevant wiki page. The constraint executable prints a series of lines to the terminal. The last ten lines from an example run are listed in Fig. 1. In every line the software prints the generation number, the current global minimum and the feasibility of the located minimum. At the end the software prints the located global minimum as well as the total number of function calls.

Fig. 1. Output of the minimization for the Chootinan1 function.

3.2. Benchmark functions

The method is compared against the Donlp2 optimization method [17] on a series of benchmark functions:

1. Levy function, described in [18].
2. Hess function, given in [19].
3. Shittkowski function, provided in [20].
4. Chootinan1 function, given in [21].
5. Chootinan2 function, given in [21].
6. Himmelblau function, as described in [22].
7. Salkin function, described in [23].

The proposed method was applied to the above test problems 30 times, using a different seed for the random generator each time. The parameters used in the experiments are listed in Table 1. The results from the application of the proposed method to the above benchmark functions are presented in Table 2. The column PROPOSED denotes the proposed method and the column DONLP2 denotes the results from the application of Donlp2 to the above problems.

Table 1. Parameters for the genetic algorithm.

Parameter   Value
M           200
pc          0.10
pm          0.05
gc          10
gI          50

Table 2. Experimental results using the proposed method.

Function      Proposed    Donlp2
Levy          −1.8730     −1.8730
Hess          309.45      262.83
Shittkowski   13.59       18.681
Chootinan1    −15         −11.376
Chootinan2    −0.095      −0.037
Himmelblau    0.015       1.388
Salkin        −320.000    −319.804

3.3. A two dimensional filter problem

There are two types of digital filters: the Finite Impulse Response (FIR) or non-recursive filters and the Infinite Impulse Response (IIR) or recursive filters. In the non-recursive filter structures the output depends only on the input, while in the recursive filter structures the output depends both on the input and on the previous outputs. Recursive filters have been employed in science and technology for issues such as signal processing, control signals, radar signals, astronomy signals, medical image processing, and X-ray enhancement, among others [24]. The design approaches for 2-D filters are based on (a) appropriate 1-D filters [24] and (b) appropriate optimization techniques [24–27]. Here, the proposed technique has been used to overcome the instability problems which emerge in 2-D filters. The transfer function of the investigated 2-D recursive filter is given by

H(z1, z2) = H0 · ( Σ_{i=0}^{K} Σ_{j=0}^{K} aij z1^i z2^j ) / ( Π_{k=1}^{K} (1 + bk z1 + ck z2 + dk z1 z2) ),    (3)

with a00 = 1. Eq. (3) can be approximated by minimizing the function J, where

J = J(aij, bk, ck, dk, H0) = Σ_{n1=0}^{N1} Σ_{n2=0}^{N2} [ |M(ω1, ω2)| − |Md(ω1, ω2)| ]^p    (4)

where

M(ω1, ω2) = H(z1, z2) |_{z1 = e^{−jω1}, z2 = e^{−jω2}}    (5)

using ω1 = (π/N1) n1, ω2 = (π/N2) n2, and p an even positive integer. Therefore Eq. (4) can be written as

J = Σ_{n1=0}^{N1} Σ_{n2=0}^{N2} [ |M(π n1/N1, π n2/N2)| − |Md(π n1/N1, π n2/N2)| ]^p    (6)

The stability of the filter is enforced through the constraints

|bk + ck| − 1 < dk,    k = 1, 2, . . . , K    (7)
dk < 1 − |bk − ck|,    k = 1, 2, . . . , K    (8)

The function Md(ω1, ω2) is the desired amplitude response and it can be expressed with the following function:

Md(ω1, ω2) = 1,   if √(ω1² + ω2²) ≤ 0.08π
             0.5, if 0.08π < √(ω1² + ω2²) ≤ 0.12π
             0,   otherwise    (9)
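To make Eqs. (3)–(9) concrete, the following sketch evaluates the objective J and the desired response Md for given filter coefficients, and writes the stability constraints (7)–(8) in the g(x) ≤ 0 form used by the software's fineq interface. This is an illustration, not code from the GenConstraint distribution; the coefficient layout and K = 2, p = 2 (the values used in the experiments) are assumptions of this sketch.

```cpp
#include <cmath>
#include <complex>
#include <vector>

const int K = 2;                              // filter order, as in the paper
const double PI = 3.14159265358979323846;

// |H(z1, z2)| of Eq. (3), with a00 = 1 expected in a[0][0].
double amplitude(const std::vector<std::vector<double>> &a,
                 const double b[K], const double c[K], const double d[K],
                 double H0, double w1, double w2) {
    const std::complex<double> j(0.0, 1.0);
    std::complex<double> z1 = std::exp(-j * w1);   // z1 = e^{-j w1}
    std::complex<double> z2 = std::exp(-j * w2);   // z2 = e^{-j w2}
    std::complex<double> num = 0.0, den = 1.0;
    for (int i = 0; i <= K; i++)
        for (int l = 0; l <= K; l++)
            num += a[i][l] * std::pow(z1, i) * std::pow(z2, l);
    for (int k = 0; k < K; k++)
        den *= 1.0 + b[k] * z1 + c[k] * z2 + d[k] * z1 * z2;
    return std::abs(H0 * num / den);
}

// Desired amplitude response Md of Eq. (9).
double Md(double w1, double w2) {
    double r = std::sqrt(w1 * w1 + w2 * w2);
    if (r <= 0.08 * PI) return 1.0;
    if (r <= 0.12 * PI) return 0.5;
    return 0.0;
}

// Objective J of Eq. (6): sum over the (N1+1) x (N2+1) grid with p = 2.
double J(const std::vector<std::vector<double>> &a,
         const double b[K], const double c[K], const double d[K],
         double H0, int N1, int N2) {
    double sum = 0.0;
    for (int n1 = 0; n1 <= N1; n1++)
        for (int n2 = 0; n2 <= N2; n2++) {
            double w1 = PI * n1 / N1, w2 = PI * n2 / N2;
            double diff = amplitude(a, b, c, d, H0, w1, w2) - Md(w1, w2);
            sum += diff * diff;                // p = 2
        }
    return sum;
}

// Stability constraints of Eqs. (7)-(8) rewritten as g(x) <= 0:
// |b_k + c_k| - 1 - d_k < 0  and  d_k - 1 + |b_k - c_k| < 0.
void stability_ineq(const double b[K], const double c[K], const double d[K],
                    double *ineq) {
    for (int k = 0; k < K; k++) {
        ineq[2 * k]     = std::fabs(b[k] + c[k]) - 1.0 - d[k];
        ineq[2 * k + 1] = d[k] - 1.0 + std::fabs(b[k] - c[k]);
    }
}
```

In a GenConstraint objective file, J would play the role of funmin and stability_ineq the role of fineq, with the coefficient vectors packed into the single argument array.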

The above problem can be considered as the following constrained optimization problem:

min x J(x)    (10)

subject to the constraints of Eqs. (7) and (8), where x is the union of the vectors a, b, c, d. The problem is coded in the Filter.cc file under the examples subdirectory of the distribution. The proposed method was applied to the above problem 30 times, using a different seed for the random generator each time, with the parameters shown in Table 1. For demonstration purposes the values of p and K were both set to 2. The best located value was 2.5 × 10−4 and the corresponding amplitude response |M(ω1, ω2)| is shown in Fig. 2. The desired amplitude response |Md(ω1, ω2)| is shown in Fig. 3.

Fig. 2. The amplitude response |M(ω1, ω2)| of the produced 2-D filter.

Fig. 3. The desired amplitude response |Md(ω1, ω2)|.

4. Impact

In the current software tool the user can code the objective function of any constrained optimization problem in plain C++ by defining the objective function and the corresponding equality and inequality constraints. The tool can be executed on almost any operating system and the user can control the process with a series of simple command line options that tune the genetic algorithm in terms of speed and efficiency. Also, the software can enhance the output of the genetic algorithm process using a special local search procedure well suited to constrained optimization problems. Researchers can benefit from the proposed software since it requires only information about the objective problem, in a simple format in ANSI C++ that can be developed on any operating system.

5. Conclusions

A software package designed to solve constrained optimization problems of arbitrary dimension was outlined. The software implements a hybrid genetic algorithm which utilizes a global search procedure and a local optimization procedure for faster identification of the global minimum. The software was tested on a series of well-known benchmark functions from the relevant literature, as well as on the construction of a two-dimensional recursive filter. The user can program the objective function either in the ANSI C++ or in the Fortran 77 programming language. Future additions to the software may include, for example, more advanced genetic operators or better stopping rules. Also, a graphical interface for the software might be helpful.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] Zhang X, Wang J, Xing L. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization. Med Phys 2011;38:701–11.
[2] Iakovidis I, Gulrajani RM. Regularization of the inverse epicardial solution using linearly constrained optimization. In: Proceedings of the annual international conference of the IEEE, Vol. 13. Engineering in Medicine and Biology Society; 1991, p. 698–9.
[3] Birgin EG, Chambouleyron I, Martínez JM. Estimation of the optical constants and the thickness of thin films using unconstrained optimization. J Comput Phys 1999;151:862–80.
[4] Yang C, Meza JC, Wang LW. A constrained optimization algorithm for total energy minimization in electronic structure calculations. J Comput Phys 2006;217:709–21.
[5] Strumberger I, Tuba E, Bacanin N, Beko M, Tuba M. Hybridized artificial bee colony algorithm for constrained portfolio optimization problem. In: 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro; 2018, p. 1–8.
[6] Metawa N, Kabir Hassan M, Elhoseny M. Genetic algorithm based model for optimizing bank lending decisions. Expert Syst Appl 2017;80:75–82.
[7] Markót MCs, Fernández J, Casado LG, Csendes T. New interval methods for constrained global optimization. Math Program 2005;106:287–318.
[8] Ichida K. Constrained optimization using interval analysis. Comput Ind Eng 1996;31:933–7.
[9] Venkatraman S, Yen GG. A generic framework for constrained optimization using genetic algorithms. IEEE Trans Evol Comput 2005;9:424–35.
[10] Garg H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl Math Comput 2016;274:292–305.
[11] He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl Math Comput 2007;186:1407–22.
[12] Jadoun VK, Gupta N, Niazi KR, Swarnkar A. Modulated particle swarm optimization for economic emission dispatch. Int J Electr Power Energy Syst 2015;73:80–8.
[13] Becerra RL, Coello CAC. Cultured differential evolution for constrained optimization. Comput Methods Appl Mech Engrg 2006;195:4303–22.
[14] Trivedi A, Srinivasan D, Biswas S, Reindl T. Hybridizing genetic algorithm with differential evolution for solving the unit commitment scheduling problem. Swarm Evol Comput 2015;23:50–64.
[15] Tsoulos IG. Solving constrained optimization problems using a novel genetic algorithm. Appl Math Comput 2009;208:273–83.
[16] Powell MJD. A direct search optimization method that models the objective and constraint functions by linear interpolation. Report DAMTP/NA5, Cambridge, England.
[17] Spelluci P. An SQP method for general nonlinear programs using only equality constrained subproblems. Math Program 1998;82:413–48.
[18] Levy AV, Montalvo A. The tunneling algorithm for global optimization of functions. SIAM J Sci Stat Comput 1985;6:15–29.

[19] Hess R. A heuristic search for estimating a global solution of non convex programming problems. Oper Res 1973;21:1267–80.
[20] Schittkowski K. More examples for mathematical programming codes. Lecture notes in economics and mathematical systems, vol. 282, 1987.
[21] Chootinan P, Chen A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput Oper Res 2006;33:2263–81.
[22] Himmelblau DM. Applied nonlinear programming. New York: McGraw-Hill; 1972.


[23] Salkin HM. Integer programming. Addison-Wesley; 1975.
[24] Lu WS, Antoniou A. Two-dimensional digital filters. New York: Marcel Dekker; 1992.
[25] Maria GA, Fahmy MM. An lp design technique for two-dimensional digital recursive filters. IEEE Trans Acoust Speech Signal Process 1974;22:15–21.
[26] Mladenov V, Mastorakis NE. Design of two-dimensional recursive filters by using neural networks. IEEE Trans Neural Netw 2001;12:585–90.
[27] Mastorakis NE, Gonos IF, Swamy MNS. Design of 2-dimensional recursive filters using genetic algorithms. IEEE Trans Circuits Syst 2003;50:962–5.