A Parallel Processing Algorithm for Nonlinear Programming Problems
Y. Tsujimura, K. Ida & M. Gen, Dept. of Industrial & Systems Engineering, Ashikaga Institute of Technology, Ashikaga 326, Japan
Abstract

Algorithms for solving multiple criteria nonlinear programming problems are frequently based on the generalized reduced gradient (GRG) method. Since the GRG method involves complex, large-scale computation, solving large-scale multiple criteria nonlinear programming problems takes much time. Parallel processing of the GRG method is therefore required to solve such problems. We propose a parallel processing algorithm for the GRG method on multiprocessor systems.
Introduction
In production planning, portfolio selection, capital budgeting, and transportation problems, as well as in other fields, we are frequently faced with multiple criteria nonlinear programming problems. Algorithms for solving such problems are usually based on the generalized reduced gradient (GRG) method. The GRG algorithm is a natural extension of the reduced gradient algorithm of Wolfe to the case of nonlinear constraints, and it has been under study for some time. Since the GRG method involves complex and extensive computation, solving nonlinear programming problems takes much time, and parallel processing of the GRG method is therefore required. The proposed parallel algorithm has been tested on several numerical examples; the experiments indicate that it achieves faster computation than the former sequential algorithm.
Algorithm of the GRG method

Attempts to solve nonlinear programming problems have resulted in many algorithms that work for special cases; examples are separable programming, quadratic programming, geometric programming, and Wolfe's reduced gradient method. In an attempt to develop a procedure that handles the general nonlinear programming problem, Abadie and Carpentier generalized Wolfe's reduced gradient method to the case of nonlinear constraints. The GRG method solves nonlinear programming problems of the form

$$\min f(x) \quad \text{s.t.} \quad h(x) = 0, \quad x \ge 0 \qquad (1)$$

where $x$ is a vector of decision variables, $f$ is a nonlinear objective function, and $h$ is a set of nonlinear constraints. The algorithm of the GRG method is described as follows:

[STEP 1]: Give the objective function $f(x)$, the set of constraints $h(x)$, $\nabla_B h(x)$, $\nabla_N h(x)$, an initial point $x^{(1)}$, the boundaries $U$ and $L$, the required accuracy $\varepsilon_A$, the number of decision variables $N$, and the number of constraints $M$.

[STEP 2]: Set $k = 1$. If the initial point $x^{(1)}$ satisfies the constraints, proceed to STEP 3. Otherwise, determine $x^{(1)}$ so that it satisfies the constraints and go to the next step.

[STEP 3]: Reduce the dimension of $\nabla_B h(x^{(k)})$ by using the Preassigned Pivot Procedure ($P^3$).

[STEP 4]: Calculate the inverse matrix $(\nabla_B h(x^{(k)}))^{-1}$.

[STEP 5]: Calculate the reduced gradient $r^{(k)}$ at $x^{(k)} = (x_B^{(k)}, x_N^{(k)})$ by using (2):

$$r^{(k)T} = \nabla_N f(x^{(k)}) - \nabla_B f(x^{(k)})^T (\nabla_B h(x^{(k)}))^{-1} \nabla_N h(x^{(k)}) \qquad (2)$$
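As a concrete illustration of STEPs 4 and 5, here is a minimal NumPy sketch of (2). The function and argument names are our own assumptions, and a linear system is solved in place of the explicit inverse of STEP 4, a common implementation shortcut rather than part of the original description.

```python
import numpy as np

def reduced_gradient(grad_f_B, grad_f_N, jac_h_B, jac_h_N):
    """Reduced gradient of eq. (2) at x = (x_B, x_N).

    grad_f_B : (m,)     gradient of f w.r.t. the basic variables x_B
    grad_f_N : (n-m,)   gradient of f w.r.t. the nonbasic variables x_N
    jac_h_B  : (m, m)   Jacobian of h w.r.t. x_B (nabla_B h, assumed nonsingular)
    jac_h_N  : (m, n-m) Jacobian of h w.r.t. x_N (nabla_N h)
    """
    # Solve (nabla_B h)^T y = nabla_B f instead of forming the inverse of STEP 4.
    y = np.linalg.solve(jac_h_B.T, grad_f_B)
    return grad_f_N - jac_h_N.T @ y
```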
[STEP 6]: By using (3), calculate a usable direction $d^{(k)} = (d_B^{(k)}, d_N^{(k)})$. For each nonbasic variable,

$$d_j^{(k)} = \begin{cases} 0 & \text{if } x_j^{(k)} = L_j \text{ and } r_j^{(k)} > 0, \text{ or } x_j^{(k)} = U_j \text{ and } r_j^{(k)} < 0, \\ -r_j^{(k)} & \text{otherwise,} \end{cases}$$

and for the basic variables

$$d_B^{(k)} = -(\nabla_B h(x^{(k)}))^{-1} \nabla_N h(x^{(k)})\, d_N^{(k)} \qquad (3)$$

If $\| d^{(k)} \| < \varepsilon_A$, terminate. Otherwise, go to the next step.
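A matching sketch of STEP 6, again with our own names: the nonbasic components follow the sign rules of (3), and the basic components keep the linearized constraints satisfied.

```python
import numpy as np

def usable_direction(x_N, r, L_N, U_N, jac_h_B, jac_h_N, tol=1e-12):
    """Usable direction d = (d_B, d_N) of eq. (3); all arguments are NumPy arrays."""
    d_N = -np.asarray(r, dtype=float)
    # A nonbasic variable sitting on a bound must not be pushed through it.
    at_lower = (np.abs(x_N - L_N) < tol) & (r > 0)
    at_upper = (np.abs(x_N - U_N) < tol) & (r < 0)
    d_N[at_lower | at_upper] = 0.0
    # d_B = -(nabla_B h)^{-1} (nabla_N h) d_N keeps the linearized constraints at zero.
    d_B = -np.linalg.solve(jac_h_B, jac_h_N @ d_N)
    return d_B, d_N
```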
[STEP 7]: Decide $\alpha_M$ by

$$\alpha_M = \min \Big\{ \min_j \{ (x_j - L_j)/|d_j| \mid d_j < 0 \}, \ \min_j \{ (U_j - x_j)/d_j \mid d_j > 0 \} \Big\} \qquad (4)$$

If $\alpha_M > 1$, set $\alpha_M = 1$. After that, calculate the step width $\alpha^{(k)}$ using (5):

$$\alpha^{(k)} = \arg\min \{ f(x^{(k)} + \alpha d^{(k)}) \mid 0 < \alpha \le \alpha_M \} \qquad (5)$$
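STEP 7 combines the ratio test (4) with the one-dimensional minimization (5). The paper does not specify which line search method is used, so the following sketch simply samples the interval; it is an illustration under our own assumptions, not the authors' implementation.

```python
import numpy as np

def max_step(x, d, L, U):
    """alpha_M of eq. (4): the largest step keeping L <= x + alpha*d <= U, capped at 1."""
    alpha = 1.0
    for xj, dj, lj, uj in zip(x, d, L, U):
        if dj < 0:
            alpha = min(alpha, (xj - lj) / -dj)
        elif dj > 0:
            alpha = min(alpha, (uj - xj) / dj)
    return alpha

def step_width(f, x, d, alpha_M, samples=100):
    """A crude stand-in for the line search (5) over 0 < alpha <= alpha_M."""
    alphas = np.linspace(alpha_M / samples, alpha_M, samples)
    return alphas[int(np.argmin([f(x + a * d) for a in alphas]))]
```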
[STEP 8]: Revise the value of $x_N^{(k+1)} = x_N^{(k)} + \alpha^{(k)} d_N^{(k)}$ using (6), so that $x_N^{(k+1)}$ comes within the bounds

$$L_N \le x_N^{(k+1)} \le U_N \qquad (6)$$

[STEP 9]: By using the Newton method, obtain a solution satisfying the constraints

$$h(x_B^{(k+1)}, x_N^{(k+1)}) = 0 \qquad (7)$$

where $x_B^{(k+1)}$ is unknown and $x_N^{(k+1)}$ is fixed. There are three possible states for $x_B^{(k+1)}$: (1) $x_B^{(k+1)}$ does not exist; (2) $x_B^{(k+1)}$ exists, but $f(x_B^{(k+1)}, x_N^{(k+1)}) > f(x_B^{(k)}, x_N^{(k)})$; (3) $x_B^{(k+1)}$ exists, but lies outside the boundary. In cases (1) and (2), obtain a new point on the constraints by reducing the step width $\alpha^{(k)}$ to $\alpha^{(k)}/2$ or $\alpha^{(k)}/10$, and return to STEP 8. In case (3), change the basis so that $x_B$ comes within the boundary; further, if $f(x_B^{(k+1)}, x_N^{(k+1)}) > f(x_B^{(k)}, x_N^{(k)})$, change $\alpha^{(k)}$ to $\alpha^{(k)}/2$ or $\alpha^{(k)}/10$ and return to STEP 8.

[STEP 10]: If $f(x^{(k+1)}) - f(x^{(k)}) < \varepsilon_A$, terminate. Otherwise, set $k = k + 1$ and return to STEP 3.
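The constraint restoration of STEP 9 can be sketched as a plain Newton iteration on (7) with $x_N$ held fixed; returning None below corresponds to state (1), which the algorithm handles by reducing the step width. Names, tolerances, and the iteration cap are our own assumptions.

```python
import numpy as np

def restore_feasibility(h, jac_h_B, x_B, x_N, eps=1e-8, max_iter=50):
    """Newton iteration on h(x_B, x_N) = 0 with x_N fixed, eq. (7) in STEP 9.

    h        : function (x_B, x_N) -> residual vector of length m
    jac_h_B  : function (x_B, x_N) -> (m, m) Jacobian of h w.r.t. x_B
    Returns the corrected x_B, or None if the iteration fails to converge.
    """
    for _ in range(max_iter):
        res = h(x_B, x_N)
        if np.linalg.norm(res) < eps:
            return x_B
        x_B = x_B - np.linalg.solve(jac_h_B(x_B, x_N), res)
    return None
```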
Parallel processing of the GRG method

Various parallel processing techniques have been proposed and developed to date. In this paper, we adopt the simplest and most popular technique: any parts of the sequential GRG algorithm that can be parallelized are processed individually by multiple processors. The parallelized parts of the sequential GRG algorithm are as follows:
(1) In STEP 3, the processing of the Preassigned Pivot Procedure ($P^3$), used to transform a matrix to triangular form.
(2) In STEP 4, the calculation of the inverse matrix $(\nabla_B h(x^{(k)}))^{-1}$.
(3) In STEPs 5 and 6, the calculations of both the reduced gradient vector $r^{(k)}$ and the usable direction $d^{(k)}$.
(4) In STEP 7, the line search method used for deciding the step width $\alpha^{(k)}$.
(5) In STEP 9, the Newton method used to obtain a solution satisfying the constraints.

The parallelizing technique adopted in this paper processes operations on vectors or matrices in parallel. For example, the product of two $2 \times 2$ matrices $C = AB$ is calculated as follows: all elements $c_{ij} = \sum_m a_{im} b_{mj}$, $i = 1, 2$, $j = 1, 2$, are calculated individually by multiple processors.
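The paper shows no code for this scheme; purely as an illustration, the following Python sketch distributes the four elements of a $2 \times 2$ product over worker processes, one element per task, in the spirit of the description above. (The original experiments used the Alliant FX/2800, not Python.)

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]

def element(ij):
    """c_ij = sum_m a_im * b_mj, computable independently of every other element."""
    i, j = ij
    return i, j, sum(A[i][m] * B[m][j] for m in range(len(B)))

if __name__ == "__main__":
    C = [[0.0, 0.0], [0.0, 0.0]]
    # Each element c_ij (i = 1, 2; j = 1, 2) is assigned to its own worker process.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for i, j, c in pool.map(element, product(range(2), range(2))):
            C[i][j] = c
    print(C)  # [[19.0, 22.0], [43.0, 50.0]]
```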
Estimation of the GRG parallel algorithm

For the estimation we used an FX/2800 system (Alliant), a parallel computing system with multiple CPUs (Intel i860, 64 bits, 40 MHz clock, 8 CPUs). The GRG parallel algorithm is first evaluated by using the Rosenbrock function as EX.1. The problem is to minimize the Rosenbrock function, whose optimal value is 0.0 at $x_i = 1.0$, $i = 1, 2, 3, \ldots, n$.

EX.1: Rosenbrock function

$$\min f(x) = \sum_{i=1}^{n-1} \big\{ 100 (x_i^2 - x_{i+1})^2 + (1 - x_i)^2 \big\}$$

s.t. $x_i \le 50$, $i = 1, 2, 3, \ldots, n$

initial value: $x_i = 0.5$, $i = 1, 2, 3, \ldots, n$

optimal solution: $f(x^*) = 0.0$, $x_i^* = 1.0$, $i = 1, 2, 3, \ldots, n$
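For reference, a direct NumPy rendering of the EX.1 objective as reconstructed above (the bound $x_i \le 50$ is inactive at the points evaluated and is omitted):

```python
import numpy as np

def rosenbrock(x):
    """f(x) = sum_{i=1}^{n-1} 100*(x_i^2 - x_{i+1})^2 + (1 - x_i)^2  (EX.1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[:-1] ** 2 - x[1:]) ** 2 + (1.0 - x[:-1]) ** 2)

print(rosenbrock([0.5] * 50))  # value at the initial point x_i = 0.5
print(rosenbrock([1.0] * 50))  # 0.0 at the optimal solution x_i = 1.0
```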
Let the required accuracy be $\varepsilon_A = 10^{-4}$; when $f(x^{(k+1)}) - f(x^{(k)}) < \varepsilon_A$, the processing is terminated. The calculation is performed for each number of decision variables $n = 50, 100, 150, 200, 250$. The estimation compares the parallel processing time of the proposed GRG algorithm on 4 CPUs with the sequential processing time of the former GRG algorithm on 1 CPU. The results for EX.1 are shown in Table 1.

Table 1. Comparison of processing times for EX.1

  n     parallel processing      sequential processing    ratio of
        time (4 CPUs) (sec)      time (1 CPU) (sec)       processing time
  50      2.85                     4.81                   0.59
  100    11.05                    19.52                   0.57
  150    24.75                    45.47                   0.54
  200    43.48                    79.17                   0.55
  250    67.37                   127.48                   0.53

In Table 1, the parallel processing time is roughly half the sequential processing time. Since 4 CPUs are used, we expected the parallel processing time to be reduced to about 1/2 to 1/3 of the sequential time, so we consider the results of the estimation to be nearly as expected. Because the computing system used for the estimation mounts high-performance CPUs, sequential processing of a computation-heavy problem such as EX.1 on 1 CPU of this system is already faster than processing on a general-purpose workstation. However, because data communication between processors takes some time, parallel processing is not faster than sequential processing when a small-scale problem is solved. We therefore also estimate the GRG parallel algorithm by using EX.2 as a small-scale problem.
EX.2:

$$\min f(x_1, x_2, x_3) = x_2^2 - x_1 x_2 + \cdots + x_1 + x_2 + x_3$$

s.t.

$$x_1^2 + \cdots - \tfrac{1}{3} x_1 x_3 \le \cdots$$
$$x_1 + x_2 \le 6$$
$$x_1^2 - 6 x_2 x_3 + 4 x_2^2 \ge 0$$
$$x_1, x_2, x_3 \ge 0$$

initial value: $x_1 = x_2 = x_3 = 0.5$

optimal solution: $f(x^*) = 4.22$, $x_1^* = 3.55$, $x_2^* = 2.45$, $x_3^* = 2.03$

Let the required accuracy be $\varepsilon_A = 10^{-6}$; the processing is terminated under the same condition as for EX.1. The results, obtained in the same way as for EX.1, are shown in Table 2.

Table 2. Comparison of processing times for EX.2

  parallel processing      sequential processing    ratio of
  time (4 CPUs) (sec)      time (1 CPU) (sec)       processing time
  0.09                     0.04                     2.25
In the case of EX.2, the parallel processing time is more than twice the sequential processing time. According to the results of the two estimations above, a processing time of at least several seconds is needed before parallel processing of the GRG method pays off. The proposed parallel processing of the GRG algorithm is nevertheless effective in practical use, because most realistic nonlinear programming problems are large-scale.
Conclusion

In this paper, we proposed a parallel processing algorithm for the GRG method on multiprocessor systems and tested it on numerical examples. The experiments indicate that the algorithm achieves faster computation than the former sequential algorithm. The GRG method is frequently adopted in algorithms for solving multiple criteria nonlinear programming problems, and several such algorithms have been proposed to date. The parallel processing algorithm for the GRG method proposed in this paper should therefore be useful for the fast solution of multiple criteria nonlinear programming problems, particularly large-scale ones.
References

(1) Sadagopan, S. & A. Ravindran: Interactive algorithms for multiple criteria nonlinear programming problems, European Journal of OR, 25, pp. 247-257 (1986).
(2) Roy, A. & J. Wallenius: Nonlinear and unconstrained multiple-objective optimization: Algorithm, computation, and application, Naval Research Logistics, vol. 38, pp. 623-635 (1991).
(3) Steuer, R.: Multiple Criteria Optimization: Theory, Computation, and Application, Wiley (1986).
(4) Wolfe, P.: Methods of nonlinear programming, in Recent Advances in Mathematical Programming, McGraw-Hill (1963).
(5) Abadie, J. & J. Carpentier: Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints, Chap. 4 of Optimization, Academic Press (1969).
(6) Hwang, C. L. et al.: Introduction to the generalized reduced gradient method, Inst. for Sys. Design and Opt., Kansas State Univ., No. 39, 38 pp. (1972).
(7) Himmelblau, D. M.: Applied Nonlinear Programming, McGraw-Hill (1972).
(8) Beightler, C. S. et al.: Foundations of Optimization, Prentice-Hall (1979).
(9) Schendel, U.: Introduction to Numerical Methods for Parallel Computers, Ellis Horwood (1984).
(10) Brawer, S.: Introduction to Parallel Programming, Academic Press (1989).
(11) Bertsekas, D. P. & J. N. Tsitsiklis: Parallel and Distributed Computation: Numerical Methods, Prentice-Hall (1989).
(12) Yamada, S. & E. Aiyoshi: A parallel quasi-Newton method by use of group conjugacy, Trans. of the Soci. of Inst. and Contrl. Engin., vol. 25, No. 4, pp. 482-489 (1989, in Japanese).
(13) Fukushima, M.: A parallel algorithm for nonlinear optimization, Systems/Control/Information, vol. 34, No. 4, pp. 223-231 (1990, in Japanese).
(14) Aiyoshi, E. & Y. Sugiuchi: Parallel computation techniques for nonlinear optimization, Journal of the Soci. of Inst. and Contrl. Engin., vol. 29, No. 12, pp. 1070-1076 (1990, in Japanese).
(15) Dongarra, J. J. et al.: Solving Linear Systems on Vector and Shared Memory Computers, SIAM (1991).
(16) Hellerman, E. & D. Rarick: Reinversion with the preassigned pivot procedure, Mathematical Programming, 1 (1972).