
Pergamon. PII: S0305-0548(96)00056-1

Computers Ops Res. Vol. 24, No. 7, pp. 687-697, 1997. © 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain. 0305-0548/97 $17.00+0.00

A NEW HEURISTIC ALGORITHM FOR PROBABILISTIC OPTIMIZATION

Rajani R. Joshi†‡

Department of Mathematics and School of Biomedical Engineering, Indian Institute of Technology, Powai, Bombay 400 076, India

(Received June 1995; in revised form July 1996)

Scope and Purpose: In a large number of practical problems of decision making, planning and design, one has to deal with incomplete data and uncertain or subjective information. Well known examples of such problems in industrial applications include inventory management, quality control and distribution planning under random fluctuations of the market economy or of customer satisfiability indices, as well as problems of personnel management and prediction in equity markets. Modelling techniques in O.R. offer mathematical formulations of such problems as Probabilistic Programming Problems (PPPs). These are hard problems and, despite the development of advanced computing, such PPPs are found to be intractable in general. Our new heuristic approach presents a reliable, fast and general purpose efficient algorithm for solving the PPPs. Our solution technique is flexible with respect to the type of inputs and allows sufficient user interaction for adaptive performance and updates.

Abstract: A new heuristic method is presented for solving a class of constrained probabilistic optimization problems where the random coefficients involved in the objective function are continuous. This optimization technique is general purpose and computationally feasible. The heuristic idea emanates from an analogy of the expected value function with the centre of gravity of a linear physical system. Theoretical results on the efficiency and reliability of the algorithm are presented along with computational results on some standard test problems and a real-world problem of antibody inventory control. The importance and scope of the approach are highlighted. © 1997 Elsevier Science Ltd

1. INTRODUCTION

Consider the following probabilistic programming problem (PPP):

(Pb-I) Minimize $F(x)$ with respect to $x$ such that, for a given function $g: \mathbb{R}^n \to \mathbb{R}^m$, $m \ge 1$:

$$g(x) \ge 0; \quad x \in \mathbb{R}^n \ (\text{or } x \in \mathbb{Z}^n); \quad n \ge 1$$

where $F(x) = E_w[f(x,w)]$;

$f: \mathbb{R}^{n+p} \to \mathbb{R}$ is a given function, and $w$ a $p$-dimensional ($p \ge 1$) continuous random vector. Here $E(\cdot)$ denotes the expected value, and $\mathbb{R}$ and $\mathbb{Z}$ denote respectively the sets of real and integer variables. It is well known that the complexity of Nonlinear Programming Problems, together with the multiple summations and/or multidimensional integrals involved in the expected value expressions of the objective or constraint functions, makes it practically impossible to solve these PPPs efficiently in general. Existing approaches to the above kinds of problems essentially involve approximations (e.g. penalty functions) and often (e.g. in all "descent" direction methods) require certain regularity conditions (such as smoothness, Lipschitz continuity, convexity, differentiability, or the possibility of transformation into linear or quadratic forms), together with appropriate design (e.g. for pseudo-random number generation) and dimensionality reduction techniques (cf. [1-4]).

In this paper we present a new heuristic method for the above PPPs. This method is general purpose and is based on an analogy of the expected value expressions with the centre of gravity (CG) in certain physical systems [5]. We formulate the optimal CG analogy in Section 2 below. The heuristic solution technique is also presented in this section. Computational results for some examples are presented in Sections 3 and 4, followed by a discussion in Section 5. Theoretical analysis of the efficiency and the reliability of the algorithm is given in the Appendix.

† e-mail: [email protected]
‡ Rajani R. Joshi is Associate Professor in the Department of Mathematics and School of Biomedical Engineering, Indian Institute of Technology, Bombay. She has a Ph.D. degree from I.I.T. Bombay and a Doctorate from Université de Technologie de Compiègne, France. Her research interests include probabilistic optimization and information processing in biomolecular systems, heuristic algorithms and machine learning. Her research work is published in Acta Biotheoretica, Bulletin of Mathematical Biology, Computers and Mathematical Modelling, Computers and Mathematics with Applications, Computers and O.R., European Journal of O.R., Information Sciences, Journal of Computational Biology and Journal of Theoretical Biology.
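To make the computational burden concrete, the following sketch (not from the paper; the objective $f$ and the Gamma sampling law are illustrative assumptions) estimates $F(x) = E_w[f(x,w)]$ by plain Monte-Carlo sampling, the kind of repeated $p$-dimensional integration a conventional NLP solver must perform at every candidate $x$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, w):
    # Illustrative objective with random coefficients w (not from the paper).
    return (w @ x) ** 2

def F(x, sample_w, n_samples=100_000):
    # Plain Monte-Carlo estimate of F(x) = E_w[f(x, w)]; an exact evaluation
    # would need a p-dimensional integral against the p.d.f. of w, and a
    # conventional NLP solver repeats this at every candidate x.
    ws = sample_w(n_samples)              # shape (n_samples, p)
    return np.mean([f(x, w) for w in ws])

x = np.array([0.2, 0.5, 0.3])
sample_w = lambda m: rng.gamma(shape=2.0, scale=1.0, size=(m, 3))
print(F(x, sample_w))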

2. OPTIMAL CG EQUIVALENCE AND HEURISTIC TECHNIQUE

Consider the problem (Pb-I) with $U_x \subseteq \mathbb{R}^n$ as the set of (continuous or discrete) feasible solutions $x$. Let $w$ be a continuous random vector on $\Omega_w \subseteq \mathbb{R}^p$ with known probability density function $\mu_w(\cdot)$, satisfying

$$\int_{\Omega_w} \mu_w(u)\,du = 1; \quad \mu_w(\cdot) \ge 0 \tag{1}$$

Let

$$R_{min} = \min\{f(x,w) \mid x \in U_x,\ w \in \Omega_w\} \tag{2}$$

$$R_{max} = \max\{f(x,w) \mid x \in U_x,\ w \in \Omega_w\}$$

Now, by definition,

$$F(x) = \frac{\int_{\Omega_w} f(x,u)\,\mu_w(u)\,du}{\int_{\Omega_w} \mu_w(u)\,du}$$

Whence $F(x)$ is also equal to the centre of gravity (CG) of the system $S_x$ having a continuous distribution of unit mass between $y = R_{min}$ and $y = R_{max}$ along the horizontal rod at $x$ (as illustrated in Fig. 1). Hence the problem of minimizing $F(x)$ in (Pb-I) is equivalent to the following Nonlinear Programming Problem:

Optimal CG problem. (Pb-II) Find $x \in U_x \cap \Gamma_x$ such that $y(x)$ is minimized, where $y(x)$ satisfies

$$\int_{u=R_{min}}^{y(x)} h(x;u)\,du = \frac{1}{2} \tag{3a}$$

$$h(x;u) = \frac{\partial}{\partial u} \Pr\{f(x,w) \le u\} \tag{3b}$$

where

$$\int_{R_{min}}^{R_{max}} h(x;u)\,du = 1 \quad \forall x \in U_x$$

and $\Gamma_x = \{x \in \mathbb{R}^n \mid f(x,\xi) = R_{min} \text{ for some } \xi \in \Omega_w\}$; other terms are as defined above. Noting that $h(\cdot,\cdot) \ge 0$ by definition, and considering that the CG (in the equivalent physical system) would be close to $R_{min}$ if and only if the heavier weights (i.e., higher probability masses) are placed near the latter, we propose the following heuristic technique for solving (Pb-I) with continuous $w$.
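A minimal sketch of the CG reading of (3a): for fixed $x$, $y(x)$ is the median of the distribution of $f(x,w)$, while $F(x)$ is its mean, i.e. the CG of $S_x$. The objective $f$ and the Gamma law for $w$ below are illustrative assumptions, not the paper's test data.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, w):
    # Hypothetical scalar objective; any f(x, w) serves for this illustration.
    return x * (1.0 + w)

x = 2.0
vals = f(x, rng.gamma(2.0, 1.0, size=200_000))   # samples of f(x, w)

F_x = vals.mean()        # F(x): the centre of gravity of the unit mass on S_x
y_x = np.median(vals)    # y(x) from (3a): half of the mass lies on either side
print(F_x, y_x)          # mean (CG) and median differ for a skewed h(x, .)
```

Pushing probability mass toward $R_{min}$ pulls both the CG and the median toward $R_{min}$, which is the observation the heuristic below exploits.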

2.1. Heuristic solution technique

Find $x \in U_x \cap \Gamma_x$ such that $H^*(x)$ is maximized, where

[Figure 1 near here.]

Fig. 1. Illustration of $F(x)$ ($= y(x)$) as the centre of gravity (CG) of $S_x$. For $x \in \mathbb{R}$, $S_x$ represents a horizontal rod OQ placed at height $x$. This rod has linear density $h(x,u)$ at a point A whose vertical and horizontal co-ordinates are $(x,u)$; hence the weight of the portion AB shown above will be $h(x,u)\,du$ units. The rod has total weight = 1 unit. Clearly, if the CG of the rod lies at a point $P(x, y(x))$, then the weight of the portion OP = weight of the portion PQ = 0.5 units. (Hence equation (3) in the text.) Note that, for the application of the optimal CG analogy, it is only the relative position (i.e., the distance from $R_{min}$ along the direction of $f(x,\cdot)$) of the CG of $S_x$ that plays a role in the formulation of the equivalent problem. Whether the system $S_x$ is placed in a two-dimensional or higher-dimensional space does not affect this analogy. Hence the application of the optimal CG analogy is valid for $x \in \mathbb{R}^n$ for any $n \ge 1$.


$$H^*(x) = \sum_{i=1}^{M} h(x; R_{min} + i\delta) \tag{4}$$

where $\delta > 0$ and $M$, $K_M$ are integers with $1 \le M < K_M$, chosen such that

$$R_{min} + M\delta \le \frac{R_{max} - R_{min}}{K_M} \tag{5}$$

Using the approach of our algorithm WTCOMB [5], the above parameters are controlled so that the higher probabilities (namely those $h(x,\cdot)$ in the above summation whose total is $\ge 0.5$) at any given $x$ lie between $R_{min}$ and $R_{min} + M\delta$. This ensures that $R_{min} + M\delta \le y(x)$.
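As one concrete reading of (4), the sketch below estimates $H^*(x)$ from samples of $f(x,w)$ when $h$ is not available in closed form; the histogram surrogate for $h$ and the sampling interface are our assumptions, not part of the paper's method.

```python
import numpy as np

def H_star(x, sample_f_given_x, R_min, delta, M, n=200_000):
    # Estimate H*(x) = sum_{i=1..M} h(x; R_min + i*delta) from samples of
    # f(x, w): each bin [R_min+(i-1)*delta, R_min+i*delta] carries probability
    # mass ~ delta * h(x; R_min + i*delta), so bin counts / (n * delta)
    # approximate the density values h that equation (4) sums.
    vals = sample_f_given_x(x, n)
    edges = R_min + delta * np.arange(M + 1)
    counts, _ = np.histogram(vals, bins=edges)
    return counts.sum() / (n * delta)
```

Maximizing this quantity over feasible $x$ with any NLP solver is exactly the search of Section 2.1: the more probability mass sits near $R_{min}$, the larger $H^*(x)$ becomes.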

2.1.1. Mathematical interpretation of $h(\cdot,\cdot)$

Theoretically, the infinitesimal element $du$ in $d(h(\cdot,u))$ would be equivalent to $dw$ involved in $d(\mu_f(\cdot,w))$ in the Lebesgue sense, where $\mu_f(\cdot,w)$ represents the probability distribution function of $f(\cdot,w)$. It is interesting to note that in our heuristic approach this $du$ would be equivalent to the heuristic parameter $\Delta$ used in Joshi [5]. Hence (considering the physical system analogy as used for the equivalence between $\{p_i\}$ and $\{q_i\}$ in Joshi [5]), we can devise heuristics to compute $h(\cdot,\cdot)$ without an explicit derivation by Jacobian transformations from the probability density function of $w$. These heuristic techniques would in general be specific to the form of $f(\cdot,\cdot)$ and may sometimes impose extra feasibility constraints on the decision variables $x$. Some applications of such techniques are illustrated for different test problems in Section 3 below. Theoretical results on the existence of optimal parameters for which the above algorithm converges to the true optimal solutions are presented in the Appendix.

2.1.2. Implementation of the algorithm

Our algorithm (to be called NEWSPPE) can be successfully used to solve any probabilistic optimization problem once $h(x,u)$ is expressed as a function of the probability density function (p.d.f.) of the random vector $w$. If the p.d.f. of $w$ is not known, its estimate (obtained from a random sample of observations) or its "equivalent" (derived from the heuristics applicable to the given problem) may be used. The following equivalence relation offers a good heuristic here.

2.2. Heuristic based on equivalence of probability

The following equivalence relation (derived from the basic definition of probability) is used below to simplify the derivation/computation of $h(\cdot,\cdot)$: Let $y_{n \times 1}$ be an $n$-dimensional random column vector and $d_{n \times 1}$ be any given real column vector, $d \ne 0$. Then (denoting a transpose by superscript $T$), for any given scalar $\alpha$,

$$\text{the event } y = \frac{d\alpha}{d^T d} \text{ is the equivalent of the event } d^T y = \alpha \tag{R-I}$$
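A quick numerical illustration of (R-I) (our own check, with arbitrary $d$ and $\alpha$): the point $y = d\alpha/(d^T d)$ indeed satisfies the linear event $d^T y = \alpha$, which is what the heuristic exploits when it replaces the hyperplane event by this single representative point.

```python
import numpy as np

d = np.array([1.0, -2.0, 0.5])      # any given nonzero real column vector
alpha = 3.7                         # any given scalar

y = d * alpha / d.dot(d)            # the point event {y = d*alpha / (d^T d)} ...
print(np.isclose(d.dot(y), alpha))  # ... satisfies the event {d^T y = alpha}: True
```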

Stepwise description of the use of NEWSPPE is given in Fig. 2.

3. TEST PROBLEMS AND COMPUTATIONAL RESULTS

The following test problems from Schittowski and Sengupta [6,7] and Joshi [8] were used to illustrate the performance of our approach vis-à-vis conventional approaches (which use the expected value, namely $E_w(f(x,w))$, as the objective function). The test problem TP#1 below is chosen because the geometric programming problem is one of the standard problems considered to be computationally hard. The problem TP#2 represents a large class of profit maximization (or energy minimization) problems. Although these problems belong to the class of extensively studied test problems, they represent the most general form of hard problems as far as the application of the equivalence relation (R-I) is concerned. These problems are chosen here mainly to illustrate how $h(x,u)$ can be expressed explicitly in terms of the probabilities of $w$ using the heuristic based on (R-I). Problem TP#3 is related to an entirely different and novel application in adaptive inventory control. Here the demand (antigen) and the product (antibody) are continuous random variables and there is a nonlinear probabilistic relationship between them. Namely, a quantity $y$ of the demand can be satisfied by a quantity $x$ of the product with a probability $p^*(x,y;k_b,k_g)$, where $k_b$ denotes the maximum number of

quality levels of the product, at least one of which must match one of the $k_g$ demand specifications. Interestingly, the production rate, and hence the p.d.f. of $x$, also depends on $p^*(x,y;k_b,k_g)$. The origin and immunological implications of the model are given in our recent paper [8]. The importance of this antibody inventory control problem in modern industrial applications is presented in Joshi [9], where we have analyzed a multiproduct discrete model. The continuous case considered here is immunologically more realistic but harder to analyze, as the probability function $p^*(x,y;k_b,k_g)$ is not known explicitly.

[Figure 2 near here: a flowchart of NEWSPPE. The recoverable steps are: read $R_{min}$ and $R_{max}$ (the lower and upper bounds on the objective function) and ITMAX (the maximum number of iterations to be allowed); read $\Omega_w$ from the real problem data; if the p.d.f. of $w$ is known, derive the p.d.f. of $f(x,w)$ using an equivalent transformation and a relation of type (R-I) and define $h(x,\cdot)$ using equation (3b); else, set updates = 'allowed', collect a sample from $\Omega_w$, estimate the p.d.f. of $w$ using standard statistical software (e.g. SAS) and fit a multiplex polynomial $h(x,\cdot)$ using these estimates in (3b) and relation (R-I); read the feasibility constraints and define $\Gamma_x = \{x \in \mathbb{R}^n \mid h(x, R_{min}) \ne 0\}$; set ITER = 1 and initialize the heuristic parameters $\delta = o(0.1)$, $K_M = o(N \cdot \text{ITMAX})$ and $M$ = any integer in $(5, K_M/2)$, where $N$ is the dimension of $x$; check $R_{min} + M\delta < (R_{max} - R_{min})/K_M$, reducing $K_M$ or $M$ if necessary; define $H^*(x) = \sum_{j=1}^{M} h(x; R_{min} + j\delta)$ and solve $\max H^*(x)$ s.t. $x \in U_x \cap \Gamma_x$ with a standard NLPP solver (e.g. NLPQL); record the solution $x_{opt}$, then set $M_0 = M$, $\delta_0 = \delta$, $M = M_0 + 1$, $\delta = \delta_0 M_0 / M$ and iterate until ITER > ITMAX; output the best solution $(x_{opt}, \text{soln})$.]

Fig. 2. Stepwise description of the use of NEWSPPE to solve any practical problem of decision making or optimal planning, etc., which has been formulated, using appropriate OR-modelling, as a PPP of the type (Pb-I).
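The following Python sketch is one possible reading of the Fig. 2 loop. The parameter initializations and updates follow the flowchart, while the solver (SciPy's SLSQP standing in for NLPQL), the stopping test and the bookkeeping are our simplifications; $h(x,u)$ must be supplied by the caller, e.g. via (R-I) or a p.d.f. fitted from sample data.

```python
import numpy as np
from scipy.optimize import minimize

def newsppe(h, R_min, R_max, x0, bounds, itmax=20):
    # Simplified reading of the NEWSPPE flowchart (Fig. 2).
    N = len(x0)
    delta, K_M = 0.1, N * itmax            # delta = o(0.1), K_M = o(N * ITMAX)
    M = max(2, K_M // 4)                   # an integer below K_M / 2
    best_x, best_H = np.asarray(x0, float), -np.inf
    for _ in range(itmax):
        def neg_H(x):                      # -H*(x) = -sum_i h(x, R_min + i*delta)
            return -sum(h(x, R_min + i * delta) for i in range(1, M + 1))
        res = minimize(neg_H, best_x, bounds=bounds, method="SLSQP")
        if -res.fun > best_H:
            best_x, best_H = res.x, -res.fun
        M, delta = M + 1, delta * M / (M + 1)   # Fig. 2: M = Mo+1, delta = delta_o*Mo/M
        if R_min + M * delta > (R_max - R_min) / K_M:
            break                          # feasibility check of (5), simplified
    return best_x, best_H
```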


3.1. Problem TP#1: Stochastic geometric programming problem with continuous $w_{n \times 1}$

Minimize (with respect to $x$) the function $E_w(f(x,w))$,

such that $10^{-4} \le x_i \le 1$ and $\sum_{i=1}^{n} x_i a_{ij} = 0$, $j = 1,\ldots,m$,

for given $m$, $n$; $1 \le n_0 < n_1 < \cdots < n_{m-1} < n$ and

$$\lambda_k = \sum_{i=n_{k-1}+1}^{n_k} x_i; \quad k = 1,\ldots,m-1.$$

$w_{n \times 1}$ is a continuous random vector with $l_i \le w_i \le b_i$, $i = 1,2,\ldots,n$.

Determination of $h(\cdot;\cdot)$. For any given $u > R_{min}$, $h(x,u)$ is by definition the density of the event $f(x,w) = u$ (with $\lambda_k$ etc. as above). Taking logarithms of this event gives the following.

Hence,

$$h(x,u) = \Pr\{x^T v = 0\}; \quad v = (v_i)_{n \times 1}; \quad v_i = \ln\!\left(\frac{w_i}{S x_i}\right)^{1/(\sum_i x_i)}$$

with the factor $S$ determined by $u$ and the data above. Now, applying equivalence (R-I), we get

$$h(x,u) = \Pr\{v = 0\} = \Pr\{w_i = S x_i \ \ \forall i = 1,\ldots,n\}$$

Further, by definition, all the marginal probabilities play an equal role in the application of the "optimal CG heuristic" (i.e., in the selection of higher probability mass placed near $R_{min}$ on $S_x$). Hence, for the purpose of maximization of $\sum h(x,\cdot)$, we approximate $h(x,u)$ by

$$h(x,u) \approx n \Pr\{w_i = S x_i \text{ for at least one } i = 1,\ldots,n\} \tag{6}$$
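A hedged sketch of heuristic (6) for TP#1, with the Gamma densities of this section standing in for the point probabilities $\Pr\{w_i = S x_i\}$ and an inclusion-exclusion product for the "at least one" event; the factor $S = S(x,u)$ is taken as a caller-supplied input, and the clipping of density values to $[0,1]$ is our assumption.

```python
import numpy as np
from scipy.stats import gamma

a  = np.array([1.5, 0.5, 0.5, 1.0, 1.8, 1.0])  # Gamma shape parameters a_i (Sec. 3.1)
mu = np.array([2.0, 1.0, 1.0, 4.0, 3.0, 1.0])  # Gamma scale parameters mu_i

def h_tp1(x, S):
    # Heuristic (6): h(x, u) ~ n * Pr{w_i = S*x_i for at least one i}. Point
    # probabilities are represented by Gamma density values, clipped to [0, 1]
    # so that the inclusion-exclusion product stays a pseudo-probability;
    # S = S(x, u) is supplied by the caller.
    x = np.asarray(x, float)
    p = np.clip(gamma.pdf(S * x, a[:len(x)], scale=mu[:len(x)]), 0.0, 1.0)
    return len(x) * (1.0 - np.prod(1.0 - p))
```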

The above problem was solved with $m = 3$, $n_0 = 1$, $n_1 = 3$ and $n_2 = 5$, for $n = 5$ and $n = 6$. The following values were used in the computational experiments for $w_i$ distributed as Gamma($a_i$, $\mu_i$), $i = 1,2,\ldots,n$, with truncation scale $\in [1.75, 2.1]$; the data were collected from examples in Schittowski [6] and Sengupta [7]:

$l = (0.5,\ 0.5,\ 0.5,\ 0.5,\ 0.5,\ 0.5)$
$b = (950,\ 950,\ 950,\ 950,\ 950,\ 950)$
$a = (1.5,\ 0.5,\ 0.5,\ 1.0,\ 1.8,\ 1.0)$
$\mu = (2.0,\ 1.0,\ 1.0,\ 4.0,\ 3.0,\ 1.0)$

The coefficient matrix $A$ was as below:

$$A = \begin{pmatrix} 2.3 & 4.5 & 6.6 \\ 5.1 & 8.4 & 2.5 \\ -1.2 & 4.6 & 2.2 \\ 1.5 & -0.4 & 1.1 \\ 6.5 & -2.6 & 7.1 \\ 0.12 & -0.89 & 2.5 \end{pmatrix}$$


Remark. It may be noted that the standard NLP algorithm would perform at its best here if the p.d.f. of $w$ is selected from the family of exponential distributions, because of the relative computational simplicity of the numerical integration [10,11] necessary for the evaluation of the expected value $E(\cdot)$. Hence the choice of the Gamma distribution here gives us an opportunity to compare the performance of our algorithm (NEWSPPE) with the best possible performance of its counterpart (viz. the algorithm NLPQL), which is based on the conventional NLP approach. Note that the expression for $h(x,u)$ given by (6) above can be computed for any general probability distribution of the $w_i$ (given theoretically, estimated from sample data, or modelled qualitatively); hence our algorithm (NEWSPPE) can also be applied to any other set of data or any general p.d.f. of $w$ with equally good performance characteristics.

3.2. Problem TP#2: Profit maximization problem under stochastic demand

$$\min_x E_w(f(x,w)); \quad f(x,w) = -\eta^T x - \sum_{i=1}^{n} g_i(x,w)$$

such that $10^{-4} \le x_i$, where

$$g_i(x,w) = \begin{cases} \beta_i \hat{u}_i & \text{if } \hat{u}_i \ge 0 \\ -\delta_i \hat{u}_i & \text{otherwise} \end{cases}; \quad \hat{u}_i = w_i - (Ax)_i,\ i = 1,2,\ldots,n.$$

$A_{n \times n}$ is a given matrix and $(Ax)_i$ denotes the $i$th component of $(Ax)_{n \times 1}$; $\eta$, $\beta$, $\delta$ are given coefficient vectors and $w_{n \times 1}$ is a continuous random vector.

Determination of $h(\cdot;\cdot)$. By definition,

$$h(x,u) = \Pr\left\{\left(-\eta^T x - \sum_{i=1}^{n} g_i(x,w)\right) = u \text{ for some } w \in \Omega_w\right\}$$

Applying the equivalence relation (R-I) above, we get, for given $u > R_{min}$,

$$h(x,u) = \Pr\{w = (d^T d)^{-1} d(-u - \eta^T x) + Ax\} \tag{7}$$

where $d_{n \times 1} = (d_i)_{n \times 1}$ with

$$d_i = \begin{cases} -\delta_i & \text{if } \Pr\{w_i \le (Ax)_i\} > \Pr\{w_i > (Ax)_i\} \\ \beta_i & \text{otherwise} \end{cases}$$

The above problem was solved for $n = 6$ with $l_i \le w_i \le b_i$ and the marginal p.d.f. of $w_i$ as truncated Gamma($a_i$, $\mu_i$) for scale 75, and the following parameter values:

$l = (2.5,\ 2.5,\ 2.5,\ 2.5,\ 2.5,\ 2.5)$
$b = (950,\ 950,\ 950,\ 950,\ 950,\ 950)$
$a = (0.015,\ 0.016,\ 0.035,\ 0.012,\ 0.033,\ 0.04)$
$\mu = (1.1,\ 2.4,\ 2.3,\ 1.6,\ 1.9,\ 1.0)$
$\beta = (3.02,\ 2.98,\ 4.21,\ 2.01,\ 3.96,\ 3.22)$
$\eta = (3.57,\ 2.99,\ 4.11,\ 2.78,\ 2.81,\ 2.83)$
$\delta = (0.78,\ 1.21,\ 1.11,\ 1.56,\ 1.89,\ 1.02)$

The coefficient matrix $A$ was as below:

$$A = \begin{pmatrix}
2.3 & 4.5 & 6.6 & 2.2 & 1.4 & 11.2 \\
5.1 & 8.3 & 2.5 & 3.2 & 3.2 & 8.7 \\
1.2 & 4.6 & 2.2 & 0.99 & 1.2 & 6.4 \\
1.5 & 0.40 & 1.1 & 1.3 & 2.5 & 4.3 \\
6.5 & 2.6 & 7.1 & 3.4 & 1.1 & 3.2 \\
4.4 & 1.2 & 0.98 & 2.3 & 5.5 & 7.4
\end{pmatrix}$$
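Equation (7) can be evaluated directly once $d$ is fixed. In this sketch (ours, assuming independent Gamma marginals for the $w_i$) the point probability is represented by the product of the marginal density values at the point picked out by (R-I):

```python
import numpy as np
from scipy.stats import gamma

def h_tp2(x, u, A, eta, d, a, mu):
    # Equation (7): h(x, u) = Pr{ w = (d^T d)^{-1} d (-u - eta^T x) + A x }.
    # Assuming independent Gamma marginals for the w_i, the point probability
    # is represented by the product of the marginal density values at w.
    w = d * (-u - eta.dot(x)) / d.dot(d) + A.dot(x)
    return float(np.prod(gamma.pdf(w, a, scale=mu)))
```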


Remark. As $h(x,u)$ is represented here (cf. (7) above) explicitly in terms of the probability of $w$, our algorithm (NEWSPPE) can be implemented with equally good performance for any (known, estimated or modelled) general p.d.f. of $w$. However, the computational experiments were carried out for random data (collected from Schittowski and Sengupta [6,7]) and using the p.d.f. of $w$ as a Gamma distribution, because the latter is likely to favour the other algorithm (i.e. NLPQL) and thus offers a harder test of comparison for our algorithm (NEWSPPE). The results of the experiments on the above example problems TP#1 and TP#2 are given in Section 4 below.

3.3. Problem TP#3: Antibody memory (inventory) optimization in the immune system [8,9]

$$\min_{x \in [I_{min},\, I_{max}]} E_w(f(x,w))$$

where $I_{min}$, $I_{max}$ are given values (from controlled experiments in molecular immunology) and $w_{2 \times 1} = (w_1, w_2)^T$; $w_1 \in (0,1)$ and $w_2 \in (0,1)$ denote the proportions of bound antibodies (Ab) and antigens (Ag) respectively. Here the cost (energy [8]) function is given by

$$f(x,w) = C_h(1 - w_1)(x - \zeta + c_1) + C_s(1 - w_2)d_0$$

with given positive parameters $C_h$, $C_s$, $\zeta$, $c_1$ and $d_0$.

Determination of $h(\cdot;\cdot)$. Here,

$$h(x;u) = \Pr\{f(x,w) = u\} = \Pr\{Q^T w^* = u\}; \quad u > 0$$

where (from the expression of $f(x,w)$ above),

$$Q = (C_h(x - \zeta + c_1),\ C_s d_0)^T \quad \text{and} \quad w^* = ((1 - w_1),\ (1 - w_2))^T$$

Equivalently (using (R-I)),

$$h(x;u) = \Pr\left\{w^* = \frac{Qu}{Q^T Q}\right\} \tag{8}$$

We determine $h(\cdot;\cdot)$ from the above relation using the following expression [8] for the p.d.f. of $w^*$:

$$\Pr\{w^* = \alpha\} = \eta\, f^{Ab}_{x^*}(\alpha_1)\, f^{Ag}_{x^*}(\alpha_2); \quad \alpha_1, \alpha_2 \in \mathbb{R}^+$$

where $x^*$, the free Ab concentration, $= x - \zeta + c_1$; $\eta$ is a normalizing factor and $f^{Ab}_{x^*}(\cdot)$, $f^{Ag}_{x^*}(\cdot)$ are quasi-quadratic functions (explicit expressions for the estimators of the p.d.f. of $w^*$ are available in Joshi [8]) based on the affinity (quality-match) between the involved Ab (product) and Ag (demand) molecules (types). It is interesting to note that in the above problem the p.d.f. of $w$ (and the p.d.f. of $w^*$) is also a function of $x$. The above problem was solved for different sets of parameters of immunological relevance (biochemical "cost" constraints). Results corresponding to the following parameter sets are presented here.

Parameters:   C_h    C_s     zeta   c_1    d_0    I_min   I_max
Set-I         21.8   54.5    3      152    100    7       750
Set-II        42.6   221.1   12     884    349    114     4000
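A sketch of (8) for TP#3 (ours); the quasi-quadratic densities $f^{Ab}$, $f^{Ag}$ of Joshi [8] are not reproduced here, so they are taken as caller-supplied functions, and $\eta$ defaults to 1.

```python
import numpy as np

def h_tp3(x, u, C_h, C_s, zeta, c1, d0, f_ab, f_ag, eta=1.0):
    # Equation (8): h(x; u) = Pr{ w* = Q u / (Q^T Q) } with
    # Q = (C_h*(x - zeta + c1), C_s*d0)^T. The quasi-quadratic marginal
    # densities f_ab, f_ag of [8] are caller-supplied; eta is the
    # normalizing factor.
    Q = np.array([C_h * (x - zeta + c1), C_s * d0])
    alpha = Q * u / Q.dot(Q)
    if not np.all((alpha > 0.0) & (alpha < 1.0)):
        return 0.0               # the components of w* are proportions in (0, 1)
    return eta * f_ab(alpha[0]) * f_ag(alpha[1])
```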

4. RESULTS

The problems TP#1-TP#3 above were solved computationally by our heuristic approach (NEWSPPE) and by a conventional NLP approach using a general purpose code (NLPQL) on a CDC Cyber 180/840 at IIT Bombay. The IMSL subroutines DNCONG/NCONG and DNCONF/NCONF, based on the successive quadratic programming algorithm of Schittowski [12], were used to solve the involved NLPPs, and the subroutines QDAG, QDAGI and QAND [10,11] were used to evaluate the integrals wherever necessary. The maximum number of iterations (MAXITN) for the NLPP solvers was set at 150 in each

Table 1. Computational results using our heuristic based algorithm NEWSPPE and a conventional general purpose NLP code NLPQL.

Test problem | n | p | n.i. | u_opt = n.i.+Mδ | x_opt (NEWSPPE) | F*_opt | NLPQL (x_o, F*_opt) | Solution based on theoretical analysis
TP#1    | 5 | 5 | 1.0e-12 | 1.5e-06 | (0.0001, 4.248, 4.241, 4.803, 2.447)          | 0.27 | No convergence in QAND/DQAND | Not available
TP#1    | 6 | 6 | 1.0e-12 | 1.5e-06 | (0.0001, 4.274, 4.267, 4.857, 2.447, 2.192)   | 0.55 | No convergence in QAND/DQAND | Not available
TP#2    | 6 | 6 | -651.0  | -625.5  | (0.001, 683.889, 0.001, 313.470, 0.001, 0.001) | 0.21 | No convergence in QAND/DQAND | Not available
TP#3 I  | 1 | 2 | 0.0     | 3.816   | 7.6                                            | 0.97 | (7.0, 0.0)   | x_min = 7.0
TP#3 II | 1 | 2 | 0.0     | 4.957   | 120.7                                          | 0.78 | (114.0, 0.0) | x_min = 120.0
experiment. The initial solution guesses (XGUESS) $x^0$ were chosen arbitrarily with $x^0_i \in (0.1, 260)$, $i = 1,\ldots,n$. In the results above (Table 1) we have shown the components of the optimal solution rounded to the 3rd decimal place, unless otherwise necessary. For the conventional NLP approach using NLPQL, $F^*_{opt}$ is the optimal expected value of the objective function of the minimization problem stated in the corresponding test problem above. In our approach (NEWSPPE) for continuous $w$, $F^*_{opt}$ indicates the optimal value of $\sum_{i=1}^{M_0} h(x, R_{min} + i\delta_0)$ with respect to $x$, for the best values of the involved heuristic parameters. Tables 2-4 show the overall computational costs in terms of SRUs (System Resource Units, indicating the costs due to CM storage and CPU time) used for the corresponding runs on the NOS of the CDC Cyber 180/840 at the Indian Institute of Technology, Bombay.

Table 2. Comparison of SRUs used by our approach (NEWSPPE) and a conventional NLP approach (NLPQL) for the problem TP#1

Method    n=p   Convergence       SRUs
NEWSPPE   5     Successful        0.171
          6     Successful        0.178
NLPQL     5     No convergence*   0.089-0.139
          6     No convergence*   0.089-0.139

Table 3. Comparison of SRUs used by our approach (NEWSPPE) and a conventional NLP approach (NLPQL) for the problem TP#2

Method    n=p   Convergence       SRUs
NEWSPPE   6     Successful        0.682
NLPQL     6     No convergence*   0.705

Table 4. Comparison of SRUs used by our approach (NEWSPPE) and by a conventional NLP approach (NLPQL) for the problem TP#3

Method    n   p   Parameter set number   Convergence   SRUs
NEWSPPE   1   2   I                      Successful    0.464
          1   2   II                     Successful    0.319
NLPQL     1   2   I                      Successful    0.829
          1   2   II                     Successful    0.621

*No convergence was achieved (not even for a single NCONG iteration) in the computation of the multiple integrals involved in the objective function here. The SRUs were computed only until the unsuccessful termination of the corresponding IMSL routine.

5. CONCLUSIONS AND SCOPE

The PPPs with continuous random coefficients are analytically and computationally intractable in general because of the multiple integrals involved in the expected value expressions. Finding efficient resolution techniques for such problems is therefore an important area of fundamental as well as applied research in Mathematical Programming. In this paper we have presented a new general purpose heuristic method for the above class of problems. This method is computationally feasible and reliable. The heuristic idea emanates from an analogy of the expected value function with the centre of gravity in a linear physical system. Our method does not require any specific mathematical properties (regularity conditions) to be satisfied by the objective or the constraint functions. Further, discrete problems [5] can also be solved as particular cases under this approach. The most important contribution of our algorithm is that it does not require any computation of integrals or multiple summations in the probability space (sample space) of

the random coefficients, and it is therefore also efficiently applicable to PPPs where the p.d.f.s of the random coefficients are not known or where there is incomplete information on the latter. Heuristic approaches in general give approximate solutions. However, our method offers control of the involved heuristic parameters in a plan-and-execute manner. We have also shown (cf. Appendix) the existence of heuristic parameters for which this algorithm gives the true optimal solution. The computational simplicity of our method vis-à-vis the conventional general purpose approach to PPPs is obvious, because the latter involves multiple integrations. Moreover, the superiority of our algorithm increases with the dimension of the problem (i.e. with increase in $n$ and/or $p$); this is because of the following. The complexity of the conventional NLP approach increases as $p$ (the number of random coefficients, and hence the level of multiple integration involved in the expected value expression) and/or $n$ (the dimension of $x$, and hence of the search space) increases. However, in our algorithm, once $h(x,u)$ is derived, the solution is obtained by the technique presented in Section 2.1 above. This solution technique does not depend on $p$, and the objective function $H^*(x)$ does not involve any integration (or multiple summations). It may also be noted (cf. (A5) and (A6) in the Appendix) that for any PPP this objective function (which is to be maximized) would be concave whenever that (viz. $F(x)$) of the minimization problem under the standard NLP approach is convex. Hence, with respect to an increase in $n$, our algorithm would perform better than the best performance of the NLP-based general purpose algorithms. Most importantly, $H^*(x)$ here can be approximated by a quadratic form (since each $h(x,\cdot) \in (0,1)$ $\forall x$ and $\max H^*(\cdot) \le 1$) using piecewise continuous least squares approximations, and thus the algorithm can also be used with an $o(n^{3/2})$ complexity.

As stated earlier, the key to the best utilization of the heuristic approach in our algorithm is the derivation/estimation of $h(x,u)$. The transformations obtained from the equivalence relation (R-I) are of particular importance in this regard. We have successfully implemented the algorithm using this equivalence relation on some example problems. The computational results have also signified the efficiency (best solutions with least SRUs) of our algorithm (NEWSPPE) as compared to the conventional NLP approach, even when the given p.d.f.s were likely to support the latter more. Results on some representative problems are presented here. The example problems TP#1 and TP#2 correspond to hard problems with respect to the direct application of the rule (R-I). As specified earlier, our algorithm can also be applied to any general p.d.f. of $w$, or to problems where the p.d.f. of $w$ is not known but estimated from available observations (sample data) on the random coefficients. The results on the problem TP#3 present a comparison between the optimal solutions obtained by our algorithm using such estimators and those derived theoretically using known p.d.f.s; the average relative error between the theoretical and computed solutions here is of the order of 0.01; this small error can also be eliminated if we allow the search space to vary in the open interval $(I_{min}, I_{max})$ instead of a closed one. Results of the implementation of this algorithm on large scale real applications (case studies) will be reported separately in successive papers.
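As one reading of the quadratic-approximation remark above (our sketch, one-dimensional for brevity, not the paper's piecewise scheme): fit a least-squares quadratic surrogate to sampled values of $H^*$ and maximize it in closed form.

```python
import numpy as np

def quad_surrogate_max(xs, Hs):
    # Least-squares quadratic surrogate H*(x) ~ c2*x^2 + c1*x + c0 over sampled
    # points (one-dimensional x for brevity), maximized in closed form.
    c2, c1, c0 = np.polyfit(xs, Hs, 2)
    if c2 >= 0.0:                 # surrogate not concave: fall back to best sample
        return xs[int(np.argmax(Hs))]
    return -c1 / (2.0 * c2)       # vertex of the concave parabola
```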
Theoretical analysis of the variation in (i) the rate of convergence to the true optimal solution or (ii) the difference between the true and the computed solutions, with respect to the heuristic parameters $M$ and $\delta$, together with the extension of NEWSPPE to reduce the complexity of the general purpose NLP approach to chance-constrained PPPs and the analysis of the reliability and efficiency of this algorithm, will offer interesting research problems.

Acknowledgements: The author is grateful to the anonymous reviewers for their insightful comments and suggestions. The preparation of this manuscript is supported by a project sponsored by the Dept. of Biotechnology, Govt. of India. The author wishes to thank the DBT for this grant.

REFERENCES

1. Prekopa, A. and Wets, R. J. B., Stochastic programming, Parts I-II. Mathematical Programming Studies, 1986, 27, 28.
2. Römisch, W. and Schultz, R., Distribution sensitivity in stochastic programming. Mathematical Programming, 1991, A50(2), 197-226.
3. Ruszczynski, A., Feasible direction methods for stochastic programming problems. Mathematical Programming, 1980, 19(2), 220-229.
4. Shapiro, A., On differential stability in stochastic programming. Mathematical Programming, 1990, A47(1), 107-116.
5. Joshi, R. R., A new approach to stochastic programming problems: discrete model. European Journal of Operational Research, 1995, 83, 514-529.
6. Schittowski, K., More test examples for nonlinear programming codes. Lecture Notes in Economics and Mathematical Systems, Vol. 282. Springer-Verlag, Berlin, 1987.
7. Sengupta, J. K., Stochastic Programming. North-Holland, Amsterdam, 1972.

8. Joshi, R. R., Ab-Ag affinity thresholds in inventory optimization. Acta Biotheoretica, 1994, 42(4), 295-313.
9. Joshi, R. R., Immune network memory: an inventory approach. Computers and Operations Research, 1995, 22(6), 575-591.
10. Atkinson, K., An Introduction to Numerical Analysis. John Wiley and Sons, New York, 1978.
11. Piessens, R., de Doncker, E., Überhuber, C. W. and Kahaner, D. K., QUADPACK. Springer-Verlag, New York, 1983.
12. Schittowski, K., NLPQL: A FORTRAN subroutine solving constrained NLPPs. Annals of Operations Research, 1986, 5.

APPENDIX

EFFICIENCY AND RELIABILITY

Let $x_{opt}$ denote the true optimum solution of the given (Pb-I) with continuous $w$. Let $\hat{x}_{opt}$ denote the optimum solution obtained by our heuristic approach. With $F(x)$, $y(x)$, $h(x,u)$, $M$, $\delta$ and $K_M$ as defined earlier, we get the following results on the efficiency and reliability of our approach.

Lemma 1. There exist values $\hat{M}$, $\hat{\delta}$ and $\hat{K}_M$ of the heuristic parameters $M$, $\delta$ and $K_M$ such that:

(i) for any desired (arbitrarily small) $\epsilon > 0$, $L_{\hat{M},\hat{\delta},\hat{K}_M}(\hat{x}_{opt}) \le \epsilon$, where

$$L_{M,\delta,K_M}(x) = |F(x) - y(x)|; \quad \forall x \in U_x;$$

(ii) with $\epsilon$ as above and $\hat{x}_{opt}$ obtained for the corresponding $\hat{M}$, $\hat{\delta}$ and $\hat{K}_M$,

$$F(\hat{x}_{opt}) \le F(x) + o(\epsilon); \quad \forall x \in U_x;$$

(iii) as $\epsilon \to 0$ above,

$$\hat{x}_{opt} \to x_{opt}.$$

Proof. Note that $\delta > 0$ and $y(\cdot) > 0$ etc. without loss of generality.

(i) By the definition of $M$, $\delta$ and $K_M$ in reference to (Pb-II), we have

$$y(x) \ge R_{min} + M\delta \tag{A1}$$

and

$$F(x) \approx \sum_{i=1}^{M^*} \delta (R_{min} + i\delta)\, h(x; R_{min} + i\delta) \tag{A2}$$

where

$$M^* = (R_{max} - R_{min})/\delta \tag{A3}$$

Also, $\forall x \in U_x$,

$$\sum_{i=1}^{M^*} \delta\, h(x; R_{min} + i\delta) \approx 1 \tag{A4}$$

(Clearly, the approximate relations in (A2) and (A4) above become exact as $M^* \to \infty$, i.e. as $\delta \to 0$.) From the above we get

$$F(x) - y(x) \le \delta^2 \sum_{i=1}^{M^*} i\, h(x; R_{min} + i\delta) - M\delta^2 \sum_{i=1}^{M^*} h(x; R_{min} + i\delta) = \delta^2 \sum_{i=1}^{M^*} (i - M)\, h(x; R_{min} + i\delta) \tag{A5}$$

$$= V_{M,\delta,K_M}(x) - U_{M,\delta,K_M}(x)$$

where

$$V_{M,\delta,K_M}(x) = \delta^2 \sum_{i=M+1}^{M^*} (i - M)\, h(x; R_{min} + i\delta) \ge 0$$

and

$$U_{M,\delta,K_M}(x) = \delta^2 \sum_{i=1}^{M} (M - i)\, h(x; R_{min} + i\delta) \ge 0 \tag{A6}$$

Let $M^*$ be sufficiently large and $M \ll M^*$ at $x = \hat{x}_{opt}$. Then we have

$$L_{M,\delta,K_M}(\hat{x}_{opt}) = |F(\hat{x}_{opt}) - y(\hat{x}_{opt})| \le \delta^2 (M^* - M) \max_{\epsilon_1,\epsilon_2}\{h(\hat{x}_{opt}; \epsilon_1) - h(\hat{x}_{opt}; \epsilon_2)\}$$

whence

$$L_{M,\delta,K_M}(\hat{x}_{opt}) \le \delta^2 (M^* - M)\, o\!\left(\frac{1}{M^*}\right) = o\!\left(1 - \frac{M}{M^*}\right) o(\delta^2) \tag{A7}$$

Now for any desired $\epsilon > 0$ we can find appropriate heuristic parameters, say $\hat{M}$, $\hat{\delta}$ and $\hat{K}_M$, such that $o(1 - \hat{M}/M^*)\hat{\delta}^2 \le \epsilon$. Hence, for the corresponding $\hat{x}_{opt}$,

$$L_{\hat{M},\hat{\delta},\hat{K}_M}(\hat{x}_{opt}) \le \epsilon$$

(ii) We have

$$F(\hat{x}_{opt}) = \delta \sum_{i=1}^{M^*} (R_{min} + i\delta)\, h(\hat{x}_{opt}; R_{min} + i\delta) \tag{A8}$$

$$= V_1(\hat{x}_{opt}) + V_2(\hat{x}_{opt})$$

where, $\forall x \in U_x$,

$$V_1(x) = \delta \sum_{i=1}^{M} (R_{min} + i\delta)\, h(x; R_{min} + i\delta)$$

$$V_2(x) = \delta \sum_{i=M+1}^{M^*} (R_{min} + i\delta)\, h(x; R_{min} + i\delta)$$

Now, by the definition of $\hat{x}_{opt}$ and considering (A4) above, we have $V_2(\hat{x}_{opt}) \le V_2(x)$, $\forall x \in U_x$. Hence, $\forall x \in U_x$,

$$F(\hat{x}_{opt}) \le V_1(\hat{x}_{opt}) + V_2(x) = F(x) + V_1(\hat{x}_{opt}) - V_1(x)$$

$$= F(x) + \delta \sum_{i=1}^{M} (R_{min} + i\delta)\big(h(\hat{x}_{opt}; R_{min} + i\delta) - h(x; R_{min} + i\delta)\big) \tag{A9}$$

$$\le F(x) + \delta (R_{min} + M\delta) M \delta^*$$

where

$$\delta^* = \big(h(\hat{x}_{opt};\cdot) - h(x;\cdot)\big)_{max} = o\!\left(\frac{1}{M^*}\right)$$

(considering equation (A4) and by the definition of $M$, $M^*$). It is easy to see that for any $\epsilon > 0$ we can always find $M$, $\delta$, $K_M$ with $M^* > 2M$ which simultaneously satisfy the following two relations:

$$\delta^2\left(1 - \frac{M}{M^*}\right) \le \epsilon$$

and

$$\delta (R_{min} + M\delta)\,\frac{M}{M^*} = o(\epsilon)$$

Hence, for any choice of $\hat{M}$, $\hat{\delta}$ and $\hat{K}_M$ for the given $\epsilon$ in (i) above, we have, for the corresponding $\hat{x}_{opt}$,

$$F(\hat{x}_{opt}) \le F(x) + o(\epsilon); \quad \forall x \in U_x$$

(iii) The proof is obvious from the definition of $x_{opt}$ while taking the limit $\epsilon \to 0$ above. ∎

Remark. The above results are valid even if the probability density function of $w$ is unknown or estimated from sample data, because the degree of uncertainty or approximation in $F(x)$ and $y(x)$ will be of the same order, so the difference between the two will satisfy the inequalities stated in (i) and (ii) above in these cases as well.