An effective genetic algorithm approach to multiobjective resource allocation problems (MORAPs)


Applied Mathematics and Computation 163 (2005) 755–768 www.elsevier.com/locate/amc

M.S. Osman a, M.A. Abo-Sinna b,*, A.A. Mousa b

a High Technological Institute, 10th Ramadan City, Egypt
b Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom, Al-Gharbia, Egypt

Abstract

Dynamic programming (DP) is a mathematical procedure designed primarily to improve the computational efficiency of solving selected mathematical programming problems by decomposing them into smaller, and hence computationally simpler, subproblems. In solving the multiple objective dynamic programming problem (MODP), classical approaches reduce the multiple objectives to a single objective of minimizing a weighted sum of the objectives, and the determination of these weights indicates the relative importance of the various objectives. Moreover, if the problem scale increases, the problem becomes difficult to handle even in the single-objective case because of the rapid expansion of the number of states to be considered. In this paper, we investigate the possibility of using genetic algorithms (GAs) to solve multiobjective resource allocation problems (MORAPs). This procedure eliminates the need for any user-defined weight factor for each objective. The proposed approach is also developed to deal with problems with either single or multiple objectives. The simulation results for a multiobjective resource allocation problem (MORAP) show that genetic algorithms (GAs) may hopefully be a new approach for such kinds of problems.
© 2004 Published by Elsevier Inc.

Keywords: Dynamic programming; Multiobjective optimization; Multiobjective resource allocation problem; Genetic algorithms

* Corresponding author. Address: P.O. Box 398, Tanta 31111, Al-Gharbia, Egypt. E-mail address: [email protected] (M.A. Abo-Sinna).

0096-3003/$ - see front matter © 2004 Published by Elsevier Inc. doi:10.1016/j.amc.2003.10.057


1. Introduction

Dynamic programming (DP) is an optimization approach that transforms a complex problem into a sequence of simpler problems; its essential characteristic is the multistage nature of the optimization procedure. More so than other optimization techniques, DP provides a general framework for analyzing many problem types, and within this framework a variety of optimization techniques can be employed to solve particular aspects of a more general formulation [10,13].

The resource allocation problem (RAP) is the process of allocating resources among various projects or business units. A resource may be a person, an asset, a material, or capital that can be used to accomplish a goal. A project may be a set of related tasks that share a specific goal, and a goal may be an objective or target, usually driven by specific future financial needs. The "best" or optimal solution may mean maximizing profits, minimizing costs, or achieving the best possible quality. An almost infinite variety of problems can be tackled this way; typical examples include:

Finance and investment: working capital management involves allocating cash to different purposes (accounts receivable, inventory, etc.) across multiple time periods in order to maximize interest earnings.
Capital budgeting: involves allocating funds to projects that initially consume cash but later generate cash, in order to maximize a firm's return on capital.
Portfolio optimization: creating "efficient portfolios" involves allocating funds to stocks or bonds to maximize return for a given level of risk, or to minimize risk for a target rate of return.
Job shop scheduling: involves allocating time for work orders on different types of production equipment, to minimize delivery time or maximize equipment utilization.
Many other applications can likewise be formulated as resource allocation problems.

On the other hand, there has recently been a great deal of interest in the promising genetic algorithm (GA) and its application to various disciplines, including optimization problems [3,4,8]. Genetic algorithms are being applied to a wide range of optimization and learning problems in many domains. They lend themselves well to optimization problems since they are known to be robust, require no auxiliary information, and can offer significant advantages in solution methodology and optimization performance. A useful feature of genetic algorithms is their ability to handle multiobjective function optimization [2,4]. GAs operate on a population of candidate solutions encoded as finite bit strings called chromosomes. In order to approach optimality, the chromosomes exchange information by means of operators borrowed from natural genetics so as to produce better solutions. GAs differ from other optimization and search procedures in four ways [8]:


(1) GAs work with a coding of the parameter set, not the parameters themselves; therefore GAs can easily handle integer or discrete variables.
(2) GAs search from a population of points, not a single point; therefore GAs can provide globally optimal solutions.
(3) GAs use only objective function information, not derivatives or other auxiliary knowledge; therefore GAs can deal with the non-smooth, non-continuous and non-differentiable functions that actually arise in practical optimization problems.
(4) GAs use probabilistic transition rules, not deterministic rules.

This paper is organized as follows. The formulation of the multiobjective resource allocation problem (MORAP) is described in Section 2. Section 3 represents the MORAP as a network model. In Section 4, we introduce the genetic algorithm for the multiobjective resource allocation problem. The simulation results are discussed in Section 5, and conclusions follow in Section 6.

2. Multiobjective resource allocation problem (MORAP)

The general form of the multiobjective resource allocation problem is as follows [1]:

MORAP:
\[
\begin{aligned}
\operatorname{Max}\ & Z_1(x_1, x_2, \ldots, x_n) = \sum_{k=1}^{n} z_{1k}(x_k),\\
\operatorname{Max}\ & Z_2(x_1, x_2, \ldots, x_n) = \sum_{k=1}^{n} z_{2k}(x_k),\\
& \quad\vdots\\
\operatorname{Max}\ & Z_p(x_1, x_2, \ldots, x_n) = \sum_{k=1}^{n} z_{pk}(x_k),
\end{aligned}
\tag{1}
\]
subject to
\[
\sum_{k=1}^{n} g_k(x_k) \le S, \qquad g_k(x_k) \ge 0, \qquad x_k \ge 0.
\]

The reason for the label "resource allocation problem" is that finding the optimal values of the decision variables x_k is equivalent to allocating the resource S among the activities (stages) g_k(x_k) in such a way that the objective functions Z(x_1, ..., x_n) are maximized.
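To make formulation (1) concrete, the following sketch (illustrative names and data, not taken from the paper) stores the per-stage return functions z_jk and usage functions g_k of a small instance and evaluates the objective vector and the resource constraint for a candidate allocation:

```python
from typing import Callable, List, Sequence

# A MORAP instance in the sense of formulation (1):
#   p objectives, each a sum of per-stage returns z_jk(x_k),
#   one coupling constraint  sum_k g_k(x_k) <= S.
class MORAP:
    def __init__(self,
                 returns: List[List[Callable[[int], float]]],  # returns[j][k] = z_jk
                 usage: List[Callable[[int], float]],           # usage[k] = g_k
                 capacity: float):                              # S
        self.returns = returns
        self.usage = usage
        self.capacity = capacity

    def objectives(self, x: Sequence[int]) -> List[float]:
        # Z_j(x) = sum_k z_jk(x_k), j = 1, ..., p
        return [sum(z[k](x[k]) for k in range(len(x))) for z in self.returns]

    def feasible(self, x: Sequence[int]) -> bool:
        # sum_k g_k(x_k) <= S and x_k >= 0
        return all(v >= 0 for v in x) and sum(g(v) for g, v in zip(self.usage, x)) <= self.capacity


# Tiny illustrative instance: two objectives, three stages, identity usage g_k(x_k) = x_k.
inst = MORAP(
    returns=[[lambda v: 2 * v, lambda v: 3 * v, lambda v: v],     # z_1k
             [lambda v: -v,    lambda v: -2 * v, lambda v: -v]],  # z_2k
    usage=[lambda v: v] * 3,
    capacity=6,
)
x = (2, 3, 1)
print(inst.feasible(x), inst.objectives(x))
```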


The efficient solutions of the MORAP obtained via dynamic programming can be characterized in terms of the optimal solutions of the following parametric problem [1]:
\[
P(w):\quad Z(x_1, \ldots, x_n) = \operatorname{Max} \sum_{j=1}^{p} w_j z_j, \qquad 0 \le w_j \le 1, \qquad \sum_{j=1}^{p} w_j = 1,
\tag{2}
\]
\[
\text{s.t.}\quad M = \Big\{\, x : \sum_{k=1}^{n} x_k \le s,\ x_k \ge 0,\ s \in \mathbb{R}^{+} \Big\},
\]
where n is the number of stages, s is the state variable (the amount of resource available), and z_{jk}(x_k) is the return of objective j at stage k, given x_k, k = 1, ..., n.

Although this problem can be approached by dynamic programming, as the problem scale increases many stages and states must be considered, which greatly affects the efficiency of the dynamic programming procedure in obtaining the optimal solution. Moreover, DP becomes difficult to apply when the problem grows, even in the single-objective case, because of the rapid expansion of the number of states to be considered. It is therefore necessary to develop a new approach to solve such kinds of problems.
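For a fixed weight vector, problem P(w) in (2) is an ordinary single-objective RAP. The following sketch (illustrative names, not from the paper) shows only the scalarization step; solving the resulting single-objective problem, e.g. by DP or enumeration, is left to the surrounding method:

```python
from typing import Sequence

def scalarize(objective_values: Sequence[float], weights: Sequence[float]) -> float:
    """Weighted-sum objective of problem P(w) in (2): sum_j w_j * Z_j(x)."""
    assert len(objective_values) == len(weights)
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * z for w, z in zip(weights, objective_values))

# Example: two objective totals computed for some allocation x, scalarized with w = (0.7, 0.3).
print(scalarize([14.0, -9.0], [0.7, 0.3]))   # 0.7*14 - 0.3*9, approximately 7.1
```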

3. Representation of the MORAP as a network model

The resource allocation problem is to find the best way to allocate scarce resources. The resources may be raw materials, machine time or people time, money, or anything else in limited supply. The "best" or optimal solution may mean maximizing profits, minimizing costs, or achieving the best possible quality. Typical applications of the resource allocation problem include:
1. How many inspectors to allocate to each river (or region) to monitor pollution?
2. How many patrol cars to allocate to different highway segments to catch speeding drivers?
3. How many campaign volunteers a politician should allocate to each district to get the maximum total votes?
4. How many machines to allocate to different products?
5. How many millions of dollars to allocate to different regions to maximize benefit?
6. How many workers to allocate to different jobs?


Fig. 1. Network model: an n-stage layered network in which stage k holds the possible states S_k^1, S_k^2, ..., linked by the decisions x_1, ..., x_n.

Thus we can reformulate the MORAP as a network model, in which the limited supply is represented by stages (such as the jobs in application 6) and the resource gives the possible states in each stage (the number of workers that may be allocated to each job), as shown in Fig. 1.
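Under this network view, an individual is simply the sequence of per-stage decisions. The sketch below (illustrative names, not code from the paper) shows this encoding together with a legality test and a random initialization in the spirit of Sections 4.1 and 4.2 below:

```python
import random
from typing import List, Tuple

def random_individual(n_stages: int, total_resource: int) -> Tuple[int, ...]:
    """One chromosome = one allocation (x_1, ..., x_n), i.e. a path through the stage/state network."""
    return tuple(random.randint(0, total_resource) for _ in range(n_stages))

def is_legal(individual: Tuple[int, ...], total_resource: int) -> bool:
    """Legal if the allocated amount does not exceed the available resource S."""
    return sum(individual) <= total_resource

def initial_population(n_pop: int, n_stages: int, total_resource: int) -> List[Tuple[int, ...]]:
    """Random initial population; it may contain both legal and illegal individuals,
    mirroring Section 4.1 (illegal ones are later rejected and replaced, Section 4.2)."""
    return [random_individual(n_stages, total_resource) for _ in range(n_pop)]

population = initial_population(n_pop=10, n_stages=4, total_resource=6)
legal = [ind for ind in population if is_legal(ind, 6)]
print(len(legal), "legal individuals out of", len(population))
```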

4. Multiobjective resource allocation problem via genetic algorithm

Multiobjective optimization problems give rise to a set of Pareto-optimal solutions, none of which can be said to be better than the others in all objectives. In any interesting multiobjective optimization problem there exist a number of such solutions that are of interest to designers and practitioners. Since no one solution in the Pareto-optimal set is better than any other, it is also a goal of multiobjective optimization to find as many such Pareto-optimal solutions as possible. Unlike most classical search and optimization methods, GAs work with a population of solutions and are therefore likely (and unique) candidates for finding multiple Pareto-optimal solutions simultaneously [2,4,8,11]. Two tasks are to be achieved in a multiobjective GA:
1. convergence to the Pareto-optimal set, and
2. maintenance of diversity among the solutions of the Pareto-optimal set.


GAs with suitable modifications of their operators have worked well on many multiobjective optimization problems with respect to the above two tasks. Most multiobjective GAs work with the concept of domination. For a problem having more than one objective function (say, f_j, j = 1, ..., p, p > 1), any two solutions x^1 and x^2 can have one of two possibilities: one dominates the other, or neither dominates the other. A solution x^1 is said to dominate the other solution x^2 if both of the following conditions are true [12]:
1. The solution x^1 is no worse than x^2 in all objectives, i.e., f_j(x^1) is not worse than f_j(x^2) for all j = 1, ..., p objectives.
2. The solution x^1 is strictly better than x^2 in at least one objective, i.e., f_j(x^1) is better than f_j(x^2) for at least one j in {1, 2, ..., p}.
If either of the above conditions is violated, the solution x^1 does not dominate the solution x^2.

4.1. Initialization stage

In this problem it is required to find a path between two nodes n_i (source node) and n_j (sink node) having minimum total cost, minimum time, maximum quality, or a combination of these criteria. A path from node n_i to node n_j is a sequence of arcs (n_i, n_l), (n_l, n_m), ..., (n_k, n_j); equivalently, a path can be represented as a sequence of nodes (n_i, n_l, n_m, ..., n_k, n_j). Thus each individual is a string of nodes, as shown in Fig. 2. This chromosome structure may, however, produce some illegal offspring that do not fulfil the requirements of the problem; that is, if we are allocating S workers to N jobs, the total number of allocated workers must not exceed S. In this step, the algorithm generates an initial population containing Npop strings, where Npop is the number of strings in each population; this may produce both legal and illegal individuals (i.e., chromosomes).

4.2. Rejection of illegal individuals

The "death penalty" method is a popular option in many evolutionary techniques [6,7]; it rejects all infeasible solutions (i.e., illegal individuals) in the population.

Fig. 2. Each individual representation.


Thus, under this method, if the current population contains infeasible solutions (illegal offspring), they are simply eliminated and replaced by randomly generated legal solutions.

4.3. Evaluation stage

Calculate the values of the objective functions for the generated strings, and update a tentative set of Pareto optimal solutions by classifying the population according to non-domination as follows.

4.3.1. Classifying a population according to non-domination

Consider a set of population members, each having P (P > 1) objective function values. The following procedure can be used to find the non-dominated set of solutions [11]:
Step 0: Begin with i = 1.
Step 1: For all j = 1, 2, ..., N with j not equal to i, compare solutions x_i and x_j for domination using the two conditions of Section 4 for all P objectives.
Step 2: If x_i is dominated by x_j for any j, mark x_i as 'dominated'.
Step 3: If all solutions in the set have been considered (that is, when i = N is reached), go to Step 4; otherwise increment i by one and go to Step 1.
Step 4: All solutions that are not marked 'dominated' are non-dominated solutions.
All these non-dominated solutions are assumed to constitute the non-dominated front of the population at a given generation (a sketch of this test is given below).
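A minimal sketch of the pairwise dominance test of Section 4 and of the classification procedure of Section 4.3.1, assuming all P objectives are to be maximized; the function and variable names are illustrative, not from the paper:

```python
from typing import List, Sequence

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """f1 dominates f2 (maximization): no worse in every objective, strictly better in at least one."""
    no_worse = all(a >= b for a, b in zip(f1, f2))
    strictly_better = any(a > b for a, b in zip(f1, f2))
    return no_worse and strictly_better

def nondominated(objective_vectors: List[Sequence[float]]) -> List[int]:
    """Indices of the non-dominated members (Steps 0-4 of Section 4.3.1)."""
    front = []
    for i, fi in enumerate(objective_vectors):
        dominated = any(dominates(fj, fi) for j, fj in enumerate(objective_vectors) if j != i)
        if not dominated:
            front.append(i)
    return front

# Example objective vectors, e.g. (efficiency, -cost); the last point is dominated by the first.
vectors = [(120, -270), (126, -275), (129, -280), (110, -290)]
print(nondominated(vectors))
```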


Fig. 3. Search in multiple directions in criterion space.

4.4. Selection stage

Here we adopt the random-weight approach [9]: the objectives are aggregated into a single parameterized objective function, but the parameters of this function are not changed for different optimization runs; instead, they are systematically varied during the same run. We use a weighted sum of the multiple objective functions to combine them into a scalar fitness function. The characteristic feature of this selection is that the weights attached to the multiple objective functions are not constant but are randomly specified for each selection, so the search direction is not fixed, as shown in Fig. 3 (taken from [9]).

4.4.1. Selection procedure

Murata et al. [9] proposed the random-weight approach to obtain a variable search direction towards the Pareto frontier. Suppose that we are going to maximize p objective functions. The weighted-sum objective is given as follows:
\[
f(x) = w_1 f_1(x) + \cdots + w_i f_i(x) + \cdots + w_p f_p(x) = \sum_{k=1}^{p} w_k f_k(x),
\tag{3}
\]

where x is a string (i.e., an individual), f(x) is the combined fitness function, f_i(x) is the ith objective function, and w_i is a constant weight for f_i(x). When a pair of strings is selected for a crossover operation, we assign a random number to each weight as follows:
\[
w_i = \frac{\mathrm{random}_i(\,)}{\sum_{k=1}^{p} \mathrm{random}_k(\,)}, \qquad i = 1, 2, \ldots, p,
\tag{4}
\]
where random_i( ) is a non-negative random number. From Eq. (4) we can see that w_i is a real number in the closed interval [0, 1]. The next pair of strings is then selected with different weight values, newly given by (4), and so on.
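A minimal sketch of the random-weight mechanism of Eqs. (3) and (4), with illustrative names: a fresh nonnegative random vector is drawn and normalized each time a pair of parents is to be selected, so the search direction changes from selection to selection.

```python
import random
from typing import List, Sequence

def random_weights(p: int) -> List[float]:
    """Eq. (4): w_i = random_i() / sum_k random_k(), giving weights in [0, 1] that sum to 1."""
    r = [random.random() for _ in range(p)]
    total = sum(r) or 1.0          # guard against the (practically impossible) all-zero draw
    return [ri / total for ri in r]

def combined_fitness(objective_values: Sequence[float], weights: Sequence[float]) -> float:
    """Eq. (3): f(x) = sum_k w_k * f_k(x)."""
    return sum(w * f for w, f in zip(weights, objective_values))

# A new weight vector is drawn each time a pair of parents is to be selected.
w = random_weights(p=2)
print(w, combined_fitness([120.0, -270.0], w))
```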


4.5. Elite preservation strategy (elitism)

As opposed to single-objective optimization [8], the incorporation of elitism in multiobjective optimization is substantially more complex. Instead of one best individual, there is an elite set whose size can be considerable compared with the population. During the execution of the algorithm, a tentative set of Pareto optimal solutions is stored and updated every generation. A certain number (say, Nelite) of individuals are randomly selected from this set at each generation and used as elite individuals in the algorithm. This elite preservation strategy helps keep the variety of each population in our algorithm.

4.6. Genetic operators

The routing crossover operator exchanges subroutes between two chromosomes drawn from the mating pool (i.e., the parents). The operator is applied to selected parent strings as follows: a crossover point is randomly selected between two adjacent nodes. Fig. 4 shows an example of the one-point crossover operator; in this figure, a crossover point is selected between the third and the fourth position in the strings, and all elements from the first position to the third position are swapped. In this way two offspring are generated.

Mutation is an operator that changes elements in a string generated by the crossover operator. Such a mutation operator can be viewed as a transition from the current solution to a neighbouring solution. Fig. 5 exemplifies this mutation operation with a neighbourhood search technique: suppose a gene is at stage 2 and the possible states are {2, 3, 4, 5} (i.e., four possible states can be chosen); then this gene {2} may, after mutation, become {3}, {4}, or {5}, or remain {2}. Both operators are sketched after Fig. 5.

It is interesting to note that the generated offspring have the following characteristics:
1. Some of the generated offspring are legal individuals and the others are illegal.
2. The chromosome length is only n - 1 for an n-stage problem, which saves considerable computation memory.

Fig. 4. One-point crossover: parent individuals 1-2-5-8-11-14 and 1-4-7-9-13-14 produce, after swapping the segments before the crossover point between the third and fourth positions, the offspring 1-2-5-9-13-14 and 1-4-7-8-11-14.

Fig. 5. The mutation operator: the parent individual 1-2-6-8-11-14 yields, after mutating the gene at stage 2 within its admissible states {2, 3, 4, 5}, one of the possible offspring 1-3-6-8-11-14, 1-4-6-8-11-14, 1-5-6-8-11-14, or the unchanged parent.
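A minimal sketch of the one-point crossover of Fig. 4 and the neighbourhood mutation of Fig. 5 (the admissible states of each stage are passed in explicitly; all names are illustrative, not from the paper):

```python
import random
from typing import List, Sequence, Tuple

def one_point_crossover(p1: Sequence[int], p2: Sequence[int]) -> Tuple[List[int], List[int]]:
    """Cut at a random point between adjacent genes and swap the leading segments (Fig. 4)."""
    cut = random.randint(1, len(p1) - 1)
    child1 = list(p2[:cut]) + list(p1[cut:])
    child2 = list(p1[:cut]) + list(p2[cut:])
    return child1, child2

def neighbourhood_mutation(ind: Sequence[int], states: List[List[int]], rate: float = 0.01) -> List[int]:
    """With probability `rate`, replace a gene by one of the admissible states of its stage (Fig. 5)."""
    mutant = list(ind)
    for k in range(len(mutant)):
        if random.random() < rate:
            mutant[k] = random.choice(states[k])
    return mutant

parents = ([1, 2, 5, 8, 11, 14], [1, 4, 7, 9, 13, 14])
print(one_point_crossover(*parents))
print(neighbourhood_mutation([1, 2, 6, 8, 11, 14],
                             states=[[1], [2, 3, 4, 5], [6], [8], [11], [14]], rate=0.5))
```

As noted in Section 4.6, offspring produced this way may be legal or illegal, so the rejection step of Section 4.2 is applied afterwards.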


4.7. Proposed algorithm

The block diagram of the proposed algorithm is shown in Fig. 6, and the algorithm is described as follows:
Step 0 (Initialization): Randomly generate an initial population containing Npop individuals, where each individual is represented as in Fig. 2.
Step 1 (Rejection of infeasible individuals): The algorithm employs the death penalty method (rejection of infeasible individuals). Note that rejecting infeasible individuals offers a few simplifications of the algorithm; for example, there is no need to evaluate infeasible solutions or to compare them with feasible ones.
Step 2 (Evaluation): Calculate the values of the P objective functions for each individual.
Step 3 (Classifying): Classify the population according to non-domination and update the tentative set of Pareto optimal solutions.

Fig. 6. The block diagram of the MORAP via genetic algorithm (Step 0: initialization; Step 1: rejection of illegal individuals; Step 2: evaluation; Step 3: classifying; Step 4: selection; Step 5: crossover; Step 6: mutation; Step 7: elitist strategy; Step 8: termination test; Step 9: decision maker selection).


Step 4 (Selection): Repeat the following steps to select (Npop - Nelite) pairs of parents (i.e., the individuals that undergo the recombination operation): specify random weights by Eq. (4), calculate the fitness values by Eq. (3), and select a pair of strings from the current population according to the selection probability
\[
p(x) = \frac{f(x) - f_{\min}(\Psi)}{\sum_{x \in \Psi}\{\, f(x) - f_{\min}(\Psi)\,\}},
\tag{5}
\]
where p(x) is the selection probability of a string x in a population \(\Psi\) and \(f_{\min}(\Psi) = \min\{ f(x) \mid x \in \Psi \}\).
Step 5 (Crossover): For each selected pair, apply a crossover operation to generate offspring.
Step 6 (Mutation): Apply the mutation operation to each offspring generated by the crossover operation.
Step 7 (Elitist strategy): Randomly select Nelite individuals from the tentative set of Pareto solutions. Add the selected Nelite solutions to the (Npop - Nelite) individuals generated in the foregoing steps to construct a population of Npop individuals.
Step 8 (Termination test): If a prespecified stopping condition is satisfied, stop the run; otherwise, return to Step 1.
Step 9 (Decision maker selection): The decision maker selects the most preferred among the alternatives.
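The selection in Step 4 amounts to a roulette-wheel draw with the probabilities of Eq. (5): fitness values are shifted by the population minimum and a string is drawn in proportion to the shifted values. A minimal sketch, with illustrative names:

```python
import random
from typing import Sequence, TypeVar

T = TypeVar("T")

def select_one(population: Sequence[T], fitness: Sequence[float]) -> T:
    """Draw one string with probability p(x) of Eq. (5): (f(x) - f_min) / sum(f(x) - f_min)."""
    f_min = min(fitness)
    shifted = [f - f_min for f in fitness]
    total = sum(shifted)
    if total == 0.0:                 # all fitness values equal: fall back to a uniform choice
        return random.choice(list(population))
    threshold = random.uniform(0.0, total)
    acc = 0.0
    for individual, s in zip(population, shifted):
        acc += s
        if acc >= threshold:
            return individual
    return population[-1]

pop = [(1, 2, 1, 2), (2, 2, 1, 1), (2, 3, 1, 0)]
fit = [10.0, 12.0, 7.0]
print(select_one(pop, fit))
```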

5. Simulation results

In this section, an experimental verification of the proposed algorithm is carried out. We apply the proposed algorithm to a multiobjective resource allocation problem (taken from [1]) and then compare the MORAP results obtained by dynamic programming [1,5] with those obtained using the genetic algorithm, to verify the convergence of the algorithm to the optimal solution.

Consider the problem (taken from [1]) of allocating 6 workers to a set of 4 jobs. Tables 1 and 2 give the expected efficiency and cost, respectively. Abo-Sinna et al. [1] solve this problem by constructing the following parametric problem:
\[
Q(x) = \min_{x_n \in X_n}\Big[\, w \sum_{n=1}^{N} E_n(x_n) - (1-w)\sum_{n=1}^{N} C_n(x_n)\,\Big], \qquad 0 \le w \le 1.
\]
They find the solution after a four-stage calculation, as shown in Table 3. This makes the approach subjective to the user.


Table 1
Expected efficiency

Number of workers   Job 1   Job 2   Job 3   Job 4
0                    0       0       0       0
1                   25      20      33      13
2                   42      38      43      24
3                   55      54      47      32
4                   63      65      50      39
5                   69      73      52      45
6                   74      80      53      50

Table 2
Expected cost

Number of workers   Job 1   Job 2   Job 3   Job 4
0                   70      90      85     130
1                   60      60      60     115
2                   50      50      50     100
3                   40      40      55     100
4                   40      30      40      90
5                   45      20      30      80
6                   50      25      25      80

Table 3
Efficient solution by DP

Efficient solution (x)       Overall cost   Overall efficiency   Weighting factors
x1   x2   x3   x4
1    2    1    2             120            270                  0 ≤ w ≤ 0.42
2    2    1    1             126            275                  0.42 ≤ w ≤ 0.6
2    3    1    0             129            280                  0.6 ≤ w ≤ 1

Also, we observe that the solution depends largely on the chosen weighting factors. Moreover, if the problem scale increases, the problem becomes difficult to deal with even in the single-objective case because of the rapid expansion of the number of states to be considered, not to mention the case of multiple objectives. So we developed this approach to solve such kinds of problems.

When the genetic algorithm was applied to this problem, we used the GA parameters given in Table 5. After 60 generations we obtain the set of efficient solutions shown in Table 4, without any corresponding weighting factors, which enables the decision maker (DM) to select the most preferred among the alternatives (a brute-force check of this small example is sketched at the end of this section).


Table 4
Genetic algorithm simulation results

Efficient solution (x)       Overall cost   Overall efficiency
x1   x2   x3   x4
1    2    1    2             120            270
2    2    1    1             126            275
2    3    1    0             129            280

Table 5
The GA parameters

Population size       5
Mutation rate         0.01
Crossover rate        0.9
Crossover operator    Single point

Also, this approach can be applied to any problem with any number of stages, any number of states, and multiple objective functions.
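Because the example is so small (six workers, four jobs), the reported trade-offs can also be examined by brute force. The sketch below hard-codes the data of Tables 1 and 2, enumerates every allocation with x1 + x2 + x3 + x4 ≤ 6, and prints the non-dominated (efficiency, cost) pairs for comparison with Tables 3 and 4; all names are illustrative, and this is a check, not part of the proposed algorithm.

```python
from itertools import product

# Per-job values by number of workers 0..6 (Tables 1 and 2).
EFFICIENCY = [
    [0, 25, 42, 55, 63, 69, 74],     # job 1
    [0, 20, 38, 54, 65, 73, 80],     # job 2
    [0, 33, 43, 47, 50, 52, 53],     # job 3
    [0, 13, 24, 32, 39, 45, 50],     # job 4
]
COST = [
    [70, 60, 50, 40, 40, 45, 50],
    [90, 60, 50, 40, 30, 20, 25],
    [85, 60, 50, 55, 40, 30, 25],
    [130, 115, 100, 100, 90, 80, 80],
]
WORKERS = 6

def evaluate(x):
    eff = sum(EFFICIENCY[j][x[j]] for j in range(4))
    cost = sum(COST[j][x[j]] for j in range(4))
    return eff, cost

# Enumerate all allocations with x1 + x2 + x3 + x4 <= 6 (constraint of formulation (1)).
candidates = [x for x in product(range(WORKERS + 1), repeat=4) if sum(x) <= WORKERS]
scored = [(x, *evaluate(x)) for x in candidates]

# Keep the non-dominated pairs: maximize efficiency, minimize cost.
pareto = [
    (x, e, c)
    for (x, e, c) in scored
    if not any((e2 >= e and c2 <= c and (e2 > e or c2 < c)) for (_, e2, c2) in scored)
]
for x, e, c in sorted(pareto, key=lambda t: t[1]):
    print(x, "efficiency:", e, "cost:", c)
```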

6. Conclusions

Although the resource allocation problem can be approached by dynamic programming, as the problem scale increases many stages and states must be considered, which greatly affects the efficiency of the dynamic programming procedure in obtaining the optimal solution. Moreover, DP becomes difficult to apply when the problem grows, even in the single-objective case, because of the rapid expansion of the number of states to be considered. We therefore investigated the possibility of using genetic algorithms (GAs) to solve multiobjective resource allocation problems (MORAPs). This procedure eliminates the need for any user-defined weight factor for each objective. The proposed approach is also developed to deal with problems with either single or multiple objectives, and it can be applied to any problem with any number of stages and any number of states. The numerical analysis shows that the genetic algorithm approach for the MORAP may hopefully be a new approach for such kinds of difficult-to-solve problems.

References

[1] M.A. Abo-Sinna, S.M. Lee, M.S.A. Osman, A.M. Sarhan, A dynamic programming approach for solving bicriterion resource allocation problems, in: Proceedings of the Third ORMA Conference, MTC, Cairo, Egypt, 28–30 November 1989.


[2] C. Coello, A comprehensive survey of evolutionary-based multiobjective optimization techniques, Knowledge and Information Systems: An International Journal 1 (3) (1999) 269–308.
[3] K. Deb, An introduction to genetic algorithms, Sadhana 24 (parts 4 & 5) (1999) 93–315.
[4] M. Gen, R. Cheng, Genetic Algorithms and Engineering Optimization, John Wiley & Sons, Inc., New York, 2000.
[5] M.L. Hussein, M.A. Abo-Sinna, A fuzzy dynamic approach to the multicriterion resource allocation problem, Fuzzy Sets and Systems 69 (1995) 115–124.
[6] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.
[7] Z. Michalewicz, A survey of constraint handling techniques in evolutionary computation methods, in: Proceedings of the 4th Annual Conference on Evolutionary Programming, 1995, pp. 135–155.
[8] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer-Verlag, 1996.
[9] T. Murata, H. Ishibuchi, H. Tanaka, Multi-objective genetic algorithm and its application to flowshop scheduling, Computers and Industrial Engineering 30 (4) (1996) 957–968.
[10] S.S. Rao, Optimization Theory and Application, Wiley Eastern Limited, New Delhi, 1991.
[11] N. Srinivas, K. Deb, Multiobjective optimization using nondominated sorting in genetic algorithms, Evolutionary Computation 2 (3) (1999) 221–248.
[12] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.
[13] H.A. Taha, Operations Research: An Introduction, sixth ed., Collier Macmillan, London, UK, 1992.