European Journal of Operational Research 175 (2006) 135–154 www.elsevier.com/locate/ejor

Decision Support

Interactive meta-goal programming

Rafael Caballero a,*, Francisco Ruiz a, M. Victoria Rodríguez Uría b, Carlos Romero c

a Departamento de Economía Aplicada (Matemáticas), Facultad de Ciencias Económicas y Empresariales, Universidad de Málaga, Campus El Ejido s/n, 29071 Málaga, Spain
b Departamento de Economía Cuantitativa, Facultad de Ciencias Económicas y Empresariales, Universidad de Oviedo, Campus del Cristo, 33006 Oviedo, Spain
c Departamento de Economía y Gestión Forestal, ETS Ingenieros de Montes, Universidad Politécnica de Madrid, Avda. Complutense s/n, 28040 Madrid, Spain

Received 8 June 2004; accepted 26 April 2005
Available online 16 August 2005

Abstract

The concept of meta-goal programming is developed and linked to an interactive framework. An algorithm is proposed in which the decision maker can establish target values on several achievement functions and use an interactive procedure to update these values. This substantially alleviates the problems associated with assigning a target value to each attribute in order to build the goals, as well as with the selection of a suitable achievement function. The functioning of the proposed interactive approach is illustrated with the help of an example taken from the farm management literature.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Goal programming; Meta-goal programming; Interactive; Satisficing

1. Introduction

Goal Programming (GP) (Charnes and Cooper, 1961) represents a widely used approach in the Operational Research field. Recent surveys (e.g. Schniederjans, 1995; Jones and Tamiz, 2002) and special issues of specialized journals (e.g. Aouni and Kettani, 2001) have revealed its growing popularity in terms of successful applications to real-world problems and theoretical developments. Several authors argue that the main reason why GP has been so successful is the Simonian satisficing philosophy underlying the approach (see Lee, 1972, or Ignizio, 1976).

Although GP has many good properties, the approach is not exempt from difficulties. Perhaps the most important one is its underlying axiom, namely that the decision maker (DM) is able to assign to each attribute a "satisficing" target value. This is a strong empirical requirement (González-Pachón and Romero, 2004). Another potential shortcoming is that there is no theoretical foundation for the choice of the form of the achievement function of the GP model, that is, the function of the unwanted deviation variables to be minimized in one way or another. Jones and Tamiz (2002) present a survey of GP applications to real problems, which shows that a large majority of the problems were solved using the lexicographic GP-variant. The next most popular approaches were the weighted and minmax GP-variants. Obviously, each GP-variant fits a different DM preference structure, and it is not easy to accept that the most common preference structure is the rigid lexicographic order. This leads us to suppose that, in many cases, it is the analyst's rather than the DM's personal preferences that influence the choice of the variant. Needless to say, both the allocation of target values to attributes and the choice of the GP-variant have a critical impact on the final solution.

The mechanistic selection of the achievement function was recently addressed by introducing the concept of meta-goal, which leads to a GP extension coined Meta-GP (Rodríguez-Uría et al., 2002). This approach uses sensitivity analysis to derive a meta-achievement function reflecting the DM's actual preferences for a particular decision-making problem. In this paper, we take a further step in this direction by formulating the Meta-GP approach within an interactive framework.

* Corresponding author. Tel.: +34 952 131168; fax: +34 952 132061. E-mail address: [email protected] (R. Caballero).

0377-2217/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2005.04.040
Thus, "satisficing" targets are allocated to each attribute, and the meta-achievement function is selected in accordance with the DM's actual preferences.

Interactive methods are the most widely used family of algorithms within the frame of Multiobjective Programming. Such methods were pioneered by Geoffrion et al. (1972), Benayoun et al. (1971), and Zionts and Wallenius (1976). Textbooks and surveys (Steuer, 1986; Shin and Ravindran, 1991; Miettinen, 2002) give an idea of both the number of different interactive algorithms that have been developed and the number of real cases to which they have been applied. The reason for their success lies in their capability to progressively adapt their performance to the decision maker's preferences. In some sense, it can be said that both the decision maker and the algorithm "learn" about the problem during the process.

Traditionally, interactive methods have been classified according to two main criteria: the information required from the decision maker, and the inner resolution strategy. Following the first criterion, the methods are usually classified into four main groups (although more subgroups are considered in some studies):

• Weighting methods: The decision maker is asked to give, at each iteration, local weights for the criteria, e.g. the GDF method by Geoffrion et al. (1972).
• Tradeoff methods: The decision maker is asked to give, at each iteration, local tradeoffs among objectives (e.g. SPOT, by Sakawa, 1982), or to evaluate different tradeoffs (e.g. ISWT, by Chankong and Haimes, 1978), or to answer whether he/she prefers a given tradeoff or not (e.g. Zionts and Wallenius, 1976).
• Solution generating methods: At each iteration, the decision maker has to choose one among a number of (efficient) solutions (e.g. Steuer and Choo, 1983).
• Aspiration level or reference level methods: At each iteration, the decision maker is asked to give reference levels for the objectives (e.g.
Benayoun et al., 1971; Wierzbicki, 1981; Korhonen and Laakso, 1986a,b; Korhonen and Wallenius, 1988; Nakayama and Sawaragi, 1984).

If the information given takes the form of target values for goals related to the objectives of the problem, then the method can be considered an Interactive Goal Programming approach. This is the group where the method proposed in this paper should be placed.


On the other hand, if the inner resolution strategy is considered, the following classification is widely accepted:

• Feasible region reduction methods: At each iteration, the feasible region is reduced (e.g. STEM by Benayoun et al., 1971).
• Feasible direction methods: A line search scheme is used at each iteration to move to the next solution (e.g. the GDF method by Geoffrion et al., 1972).
• Weight space reduction methods: The weight space is reduced at each iteration (e.g. Zionts and Wallenius, 1976, or Steuer and Choo, 1983).
• Achievement function methods: A scalarized achievement function is optimized at each iteration (e.g. Wierzbicki, 1981; Korhonen and Laakso, 1986a; Korhonen and Wallenius, 1988; Nakayama and Sawaragi, 1984). Due to their nature, all the Interactive Goal Programming schemes fit in this group.
• Other methods, such as tradeoff cutting plane or Lagrange multiplier methods.

It should be noted that, despite the apparent diversity of techniques, nearly all interactive methods share the same primary structure and can be accommodated in a unified framework, as proposed by Gardiner and Steuer (1994).

As mentioned, the combination of Goal Programming with an interactive scheme is not new in the literature. Dyer (1972) studies the problem from the point of view of the assumed existence of an implicit utility function of the decision maker. Masud and Hwang (1981) describe an algorithm to obtain efficient solutions via the interactive updating of the tradeoffs. This scheme was later generalized by Reeves and Hedin (1993). Spronk (1981) proposed an interactive method starting at the anti-ideal solution and sequentially improving the goals. Korhonen and Laakso (1986b) combined the Visual Interactive approach with Goal Programming problems. Later on, Korhonen and Wallenius (1988, 1990) proposed the VIG method, which uses the Pareto Race algorithm to solve interactive goal programming problems.
Weistroffer (1983) derived an interactive goal programming algorithm for nonlinear problems. Yang and Sen (1996) combined GP with an interactive method at different stages of the procedure. The main contribution of this paper to the Interactive Goal Programming field is the use of the meta-goal concept during the process, in order to ease the previously mentioned difficulties of the Goal Programming approach.

The rest of the paper is organized as follows. The analytic framework for the Meta-GP approach is presented in Section 2. In Section 3, the interactive Meta-GP algorithm (IMGP) is formulated. In Section 4, a simple example is used to illustrate the interactive algorithm. Some additional issues concerning the calculation phase, the efficiency of the final solution and some interesting features of the algorithm are addressed in Section 5. Finally, some concluding remarks are presented in Section 6.

2. Analytic framework

Let us consider the following set of goals and constraints as part of a goal programming problem (see Table 2 for notation):

$$(\text{GP})\quad \begin{cases} f_i(x) + n_i - p_i = t_i, & i = 1,\dots,s,\\ g_j(x) \le b_j, & j = 1,\dots,m,\\ x \in \mathbb{R}^n. \end{cases}$$

Our problem has $s$ goals and $m$ hard constraints. Without loss of generality, it can be assumed that all goals derive from "the more the better" attributes and, thus, the unwanted deviation variable of each goal is the negative one. Notwithstanding, the methodology presented below is applicable to a context of


two-sided goals, $f_i(x) = t_i$ (where both deviation variables are unwanted), by decomposing them into the two goals $f_i(x) \ge t_i$ and $f_i(x) \le t_i$ (the latter being equivalent to $-f_i(x) \ge -t_i$).

The meta-goal programming scheme for this problem is developed in Rodríguez-Uría et al. (2002). In this scheme, it is assumed that the DM does not have to choose a particular GP-variant. Instead, the DM can combine the GP-variants and establish aspiration values for the different achievement functions. Remember that if the weighted GP-variant is used, then the respective achievement function reads as follows:

$$h(n) = \sum_{i=1}^{s} \omega_i \frac{n_i}{N_i},$$

where each $\omega_i$ is a preferential weight, and $N_i$ is an appropriate normalizing factor, chosen according to the type of problem functions (see Steuer, 1983, and Kettani et al., 2004, for the issue of normalizing goals). If (as is often the case) $N_i = t_i$ is assumed, then the optimal value of the achievement function can be interpreted as the minimum sum of the percentage deviations of each goal from its respective target value, weighted by its respective preferential weight. On the other hand, if the minmax option is taken,

$$h(n) = \max_{i=1,\dots,s} \left\{ \omega_i \frac{n_i}{N_i} \right\}$$

and $N_i = t_i$, then the optimal value of the achievement function can be interpreted as the minimum of the maximum weighted percentage deviation from the target values. Finally, if the lexicographic GP-variant is chosen, the DM defines $l$ priority levels $L_1, \dots, L_l$ and assigns goals to each level, possibly establishing weights for goals that share the same priority level. Then, the weighted or minmax GP-variants can be chosen to solve the problem at each level.

Therefore, given these expressions, a model is proposed in Rodríguez-Uría et al. (2002) in which different kinds of goals can be established for the achievement functions. Namely, three types of meta-goals are defined:

Type 1: The percentage sum of the unwanted deviation variables should not be greater than a certain bound $Q_1$:
$$\sum_{i=1}^{s} \omega_i \frac{n_i}{N_i} \le Q_1.$$

Type 2: The maximum percentage deviation should not be greater than a certain bound $Q_2$:
$$\max_{i=1,\dots,s} \left\{ \omega_i \frac{n_i}{N_i} \right\} \le Q_2.$$

Type 3: The percentage of unachieved goals should not be greater than a certain bound $Q_3$:
$$n_i - M_i y_i \le 0, \qquad \frac{\sum_{i=1}^{s} y_i}{s} \le Q_3,$$
where the $y_i$ are binary variables, and each $M_i$ is a sufficiently large arbitrary number that $n_i$ can never exceed.

The aim of this paper is to develop an interactive algorithm based on the above system of meta-goals.
This algorithm consists of successively solving meta-goal programming problems, so that the DM can update, among other parameters, the target values $Q_k$ at each iteration.
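As a concrete illustration of our own (not part of the paper's formulation), the left-hand sides of the three meta-goal types can be evaluated for any candidate solution once the unwanted deviations are known; the function name and the data below are made up for the sketch:

```python
# Sketch (our own illustration): evaluate the left-hand sides of the three
# meta-goal types of Rodríguez-Uría et al. (2002) for a candidate solution,
# given the unwanted deviations n_i, weights w_i and normalizing factors N_i.

def meta_goal_measures(n, w, N):
    """Return (type 1, type 2, type 3) measures: weighted percentage sum,
    maximum weighted percentage deviation, fraction of unachieved goals."""
    pct = [wi * ni / Ni for wi, ni, Ni in zip(w, n, N)]
    agg = sum(pct)                                     # type 1 measure
    worst = max(pct)                                   # type 2 measure
    frac = sum(ni > 1e-9 for ni in n) / len(n)         # type 3 measure
    return agg, worst, frac

# A goal counts as achieved when its unwanted deviation is (numerically) zero.
agg, worst, frac = meta_goal_measures(
    n=[50.0, 0.0, 25.0], w=[1.0, 1.0, 1.0], N=[100.0, 100.0, 100.0])
# The meta-goals then require agg <= Q1, worst <= Q2 and frac <= Q3.
```

With these made-up numbers, agg = 0.75, worst = 0.5 and two of the three goals are unachieved (frac = 2/3).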


3. IMGP: The interactive meta-goal programming algorithm

The interactive meta-goal programming algorithm has been designed as a two-stage procedure. The first stage is a calculation phase, while the second is the interactive phase itself. Let us now describe each of these two phases.

3.1. Calculation phase

Once the above problem (GP) has been defined, the purpose of this phase is to provide the DM with the ideal and anti-ideal values (and the payoff matrix) for the meta-goal programming problem. Therefore, three problems are solved to determine the least possible aggregate deviation with respect to the target values, the minimum maximum percentage deviation, and the greatest number of goals that can be achieved simultaneously. For this purpose, let us assume that the initial vector of weights is $\omega = \{\omega_1, \omega_2, \dots, \omega_s\}$. In practice, the DM can establish these weights or assume equal weights for all the goals. Similarly, the DM can also decide whether or not to normalize these weights, although it is advisable to do so. Therefore, given a problem (GP), the vector of weights $\omega$ and the normalizing factors $N_1, \dots, N_s$, the following unified approach (POM stands for Payoff Matrix) can be used to calculate the three aggregate optimal deviations:

$$(\text{POM})\quad \begin{cases} \min\; k_1 \displaystyle\sum_{i=1}^{s} \omega_i \frac{n_i}{N_i} + k_2 D + k_3 \displaystyle\sum_{i=1}^{s} y_i \\[4pt] \text{s.t.}\quad f_i(x) + n_i - p_i = t_i, & i = 1,\dots,s \\ \qquad\; g_j(x) \le b_j, & j = 1,\dots,m \\ \qquad\; \omega_i \dfrac{n_i}{N_i} - D \le 0, & i = 1,\dots,s \\ \qquad\; n_i - M_i y_i \le 0, & i = 1,\dots,s \\ \qquad\; \displaystyle\sum_{i=1}^{s} \omega_i \frac{n_i}{N_i} - Z_1 = 0 \\ \qquad\; D - Z_2 = 0 \\ \qquad\; \displaystyle\sum_{i=1}^{s} y_i - Z_3 = 0 \\ \qquad\; x \in \mathbb{R}^n,\; y_i \in \{0,1\},\; i = 1,\dots,s,\; M_i \text{ arbitrarily large} \\ \qquad\; k_r = 1,\; k_j = 0 \;(j \ne r), \text{ alternatively for } r = 1, 2, 3. \end{cases}$$

It should be noted that this scheme is used exclusively to obtain the payoff matrix, and thus the different achievement functions are not considered simultaneously. Therefore, the meta-goals do not need to be normalized. The normalizing factors used in this scheme ($N_1, \dots, N_s$) correspond to the original goals, not to the meta-goals.

Let us denote the optimal value of the observation variable $Z_r$ in problem $r$ ($r = 1, 2, 3$) by $\overline{Z}_r$. Thus, $\overline{Z}_1$ is the minimum percentage aggregate deviation with respect to the aspiration levels, $\overline{Z}_2$ is the


Table 1
Payoff matrix

        Aggregate deviation   Maximum deviation    Unsatisfied goals
r = 1   $\overline{Z}_1$      $Z_2^1$              $Z_3^1$
r = 2   $Z_1^2$               $\overline{Z}_2$     $Z_3^2$
r = 3   $Z_1^3$               $Z_2^3$              $\overline{Z}_3$

minimum maximum percentage deviation, and $\overline{Z}_3$ is the least possible number of simultaneously unachieved goals. Besides, let us denote the value of the observation variable $Z_j$ in the optimal solution of problem $r$ ($r \ne j$) by $Z_j^r$. All these values form the payoff matrix given in Table 1.

It must be pointed out that there may exist alternative optima for any of the three problems solved in order to build the payoff matrix. This eventuality has not been taken into account in this section, because this matrix just gives preliminary information about the possible values of the three global achievement functions, in order to help the decision maker provide the target values for the meta-goals. In any case, if considered necessary, it is possible to test the uniqueness of the solutions, and to look for alternative optima which improve the values of the other achievement functions.

3.2. Interactive phase

At the beginning of the interactive phase, the DM is given the opportunity to establish priority levels for the meta-goals. It is important to point out that, as opposed to traditional GP, a goal can in this case belong to more than one priority level, or even to different meta-goals that share the same level. This is possible because the target values are given on the meta-goals, which may have different forms. For example, let us imagine a case where the Health Authorities must decide on the best activity distribution among the hospitals of the Public System, taking into account, among other criteria, the budget assignment to such hospitals. Thus, the budget assigned to each hospital can be a goal in the original formulation. If several hospitals have been over-budgeted in recent years, one meta-goal can be that the maximum deviation of their budgets with respect to their targets is not greater than 5% (type 2).
On the other hand, there can be a global meta-goal on the budgets: the sum of the deviations of all the budgets with respect to their targets should not be greater than 15% (type 1). In this example, the original goals regarding the budgets of the over-budgeted hospitals belong to both meta-goals. These meta-goals can, in turn, be placed by the decision maker in the same priority level, or in different ones.

Roughly speaking, each priority level is formed by some meta-goals, which in turn contain several goals each. Fig. 1 shows the general scheme of the elements that are considered in a given iteration. Therefore, let us assume that the DM provides $l$ priority levels, $L_1, \dots, L_l$. Each of these levels can contain a series of meta-goals of each type. In the most general case, each priority level $L_k$ can contain $r_1^k$ type 1 meta-goals, defined on the subsets of goals $S_{11}^k, S_{12}^k, \dots, S_{1r_1^k}^k$; $r_2^k$ type 2 meta-goals, defined on the subsets of goals $S_{21}^k, S_{22}^k, \dots, S_{2r_2^k}^k$; and $r_3^k$ type 3 meta-goals, defined on the subsets of goals $S_{31}^k, S_{32}^k, \dots, S_{3r_3^k}^k$. Let us denote the corresponding target levels by $\{Q_{11}^k, Q_{12}^k, \dots, Q_{1r_1^k}^k\}$, $\{Q_{21}^k, Q_{22}^k, \dots, Q_{2r_2^k}^k\}$ and $\{Q_{31}^k, Q_{32}^k, \dots, Q_{3r_3^k}^k\}$, respectively. Besides, the DM can assign a weight to each of the meta-goals that share the same priority level. These weights will be denoted by $\mu_{11}^k, \dots, \mu_{1r_1^k}^k$; $\mu_{21}^k, \dots, \mu_{2r_2^k}^k$; $\mu_{31}^k, \dots, \mu_{3r_3^k}^k$. Let us denote their respective normalizing factors by $N_{th}^k$, although if all the original goals were normalized using a homogeneous scheme, then these weights would not need to be normalized again, because all the meta-goals are given as percentage achievements (in the case of the type 3 meta-goals, dividing by the cardinal of the set of goals the DM wishes to be satisfied). Therefore, these normalizing factors may only be used for instrumental reasons, if the DM or the


[Fig. 1 depicts the hierarchy of the problem at iteration k: several priority levels $L_1, \dots, L_l$, with preemptive weights assumed between levels. Each level contains several meta-goals $MG_1, \dots, MG_\Sigma$; each meta-goal can be of any type (1, 2 or 3), is assigned a target value $Q$, an unwanted deviation variable $\beta$, a weight $\mu$ representing its relative importance with respect to the other meta-goals, and (possibly) a normalizing factor $N$. Each meta-goal in turn contains several goals $G_1, \dots, G_T$; each goal has a fixed target value during the whole procedure, an unwanted deviation variable $n$ or $p$, a weight $\mu$ representing its relative importance with respect to the other goals of the meta-goal, and (possibly) a normalizing factor $N$.]

Fig. 1. Scheme of the elements that constitute each priority level.

analyst wishes to obtain the results in a certain way. Finally, if the $i$th goal belongs to the set $S_{th}^k$, this will be denoted by $i \in S_{th}^k$; $\mu_{th}^k(i)$ is the weight assigned to the $i$th goal in the $h$th type $t$ ($t = 1, 2$) meta-goal, and $N_{th}^k(i)$ will denote its respective normalizing factor. In order to clarify this notation, Table 2 shows the meaning of all the symbols used:


Table 2
Notation used in a given iteration

$x$ — Decision vector ($x \in \mathbb{R}^n$)
$s$ — Number of original goals
$i$ — Goal index, $i = 1, \dots, s$
$f_i$ — Function corresponding to goal $i$, $i = 1, \dots, s$
$m$ — Number of hard constraints
$j$ — Constraint index, $j = 1, \dots, m$
$g_j$ — Function corresponding to constraint $j$, $j = 1, \dots, m$
$t_i$ — Target value of the $i$th goal
$n_i, p_i$ — Negative and positive deviation variables of the $i$th goal
$l$ — Number of priority levels
$k$ — Priority level index, $k = 1, \dots, l$
$r_1^k$ — Number of type 1 meta-goals in level $k$
$r_2^k$ — Number of type 2 meta-goals in level $k$
$r_3^k$ — Number of type 3 meta-goals in level $k$
$u$ — Type 1 meta-goal index, $u = 1, \dots, r_1^k$
$v$ — Type 2 meta-goal index, $v = 1, \dots, r_2^k$
$w$ — Type 3 meta-goal index, $w = 1, \dots, r_3^k$
$\mu_{1u}^k$ — Weight of the $u$th type 1 meta-goal
$\mu_{2v}^k$ — Weight of the $v$th type 2 meta-goal
$\mu_{3w}^k$ — Weight of the $w$th type 3 meta-goal
$N_{1u}^k$ — Normalizing factor of the $u$th type 1 meta-goal
$N_{2v}^k$ — Normalizing factor of the $v$th type 2 meta-goal
$N_{3w}^k$ — Normalizing factor of the $w$th type 3 meta-goal
$S_{1u}^k$ — Set of original goals included in the $u$th type 1 meta-goal
$S_{2v}^k$ — Set of original goals included in the $v$th type 2 meta-goal
$S_{3w}^k$ — Set of original goals included in the $w$th type 3 meta-goal
$Q_{1u}^k$ — Target value of the $u$th type 1 meta-goal
$Q_{2v}^k$ — Target value of the $v$th type 2 meta-goal
$Q_{3w}^k$ — Target value of the $w$th type 3 meta-goal
$\alpha_{1u}^k, \beta_{1u}^k$ — Negative and positive deviation variables of the $u$th type 1 meta-goal
$\alpha_{2v}^k, \beta_{2v}^k$ — Negative and positive deviation variables of the $v$th type 2 meta-goal
$\alpha_{3w}^k, \beta_{3w}^k$ — Negative and positive deviation variables of the $w$th type 3 meta-goal
$\mu_{1u}^k(i)$ — Weight of goal $i$ in the $u$th type 1 meta-goal
$\mu_{2v}^k(i)$ — Weight of goal $i$ in the $v$th type 2 meta-goal
$N_{1u}^k(i)$ — Normalizing factor of goal $i$ in the $u$th type 1 meta-goal
$N_{2v}^k(i)$ — Normalizing factor of goal $i$ in the $v$th type 2 meta-goal

Given these data, the problem solved at each iteration can be described as follows:

$$(P)_{\text{iter } k}\quad \begin{cases} \operatorname{lex\,min}\; h(\beta) \\ \text{s.t.}\quad \text{Hard constraints} \\ \qquad\; \text{Original goals} \\ \qquad\; \text{All the meta-goals (all the types, all the levels)} \\ \qquad\; \text{Non-negativity conditions,} \end{cases}$$


where $h$ is a vector achievement function, with as many components as priority levels, and $\beta$ is the vector formed by the unwanted deviation variables of the meta-goals. More precisely, making use of the previously described notation, the meta-GP problem for the first iteration is as follows:

$$(\text{M-GP})_1\quad \begin{cases} \operatorname{Lexmin}\; \{\{\beta_{11}^1, \dots, \beta_{1r_1^1}^1, \beta_{21}^1, \dots, \beta_{2r_2^1}^1, \beta_{31}^1, \dots, \beta_{3r_3^1}^1\}, \dots, \{\beta_{11}^l, \dots, \beta_{1r_1^l}^l, \beta_{21}^l, \dots, \beta_{2r_2^l}^l, \beta_{31}^l, \dots, \beta_{3r_3^l}^l\}\} \\ \text{subject to:} \\ \quad f_i(x) + n_i - p_i = t_i, \quad i = 1,\dots,s \\ \quad g_j(x) \le b_j, \quad j = 1,\dots,m \\ \quad \displaystyle\sum_{i \in S_{1u}^k} \mu_{1u}^k(i) \frac{n_i}{N_{1u}^k(i)} + \alpha_{1u}^k - \beta_{1u}^k = Q_{1u}^k, \quad u = 1,\dots,r_1^k,\; k = 1,\dots,l \\ \quad \left.\begin{aligned} &\mu_{2v}^k(i) \frac{n_i}{N_{2v}^k(i)} - D_v^k \le 0, \quad i \in S_{2v}^k \\ &D_v^k + \alpha_{2v}^k - \beta_{2v}^k = Q_{2v}^k \end{aligned}\right\}\; v = 1,\dots,r_2^k,\; k = 1,\dots,l \\ \quad \left.\begin{aligned} &n_i - M_i y_i \le 0, \quad i \in S_{3w}^k \\ &\frac{\sum_{i \in S_{3w}^k} y_i}{\operatorname{card}(S_{3w}^k)} + \alpha_{3w}^k - \beta_{3w}^k = Q_{3w}^k \\ &y_i \in \{0,1\}, \quad i \in S_{3w}^k \end{aligned}\right\}\; w = 1,\dots,r_3^k,\; k = 1,\dots,l \\ \quad n_i, p_i \ge 0, \quad i = 1,\dots,s \\ \quad x \in \mathbb{R}^n \\ \quad \alpha_{1u}^k, \beta_{1u}^k, \alpha_{2v}^k, \beta_{2v}^k, \alpha_{3w}^k, \beta_{3w}^k \ge 0 \end{cases}$$

After solving problem (M-GP)$_1$, the DM is shown the solution obtained. If the DM is satisfied with these values, the procedure ends. If not, the DM can restructure the problem, giving new meta-goals with new priority levels, target values, weights, etc., and the algorithm proceeds to the next iteration. The process continues until it reaches a solution that is acceptable to the DM. Thus, the IMGP algorithm scheme is as follows:

Step 0. Set Iter = 0. Let $\omega$ be the initial vector of weights.
Step 1. Solve problem (POM) and show the payoff matrix to the DM.
Step 2. If the DM is satisfied with any of the rows of the payoff matrix, then end. Otherwise, go to Step 3.
Step 3. Set Iter = Iter + 1.
Step 4. If the DM so wishes, establish new priority levels $L_1, \dots, L_l$.
Step 5. For each level $k$ ($k = 1, 2, \dots, l$), the DM must provide the data for the meta-goals, which can include, in the most general case: the number of meta-goals of each type, $r_1^k$, $r_2^k$ and $r_3^k$; the weights of the meta-goals $\mu_{1u}^k$ ($u = 1,\dots,r_1^k$), $\mu_{2v}^k$ ($v = 1,\dots,r_2^k$), $\mu_{3w}^k$ ($w = 1,\dots,r_3^k$) and their respective normalizing factors; the sets $S_{11}^k, \dots, S_{1r_1^k}^k$, $S_{21}^k, \dots, S_{2r_2^k}^k$, $S_{31}^k, \dots, S_{3r_3^k}^k$; the target levels $\{Q_{11}^k, \dots, Q_{1r_1^k}^k\}$, $\{Q_{21}^k, \dots, Q_{2r_2^k}^k\}$, $\{Q_{31}^k, \dots, Q_{3r_3^k}^k\}$; and the weights of the goals in each type 1 or type 2 meta-goal, $\mu_{1u}^k(i)$ ($i \in S_{1u}^k$), $\mu_{2v}^k(i)$ ($i \in S_{2v}^k$), together with their respective normalizing factors.
Step 6. Solve problem (M-GP)$_{\text{Iter}}$ and show the results obtained to the DM.
Step 7. If the DM is satisfied with the current solution, then end. Otherwise, go to Step 4.

Fig. 2 shows the flowchart of this algorithm. In the next section, an example will be used to illustrate the functioning and behavior of the algorithm, but two aspects of the IMGP method must be pointed out beforehand:


[Fig. 2 depicts the flowchart of the algorithm: start with Iter = 0; give the weights $\omega_i$, $i = 1, \dots, s$; solve (POM) for $r = 1, 2, 3$ and show the payoff matrix to the DM. If the DM accepts any line of the payoff matrix, the procedure ends. Otherwise, Iter is increased; the DM may define new priority levels $L$ and/or new meta-goals (sets $S$, target values $Q$ and weights $\mu$); problem (M-GP)$_{\text{Iter}}$ is solved and the optimal solution is shown to the DM. If the DM accepts the solution, the procedure ends; otherwise, a new iteration begins.]

Fig. 2. Flowchart of the interactive meta-goal programming algorithm (IMGP).
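To make the control flow of Fig. 2 concrete, the loop can be sketched in a few lines of Python. This is our own sketch: `solve_pom`, `solve_mgp` and the three DM callbacks are hypothetical stand-ins for a real LP/MILP solver and for the dialogue with the decision maker.

```python
# Sketch of the IMGP loop (Steps 0-7); solve_pom, solve_mgp and the DM
# callbacks are hypothetical stand-ins, not code from the paper.

def imgp(solve_pom, solve_mgp, dm_accepts_payoff, dm_specify_metagoals,
         dm_accepts, weights, max_iter=20):
    payoff = solve_pom(weights)               # Step 1: payoff matrix (POM)
    if dm_accepts_payoff(payoff):             # Step 2
        return payoff
    solution = None
    for it in range(1, max_iter + 1):         # Step 3: Iter = Iter + 1
        # Steps 4-5: new levels L, sets S, targets Q and weights mu.
        spec = dm_specify_metagoals(payoff, solution)
        solution = solve_mgp(spec)            # Step 6: solve (M-GP)_Iter
        if dm_accepts(solution):              # Step 7
            return solution
    return solution

# Mock run: a DM that rejects the payoff matrix once and accepts the first
# interactive solution (values echo the example of Section 4).
result = imgp(
    solve_pom=lambda w: "payoff-matrix",
    solve_mgp=lambda spec: {"x": (8.44, 4.44), "spec": spec},
    dm_accepts_payoff=lambda p: False,
    dm_specify_metagoals=lambda p, s: {"Q": (2.10, 0.60, 4)},
    dm_accepts=lambda s: True,
    weights=[1.0] * 7)
```

The termination guard `max_iter` is our own addition; the paper's procedure simply runs until the DM accepts a solution.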


• The (M-GP)$_{\text{Iter}}$ problems can be solved using a weighted or a minmax scheme within each priority level, taking into account the weights given by the DM. For the weighted GP-variant, the achievement function for level $k$ is as follows:

$$\sum_{u=1}^{r_1^k} \mu_{1u}^k \frac{\beta_{1u}^k}{N_{1u}^k} + \sum_{v=1}^{r_2^k} \mu_{2v}^k \frac{\beta_{2v}^k}{N_{2v}^k} + \sum_{w=1}^{r_3^k} \mu_{3w}^k \frac{\beta_{3w}^k}{N_{3w}^k},$$

while, for the minmax case, the function is:

$$\max_{\substack{u=1,\dots,r_1^k \\ v=1,\dots,r_2^k \\ w=1,\dots,r_3^k}} \left\{ \mu_{1u}^k \frac{\beta_{1u}^k}{N_{1u}^k},\; \mu_{2v}^k \frac{\beta_{2v}^k}{N_{2v}^k},\; \mu_{3w}^k \frac{\beta_{3w}^k}{N_{3w}^k} \right\}.$$

Due to the technical complexity of the process at this point, our opinion is that the DM should choose the GP-variant.

• The (M-GP)$_{\text{Iter}}$ problems, as well as the interactive procedure itself, have been designed to unify in a single formulation all the possible cases that can appear when defining the meta-goals. Nevertheless, as we will see in the example given in Section 4, in practice the DM will not have to provide all the information described for each iteration. Rather, the DM will simply express his or her wishes regarding certain goals and/or achievement functions. In this sense, the implementation of the method must be clear enough that the decision maker can provide the information he or she considers adequate at any moment. Therefore, although the mathematics and notation involved may seem daunting to many users of goal programming compared with the original model, this in itself is not a problem if the method is incorporated into an automated package.
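As a small illustration of our own (the data below are made up), the two variants differ only in how the normalized, weighted meta-goal deviations of a level are aggregated:

```python
# Sketch (not from the paper): aggregate the unwanted meta-goal deviations
# beta of one priority level under the two GP-variants.  Each entry carries
# its weight mu and normalizing factor N.

def level_achievement(metas, variant="weighted"):
    """metas: list of (mu, beta, N) tuples for all meta-goals of one level."""
    terms = [mu * beta / N for mu, beta, N in metas]
    if variant == "weighted":
        return sum(terms)     # weighted GP-variant: sum of normalized terms
    if variant == "minmax":
        return max(terms)     # minmax GP-variant: worst normalized term
    raise ValueError(variant)

# Made-up level with three meta-goals:
metas = [(1.0, 0.2, 1.0), (1.0, 0.05, 1.0), (2.0, 0.1, 1.0)]
# weighted -> 0.2 + 0.05 + 0.2 = 0.45; minmax -> 0.2
```

In an actual solve, these expressions would be the objective minimized at level $k$, subject to the constraints of (M-GP)$_{\text{Iter}}$.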

4. Example

To illustrate the functioning of the interactive meta-goal programming approach, let us consider the following set of goals and constraints for a simple farm planning problem, where variables x1 and x2 represent the area covered by fruit trees A and B, respectively (for details, see Romero and Rehman, 2003):

4.1. Goals

6250x1 + 5000x2 + n1 − p1 = 200,000   (profit; Euros/ha)
1375x1 + 1025x2 + n2 − p2 = 36,000    (working capital available; Euros/ha)
120x1 + 180x2 + n3 − p3 = 4000        (annual labor for pruning; man-hours/ha)
400x1 + n4 − p4 = 2000                (annual labor for harvesting crop A; man-hours/ha)
450x2 + n5 − p5 = 2000                (annual labor for harvesting crop B; man-hours/ha)
35x1 + 35x2 + n6 − p6 = 1000          (machinery for tillage; hours/ha)
x1 + x2 + n7 − p7 = 15                (minimum plantation area; ha)

4.2. Constraint

6250x1 + 5000x2 ≥ 75,000              (profit break-even point; Euros/ha)
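To make the role of the deviation variables concrete, here is a small check of our own (not from the paper): for any candidate plan, each goal equation determines its deviation variables as the negative and positive parts of the gap between the attained level and the target. For example, for the profit goal at the hypothetical plan x1 = 12, x2 = 0:

```python
# Our own illustration: deviation variables of the profit goal
# 6250*x1 + 5000*x2 + n1 - p1 = 200,000 at a candidate plan.

def deviations(attained, target):
    """Split the gap between target and attained into (n, p)."""
    n = max(target - attained, 0.0)   # negative deviation (shortfall)
    p = max(attained - target, 0.0)   # positive deviation (surplus)
    return n, p

profit = 6250 * 12 + 5000 * 0         # plan x1 = 12, x2 = 0 -> 75,000
n1, p1 = deviations(profit, 200_000)  # n1 = 125,000 (unwanted), p1 = 0
```

For this "the more the better" goal only n1 is unwanted; for the two-sided tillage goal both n6 and p6 would be.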


The normalizing factor for each goal will be its respective target value throughout the whole procedure. The unwanted deviation variables for this example are $n_1, p_2, p_3, p_4, p_5, n_6 + p_6$ and $n_7$. Therefore, the model for calculating the payoff matrix is as follows, where three problems are solved ($r = 1, 2, 3$):

$$(\text{POM}_r)\quad \begin{cases} \min\; k_1 \left( \dfrac{n_1}{200{,}000} + \dfrac{p_2}{36{,}000} + \dfrac{p_3}{4000} + \dfrac{p_4}{2000} + \dfrac{p_5}{2000} + \dfrac{n_6 + p_6}{1000} + \dfrac{n_7}{15} \right) + k_2 D + k_3 \displaystyle\sum_{i=1}^{7} y_i \\ \text{s.t.}\quad 6250x_1 + 5000x_2 \ge 75{,}000 \quad \text{(hard constraint)} \\ \quad \text{Goals:} \\ \qquad 6250x_1 + 5000x_2 + n_1 - p_1 = 200{,}000 \\ \qquad 1375x_1 + 1025x_2 + n_2 - p_2 = 36{,}000 \\ \qquad 120x_1 + 180x_2 + n_3 - p_3 = 4000 \\ \qquad 400x_1 + n_4 - p_4 = 2000 \\ \qquad 450x_2 + n_5 - p_5 = 2000 \\ \qquad 35x_1 + 35x_2 + n_6 - p_6 = 1000 \\ \qquad x_1 + x_2 + n_7 - p_7 = 15 \\ \quad \text{Maximum deviation:} \\ \qquad n_1 - 200{,}000D \le 0,\; p_2 - 36{,}000D \le 0,\; p_3 - 4000D \le 0,\; p_4 - 2000D \le 0, \\ \qquad p_5 - 2000D \le 0,\; (n_6 + p_6) - 1000D \le 0,\; n_7 - 15D \le 0 \\ \quad \text{Fully satisfied goals:} \\ \qquad n_1 - 2{,}000{,}000y_1 \le 0,\; p_2 - 360{,}000y_2 \le 0,\; p_3 - 40{,}000y_3 \le 0,\; p_4 - 20{,}000y_4 \le 0, \\ \qquad p_5 - 20{,}000y_5 \le 0,\; (n_6 + p_6) - 10{,}000y_6 \le 0,\; n_7 - 150y_7 \le 0 \\ \quad \text{Observation constraints:} \\ \qquad \dfrac{n_1}{200{,}000} + \dfrac{p_2}{36{,}000} + \dfrac{p_3}{4000} + \dfrac{p_4}{2000} + \dfrac{p_5}{2000} + \dfrac{n_6 + p_6}{1000} + \dfrac{n_7}{15} - Z_1 = 0 \\ \qquad D - Z_2 = 0 \\ \qquad (y_1 + y_2 + y_3 + y_4 + y_5 + y_6 + y_7) - Z_3 = 0 \\ \quad x_1, x_2 \ge 0;\; p_i, n_i \ge 0,\; y_i \in \{0,1\} \;(i = 1,\dots,7) \\ \quad k_r = 1,\; k_s = 0 \;(s \ne r) \end{cases}$$

The payoff matrix in Table 3 was calculated from the above model. Let us suppose that the DM places all the goals at the same priority level and suggests the following initial meta-goals: (i) aggregate unachievement less than or equal to 2.10; (ii) maximum deviation less than or equal to 0.60; (iii) number of unsatisfied goals less than or equal to 4.


Table 3
Payoff matrix of the example

Objective minimized             Achieved score
                                Aggregate unachievement   Maximum deviation   Unsatisfied goals
Aggregate unachievement         2.00                      0.69                4
Maximum deviation               2.22                      0.58                5
Number of unsatisfied goals     2.32                      1.25                3

These three meta-goals are weighted equally, and no normalization factor is used. Besides, the goals belonging to each meta-goal are considered to have equal weights. These data lead to the following meta-GP model:

$$(P1)\quad \begin{cases} \min\; \beta_{11}^1 + \beta_{21}^1 + \beta_{31}^1 \\ \text{s.t.}\quad \text{the hard constraint, goal, maximum-deviation, fully-satisfied-goal} \\ \qquad\; \text{and observation constraints of } (\text{POM}_r), \text{ together with the meta-goals:} \\ \qquad Z_1 + \alpha_{11}^1 - \beta_{11}^1 = 2.10 \\ \qquad Z_2 + \alpha_{21}^1 - \beta_{21}^1 = 0.60 \\ \qquad \dfrac{Z_3}{7} + \alpha_{31}^1 - \beta_{31}^1 = \dfrac{4}{7} \\ \quad x_1, x_2 \ge 0;\; p_i, n_i \ge 0;\; \alpha_{t1}^1, \beta_{t1}^1 \ge 0;\; y_i \in \{0,1\} \;(i = 1,\dots,7) \end{cases}$$


The solution of the above meta-GP model is shown in Table 4. As can be seen, the second meta-goal is not satisfied. Moreover, goals 2, 3 and 5 are fully satisfied, while the rest are not.

Let us now suppose that the DM establishes the following meta-goals and preemptive priority levels in the second iteration:

Priority level 1: Satisfy goal 2.
Priority level 2: The aggregate deviation for goals 1, 6 and 7 (with equal weights) must be less than or equal to 1.00.
Priority level 3: The maximum deviation for goals 3, 4 and 5 (with equal weights) must be less than or equal to 0.69.

Therefore, the following three-stage procedure is carried out.

Priority level 1:

(P2)1:

min  β111

s.t.
Hard constraint:
  6250x1 + 5000x2 ≥ 75,000
Goal:
  1375x1 + 1025x2 + n2 − p2 = 36,000
Meta-goal:
  p2 + α111 − β111 = 0
x1, x2, n2, p2, α111, β111 ≥ 0.

The solution of this problem (x1 = 12, x2 = 0) meets the first meta-goal.

Priority level 2:
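This level-1 result can be checked by hand: with x1 = 12 the hard constraint holds with equality, and goal 2 is comfortably satisfied, so the unwanted deviation p2 (and hence β111) is zero. A quick sketch:

```python
# Quick feasibility check of the priority-level-1 solution (x1, x2) = (12, 0):
# the hard constraint binds, and goal 2 is met, so its unwanted deviation p2
# (and therefore the meta-goal deviation beta) is zero.
x1, x2 = 12.0, 0.0

hard_ok = 6250 * x1 + 5000 * x2 >= 75_000   # hard constraint
f2 = 1375 * x1 + 1025 * x2                  # goal 2 attribute value
p2 = max(0.0, f2 - 36_000)                  # unwanted deviation of goal 2

print(hard_ok, f2, p2)   # True 16500.0 0.0
```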

Table 4
Satisficing solution of the first iteration

Variables: x1 = 8.44, x2 = 4.44

Meta-goal | Value | Target value
1         | 2.00  | ≤2.10
2         | 0.69  | ≤0.60
3         | 4     | ≤4

Goal # | Value     | Target value
1      | 75,000    | ≥200,000
2      | 16,166.67 | ≤36,000
3      | 1813.33   | ≤4000
4      | 3377.78   | ≤2000
5      | 2000      | ≤2000
6      | 451.11    | =1000
7      | 12.89     | ≥15
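The meta-goal values reported in Table 4 can be recomputed directly from the goal definitions. The sketch below uses the exact solution (x1, x2) = (76/9, 40/9) ≈ (8.44, 4.44); this fractional form is inferred from the rounded values in Table 4 and satisfies the hard constraint with equality:

```python
# Recompute the three achievement-function values of Table 4 from the
# exact first-iteration solution x1 = 76/9, x2 = 40/9.
x1, x2 = 76 / 9, 40 / 9

# Normalized unwanted deviations of the seven goals (targets as normalizers).
devs = [
    max(0.0, 200_000 - (6250 * x1 + 5000 * x2)) / 200_000,  # goal 1: >= 200,000
    max(0.0, 1375 * x1 + 1025 * x2 - 36_000) / 36_000,      # goal 2: <= 36,000
    max(0.0, 120 * x1 + 180 * x2 - 4000) / 4000,            # goal 3: <= 4000
    max(0.0, 400 * x1 - 2000) / 2000,                       # goal 4: <= 2000
    max(0.0, 450 * x2 - 2000) / 2000,                       # goal 5: <= 2000
    abs(35 * x1 + 35 * x2 - 1000) / 1000,                   # goal 6: = 1000
    max(0.0, 15 - (x1 + x2)) / 15,                          # goal 7: >= 15
]

aggregate = sum(devs)                       # meta-goal 1: aggregate unachievement
max_dev = max(devs)                         # meta-goal 2: maximum deviation
unsatisfied = sum(d > 1e-9 for d in devs)   # meta-goal 3: number of unsatisfied goals

print(round(aggregate, 2), round(max_dev, 2), unsatisfied)   # 2.0 0.69 4
```

The output matches Table 4: aggregate unachievement 2.00, maximum deviation 0.69 (coming from goal 4), and four unsatisfied goals (1, 4, 6 and 7).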


(P2)2:

min  β211

s.t.
Hard constraint:
  6250x1 + 5000x2 ≥ 75,000
Level 1:
  1375x1 + 1025x2 + n2 − p2 = 36,000
  p2 + α111 − β111 = 0
  β111 = 0
Goals:
  6250x1 + 5000x2 + n1 − p1 = 200,000
  35x1 + 35x2 + n6 − p6 = 1000
  x1 + x2 + n7 − p7 = 15
Observation constraint:
  −Z1 + n1/200,000 + (n6 + p6)/1000 + n7/15 = 0
Meta-goal:
  Z1 + α211 − β211 = 1
x1, x2, ni, pi, αk11, βk11 ≥ 0.

The solution of this problem (x1 = 15.09, x2 = 0) meets the second meta-goal.

Priority level 3:

(P2)3:

min  β321

s.t.
Hard constraint:
  6250x1 + 5000x2 ≥ 75,000
Level 1:
  1375x1 + 1025x2 + n2 − p2 = 36,000
  p2 + α111 − β111 = 0
  β111 = 0
Level 2:
  6250x1 + 5000x2 + n1 − p1 = 200,000
  35x1 + 35x2 + n6 − p6 = 1000
  x1 + x2 + n7 − p7 = 15
  −Z1 + n1/200,000 + (n6 + p6)/1000 + n7/15 = 0
  Z1 + α211 − β211 = 1
  β211 = 0
Goals:
  120x1 + 180x2 + n3 − p3 = 4000
  400x1 + n4 − p4 = 2000
  450x2 + n5 − p5 = 2000
Maximum deviation:
  p3 − 4000·D ≤ 0
  p4 − 2000·D ≤ 0
  p5 − 2000·D ≤ 0
Observation constraint:
  −Z2 + D = 0
Meta-goal:
  Z2 + α321 − β321 = 0.69
x1, x2, ni, pi, αks1, βks1 ≥ 0.

The solution of this problem is shown in Table 5. As can be seen, the three meta-goals are achieved. Goal 2 (placed at the first priority level) is achieved. With respect to the goals placed at the second priority level, goal 7 (which was not achieved in iteration 1) is the only one achieved, but there has been an improvement in the values of the other two (i.e., goals 1 and 6) with respect to the values taken in the first iteration. Finally, at the third priority level, goal 3 is satisfied.


Table 5
Satisficing solution of the second iteration

Variables: x1 = 8.45, x2 = 7.34

Level 1
  Meta-goal (type 1): value 0, target =0
  Goal 2: value 19,138.62, target ≤36,000

Level 2
  Meta-goal (type 1): value 1, target ≤1
  Goal 1: value 89,494.79, target ≥200,000
  Goal 6: value 552.53, target =1000
  Goal 7: value 15.79, target ≥15

Level 3
  Meta-goal (type 2): value 0.69, target ≤0.69
  Goal 3: value 2334.56, target ≤4000
  Goal 4: value 3380, target ≤2000
  Goal 5: value 3301, target ≤2000
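The level-3 meta-goal value in Table 5 can be rechecked from the reported solution; since x1 and x2 are rounded to two decimals, small discrepancies with the tabulated goal values are to be expected:

```python
# Recheck the level-3 meta-goal of Table 5 (maximum normalized deviation over
# goals 3, 4 and 5) from the rounded second-iteration solution (8.45, 7.34).
x1, x2 = 8.45, 7.34

p3 = max(0.0, 120 * x1 + 180 * x2 - 4000) / 4000   # goal 3: <= 4000 (satisfied)
p4 = max(0.0, 400 * x1 - 2000) / 2000              # goal 4: <= 2000
p5 = max(0.0, 450 * x2 - 2000) / 2000              # goal 5: <= 2000

max_dev = max(p3, p4, p5)
print(round(max_dev, 2))   # 0.69, matching the level-3 meta-goal value
```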

The value of goal 4 is slightly worse than in the first iteration, while goal 5 is significantly worse. Nevertheless, the unachievements for these two goals are now very similar, as a result of the use of the minmax approach for this meta-goal. The procedure can continue with further iterations until the DM is satisfied with the solution.

Finally, let us point out that this simple example illustrates the flexibility of the interactive meta-goal programming method. Different kinds of meta-goals can be considered simultaneously, as shown in iteration 1, and meta-goals can be defined on subsets of goals and placed at different priority levels, as shown in iteration 2. This flexibility allows the DM to give information comfortably, based on the values yielded in the previous iteration.

5. Additional considerations

5.1. Possible refinements of the calculation phase

The calculation phase is designed to provide the DM with useful information to be taken into account during the interactive phase of the IMGP algorithm. Therefore, the DM may wish to gather additional information in this preliminary phase. Solving problem (POM) for other values of the parameters kr is a possible way of gathering extra information. This can generate solutions that consider the three achievement functions simultaneously, and the DM can establish their relative importance through their respective weights. In this case, a normalizing scheme for the type 3 meta-goals, similar to the one defined in problem (M-GP)1, should be carried out.

On the other hand, problem (POM) solved for r = 3 determines the maximum number of goals that can be achieved simultaneously, and its ''satisficing'' solution provides a maximal set of achieved goals. Nevertheless, especially in high-dimensional problems, there could be other families of goals (of the same cardinality) that can also be achieved simultaneously. Moreover, for reasons of priority, the DM may wish a set of goals to be achieved in any case. Obviously, an exhaustive study can be carried out to determine all the families of goals that can be achieved simultaneously, but such a study would be unnecessarily long-winded in high-dimensional problems. A possibility, therefore, is to carry out an optional calculation phase, solving three new problems, where a certain set of goals must be satisfied in any case. Let A be the set of indices for such goals, and let B = {1, 2, …, s}\A. Then, the three following problems can be stated:


(P1A):

min  Σ(i∈B) ωi·ni/Ni

s.t.
  fi(x) + ni − pi = ti,  i = 1, …, s
  ni = 0,  i ∈ A
  gj(x) ≤ bj,  j = 1, …, m
  x ∈ Rⁿ

(P2A):

min  max(i∈B) {ωi·ni/Ni}

s.t.
  fi(x) + ni − pi = ti,  i = 1, …, s
  ni = 0,  i ∈ A
  gj(x) ≤ bj,  j = 1, …, m
  x ∈ Rⁿ

which is equivalent to:

min  D

s.t.
  fi(x) + ni − pi = ti,  i = 1, …, s
  ωi·ni/Ni ≤ D,  i ∈ B
  ni = 0,  i ∈ A
  gj(x) ≤ bj,  j = 1, …, m
  x ∈ Rⁿ, D ∈ R

(P3A):

min  Σ(i∈B) yi

s.t.
  fi(x) + ni − pi = ti,  i = 1, …, s
  ni − Mi·yi ≤ 0,  i ∈ B
  ni = 0,  i ∈ A
  gj(x) ≤ bj,  j = 1, …, m
  x ∈ Rⁿ; yi ∈ {0, 1}, i ∈ B.

By solving these three problems, the DM can gain accurate information more in line with his/her preferences, which can be useful during the interactive phase. Fig. 3 shows the flowchart of this optional calculation phase.

5.2. Efficiency

As could be the case in any GP problem, the final solution may not be efficient with respect to the original attributes of the DM, from which the goals have been derived. Obviously, if this were the case, and the DM wished to obtain an efficient solution, any of the efficiency restoration schemes for GP problems could be applied. The most sensible way to face the problem of obtaining an efficient solution is to go back to the original (GP) formulation, and to search for efficiency regarding the original set of objective functions. On this basis, the simplest way to guarantee the efficiency of the meta-GP solution consists of maximizing the wanted deviation variables of the original goals (that is, the positive ones in our case) without increasing the values of the negative deviation variables which have been minimized so far. If the solution obtained does not change, then the previous meta-GP solution is efficient; otherwise the new solution dominates the previous one (e.g. Romero, 1991, pp. 16–17). Refinements and extensions of this approach can be seen in Tamiz and Jones (1996) for the general case and Caballero et al. (1998) for the convex case.

5.3. Other features of the IMGP algorithm

Some features of the IMGP algorithm, related to its interactive nature, should be pointed out. They all correspond to some of the behavioral points raised in the papers by Korhonen et al. (1990) and Korhonen and Wallenius (1996). First, ours is a learning-based method, that is, no mathematical convergence is assured (as is usually the case with Goal Programming or Reference Point based interactive methods), and


[Fig. 3. Flowchart of the optional analytic phase: the DM gives the set A of goals to be satisfied; problems (P1A), (P2A) and (P3A) are solved; the resulting payoff matrix is shown to the DM; if the DM does not accept the solution, the phase is repeated.]

rather, the decision maker learns about the problem along the procedure, and his/her conviction that an acceptable solution has been reached has to guarantee the convergence. In this sense, the open structure of the method lets the decision maker explore the problem, and re-evaluate previously discarded solutions. Second, no termination criterion, other than the decision maker's will, has been established in the method. Nevertheless, such a criterion can be added. The simplest possibility is to test the similarity between two (or more) successive iterations. A more sophisticated stopping criterion for interactive goal programming algorithms can be found in Korhonen and Laakso (1986a,b). Third, the information shown to the decision maker at each iteration includes, as displayed in Tables 4 and 5 of Section 4, the values of the decision variables, the values of the functions of the original goals and their deviations with respect to the target values, and the results achieved with regard to each meta-goal. In our opinion, this information is clear and complete for the decision maker, and gives him/her enough elements to decide about the next iteration. Finally, given that the information required from the decision maker can vary from a low to a high level of complexity, the method is suitable for decision makers with different degrees of knowledge about their problem. Nevertheless, a certain notion of the main principles of Goal Programming is required.

6. Concluding remarks

The interactive meta-GP algorithm proposed in this paper contains as particular cases the classical GP-variants, and it can help to mitigate the two main problems associated with the use of GP as an operational decision-making tool: the allocation of targets to each attribute and the selection of a suitable achievement function for a particular decision-making problem. With respect to the target values, although these are still


required of the DM, the subsequent interactive procedure (which could be interpreted as a sensitivity analysis on the target values) reduces their impact on the final solution. With respect to the achievement function, the DM does not have to choose and stick with a particular GP-variant throughout the process. Instead, the algorithm makes it possible to use different achievement functions at the same time and to switch from one to another during the problem-solving procedure.

The interactive meta-GP procedure fits in well with the general framework of interactive procedures proposed in the multi-criteria field and, thus, has the advantages of any interactive method regarding the sequential information exchange between the system (analyst) and the DM. On the other hand, the computational burden in the two phases of the procedure, that is, the calculation phase as well as the interactive phase proper, is not larger than is usual in this type of procedure.

As shown in Section 4, flexibility is one of the most important features of the IMGP algorithm. In fact, the DM can, at each iteration, establish any kind of meta-goal on any subset of goals, and these may or may not be allocated to priority levels. Weights and normalizing factors for goals and meta-goals can be changed at any step of the procedure. Another noteworthy point is that the proposed method requires no restrictive assumptions concerning the DM's absolute preferences or the shape of his/her utility function. In fact, apart from the original goals, the only input required of the DM by interactive meta-GP is the initial and tentative definition of the targets for the precise meta-goals, like aggregate unachievement, maximum deviation and number of unsatisfied goals.

The paper is methodologically oriented and, as such, admits several improvements, such as the incorporation of other GP-variants (e.g. penalizing functions) into the system, the implementation of the procedure within a friendly computerized framework, as well as testing with DMs in real situations.

Acknowledgements A preliminary version of this paper was presented at The Sixth Multiobjective Programming and Goal Programming Conference (MOPGP04) (Hammamet, Tunisia, May 2004). Comments and suggestions raised by the referees have greatly increased the clarity and accuracy of the paper. We thank the Editor Professor Wallenius for his comments and careful editing that has improved the quality and presentation of the paper. We would like to thank our colleagues from the Spanish Multi-Criteria Network, funded by the Spanish Ministry of Science and Technology, Project BFM2002-11282-E, for their support. The work of Carlos Romero was funded by the Spanish Ministry of Educacion y Ciencia. The work of Rafael Caballero and Francisco Ruiz was funded by the Spanish Ministry of Science and Technology, Project MTM2004-01987, and by the Andalusian Government Department of Education and Science, Project SEJ-417. Finally, the English editing by Mrs Rachel Elliott is appreciated.

References

Aouni, B., Kettani, O., 2001. Goal programming model: A glorious history and a promising future. European Journal of Operational Research 133, 225–453.
Benayoun, R., de Montgolfier, J., Tergny, J., Larichev, O., 1971. Linear programming with multiple objective functions: Step method (STEM). Mathematical Programming 1, 366–375.
Caballero, R., Rey, L., Ruiz, F., 1998. Lexicographic improvements of the target values in convex goal programming. European Journal of Operational Research 107, 644–655.
Chankong, V., Haimes, Y.Y., 1978. An interactive surrogate worth tradeoff (ISWT) method for multiobjective decision making. In: Zionts, S. (Ed.), Multiple Criteria Problem Solving. Springer, New York.
Charnes, A., Cooper, W.W., 1961. Management Models and Industrial Applications of Linear Programming. Wiley, New York.
Dyer, J.S., 1972. Interactive goal programming (IGP). Management Science 19, 62–70.


Gardiner, L., Steuer, R.E., 1994. Unified interactive multiple objective programming. European Journal of Operational Research 74, 391–406.
Geoffrion, A.M., Dyer, J.S., Feinberg, A., 1972. An interactive approach for multi-criterion optimization with an application to the operation of an academic department. Management Science 19, 357–368.
González-Pachón, J., Romero, C., 2004. Satisficing logic and goal programming: Towards an axiomatic link. Information Systems and Operational Research—INFOR 42, 157–161.
Ignizio, J.P., 1976. Goal Programming and Extensions. Lexington Books, Lexington, MA.
Jones, D.F., Tamiz, M., 2002. Goal programming in the period 1990–2000. In: Ehrgott, M., Gandibleux, X. (Eds.), Multicriteria Optimization: State of the Art Annotated Bibliographic Survey. Kluwer Academic Publishers, Boston (Chapter 3).
Kettani, O., Aouni, B., Martel, J.M., 2004. The double role of the weight factor in the goal programming model. Computers and Operations Research 31, 1833–1845.
Korhonen, P., Laakso, J., 1986a. A visual interactive method for solving the multiple criteria problem. European Journal of Operational Research 24, 277–287.
Korhonen, P., Laakso, J., 1986b. Solving generalized goal programming using a visual interactive approach. European Journal of Operational Research 26, 355–363.
Korhonen, P., Wallenius, J., 1988. A Pareto race. Naval Research Logistics 35, 615–623.
Korhonen, P., Wallenius, J., 1990. A multiple objective linear programming decision support system. Decision Support Systems 6, 243–251.
Korhonen, P., Wallenius, J., 1996. Letter to the editor: Behavioral issues in MCDM: Neglected research questions. Journal of Multicriteria Decision Analysis 5, 178–182.
Korhonen, P., Moskowitz, H., Wallenius, J., 1990. Choice behavior in interactive multiple criteria decision making. Annals of Operations Research 23, 161–179.
Lee, S.M., 1972. Goal Programming for Decision Analysis. Auerbach, Philadelphia.
Masud, A.S., Hwang, C.L., 1981. Interactive sequential goal programming (ISGP). Journal of the Operational Research Society 32, 391–400.
Miettinen, K., 2002. Interactive nonlinear multiobjective procedures. In: Ehrgott, M., Gandibleux, X. (Eds.), Multiple Criteria Optimization: State of the Art Annotated Bibliographic Surveys. Kluwer, Boston.
Nakayama, H., Sawaragi, Y., 1984. Satisficing trade-off method for multiobjective programming. In: Grauer, M., Wierzbicki, A. (Eds.), Interactive Decision Analysis. Lecture Notes in Economics and Mathematical Systems. Springer, Laxemburg, pp. 113–122.
Reeves, G.R., Hedin, S.R., 1993. A generalized interactive goal programming procedure. Computers and Operations Research 20, 747–753.
Rodríguez-Uría, M.V., Caballero, R., Ruiz, F., Romero, C., 2002. Meta-goal programming. European Journal of Operational Research 136, 422–429.
Romero, C., 1991. Handbook of Critical Issues in Goal Programming. Pergamon Press, Oxford.
Romero, C., Rehman, T., 2003. Multiple Criteria Analysis for Agricultural Decisions. Elsevier, Amsterdam. Original publication in 1989.
Sakawa, M., 1982. Interactive multiobjective decision making by the sequential proxy optimization technique (SPOT). European Journal of Operational Research 9, 386–396.
Schniederjans, M.J., 1995. Goal Programming: Methodology and Applications. Kluwer Academic Publishers, Boston.
Shin, W.S., Ravindran, A., 1991. Interactive multiple objective optimization: Survey I—continuous case. Computers and Operations Research 18, 97–114.
Spronk, J., 1981. Interactive Multiple Goal Programming: Applications to Financial Planning. Martinus Nijhoff, Boston.
Steuer, R., 1983. Multiple criterion function goal programming applied to managerial compensation planning. Computers and Operations Research 10, 299–309.
Steuer, R.E., 1986. Multiple Criteria Optimization: Theory, Computation and Application. Wiley, New York.
Steuer, R.E., Choo, E.U., 1983. An interactive weighted Tchebycheff procedure for multiple objective programming. Mathematical Programming 26, 326–344.
Tamiz, M., Jones, D.F., 1996. Goal programming and Pareto efficiency. Journal of Information and Optimization Sciences 17, 291–307.
Weistroffer, H.R., 1983. An interactive goal programming method for non-linear multiple-criteria decision-making problems. Computers and Operations Research 10, 311–320.
Wierzbicki, A.P., 1981. A mathematical basis for satisficing decision making. In: Morse, J.N. (Ed.), Organizations: Multiple Agents with Multiple Criteria. Springer, New York, pp. 465–483.
Yang, J.B., Sen, P., 1996. Preference modelling by estimating local utility functions for multiobjective optimization. European Journal of Operational Research 95, 115–138.
Zionts, S., Wallenius, J., 1976. An interactive programming method for solving the multiple criteria problem. Management Science 22, 652–663.