
Int. Trans. Opl Res. Vol. 5, No. 4, pp. 303-315, 1998. © 1998 IFORS. Elsevier Science Ltd. All rights reserved. Printed in Great Britain. 0969-6016/98 $19.00 + 0.00

Finding Dual Flexible Contingency Plans for Optimal Generalized Linear Systems

YONG SHI

Department of Information Systems and Quantitative Analysis, College of Information Science and Technology, University of Nebraska at Omaha, Omaha, Nebraska 68182, U.S.A.

Designing optimal linear systems has extended the traditional optimization method (i.e. finding an optimal point for a given system) to finding an optimal solution structure. This structure consists of optimal systems and their optimal contingency plans which can cope with various decision situations under uncertainty. The multi-criteria and multi-constraint-levels (MC2) simplex method is a primary tool used in designing optimal systems. In this paper, a new approach to designing optimal systems is proposed to find dual flexible contingency plans (DFCP) for generalized good systems (GGS). GGS goes beyond the assumption of selecting opportunities for a potentially good system, while DFCP utilizes all possible slack resources and converts non-optimal solutions into optimal ones. This approach enriches the theoretical framework of designing optimal systems. A numerical example is used to illustrate the proposed approach. © 1998 IFORS. Published by Elsevier Science Ltd. All rights reserved.

Key words: Linear programming, optimal linear system, MC2-simplex method, generalized good systems, dual flexible contingency plans.

1. INTRODUCTION

A popular or traditional linear optimization method is to find an ``optimal point'' for a given system (Charnes and Cooper, 1961; Churchman, 1968; Dantzig, 1963). This optimal point, interpreted as either an optimal solution (Koopmans, 1951) or an optimal design (Papalambros and Wilde, 1988; Wilde, 1978), can only be used to deal with a certain decision situation. It fails when decision situations vary under uncertainty. For years, the challenge of applying optimization techniques in decision-making analysis has been to prepare optimal solutions for future decision situations that are subject to changes of reality, such as the stock market, business competition, or economic engineering.

Designing optimal systems attempts to answer this challenge by first using multi-criteria and multi-constraint-levels (MC2) linear programming (Seiford and Yu, 1979) to formulate a given decision-making problem. Then, it identifies an optimal solution structure consisting of optimal systems and their corresponding contingency plans to cope with various decision situations under uncertainty.

In a 1990 issue of Management Science, Lee et al. (1990) proposed a three-step basic approach to designing the optimal systems and constructing their corresponding contingency plans. In Step 1, given a set of all possible opportunities in the design model, this approach identifies a set of potentially good systems (PGS) that contain selected opportunities and can potentially optimize the given design problem under certain ranges of decision parameters, such as the contribution of objectives and resource availability levels. In Step 2, for each PGS, a submodel of the design model is built to construct primal rigid contingency plans (PRCP) for overcoming changes of the decision parameters. The contingency plan for a PGS, containing selected opportunities and some slack resources with purchased external resources, if necessary, converts infeasible solutions into feasible ones. In Step 3, known techniques of decision making under uncertainty are applied to select the optimal linear system(s) from the set of PGS and their PRCP as the final decision.

Constructing the contingency plans for designing optimal systems differs from known post-optimality or sensitivity analysis for linear programming (Gal, 1979). Construction of contingency plans ensures feasibility and optimality over decision situations for the undertaken system design problem, while sensitivity analysis identifies the optimal solution of a given system as estimates of some system data become available.


Lee et al. (1990) stimulated research on various approaches to designing optimal systems by the MC2-simplex method (Seiford and Yu, 1979). Shi and Yu (1992) constructed primal flexible contingency plans (PFCP) for PGS. The PFCP, containing selected opportunities and all slack resources, provides flexibility for decision makers to utilize slack resources. Shi et al. (1996) suggested that unions of PGS be used to generate generalized good systems (GGS) as candidates for the optimal system. The GGS can relax the assumption for the MC2-simplex method and are preferred to PGS in terms of optimality conditions. Shi and Zhang (1993) discussed how to find PFCP for GGS. A computer-aided system for designing optimal systems through PGS, GGS, PRCP and PFCP has been proposed by Shi et al. (1994d). Parallel to the above primal approaches, Shi (1992) used a dual model of the design model to construct dual rigid contingency plans (DRCP) for PGS. The DRCP, containing selected opportunities, some slack resources, and the contribution of the opportunities, converts possible non-optimal solutions into optimal ones. Shi (1995a) explored finding dual flexible contingency plans (DFCP) for PGS to adjust all slack resources to meet the optimality. Shi and He (1994) described how to construct DRCP for GGS.

The goal of this paper is to propose an effective and systematic procedure for constructing DFCP for GGS. This will complete the theoretical foundation of designing optimal systems. The paper proceeds as follows. First, it sketches a mathematical model for designing optimal systems. Techniques of identifying PGS and GGS are also outlined. Next, it presents two mathematical models for constructing DFCP for GGS. Then, it uses a dual augmented model to derive the effective and systematic procedure for constructing DFCP for GGS under various decision situations. A numerical example is used to illustrate the procedure. Finally, it summarizes this study with a number of potential real-world applications.

2. CONCEPTS OF OPTIMAL LINEAR SYSTEMS

Let N = {1, . . ., n} be n opportunities under consideration. Then, a model of designing optimal linear systems can be formulated by

$$\max\; \lambda^t C x \quad \text{s.t.}\quad Ax \le D\gamma,\;\; x \ge 0, \tag{1}$$

where C ∈ R^{q×n}, A ∈ R^{m×n} and D ∈ R^{m×p} are matrices of q × n, m × n and m × p dimensions, respectively; x ∈ R^n are decision variables; λ ∈ R^q, γ ∈ R^p, and both (γ, λ) are assumed unknown. In equation (1), q situational forces affect the unit contribution, while p situational forces affect resource availability. Both parameters (γ, λ) reflect changes in either the internal or external environments of the systems. These environments could be market conditions, economic conditions, social conditions, political conditions, technological conditions or business philosophy. λ is called the contribution parameter and γ the resource parameter. Note that λ and γ may be treated as probability distributions over the situational forces. The interpretation should depend on the individual contexts of linear systems.

If parameters (γ, λ) are known ahead of decision time, the best k opportunities from the n possible opportunities can be found as the optimal solution by using linear programming techniques. However, if parameters (γ, λ) cannot be known ahead of decision time, the linear programming approach will not be effective because there are infinitely many possible combinations of (γ, λ). Any change of γ may make the original choice infeasible, while that of λ can render the choice not optimal. Thus, contingency plans need to be constructed to overcome these difficult decision situations.
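For a single, fully specified pair (γ, λ), equation (1) is just an ordinary linear program. The sketch below solves such a specialization with an off-the-shelf LP solver, using the furniture-firm data of Example 1 (Section 4); the particular (γ, λ) values are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: for one fixed (gamma, lambda), model (1) collapses to an ordinary LP.
# C, A, D are the Example 1 matrices (Section 4); lam and gamma are assumed values.
import numpy as np
from scipy.optimize import linprog

C = np.array([[-4.0, -2.0],    # q x n criteria matrix
              [ 1.0,  1.0]])
A = np.array([[1.0, 3.0],      # m x n constraint matrix
              [2.0, 1.0]])
D = np.array([[40.0, 30.0],    # m x p resource-availability matrix
              [10.0, 15.0]])

lam   = np.array([0.25, 0.75])  # assumed contribution parameter, lambda > 0
gamma = np.array([0.70, 0.30])  # assumed resource parameter, gamma > 0

# max lam^t C x  s.t.  A x <= D gamma, x >= 0; linprog minimizes, so negate the objective.
res = linprog(c=-(lam @ C), A_ub=A, b_ub=D @ gamma,
              bounds=[(0, None)] * A.shape[1], method="highs")
print("x* =", res.x, " payoff =", lam @ C @ res.x)
```

The difficulty addressed in the rest of the paper is precisely that such a point solution is tied to the one (γ, λ) used to compute it.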


Equation (1) can be abstractly viewed as a ``linear program'' with parameters (γ, λ). If an optimal solution exists for equation (1), then there is a basic solution that has m basic variables. This suggests the following heuristic assumptions:

2.1. Assumption 1

(i) The number of selected opportunities, k, in a good system for equation (1) should not exceed the number of resources under consideration, m. (ii) The selected k opportunities should be able to ``optimize'' equation (1) under some possible ranges of (γ, λ).

With Assumption 1, the potential solution concept of MC2 linear programming in Yu (1985) can be used to explore a basic method of designing the optimal system. Given basic variables {x_{j1}, . . ., x_{jm}} for equation (1), the index set of the basic variables is denoted by J = {j1, . . ., jm}. Then, x(J) = {x_{j1}, . . ., x_{jm}}. Note that x(J) may contain slack variables. Without confusion, J is called a basis for equation (1). Let I be an m × m identity matrix and 0 be a q × m zero matrix. Given a basis J with its basic variables x(J), the associated basis matrix B_J is defined as the submatrix of [A, I] with column index in J (i.e. column j of [A, I] is in B_J if and only if j ∈ J), and the associated objective function coefficient C_J as the submatrix of [C, 0] with column index in J.

2.2. Definition 1

Given a basis J for equation (1), the corresponding primal parameter set is Γ1(J) = {γ > 0 | B_J^{-1}Dγ ≥ 0}, and the dual parameter set is Λ1(J) = {λ > 0 | λ^t[C_J B_J^{-1}A − C, C_J B_J^{-1}] ≥ 0}.

2.3. Theorem 1

Given a basis J for equation (1),
(i) J is called a primal potential solution if and only if Γ1(J) ≠ ∅;
(ii) J is called a dual potential solution if and only if Λ1(J) ≠ ∅;
(iii) J is called a potential basis if and only if Γ1(J) ∩ Λ1(J) ≠ ∅;
(iv) the resulting solution x(J, γ) = B_J^{-1}Dγ ≥ 0 if and only if γ ∈ Γ1(J);
(v) the objective payoff of x(J, γ) is P(J; γ, λ) = λ^t C_J B_J^{-1}Dγ when (γ, λ) are specified.

If a PGS is a potential basis, then it satisfies Assumption 1. Conversely, a PGS satisfying Assumption 1 with some (γ, λ) can be represented by a potential basis. Thus, the above MC2-simplex method (Definition 1 and Theorem 1) is readily used to locate all PGS. Opportunities selected in J are those we can potentially choose for the optimal system. Let 𝒥 = {J1, . . ., Jr} be the set of all PGS derived by using the MC2-simplex method for equation (1).

Under the restriction of Assumption 1(i), the number of selected opportunities, k, in a PGS J cannot exceed the number of available resources, m. For some design problems with a large number of possible opportunities, where n far exceeds k, this assumption may rule many opportunities for the optimal system out of consideration. To relax this assumption, we can take unions of subsets of 𝒥. It is possible to generate from the unions some new good systems that can contain more opportunities than any PGS and also have a higher ``payoff'' than any PGS under certain distributions of (γ, λ). Such a system is called a generalized good system (GGS), because it is generated by some PGS. Since 𝒥 can produce 2^r − 1 − r possible unions of PGS, when r is large, checking non-redundant unions for GGS can be prohibitive. To overcome this difficulty, Shi et al. (1994b, c) proposed an efficient algorithm for finding all distinct unions from 𝒥. The following results are summarized from Shi et al. (1996).

Let O be a GGS. Since O is generated from some PGS, a PGS is a special O. The set of all PGS and GGS is denoted by 𝒞. Then 𝒞 represents all possible candidates for the optimal system.
For any given index subset H of opportunity variables and/or slack variables that is not selected as an element of 𝒞, it can be determined whether there is an element of 𝒞, say O₀, such that O₀ is preferred to H with respect to optimality (see Definition 4) for equation (1). If so, all non-selected subsets of variables can be removed from further consideration in designing the optimal system.
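As a concrete illustration of the checks in Definition 1 and Theorem 1, the sketch below tests whether sampled parameter points belong to Γ1(J) and Λ1(J) for one basis. The data and the basis J = (x2, s1) are those of Example 1 in Section 4 (its PGS J1); restricting attention to γ2 = 1 − γ1 and λ2 = 1 − λ1, and the sampled points themselves, are assumptions made for illustration.

```python
# Sketch of the Definition 1 / Theorem 1 membership checks for one basis J.
# [A, I], [C, 0] come from the Example 1 data (Section 4); J = (x2, s1) is its PGS J1.
import numpy as np

C = np.array([[-4.0, -2.0], [1.0, 1.0]])
A = np.array([[1.0, 3.0], [2.0, 1.0]])
D = np.array([[40.0, 30.0], [10.0, 15.0]])
m = A.shape[0]
AI = np.hstack([A, np.eye(m)])                    # [A, I]
C0 = np.hstack([C, np.zeros((C.shape[0], m))])    # [C, 0]

J = [1, 2]                                        # columns of x2 and s1 in [A, I]
B_inv = np.linalg.inv(AI[:, J])
C_J = C0[:, J]

def in_Gamma1(gamma):                             # gamma in Gamma_1(J)?
    return bool(np.all(B_inv @ D @ gamma >= -1e-9))

def in_Lambda1(lam):                              # lambda in Lambda_1(J)?
    reduced = np.hstack([C_J @ B_inv @ A - C, C_J @ B_inv])
    return bool(np.all(lam @ reduced >= -1e-9))

for g1 in (0.3, 0.8):
    for l1 in (0.2, 0.6):
        g, l = np.array([g1, 1 - g1]), np.array([l1, 1 - l1])
        print(f"gamma1={g1}, lambda1={l1}: primal={in_Gamma1(g)}, dual={in_Lambda1(l)}")
```

With these data the checks reproduce the ranges reported for J1 in Table 6 of Section 4: the primal condition holds for γ1 ≥ 3/5 and the dual condition for λ1 ≤ 1/3.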


Let N* = {1, . . ., n + m} be the index set of all possible opportunity variables x_j and slack variables s_i. Note that N* ⊃ N. Suppose that, for given equation (1), 𝒞 has been obtained by taking unions of subsets of 𝒥. Then:

2.4. Definition 2

Given a PGS J of equation (1), its optimal situation set is defined by R(J) = {(γ, λ) > 0 | γ ∈ Γ1(J), λ ∈ Λ1(J)}.

2.5. Definition 3

For any subset H, H ⊆ N*, its optimal situation set is defined by R(H) = ∪ {R(J_k) | J_k ⊆ H}.

2.6. Definition 4

For any subsets H1 and H2, H1, H2 ⊆ N*, an optimality preference with respect to equation (1) is defined by (i) H1 ≻ H2 if and only if R(H1) ⊋ R(H2); and (ii) H1 ∼ H2 if and only if R(H1) = R(H2). By Definition 4, the induced preference ≽ is defined by: H1 ≽ H2 if and only if R(H1) ⊇ R(H2), for any H1, H2 ⊆ N*.

2.7. Theorem 2

Given a subset H as a possible candidate for the optimal system, if H ∉ 𝒞, there is an O_k ∈ 𝒞 such that O_k ≽ H.

2.8. Theorem 3

For any subset H that contains some opportunity j which is not in any J_i ∈ 𝒥, there is an O_k ∈ 𝒞 such that O_k ≽ H.

Both Theorems 2 and 3 guarantee that 𝒞 is the set of all possible candidates for the optimal system. Therefore, the corresponding contingency plans for each element of 𝒞 can be constructed without considering any non-selected subsets.
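The optimal situation sets of Definitions 2-4 can be compared numerically by discretizing the parameter space. The sketch below approximates each R(J) on a grid over (γ1, λ1) (with γ2 = 1 − γ1, λ2 = 1 − λ1) and tests the containment behind the preference ≽; the grid, and writing each candidate subset H directly as the collection of PGS it contains, are simplifying assumptions, and the R(J) ranges used are those of Table 6 in Section 4.

```python
# Sketch of Definitions 2-4: approximate optimal situation sets on a grid and
# compare candidate subsets by containment. Ranges are those of Table 6 (Example 1);
# the grid resolution and the encoding of H as a set of PGS names are assumptions.
import numpy as np

g1, l1 = np.meshgrid(np.linspace(0.01, 0.99, 99), np.linspace(0.01, 0.99, 99))

R = {"J1": (g1 >= 3/5) & (l1 <= 1/3),                  # R(J1)
     "J2": (g1 <= 3/5) & (l1 <= 1/6),                  # R(J2)
     "J3": (g1 <= 3/5) & (l1 >= 1/6) & (l1 <= 1/3)}    # R(J3)

def R_of(H):
    """Definition 3: union of R(J_k) over the PGS J_k contained in H."""
    out = np.zeros_like(g1, dtype=bool)
    for name in H:
        out |= R[name]
    return out

H1, H2 = {"J1", "J2", "J3"}, {"J1"}
print("H1 >= H2 (R(H2) contained in R(H1)):", bool(np.all(~R_of(H2) | R_of(H1))))
print("H2 >= H1:", bool(np.all(~R_of(H1) | R_of(H2))))
```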

3. DUAL FLEXIBLE CONTINGENCY PLANS

Given a GGS O of 𝒞, let L(O) be a basis associated with O. By applying Theorem 1, if there is a λ ∉ Λ1(L(O)), L(O) is not optimal. To correct this non-optimal case, two different approaches can be used. The first one adjusts the contribution of selected opportunities and utilizes selected slack resources in O. This leads to constructing DRCP and has been studied by Shi and He (1994). The second constructs, for flexible scenarios, DFCP that take advantage of all m slack resources, in addition to adjusting the contribution of selected opportunities. Because this approach adds more slack resources into each O, it may shrink the size of 𝒞. The details of finding DFCP for all GGS are as follows.

According to the duality of MC2 linear programming (Yu, 1985), the dual model of equation (1) can be written as


$$\min\; u^t D\gamma \quad \text{s.t.}\quad u^t[A, I] \ge \lambda^t[C, 0],\;\; u \text{ unrestricted}, \tag{2}$$

where u ∈ R^m is an m-dimensional vector.
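For any one fixed (γ, λ), equation (2) is the ordinary LP dual of the specialized primal, so the two optimal values coincide. The sketch below checks this numerically with the Example 1 data (Section 4); the fixed (γ, λ) point is an illustrative assumption.

```python
# Sketch: solve the dual (2) for one fixed (gamma, lambda) and compare its value
# with the primal (1). Data from Example 1 (Section 4); (gamma, lambda) assumed.
import numpy as np
from scipy.optimize import linprog

C = np.array([[-4.0, -2.0], [1.0, 1.0]])
A = np.array([[1.0, 3.0], [2.0, 1.0]])
D = np.array([[40.0, 30.0], [10.0, 15.0]])
m = A.shape[0]
AI = np.hstack([A, np.eye(m)])                   # [A, I]
C0 = np.hstack([C, np.zeros((2, m))])            # [C, 0]

lam, gamma = np.array([0.25, 0.75]), np.array([0.70, 0.30])

# Primal (1): max lam^t C x  s.t.  A x <= D gamma, x >= 0.
primal = linprog(-(lam @ C), A_ub=A, b_ub=D @ gamma,
                 bounds=[(0, None)] * 2, method="highs")

# Dual (2): min u^t D gamma  s.t.  u^t [A, I] >= lam^t [C, 0], u unrestricted.
# The row constraints are rewritten as -[A, I]^t u <= -[C, 0]^t lam for linprog.
dual = linprog(D @ gamma, A_ub=-AI.T, b_ub=-(C0.T @ lam),
               bounds=[(None, None)] * m, method="highs")

print("primal value:", -primal.fun, " dual value:", dual.fun)
```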

3.1. Definition 5

(i) For each γ ∈ Γ1(J), the basic solution x(J, γ) = B_J^{-1}Dγ is a (P)-feasible solution. (ii) For each λ ∈ Λ1(J), u^t(J, λ) = λ^t C_J B_J^{-1} is a (D)-feasible solution.

Given a PGS J, u^t(J, λ) = λ^t C_J B_J^{-1} of (ii) in Definition 5 can be viewed as the minimal unit implicit value of the resources Dγ, commonly known as the shadow price of Dγ. From Theorem 1, Λ1(J) ≠ ∅ is the optimality condition for PGS J. That is, the feasible solution x(J, γ) = B_J^{-1}Dγ with γ ∈ Γ1(J) is optimal if λ ∈ Λ1(J).

Given a GGS O of 𝒞, the variables (x, s) are decomposed into x(O), x(O') and x(O°), where x(O) are the selected variables corresponding to O, x(O') are non-selected slack variables, and x(O°) are non-selected opportunity variables (note that N* = O ∪ O' ∪ O°). Then, [A, I] is decomposed into [A_O, A_O', A_O°], where A_O is the submatrix of [A, I] associated with O, A_O' is the submatrix of [A, I] associated with O', and A_O° is the submatrix of [A, I] associated with O°. Similarly, [C, 0] is decomposed into [C_O, C_O', C_O°], where C_O' is a zero matrix with proper dimensions. If O is chosen, we use O* = O ∪ O' to build the following primal model:

$$\max\; \lambda^t C_O x(O) \quad \text{s.t.}\quad A_O x(O) + A_{O'} x(O') = D\gamma,\;\; x(O), x(O') \ge 0, \tag{3}$$

where C_O, A_O, A_O' and D are known and (γ, λ) are presumed. To construct the DFCP for O*, we solve the dual model of equation (3):

$$\min\; u^t D\gamma \quad \text{s.t.}\quad u^t[A_O, A_{O'}] \ge \lambda^t[C_O, 0],\;\; u \text{ unrestricted}. \tag{4}$$

Note that equation (4) is a submodel of equation (2), since equation (4) can be obtained by deleting the columns of A_O° and C_O° from equation (2). Let c be the number of variables in x(O) and p be the number of variables in x(O') (note that c + p ≥ m). Then n + m − c − p is the number of variables in x(O°). Let u(O) be the dual surplus variables complementary to x(O) and u(O') be the dual surplus variables complementary to x(O'). By adding u(O) and u(O'), equation (4) becomes:

$$\min\; \gamma^t D^t u \quad \text{s.t.}\quad \begin{pmatrix} A_O^t \\ A_{O'}^t \end{pmatrix} u - \begin{pmatrix} I_c \\ 0 \end{pmatrix} u(O) - \begin{pmatrix} 0 \\ I_p \end{pmatrix} u(O') = \begin{pmatrix} C_O^t \\ 0 \end{pmatrix} \lambda,\;\; u \text{ unrestricted},\;\; u(O), u(O') \ge 0.$$

The initial tableau of this model is given in Table 1.

Table 1.

u            u(O)     u(O')    RHS
A_O^t        -I_c     0        C_O^t λ
A_O'^t       0        -I_p     0
γ^t D^t      0        0        0

Given a basis L(O*) in equation (3), B_L(O*) is defined as the basis matrix, a submatrix of [A_O, A_O'] with column index in L(O*). Also, C_L(O*) is defined as the objective coefficients, a submatrix of [C_O, 0]. For the basic variables x(L(O*)), we denote the corresponding non-basic variables by x(L'(O*)). Let R_L(O*) be the submatrix of [A_O, A_O'] and C'_L(O*) be the submatrix of [C_O, 0] associated with x(L'(O*)). Note that x(L(O*)) ∈ R^m and x(L'(O*)) ∈ R^{c+p−m}. Then, (u(O), u(O')) is decomposed into (u^1(O*), u^2(O*)), where u^1(O*) are the dual surplus variables complementary to x(L(O*)), and u^2(O*) are the dual surplus variables complementary to x(L'(O*)). The initial tableau can be rewritten as in Table 2.

Table 2.

u            u^2(O*)       u^1(O*)    RHS
B_L(O*)^t    0             -I_m       C_L(O*)^t λ
R_L(O*)^t    -I_{c+p-m}    0          C'_L(O*)^t λ
γ^t D^t      0             0          0

The dual basis matrix of Table 2, corresponding to the primal basis matrix B_L(O*), can be written as

$$B_L^* = \begin{pmatrix} B_{L(O^*)}^t & 0 \\ R_{L(O^*)}^t & -I_{c+p-m} \end{pmatrix}$$

and its inverse matrix is

$$(B_L^*)^{-1} = \begin{pmatrix} (B_{L(O^*)}^t)^{-1} & 0 \\ R_{L(O^*)}^t (B_{L(O^*)}^t)^{-1} & -I_{c+p-m} \end{pmatrix}.$$

By operating (B*_L)^{-1} on Table 2, Table 3 results, where block 2 of Table 3 is equal to (B*_L)^{-1} times block 2 of Table 2, and block 3 of Table 3 is equal to [−γ^t D^t, 0] times the resulting block 2 plus block 3 of Table 2.

Table 3.

u      u^2(O*)       u^1(O*)                           RHS
I_m    0             -(B_L(O*)^t)^{-1}                 (C_L(O*) B_L(O*)^{-1})^t λ
0      I_{c+p-m}     -R_L(O*)^t (B_L(O*)^t)^{-1}       (C_L(O*) B_L(O*)^{-1} R_L(O*) - C'_L(O*))^t λ
0      0             γ^t (B_L(O*)^{-1} D)^t            -γ^t (C_L(O*) B_L(O*)^{-1} D)^t λ
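The block form of (B*_L)^{-1} can be verified numerically. The sketch below builds B*_L from two small blocks and checks the displayed inverse; the particular blocks are random placeholders, an assumption made purely for illustration.

```python
# Sketch: numerical check of the block inverse of the dual basis matrix B*_L.
# B plays the role of B_L(O*) and R the role of R_L(O*)^t (shape (c+p-m) x m);
# both are arbitrary small matrices chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
m, k = 3, 2                                   # k stands for c + p - m
B = rng.normal(size=(m, m)) + 3 * np.eye(m)   # nonsingular basis block
R = rng.normal(size=(k, m))                   # nonbasic block (already transposed)

B_star = np.block([[B.T,              np.zeros((m, k))],
                   [R,                -np.eye(k)      ]])
B_star_inv = np.block([[np.linalg.inv(B.T),     np.zeros((m, k))],
                       [R @ np.linalg.inv(B.T), -np.eye(k)      ]])

print("inverse correct:", np.allclose(B_star @ B_star_inv, np.eye(m + k)))
```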

3.2. Definition 6

Given a basis U(O*) for equation (4), the corresponding primal parameter set is

Λ2(U(O*)) = {λ > 0 | λ^t[C_L(O*) B_L(O*)^{-1}, C_L(O*) B_L(O*)^{-1} R_L(O*) − C'_L(O*)] ≥ 0},

and the dual parameter set is

Γ2(U(O*)) = {γ > 0 | (B_L(O*)^{-1} D)γ ≥ 0}.

The basis U(O*) for equation (4) is complementary to the basis L(O*) for equation (3). For a given O of 𝒞, 𝒰 = {U1(O*), . . ., Uk(O*)} denotes the set of all potential solutions obtained from equation (4) using the MC2-simplex method. If for every (γ, λ) there is a Ui(O*) such that γ ∈ Γ2(Ui(O*)) and λ ∈ Λ2(Ui(O*)), then 𝒰 is the set of all DFCP selected by equation (4). Otherwise, if there is some λ that belongs to no Λ2(Ui(O*)), equation (4) has no feasible basic solution for that λ; in other words, equation (3) has no optimal solution for that λ. The following method can overcome this case.

Given L(O*) for equation (3), let β^1(O*) = (β_1, . . ., β_m)^t be the vector of increments of the primal basic variables x(L(O*)), and β^2(O*) = (β_{m+1}, . . ., β_{c+p})^t be the vector of increments of the primal non-basic variables x(L'(O*)). Let the unit contribution of β^1(O*) be w^1(O*)^t = (w_1, . . ., w_m) and that of β^2(O*) be w^2(O*)^t = (w_{m+1}, . . ., w_{c+p}). Then, by adding w^1(O*)^t and w^2(O*)^t to the right-hand side of equation (4), it becomes

$$\min\; u^t D\gamma - w^1(O^*)^t\beta^1(O^*) - w^2(O^*)^t\beta^2(O^*) \quad \text{s.t.}\quad u^t[A_O, A_{O'}] \ge \lambda^t[C_O, 0] + w^1(O^*)^t + w^2(O^*)^t,\;\; u \text{ unrestricted},\;\; w^1(O^*)^t, w^2(O^*)^t \ge 0. \tag{5}$$

Similarly to equation (4), the initial tableau of equation (5) can be written as Table 4.


Table 4.

u            u^2(O*)       u^1(O*)    w^2(O*)       w^1(O*)       RHS
B_L(O*)^t    0             -I_m       0             -I_m          C_L(O*)^t λ
R_L(O*)^t    -I_{c+p-m}    0          -I_{c+p-m}    0             C'_L(O*)^t λ
γ^t D^t      0             0          -β^2(O*)^t    -β^1(O*)^t    0

Let Λ3(U(O*)) be the primal parameter set and Γ3(U(O*)) the dual parameter set of a basis U(O*) for equation (5). Then the set of all potential solutions obtained from equation (5) gives the DFCP selected from equation (5). By adding w^1(O*)^t and w^2(O*)^t, equation (5) always results in feasible solutions. Therefore, using either equation (4) or (5), there is always a DFCP for a GGS O under all possible situations of (γ, λ).

Observe that if equations (4) and (5) are used to construct DFCP for the GGS O of 𝒞 = {O1, . . ., Oq}, it may be necessary to build 2q submodels. This is certainly not effective. The following section discusses an effective and systematic procedure for finding all DFCP for each O of 𝒞.

4. AN EFFECTIVE AND SYSTEMATIC PROCEDURE

In order to derive an effective and systematic procedure for finding all DFCP for each O of 𝒞, relationships between equations (2), (4) and (5) must be explored through the following dual augmented model:

$$\min\; u^t D\gamma - w^t\beta \quad \text{s.t.}\quad u^t[A, I] \ge \lambda^t[C, 0] + w^t,\;\; u \text{ unrestricted},\;\; w^t \ge 0, \tag{6}$$

where w ∈ R^{n+m} is an (n + m)-dimensional vector and β ∈ R^{n+m} is an (n + m)-dimensional parameter. When the surplus variables u° ∈ R^{n+m} are added to equation (6), it can be rewritten as

$$\min\; \gamma^t D^t u - \beta^t w \quad \text{s.t.}\quad \begin{pmatrix} A^t \\ I_m \end{pmatrix} u - I_{n+m}\, u^{\circ} - I_{n+m}\, w = \begin{pmatrix} C^t \\ 0 \end{pmatrix}\lambda,\;\; u \text{ unrestricted},\;\; u^{\circ}, w \ge 0.$$

Given a GGS O of 𝒞, as discussed in Section 3, the variables (x, s) can be decomposed into x(O), x(O') and x(O°); [A, I] into [A_O, A_O', A_O°]; and [C, 0] into [C_O, C_O', C_O°]. Then u° can be partitioned into (u(O), u(O'), u(O°)); w into (w^1(O*), w^2(O*), w(O°)); and β into (β^1(O*), β^2(O*), β(O°)). Here, w(O°) has the n + m − c − p components of w associated with β(O°). Then, the initial tableau of equation (6) is written as Table 5.

Table 5.

u            u^2(O*)       u^1(O*)    u(O°)            w^2(O*)       w^1(O*)       w(O°)            RHS
B_L(O*)^t    0             -I_m       0                0             -I_m          0                C_L(O*)^t λ
R_L(O*)^t    -I_{c+p-m}    0          0                -I_{c+p-m}    0             0                C'_L(O*)^t λ
A_O°^t       0             0          -I_{n+m-c-p}     0             0             -I_{n+m-c-p}     C_O°^t λ
γ^t D^t      0             0          0                -β^2(O*)^t    -β^1(O*)^t    -β(O°)^t         0

When comparing equations (4) and (5) with equation (6), it can be seen that both equations (4) and (5) are submodels of equation (6). Equation (4) can be obtained by deleting the columns of u(O°), w^1(O*), w^2(O*) and w(O°) and the submatrices A_O°^t and C_O°^t from equation (6), while equation (5) can be obtained by deleting the columns of u(O°) and w(O°) and the submatrices A_O°^t and C_O°^t from equation (6). For preciseness, the relationship between equations (4) and (6) is described as follows.


Let U(O*) be a basis with u(U), u(U(O)) and u(U(O')) as the corresponding basic variables in u, u(O) and u(O'), respectively. Let W(O*) be another basis that contains u(U), u(U(O)) and u(U(O')) as basic variables. Let the primal parameter set of W(O*) for equation (6) be Λ4(W(O*)) and the dual parameter set of W(O*) for equation (6) be Γ4(W(O*), β). This leads to Theorem 4.

4.1. Theorem 4

(i) If U(O*) is a basis for equation (4), then it must be contained in a basis W(O*) for equation (6). Conversely, if W(O*) is a basis for equation (6) and contains U(O*) for equation (4), then W(O*) can be reduced to U(O*) by deleting the columns of u(O°), w^1(O*), w^2(O*) and w(O°) from equation (6).
(ii) The basic feasible solution of equation (4) for a given λ, (u(U), u(U(O)), u(U(O'))), can be obtained by further deleting the submatrices A_O°^t and C_O°^t from Table 5.
(iii) Λ2(U(O*)) ⊇ Λ4(W(O*)).
(iv) Γ2(U(O*)) ⊇ Γ4(W(O*), β), β ∈ R^{n+m}.

To show the relationship between equations (5) and (6), let V(O*) be a basis with u(V), u(V(O)), u(V(O')), w^1(V(O*)) and w^2(V(O*)) as the corresponding basic variables in u, u(O), u(O'), w^1(O*) and w^2(O*), respectively. Let Z(O*) be another basis that contains u(V), u(V(O)), u(V(O')), w^1(V(O*)) and w^2(V(O*)) as basic variables. Let Λ4(Z(O*)) and Γ4(Z(O*), β) be the primal parameter set and the dual parameter set of Z(O*), respectively, for equation (6). Then, the following results are given:

4.2. Theorem 5

(i) If V(O*) is a basis for equation (5), then it must be contained in a basis Z(O*) for equation (6). Conversely, if Z(O*) is a basis for equation (6) and contains V(O*) for equation (5), then Z(O*) can be reduced to V(O*) by deleting the columns of u(O°) and w(O°) from equation (6).
(ii) The basic feasible solution of equation (5) for a given λ, (u(V), u(V(O)), u(V(O')), w^1(V(O*)), w^2(V(O*))), can be obtained by further deleting the submatrices A_O°^t and C_O°^t from Table 5.
(iii) Λ3(V(O*)) ⊇ Λ4(Z(O*)).
(iv) Γ3(V(O*), β) ⊇ Γ4(Z(O*), β), β ∈ R^{n+m}.

According to these results, an effective and systematic procedure to locate all DFCP for all GGS by using the MC2-simplex method is proposed as follows.

4.3. Procedure 1

Step 1. Use the MC2-simplex method to find a set of PGS, 𝒥 = {J1, . . ., Jr}, of equation (1).
Step 2. Apply the efficient algorithm of Shi et al. (1994b, c), if necessary, to generate a set of GGS, 𝒞 = {O1, . . ., Oq}, from 𝒥 = {J1, . . ., Jr}. By adding O' for each O, reduce 𝒞 to 𝒞* = {O*1, . . ., O*g}.


Step 3. Find all possible bases of equation (6) and denote this set by 𝒲 = {W1, . . ., Wz}.
Step 4. Use Theorem 4 to identify the set of DFCP of equation (4) for each GGS O* from 𝒲. If they are all the DFCP, go to Step 6; otherwise, go to Step 5.
Step 5. Use Theorem 5 to identify the set of DFCP of equation (5) for each GGS O* from 𝒲.
Step 6. Use a well-known technique of decision making under uncertainty, such as ``maximizing expected payoff'', ``minimizing the variance of the payoff'', ``maximin payoff'' or ``maximizing the probability of achieving a targeted payoff'' (Keeney and Raiffa, 1976; Shi, 1991; Yu, 1985), to select the ``best'' system from the set of GGS and their DFCP as the final decision.
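The selection rule in Step 6 is easy to prototype once each candidate's contingency plans and their parameter regions are known. The sketch below scores candidates by expected payoff over a grid of (γ1, λ1) values; the two candidate plans, their payoff matrices and the bi-uniform distribution over the unit square are hypothetical placeholders, not data from the paper.

```python
# Sketch of the Step 6 selection rule "maximizing expected payoff".
# Each candidate system is a list of (covers(g1, l1), 2x2 payoff matrix M) pairs;
# the payoff at a parameter point is (g1, g2) M (l1, l2)^t for the plan covering it.
# The candidates below are hypothetical placeholders; a uniform distribution is assumed.
import numpy as np

def expected_payoff(plans, n=200):
    grid = np.linspace(0.0, 1.0, n)
    total, count = 0.0, 0
    for g1 in grid:
        for l1 in grid:
            for covers, M in plans:
                if covers(g1, l1):
                    total += np.array([g1, 1 - g1]) @ M @ np.array([l1, 1 - l1])
                    count += 1
                    break
    return total / count

candidate_A = [(lambda g1, l1: l1 <= 1/3, np.array([[5.0, 1.0], [2.0, 0.0]])),
               (lambda g1, l1: l1 >  1/3, np.array([[1.0, 0.0], [0.0, 1.0]]))]
candidate_B = [(lambda g1, l1: True,      np.array([[3.0, 2.0], [2.0, 3.0]]))]

print("EV(A) =", round(expected_payoff(candidate_A), 3),
      " EV(B) =", round(expected_payoff(candidate_B), 3))
```

Other criteria named in Step 6 (maximin payoff, variance of payoff) can be scored on the same grid by replacing the accumulation rule.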

The following example, adopted from Shi and He (1994), is used to illustrate this procedure for designing the optimal system.

4.4. Example 1

Suppose a furniture firm produces chairs and tables. There are two types of resources used in the production process: the first is material and the second is labor hours. Each chair consumes one unit of material, while each table needs three units of material. However, while a chair needs 2 hr to be made, a table needs only 1 hr. The resource availability is subject to future economic changes. When the economy is booming, the firm has 40 units of material and 10 hr of labor. Alternatively, if the economy is in recession, the firm has 30 units of material and 15 hr of labor. Marketing expenditure for the firm is U.S.$4 per chair and U.S.$2 per table. How can the firm design an optimal system with DFCP if the firm wants to minimize the total cost of marketing expenditure and maximize the production capacity?

Because the minimization of marketing expenditure can be treated as a negative maximization problem, the design problem is formulated as the following model with two criteria and two constraint levels for two possible opportunities:

$$\max\; (\lambda_1, \lambda_2)\begin{pmatrix} -4 & -2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \quad \text{s.t.}\quad \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \le \begin{pmatrix} 40 & 30 \\ 10 & 15 \end{pmatrix}\begin{pmatrix} \gamma_1 \\ \gamma_2 \end{pmatrix},\;\; x_j \ge 0,\; j = 1, 2.$$

Let si, i = 1, 2, be slack variables for constraints 1 and 2, respectively. Then Procedure 1 is conducted as follows.

4.4.1. Step 1. Using the computer software of the MC2-simplex method (Chien et al., 1989), the set of all PGS, denoted by 𝒥 = {J1, J2, J3}, is found in Table 6.

Table 6. Potentially good systems

Potentially good systems    Basic variables    Γ1(Ji)             Λ1(Ji)
J1                          (x2, s1)           3/5 ≤ γ1 ≤ 1       0 ≤ λ1 ≤ 1/3
J2                          (x1, x2)           0 ≤ γ1 ≤ 3/5       0 ≤ λ1 ≤ 1/6
J3                          (x2, s2)           0 ≤ γ1 ≤ 3/5       1/6 ≤ λ1 ≤ 1/3

Here, J1 has (x2, s1) as the basic variables. If J1 is chosen, then x2 tables will be produced while s1 is the amount of unused material. Assuming λ1 + λ2 = 1 and γ1 + γ2 = 1, J1 is optimal whenever 3/5 ≤ γ1 ≤ 1 and 0 ≤ λ1 ≤ 1/3. However, if 0 ≤ γ1 < 3/5, J1 becomes infeasible, and if 1/3 < λ1 ≤ 1, J1 is not optimal. The meanings of J2 and J3 can be similarly explained. When 1/3 < λ1 ≤ 1, the design model has no optimal solutions. Thus, there is a need to construct the corresponding dual contingency plans for each PGS J.
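The Γ1 and Λ1 ranges in Table 6 can be re-derived directly from Definition 1. The sketch below scans γ1 and λ1 on a fine grid (with γ2 = 1 − γ1 and λ2 = 1 − λ1) and reports, for each of the three bases, the interval on which the primal and dual conditions hold; the grid scan is only a stand-in for the MC2-simplex software of Chien et al. (1989).

```python
# Sketch: recover the Gamma_1 and Lambda_1 ranges of Table 6 by a grid scan.
# A brute-force scan stands in for the MC2-simplex software used in the paper.
import numpy as np

C = np.array([[-4.0, -2.0], [1.0, 1.0]])
A = np.array([[1.0, 3.0], [2.0, 1.0]])
D = np.array([[40.0, 30.0], [10.0, 15.0]])
AI = np.hstack([A, np.eye(2)])                  # [A, I]; columns: x1, x2, s1, s2
C0 = np.hstack([C, np.zeros((2, 2))])           # [C, 0]
names = ["x1", "x2", "s1", "s2"]

bases = {"J1": [1, 2], "J2": [0, 1], "J3": [1, 3]}   # column indices into [A, I]
grid = np.linspace(0.001, 0.999, 999)

for label, J in bases.items():
    B_inv = np.linalg.inv(AI[:, J])
    C_J = C0[:, J]
    reduced = np.hstack([C_J @ B_inv @ A - C, C_J @ B_inv])
    g_ok = [g for g in grid if np.all(B_inv @ D @ np.array([g, 1 - g]) >= -1e-9)]
    l_ok = [l for l in grid if np.all(np.array([l, 1 - l]) @ reduced >= -1e-9)]
    print(f"{label} {[names[j] for j in J]}: gamma1 in [{min(g_ok):.2f}, {max(g_ok):.2f}],"
          f" lambda1 in [{min(l_ok):.2f}, {max(l_ok):.2f}]")
```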

Table 7. Generalized good systems

Generalized good systems    Basic variables
O1                          (x1, x2, s1)
O2                          (x2, s1, s2)
O3                          (x1, x2, s2)
O4                          (x1, x2, s1, s2)
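Table 7 can be reproduced mechanically by taking unions of the PGS basic-variable sets in Table 6, as in the sketch below; brute-force enumeration over all combinations is used here in place of the union algorithm of Shi et al. (1994b, c), and the printed ordering is arbitrary.

```python
# Sketch: generate the GGS of Table 7 as distinct unions of the PGS of Table 6.
from itertools import combinations

pgs = {"J1": frozenset({"x2", "s1"}),
       "J2": frozenset({"x1", "x2"}),
       "J3": frozenset({"x2", "s2"})}

unions = set()
for k in range(2, len(pgs) + 1):
    for combo in combinations(pgs.values(), k):
        unions.add(frozenset().union(*combo))

for i, ggs in enumerate(sorted(unions, key=lambda s: (len(s), sorted(s))), start=1):
    print(f"GGS {i}:", sorted(ggs))
```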


Table 8. Dual flexible contingency plans for O*1 of equation (4)

Wj(O*1)                  Γ2(Wj(O*1))       Λ2(Wj(O*1))      Payoff P(Wj(O*1))
u(W20) = (u1, u2, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t
u(W21) = (u1, u2, u5)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W50) = (u1, u5, u6)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W52) = (u2, u5, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t

(In Tables 8-11, each payoff entry is the bilinear form (γ1, γ2) M (λ1, λ2)^t, with the 2 × 2 matrix M written row-wise as [row 1; row 2].)

4.4.2. Step 2. A set of GGS, denoted by 𝒞 = {O1, O2, O3, O4}, is generated from 𝒥 (see Table 7). By adding O' for each O, 𝒞 becomes 𝒞* = {O*1, O*2}, where O*1 = (x2, s1, s2) and O*2 = (x1, x2, s1, s2).

4.4.3. Step 3. In order to effectively construct DFCP for O*i, i = 1, 2, the corresponding dual augmented model is built:

$$\min\; (\gamma_1, \gamma_2)\begin{pmatrix} 40 & 10 \\ 30 & 15 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - (\beta_1, \beta_2, \beta_3, \beta_4)\begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix}$$

$$\text{s.t.}\quad \begin{pmatrix} 1 & 2 \\ 3 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} \ge \begin{pmatrix} -4 & 1 \\ -2 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix},$$

u_j, j = 1, 2, unrestricted, and w_i ≥ 0, i = 1, 2, 3, 4.
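For one fixed parameter point, the model just displayed is an ordinary LP and can be solved directly, as in the sketch below; β_i = 2 follows Step 5, while the particular (γ1, λ1) point (with λ1 > 1/3, where no PGS is optimal) is an illustrative assumption. The MC2-simplex software used in the paper solves the model parametrically over all (γ, λ); this sketch only checks a single point.

```python
# Sketch: assemble the dual augmented model of Step 3 and solve it for one fixed
# (gamma, lambda) with beta_i = 2. The chosen parameter point is an assumption.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 3.0], [2.0, 1.0]])
C = np.array([[-4.0, -2.0], [1.0, 1.0]])
D = np.array([[40.0, 30.0], [10.0, 15.0]])
AI = np.hstack([A, np.eye(2)])              # [A, I]
C0 = np.hstack([C, np.zeros((2, 2))])       # [C, 0]
beta = np.full(4, 2.0)

gamma = np.array([0.5, 0.5])
lam = np.array([0.6, 0.4])                  # lambda1 > 1/3: no PGS is optimal here

# Variables z = (u1, u2, w1, w2, w3, w4):
#   min (D gamma)^t u - beta^t w   s.t.  [A, I]^t u - I w >= [C, 0]^t lambda,
#   u unrestricted, w >= 0.  The >= rows are flipped to <= for linprog.
obj = np.concatenate([D @ gamma, -beta])
A_ub = np.hstack([-AI.T, np.eye(4)])
b_ub = -(C0.T @ lam)
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * 4, method="highs")
print("u =", res.x[:2], " w =", res.x[2:], " objective =", res.fun)
```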

Using Chien et al. (1989), all 89 bases, denoted by 𝒲 = {W1, . . ., W89}, are found.

4.4.4. Step 4. Theorem 4 is used to identify the set of DFCP of equation (4) for GGS O*1 and O*2 from 𝒲, as shown in Tables 8 and 9, respectively. Because equation (4) has no optimal solution when 1/3 < λ1 ≤ 1, the DFCP in Tables 8 and 9 are not all the relevant DFCP. Thus, we have to go to Step 5.

Table 9. Dual flexible contingency plans for O*2 of equation (4)

Wj(O*2)                      Γ2(Wj(O*2))       Λ2(Wj(O*2))      Payoff P(Wj(O*2))
u(W3) = (u1, u2, u5, u6)     0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/6     (γ1, γ2) [-20 12; -30 12] (λ1, λ2)^t
u(W20) = (u1, u2, u3, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t
u(W21) = (u1, u2, u3, u5)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W50) = (u1, u3, u5, u6)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W52) = (u2, u3, u5, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t


Table 10. Dual flexible contingency plans for O*1 of equation (5) with βi = 2

Wj(O*1)                   Γ3(Wj(O*1))       Λ3(Wj(O*1))      Payoff P(Wj(O*1))
u(W13) = (u1, u2, w2)     0 ≤ γ1 ≤ 1        1/3 ≤ λ1 ≤ 1     (γ1, γ2) [-4 2; -4 2] (λ1, λ2)^t
u(W59) = (u2, w2, w3)     0 ≤ γ1 ≤ 1        1/3 ≤ λ1 ≤ 1     (γ1, γ2) [-4 2; -4 2] (λ1, λ2)^t
u(W20) = (u1, u2, u6)     3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t
u(W21) = (u1, u2, u5)     0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W50) = (u1, u5, u6)     0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W52) = (u2, u5, u6)     3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t

4.4.5. Step 5. Theorem 5 is used to identify the set of DFCP of equation (5) for GGS O*1 and O*2 from 𝒲, as listed in Tables 10 and 11, respectively. Here, assume β = (2, 2, 2)^t for illustrative purposes, since Γ3(Wj(O*), β) is a function of β. From Table 10, in terms of the ranges of (γ1, λ1), GGS O*1 has eight alternative DFCP sets: {W13, W21, W20}, {W13, W21, W52}, {W13, W50, W20}, {W13, W50, W52}, {W59, W21, W20}, {W59, W21, W52}, {W59, W50, W20} and {W59, W50, W52}. Similarly, the DFCP sets for O*2 can be found in Table 11.

4.4.6. Step 6. Suppose ``maximizing expected payoff'' is used as the criterion to select the final optimal system from 𝒞* = {O*1, O*2} and the DFCP. In order to show the optimal system, we assume that γ1 is independent of λ1 and that (γ1, λ1) have the bi-uniform probability distribution F(γ1, λ1). Then the expected payoff, EV(O*1), with respect to the DFCP set {W13, W21, W20}, can be expressed as

$$EV(O_1^*) = \int_{R_1} P(W_{13}(O_1^*))\, dF(\gamma_1, \lambda_1) + \int_{R_2} P(W_{21}(O_1^*))\, dF(\gamma_1, \lambda_1) + \int_{R_3} P(W_{20}(O_1^*))\, dF(\gamma_1, \lambda_1),$$

where P(W13(O*1)), P(W21(O*1)), P(W20(O*1)) and the (γ1, λ1) ranges Rj, j = 1, 2, 3, are from Table 10. The computed result is EV(O*1) = 9.0. Other DFCP sets for O*1 yield the same result. Similarly, EV(O*2) = 10.3. Thus, in terms of ``maximizing expected payoff'', O*2 = (x1, x2, s1, s2) is the ``best'' (optimal) system.

Table 11. Dual flexible contingency plans for O*2 of equation (5) with βi = 2

Wj(O*2)                      Γ3(Wj(O*2))       Λ3(Wj(O*2))      Payoff P(Wj(O*2))
u(W5) = (u1, u2, w1, w2)     0 ≤ γ1 ≤ 1        1/3 ≤ λ1 ≤ 1     (γ1, γ2) [-12 4; -12 4] (λ1, λ2)^t
u(W77) = (w1, w2, w3, w4)    0 ≤ γ1 ≤ 1        1/3 ≤ λ1 ≤ 1     (γ1, γ2) [-12 4; -12 4] (λ1, λ2)^t
u(W3) = (u1, u2, u5, u6)     0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/6     (γ1, γ2) [-20 12; -30 12] (λ1, λ2)^t
u(W20) = (u1, u2, u3, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t
u(W21) = (u1, u2, u3, u5)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W50) = (u1, u3, u5, u6)    0 ≤ γ1 ≤ 3/5      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-26.7 13.3; -20 10] (λ1, λ2)^t
u(W52) = (u2, u3, u5, u6)    3/5 ≤ γ1 ≤ 1      0 ≤ λ1 ≤ 1/3     (γ1, γ2) [-20 10; -30 15] (λ1, λ2)^t


To understand the meaning of the DFCP sets of O*2, we consider the DFCP set {W5, W20, W21} from Table 11. Note that:

(i) when 0 ≤ γ1 ≤ 1 and 1/3 ≤ λ1 ≤ 1, u(W5) = (u1, u2, w1, w2) can be used to produce x1 + 2 chairs and x2 + 2 tables. Here, w1 and w2 are the unit contributions of the two extra chairs and the two extra tables, respectively;
(ii) when 3/5 ≤ γ1 ≤ 1 and 0 ≤ λ1 ≤ 1/3, u(W20) = (u1, u2, u3, u6) can be used to produce x1 chairs and x2 tables; and
(iii) when 0 ≤ γ1 ≤ 3/5 and 0 ≤ λ1 ≤ 1/3, u(W21) = (u1, u2, u3, u5) can be used to produce x1 chairs and x2 tables.

No matter what the values of (γ1, λ1) are, one of the above cases guarantees that the optimal production will be achieved and both slack resources (material and labor hours) will be effectively utilized.
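The production quantities behind each plan follow from Theorem 1: for a γ covered by a plan, the complementary primal basis J yields x(J, γ) = B_J^{-1}Dγ. The sketch below evaluates this for two sample γ1 values, one for each of the low-λ1 cases above; the sample values are assumptions.

```python
# Sketch: production implied by the plans of cases (ii) and (iii) above, via
# x(J, gamma) = B_J^{-1} D gamma for the complementary primal bases (x2, s1) and (x2, s2).
# The two sample gamma1 values are illustrative assumptions.
import numpy as np

A = np.array([[1.0, 3.0], [2.0, 1.0]])
D = np.array([[40.0, 30.0], [10.0, 15.0]])
AI = np.hstack([A, np.eye(2)])                 # columns: x1, x2, s1, s2

cases = {"case (ii),  gamma1 = 0.8": ([1, 2], 0.8),   # basis (x2, s1)
         "case (iii), gamma1 = 0.4": ([1, 3], 0.4)}   # basis (x2, s2)
names = ["x1", "x2", "s1", "s2"]

for label, (J, g1) in cases.items():
    gamma = np.array([g1, 1 - g1])
    xJ = np.linalg.solve(AI[:, J], D @ gamma)   # B_J^{-1} D gamma
    plan = ", ".join(f"{names[j]} = {v:.2f}" for j, v in zip(J, xJ))
    print(f"{label}: {plan}")
```

For γ1 = 0.8 this gives x2 = 11 tables with s1 = 5 units of material unused; for γ1 = 0.4 it gives x2 ≈ 11.33 tables with s2 ≈ 1.67 labor hours unused.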

5. CONCLUSIONS

This paper has described a new approach to designing optimal systems that involves finding dual flexible contingency plans for generalized good systems. In this approach, generalized good systems go beyond the assumption of selecting opportunities for a potentially good system, while dual flexible contingency plans utilize all possible slack resources and convert non-optimal solutions into optimal ones. This approach enriches the theoretical framework of optimal system designs. The flexibility feature of designing optimal systems based on multi-criteria and multi-constraint-level (MC2) programming has great potential for applications to many real-world problems.

In business accounting, a transfer pricing problem occurs when a firm processes raw materials into intermediate or final products from one division to another division. This problem involves multiple objectives, such as maximizing the overall company's profit, maximizing the market share goal of products in each division, and maximizing the utilized production capacity of the company. Preferences of divisional managers can be represented by multiple constraint levels (Shi et al., 1994a). Similarly, equation (1) can be used to formulate a capital budgeting problem that not only incorporates multiple criteria over future periods of investment, but also allows for multiple (a group of) decision makers in the decision process (Kwak et al., 1996). Both problems can be solved by constructing different contingency plans. Production planning can be formulated by linear programming where production periods consisting of inventory, regular and overtime production rates are treated as ``resources'', and the corresponding demand periods are ``destinations''. When this problem has an MC2 feature, it can be formulated and solved by the MC2 transportation model of Shi (1995b), a special case of equation (1) (Shi and Haase, 1996). Equation (1) can also be used to select telecommunication network systems (Lee et al., 1994a). Given a set of candidate cities for a telecommunication network, some subsets of candidate cities can be selected as hub city designs for the telecommunication system. The selected hub city design (a subset of candidate cities) can maximize multiple criteria, such as population, economy, education, health care and transportation, subject to the location constraints with multiple resource availability levels. The integer format of equation (1) developed by Shi and Lee (1996) can be used to find optimal hub city designs that reflect policy (decision) makers' goal-seeking and compromise behavior. Similarly, equation (1) can be applied to the problem of allocating data files over a wide area network (Lee et al., 1994b).

Designing optimal systems is a challenge because the model of the problems involves several decision parameters. This opens a wide range of research fields to the optimization community. In this field, we may build bridges between the MC2 framework and various popular techniques. For example, since the optimal system design models involve the parameters γ, λ, α and β, these can be assumed to be random variables with given probability distributions before the selection time. Then, the stochastic solutions for equation (1) can be used as potentially good systems (Kall and Wallace, 1994). It may also be possible to solve equation (1) via neural network techniques (Mangasarian, 1993). The resulting solutions, if well-defined, can also serve as


the candidates for the optimal system. Significant results from these on-going research problems will be reported in the near future.

Acknowledgements: This study has been partially supported by a University Research Fellowship award 1994-96 from the University of Nebraska at Omaha. The authors are grateful to Justin D. Stolen, Wikil Kwak and Betty Saunders for their careful reading and constructive comments on this paper.

REFERENCES

Charnes, A. and Cooper, W. W. (1961) Management Models and Industrial Applications of Linear Programming. Wiley, New York.
Chien, I. S., Shi, Y. and Yu, P. L. (1989) MC2 program: A Pascal program run on PC or VAX (revised version). School of Business, University of Kansas, Kansas, U.S.A.
Churchman, C. W. (1968) The Systems Approach. Delacorte Press, New York.
Dantzig, G. B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton, NJ.
Gal, T. (1979) Postoptimal Analysis, Parametric Programming and Related Topics. McGraw-Hill, New York.
Kall, P. and Wallace, S. W. (1994) Stochastic Programming. Wiley, New York.
Keeney, R. L. and Raiffa, H. (1976) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York.
Koopmans, T. C. (1951) Analysis of production as an efficient combination of activities. In Activity Analysis of Production and Allocation, ed. T. C. Koopmans. Wiley, New York.
Kwak, W., Shi, Y., Lee, H. and Lee, C. F. (1996) Capital budgeting with multiple criteria and multiple decision makers. Review of Quantitative Finance and Accounting, in press.
Lee, H., Nazem, S. and Shi, Y. (1994a) Designing rural area telecommunication networks via hub cities. Omega: The International Journal of Management Science, 22, 305-314.
Lee, H., Shi, Y. and Stolen, J. D. (1994b) Allocating data files over a wide area network: Goal setting and compromise designs. Information and Management, 26, 85-93.
Lee, Y. R., Shi, Y. and Yu, P. L. (1990) Linear optimal designs and optimal contingency plans. Management Science, 36, 1106-1119.
Mangasarian, O. L. (1993) Mathematical programming in neural networks. ORSA Journal on Computing, 5, 349-360.
Papalambros, P. Y. and Wilde, D. J. (1988) Principles of Optimal Design. Cambridge University Press, Massachusetts.
Seiford, L. and Yu, P. L. (1979) Potential solutions of linear systems: the multi-criteria multiple constraint level program. Journal of Mathematical Analysis and Applications, 69, 283-303.
Shi, Y. (1991) Optimal linear production systems: Models, algorithms, and computer support systems. Ph.D. Dissertation, School of Business, University of Kansas, Kansas, U.S.A.
Shi, Y. (1992) Optimal linear designs and dual contingency plans: a contribution adjustment approach. Working Paper 9210, College of Business Administration, University of Nebraska at Omaha, Nebraska, U.S.A.
Shi, Y. (1995a) Constructing flexible dual contingency plans for optimal linear designs with multiple criteria. Journal of Mathematical Analysis and Applications, 191, 277-304.
Shi, Y. (1995b) A transportation model with multiple criteria and multiple constraint levels. Mathematical and Computer Modelling, 21(4), 13-28.
Shi, Y. and Haase, C. (1996) Optimal trade-offs of aggregate production planning with multi-criteria and multi-capacity-demand levels. International Journal of Operations and Quantitative Management, in press.
Shi, Y. and He, Z. (1994) Dual contingency plans in optimal generalized linear designs. Systems Science, 25, 1267-1292.
Shi, Y., Kwak, W. and Lee, H. (1994a) Optimal trade-offs of multiple factors in transfer pricing problems. Working Paper 94-3, College of Business Administration, University of Nebraska at Omaha, Nebraska, U.S.A.
Shi, Y. and Lee, H. (1996) A binary integer linear program with multi-criteria and multi-constraint levels. Computers and Operations Research, in press.
Shi, Y. and Yu, P. L. (1991) An introduction to selecting linear optimal systems and their contingency plans. In Operations Research, ed. G. Fandel and H. Gehring. Springer-Verlag, Berlin.
Shi, Y. and Yu, P. L. (1992) Selecting optimal linear production systems in multiple criteria environments. Computers and Operations Research, 19, 585-608.
Shi, Y., Yu, P. L. and Zhang, D. (1994b) Eliminating permanently dominated opportunities in multiple-criteria and multiple-constraint level linear programming. Journal of Mathematical Analysis and Applications, 183, 658-705.
Shi, Y., Yu, P. L. and Zhang, D. (1994c) Generating new designs using union operations. Computers and Mathematics with Applications, 27(12), 105-117.
Shi, Y., Yu, P. L. and Zhang, D. (1996) Generalized optimal designs and contingency plans in linear systems. European Journal of Operational Research, 89, 618-641.
Shi, Y., Yu, P. L., Zhang, C. and Zhang, D. (1994d) A computer-aided system for linear production designs. Decision Support Systems, 12, 127-149.
Shi, Y. and Zhang, D. (1993) Flexible contingency plans in optimal linear designs. Mathematical and Computer Modelling, 17(7), 13-28.
Wilde, D. J. (1978) Globally Optimal Design. Wiley, New York.
Yu, P. L. (1985) Multiple Criteria Decision Making: Concepts, Techniques and Extensions. Plenum, New York.