Solving embedded generalized network problems


Richard D. McBRIDE

School of Business, University of Southern California, Los Angeles, CA 90089-1421, U.S.A.

Abstract: A complete, unified description is given of the design and implementation of a fast and efficient program which solves linear programming problems with network substructures. The very efficient generalized network procedures which have recently been developed are extended in this paper to produce an algorithm, called EMNET, which can solve generalized network problems with additional constraints and additional (complicating) variables. The only requirement on the network substructure is that it contain at most two nonzero entries in each column. This requirement is the most general possible for network substructures. Many large linear programming problems can be placed in this form by a simple rearrangement of their rows and columns. Preliminary computational experience indicates EMNET is about five times faster than MINOS. The algorithm presented should provide a very efficient and fast solution for many linear programming problems involving production scheduling, distribution, facility location and personnel assignment. These problems tend to contain large embedded network substructures.

Keywords: Networks, linear programming

Received February 1984; revised May 1984

European Journal of Operational Research 21 (1985) 82-92
North-Holland

1. Introduction

In many linear programming problems there exists a large network substructure in the coefficient matrix. The network portion may correspond to a pure or generalized network. Methods have not existed to use this network substructure in a direct, efficient application of the simplex method. This paper gives a complete, unified description of the design and implementation of a very fast and efficient program which solves linear programming problems with network substructures. The program solves generalized network problems with additional constraints and additional side or complicating variables. It utilizes all of the latest generalized network data structures and solution procedures to yield a state of the art program. Many linear programming problems involving production scheduling, distribution, facility location and personnel assignment possess a large network structure in their coefficient matrices, which is complicated by the existence of a few additional constraints and/or side variables. These network structures can now be directly isolated in the basis inverse to produce a considerable reduction in memory requirements and fast execution of the steps of the simplex method.

Considerable progress has been made in the solution of linear programming problems with special network structures. Srinivasan and Thompson [29], Glover, Karney and Klingman [14], and Bradley, Brown and Graves [6] all report efficient algorithms for the solution of pure network problems. Algorithms for the solution of pure network problems with side constraints have been considered by McBride [25], Klingman and Russell [23], Chen and Saigal [11], and Glover, Karney, Klingman, and Russell [15]. However, limited computational experience exists. Glover, Karney, Klingman and Russell [15] report computational experience solving transshipment problems with one additional constraint. Glover and Klingman [16] developed procedures to solve truly embedded pure network problems with additional constraints and side variables, but they only report on an implementation for the solution of pure network problems with side constraints.



Efficient data structures and solution procedures for generalized network problems have been reported by Glover, Hultz, Klingman, and Stutz [13], Adolphson [1], and Brown and McBride [8]. The latter reported extensions of the most successful pure network data structures and solution procedures to the generalized network case. The preorder traversal method of representing the network basis [6] (also called the augmented threaded index method [17]) was extended, permitting a one-pass update of the basis representation and dual variables. Adolphson and Jo [2] also report extending the augmented threaded index method to generalized network problems. The internal arc representation of ordering arcs by head node and the candidate queue pricing strategy of GNET [6] were also found to be the most efficient for generalized network problems. As will be seen later, these same procedures and techniques have been adopted for the embedded case. They form the heart of the algorithm being presented.

Hultz and Klingman [20] report extending generalized network procedures to include additional constraints; no implementation is mentioned. Hultz and Klingman [21] report computational experience for a special implementation when only one additional constraint exists. Computational experience for generalized network problems with an arbitrary number of additional constraints has been given by McBride [26]. Koene [24] noted in his dissertation that no known algorithm existed for truly embedded generalized network problems.

We give in this report an extension of the concepts presented by Glover and Klingman [16] concerning externalized nodes for embedded pure network problems. These concepts have been extended to the embedded generalized network case and coupled with an efficient implementation of dynamic factorization due to Graves and McBride [19]. As noted earlier, the data structures and solution procedures found most successful for generalized network problems are now being extended to the embedded case. Previous computational experience [26] indicates the advisability of this further extension.

When most problems are initially formulated it is easy to identify network substructures for exploitation. For the remaining problems being formulated and problems already in existence, much work is currently in progress to identify network substructures. Brown and Wright [10] have explored ways to locate pure network constraint subsets by row and column rearrangements followed by scaling columns. Ali, Kennington, Patty and Shetty [3] use linear transformations to identify network problems with side constraints. Bixby and Fourer [5] have also discussed methods for finding network structures in the constraint matrices of large scale linear programming problems. Brown and McBride [9] have presented methods for locating embedded generalized network structures.

Embedded network structures can also be exploited by the use of decomposition methods. The decomposition methods due to Dantzig-Wolfe [12] and Benders [4] are already well known; the primal and dual decomposition method due to Graves and Van Roy [18] has proven to be computationally very efficient. Brown, Graves and Honczarenko [7] have reported excellent computational experience in solving mixed integer problems with more than 20,000 constraints and more than 40,000 variables. Their problems contain a significant pure network substructure.

In this paper we consider solving Embedded Generalized Network (EGN) problems of the form:

$$
\begin{aligned}
\text{(EGN)}\qquad \min_{x \ge 0}\quad & x_0\\
\text{s.t.}\quad & G_a x_a + G_c x_c = b,\\
& A_a x_a + A_c x_c = r,\\
& c_a x_a + c_c x_c + x_0 = 0.
\end{aligned}
$$

The cost coefficients have been reversed in sign; the computation of the reduced costs takes this sign reversal into account. For these problems the coefficient matrix has the form

$$
\begin{bmatrix} G_a & G_c \\ A_a & A_c \end{bmatrix},
$$

where G_a is an embedded generalized network matrix, the column block [G_c; A_c] corresponds to the side or complicating variables, and [A_a A_c] contains the coefficients of the side constraints. The only requirement we impose is that the columns of G_a contain at most two nonzero elements. No requirement of sign or value is imposed upon the elements of G_a.
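To make the block structure concrete, the following small NumPy sketch (with made-up stand-in matrices, not data from the paper) assembles the coefficient matrix of (EGN) and checks the only structural requirement, namely that each column of G_a have at most two nonzero entries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in blocks.  G_a may have any signs and values, but at most two
# nonzeros per column; G_c, A_a and A_c are unrestricted.
G_a = np.array([[ 2.0, 0.0, -1.5, 0.0],
                [-3.0, 1.2,  0.0, 0.0],
                [ 0.0, 0.0,  4.0, 0.7]])
G_c = rng.normal(size=(3, 2))
A_a = rng.normal(size=(2, 4))
A_c = rng.normal(size=(2, 2))

# Coefficient matrix of (EGN), excluding the cost row:
#   [ G_a  G_c ]
#   [ A_a  A_c ]
M = np.block([[G_a, G_c],
              [A_a, A_c]])

# The only structural requirement on the network part.
nonzeros_per_column = (G_a != 0).sum(axis=0)
assert (nonzeros_per_column <= 2).all()
print(M.shape, nonzeros_per_column)
```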


If G_c is the zero matrix, then (EGN) becomes a generalized network problem with side constraints. When [A_a A_c] is empty, we have a generalized network problem with complicating variables. When both [A_a A_c] and the column block [G_c; A_c] are empty, we have a generalized network problem. All embedded pure network models are special cases of (EGN) in which each column of G_a is restricted to have a +1 and a -1. It is this mild requirement, that each column of G_a have at most two nonzero elements, which makes (EGN) the most general of the network-related models.

The matrix G_a defines a graph. Any nonsingular square submatrix of G_a can be represented in graph form as a collection of trees and one-trees [22]. We exploit this fact in the simplex basis representation to reduce memory requirements and speed execution times.

2. The basis

The basis for (EGN) has the form

$$
B_p =
\begin{bmatrix}
G_1 & G_2 & 0\\
A_1 & A_2 & 0\\
c_1 & c_2 & 1
\end{bmatrix}
\qquad (1)
$$

where G_1 is a nonsingular square submatrix of G_a, [G_1 G_2] are rows of [G_a G_c], and the rows of [A_1 A_2] are composed of the rows of [G_a G_c] not in [G_1 G_2] together with all of the rows of [A_a A_c]. The last column of (1) is the unit column of x_0; the first and second column blocks are labelled (j) and (jj). The columns of B_p correspond to the basic variables: the columns of [G_1; A_1; c_1] are basic columns from [G_a; A_a; c_a], and the columns of [G_2; A_2; c_2] are the remaining basic columns of [G_a G_c; A_a A_c; c_a c_c]. The inverse corresponding to B_p is

$$
B_p^{-1} =
\begin{bmatrix}
G_1^{-1} + G_1^{-1} G_2 H^{-1} A_1 G_1^{-1} & -G_1^{-1} G_2 H^{-1} & 0\\
-H^{-1} A_1 G_1^{-1} & H^{-1} & 0\\
-(c_1 + w A_1)\, G_1^{-1} & w & 1
\end{bmatrix}
\qquad (2)
$$

where

$$
H = A_2 - A_1 G_1^{-1} G_2 \quad\text{and}\quad w = (c_1 G_1^{-1} G_2 - c_2)\, H^{-1}.
$$

The row blocks of (2) are labelled (i), (ii) and (iii), and its column blocks (j) and (jj).
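The partitioned inverse (2) can be verified numerically. The sketch below, using small random stand-in blocks rather than anything from the paper, forms B_p as in (1), computes H and w as defined above, and checks that the block formula reproduces the true inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 4, 3                       # rows of G_1 and rows of [A_1 A_2]

G1 = rng.normal(size=(m, m)) + 5 * np.eye(m)   # nonsingular stand-in for G_1
G2 = rng.normal(size=(m, k))
A1 = rng.normal(size=(k, m))
A2 = rng.normal(size=(k, k))
c1 = rng.normal(size=(1, m))
c2 = rng.normal(size=(1, k))

# Basis (1): the last column is the unit column of x_0.
Bp = np.block([[G1, G2, np.zeros((m, 1))],
               [A1, A2, np.zeros((k, 1))],
               [c1, c2, np.ones((1, 1))]])

G1inv = np.linalg.inv(G1)
H = A2 - A1 @ G1inv @ G2                        # subkernel
Hinv = np.linalg.inv(H)
w = (c1 @ G1inv @ G2 - c2) @ Hinv

Bp_inv = np.block([
    [G1inv + G1inv @ G2 @ Hinv @ A1 @ G1inv, -G1inv @ G2 @ Hinv, np.zeros((m, 1))],
    [-Hinv @ A1 @ G1inv,                      Hinv,               np.zeros((k, 1))],
    [-(c1 + w @ A1) @ G1inv,                  w,                  np.ones((1, 1))],
])

assert np.allclose(Bp_inv, np.linalg.inv(Bp))   # block formula (2) checks out
```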

H^{-1} is called the subkernel and is the explicit portion of B_p^{-1}. As will be seen later, all of the operations requiring the use of B_p^{-1} can be easily performed using H^{-1}, w, and G_1. H^{-1} is the working explicit inverse, and G_1 can be viewed as a basis corresponding to the generalized network portion. In the algorithm being presented, H^{-1} will be carried in a special explicit compact form due to Reid [28]. The dimension of H^{-1} will equal the number of rows of [A_a A_c] plus the number of rows of [G_a G_c] in [A_1 A_2]. As will be seen later, the number of rows of [G_a G_c] in [A_1 A_2] may change from one pivot to the next, which will require the dimension of H^{-1} to vary.

The submatrix G_1 is a nonsingular submatrix of G_a. This implies that G_1 has at most two nonzero elements in each column and that G_1 defines a graph composed of trees and one-trees [22]. The graph of G_1 is extended to include all nodes corresponding to rows of G_a. See Figure 1 for the possible different constructs in the graph of G_1.

Figure 1. Tree, extended-tree, and one-tree constructs in the graph of G_1

The graph of G_1 may contain trees and one-trees as in the typical generalized network basis [8]. Each node in the graph corresponds to a constraint in [G_a G_c]. Each arc corresponds to a basic variable of x_a. The arc connecting nodes 1 and 2 in Figure 1 corresponds to a basic variable which has the two nonzero entries a_2 and b_2 in rows 2 and 1, respectively, in G_1. The arc extending up from node 1 corresponds to a basic variable in x_a which has just one nonzero coefficient a_1 in row 1 of G_1. The nodes 1 through 5 with their arcs form a tree. Node 1 is called the root node. The arc extending up from node 1 is the root arc; it is also called a self-cycle [8]. All arcs are oriented toward the root.

Each constraint of [G_a G_c] placed in [A_1 A_2] results in an externalized node being created from the node corresponding to the constraint. Extending the graph of G_1 to include externalized nodes results in constructs we shall call extended trees. Nodes 6 through 14 with their attached arcs form what we call an extended tree. An extended tree is composed of one or more trees joined together. In Figure 1, nodes 7 through 10 form a tree and nodes 11 through 14 form a tree. Node 6 corresponds to row 6 of [G_a G_c], which has been placed in [A_1 A_2]. The graph of G_1 has been extended to include node 6. Nodes like node 6 are called externalized nodes because they are external to the graph of G_1. All operations requiring G_1 in the execution of the simplex method do not involve the externalized nodes. Arc (11, 6) in Figure 1 corresponds to an arc of G_a, or a variable of x_a, which has the two nonzero coefficients a_11 and b_11 in G_a. Since the row of [G_a G_c] corresponding to node 6 is in [A_1 A_2], only the nonzero coefficient a_11 remains in G_1. Node 11 is the root node, and the arc created from the single entry a_11 is the root arc or self-cycle for the tree formed from nodes 11 through 14. The extended trees are used to keep track of the externalized nodes and to facilitate placing them back into the formal graph of G_1. Each node externalized increases the dimension of H^{-1} by one. We will later discuss ways to keep the dimension of H^{-1} as small as possible by externalizing nodes only when absolutely required and by returning an externalized node to the graph of G_1 whenever possible. The arc leaving node 6 is a dummy self-cycle.

The graph of G_1 can also contain one-trees. A one-tree would be a tree except for a cycle. The cycle with its nodes and arcs forms the root for this portion of the basis graph. Associated with the cycle of each one-tree is a number called the cycle factor [8]. The nonsingularity of G_1 requires that the cycle factor be nonzero [22]. The cycle factor associated with the one-tree in Figure 1 is determined by the coefficients of the arcs on its cycle. Note that all arcs are oriented toward the root cycle. The arcs on the root cycle are oriented either in a clockwise or counterclockwise direction. Cycle factors can also be defined for self-cycles: we define the cycle factor for a self-cycle to be the nonzero coefficient the corresponding variable has in the constraint of the root node. The cycle factor for node 1 in Figure 1 is a_1.

3. Row segment generation

In the case when the pivot row is row l of (i) in (2), care must be taken to preserve the structure and nonsingularity of G_1. The row of -G_1^{-1} G_2 H^{-1} corresponding to the pivot row l must be computed to update B_p^{-1}. Let g^l be row l of G_1^{-1}. The desired row is

$$
-\,g^l G_2 H^{-1}. \qquad (3)
$$

Row g^l is obtained by solving the system g^l G_1 = e_l. Row g^l is saved and used in the basis update step to help locate, if necessary, a replacement variable for the leaving basic variable which preserves the structure and nonsingularity of G_1.
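A minimal dense sketch of this row-segment computation follows (in EMNET g^l would be obtained from the tree representation of G_1 rather than from a dense solve; the matrices here are arbitrary stand-ins).

```python
import numpy as np

def row_segment(G1, G2, Hinv, l):
    """Return g^l (row l of G1^{-1}) and the row -g^l G2 H^{-1} of (3)."""
    e_l = np.zeros(G1.shape[0])
    e_l[l] = 1.0
    g_l = np.linalg.solve(G1.T, e_l)       # solves g^l G1 = e_l
    return g_l, -(g_l @ G2) @ Hinv

rng = np.random.default_rng(2)
G1 = rng.normal(size=(4, 4)) + 4 * np.eye(4)
G2 = rng.normal(size=(4, 2))
H  = rng.normal(size=(2, 2)) + 3 * np.eye(2)
g_l, segment = row_segment(G1, G2, np.linalg.inv(H), l=1)
print(np.allclose(g_l @ G1, np.eye(4)[1]))   # True: g^l really is row 1 of G1^{-1}
```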

4. Updating the basis partition

When the pivot row is a row of (ii) in (2), H^{-1} is updated using the normal pivot operations. In this case a column of (jj) is being replaced. If the entering basic variable is from x_a, it may be possible to eliminate an externalized node and reduce the dimension of H^{-1}; this will be considered in detail later. When the pivot row is a row of (i) in (2), a column of (j) in (1) is being replaced in the pivot. Here care must be taken to preserve the structure and nonsingularity of G_1.

Subcase 1. G_1 preserved in direct exchange

The nonsingularity and structure of G_1 are preserved in the exchange if the entering variable is from x_a and g^l g_k is not equal to zero (g_k is the portion of the column of the entering basic variable that corresponds to the rows of G_1). When the entering variable is from x_a we know that g_k has at most two nonzero entries. The nonsingularity of G_1 is maintained by g^l g_k being not equal to zero. In practice, this inner product must be greater in absolute value than some tolerance level; we use 0.0001 in our implementation. In this subcase column l of G_1 is replaced by g_k using the standard generalized network basis update [8]. Finally, the pivot row segment (3) is used to update H^{-1} and w using the standard pivot update operations of linear programming.

Subcase 2. G_1 not preserved in direct exchange but replacement column found

When the entering variable is from x_c the structure of G_1 would be lost because of the entering complicating variable. The nonsingularity of G_1 would also be lost if the entering variable were from x_a but g^l g_k = 0. To find a possible replacement column we search the columns of G_2 which correspond to variables in x_a. Any column of G_2 from G_a having a nonzero entry in g^l G_2 can be used to replace the leaving basic variable in G_1. Suppose the p-th column of G_2, g_p, corresponds to a column of G_a and the inner product g^l g_p is in absolute value greater than our tolerance level. In this case, we exchange the variable in the p-th column of G_2 with the variable in the l-th column of G_1. This requires a standard generalized network basis update [8]. In making this exchange, we replace the p-th row of H^{-1} by the row segment (3) and make the appropriate exchange of entries in the right-hand-side values. Finally, we update H^{-1} and w using the p-th row of H^{-1} as the pivot row. Here the entering nonbasic variable replaces the new variable in column p of G_2. If the new variable is from x_a it may be possible to eliminate an externalized node and reduce the dimension of H^{-1}. This possibility will be considered later.

Subcase 3. G_1 not preserved in direct exchange and no replacement column found

This subcase results when the entering variable is from x_c, or when the nonsingularity of G_1 is not maintained by an entering variable from x_a, and no replacement column exists. In this case a node of G_1 must be externalized. Suppose arc (u, v) is the leaving arc in column l of G_1, oriented in G_1 from node u to node v; then node u must be externalized. Node v remains attached, but there is no arc to reattach node u. The externalization of node u implies that the row of [G_a G_c] corresponding to node u must be placed with the rows of [A_1 A_2] and the dimension of H^{-1} increased by one. As shown in (4), the explicit portion of B_p^{-1} is increased by one row and one column:

$$
\begin{bmatrix}
H^{-1} & g^u\\
-\,g^l G_2 H^{-1} & g_l^u\\
w & c_u
\end{bmatrix}
\qquad (4)
$$

The pivot row g^l computed earlier is used, together with g^u, the u-th column of -H^{-1} A_1 G_1^{-1}; g_l^u, the element of (i) in (2) in the u-th column and l-th row; and c_u, the element in the u-th column of -(c_1 + w A_1) G_1^{-1}. These numbers are computed before the basis is updated, while the current partition of B_p^{-1} is valid. The right-hand-side value in row l is also moved down. Next arc (u, v) is replaced in G_1 by a dummy self-cycle arc. Finally, B_p^{-1} is updated using row h + 1 as the pivot row.

Controlling the size of H^{-1} by eliminating externalized nodes

Externalized nodes can be placed back into the formal (nonextended) graph of G_1 when easily identified conditions exist. Suppose an arc (i_1, j_1) of G_a is in G_2 and the backpaths of i_1 and/or j_1 contain an externalized node, u_1. Node u_1 and arc (i_1, j_1) can be placed in G_1 provided the cycle factor corresponding to the resulting tree or one-tree is nonzero.

Let no cycle be created by the backpaths of i_1 and j_1 and let one of the backpaths contain the externalized node u_1 [16]. In this situation arc (i_1, j_1) attaches all of the descendants of u_1 to an existing tree which already has a nonzero cycle factor. Node u_1 would be placed back into the formal tree of G_1 by executing a standard generalized network exchange (pivot) with (i_1, j_1) replacing the dummy arc attached to u_1. The row of H^{-1} corresponding to the variable with arc (i_1, j_1) and the column of H^{-1} corresponding to node u_1 are deleted to reduce the dimension of H^{-1} by one. This can be illustrated in Figure 1. Suppose arc (9, 2) is in G_2. The backpath of node 9 contains the externalized node 6. Here arc (9, 2) would enter the graph of G_1 along with node 6. The arcs connecting nodes 7 and 11 to node 6 would be contained entirely within the tree rooted at node 1 after the exchange.

Figure 2


Let a cycle with nonzero cycle factor be created by the backpaths of i_1 and j_1 and let the backpaths contain the externalized node u_1. In this case a cycle is created by the addition of (i_1, j_1), which becomes the root of a newly created one-tree. Again a standard generalized network pivot is executed using (i_1, j_1) as the entering variable and the dummy arc attached to node u_1 as the leaving variable. The dimension of H^{-1} is reduced by one as before. To illustrate this, let arc (9, 10) be in G_2. In Figure 1 both backpaths contain the externalized node 6. Suppose that the cycle formed from nodes 7, 8, 10 and 9 has a nonzero cycle factor. Exchanging arc (9, 10) for the dummy arc leaving node 6 produces the one-tree in Figure 2. An arc which produces a cycle with a zero cycle factor cannot be brought into the graph of G_1; such an arc would create a system of linearly dependent columns in G_1.
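The admissibility test for returning an externalized node to G_1 can be sketched as follows; the backpaths, the set of externalized nodes and the resulting cycle factor are assumed to be supplied by the caller.

```python
def can_reabsorb(backpath_i1, backpath_j1, externalized, cycle_factor_if_added,
                 tol=0.0001):
    """Test (schematically) whether arc (i1, j1) of G_a sitting in G_2 allows an
    externalized node on one of the backpaths to rejoin the graph of G_1."""
    touches_external = any(v in externalized
                           for v in backpath_i1 + backpath_j1)
    # A zero cycle factor would mean linearly dependent columns in G_1.
    return touches_external and abs(cycle_factor_if_added) > tol

# Arc (9, 2) from the Figure 1 discussion: node 6 lies on the backpath of node 9.
print(can_reabsorb([9, 7, 6], [2, 1], externalized={6},
                   cycle_factor_if_added=1.3))   # True
```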

"

"

'T~..L

5. Representation of the graph of G_1

The preorder traversal method of basis representation for pure network problems [6] was extended to generalized network problems by Brown and McBride [8]. It is this extended method that we now further extend to include externalized nodes. The extension required modifying the generalized network representation so that externalized nodes can easily be recognized and yet not influence the computations involving G_1.

Three fundamental arrays are required for the graphic representation of G_1. The predecessor function and its array P( ) are defined so that all basic arcs not on a cycle are oriented so that a backpath is created to a cycle. Also, basic arcs in a cycle are oriented uniformly in a clockwise (or counterclockwise) manner. To obtain this orientation the direction of some arcs must be reversed. The sign bit of the predecessor array is used as an indicator: if P(I) < 0, then the orientation of arc (I, -P(I)) is reversed from its original orientation. (All arcs are given what we call their original orientation by the manner in which they are inputted.) The depth array D( ) gives for each node the number of nodes on the backpath before encountering a cycle; nodes on a cycle have depth zero. The preorder traversal array IT( ) is maintained so that all preorder successors of a node are encountered before its predecessor.
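The following sketch illustrates the three arrays on a small five-node tree (illustrative values, not those of Table 1) together with the two operations they support: climbing a backpath via P( ) and D( ), and reading off preorder successors from IT( ).

```python
# Predecessor P, depth D, and preorder-traversal thread IT for a small tree
# rooted at node 1 with arcs 2->1, 3->1, 4->3, 5->3 (index 0 unused).
# In the paper a negative P entry flags a reversed arc orientation; in this
# sketch P[1] = -1 simply marks the root.
P  = [None, -1, 1, 1, 3, 3]
D  = [None,  0, 1, 1, 2, 2]
IT = [None,  3, None, 4, 5, 2]       # preorder thread 1 -> 3 -> 4 -> 5 -> 2

def backpath(node):
    """Nodes from `node` up to its root (depth 0), following predecessors."""
    path = [node]
    while D[node] > 0:
        node = abs(P[node])
        path.append(node)
    return path

def preorder_successors(node):
    """All preorder successors of `node`, read directly off the thread IT."""
    succ, nxt = [], IT[node]
    while nxt is not None and D[nxt] > D[node]:
        succ.append(nxt)
        nxt = IT[nxt]
    return succ

print(backpath(4), preorder_successors(3))   # [4, 3, 1] [4, 5]
```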


Table 1 gives a representation in terms of P( ), D( ) and IT( ) of the network in Figure 1. The predecessor of node 6 equals 26. This implies that the constraint corresponding to row 6 of [G_a G_c] forms row 5 of [A_1 A_2]. For an externalized node K we have P(K) = M + L, where M is the number of nodes or constraints in [G_a G_c] and L is the position of that constraint in [A_1 A_2]. When P(K) is greater than M we know that node K is an externalized node, that a dummy self-cycle exists there, and that column P(K) - M of H^{-1} corresponds to the K-th constraint of [G_a G_c]. Other arrays needed would be arrays to store the cycle factors and the location of basic variables in the arc arrays.

A careful analysis of the computations required using G_1^{-1} reveals that one must be able to efficiently solve

$$
G_1 z = \lambda \qquad (5.1)
$$

or

$$
z^T G_1 = \lambda^T \qquad (5.2)
$$

where z and λ are column vectors with M elements (M is the row and column dimension of G_1), λ is the right-hand-side vector consisting of one nonzero component, and z is the vector of unknowns to be determined. The solution of (5.1) requires that the backpath of the node corresponding to the nonzero entry in λ be determined; the nonzero entries in z correspond to the nodes on the backpath plus the nodes in the root cycle. The solution of (5.2) requires that all successors of the node corresponding to the nonzero entry in λ be determined; only successor nodes will have nonzero entries in z^T. The traversal array IT( ) readily yields this set of successors. See Brown and McBride [8] for more details. It is this ability to quickly solve the linear systems in (5) which permits us to fully exploit the network substructure of the problem. The efficient generalized network procedures of [8] are used to quickly solve systems (5) and perform basis exchanges involving G_1.

Table 1
Preorder traversal representation of the network in Figure 1

Node               1  2  3  4  5  6   7  8  9   10  11  12  13  14  15   16  17  18  19  20  21
Predecessor P( )  -1  1  1  3  3  26  6  7  -7  8   6   11  12  12  -18  15  16  16  18  19  19
Depth D( )         0  1  1  2  2  0   1  2  2   3   1   2   3   3   0    0   1   0   1   2   2
Traversal IT( )    3  1  5  7  7  9   10 8  11  12  13  14  6   16  17   18  19  20  21  15
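The encoding P(K) = M + L for externalized nodes makes the bookkeeping for H^{-1} a one-line lookup; a small sketch, using the Table 1 value P(6) = 26 with M = 21:

```python
def h_inverse_column(P, K, M):
    """Externalized-node bookkeeping via the encoding P(K) = M + L:
    return the column of H^{-1} tied to constraint K of [G_a G_c],
    or None if node K is not externalized.  M is the number of
    nodes/constraints in [G_a G_c]."""
    return P[K] - M if P[K] > M else None

# From Table 1 (M = 21): P(6) = 26, so node 6 is externalized and its
# constraint occupies position 5 of the explicit inverse H^{-1};
# node 7 (P(7) = 6) is an ordinary node.
P = {6: 26, 7: 6, 11: 6}
print(h_inverse_column(P, 6, 21), h_inverse_column(P, 7, 21))   # 5 None
```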

6. Pricing

The entering basic variable is determined by pricing some subset of the nonbasic variables using the candidate queue pricing procedure [6]. In the candidate queue procedure a list of interesting nodes and arcs is maintained and scanned in a cyclic manner. The incoming arc is selected from the candidate queue by pricing the entries in a block of specified size ("number examined"). When a node is encountered it is replaced by the best pricing arc entering the node. Those arcs pricing negatively are retained in the queue, with the best priced one becoming the entering basic variable. When the end of the queue is encountered the queue is refreshed by pricing, in a cyclic manner, a number of nodes equal to the parameter IPG. The first NNS (a pricing parameter) pivots are called the opening gambit. During the opening gambit the nodes incident to the entering basic variable are added to the candidate queue. The reduced costs, c̄_j, for the nonbasic variables are computed using the bottom row of B_p^{-1} in (2):

$$
\bar{c}_j =
\begin{bmatrix} -(c_1 + w A_1)\, G_1^{-1} & w & 1 \end{bmatrix}
\begin{bmatrix} g^j \\ a^j \\ c_j \end{bmatrix}
= c_j + \pi g^j + w a^j
\qquad (6)
$$

where π = (-c_1 - w A_1) G_1^{-1}, or

$$
\pi G_1 = -c_1 - w A_1. \qquad (7)
$$

System (7) can be solved at the beginning of each pricing operation. The components of π are the dual variables corresponding to the constraints [G_1 G_2]. For each tree one would compute the dual variable corresponding to the root node and then, starting with it, compute the dual variable for each direct successor node using the traversal array as a guide. For each one-tree a dual variable would be computed for one of the cycle nodes and then the rest of the dual variables would be computed from it. See Brown and McBride [8] for complete details. With π computed, the reduced costs c̄_j in (6) can easily be computed as needed. However, a great reduction in computation time can be obtained if the components of π are computed only as needed; see [26]. Typically only a small percentage of the π components are required until the candidate queue becomes short in length. When computing duals as needed one would proceed as follows: the first time a particular dual is needed, proceed up the subtree until a node is encountered which already has its dual computed or the root of the subtree is encountered. If an existing dual is encountered, proceed back down the backpath computing only the duals on the backpath. The components of the right-hand side of (7) are also computed as needed. When a root node is encountered the duals for the root nodes are computed and then the duals back down the backpath.
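A sketch of this compute-duals-as-needed scheme is given below. The recursion for a single arc is passed in as a hypothetical helper (dual_from_parent), since in the generalized network case it involves the arc coefficients; only the climb-up/fill-down bookkeeping is shown.

```python
def make_lazy_duals(P, D, root_dual, dual_from_parent):
    """Compute dual variables only on demand: climb the backpath until a node
    with a known dual (or a root) is met, then fill the duals back down.
    `dual_from_parent(node, parent_dual)` is a hypothetical helper carrying the
    one-arc recursion; `root_dual(node)` starts a tree or cycle."""
    known = {}

    def dual(node):
        path = []
        while node not in known and D[node] > 0:   # climb toward the root
            path.append(node)
            node = abs(P[node])
        if node not in known:                      # reached an uncomputed root
            known[node] = root_dual(node)
        for v in reversed(path):                   # fill back down the backpath
            known[v] = dual_from_parent(v, known[abs(P[v])])
        return known[path[0]] if path else known[node]

    return dual

# Tiny usage on a three-node chain 3 -> 2 -> 1 with a made-up recursion.
P = {1: -1, 2: 1, 3: 2}
D = {1: 0, 2: 1, 3: 2}
dual = make_lazy_duals(P, D, root_dual=lambda r: 0.0,
                       dual_from_parent=lambda v, parent: parent + 1.0)
print(dual(3))   # 2.0
```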

7. Column generation

The transformed pivot column is

$$
\bar{k}_j = B_p^{-1} k_j = B_p^{-1}
\begin{bmatrix} g^j \\ a^j \\ c_j \end{bmatrix}
$$

where

$$
\bar{a}^j = H^{-1}\!\left(a^j - A_1 G_1^{-1} g^j\right) = H^{-1}\!\left(a^j - A_1 t\right), \qquad (8.1)
$$

$$
\bar{g}^j = G_1^{-1}\!\left(g^j - G_2 \bar{a}^j\right), \qquad (8.2)
$$

$$
t = G_1^{-1} g^j. \qquad (8.3)
$$

First solve system (8.3). This is similar to system (5.1) except for possible multiple entries in g^j; we must solve a (5.1) type system for each nonzero coefficient in g^j and then add their solutions. Next compute ā^j in (8.1) and take ratios. Terminate column generation when the ratio test yields a zero ratio. When the ratio is not zero, solve (8.2); we must solve a (5.1) type system for each nonzero coefficient of (g^j - G_2 ā^j) and add their solutions. Finally, take ratios using ḡ^j to finish the determination of the pivot row.

8. Solution strategy and data structure

The arcs in G_a are organized by head node. Computational experience with generalized network problems [8] and generalized network problems with side constraints [26] indicates that this data structure is very effective. In problems where G_a has many more sinks than sources this method of organizing the arcs has been found to be very effective.

The three-phase solution strategy is employed. In this approach the initial phase is called phase 0. In phase 0 we solve (or partially solve) the generalized network problem obtained by dropping the side constraints and the complicating variables. This permits a very quick approximate solution using just the generalized network machinery. The side constraints and complicating variables are added back in phase I and feasibility is obtained. The complicating variables are grouped together and are priced as one or more fictitious nodes in the candidate queue pricing procedure. Once feasibility is obtained, phase II proceeds to optimality.
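Schematically, the three-phase strategy looks as follows; the solver and problem interfaces are hypothetical, and the sketch only mirrors the sequence of steps described above.

```python
def solve_three_phase(problem, solver):
    """Schematic of the three-phase strategy described above.  `problem` and
    `solver` are hypothetical interfaces, not the EMNET code."""
    # Phase 0: drop the side constraints and complicating variables and
    # (partially) solve the remaining generalized network problem using only
    # the generalized network machinery.
    network_only = problem.drop_side_constraints_and_variables()
    basis = solver.solve_network(network_only, partial=True)

    # Phase I: add the side constraints and complicating variables back and
    # drive out infeasibility; complicating variables are priced as one or
    # more fictitious nodes in the candidate queue.
    basis = solver.attain_feasibility(problem, basis)

    # Phase II: continue from the feasible basis to optimality.
    return solver.optimize(problem, basis)
```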

Node aggregation

We also employ an automatic basis aggregation refinement [6,8] for the generalized network substructure basis, G_1. Only those nodes with successors are explicitly maintained in the tree structure of G_1. All nodes, except root nodes, with no successors are carried implicitly, or aggregated. Figure 3 shows the aggregated trees from Figure 1. In phase 0 the duals for aggregated nodes are computed from that of the immediate predecessor of the node as needed during priceout. When the backpath of an entering arc begins with an aggregated node it is disaggregated, and when the leaving arc isolates nodes with no successors, they are aggregated. This results in a considerable saving of execution time for many problems in phase 0.

Figure 3. Aggregated tree structures

The aggregated tree structures are also maintained in phases I and II. Here, care must be taken to ensure nothing is lost in computations involving G_1. Savings occur in at least two ways. First, smaller tree structures are maintained and updated as needed. Second, unneeded computations are eliminated. Consider, for example, the computations involving the row of G_1^{-1}, g^l, in (3). That portion of g^l corresponding to aggregated nodes is not computed initially. During computations involving g^l, if a value is needed corresponding to an aggregated node it is at that time computed from the value of its predecessor. Hence those entries in g^l corresponding to aggregated nodes not needed to compute -g^l G_2 in (3) are not computed.
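In the threaded representation a node has no successors exactly when the next node on the traversal thread is not deeper than it. A small sketch of identifying the aggregated (implicitly carried) nodes under that assumption:

```python
def aggregated_nodes(nodes, D, IT, is_root):
    """Nodes carried implicitly (aggregated): every non-root node with no
    preorder successors.  In the threaded representation a node has a
    successor exactly when the next node on the thread is deeper than it."""
    has_successor = set()
    for v in nodes:
        nxt = IT.get(v)
        if nxt is not None and D[nxt] > D[v]:
            has_successor.add(v)
    return {v for v in nodes if v not in has_successor and not is_root(v)}

# The five-node tree used earlier: root 1, children 2 and 3, children 4, 5 of 3.
D  = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
IT = {1: 3, 3: 4, 4: 5, 5: 2, 2: None}
print(aggregated_nodes(D.keys(), D, IT, is_root=lambda v: D[v] == 0))
# -> {2, 4, 5}: their duals are recovered from predecessors only when needed.
```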

9. Computational experience

An algorithm [26] developed to solve generalized network problems with side constraints was modified in three major areas:

(1) The representation of G_1 was changed from the augmented predecessor index method to the preorder traversal method. The extension of the preorder traversal method to the generalized network case is fully developed by Brown and McBride [8].

(2) The code was modified to be able to solve generalized network problems with both side constraints and side variables. This required, among other things, that the preorder traversal method be extended to include extended trees and their corresponding externalized nodes. This extension includes the developments of Glover and Klingman [16] as a special case. We have the first known implementation of the concepts involving externalized nodes.

(3) The automatic basis aggregation refinement [6,8] for the generalized network substructure basis G_1 was introduced. Very significant improvements in execution time [6,8] result with this refinement.

The algorithm which resulted from the above modifications has been tailored primarily for LP problems with significant embedded pure or generalized network components in their coefficient matrices. Several key implementation issues have not been re-examined because of their success in solving generalized network problems [8] and generalized network problems with side constraints [26]. These issues include:

(a) Static storage of arcs. We store the arcs for the G_a portion of the coefficient matrix by head node. All arcs ending in the same node are stored consecutively. Other methods include storage by tail node or in arbitrary sequence.

(b) Network basis representation. We use an extended preorder traversal method as opposed to the triple label (augmented predecessor index) method.

(c) Basis aggregation. We use an aggregated network basis as opposed to a complete explicit representation.

(d) Pricing scheme. We use a candidate queue pricing strategy. Others include different variations of candidate list schemes.

(e) Solution strategy. We employ a three-phase approach. In the initial phase we drop the side constraints and side variables and solve the resulting generalized network. These are then added back and we proceed in the normal two-phase approach. Other approaches include some variation of the Big M method or proceeding directly with the two-phase method.

The choices we use in (a), (b), (c), and (d) were studied extensively by Brown and McBride [8] and determined to be very effective. These same choices were recommended by Bradley, Brown and Graves [6]. Issues (a), (b), (d), and (e) were found to be best for generalized network problems with side constraints [26].

Table 2
Test problems

Problem   # network constraints   # side constraints   # network variables   # side variables
 1                295                      5                   3960                 10
 2                395                      5                   4980                 20
 3                990                     10                   3977                 30
 4                586                      0                   3364                 11
 5                586                     45                   3364                  0
 6                796                     75                   4133                  0
 7                796                      0                   4133                 18
 8               1000                      0                   4007                  0
 9                951                     48                   4953                 66
10               2178                     95                   7216                110
11               2178                    150                   7216                195

The resulting implementation is a code written in FORTRAN which we call EMNET. EMNET is an in-core code which employs super-sparsity techniques: only the unique nonzero elements of G_a, A_a, and A_c are stored. The code handles directly the fact that each column of G_a contains at most two nonzero elements. Most generalized network codes require that when a column has two nonzero elements, one of them must be either a +1 or a -1, depending upon the sign convention adopted. In EMNET, no transformations are employed to give G_a a particular sign convention.

Several test problems have been solved to yield some preliminary computational experience. Table 2 identifies the problems that have been solved. Several types of problems are included.
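A toy sketch of the head-node arc organization combined with a super-sparsity pool of unique nonzero values (the layout is illustrative only; EMNET's actual FORTRAN arrays are not shown in the paper):

```python
from collections import defaultdict

def build_arc_storage(arcs):
    """Group the arcs of G_a by head node and pool their unique nonzero values.
    Each arc is (tail, head, tail_coeff, head_coeff)."""
    values, value_index = [], {}

    def intern(v):                       # store each distinct nonzero only once
        if v not in value_index:
            value_index[v] = len(values)
            values.append(v)
        return value_index[v]

    by_head = defaultdict(list)
    for tail, head, a, b in arcs:
        by_head[head].append((tail, intern(a), intern(b)))
    return by_head, values

arcs = [(1, 2, 2.0, -3.0), (3, 2, 2.0, 0.7), (1, 3, -1.5, 4.0)]
by_head, values = build_arc_storage(arcs)
print(dict(by_head), values)   # arcs grouped by head; 2.0 stored only once
```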


Problem 8 is a generalized network (GN) problem; problems 4 and 7 are GN problems with side variables; 5 and 6 are GN problems with side constraints; and problems 1, 2, 3, 9, 10, and 11 are GN problems with side constraints and side variables. Problems 4 and 5 are equivalent problems. They represent alternate formulations for a problem that has the requirement that the flow on all arcs entering certain nodes be equal in value. This is an example of a processing network problem; see Koene [24] for further details. Problems 6 and 7 are also equivalent processing network problems. We are currently developing a specialized embedded generalized network code to solve processing network problems [27].

Computational results were obtained on an IBM 370/168-3, using the FORTRAN H compiler with OPT(2). The problems in Table 2 were solved using EMNET and MINOS. MINOS is an LU-factorization code developed at the Systems Optimization Laboratory at Stanford University by Murtagh and Saunders. The PARTIAL PRICE parameter was set to 20 for the problems in Table 2; all other MINOS defaults were used. EMNET and MINOS solution times are given in Table 3. The solution times exclude input overhead. The accumulated solution times and pivot counts are given for EMNET. The size of the working inverse, H^{-1}, is also given at optimality. The results in Table 3 indicate that EMNET is about five times faster than MINOS. The times on problems 4 and 5 are surprisingly close. As expected, on problem 8, the generalized network problem, EMNET is about 25 times faster.

Table 3
Test results (IBM 370/168, FORTRAN H compiler)

Problem   # Pivots at end        # Pivots at end        # Pivots at     Ending size   Solution      MINOS
          phase 0 / time (sec)   phase I / time (sec)   end phase II    of H^{-1}     time (sec)    time (sec)
 1        1032 / 4.75            1053 /  5.34             1370             10           11.280        54.75
 2        1639 / 3.82            1651 /  4.17             1942              7            ?.432        61.19
 3        2969 / 4.75            3657 / 15.57             4496             10           31.269       124.86
 4        2126 / 3.50            2126 /  3.50             2602              3           10.603        17.74
 5        2403 / 3.93            2449 /  4.66             3070             45           17.321        22.15
 6        2280 / 4.60            2345 /  6.308            2365             75            7.025        37.10
 7        2190 / 4.05            2190 /  4.05             2686              7           12.363        32.79
 8        2548 / 4.90               -                     2548              0            5.082       12?.86
 9        1243 / 3.91            1498 / 13.51             2424             48           51.207       252.0?
10        2903 / 4.84            3036 / 11.75             3157             95           19.780       176.87
11        2903 / 4.90            3098 / 17.93             3298            150           30.378       169.76

("?" marks a digit that could not be recovered from the scan.)

On problem 10 EMNET is about nine times faster. These results clearly indicate that a significant reduction in solution time can be obtained by exploiting the embedded network in large-scale LP problems. The dimension of H^{-1} will always be at least as large as the number of side constraints. Externalized nodes may cause the dimension of H^{-1} to exceed the number of side constraints when side variables exist in a problem. The ending sizes of H^{-1} are very encouraging. The techniques used to control the size of H^{-1} worked well on all problems.

References

[1] Adolphson, D., "Design and implementation of a generalized network computer code", presented at the ORSA/TIMS meeting, Colorado Springs, November 1980. (Also a working paper, BYU, January 1982.)
[2] Adolphson, D., and Jo, G.I., "Computational experiments on a threaded index generalized network code", presented at the ORSA/TIMS meeting, Chicago, April 1983. (Also a working paper, BYU, 1983.)
[3] Ali, A., Kennington, J., Patty, B., and Shetty, B., "Transforming linear programs to networks with side constraints", presented at the Detroit ORSA/TIMS Meeting, April 1982.
[4] Benders, J.F., "Partitioning procedures for solving mixed-variable programming problems", Numerische Mathematik 4 (1962).
[5] Bixby, R., and Fourer, R., "Finding embedded networks in linear programs", presented at the Detroit ORSA/TIMS Meeting, April 1982.
[6] Bradley, G., Brown, G., and Graves, G., "Design and implementation of large-scale primal transshipment algorithms", Management Science 24 (1) (1977) 1-35.
[7] Brown, G., Graves, G., and Honczarenko, M., "Large-scale facility and equipment location: An application of goal programming in multicommodity decomposition", presented at the Houston ORSA/TIMS Meeting, October 1981.
[8] Brown, G., and McBride, R., "Solving generalized networks", presented at the Detroit ORSA/TIMS Meeting, April 1982; to appear in Management Science.
[9] Brown, G., and McBride, R., "Extracting embedded generalized network problems from general LP problems", presented at the ORSA/TIMS Meeting, Houston, October 1981; Kevin Wood added as author; to appear in Mathematical Programming.
[10] Brown, G., and Wright, W., "Automatic factorization of embedded structure in large-scale optimization models", Naval Postgraduate School, Monterey, 1980.
[11] Chen, S., and Saigal, R., "A primal algorithm for solving a capacitated network flow problem with additional linear constraints", Networks 7 (1977) 59-79.
[12] Dantzig, G.B., and Wolfe, P., "Decomposition principle for linear programs", Operations Research 8 (1) (1960) 101-111.
[13] Glover, F., Hultz, J., Klingman, D., and Stutz, J., "Generalized networks: A fundamental computer-based planning tool", Management Science 24 (12) (1978) 1209-1220.
[14] Glover, F., Karney, D., and Klingman, D., "Implementation and computational study on start procedures and basis change criteria for a primal network code", Networks 4 (3) (1974) 191-212.
[15] Glover, F., Karney, D., Klingman, D., and Russell, R., "Solving singly constrained transshipment problems", Transportation Science 12 (4) (1978).
[16] Glover, F., and Klingman, D., "The simplex SON algorithm for LP/embedded network problems", Mathematical Programming Study 15 (1981) 148-176.
[17] Glover, F., Klingman, D., and Stutz, J., "Augmented threaded index method for network optimization", INFOR 12 (3) (1974) 293-298.
[18] Graves, G., and Van Roy, T., "Decomposition for large-scale linear and mixed integer linear programming", UCLA Technical Report, November 1979.
[19] Graves, G., and McBride, R., "The factorization approach to large-scale linear programming", Mathematical Programming 10 (1976) 91-110.
[20] Hultz, J., and Klingman, D., "Solving constrained generalized network problems", Research Report CCS 257, Center for Cybernetic Studies, University of Texas at Austin, 1976.
[21] Hultz, J., and Klingman, D., "Solving singularly constrained generalized network problems", Applied Mathematics and Optimization 4 (1978) 103-119.
[22] Kennington, J., and Helgason, R., Algorithms for Network Programming, John Wiley & Sons, New York, 1980.
[23] Klingman, D., and Russell, R., "On solving constrained transportation problems", Operations Research 23 (1) (1975) 91-107.
[24] Koene, J., "Minimal cost flow in processing networks, a primal approach", Ph.D. Thesis, Eindhoven University of Technology, Netherlands, 1982.
[25] McBride, R.D., "Factorization in large-scale linear programming", Working Paper No. 22, University of California, Los Angeles, 1972.
[26] McBride, R., "Solving generalized network problems with side constraints", presented at the CORS/TIMS/ORSA Meeting, Toronto, May 1981. Working paper available.
[27] McBride, R., "Solving processing network problems", presented at the ORSA/TIMS Meeting, November 1982. Working paper available.
[28] Reid, J.K., "A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases", Report CSS 20, Computer Science and Systems Division, A.E.R.E., Harwell, England, 1975.
[29] Srinivasan, V., and Thompson, G., "Accelerated algorithms for labeling and relabeling of trees with applications for distribution problems", JACM 19 (4) (1972) 712-726.