A primal deficient-basis simplex algorithm for linear programming


Available online at www.sciencedirect.com

Applied Mathematics and Computation 196 (2008) 898–912 www.elsevier.com/locate/amc

Ping-Qi Pan
Department of Mathematics, Southeast University, Nanjing 210096, People's Republic of China

Abstract

The standard basis, which plays a fundamental role in simplex methodology, was recently extended to include a deficient case (with fewer columns than rows) by taking advantage of primal degeneracy. Computational results have been favorable with dense implementations. In this paper, we propose a primal simplex algorithm using sparse LU factors of deficient bases. Amenable to real-world linear programming problems, which are often degenerate or even highly degenerate, the algorithm would solve them with potentially improved stability compared to the simplex algorithm. The proposed algorithm has been implemented and tested on a set of 50 Netlib test problems as well as a set of 15 much larger real-world problems, including 8 Kennington and 5 BPMPD problems. It significantly outperformed MINOS 5.3 in terms of both iteration counts and run time. In particular, these results reveal that there is no inevitable correlation between an algorithm's inefficiency and degeneracy (contradicting common belief).

© 2007 Elsevier Inc. All rights reserved.

Keywords: Large-scale linear programming; Simplex algorithm; Primal degeneracy; Deficient-basis; LU factorization

1. Introduction

We are concerned with the linear programming (LP) problem in the standard form

    minimize  c^T x,
    subject to  A x = b,  x ≥ 0,                                       (1.1)

where the columns and rows of the constraint matrix A ∈ R^{m×n} (m < n), the right-hand side b ∈ R^m, and the cost c ∈ R^n are all nonzero, and Ax = b is compatible. We stress that no assumption is made on the rank of A, except for 1 ≤ rank(A) ≤ m.

Theoretically, the difficulty related to (primal) degeneracy in the simplex algorithm [7] is that basic variables with value zero could lead to zero-length steps, and hence even cycling. A number of finite (cycling-preventing) simplex variants were proposed in the past (e.g., [4,6,3,29,17]); however, it is well known that such algorithms are not competitive with the simplex algorithm in practice.
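The standing assumptions on (1.1) — no full-row-rank requirement on A, only that Ax = b is consistent — are easy to check numerically. The sketch below uses illustrative data (not from the paper): A is rank-deficient, yet b lies in its range, so the model still applies.

```python
import numpy as np

# Illustrative standard-form data: m = 3 rows but rank(A) = 2, which
# model (1.1) permits as long as Ax = b stays compatible.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 1.0]])   # row 3 = row 1 + row 2
b = np.array([1.0, 2.0, 3.0])          # consistent: b3 = b1 + b2

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
# Ax = b is compatible exactly when rank([A b]) == rank(A);
# here both equal 2 even though m = 3.
```

Dropping the consistency of b (say, b = (1, 2, 4)) would make rank([A b]) = 3 > rank(A), i.e. an infeasible system rather than a degenerate one.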

Project 10371017 supported by National Natural Science Foundation of China. E-mail address: [email protected]

0096-3003/$ - see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2007.07.030


Although degeneracy occurs all the time (it is not unusual for a large proportion of basic variables to be zero), the simplex algorithm and its non-finite variants must be regarded as finite in practice, since cycling rarely happens. In fact, such algorithms have been very successful. The obstacle to further progress has long been thought to be stalling; that is, the solution process could be stuck at a degenerate vertex for too long a time before leaving it. This has motivated various remedies from time to time (e.g., [13,1,14,12]). In particular, Leichner, Dantzig, and Davis suggested a "strictly improving linear programming Phase I algorithm", which proceeds without any stalling at all [15]. In addition, if a problem is highly degenerate, some efficient simplex solvers, such as CPLEX, perturb the problem to remove the degeneracy [2]. Pan also described perturbation variants of the simplex algorithm to handle degeneracy [21,22].

While these authors attempted to develop algorithms that are impervious to degeneracy, efforts were made recently to take advantage of degeneracy. As degeneracy results from b lying in a proper subspace of the range of A, it is possible to extend the classical square basis, which plays a fundamental role in simplex methodology [5], to include a rectangular deficient case. The concept of basis deficiency was formally introduced in [19], where (primal) deficient-basis simplex algorithms using the QR factorization were described. They were then modified using the LU factorization together with a new Phase-1 procedure described in [26]. Amenable to LP problems with n − m small relative to m, a novel primal algorithm using QR factorization was presented in [20] and later recast using LU factorization [23]. Dense implementations of tableau versions of such algorithms performed very favorably in computational experiments. Moreover, the use of deficient bases seemed beneficial not only to primal algorithms but to dual ones as well [24].
In fact, an alternative setting of the dual simplex algorithm described in an earlier paper already carried the germ of such an idea [18], and encouraging results were reported recently with a deficient-basis dual algorithm and its sparse implementation [25].

Here, we propose a primal deficient-basis simplex algorithm using rectangular LU factorization, so that it proceeds without storing a full array. The proposed algorithm would solve LP problems with potentially improved stability compared to the simplex algorithm. A sparse implementation significantly outperformed MINOS 5.3 on a set of 50 Netlib test problems and 15 much larger real-world problems, including 8 Kennington and 5 BPMPD problems. In particular, the numerical results reveal that there is no inevitable correlation between an algorithm's inefficiency and degeneracy, contradicting common belief.

The paper is organized as follows. Preliminaries are offered in the next section. In Section 3, an optimality condition is given, shedding light on the selection of an entering index. In Section 4, a search direction is derived and discussed, shedding light on the selection of a leaving index. In Section 5, the algorithm is described formally after two kinds of iterations and related LU factor updates are addressed. Finally, Section 6 reports computational results demonstrating the encouraging behavior of the proposed algorithm.

2. Preliminaries

In this section, we briefly present basic ideas and concepts, and make some assumptions. In simplex methodology, the basis is a nonsingular m × m submatrix of the constraint matrix A. Such a basis is well defined only under the assumption that A has full row rank. A common remedy is to introduce extra logical variables. However, this increases the size of the problem and complicates the solution process. We argue that this common assumption on the rank of A is unnecessary.
Further, even if A has full row rank, it frequently happens in real-world problems that b falls into a proper subspace of the range of A, so that the basic solution must have basic components at value zero. The basic columns corresponding to such zero components could be made nonbasic. This motivates the following definition of a generalized basis allowing fewer columns than rows.

Definition 2.1 (Basis). A basis is a submatrix consisting of any linearly independent set of columns of A whose range space includes b; a nonbasis is one consisting of the remaining columns.

All bases fall into one of the following two categories.

Definition 2.2 (Deficient basis). If the number of columns in a basis is less than the number of rows, it is a deficient basis. A square basis is a normal basis.
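A basis in the sense of Definition 2.1 can be found greedily on small dense data: keep adding independent columns until b is spanned. The sketch below (function name and data hypothetical) returns a deficient basis whenever a proper subset of independent columns already contains b in its range.

```python
import numpy as np

def find_basis(A, b):
    """Greedily pick linearly independent columns of A until b lies in
    their range (Definition 2.1).  Returns the list of basic indices;
    the basis is deficient when fewer than m indices come back.
    Dense, rank-based illustration only -- not a sparse production scheme."""
    m, n = A.shape
    B = []
    for j in range(n):
        cand = B + [j]
        if np.linalg.matrix_rank(A[:, cand]) == len(cand):   # still independent
            B = cand
            AB = A[:, B]
            # b is in range(A[:, B]) iff appending b does not raise the rank
            if np.linalg.matrix_rank(np.column_stack([AB, b])) == len(B):
                return B
    return B

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])   # rank(A) = 2 < m = 3
b = np.array([2.0, 0.0, 0.0])    # b = 2 * column 0
B = find_basis(A, b)
# a single-column (deficient) basis suffices here
```

With this data the function stops after column 0 alone, illustrating how degeneracy (b in a proper subspace of range(A)) produces bases with far fewer columns than rows.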


The simplex algorithm and its conventional variants only work with normal bases. In contrast, the proposed algorithm proceeds with either deficient or normal bases; thus, the standard but unnatural full-row-rank assumption on A is no longer needed in our model. As real-world LP problems are almost always degenerate, or even highly degenerate, it can be expected that a great majority of bases encountered in practice are deficient. As a result, the proposed algorithm would solve LP problems with potentially improved stability compared to the simplex algorithm.

We will consider a single iteration. Let B be the current basis with s columns (1 ≤ s ≤ m) and N the associated nonbasis. Corresponding components of vectors and columns of matrices are termed basic and nonbasic, respectively. Denote by j_i the index of the ith column of B, and by k_j the index of the jth column of N. Without confusion, we denote the basic and nonbasic ordered index sets by B and N again:

    B = {j_1, ..., j_s},   N = {k_1, ..., k_{n−s}}.                    (2.1)

For simplicity of exposition, components of vectors and columns of matrices will always be rearranged conformably to (B, N). The proposed algorithm will be developed with s < m, if not indicated otherwise.

Definition 2.3. If x_N is zero and x_B solves B x_B = b, then x = (x_B^T, x_N^T)^T is a basic solution (associated with B). The solution is degenerate if some components of x_B are zero.

Remarks. (i) Any basis is associated with a unique basic solution. (ii) Moreover, there exists a 1-to-1 correspondence between bases and basic solutions if nondegeneracy is assumed and bases that have the same columns in a different order are not distinguished. (iii) A degenerate basic solution is associated with more than one basis.

Definition 2.4. If a basic solution is nonnegative, it is a basic feasible solution.

It is now clear that all of the geometry of the underlying polyhedron associated with the conventional LP model remains suitable for our case, although the reduced space is now of dimension n − s rather than n − m. For instance, a basic feasible solution corresponds to a vertex, and vice versa.

Let an LU factorization of B be available:

    P B Q = L U,   L = [ L1  0 ]   U = [ U1 ]
                       [ L2  I ],      [ 0  ],                         (2.2)

where P ∈ R^{m×m} and Q ∈ R^{s×s} are permutations that balance stability and sparsity, L1 ∈ R^{s×s} is unit lower-triangular, and U1 ∈ R^{s×s} is upper-triangular with nonzero diagonals. In the sequel, without loss of generality, we assume P and Q to be the identity permutations. Define the transformed right-hand side b̄ by

    L b̄ = b,   b̄ = [ b̄_1 ]  s
                   [  0  ]  m − s,                                     (2.3)

where the (m − s)-subvector of b̄ is zero because the compatible system B x_B = b is equivalent to the upper-triangular system U x_B = b̄. Assume that the basic solution x̄ associated with B is feasible, satisfying

    U1 x̄_B = b̄_1,   x̄_N = 0,   x̄_B ≥ 0.                              (2.4)

The feasible solution x̄ and the LU factors will be updated iteration by iteration until optimality is achieved. It is possible to keep L well-conditioned during factorizations and subsequent Bartels–Golub-type updates.
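The two triangular solves behind (2.3) and (2.4) can be reproduced densely with scipy.linalg.lu, which for an m × s input returns an m × s lower factor and an s × s upper factor; the trailing (L2, I) block of the extended lower factor then only has to be checked against the right-hand side. Data below are hypothetical.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Hypothetical deficient basis: s = 2 columns, m = 3 rows.
B = np.array([[2.0, 0.0],
              [4.0, 1.0],
              [6.0, 3.0]])
b = B @ np.array([1.0, 2.0])        # compatible right-hand side by construction
m, s = B.shape

P, L, U = lu(B)                      # B = P @ L @ U; L is m x s, U is s x s
pb = P.T @ b                         # work with the permuted right-hand side

# (2.3): forward solve L1 * bbar1 = (P^T b)_1; since the extended lower
# factor ends with (L2  I), the residual below vanishes iff b in range(B)
bbar1 = solve_triangular(L[:s, :], pb[:s], lower=True)
residual = pb[s:] - L[s:, :] @ bbar1

# (2.4): back solve U1 * xB = bbar1 for the basic components
xB = solve_triangular(U, bbar1)
# recovers the basic components (1, 2) used to build b
```

A nonzero residual would signal that the chosen columns do not span b, i.e. that the submatrix is not a basis in the sense of Definition 2.1.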
The package LUSOL is available for handling rectangular bases of this kind [10].

3. Optimality condition

To find a dual solution matching the primal solution x̄, we consider the following system:

    B^T y = c_B,   z_B = 0,                                            (3.1)
    N^T y + z_N = c_N.                                                 (3.2)


Clearly, there is a unique solution when B is normal (s = m). In general, there are infinitely many solutions when B is deficient (s < m).

Lemma 3.1. There exists a dual solution (ȳ, z̄) given by

    U1^T u_1 = c_B,   L^T ȳ = (u_1^T, 0^T)^T,                          (3.3)
    z̄_B = 0,   z̄_N = c_N − N^T ȳ.                                     (3.4)

Proof. It can be easily verified that (ȳ, z̄) defined by (3.3) and (3.4) (along with (2.2)) satisfies (3.1) and (3.2), and hence the dual equality constraints.  □

By Lemma 3.1, a dual solution can be obtained by solving the two triangular systems (3.3) successively for ȳ, and then computing z̄ by (3.4) (the so-called pricing).

Theorem 3.2. The primal objective value at x̄ is equal to the dual objective value at (ȳ, z̄). If z̄_N ≥ 0, these are a pair of primal and dual optimal solutions.

Proof. From (3.3), (2.4) and (2.3), it follows that

    c_B^T x̄_B = u_1^T U1 x̄_B = (u_1^T, 0^T)(b̄_1^T, 0^T)^T = ȳ^T L b̄ = b^T ȳ,   (3.5)

so the primal and dual objective values are equal. Clearly, x̄ and z̄ exhibit complementary slackness. If z̄_N ≥ 0 (so that (ȳ, z̄) is dual feasible), then x̄ and (ȳ, z̄) are primal and dual optimal solutions respectively, as x̄ is assumed to be primal feasible.  □

Thus, we use z̄_N ≥ 0 as a sufficient condition to test for optimality.

Selection of an entering index: If z̄_N ≥ 0 does not hold, we choose an entering index k_q by

    z̄_{k_q} = min{ z̄_{k_j} | z̄_{k_j} < 0, k_j ∈ N } < 0,              (3.6)

which is an analogue of Dantzig's original rule [5]. Then we compute an associated vector v such that

    L v = a_{k_q},   v = [ v_1 ]  s
                         [ v_2 ]  m − s,                               (3.7)

where a_{k_q} is the nonbasic column indexed by k_q. The following development depends on whether v_2 is nonzero or not. If it is nonzero, we carry out a so-called expanding iteration, as described in Section 5.1. If it is zero, the following section is relevant.

4. Search direction

An assumption throughout this section:

    v_2 = 0,                                                           (4.1)

where v_2 is defined by (3.7). It is clear that this case holds whenever s < m and a_{k_q} ∈ range(B), but all discussions are also valid when s = m.

4.1. Derivation

Let us derive a downhill search direction Δx with respect to the primal objective in the null space of A, such that

    A Δx = 0,   c^T Δx < 0.                                            (4.2)

Lemma 4.1. The vector Δx = (Δx_B^T, Δx_N^T)^T defined by

    U1 Δx_B = −v_1,   Δx_N = e_q                                       (4.3)

satisfies (4.2), where e_q is the unit (n − s)-vector with the qth component 1.


Proof. From (2.2), (4.3), (3.7) and (4.1), it follows that

    A Δx = [LU  N] (Δx_B^T, e_q^T)^T = LU Δx_B + a_{k_q}
         = L ( [ U1 Δx_B ] + [ v_1 ] ) = L ( [ −v_1 ] + [ v_1 ] ) = 0.     (4.4)
               [    0    ]   [ v_2 ]         [   0  ]   [  0  ]

Further, by (3.6), (3.4), (3.7) and (3.3), it holds that

    z̄_{k_q} = c_{k_q} − a_{k_q}^T ȳ = c_{k_q} − v^T L^T ȳ
            = c_{k_q} − (v_1^T, v_2^T)(u_1^T, 0^T)^T = c_{k_q} − u_1^T v_1.   (4.5)

On the other hand, combining (4.3) and (3.3) gives

    c^T Δx = (c_B^T, c_N^T)(Δx_B^T, e_q^T)^T = c_{k_q} + c_B^T Δx_B
           = c_{k_q} + u_1^T U1 Δx_B = c_{k_q} − u_1^T v_1.                 (4.6)

The preceding with (4.5) and (3.6) leads to c^T Δx = z̄_{k_q} < 0.  □

4.2. Line search scheme

According to Lemma 4.1, the vector Δx defined by (4.3) is eligible to be a search direction at x̄. Consequently, we are led to the following line search scheme:

    x̂_B = x̄_B + α Δx_B,                                               (4.7)
    x̂_{k_q} = α,                                                      (4.8)
    x̂_{k_j} = 0,   k_j ∈ N, j ≠ q,                                    (4.9)

where α is a step-length to be determined.

Lemma 4.2. The vector x̂ defined by (4.7)–(4.9) satisfies A x̂ = b for any real α.

Proof. The validity follows from (4.7)–(4.9), (2.4) and Lemma 4.1.  □
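On a small dense instance, the pricing step (3.3)–(3.4), the reduced cost of an entering column, and the direction (4.3) can be checked end to end. The factors and costs below are hypothetical, chosen so that the assumption v_2 = 0 of (4.1) holds.

```python
import numpy as np
from scipy.linalg import solve_triangular

s, m = 2, 3
L = np.array([[1.00, 0.0, 0.0],    # extended lower factor (L1 0; L2 I)
              [0.50, 1.0, 0.0],
              [0.25, 0.5, 1.0]])
U1 = np.array([[2.0, 1.0],
               [0.0, 1.0]])
B = L @ np.vstack([U1, np.zeros((1, s))])     # deficient basis, s < m
a_kq = L @ np.array([1.0, -1.0, 0.0])         # entering column with v2 = 0
c_B, c_kq = np.array([4.0, 3.0]), -1.0

# dual solution (3.3) and pricing (3.4)
u1 = solve_triangular(U1.T, c_B, lower=True)               # U1^T u1 = cB
y = solve_triangular(L.T, np.concatenate([u1, np.zeros(m - s)]))
z_kq = c_kq - a_kq @ y                        # reduced cost; negative -> entering

# search direction (4.3)
v = solve_triangular(L, a_kq, lower=True)
dxB = solve_triangular(U1, -v[:s])            # U1 dxB = -v1
A = np.column_stack([B, a_kq])
dx = np.concatenate([dxB, [1.0]])             # dxN = e_q
# A @ dx vanishes and c^T dx equals z_kq, as Lemma 4.1 asserts
```

Here z_kq comes out to −2, A dx is the zero vector, and the full cost vector (c_B, c_kq) applied to dx reproduces z_kq exactly, matching (4.4)–(4.6).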

Theorem 4.3. If Δx_B ≥ 0, then x̂ defined by (4.7)–(4.9) is a primal feasible solution for any α > 0; moreover, program (1.1) is unbounded below.

Proof. By Lemma 4.2, x̂ satisfies A x̂ = b for any α. If α > 0, moreover, it follows from (4.7)–(4.9) that x̂ ≥ 0, because both x̄_B ≥ 0 and Δx_B ≥ 0 hold. Thus, x̂ is a primal feasible solution for any α > 0. Further, by (4.7)–(4.9), (4.3) and (2.4), it holds that

    c^T x̂ = c_B^T x̄_B + α c_B^T Δx_B + α c_{k_q} = c^T x̄ + α c^T Δx,   (4.10)

which tends to −∞ as α increases, since c^T Δx < 0 by Lemma 4.1. Thus the objective is unbounded below.  □

Selection of a leaving index: When Δx_B ≥ 0 does not hold, we maximize the step-length α subject to x̂ ≥ 0 to achieve the lowest possible objective value. This leads to a leaving index j_p and a step-length α well defined by

    α = −x̄_{j_p}/Δx_{j_p} = min{ −x̄_{j_i}/Δx_{j_i} | Δx_{j_i} < 0, j_i ∈ B } ≥ 0.   (4.11)

If x̄ is degenerate, the α determined by (4.11) could be zero: an undesirable case, because the "new" iterate x̂ determined by (4.7)–(4.9) is then not truly new, but coincides with x̄. Otherwise, further progress is guaranteed.

Theorem 4.4. If Δx_B ≥ 0 does not hold, then x̂ defined by (4.7)–(4.9) along with (4.11) is a feasible solution. Moreover, it is associated with an objective value strictly less than the old one if nondegeneracy is assumed.

Proof. As in the proof of Theorem 4.3, it can be shown that x̂ is a primal feasible solution. Under the nondegeneracy assumption, (4.11) determines a positive α, and it follows from (4.10) and c^T Δx < 0 (Lemma 4.1) that c^T x̂ < c^T x̄.  □

If we introduce a so-called objective variable x_{n+1} and append c^T x − x_{n+1} = 0 to the equality constraints, then the corresponding component of the augmented search direction is Δx_{n+1} = c^T Δx, and hence x̂_{n+1} = x̄_{n+1} + α c^T Δx. We see from (4.10) that x_{n+1} gives the associated (primal) objective value. Such a variable might simplify the implementation.

5. Formulation of the algorithm

Iterations carried out by the proposed algorithm fall into one of two categories: expanding iterations and full iterations, each associated with certain basis changes. In this respect, LUSOL might be the only package available for updating LU factors of rectangular bases [10,12,27,28]. In this section, we address these issues and then describe the algorithm formally.

5.1. Expanding iteration

Returning to the end of Section 3, let us assume that an entering index k_q has been selected by (3.6) and

    v_2 ≠ 0,                                                           (5.1)

where v_2 is defined by (3.7). In this case, the main action is related to moving k_q from N to B. Consider the matrix resulting from appending a_{k_q} to the end of B:

    B̃ = [B  a_{k_q}].                                                 (5.2)

Proposition 5.1. B̃ is a basis, also associated with the basic feasible solution x̄.

Proof. By (5.2), (2.2) and (3.7), it holds that

    B̃ = L [U  v] = L [ U1  v_1 ]
                     [ 0   v_2 ],                                      (5.3)

which implies that B̃ has column rank s + 1, because U1 ∈ R^{s×s} is nonsingular upper-triangular and v_2 is nonzero (5.1). Moreover, it is clear that b ∈ range(B) ⊆ range(B̃). Therefore, B̃ is a basis. Then, it is evident that x̄ is also associated with the basis B̃.  □

It is seen from (5.3) that [U  v] is upper triangular except for its last column. It can be triangularized by moving the largest element of v_2 to the diagonal position and then eliminating all entries below the diagonal using a Gaussian transformation. This leads directly to LU factors of B̃. Next, we update the index sets by moving k_q from N to the end of B, and set s := s + 1. Since a column enters the basis, the associated operations are called an expanding iteration. No new iterate is produced by such an iteration.

5.2. Full iteration

Assume now that an entering index k_q has been selected by (3.6) but (4.1) holds. Assume that a leaving index j_p was then chosen by (4.11). Consider the associated matrix B̂ resulting from B with its pth column a_{j_p} replaced by a_{k_q}.

Proposition 5.2. B̂ is a basis associated with the new basic feasible solution x̂ defined by (4.7)–(4.9).

Proof. Without loss of generality, assume that the last column of B leaves (p = s). From (2.2), B can be written as

    B = [B̃  a_{j_p}] = L [Ũ  u_{j_p}],                                (5.4)

where B̃ = [a_{j_1}, ..., a_{j_{s−1}}] and U = [Ũ  u_{j_p}]. By (5.4) and (3.7), it holds that

    B̂ = [B̃  a_{k_q}] = L [Ũ  v].                                     (5.5)

Further, combining (3.7) and (4.1) gives v = (v_1^T, 0^T)^T, and (4.3) with Δx_{j_s} ≠ 0 (see (4.11)) implies that the sth component of v_1 is nonzero. Thus, [Ũ  v] ∈ R^{m×s} is upper triangular with nonzero diagonals, with

    rank([Ũ  v]) = s.                                                  (5.6)

Noting that the last m − s components of u_{j_p} are zero, we assert that there exists a unique s-vector w such that [Ũ  v] w = u_{j_p}. Premultiplying both sides by L leads to

    L [Ũ  v] w = L u_{j_p}.

From the preceding, (5.5) and (5.4), it follows that

    B̂ w = L u_{j_p} = a_{j_p}.                                        (5.7)

It is then evident that (5.5), together with (5.6) and rank(L) = m, implies that the columns of B̂ are linearly independent. Moreover, (5.7) implies a_{j_p} ∈ range(B̂), and (5.5) implies range(B̃) ⊆ range(B̂). Thus, noting (5.4) and that B is a basis, we assert that b ∈ range(B) ⊆ range(B̂). Therefore, B̂ is a basis. It is easy to verify that the x̂ defined by (4.7)–(4.9) is a basic feasible solution associated with B̂.  □

In practice, LU factors of the new basis B̂ can be computed in two successive steps. The first, termed contracting, computes LU factors of the matrix B′ resulting from dropping a_{j_p} from B. The second, termed expanding, computes LU factors of B̂ when a_{k_q} is appended to B′. All associated operations constitute a full iteration. Expanding was already described in Section 5.1. Contracting is addressed next.

5.3. Contracting

Let B′ be the matrix resulting from dropping a_{j_p} from B. It is clear that B′ may not be a basis, even though it has full column rank s − 1. Nevertheless, it is simple to compute LU factors of B′ from those of B. Consider B′ = LH, where the upper Hessenberg matrix H is U with its pth column removed. As in [27,10], the pth and sth rows of H are first interchanged. The pth through (s − 1)th entries in the sth row are then eliminated in turn by a sequence of (stabilized) Gauss transformations, producing the upper-triangular factor of B′. The lower-triangular factor is updated by the Gauss transformations. To complete the contracting, we move j_p from the basic index set to the end of the nonbasic index set and set s := s − 1.

5.4. Algorithm

The steps of the overall procedure are summarized in the following algorithm.

Algorithm 1 (PDBSA). Let (2.1) denote the initial basic and nonbasic index sets. Given the LU factorization (2.2) of the associated basis B and the basic feasible solution x̄ ≥ 0 (x̄_N = 0), this algorithm solves (1.1).

 1. Solve U1^T u = c_B and then L^T y = (u^T, 0^T)^T for y (3.3).
 2. Compute z_N = c_N − N^T y (3.4).
 3. Stop if z_N ≥ 0.
 4. Determine an entering index k_q by (3.6): z_{k_q} = min{ z_{k_j} | z_{k_j} < 0, k_j ∈ N } < 0.
 5. Solve the lower-triangular system L v = a_{k_q} for v = (v_1^T, v_2^T)^T (3.7).
 6. Go to step 13 if v_2 ≠ 0.
 7. Solve the triangular system U1 Δx_B = −v_1 for Δx_B (4.3).
 8. Stop if Δx_B ≥ 0.
 9. Determine a leaving index j_p and a step-length α by (4.11): α = −x̄_{j_p}/Δx_{j_p} = min{ −x̄_{j_i}/Δx_{j_i} | Δx_{j_i} < 0, j_i ∈ B } ≥ 0.
10. Update x̄ by (4.7)–(4.9).
11. Update the LU factors by the contracting associated with j_p.
12. Move index j_p from B to the end of N and set s := s − 1.
13. Update the LU factors by the expanding associated with k_q.
14. Move index k_q from N to the end of B and set s := s + 1.
15. Go to step 1.
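As a sanity check of the overall flow (pricing, the v_2 test, the ratio test, and the two iteration types), a deliberately naive dense prototype can be written in a few dozen lines. It re-solves each subproblem with generic dense linear algebra instead of maintaining the LU factors and updates of Sections 5.1–5.3, has no anti-cycling safeguard, and all names and data are hypothetical.

```python
import numpy as np

def pdbsa_dense(A, b, c, basic, x, tol=1e-9, max_iter=100):
    """Naive dense prototype of Algorithm 1 (PDBSA): `basic` indexes a
    (possibly deficient) basis and x the associated basic feasible
    solution.  Generic solves replace the LU machinery -- toy only."""
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic]
    for _ in range(max_iter):
        AB = A[:, basic]
        # steps 1-2: any y with B^T y = c_B supports pricing
        y, *_ = np.linalg.lstsq(AB.T, c[basic], rcond=None)
        zN = c[nonbasic] - A[:, nonbasic].T @ y
        q = int(np.argmin(zN))
        if zN[q] >= -tol:                             # step 3: optimal
            return x, "optimal"
        kq = nonbasic[q]                              # step 4: Dantzig-style rule
        sol, *_ = np.linalg.lstsq(AB, A[:, kq], rcond=None)
        if not np.allclose(AB @ sol, A[:, kq], atol=1e-8):
            basic.append(kq); nonbasic.remove(kq)     # expanding iteration
            continue
        dxB = -sol                                    # step 7, cf. (4.3)
        if (dxB >= -tol).all():                       # step 8: unbounded below
            return x, "unbounded"
        ratios = [(-x[basic[i]] / dxB[i], i)
                  for i in range(len(basic)) if dxB[i] < -tol]
        alpha, p = min(ratios)                        # step 9: ratio test (4.11)
        x[basic] += alpha * dxB                       # step 10
        x[kq] = alpha
        jp = basic[p]
        x[jp] = 0.0
        basic[p] = kq                                 # steps 11-14 collapsed
        nonbasic.remove(kq); nonbasic.append(jp)
    return x, "iteration limit"

# rank-deficient toy data: row 3 = row 1 + row 2
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 1.0]])
b = np.array([4.0, 6.0, 10.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x = np.array([0.0, 0.0, 4.0, 6.0])   # feasible for the deficient basis {2, 3}
x_opt, status = pdbsa_dense(A, b, c, [2, 3], x)
# drives the objective to -16 at x = (4, 6, 0, 0)
```

On this instance the basis stays deficient throughout (two columns against three rows), which is precisely the situation the full-row-rank simplex machinery cannot represent without adding logical columns.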

Remarks. (i) Steps 1–14 perform a full iteration, while steps 1–6 and 13–14 constitute an expanding iteration. All iterations fall into one of these two categories. (ii) For large-scale and sparse computations, the basis should be refactorized periodically from scratch, as in conventional implementations. (iii) Algorithm 1 proceeds just like the standard simplex algorithm if it starts from a normal basis (with m columns).

We give the following result associated with the proposed algorithm.

Theorem 5.3. Under nondegeneracy for full iterations, Algorithm 1 terminates either at step 3, producing a pair of primal and dual optimal solutions, or at step 8, detecting lower unboundedness of (1.1).

Proof. Since there are only finitely many bases, Algorithm 1 fails to terminate if and only if cycling occurs. Note that the number of columns of the basis either increases by one in expanding iterations, or remains unchanged in full iterations. Thus, cycling cannot involve any expanding iteration. If nondegeneracy holds for full iterations, there is no chance of cycling, because the objective value decreases strictly. By Theorem 3.2, termination at step 3 produces a pair of primal and dual optimal solutions, and by Theorem 4.3, termination at step 8 indicates lower unboundedness.  □

In Theorem 5.3, nondegeneracy is assumed in order to guarantee termination of the proposed algorithm. Of course, such an assumption is entirely unrealistic; in fact, degeneracy occurs all the time. Even so, it turns out that cycling rarely happens, if at all (see the next section). Algorithm 1 should be regarded as finite in practice, just like the standard simplex algorithm.

Note that an expanding iteration involves two triangular solves in step 1 and another in step 5. As an additional triangular system is solved in step 7, a full iteration involves four triangular solves, as in simplex algorithms.
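The expanding update behind steps 13–14 (Section 5.1) can be sketched densely. The simplified version below assumes the largest entry of v_2 already sits in the diagonal position, so the stabilizing row interchange of Section 5.1 is skipped, and it keeps L lower-triangular by absorbing the inverse Gauss transformation; function name and data are hypothetical.

```python
import numpy as np

def expand(L, U, a):
    """Append column a to a basis B = L @ U (U is m x s, zero below row s)
    and retriangularize [U v] with one Gauss transformation (Section 5.1
    sketch: identity permutations, no stability safeguards)."""
    m, s = U.shape
    v = np.linalg.solve(L, a)              # L v = a_kq, cf. (3.7)
    Un = np.column_stack([U, v])
    Ln = L.copy()
    for i in range(s + 1, m):              # eliminate entries below the diagonal
        mult = Un[i, s] / Un[s, s]
        Un[i, :] -= mult * Un[s, :]
        Ln[:, s] += mult * Ln[:, i]        # L <- L G^{-1} stays lower-triangular
    return Ln, Un

# a one-column deficient basis grows to two columns
L0 = np.eye(3)
U0 = np.array([[2.0], [0.0], [0.0]])
a = np.array([1.0, 2.0, 3.0])              # v2 = (2, 3) nonzero: expanding case
L1, U1 = expand(L0, U0, a)
# L1 @ U1 reproduces [B a], and U1 is upper triangular again
```

A production version would also perform the row interchange (and, per the paper, rely on LUSOL's stabilized updates) rather than divide by whatever happens to be on the diagonal.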
Nevertheless, the computational complexity of the new algorithm is reduced, because the s × s systems U1^T u = c_B and U1 Δx_B = −v_1 are smaller than the m × m systems solved in simplex algorithms (when s < m). Such advantages disappear once s reaches m, whereupon Algorithm 1 becomes the standard simplex algorithm. A small initial basis therefore seems attractive. Since real-world LP problems are often degenerate, or even highly degenerate, and hence lead to small bases, Algorithm 1 appears particularly suitable for practical applications.

6. Computational experiments

In this section, we report numerical results obtained in our computational experiments and make final remarks.

6.1. Test codes

Coded in Fortran 77, our implementation PDBSA 1.0 consists of two phases. Phase 2 is based on Algorithm 1. Phase 1 solves an auxiliary problem with piecewise-linear sums of infeasibilities as its objective, as in standard codes like MINOS [16]. We used MINOS 5.3 to make a comparison. In fact, PDBSA 1.0 was developed using MINOS 5.3 as a platform. Consequently, the two codes shared such features as preprocessing, scaling, LUSOL [10], etc. Only the modules Mi25bfac.f and Mi50lp.f were replaced by two programs written by the author; very limited changes were made to the other parts. Subroutine M2crsh in MINOS 5.3 searches for a (permuted) triangular initial basis. Since there is no need for a square basis, subroutine M2crsh was modified by deleting its last lines, which fill gaps with logical columns. PDBSA 1.0 uses Harris's two-pass ratio test [13] for leaving-index selection,


as in MINOS 5.3. We made every endeavor to ensure a fair competition between the primal deficient-basis simplex algorithm and its conventional counterpart. Like MINOS 5.3, the new code carries out a fresh LU refactorization whenever the factors have been updated more than 98 times, or more than 19 times with the sum of nonzeros in L and U exceeding twice that of the fresh factors from the previous refactorization. MINOS 5.3 worked with the default threshold pivoting tolerances s_F = 100 for factorization and s_U = 10 for expanding. In the comparison, PDBSA 1.0 worked with s_F = s_U = 10; the smaller value of s_F favors stability over sparsity whenever the basis is refactorized. Even smaller values such as s_F = 5 or 2.5 may be desirable [11, Sections 5.2 and 5.3].

Compiled using Visual Fortran 5.0, both MINOS 5.3 and PDBSA 1.0 were run under a Windows 98 system on a Pentium III 550E personal computer with 256 MB of memory and about 16 digits of precision. In PDBSA 1.0, both the primal and dual feasibility tolerances were taken to be 10^{-6}. All reported CPU times were measured in seconds with the utility routine CPU_TIME, excluding the time spent on preprocessing and scaling. For both MINOS 5.3 and PDBSA 1.0, the usual default options were used, except for: Rows 70,000; Columns 1,500,000; Elements 4,000,000; Iterations 1,000,000; Scale yes; Partial price 1; Log frequency 0; Print level 0.

6.2. Results for set 1

Our first set of test problems included 50 standard LP problems from Netlib that do not have Bounds and Ranges sections in their MPS files, since our current code cannot handle such problems implicitly [9]. Actually, this test set includes all such Netlib problems except for the largest four, for which m + n > 10,000, where m and n are the numbers of rows and columns of A. MAROS-R7 and STOCFOR3 were left for test set 2, and QAP12 and QAP15 were not included in our tests because they are too time-consuming for both MINOS 5.3 and PDBSA 1.0.
Numerical results obtained with the first set are displayed in Tables 6.1 and 6.2, in order of increasing sum m + n before slack variables are added. In these tables, the total iterations and time required for solving each problem are listed in the columns labeled Itns and Time, and the percentages of degenerate iterations are given in the columns labeled %Degen. Note that the Itns column in Table 6.2 lists full iteration counts, since for each run of the new code all expanding iterations should be considered together with the basis factorization and refactorizations. Final objective values reached are not listed, as they are the same as those given in the Netlib index file.

In order to see how the codes perform as problem size increases, the 50 problems are categorized into three groups: group Small includes the first 20 problems (from AFIRO to SCTAP1), Medium includes the next 15 problems (from SCFXM1 to SHIP04L), and Large includes the last 15 problems (from QAP8 to TRUSS). Table 6.3 lists ratios of MINOS 5.3 to PDBSA 1.0 problem by problem for this set, and Table 6.4 serves as an overall comparison between the two codes. From the bottom line labeled Total, we see that the total iteration ratio and time ratio are 1.07 and 1.41, respectively. Thus, PDBSA 1.0 solved the problems of set 1 with fewer iterations and less run time than MINOS 5.3.

6.3. Results for set 2

In order to see the codes' behavior on problems larger than those in set 1, we included 15 much larger real-world problems in the second test set, where m + n > 10,000 holds for most problems. The results obtained with MINOS 5.3 and PDBSA 1.0 are listed in Tables 6.5 and 6.6 respectively, where the first 8 problems (from CRE-C to OSA-60) are from Kennington, the next 5 problems (from RAT7A to NSCT2) are from BPMPD, and the last 2 problems (MAROS-R7 and STOCFOR3) are from Netlib. The Kennington and BPMPD problems were all of the ones that had no Bounds and Ranges sections in their

Netlib: http://www.netlib.org/lp/data/. Kennington: http://www-fp.mcs.anl.gov/otc/Guide/TestProblems/LPtest/. BPMPD: http://www.sztaki.hu/~meszaros/bpmpd/.


Table 6.1
MINOS 5.3 statistics for set 1 of 50 Netlib problems

Problem     m       n       Itns     Time     %Degen
AFIRO       27      32      13       0.00     69.2
SC50B       50      48      15       0.00     66.7
SC50A       50      48      22       0.05     40.9
ADLITTLE    56      97      121      0.00     9.1
BLEND       74      83      74       0.06     45.9
SHARE2B     96      79      102      0.06     28.4
SC105       105     103     38       0.00     28.9
STOCFOR1    117     111     66       0.05     43.9
SCAGR7      129     140     69       0.00     18.8
ISRAEL      174     142     217      0.11     3.7
SHARE1B     117     225     273      0.11     0.4
SC205       205     203     81       0.06     22.2
BEACONFD    173     262     84       0.06     16.7
LOTFI       153     308     239      0.11     10.0
BRANDY      220     249     273      0.16     11.7
E226        223     282     400      0.22     20.8
AGG         488     163     121      0.17     28.1
SCORPION    357     358     155      0.16     52.9
BANDM       305     472     444      0.33     10.4
SCTAP1      300     480     160      0.11     30.0
SCFXM1      330     457     332      0.22     16.6
AGG2        516     302     189      0.17     12.2
AGG3        516     302     191      0.16     12.0
SCSD1       77      760     310      0.16     63.5
SCAGR25     471     500     211      0.16     24.2
DEGEN2      442     534     1108     1.15     55.6
FFFFF800    524     854     297      0.33     25.6
SCSD6       147     1350    1028     0.72     50.8
SCFXM2      660     914     612      0.82     19.6
SCRS8       490     1169    717      0.72     21.9
BNL1        643     1175    980      1.38     12.0
SHIP04S     402     1458    141      0.17     17.0
SCFXM3      990     1371    918      1.97     19.6
25FV47      821     1571    6769     19.88    9.5
SHIP04L     402     2118    226      0.33     15.0
QAP8        912     1632    11,515   55.31    31.3
WOOD1P      244     2594    540      4.39     50.2
SCTAP2      1090    1880    681      1.48     47.9
SCSD8       397     2750    1606     3.30     60.0
SHIP08S     778     2387    237      0.44     22.8
DEGEN3      1503    1818    10,060   57.51    56.5
SHIP12S     1151    2763    384      1.10     20.8
SCTAP3      1480    2480    827      2.64     59.7
STOCFOR2    2157    2031    2050     8.79     45.7
SHIP08L     778     4283    444      1.54     14.9
BNL2        2324    3489    5695     33.50    19.4
SHIP12L     1151    5427    826      3.90     23.6
D2Q06C      2171    5167    50,754   458.46   7.9
WOODW       1098    8405    2940     20.38    40.0
TRUSS       1000    8806    27,067   218.55   55.6
Total       29,084  74,632  132,622  901.45   1490.1

MPS files and were more than about 500 KB in compressed form. We see that MINOS 5.3 failed to solve the last three problems (NSCT2, MAROS-R7 and STOCFOR3), while the new code solved all of the problems successfully with acceptable effort. Thus, the new code appears more reliable than MINOS 5.3.


Table 6.2
PDBSA 1.0 statistics for set 1 of 50 Netlib problems

Problem     m       n       Itns     Time     %Degen
AFIRO       27      32      14       0.00     71.4
SC50B       50      48      28       0.00     71.4
SC50A       50      48      26       0.00     53.8
ADLITTLE    56      97      98       0.05     3.1
BLEND       74      83      61       0.00     34.4
SHARE2B     96      79      135      0.06     14.8
SC105       105     103     62       0.06     45.2
STOCFOR1    117     111     49       0.05     28.6
SCAGR7      129     140     87       0.06     20.7
ISRAEL      174     142     279      0.11     1.4
SHARE1B     117     225     141      0.05     0.0
SC205       205     203     152      0.06     37.5
BEACONFD    173     262     26       0.00     30.8
LOTFI       153     308     158      0.06     13.9
BRANDY      220     249     274      0.11     7.3
E226        223     282     412      0.16     20.9
AGG         488     163     106      0.05     13.2
SCORPION    357     358     118      0.06     20.3
BANDM       305     472     315      0.22     6.7
SCTAP1      300     480     187      0.05     33.7
SCFXM1      330     457     237      0.11     15.6
AGG2        516     302     166      0.11     11.4
AGG3        516     302     185      0.16     16.8
SCSD1       77      760     319      0.11     73.7
SCAGR25     471     500     338      0.17     29.0
DEGEN2      442     534     829      0.66     64.3
FFFFF800    524     854     289      0.22     23.9
SCSD6       147     1350    1090     0.66     72.0
SCFXM2      660     914     559      0.55     17.2
SCRS8       490     1169    662      0.50     26.1
BNL1        643     1175    1070     1.09     12.7
SHIP04S     402     1458    163      0.16     26.4
SCFXM3      990     1371    876      1.32     18.5
25FV47      821     1571    7592     15.76    10.1
SHIP04L     402     2118    232      0.22     25.9
QAP8        912     1632    8043     38.12    35.6
WOOD1P      244     2594    888      7.69     51.5
SCTAP2      1090    1880    703      0.88     49.6
SCSD8       397     2750    3027     4.12     62.0
SHIP08S     778     2387    319      0.33     37.6
DEGEN3      1503    1818    7270     32.57    72.1
SHIP12S     1151    2763    485      0.82     33.8
SCTAP3      1480    2480    977      1.86     61.4
STOCFOR2    2157    2031    967      2.70     32.3
SHIP08L     778     4283    657      1.32     33.5
BNL2        2324    3489    5462     21.36    18.5
SHIP12L     1151    5427    1019     3.07     30.8
D2Q06C      2171    5167    48,400   330.49   9.6
WOODW       1098    8405    2385     15.11    41.4
TRUSS       1000    8806    26,572   155.82   67.6
Total       29,084  74,632  124,509  639.27   1610.0

Table 6.7 gives ratios of MINOS 5.3 to PDBSA 1.0 for the first 12 problems only, since MINOS 5.3 failed on the last three. From the bottom line of Table 6.8, we see that both the total iteration ratio, 1.52, and the total time ratio, 2.06, are even higher than those for set 1. Thus, the new code significantly outperformed MINOS 5.3 on set 2, in terms of both iteration counts and run time.


Table 6.3 Ratio of MINOS 5.3 to PDBSA 1.0 for set 1 of 50 Netlib problems

Problem     m       n       Itns  Time  %Degen
AFIRO       27      32      0.93  –     0.97
SC50B       50      48      0.54  –     0.93
SC50A       50      48      0.85  –     0.76
ADLITTLE    56      97      1.23  0.00  2.94
BLEND       74      83      1.21  –     1.33
SHARE2B     96      79      0.76  1.00  1.92
SC105       105     103     0.61  0.00  0.64
STOCFOR1    117     111     1.35  1.00  1.53
SCAGR7      129     140     0.79  0.00  0.91
ISRAEL      174     142     0.78  1.00  2.64
SHARE1B     117     225     1.94  2.20  –
SC205       205     203     0.53  1.00  0.59
BEACONFD    173     262     3.23  –     0.54
LOTFI       153     308     1.51  1.83  0.72
BRANDY      220     249     1.00  1.45  1.60
E226        223     282     0.97  1.38  1.00
AGG         488     163     1.14  3.40  2.13
SCORPION    357     358     1.31  2.67  2.61
BANDM       305     472     1.41  1.50  1.55
SCTAP1      300     480     0.86  2.20  0.89
SCFXM1      330     457     1.40  2.00  1.06
AGG2        516     302     1.14  1.55  1.07
AGG3        516     302     1.03  1.00  0.71
SCSD1       77      760     0.97  1.45  0.86
SCAGR25     471     500     0.62  0.94  0.83
DEGEN2      442     534     1.34  1.74  0.86
FFFFF800    524     854     1.03  1.50  1.07
SCSD6       147     1350    0.94  1.09  0.71
SCFXM2      660     914     1.09  1.49  1.14
SCRS8       490     1169    1.08  1.44  0.84
BNL1        643     1175    0.92  1.27  0.94
SHIP04S     402     1458    0.87  1.06  0.64
SCFXM3      990     1371    1.05  1.49  1.06
25FV47      821     1571    0.89  1.26  0.94
SHIP04L     402     2118    0.97  1.50  0.58
QAP8        912     1632    1.43  1.45  0.88
WOOD1P      244     2594    0.61  0.57  0.97
SCTAP2      1090    1880    0.97  1.68  0.97
SCSD8       397     2750    0.53  0.80  0.97
SHIP08S     778     2387    0.74  1.33  0.61
DEGEN3      1503    1818    1.38  1.77  0.78
SHIP12S     1151    2763    0.79  1.34  0.62
SCTAP3      1480    2480    0.85  1.42  0.97
STOCFOR2    2157    2031    2.12  3.26  1.41
SHIP08L     778     4283    0.68  1.17  0.44
BNL2        2324    3489    1.04  1.57  1.05
SHIP12L     1151    5427    0.81  1.27  0.77
D2Q06C      2171    5167    1.05  1.39  0.82
WOODW       1098    8405    1.23  1.35  0.97
TRUSS       1000    8806    1.02  1.40  0.82
Total       29,084  74,632  1.07  1.41  0.93

(A dash indicates a ratio that is undefined because of a zero denominator in the reported timings or percentages.)

Note that the time ratios are larger than the iteration ratios for both set 1 and set 2. This is because the computational effort per iteration of the new algorithm was, in general, lower than that of the standard simplex algorithm, owing to its use of the deficient basis.
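The bottom-line figures in Tables 6.4 and 6.8 can be recovered directly from the column totals of Tables 6.1/6.2 and 6.5/6.6. The following minimal Python sketch (numbers taken verbatim from the tables) recomputes the total iteration, time, and %Degen ratios:

```python
# Recompute the bottom-line ratios of MINOS 5.3 to PDBSA 1.0 from the
# column totals printed in Tables 6.1/6.2 (set 1) and 6.5/6.6 (set 2).
# Each tuple is (total iterations, total time in seconds, total %Degen).
totals = {
    "set 1": {"MINOS": (132_622, 901.45, 1490.1),
              "PDBSA": (124_509, 639.27, 1610.0)},
    "set 2": {"MINOS": (696_737, 47_352.26, 261.5),
              "PDBSA": (457_723, 22_969.62, 417.0)},
}

for name, t in totals.items():
    # Elementwise ratio MINOS / PDBSA, rounded as in the tables.
    ratios = [round(a / b, 2) for a, b in zip(t["MINOS"], t["PDBSA"])]
    print(name, ratios)
# set 1 [1.07, 1.41, 0.93]
# set 2 [1.52, 2.06, 0.63]
```

The recomputed values agree with the totals reported in Tables 6.4 and 6.8.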


Table 6.4 Total ratios of MINOS 5.3 to PDBSA 1.0 for set 1

Problem      Itns  Time  %Degen
Small (20)   1.09  1.50  1.06
Medium (15)  0.96  1.30  0.85
Large (15)   1.08  1.41  0.87
Total        1.07  1.41  0.93

Table 6.5 MINOS 5.3 statistics for set 2 of 15 test problems

Problem   m        n        Itns     Time       %Degen
CRE-C     3068     3678     4774     31.91      43.0
CRE-A     3516     4067     4179     31.80      38.6
OSA-07    1118     23,949   616      13.13      1.0
OSA-14    2337     52,460   1354     63.49      0.3
CRE-D     8926     69,980   229,701  13,132.41  42.4
CRE-B     9648     72,447   360,737  22,005.62  45.1
OSA-30    4350     100,024  2717     247.00     0.1
OSA-60    10,280   232,966  5252     1124.54    0.1
RAT7A     3136     9408     4073     532.95     0.2
NSCT1     22,901   14,981   9715     779.01     7.9
DBIR1     18,804   27,355   14,900   1726.64    15.7
DBIR2     18,906   27,355   58,719   7663.76    67.1
Total     106,990  638,670  696,737  47,352.26  261.5

MINOS 5.3 failed on the remaining three problems:
NSCT2      A triangular diagonal is zero
MAROS-R7   Singular basis after two factorization attempts
STOCFOR3   Constraints cannot be satisfied accurately

Table 6.6 PDBSA 1.0 statistics for set 2 of 15 test problems

Problem    m        n        Itns     Time       %Degen
CRE-C      3068     3678     4952     21.97      51.5
CRE-A      3516     4067     4351     21.91      52.0
OSA-07     1118     23,949   616      10.60      1.3
OSA-14     2337     52,460   1354     51.30      0.4
CRE-D      8926     69,980   186,394  7711.98    83.6
CRE-B      9648     72,447   198,911  8823.18    83.1
OSA-30     4350     100,024  2685     196.25     0.1
OSA-60     10,280   232,966  5252     899.95     0.1
RAT7A      3136     9408     4073     468.40     0.2
NSCT1      22,901   14,981   13,276   980.37     33.2
DBIR1      18,804   27,355   14,296   1461.24    26.8
DBIR2      18,906   27,355   21,563   2322.47    84.7
Total      106,990  638,670  457,723  22,969.62  417.0
NSCT2      23,003   14,981   13,871   1036.01    73.1
MAROS-R7   3136     9408     2515     54.43      0.0
STOCFOR3   16,675   15,695   8724     262.16     30.7

(The Total row covers the first 12 problems, which both codes solved.)

We do not list numerical results for Phase 1 alone, but report only that the Phase-1 iteration and time ratios of MINOS 5.3 to PDBSA 1.0 are 1.03 and 1.15 for set 1, and 2.66 and 3.09 for set 2. Therefore, our Phase 1, using the generalized basis, outperformed its conventional counterpart.


Table 6.7 Ratio of MINOS 5.3 to PDBSA 1.0 for 12 Netlib problems in set 2

Problem   m        n        Itns  Time  %Degen
CRE-C     3068     3678     0.96  1.45  0.83
CRE-A     3516     4067     0.96  1.45  0.74
OSA-07    1118     23,949   1.00  1.24  0.77
OSA-14    2337     52,460   1.00  1.24  0.75
CRE-D     8926     69,980   1.23  1.70  0.51
CRE-B     9648     72,447   1.81  2.49  0.54
OSA-30    4350     100,024  1.01  1.26  1.00
OSA-60    10,280   232,966  1.00  1.25  1.00
RAT7A     3136     9408     1.00  1.14  1.00
NSCT1     22,901   14,981   0.73  0.79  0.24
DBIR1     18,804   27,355   1.04  1.18  0.59
DBIR2     18,906   27,355   2.72  3.30  0.79
Total     106,990  638,670  1.52  2.06  0.63

Table 6.8 Total ratios of MINOS 5.3 to PDBSA 1.0 for set 2

Problem         Itns  Time  %Degen
Kennington (8)  1.51  2.07  0.63
BPMPD (4)       1.64  2.05  0.63
Total           1.52  2.06  0.63

6.4. Effects of degeneracy

We report that PDBSA 1.0 terminated at a deficient basis for most test problems. To illustrate the size of the bases used by the new code relative to the classical size m, Table 6.9 gives the total average s/m (%) and total final s/m (%) (total s to total m). The percentages are quite high, around 95%, because the initial bases tended to have a high s/m (see the second paragraph of Section 6.1). It would be preferable to have a low initial s/m ratio, along with some effective tactic for limiting subsequent fill-in.

On the other hand, from the bottom line of the column labeled %Degen in Table 6.4, we see that the ratio of the percentages of total degenerate iterations is 0.93; thus, the proportion of such iterations is larger with PDBSA 1.0 than with MINOS 5.3 for set 1. Moreover, for set 2 the total %Degen ratio of 0.63 in Table 6.8 reveals an even larger proportion of degenerate iterations for PDBSA 1.0 than for MINOS 5.3. Even so, the new code significantly outperformed MINOS 5.3 on both sets. This is quite encouraging, as it points to the merit of the proposed algorithm. We conclude from these results that there is no inevitable correlation between an algorithm's inefficiency and degeneracy, contradicting what has long been believed. This viewpoint coincides with recent results obtained with a sparse implementation of the so-called revised dual projective pivot algorithm [25]. It is also supported by experiments with the steepest-edge pivot rules, which outperformed other rules by large margins even though the proportions of their associated degenerate iterations were quite similar [8].

In summary, the use of deficient bases is beneficial, and the proposed deficient-basis simplex algorithm appears favorable for solving real-world LP problems, compared with its conventional counterpart.

Table 6.9 Ratios s/m (%)

Problem  Average  Final
Set 1    95.55    97.19
Set 2    94.94    94.95
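As a rough illustration of why even s/m ratios near 95% can pay off per iteration, consider a crude dense-cost model in which a triangular solve with an order-k factor costs on the order of k^2 operations. This is only a back-of-the-envelope sketch under that assumption; the actual sparse LU costs depend on fill-in and were not measured this way here:

```python
# Crude per-iteration work model: dense forward/back substitution with an
# order-k triangular factor costs roughly k**2 flops, so a deficient basis
# of order s does about (s/m)**2 of the solve work of a full order-m basis.
def solve_work_ratio(s_over_m: float) -> float:
    """Ratio of deficient-basis solve work to full-basis solve work."""
    return s_over_m ** 2

# Average s/m ratios from Table 6.9: 95.55% (set 1) and 94.94% (set 2).
for s_over_m in (0.9555, 0.9494):
    print(f"s/m = {s_over_m:.4f}: work ratio ~ {solve_work_ratio(s_over_m):.3f}")
```

Under this model the deficient basis saves roughly 9-10% of the solve work per iteration, which is in the direction of the observed time savings, though the actual gap between time and iteration ratios also reflects sparsity and other implementation differences.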


Nevertheless, there is much room for improvement. In particular, incorporating the steepest-edge criterion into the proposed algorithm should be as important as it is for conventional algorithms, although how to do so efficiently remains an open question.

Acknowledgements

The author extends his respect to Professor George B. Dantzig for his great contributions to this field and for the enthusiastic encouragement given to the author during ISMP 97 and afterwards.

References

[1] M. Benichou, J.M. Gauthier, G. Hentges, G. Ribière, The efficient solution of linear programming problems – some algorithmic techniques and computational results, Mathematical Programming 13 (1977) 280–322.
[2] R.E. Bixby, Solving real-world linear programs: a decade and more of progress, Operations Research 50 (1) (2002) 3–15.
[3] R.G. Bland, New finite pivoting rules for the simplex method, Mathematics of Operations Research 2 (1977) 103–107.
[4] A. Charnes, Optimality and degeneracy in linear programming, Econometrica 20 (1952) 160–170.
[5] G.B. Dantzig, Programming in a linear structure, U.S. Air Force Comptroller, USAF, Washington, DC, 1948.
[6] G.B. Dantzig, A. Orden, P. Wolfe, The generalized simplex method for minimizing a linear form under linear inequality restraints, Pacific Journal of Mathematics 5 (1955) 183–195.
[7] G.B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963.
[8] J.J.H. Forrest, D. Goldfarb, Steepest-edge simplex algorithms for linear programming, Mathematical Programming 57 (1992) 341–374.
[9] D.M. Gay, Electronic mail distribution of linear programming test problems, Mathematical Programming Society COAL Newsletter 13 (1985) 10–12.
[10] P.E. Gill, W. Murray, M.A. Saunders, M.H. Wright, Maintaining LU factors of a general sparse matrix, Linear Algebra and Its Applications 88/89 (1987) 239–270.
[11] P.E. Gill, W. Murray, M.A. Saunders, SNOPT: an SQP algorithm for large-scale constrained optimization, SIAM Review 47 (1) (2005) 99–131.
[12] P.E. Gill, W. Murray, M.A. Saunders, M.H. Wright, A practical anti-cycling procedure for linearly constrained optimization, Mathematical Programming 45 (1989) 437–474.
[13] P.M.J. Harris, Pivot selection methods of the Devex LP code, Mathematical Programming Study 4 (1975) 30–57.
[14] B. Hattersley, J. Wilson, A dual approach to primal degeneracy, Mathematical Programming 42 (1988) 135–145.
[15] S.A. Leichner, G.B. Dantzig, J.W. Davis, A strictly improving linear programming Phase I algorithm, Annals of Operations Research 47 (1993) 409–430.
[16] B.A. Murtagh, M.A. Saunders, MINOS 5.5 User's Guide, Technical Report SOL 83-20R, Department of Operations Research, Stanford University, Stanford, 1998.
[17] P.-Q. Pan, Practical finite pivoting rules for the simplex method, OR Spektrum 12 (1990) 219–225.
[18] P.-Q. Pan, A dual projective simplex method for linear programming, Computers and Mathematics with Applications 35 (6) (1998) 119–135.
[19] P.-Q. Pan, A basis-deficiency-allowing variation of the simplex method, Computers and Mathematics with Applications 36 (3) (1998) 33–53.
[20] P.-Q. Pan, A projective simplex method for linear programming, Linear Algebra and Its Applications 292 (1999) 99–125.
[21] P.-Q. Pan, A new perturbation simplex algorithm for linear programming, Journal of Computational Mathematics 17 (3) (1999) 233–242.
[22] P.-Q. Pan, Primal perturbation simplex algorithms for linear programming, Journal of Computational Mathematics 18 (6) (2000) 587–596.
[23] P.-Q. Pan, A projective simplex algorithm using LU decomposition, Computers and Mathematics with Applications 39 (2000) 187–208.
[24] P.-Q. Pan, A dual projective pivot algorithm for linear programming, Computational Optimization and Applications 29 (2004) 333–344.
[25] P.-Q. Pan, A revised dual projective pivot algorithm for linear programming, SIAM Journal on Optimization 16 (1) (2005) 49–68.
[26] P.-Q. Pan, Y. Pan, A phase-1 approach to the generalized simplex algorithm, Computers and Mathematics with Applications 41 (2001) 1455–1464.
[27] J.K. Reid, A sparsity-exploiting variant of the Bartels–Golub decomposition for linear programming bases, Mathematical Programming 24 (1982) 55–69.
[28] U.H. Suhl, L.M. Suhl, Computing sparse LU factorizations for large-scale linear programming bases, ORSA Journal on Computing 2 (1990) 325–335.
[29] K. Fukuda, T. Terlaky, Criss-cross methods: a fresh view on pivot algorithms, Mathematical Programming 79 (1997) 369–395.