Annals of Discrete Mathematics 4 (1979) 39-49
© North-Holland Publishing Company
MATROID INTERSECTION

Jack EDMONDS
University of Waterloo, Ont., Canada
A matroid M = (E, F) can be defined as a finite set E of elements and a non-empty family F of subsets of E, called independent sets, such that (1) every subset of an independent set is independent; and (2) for every A ⊆ E, every maximal independent subset of A, i.e. every M-basis of A, has the same cardinality, called the M-rank r(A) of A. (This definition is known to be equivalent to many others.) An M-basis of E is called a basis of M; the M-rank of E is called the rank of M.
(3) A 0,1-valued vector x = [x_j], j ∈ E, is called the (incidence) vector of the subset J ⊆ E of elements j for which x_j = 1.
(4) Let M_a = (E, F_a) and M_b = (E, F_b) be any two matroids on E, having rank functions r_a(A) and r_b(A), respectively.
(5) For k = a and b, let

P_k = {x = [x_j], j ∈ E : for every j, x_j ≥ 0, and for every A ⊆ E, Σ_{j∈A} x_j ≤ r_k(A)}.
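To make the definition of P_k concrete, here is a minimal Python sketch (not part of the original text) that tests whether a nonnegative vector satisfies all of the rank constraints, assuming the matroid is presented by a hypothetical rank oracle; it enumerates every subset A ⊆ E, so it is usable only for very small E.

```python
from itertools import combinations

def in_Pk(x, E, rank):
    """Check x in P_k: x_j >= 0 for all j, and sum_{j in A} x_j <= rank(A)
    for every A subset of E.  `rank` is an assumed oracle taking a frozenset.
    Brute force over all 2^|E| subsets; illustration only."""
    if any(x[j] < 0 for j in E):
        return False
    for size in range(1, len(E) + 1):
        for A in combinations(E, size):
            if sum(x[j] for j in A) > rank(frozenset(A)) + 1e-9:
                return False
    return True

# Example (an assumption for illustration): the uniform matroid U_{2,3}.
E = ['p', 'q', 's']
rank = lambda A: min(2, len(A))
assert in_Pk({'p': 1, 'q': 1, 's': 0}, E, rank)       # vector of an independent set
assert not in_Pk({'p': 1, 'q': 1, 's': 1}, E, rank)   # violates the constraint for A = E
```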
(6) Theorem. (i) The vertices (extreme points) of polyhedron P_k are the vectors of the members of F_k. (ii) The vertices of polyhedron P_a ∩ P_b are the vectors of the members of F_a ∩ F_b.
(7) Let c = [c_j], j ∈ E, be any real-valued vector on E. For any other such vector x = [x_j], j ∈ E, let cx = Σ_{j∈E} c_j x_j. For any J ⊆ E, let c(J) = Σ_{j∈J} c_j. Where x is the vector of J, cx = c(J).
(8) A (convex) polyhedron P in the space of real-valued vectors x = [x_j], j ∈ E, can be defined as the set of solutions of some finite system L of linear inequalities and linear equations in the variables x_j. A vertex of a polyhedron P can be defined as a vector x^0 ∈ P such that, for some c, cx is maximized over x ∈ P by x^0 and only x^0. Not every P has vertices. However, it is well-known that (8a) if P has at least one vertex then there are a finite number of them and every cx which has a maximum over x ∈ P is maximized over x ∈ P by at least one of them; if P is non-empty and bounded, as is P_k, then it has vertices and every cx is maximized over x ∈ P by at least one of them; (8b) where L is a finite linear system whose solution-set is polyhedron P, a
vector x^0 is a vertex of P if and only if it is the unique solution of some linear system, L_0, obtained from L by replacing some of the "≤"'s in L by "="'s.
(9) It is well-known that for any finite set V of vectors x, there exists a unique bounded polyhedron P containing V and such that the vertices of P are a subset of V. P is called the convex hull of V. Where L is a finite linear system whose solution-set P is the convex hull of V, the problem of maximizing cx over x ∈ V is equivalent to the linear program: maximize cx over solutions x of L, which is solved by a "basic feasible solution" of L, i.e., by a vertex of P. In particular, where V is the set of vectors of a family F of sets J ⊆ E, maximizing c(J) over J ∈ F is equivalent to this l.p.
(10) A problem of the form, maximize cx over x ∈ V, where V is a prescribed finite set of vectors, we call a loco problem. "Loco" stands for "linear-objective combinatorial". The equivalence of a loco problem with a linear program can be quite useful if one can readily identify the linear program. (The linear program is not unique since the L whose P is the convex hull of V is not unique.) In particular, every loco problem has a "dual", namely the dual of a corresponding linear program. The dual of the linear program,
(11) maximize cx = Σ_{j∈E} c_j x_j, subject to
(12) x_j ≥ 0, j ∈ E; and Σ_{j∈E} a_{ij} x_j ≤ b_i, i ∈ F;
is the l.p.,
(13) minimize by = Σ_{i∈F} b_i y_i, subject to
(14) y_i ≥ 0, i ∈ F; and Σ_{i∈F} a_{ij} y_i ≥ c_j, j ∈ E.
For any x satisfying (12) and any y satisfying (14), we have
(15) Σ_{j∈E} c_j x_j ≤ Σ_{j∈E} (Σ_{i∈F} a_{ij} y_i) x_j = Σ_{i∈F} (Σ_{j∈E} a_{ij} x_j) y_i ≤ Σ_{i∈F} b_i y_i.
(16) Cancelling, we get cx ≤ by.
(17) Thus, cx = by implies that cx is maximum subject to (12) and by is minimum subject to (14).
(18) The linear programming duality theorem says that if cx has a maximum value subject to (12) then it equals by for some y satisfying (14), that is, it equals the minimum of by subject to (14).
(19) Theorems (6), (8a) and (18) immediately yield a useful criterion for the maximum of c(J), J ∈ F_k, or the maximum of c(J), J ∈ F_a ∩ F_b.
(20) For a given polyhedron P, and a given set V ⊆ P, one way, in view of (8), to prove that the vertices of P are a subset of V is, for any linear function cx, either show that cx has no maximum over x ∈ P or else produce some x^0 ∈ V which maximizes cx over P.
(21) A way to show that x^0 maximizes cx over P, where say (12) is the linear system defining P, is to produce a solution y^0 of (14) such that cx^0 = by^0.
(22) Showing that x^0 ∈ V maximizes cx over x ∈ P is, at the same time, a way of showing that x^0 maximizes cx over V.
(23) A good illustration of this technique, which we also use on more difficult
loco problems, is provided by the problem: maximize c(J) = Σ_{j∈J} c_j over J ∈ F, where M = (E, F) is a matroid. The following algorithm, called the greedy algorithm [1], solves this problem:
(24) Consider all the elements j ∈ E such that c_j > 0, in order j(1), ..., j(m), such that c_{j(1)} ≥ c_{j(2)} ≥ ... ≥ c_{j(m)} > 0. Let J_0 = ∅. For i = 1, ..., m, let J_i = J_{i-1} ∪ {j(i)} if this set is a member of F. Otherwise, let J_i = J_{i-1}.
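A direct Python transcription of (24) is given below (a sketch, not from the original); it assumes only the independence oracle of (54), and breaks ties among equal weights arbitrarily.

```python
def greedy_max_weight(E, c, independent):
    """Greedy algorithm (24): scan the elements with c_j > 0 in order of
    non-increasing weight, keeping each element whenever the current set
    stays independent.  `independent` is an assumed oracle as in (54)."""
    J = set()
    for j in sorted((j for j in E if c[j] > 0), key=lambda j: -c[j]):
        if independent(J | {j}):
            J.add(j)
    return J
```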
(25) Theorem. J_m maximizes c(J) over F.
(26) For i = 1, ..., m, let A_i = {j(1), ..., j(i)}. It follows immediately, by induction, from the definition of matroid that the vector x^0 = [x^0_j], j ∈ E, of the set J_m ⊆ E is: x^0_{j(1)} = r(A_1); x^0_{j(i)} = r(A_i) − r(A_{i−1}) for i = 2, ..., m; and x^0_j = 0 for every other j ∈ E.
(27) It is fairly obvious that the vector of any J ∈ F is a vertex of the set P of solutions to the system:
(28) x_j ≥ 0, j ∈ E; Σ_{j∈A} x_j ≤ r(A), A ⊆ E.
(29) The dual of the l.p., maximize cx subject to (28), is the l.p.:
(30) minimize ry = Σ_{A⊆E} r(A)·y(A), subject to:
(31) y(A) ≥ 0, A ⊆ E; Σ {y(A) : j ∈ A ⊆ E} ≥ c_j, j ∈ E.
(32) Let y^0 = [y^0(A)], A ⊆ E, be: y^0(A_i) = c_{j(i)} − c_{j(i+1)} for i = 1, ..., m − 1; y^0(A_m) = c_{j(m)}; and y^0(A) = 0 for every other A ⊆ E.
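The dual vector y^0 of (32) can be checked numerically on a toy instance; the Python sketch below (an illustration, with the uniform matroid U_{2,4} as an assumed example) builds the greedy set J_m of (24) and the y^0 of (32), and confirms that y^0 satisfies (31) and that c(J_m) = ry^0.

```python
E = ['p', 'q', 's', 't']
c = {'p': 5, 'q': 4, 's': 3, 't': 1}
rank = lambda A: min(2, len(A))            # assumed toy matroid: U_{2,4}
independent = lambda J: len(J) <= 2

# Greedy set J_m of (24), with the elements in decreasing order of weight.
order = sorted((j for j in E if c[j] > 0), key=lambda j: -c[j])
J = []
for j in order:
    if independent(J + [j]):
        J.append(j)

# Dual solution y^0 of (32), supported on the nested sets A_i = {j(1), ..., j(i)}.
A = [frozenset(order[:i + 1]) for i in range(len(order))]
y0 = {A[i]: c[order[i]] - c[order[i + 1]] for i in range(len(order) - 1)}
y0[A[-1]] = c[order[-1]]

# Feasibility (31): for each j, the y^0-values of the sets containing j cover c_j.
assert all(sum(v for S, v in y0.items() if j in S) >= c[j] for j in order)
# Certificate of optimality: c(J_m) equals r y^0.
assert sum(c[j] for j in J) == sum(rank(S) * v for S, v in y0.items())
```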
(33) Theorem. ry is minimized, subject to (31), by y^0.
(34) Theorems (33), (25), and the tricky part of (6i), i.e. that every vertex of the P of (28) is the vector of a J ∈ F, are all proved by verifying that y^0 satisfies (31) and cx^0 = ry^0. Proving (25) this way is using a hammer to crack a peanut. However, a coconut is coming up: a similarly good algorithm for maximizing c(J) over J ∈ F_a ∩ F_b. First we do a few preliminaries on matroids, in particular what I believe to be a simplified approach to basic ideas of Whitney and Tutte on matroid duality and contraction.
(35) For any matroid M = (E, F), with rank function r(A), let F* = {J ⊆ E : r(E − J) = r(E)}; then M* = (E, F*) is a matroid, called the dual of M.
(35a) It follows immediately from (35) that B ⊆ E is an M*-basis of E if and only if E − B is an M-basis of E. Clearly, M* satisfies (1). We show that it satisfies (2) by showing that, for any A ⊆ E, any J ∈ F*, J ⊆ A, can be extended to a J' ∈ F*, J' ⊆ A, having cardinality
(36) r*(A) = |A| − r(E) + r(E − A).
By definition of rank, there is a J_0 ∈ F, J_0 ⊆ E − A, |J_0| = r(E − A). By property (2) for matroid M, there is a J_1 ∈ F, J_0 ⊆ J_1 ⊆ E − J, J_1 − J_0 ⊆ A, |J_1| = r(E − J) = r(E). Let J' = A − (J_1 − J_0). Then J ⊆ J' ⊆ A; J_1 ⊆ E − J', so r(E − J') = r(E) and J' ∈ F*; and |J'| = |A| − (|J_1| − |J_0|) = |A| − r(E) + r(E − A).
(37) For any matroid M = (E, F), any R ⊆ E, E' = E − R, and some M-basis B of R, let F' = {J ⊆ E' : J ∪ B ∈ F}; then F' is the same for any choice of B, and M' = (E', F') is a matroid, called the contraction of M to E' (obtained from M by "contracting" the members of R).
Let B_1 and B_2 be M-bases of R. Suppose J ∪ B_1 ∈ F, J ⊆ E'. Clearly J ∪ B_1 is an M-basis of J ∪ R. B_2 is contained in an M-basis J_0 ∪ B_2 of J ∪ R. Since J_0 ⊆ J, |B_1| = |B_2| and |J ∪ B_1| = |J_0 ∪ B_2|, we have |J_0| = |J| and J_0 = J. Thus, J ∪ B_2 ∈ F.
Clearly, M' satisfies (1). We show that M' satisfies (2) simply by observing that, for any A ⊆ E', and any J ∈ F' such that J ⊆ A, J ∪ B can be extended to an M-basis J' ∪ B of A ∪ R having cardinality r(A ∪ R). Thus J can be extended to a J' ∈ F', J' ⊆ A, having cardinality
(38) r'(A) = r(A ∪ R) − r(R). □
(39) For any matroid M = (E, F), and any E' ⊆ E, let F' = {J ⊆ E' : J ∈ F}; then M' = (E', F') is a matroid, called the restriction of M to E' (obtained from M by "deleting" the members of E − E' = S). The rank function of this M' is simply the rank function of M restricted to subsets of E'. Obvious.
(40) For any matroid M = (E, F), any R ⊆ E and any S ⊆ E − R, E' = E − (R ∪ S), the matroid M' = (E', F') obtained from M by contracting the members of R and deleting the members of S is called a minor of M.
(41) It is fairly obvious that the operations of contracting the members of R and deleting the members of S are associative and commutative. That is, M' is the same for any way of doing them.
(42) Let M = (E, F) be a matroid. Let M* = (E, F*) be the dual of M. Let M' = (E', F') be obtained from M by contracting R ⊆ E and deleting S ⊆ E − R, where E' = E − (R ∪ S). Then M'* = (E', F'*), the dual of M', is the same as the matroid, say M*' = (E', F*'), obtained from M* by deleting R and contracting S.
Proof. Let J ⊆ E'. J ∈ F*' ⟺ |J| = r*'(J) = r*(J ∪ S) − r*(S) = |J ∪ S| − r(E) + r(E − (J ∪ S)) − |S| + r(E) − r(E − S) ⟺ r(E − (J ∪ S)) = r(E − S) ⟺ r((E' − J) ∪ R) − r(R) = r(E' ∪ R) − r(R) ⟺ r'(E' − J) = r'(E') ⟺ J ∈ F'*. □
(43) Let M = (E, F) be a matroid; let n be an integer; and let F^(n) = {J ⊆ E : J ∈ F, |J| ≤ n}; then M^(n) = (E, F^(n)), called the n-truncation of M, is a matroid having rank function r^(n)(A) = min [n, r(A)]. Obvious.
(44) For any A ⊆ E and any e ∈ E, e is said to M-depend on A either when e ∈ A or when A contains some J ∈ F such that J ∪ {e} ∉ F. The set of all elements in E which M-depend on A is called the M-span or the M-closure of A.
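Each of the constructions (35)-(44) can be expressed directly in terms of a rank oracle; the following Python sketch (an illustration, with sets represented as frozensets and `r` an assumed rank oracle on the ground set `E`) records the formulas (36), (38), (43), and the closure of (44)-(45).

```python
def dual_rank(r, E):
    """(36): r*(A) = |A| - r(E) + r(E - A)."""
    return lambda A: len(A) - r(E) + r(E - A)

def contraction_rank(r, R):
    """(38): rank in the contraction of M to E - R is r'(A) = r(A ∪ R) - r(R)."""
    return lambda A: r(A | R) - r(R)

def truncation_rank(r, n):
    """(43): rank in the n-truncation is min(n, r(A))."""
    return lambda A: min(n, r(A))

def closure(r, E, A):
    """(44)-(45): e lies in the M-closure of A exactly when r(A ∪ {e}) = r(A)."""
    return frozenset(e for e in E if r(A | {e}) == r(A))
```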
(45) The M-closure of a set A ⊆ E is the unique maximal set S such that A ⊆ S ⊆ E and r(A) = r(S).
Proof. Let S be any maximal set such that A ⊆ S ⊆ E and r(A) = r(S). Consider any e ∈ E − A. If there is some J ∈ F, J ⊆ A, {e} ∪ J ∉ F, then J is contained in some basis B of A which is also a basis of S and thus which is also a basis of {e} ∪ S. Hence, e ∈ S. Conversely, if e ∈ S, then, for any basis J of A, J is also a basis for S, and so J ∪ {e} ∉ F. □
(46) The maximal subsets of E having given ranks are called the spans or flats or closed sets of M.
(47) The minimal subsets of E which are not in F are called the circuits of M.
(48) For any J ∈ F and any e ∈ E, J ∪ {e} contains at most one circuit of M.
Proof. If C_1 and C_2 are two circuits contained in J ∪ {e}, there is an e_1 ∈ C_1 − C_2. Then C_1 − {e_1} ∈ F can be extended to a basis B of J ∪ {e} − {e_1} which excludes both e_1 and some element of C_2. B is also a basis of J ∪ {e}, smaller than basis J of J ∪ {e}, contradicting (2). □
(49) By a matroid-algorithm we mean an algorithm which includes as inputs a collection of arbitrary matroids. The actual application of a matroid-algorithm to given matroids depends on how the matroids are given. A matroid M = (E, F) would virtually never be given as an explicit listing of the members of F; or even of the maximal members of F (called the bases of M); or of the circuits of M; or of the flats of M. For moderate sizes of |E| and r(E), such listings would tend to be astronomical. Matroids arise implicitly from other structures (sometimes other matroids). The following are examples; see [1] for a more general theory on the construction of matroids.
(50) Forest matroids, (E, F): where E is the edge-set of a graph G and F is comprised of the edge-sets of forests in G.
(51) Transversal matroids, (E, F): where, for a given family Q = {Q_i}, i ∈ I, of sets Q_i ⊆ E, F is comprised of the partial transversals J of Q, i.e., sets of the form J = {e_i}, i ∈ I', such that e_i ∈ Q_i, for some I' ⊆ I.
(52) Matric matroids, (E, F): where E is the set of columns in a matrix, and F is comprised of the linearly independent subsets of columns.
(53) Matroid-sums, (E, F): where (E, F_i), i ∈ I, are given matroids on E, and where F is comprised of all sets of the form J = ∪_{i∈I} J_i, J_i ∈ F_i.
For each matroid, say M = (E, F), to which a matroid-algorithm refers as an input, it is assumed that an algorithm ("subroutine") is available for:
(54) given any J ⊆ E, determine whether or not J ∈ F.
The efficiency of a matroid-algorithm is relative to the efficiency of these assumed subroutines.
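As one concrete instance of the subroutine (54), the sketch below (illustrative Python, not from the original) implements the independence oracle for the forest matroid of (50) with a simple union-find: a set of edges is independent exactly when it contains no cycle.

```python
def forest_matroid_oracle(edges):
    """Return an oracle of type (54) for the forest matroid (50).
    `edges` maps an edge name to its pair of endpoints (an assumed encoding)."""
    def independent(J):
        parent = {}
        def find(v):
            parent.setdefault(v, v)
            while parent[v] != v:
                parent[v] = parent[parent[v]]      # path halving
                v = parent[v]
            return v
        for e in J:
            u, w = edges[e]
            ru, rw = find(u), find(w)
            if ru == rw:                           # this edge would close a cycle
                return False
            parent[ru] = rw
        return True
    return independent

# Example: a triangle on vertices a, b, c.
oracle = forest_matroid_oracle({0: ('a', 'b'), 1: ('b', 'c'), 2: ('a', 'c')})
assert oracle({0, 1}) and not oracle({0, 1, 2})
```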
It is obvious that there are good algorithms, relative to (54), for:
(55) given any A ⊆ E, determine r(A) by finding a J ⊆ A such that J ∈ F and |J| is maximum (indeed we have defined matroid, essentially, as a structure for which there is a certain trivial algorithm which does this);
(56) given any A ⊆ E, find in A, if there is one, a circuit of M;
(57) given any A ⊆ E, find the closure of A;
(58) given any A ⊆ E, determine whether or not A ∈ F*, where M* = (E, F*) is the dual of M;
(59) given A ⊆ E' ⊆ E, determine whether or not A ∈ F', where M' = (E', F') is the contraction of M to E'; and so on.
(60) Thus these operations can also be used in matroid-algorithms without difficulty. Conversely, matroid-algorithms can equivalently be based on the assumption that a subroutine is available for some operation other than (54) for each matroid input, if there is a good algorithm for doing (54) using that operation. For example, (57) could equivalently play the role of (54), since one can show
(61) J ∈ F iff, for every e ∈ J, the closure of J − e does not contain e.
In [2] there is a good algorithm for:
(62) given a finite collection of matroids, M_i = (E, F_i), i ∈ I, and given J ⊆ E, determine whether or not J ∈ F, where F is defined as in (53), by finding, or determining that it is not possible to find, a partition of J into a collection of sets, J_i, i ∈ I, such that J_i ∈ F_i.
Since for this F, M = (E, F) is a matroid, [2] together with (55) immediately gives a good algorithm for:
(63) given the matroids M_i, i ∈ I, find sets J_i ∈ F_i, i ∈ I, such that |∪ J_i| is maximum.
More generally, [2] together with the greedy algorithm, (24), gives a good algorithm for:
(64) given c = [c_j], j ∈ E, and given the matroids M_i, i ∈ I, maximize c(J) = Σ_{j∈J} c_j over J ∈ F.
Let M_a = (E, F_a) and M_b = (E, F_b) be any two matroids on E; then [2] together with (58) gives immediately a good algorithm for:
(65) given matroids M_a and M_b, find a set J ∈ F_a ∩ F_b such that |J| is maximum.
(66) Let M_a be the M_1 of (62) and let M*_b, the dual of M_b, be the M_2 of (62), where I = {1, 2}. Use (63) and (58) to find an A of the form A = J_1 ∪ J_2, where J_1 ∈ F_a, J_2 ∈ F*_b, and |A| is maximum. Extend J_2 to B, an M*_b-basis of A. Let J = A − B. Then J maximizes |J| over J ∈ F_a ∩ F_b.
Proof. Clearly, J ∈ F_a. B is an M*_b-basis of E, since otherwise it could be extended to a B' ∈ F*_b such that |J_1 ∪ B'| > |A|. Thus, by (35a), E − B is an M_b-basis of E; since J = A − B ⊆ E − B, J ∈ F_b. If there were a J' ∈ F_a ∩ F_b larger than J, then where B' ⊆ E is an M*_b-basis of E − J', also an M*_b-basis of E (since J' ∈ F_b), we would have |J' ∪ B'| > |A|. Thus J ∈ F_a ∩ F_b is such that |J| is maximum. □
Besides finding sets J_i ∈ F_i, i ∈ I, such that A = ∪ J_i has maximum cardinality,
the algorithm in [2] for (63) finds a set S ⊆ E (called S_n there), such that
(67) A ∪ S = E and |A ∩ S| = Σ_{i∈I} r_i(S).
For the case (66) of (67), it is easy to show from (67) that
(68) |J| = r_a(S) + r_b(E − S).
For any J' ∈ F_a ∩ F_b and any S' ⊆ E, we have
(69) |J'| = |J' ∩ S'| + |J' − S'| ≤ r_a(S') + r_b(E − S').
Thus, from (68) and (69), we have the theorem:
(70) For any two matroids, M_k = (E, F_k), k = a, b,

max {|J| : J ∈ F_a ∩ F_b} = min {r_a(S) + r_b(E − S) : S ⊆ E}.
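Theorem (70) can be confirmed by exhaustion on very small ground sets; the Python sketch below (an illustration, exponential in |E|) computes both sides of (70) from two assumed independence oracles.

```python
from itertools import chain, combinations

def subsets(E):
    E = list(E)
    return map(frozenset, chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)))

def check_min_max(E, ind_a, ind_b):
    """Return (max |J| over J in F_a ∩ F_b, min over S of r_a(S) + r_b(E - S));
    by (70) the two numbers coincide.  Brute force; for tiny examples only."""
    E = frozenset(E)
    rank = lambda ind, A: max(len(J) for J in subsets(A) if ind(J))
    lhs = max(len(J) for J in subsets(E) if ind_a(J) and ind_b(J))
    rhs = min(rank(ind_a, S) + rank(ind_b, E - S) for S in subsets(E))
    return lhs, rhs
```

For example, with a three-element E, ind_a accepting sets of size at most 1 and ind_b sets of size at most 2, both returned numbers are 1.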
Thus, the algorithm of [2] for the problem of (66) finds a minimizing S of (70) as well as a maximizing J of (70). Clearly, for any such S and J we have that
(71) J^a = J ∩ S and J^b = J ∩ (E − S) partition J into two sets such that
(72) cl_a(J^a) ∪ cl_b(J^b) = E, where cl_k(A) denotes the M_k-closure of A.
(73) Conversely, the existence of a J ∈ F_a ∩ F_b which partitions into sets J^a and J^b satisfying (72) immediately implies Theorem (70).
(74) Given J ∈ F_a ∩ F_b, we describe a direct algorithm for finding either a larger J' ∈ F_a ∩ F_b or else a partition of J into sets J^a and J^b which satisfy (72). It is essentially the algorithm of [2] applied to (66).
(75) Let J^b_0 = ∅. Let K_{-1} = ∅.
(76) For each J^b_i, let J^a_i = J − J^b_i, and
(77) K_i = E − cl_a(J^a_i).
(78) For each K_i such that K_{i−1} ⊂ K_i ⊆ cl_b(J), let
(79) J^b_{i+1} = {c : c ∈ J, c ∈ C where {e} ∪ C is an M_b-circuit, C ⊆ J, for some e ∈ K_i}.
We thus generate sequences
(80) ∅ = J^b_0 ⊆ J^b_1 ⊆ ... ⊆ J^b_n ⊆ J, J = J^a_0 ⊇ J^a_1 ⊇ ... ⊇ J^a_n, and K_{-1} ⊆ K_0 ⊆ ... ⊆ K_{n−1} ⊆ K_n, such that either
(81) K_{n−1} = K_n, or else
(82) K_{n−1} ⊂ K_n and K_n − cl_b(J) ≠ ∅.
(83) In the case of (81), we have cl_a(J^a_n) ∪ cl_b(J^b_n) = E since E − cl_a(J^a_n) = K_n = K_{n−1} ⊆ cl_b(J^b_n), and the algorithm is done.
(84) In the case of (82):
(85) For every e_i ∈ K_i − K_{i−1} = cl_a(J^a_{i−1}) − cl_a(J^a_i), i = 1, ..., n, let {e_i} ∪ D_i be the unique M_a-circuit of {e_i} ∪ J^a_{i−1}. Clearly, D_i ∩ (J^a_{i−1} − J^a_i) ≠ ∅. That is, there is an h_i ∈ J^b_i − J^b_{i−1} = J^a_{i−1} − J^a_i such that h_i ∈ D_i.
(86) For every h_i ∈ J^b_i − J^b_{i−1}, i = 1, ..., n, clearly by (79) there is an e_{i−1} ∈ K_{i−1} − K_{i−2} such that h_i ∈ C_{i−1}, where {e_{i−1}} ∪ C_{i−1} is the M_b-circuit of {e_{i−1}} ∪ J. (K_{-2} = ∅.)
(87) Choose a single e_i for each i = 0, ..., n and a single h_i for each i = 1, ..., n, such that
(88) e_n ∈ K_n − cl_b(J), and such that (85) and (86) hold.
(89) Clearly, the algorithm can specify an h_i for every e_i ∈ K_i − K_{i−1} and an e_{i−1} for every h_i ∈ J^b_i − J^b_{i−1} as the sequences (80) are being generated, or it can determine the {e_0, ..., e_n} and {h_1, ..., h_n} after (80).
(90) Let J' = J ∪ {e_0, ..., e_n} − {h_1, ..., h_n}. Then |J'| > |J| and J' ∈ F_a ∩ F_b.
(95) To prove J' ∈ F_a, let I^a_i = J ∪ {e_0, e_1, ..., e_i} − {h_1, h_2, ..., h_i} for i = 0, 1, ..., n. Observe that I^a_0 ∈ F_a, and that I^a_i ∈ F_a implies I^a_i ∪ {e_{i+1}} − {h_{i+1}} = I^a_{i+1} ∈ F_a, because {e_{i+1}} ∪ D_{i+1} ⊆ I^a_i ∪ {e_{i+1}} since {h_1, h_2, ..., h_i} ∩ D_{i+1} = ∅ by (85), and hence by (48), {e_{i+1}} ∪ D_{i+1} is the only M_a-circuit in I^a_i ∪ {e_{i+1}}.
(96) To prove J' ∈ F_b, let I^b_i = J ∪ {e_i, e_{i+1}, ..., e_n} − {h_{i+1}, h_{i+2}, ..., h_n} for i = n, n−1, ..., 0. Observe that I^b_n ∈ F_b, and that I^b_i ∈ F_b implies I^b_i ∪ {e_{i−1}} − {h_i} = I^b_{i−1} ∈ F_b, because {e_{i−1}} ∪ C_{i−1} ⊆ I^b_i ∪ {e_{i−1}} since {h_{i+1}, h_{i+2}, ..., h_n} ∩ C_{i−1} = ∅ by (86), and hence by (48), {e_{i−1}} ∪ C_{i−1} is the only M_b-circuit in I^b_i ∪ {e_{i−1}}.
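The augmentation steps (74)-(90) are awkward to transcribe verbatim, but the equivalent and more commonly stated exchange-graph formulation is easy to sketch in Python (an illustration, not the author's own presentation): from the current common independent set J one searches, breadth-first, for a shortest path from the elements addable in M_a to the elements addable in M_b, and exchanges along it.

```python
from collections import deque

def max_common_independent(E, ind_a, ind_b):
    """Maximum-cardinality J in F_a ∩ F_b by repeated augmentation, using the
    standard exchange graph; `ind_a`, `ind_b` are assumed oracles as in (54)."""
    E = list(E)
    J = set()
    while True:
        out = [x for x in E if x not in J]
        sources = {x for x in out if ind_a(J | {x})}   # addable for M_a
        sinks = {x for x in out if ind_b(J | {x})}     # addable for M_b
        if sources & sinks:                            # a one-element augmentation
            J.add(next(iter(sources & sinks)))
            continue
        succ = {v: [] for v in E}                      # arcs of the exchange graph
        for y in J:
            for x in out:
                if ind_a((J - {y}) | {x}):             # y -> x keeps M_a-independence
                    succ[y].append(x)
                if ind_b((J - {y}) | {x}):             # x -> y keeps M_b-independence
                    succ[x].append(y)
        prev, queue, end = {s: None for s in sources}, deque(sources), None
        while queue and end is None:                   # BFS: shortest source-sink path
            v = queue.popleft()
            if v in sinks:
                end = v
                break
            for w in succ[v]:
                if w not in prev:
                    prev[w] = v
                    queue.append(w)
        if end is None:
            return J                                   # no augmenting path: J is maximum
        while end is not None:                         # exchange along the path
            if end in J:
                J.remove(end)
            else:
                J.add(end)
            end = prev[end]
```

Searching for a shortest path, rather than an arbitrary one, is what guarantees that the exchanged set is again independent in both matroids.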
Problems (65) and (23) are of course both special cases of the loco problem:
(97) given matroids M_a = (E, F_a) and M_b = (E, F_b), and given any real-valued c = [c_j], j ∈ E, find a set J ∈ F_a ∩ F_b, |J| ≤ m, which maximizes c(J).
(98) We show, in analogy to the Hungarian method for the optimum assignment problem, how algorithm (75)-(90) can be used as a subroutine in an algorithm for solving (97).
(99) The vector x^J = [x^J_j : j ∈ E] of any J ∈ F_a ∩ F_b, |J| ≤ m, obviously satisfies the inequalities:
(100) for every j ∈ E, x_j ≥ 0; for k = a or b, and for every M_k-closed set A ⊆ E, Σ_{j∈A} x_j ≤ r_k(A); and Σ_{j∈E} x_j ≤ m.
Since c(J) = cx^J = Σ_{j∈E} c_j x^J_j, if we maximize cx over all x satisfying (100) by the vector x^J of some J ∈ F_a ∩ F_b, then by (99), this J will solve (97). If we can do this for any rational (or integer-valued) c then by (20) we will have proved (6ii). This is precisely what the algorithm does.
The dual of the linear program,
(101) maximize cx subject to (100),
is the linear program: minimize
(102) yr = m·y_m + Σ r_k(A^k)·y_k(A^k), where the summation is over k = a, b, and all M_k-closed sets A^k, subject to
(103) y = [y_m, y_k(A^k) : k = a, b; A^k M_k-closed] satisfying
(104) y ≥ 0 and
(105) for every j ∈ E, y_m + Σ {y_k(A^k) : k = a, b; j ∈ A^k} ≥ c_j.
(106) Let t_k(y, j) = Σ y_k(A^k), summation over the M_k-closed sets A^k which contain j. Thus (105) can be written as
(107) t(y, j) = y_m + t_a(y, j) + t_b(y, j) ≥ c_j.
The algorithm terminates with a J ∈ F_a ∩ F_b, |J| ≤ m, and a solution y of (104) and (105), such that
(108) cx^J = yr.
By (21), this x^J solves the primal linear program, (101). We will ascertain (108) from the "complementary slackness" conditions:
(109) x^J_j > 0, i.e. j ∈ J, only if t(y, j) = c_j; and
(110) y_k(A^k) > 0 only if |J ∩ A^k| = Σ_{j∈A^k} x^J_j = r_k(A^k), i.e. only if
(111) J ∩ A^k is an M_k-basis of A^k; and
(112) y_m > 0 only if |J| = m.
Conditions (109)-(112) immediately imply that the corresponding instance of (15) holds as an equation, and thus that the corresponding instance of (16) holds as an equation, namely equation (108).
(113) At any stage of the algorithm we have a solution y^0 of (103)-(105) such that, for each k = a, b, y^0_k(A) > 0 if and only if A is an M_k-closed set A^k_i of a sequence ∅ ⊂ A^k_1 ⊂ A^k_2 ⊂ ... ⊂ A^k_{q(k)} ⊆ E.
(114) Let E^0 = {j ∈ E : t(y^0, j) = c_j}. We also have a set J^0 ∈ F_a ∩ F_b, |J^0| ≤ m, such that
(115) J^0 ⊆ E^0, and such that condition (110) holds. That is,
(116) for k = a, b, and i = 1, ..., q(k), J^0 ∩ A^k_i is an M_k-basis of A^k_i.
The set J^0 = ∅, and a y^0 which is zero-valued except for y^0_m = max {0, c_j : j ∈ E}, are suitable for a starting J^0 and y^0. Since y^0 and J^0 satisfy all the complementary slackness conditions (109)-(112) except possibly (112), if y^0 and J^0 satisfy (112) then J^0 is an optimum solution of (97) and the algorithm stops. In fact, where m_0 = |J^0|, clearly J^0 optimizes c(J) subject to J ∈ F_a ∩ F_b and |J| = m_0.
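The stopping test built on (109)-(112) amounts to a mechanical check; the Python sketch below (an illustration, with hypothetical argument names, closed sets represented as frozensets, and a small tolerance for non-integral data) verifies that a candidate pair J, y is an optimality certificate for (97).

```python
def certificate_ok(E, c, m, J, y_m, y_a, y_b, rank_a, rank_b):
    """Check dual feasibility (104)-(105) and the complementary slackness
    conditions (109)-(112) for J in F_a ∩ F_b, |J| <= m, and a dual vector
    y = (y_m, y_a, y_b); y_a, y_b map M_k-closed sets to their y-values."""
    eps = 1e-9
    def t(j):                                    # t(y, j) of (106)-(107)
        return (y_m
                + sum(v for A, v in y_a.items() if j in A)
                + sum(v for A, v in y_b.items() if j in A))
    if y_m < -eps or any(v < -eps for v in list(y_a.values()) + list(y_b.values())):
        return False                             # (104)
    if any(t(j) < c[j] - eps for j in E):
        return False                             # (105)
    if any(t(j) > c[j] + eps for j in J):
        return False                             # (109)
    for y_k, rank_k in ((y_a, rank_a), (y_b, rank_b)):
        if any(v > eps and len(J & A) != rank_k(A) for A, v in y_k.items()):
            return False                         # (110)-(111)
    if y_m > eps and len(J) != m:
        return False                             # (112)
    return True
```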
If (112) is not satisfied then, for i = 1, ..., q(k) + 1, let
(117) M^k_i = (A^k_i − A^k_{i−1}, F^k_i) be the minor obtained from M_k by deleting E − A^k_i and contracting A^k_{i−1}. (A^k_0 = ∅ and A^k_{q(k)+1} = E.) For each k, let M'_k = (E, F'_k) be the disjoint union of these matroids M^k_i.
(118) That is, F'_k = {J ⊆ E : J ∩ (A^k_i − A^k_{i−1}) ∈ F^k_i for i = 1, ..., q(k) + 1}.
Clearly, from (37)-(38), M'_k is a matroid having rank function
(119) r'_k(S) = Σ_{i=1}^{q(k)+1} [r_k((S ∩ A^k_i) ∪ A^k_{i−1}) − r_k(A^k_{i−1})], where A^k_0 denotes ∅ and A^k_{q(k)+1} denotes E.
Let M^0_k = (E^0, F^0_k) be the restriction of M'_k to E^0. That is, F^0_k = {J ⊆ E^0 : J ∈ F'_k}. The formula for rank in M^0_k is the same as (119).
(120) It follows immediately from the definition (38) of "minor" that J^0 ∈ F^0_k.
(122) Thus we apply algorithm (75)-(90) where the J is J^0 and where, for k = a, b, the M_k of (75)-(90) is M^0_k.
(123) If we get a J', then we replace J^0 by J', keep the same y^0, and return to (113). (Returning to (113) is the same as stopping if |J'| = m, and returning to (122) if |J'| < m.)
If we fail to get a set J' as in (123), that is, in case (81) of algorithm (75)-(90), then we get a partition of J^0 into two sets J^0_a and J^0_b such that
(124) cl^0_a(J^0_a) ∪ cl^0_b(J^0_b) = E^0, where cl^0_k(A) denotes the M^0_k-closure of A.
(125) In this case, let S_k = cl'_k(J^0_k) ⊇ cl^0_k(J^0_k), where cl'_k(A) denotes the M'_k-closure of A. Let cl_k(A) denote the M_k-closure of A. By the construction of M'_k from M_k
From the formula (102) for yr, we see immediately that
(132) y^1 r = y^0 r + δD, where y^1 is obtained from y^0 by raising by δ, for each positive term of D, the corresponding component of y^0, and lowering by δ, for each negative term of D, the corresponding component of y^0. More precisely (since as many as three consecutive members of the sequence (127) can be identical sets), let
(133) y^1_m = −δ + y^0_m, and for every M_k-closed set A ⊆ E, let
(134) y^1_k(A) = δN^k_A + y^0_k(A), where N^k_A is the number of indices i = 1, ..., q(k) + 1 such that A = B^k_i, minus the number of indices i = 2, ..., q(k) such that A =
(135) Clearly by (127), N^k_A ≥ 0, and that
(141) y^1 is a feasible solution of (104), (105). By (131), (132), and (140), we have
(142) y^1 r
(147) In fact, by (83) and (138), the set K_n, which arose in the application of algorithm (75)-(90) to M^0_a, M^0_b, and J^0, is contained in E^1.
(148) Though it is not essential to the present algorithm, it can be shown that the structures generated by subroutine (75)-(90) in an unsuccessful attempt to augment J^0 to a larger J in F^0_a ∩ F^0_b can be used as part of the generation of the like structures which either augment J^0 to a larger J in F^1_a ∩ F^1_b, corresponding to y^1, or else lead, as we have just described, to a better dual solution y^2 and matroids M^2_k. The structure developed by the successive applications of the subroutine can be used until either an augmentation of J^0 or a y such that y_m = 0 is obtained.
(149) Though (148) leads to better bounds on the algorithm, it is simpler to bound the algorithm by observing that, for integer-valued c, the vectors y^0 and y^1 of the algorithm, and hence the numbers y^0 r and y^1 r of the algorithm, are integer-valued. Therefore, by (142), the algorithm terminates in no more than max {0, c_j : j ∈ E} iterations of (113)-(139). The same consideration proves the theorem:
(150) Where c is integer-valued, max {c(J) : J ∈ F_a ∩ F_b} = min {yr : y an integer-valued solution of (103)-(105)}.
This is somewhat stronger than the same statement with the words "integer-valued" left out. The latter is immediately equivalent to (6ii), using basic linear programming theory, but (150) does not follow immediately from (6ii).
(151) Theorem (70) is essentially the special case of Theorem (150) where c is all ones.
Note added in proof
A simpler non-algorithmic proof of a generalisation of (150) appears in [1]. The present paper appeared as a report in 1970 though its methods, and announcements private and public of its methods, predate [1].
References
[1] J. Edmonds, Submodular functions, matroids, and certain polyhedra, Proceedings of the Calgary International Conference on Combinatorial Structures and their Applications, June 1969 (Gordon and Breach, New York, 1970).
[2] J. Edmonds, Matroid partition, Math. of the Decision Sciences, Amer. Math. Soc. Lectures in Appl. Math. 11 (1968) 335-345.
[3] J. Edmonds, Optimum branchings, same as [2], 346-361.
[4] R. Rado, A theorem on independence relations, Quart. J. Math. Oxford Ser. 13 (1942) 83-89.
[5] W.T. Tutte, Menger's theorem for matroids, J. Res. Nat. Bur. Standards Sect. B 69B (1965) 49-53.
[6] H. Whitney, On the abstract properties of linear dependence, Amer. J. Math. 57 (1935) 509-533.