13. Algebraic Independent Set Problems
In this chapter we discuss the solution of linear algebraic optimization problems where the set of feasible solutions corresponds to special independence systems (cf. chapter 1). In particular, we consider matroid problems and 2-matroid intersection problems; similar results for matching problems can be obtained. We assume that the reader is familiar with solution methods for the corresponding classical problems (cf. LAWLER [1976]). Due to the combinatorial structure of these problems it suffices to consider discrete semimodules (R = Z_+); in fact, the appearing primal variables are elements of {0,1}.
We begin with a discussion of linear algebraic optimization problems in a rather general combinatorial structure. In chapter 1 we introduced independence systems F which are subsets of the set P(N) of all subsets of N := {1,2,...,n}. W.l.o.g. we assume that F is normal, i.e. {i} ∈ F for all i ∈ N. By definition (1.23) F contains all subsets J of each of its elements I ∈ F.
In the following we identify an element I of F with its incidence vector x ∈ {0,1}^n defined by

(13.1)  x_j = 1  ⟺  j ∈ I

for all j ∈ N. Vice versa, I = I(x) is the support of x defined by I = {j ∈ N | x_j = 1}.

A linear description of an independence system F can be derived in the following way. Let H denote the set of all flats (or closed sets) with respect to F and let r: P(N) → Z_+ denote the rank function of F. A denotes the matrix whose rows are the incidence vectors of the closed sets C ∈ H; b denotes a vector with components b_C := r(C), C ∈ H. Then the set P of all incidence vectors of independent sets is

(13.2)  P = {x ∈ Z_+^n | Ax ≤ b}.

We remark that such a simple description contains many redundant constraints; for irredundant linear descriptions of particular independence systems we refer to GRÖTSCHEL [1977].
Let H be a linearly ordered, commutative monoid and let a ∈ H^n. We state the following linear algebraic optimization problems:

(13.3)  min{x^T ⊙ a | x ∈ P},
(13.4)  max{x^T ⊙ a | x ∈ P},
(13.5)  min{x^T ⊙ a | x ∈ P_k},
(13.6)  max{x^T ⊙ a | x ∈ P_k},

with P_k := {x ∈ P | Σ_j x_j = k} for some k ∈ N. (13.3) and (13.5) can trivially be transformed into a problem of the form (13.4) and (13.6) in the dual ordered monoid. Since P and P_k are finite sets, all these problems have optimal solutions for which the optimal value is attained. Theoretically a solution can be determined by enumeration; in fact, we are only interested in more promising solution methods.
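For illustration, (13.3) - (13.6) can be solved by complete enumeration on small instances. The following Python sketch is our own illustration, not part of the original text; the independence system is given by an assumed membership oracle, and the linearly ordered commutative monoid (H,*,≤) is passed as a binary composition op with neutral element e, the order being the usual comparison of the encoded values.

from itertools import combinations

def solve_by_enumeration(n, independent, a, op, e, k=None):
    # Brute force for (13.3)-(13.6): enumerate all independent subsets of
    # N = {0,...,n-1}; independent(I) is the membership oracle for F,
    # a[j] the weight of element j, (op, e) the monoid composition and unit.
    def value(I):
        v = e
        for j in I:
            v = op(v, a[j])          # composition of the selected weights, i.e. x^T ⊙ a
        return v
    feasible = [I for r in range(n + 1)
                for I in combinations(range(n), r)
                if (k is None or r == k) and independent(set(I))]
    values = [value(I) for I in feasible]
    return min(values), max(values)  # (13.3)/(13.4), resp. (13.5)/(13.6) if k is set

# Example: uniform matroid of rank 2, bottleneck monoid (R ∪ {+inf}, min, <=).
a = [3.0, 1.0, 4.0, 2.0]
print(solve_by_enumeration(4, lambda I: len(I) <= 2, a, op=min, e=float("inf")))
print(solve_by_enumeration(4, lambda I: len(I) <= 2, a, op=min, e=float("inf"), k=2))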
An element of P of particular interest is the lexicographically maximum vector x̄(P). A related partial ordering ≼ on Z_+^n is defined by

(13.7)  x ≼ y  :⟺  Σ_{j=1}^k x_j ≤ Σ_{j=1}^k y_j  for all k ∈ N,

for x, y ∈ Z_+^n.

(13.8) Proposition
If x ∈ P is a maximum of P with respect to the partial ordering (13.7) then x = x̄(P).

Proof: It suffices to show that x ⋠ y whenever x is lexicographically greater than y, for all x, y ∈ P. Let x be lexicographically greater than y. Then x_i = y_i for all 1 ≤ i < k but x_k > y_k for some k ∈ N. Therefore

Σ_{j=1}^k x_j > Σ_{j=1}^k y_j,

which shows that x ⋠ y.
We introduce two vectors x⁺ and x^k derived from x ∈ Z_+^n:

(13.9)  x_j⁺ := x_j if a_j ≥ e, and x_j⁺ := 0 otherwise,

(13.10)  x_j^k := x_j if j ≤ j(k), and x_j^k := 0 otherwise,

for all j ∈ N and for all k ∈ Z_+ with j(k) := max{j ∈ N | Σ_{i=1}^j x_i ≤ k}. The relationship of these vectors and the partial ordering (13.7) is described in the following proposition.
(13.11) Proposition
Let H be a linearly ordered, commutative monoid. Hence H is a linearly ordered semimodule over Z_+. Let x, y ∈ Z_+^n and let a_1 ≥ a_2 ≥ ... ≥ a_n. Then

(1)  x^T ⊙ a ≤ (x⁺)^T ⊙ a,
(2)  y ≼ x and Σ_j y_j = v imply y^T ⊙ a ≤ (x^v)^T ⊙ a,
(3)  (x^r)^T ⊙ a ≤ (x^s)^T ⊙ a for r ≤ s ≤ p as well as for p ≤ s ≤ r, where p := Σ_j x_j⁺,
(4)  y ≼ x implies y^T ⊙ a ≤ (x⁺)^T ⊙ a.

Proof:
(1) is an immediate implication of (13.9).
(2) y ≼ x implies y ≼ x' := x^v. Assume y ≠ x' and let k = k(x') := min{j | y_j ≠ x'_j}. Then δ := x'_k − y_k > 0. Define x″ by

x″_j := x'_j − δ if j = k,  x″_j := x'_j + δ if j = k+1,  x″_j := x'_j otherwise.

Then y ≼ x″ and, due to δ ⊙ a_{k+1} ≤ δ ⊙ a_k, we get (x″)^T ⊙ a ≤ (x')^T ⊙ a. If y = x″ then (2) is proved. Otherwise we repeat the procedure for x″. Since k(x″) > k(x'), after a finite number of steps we find y = x″.
(3) If r ≤ s ≤ p then x^s_j − x^r_j = 0 for all j with a_j < e. Therefore x^r_j ≤ x^s_j for all j ∈ N implies the claimed inequality. If p ≤ s ≤ r then x^r_j = x^s_j for all j with a_j ≥ e. Further, x^r_j ≥ x^s_j for all j ∈ N implies x^r_j ⊙ a_j ≤ x^s_j ⊙ a_j for all j with a_j < e. This proves the claimed inequality.
(4) From (2) we get y^T ⊙ a ≤ (x^v)^T ⊙ a with v := Σ_j y_j. From (3) we know (x^v)^T ⊙ a ≤ (x^p)^T ⊙ a = (x⁺)^T ⊙ a. This yields the claimed inequality.
The importance of proposition (13.11) for the solution of the linear algebraic optimization problems (13.4) and (13.6) can be seen from the following theorem, which is directly implied by (13.11) and (13.8).

(13.12) Theorem
Let H be a linearly ordered, commutative monoid. Hence H is a linearly ordered semimodule over Z_+. Let x̄ denote the lexicographically maximum solution in P and let k ∈ N with 1 ≤ k ≤ max{Σ_j x_j | x ∈ P}. If

(1)  a_1 ≥ a_2 ≥ ... ≥ a_n,
(2)  P has a maximum with respect to the partial ordering (13.7),

then

(x̄⁺)^T ⊙ a = max{x^T ⊙ a | x ∈ P},
(x̄^k)^T ⊙ a = max{x^T ⊙ a | x ∈ P_k}.

Theorem (13.12) gives sufficient conditions which guarantee that a solution of (13.4) and (13.6) can easily be derived from the lexicographically maximum solution x̄ in P. It is well known (cf. LAWLER [1976]) that the following algorithm determines the lexicographic maximum x̄(P); e_j, j ∈ N, denotes the j-th unit vector of Z^n.
(13.13) Greedy algorithm
Step 1: x := 0; j := 1.
Step 2: If x + e_j ∈ P then x := x + e_j.
Step 3: If j = n then stop; j := j + 1 and go to step 2.

If we assume that an efficient procedure is known for checking x + e_j ∈ P in step 2 then the greedy algorithm is an efficient procedure for the determination of x̄(P). It is easy to modify the greedy algorithm such that the final x is equal to x̄⁺ or x̄^k.
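A direct implementation of (13.13) needs only the independence test used in step 2. The following Python sketch is our own illustration (the oracle name is_independent is an assumption, not notation of the text); it also shows the two modifications just mentioned, which yield x̄⁺ and x̄^k provided the components are arranged as in (13.12.1), i.e. a_1 ≥ a_2 ≥ ... ≥ a_n.

def greedy(n, is_independent):
    # Greedy algorithm (13.13): scan j = 1,...,n and keep every element that
    # preserves independence; the result is the support of x̄(P).
    I = set()
    for j in range(n):
        if is_independent(I | {j}):   # test x + e_j ∈ P
            I.add(j)
    return I

def greedy_plus(n, is_independent, a, e, leq):
    # Modification yielding x̄⁺ (under a_1 >= ... >= a_n): skip elements with a_j < e.
    I = set()
    for j in range(n):
        if leq(e, a[j]) and is_independent(I | {j}):
            I.add(j)
    return I

def greedy_k(n, is_independent, k):
    # Modification yielding x̄^k: stop as soon as k elements have been collected.
    I = set()
    for j in range(n):
        if len(I) == k:
            break
        if is_independent(I | {j}):
            I.add(j)
    return I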
Condition (13.12.1) can obviously be achieved by rearranging the components of vectors in Z^n. Let Π denote the set of all permutations π: N → N. For π ∈ Π we define a mapping π̂: Z^n → Z^n by

π̂(x)_j := x_{π^{-1}(j)}  for all j ∈ N,

which permutes the components of x ∈ Z^n accordingly (π^{-1} is the inverse permutation of π). If π is the necessary rearrangement to achieve (13.12.1) then π̂[P] := {π̂(x) | x ∈ P} is the corresponding new independence system. We remark that π̂[P] does not necessarily have a maximum with respect to the partial ordering (13.7) if P satisfies (13.12.2). If π̂[P] satisfies (13.12.2) for all π ∈ Π then the lexicographically maximum solution x̄(π̂[P]) leads to an optimal solution of (13.4) for an arbitrary choice of a ∈ H^n. Such independence systems are matroids, as the following theorem shows, which can be found in ZIMMERMANN, U. [1977]. Mainly the same result is given in GALE [1968]; the difference lies in the fact that GALE explicitly uses assigned real weights.
(13.14) Theorem
Let T(P) denote the independence system corresponding to P. Then the following statements are equivalent:
(1)  T(P) is a matroid,
(2)  for all π ∈ Π there exists a maximum of π̂[P] with respect to the partial ordering (13.7).

Proof: (1) ⟹ (2). If T(P) is a matroid then T(π̂[P]) is a matroid since the definition of a matroid does not depend on a permutation of the coordinates in P. Thus it suffices to show that P contains a maximum with respect to the partial ordering (13.7). Let x̄ = x̄(P) be the lexicographical maximum of P and let y ∈ P. Suppose y ⋠ x̄. Then there exists a smallest k ∈ N such that

Σ_{j=1}^k y_j > Σ_{j=1}^k x̄_j.

Let J := {j | y_j = 1, j ≤ k} and I := {j | x̄_j = 1, j ≤ k}. Then |J| > |I| and due to (1.24) there exists an independent set I ∪ {μ} for some μ ∈ J \ I. The incidence vector x' of I ∪ {μ} is lexicographically greater than x̄, contrary to the choice of x̄.
(2) ⟹ (1). Assume that T(P) is not a matroid. Then there exist two independent sets I, J with |I| < |J| such that for all j ∈ J \ I the set I ∪ {j} is dependent. We choose π ∈ Π such that

π[I] < π[J \ I] < π[N \ (I ∪ J)].

Let u denote the incidence vector of J. If π̂[P] contains a maximum π̂(y) with respect to the partial ordering (13.7) then proposition (13.8) shows that the lexicographical maximum x̄ = x̄(π̂[P]) satisfies x̄ = π̂(y). Then y_i = 1 for all i ∈ I and y_i = 0 for all i ∈ J \ I. Therefore π̂(u) ⋠ π̂(y), which implies that π̂[P] does not contain a maximum with respect to the partial ordering (13.7).
Due to the importance of the lexicographical maximum x̄(P) we describe a second procedure for its determination. Let ≤ be a partial ordering on Z^n. Then we define ≤' on Z^n by

(13.15)  x ≤' y  :⟺  σ̂(y) ≤ σ̂(x)

for σ ∈ Π with σ(i) = n − i + 1 for all i ∈ N. In particular we may consider the partial orderings ≤' and ≼'. We remark that x ≤' y implies x ≼' y.

(13.16) Modified greedy algorithm
Step 1: x := 0; j := n.
Step 2: If x + e_j ∈ P then x := x + e_j.
Step 3: If j = 1 then stop; j := j − 1 and go to step 2.

The final vector x in this algorithm is denoted by x'(P). Since the application of this algorithm to P is equivalent to the application of the greedy algorithm to σ̂[P] we get x'(P) = σ̂(x̄(σ̂[P])). (13.15) shows that x' is the minimum of P with respect to ≼'.
Let r := max{Σ_j x_j | x ∈ P} and define

P* := {y ∈ {0,1}^n | ∃ x ∈ P_r : y ≤ 1 − x}.

Then P* corresponds to a certain independence system T(P*). In particular, if T(P) is a matroid then T(P*) is the dual matroid (cf. chapter 1). Therefore we call P* the dual of P. Let s := max{Σ_j y_j | y ∈ P*}. Then r + s = n.
(13.17) Proposition
If P has a maximum with respect to the partial ordering (13.7) then x̄(P) = 1 − x'(P*).

Proof: Since x̄(P) is the maximum of P with respect to the partial ordering (13.7), x̄ ∈ P_r. Thus 1 − x̄ ∈ P*_s. Let x ∈ P_r. Then x ≼ x̄ means by definition

Σ_{j=1}^k x_j ≤ Σ_{j=1}^k x̄_j  for all k ∈ N.

Further Σ_{j=1}^n x_j = Σ_{j=1}^n x̄_j = r. Thus

Σ_{j=k+1}^n x̄_j ≤ Σ_{j=k+1}^n x_j  for all k = 0,1,...,n−1,

which shows x ≼' x̄. Hence x̄ is the maximum of P_r with respect to ≼'. Thus 1 − x̄ ≼' 1 − x for all x ∈ P_r, and 1 − x̄ is the minimum of P*_s with respect to ≼'. Let y ∈ P* \ P*_s. Then there exists x ∈ P*_s such that y ≤ x, which implies x ≼' y. Therefore 1 − x̄ is the minimum of P* with respect to ≼'. Hence 1 − x̄ = x'(P*).
Proposition (13.17) shows that the lexicographical maximum x̄(P) can be derived as the complement of x'(P*), which is determined by (13.16) applied to P*. The application of the modified greedy algorithm to P* is called the dual greedy algorithm. This is quite a different method for the solution of the optimization problems (13.4) and (13.6). In particular, checking x + e_j ∈ P in step 2 of the greedy algorithm is replaced by checking x + e_j ∈ P* in the dual greedy algorithm. Therefore the choice of the applied method will depend on the computational complexity of the respective checking procedures.
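The dual greedy algorithm admits an equally short sketch; the Python fragment below is our own illustration and assumes an oracle is_independent_dual for membership in P*. It runs the modified greedy algorithm (13.16) on the dual system and returns the complement, which by proposition (13.17) equals x̄(P) whenever P has a maximum with respect to (13.7).

def modified_greedy(n, is_independent):
    # Modified greedy algorithm (13.16): scan the indices n,...,1.
    I = set()
    for j in reversed(range(n)):
        if is_independent(I | {j}):   # test x + e_j ∈ P
            I.add(j)
    return I

def lex_max_via_dual_greedy(n, is_independent_dual):
    # Dual greedy: apply (13.16) to P* and complement the result (prop. 13.17).
    J = modified_greedy(n, is_independent_dual)
    return set(range(n)) - J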
For a more detailed discussion of properties of related partial orders we refer to ZIMMERMANN, U. ([1976], [1977]). The greedy algorithm has been treated before by many authors. KRUSKAL [1956] applied it to the shortest spanning tree problem; RADO [1957] realized that an extension to matroids is possible. WELSH [1968] and EDMONDS [1971] discussed it in the context of matroids. The name 'greedy algorithm' is due to EDMONDS. Related problems are discussed in EULER [1977]; a combinatorial generalization is considered in FAIGLE [1979]. For further literature, in particular for the analysis of computational complexity and for related heuristics, we refer to the bibliographies of KASTNING [1976] and HAUSMANN [1978]. For examples of matroids we refer to LAWLER [1976] and WELSH [1976].
The following allocation problem is solved in ALLINEY, BARNABEI and PEZZOLI [1980] for nonnegative weights in the group (ℝ,+,≤). Its solution can be determined using a special (simpler) form of the greedy algorithm (13.13).

Example (Allocation Problem)
A system with m units (denoted as set T) is requested to serve n users (denoted as set S). It is assumed that each user can engage at most one unit. Each user i requires a given amount a_i of resources and each unit j can satisfy at most a given required amount b_j of resources. Thus user i can be assigned to unit j iff a_i ≤ b_j. Let G = (S ∪ T, L) denote the bipartite graph with vertex sets S and T and edge set L where (i,j) ∈ L iff a_i ≤ b_j. A matching I is a subset of L such that no two different edges in I have a common endpoint. A set A ⊆ S is called assignable if there exists a matching I such that A = {i | (i,j) ∈ I}.

The set of all assignable sets is the independence system of a matroid (cf. EDMONDS and FULKERSON [1965] or WELSH [1976]). In particular, all maximal assignable sets have the same cardinality. We attach a weight c_i ∈ H to each user i where (H,*,≤) is a linearly ordered commutative monoid. (13.12) and (13.14) show that we can find an optimal solution of the problem max{x^T ⊙ c | x ∈ P} by means of the greedy algorithm (13.13) (P denotes the set of all incidence vectors of assignable sets). Due to the special structure of the matroid it is possible to simplify the independence test (x + e_j ∈ P) in step 2 of the greedy algorithm. In fact, we will give a separate proof of the validity of the greedy algorithm for the independence system T(P). Together with theorem (13.14) we find that T(P) is a matroid.
Let c_i = max{c_k | k ∈ S}. If T_i := {j ∈ T | a_i ≤ b_j} = ∅ then there exists no unit which can be assigned to user i. Then G' denotes the subgraph generated by (S \ {i}) ∪ T. If we denote the optimal value of an assignable set of G by z(G) then

(1)  z(G') = z(G).

If T_i ≠ ∅ then let b_j = min{b_k | k ∈ T_i}. Let I denote a matching corresponding to an optimal assignable set {ν | (ν,μ) ∈ I}. Every such matching is called an "optimal matching". We claim that we can assume w.l.o.g. (i,j) ∈ I. Let V := {ν, μ | (ν,μ) ∈ I}. If i,j ∉ V then I ∪ {(i,j)} is an optimal matching, too. If i ∈ V, j ∉ V then we replace (i,k) ∈ I by (i,j). If i ∉ V, j ∈ V then we replace (k,j) ∈ I by (i,j); since c_i is of maximum value the new matching is optimal, too. If i,j ∈ V but (i,j) ∉ I then we replace (i,ν), (μ,j) ∈ I by (i,j) and (μ,ν). We remark that (μ,ν) ∈ L since a_μ ≤ b_j ≤ b_ν. Thus we can assume (i,j) ∈ I.

Let G' denote the subgraph generated by (S \ {i}) ∪ (T \ {j}). Then I \ {(i,j)} is a matching in G'. Therefore z(G) ≤ z(G') * c_i. On the other hand, if I' is an optimal matching in G' then I' ∪ {(i,j)} is a matching in G and thus z(G') * c_i ≤ z(G). Therefore in the case T_i ≠ ∅ we find

(2)  z(G') * c_i = z(G).
show the validity of the following variant of the
greedy algorithm. W e assume c 1 > c2 >
0;
Step 1
I =
Step 2
If T.:= {b
T:= { 1 , 2 ,
i:= 1 ;
k
1 -
I:= ~ U { ( i , j ) l
;
j
=
Cn.
...,m}.
I a . < bk, k E T }
find j E T . such that b
T:= ~'{j}.
. - -->
=
0 then go to step
min Ti
;
3;
313
Algebraic Independent Set Problems
Step 3
If i = n then stop; i:= i
+
1 and go to step 2 .
At termination I is an optimal matching which describes the assignment of the users i of an optimal assignable set A* = {i | (i,j) ∈ I} to units j. It should be clear (cf. theorem (13.12)) that {i | (i,j) ∈ I} is optimal among all assignable sets of the same cardinality at any stage of the algorithm and for arbitrary c_i ∈ H (i ∈ S) satisfying c_1 ≥ c_2 ≥ ... ≥ c_n. In particular, we may consider (ℝ,+,≤) and (ℝ ∪ {∞}, min, ≤). Then for c_i ∈ ℝ, i ∈ S, we find

Σ_{i ∈ A*} c_i ≥ Σ_{i ∈ A} c_i  and  min_{i ∈ A*} c_i ≥ min_{i ∈ A} c_i

for all assignable sets A with |A| = |A*|; i.e. the sum as well as the minimum of all weights is maximized simultaneously.
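The variant of the greedy algorithm for the allocation problem can be written down directly; the Python sketch below is our own illustration (the identifiers are not from the text) and assumes the users are already indexed so that c_1 ≥ c_2 ≥ ... ≥ c_n; each user is assigned the feasible unit of smallest capacity, exactly as in steps 1 - 3 above.

def allocate(a, b):
    # Greedy solution of the allocation problem: a[i] are the requirements of
    # the users (assumed sorted by non-increasing weight c_i), b[j] the unit
    # capacities; returns an optimal matching as a list of pairs (user, unit).
    free = sorted(range(len(b)), key=lambda j: b[j])        # free units by capacity
    matching = []
    for i in range(len(a)):
        j = next((j for j in free if a[i] <= b[j]), None)   # b_j = min T_i
        if j is not None:
            matching.append((i, j))
            free.remove(j)
    return matching

# allocate([2, 5, 3], [4, 3, 6]) == [(0, 1), (1, 2), (2, 0)]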
In the remaining part of this chapter we discuss linear algebraic optimization problems for combinatorial structures which, in general, do not contain a maximum with respect to the partial ordering (13.7). We assume that H is a weakly cancellative d-monoid. Hence w.l.o.g. H is an extended semimodule over Z_+ (cf. 5.16 and chapter 6). From the discussion in chapter 11 we know that max- and min-problems are not equivalent in d-monoids due to the asymmetry in the divisor rule (4.1). In chapter 11 we developed reduction methods for both types of algebraic optimization problems which reduce the original problem to certain equivalent problems in irreducible sub-d-monoids of H; since H is weakly cancellative these sub-d-monoids are parts of groups. Let (H_λ; λ ∈ Λ) denote the ordinal decomposition of H. Then

λ(x^T ⊙ a) = max({λ(a_j) | x_j = 1} ∪ {λ_0})

for an incidence vector x of an independent set (λ_0 := min Λ).
According to (11.21) and (11.54) the optimization problems (13.3) - (13.6) are reduced to optimization problems in H_μ with

(13.18)  μ_1 = min{λ(x^T ⊙ a) | x ∈ P},
(13.19)  μ_2 = max{λ(x^T ⊙ a) | x ∈ P},
(13.20)  μ_3 = min{λ(x^T ⊙ a) | x ∈ P_k},
(13.21)  μ_4 = max{λ(x^T ⊙ a) | x ∈ P_k}.

Since 0 ∈ P the reduction (13.18) of (13.3) is trivial: μ_1 = λ_0. The reduction (13.19) of (13.4) is also simple; since the corresponding independence system is normal the unit vectors e_j, j ∈ N, are elements of P. Therefore

μ_2 = max{λ(a_j) | j ∈ N}.
For the determination of μ_3 we assume that it is possible to compute the value of the rank function r: P(N) → Z_+ for a given subset of N by means of an efficient (polynomial time) procedure. This is a reasonable assumption since we will consider only such independence systems for which the classical combinatorial optimization problems can efficiently be solved; then an efficient procedure for the determination of an independent set of maximum cardinality in a given subset of N is known, too. In chapter 1 the closure operator σ: P(N) → P(N) is defined by

σ(I) := {j ∈ N | r(I ∪ {j}) = r(I)}.

A subset I of N is called closed if σ(I) = I. Clearly, if the rank function can efficiently be computed then we may efficiently construct a closed set J containing a given set I. J is called a minimal closed cover of I. Every set I ⊆ N has at least one minimal closed cover J, but in general J is not unique. Let s := k − r(J) ≥ 0 (cf. 13.5 and 13.6). Then we define the best remainder set A_J by

(13.22)  A_J := ∅ if s = 0, and A_J := {i_1, i_2, ..., i_s} otherwise,

where N \ J = {i_1, i_2, ..., i_t} and a_{i_1} ≤ a_{i_2} ≤ ... ≤ a_{i_t}. The following method for the determination of μ_3 is a refinement of the threshold method in EDMONDS and FULKERSON [1970].
(13.23) Reduction method for (13.5)
Step 1: μ := max{λ(a_j) | j ∈ A_∅}; K := {j ∈ N | λ(a_j) ≤ μ}.
Step 2: If r(K) ≥ k then stop; determine a minimal closed cover J of K.
Step 3: μ := max{λ(a_j) | j ∈ A_J}; K := {j ∈ N | λ(a_j) ≤ μ}; go to step 2.
In the performance of (13.23) we assume k ≤ r(N). Then the method terminates in O(n) steps. A similar method for the derivation of thresholds for Boolean optimization problems in linearly ordered commutative monoids is proposed in ZIMMERMANN, U. [1978c].
(13.24) Proposition
The final parameter μ in (13.23) is equal to μ_3 in (13.20).

Proof: Denote the value of μ before the last revision in step 3 by μ̄. The corresponding sets are denoted by K̄, J̄ and A_J̄. Then r(K̄) < k. Since P_k ≠ ∅ we conclude μ_3 > μ̄. Further r(J̄) = r(K̄) shows that an independent set with k elements has at least s = k − r(K̄) elements in N \ J̄. Thus

μ_3 ≥ max{λ(a_j) | j ∈ A_J̄},

which implies μ_3 ≥ μ. Now r(K) ≥ k shows μ_3 ≤ μ.
We remark that for bottleneck objectives (cf. 11.58) the reduction method (13.23) leads to the optimal value of (13.5) since λ(x^T ⊙ a) = x^T ⊙ a in this case.
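Assuming a rank oracle and a procedure that returns a minimal closed cover, the threshold search (13.23) can be sketched as follows. The Python fragment is our own illustration; rank, closure and lam (standing for the index function λ) are assumed oracles, and the weights are assumed to be directly comparable so that the best remainder set (13.22) can be formed by sorting.

def reduce_min_k(a, lam, k, rank, closure):
    # Reduction method (13.23): returns μ_3 = min{λ(x^T ⊙ a) | x ∈ P_k}.
    # rank(S) is the rank r(S), closure(S) a minimal closed cover of S;
    # the method assumes k <= r(N).
    n = len(a)
    def best_remainder(J, s):
        rest = sorted((j for j in range(n) if j not in J), key=lambda j: a[j])
        return rest[:s]                       # the best remainder set A_J (13.22)
    J = set()                                 # step 1 starts with J = ∅
    while True:
        mu = max(lam(a[j]) for j in best_remainder(J, k - rank(J)))
        K = {j for j in range(n) if lam(a[j]) <= mu}
        if rank(K) >= k:                      # step 2
            return mu
        J = closure(K)                        # step 3 via a minimal closed cover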
In the determination of μ_4 we consider certain systems F^j, j ∈ N, derived from the underlying independence system F. We define

(13.25)  F^j := {I ∈ F | I ∪ {j} ∈ F, j ∉ I}  for j ∈ N.
Obviously F^j is an independence system for each j ∈ N. Its rank function is denoted by r^j. W.l.o.g. we assume a_1 ≥ a_2 ≥ ... ≥ a_n in the following method.

(13.26) Reduction method for (13.6)
Step 1: w := 1.
Step 2: If r^w(N) ≥ k − 1 then stop (μ = λ(a_w)).
Step 3: w := w + 1; go to step 2.

Finiteness of this method is obvious.
(13.27) Proposition
The final parameter w in (13.26) satisfies μ_4 = λ(a_w) for μ_4 in (13.21).

Proof: Let w denote the final parameter in (13.26). Then there exists an independent set I ∈ F of cardinality k such that w ∈ I. Therefore λ(a_w) ≤ μ_4. If w = 1 then equality holds. Otherwise r^j(N) < k − 1 for all j < w shows that there exists no I ∈ F of cardinality k such that j ∈ I for some j < w. Thus μ_4 ≤ λ(a_w).
Again, we remark that for bottleneck objectives the reduction method (13.26) leads to an optimal value of (13.6).
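The search (13.26) only needs the ranks r^w(N) of the derived systems F^w; in the Python sketch below (our own illustration) these ranks are provided by an assumed oracle rank_contraction, and the weights are assumed to be arranged so that a_1 ≥ a_2 ≥ ... ≥ a_n.

def reduce_max_k(a, lam, k, rank_contraction):
    # Reduction method (13.26): returns μ_4 = max{λ(x^T ⊙ a) | x ∈ P_k},
    # assuming a_1 >= ... >= a_n and P_k ≠ ∅; rank_contraction(w) = r^w(N).
    for w in range(len(a)):                   # steps 2/3: w = 1, 2, ...
        if rank_contraction(w) >= k - 1:
            return lam(a[w])                  # stop with μ = λ(a_w)
    raise ValueError("P_k is empty")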
After determination of the corresponding index μ the algebraic optimization problems (13.3) - (13.6) are reduced according to (11.23) and (11.55). The sets of feasible solutions P and P_k are replaced by

P_M := {x_M | x ∈ P and x_j = 0 for all j ∉ M},
(P_k)_M := {x_M | x ∈ P_k and x_j = 0 for all j ∉ M}

with M = M(μ) := {j ∈ N | λ(a_j) ≤ μ}. If F denotes the corresponding independence system then the reduced sets of feasible solutions correspond to the restriction F_M of F to M, i.e.

(13.28)  F_M := {I ∈ F | I ⊆ M},

which is again a normal independence system. Theorems (11.25) and (11.56) show that (13.3) - (13.6) are equivalent to the following reduced linear algebraic optimization problems

(13.29)  min{x^T ⊙ a(μ)_M | x ∈ P_M},
(13.30)  max{x^T ⊙ a(μ)_M | x ∈ P_M},
(13.31)  min{x^T ⊙ a(μ)_M | x ∈ (P_k)_M},
(13.32)  max{x^T ⊙ a(μ)_M | x ∈ (P_k)_M}

with respect to M = M(μ) and μ = μ_i, i = 1,2,3,4, defined by (13.18) - (13.21). Since H is weakly cancellative, (13.29) - (13.32) are problems in the extended module G_μ over Z for μ = μ_i, i = 1,2,3,4.
Next we consider two particular independence systems. Let F denote the intersection F_1 ∩ F_2 of two matroids where F_1 and F_2 are the corresponding independence systems. An element I ∈ F is called an intersection. Then (13.3) - (13.6) are called algebraic matroid intersection problems. In particular, (13.5) and (13.6) are called algebraic matroid k-intersection problems. Let (V,N) denote a graph. Then I ⊆ N is called a matching if no two different edges in I have a common endpoint. We remark that in a graph (i,j) and (j,i) denote the same edge. The set of all matchings is an independence system F. Then (13.3) - (13.6) are called algebraic matching problems. In particular, (13.5) and (13.6) are called algebraic k-matching problems.
The classical matroid intersection problem is well-solved; efficient solution methods are developed in LAWLER ([1973], [1976]), IRI and TOMIZAWA [1976], FUJISHIGE [1977] and EDMONDS [1979]. An augmenting path method and the primal dual method are described in the textbook of LAWLER [1976]; FUJISHIGE [1977] considers a primal method. The classical matching problem is well-solved, too; an efficient primal dual method for its solution is given in EDMONDS [1965] and is described in the textbook of LAWLER [1976]. A primal method is considered in CUNNINGHAM and MARSH [1976]. The necessary modification of the primal dual method for the classical versions of (13.5) and (13.6) is discussed in WHITE [1967]. For the classical matroid intersection problem such a modification is not necessary since in the shortest augmenting path method optimal intersections of cardinality 0,1,2,... are generated subsequently (cf. LAWLER [1976]).
In particular, efficient procedures for the determination of a matching (matroid intersection) of maximum cardinality in a given subset N' ⊆ N are described in LAWLER [1976]. Thus the evaluation of the rank function in step 2 of the reduction method (13.23) is possible in polynomial time for both problems. Let F be the set of all matchings in the graph (V,N). Then F^j (cf. 13.25) is the set of all matchings in the graph (V,N_j) where N_j is the set of all edges which have no endpoint in common with j (j ∈ N). Let F = F_1 ∩ F_2 be the set of all intersections. Then F^j = F_1^j ∩ F_2^j. Further, F_1^j (and F_2^j) is the set of all independent sets of a certain matroid (called the contraction of the original matroid to N \ {j}, cf. WELSH [1976]). Thus the evaluation of the rank function r^v, v ∈ N, in step 2 of the reduction method (13.26) is possible in polynomial time for both problems.

The restriction F_M (cf. 13.28) leads to the set of all matchings in the graph (V,M) and to the intersection (F_1)_M ∩ (F_2)_M of the two restricted matroids. Therefore the reduced problems (13.29) - (13.32) are algebraic matching (matroid intersection) problems provided that (13.3) - (13.6) are algebraic matching (matroid intersection) problems.

It remains to solve the reduced problems in the respective modules. All methods for the solution of the classical matroid intersection problem are valid and finite in modules. A reformulation of these methods in groups consists only in replacing the usual addition of real numbers by the internal composition in the underlying group and in replacing the usual linear ordering of the real numbers by the linear ordering in the underlying group.
Optimality of the augmenting path method is proved by KROGDAHL (cf. LAWLER [1976]) using only combinatorial arguments and the group structure (i.e. mainly cancellation arguments) of the real additive group. Thus his proof remains valid in modules. Optimality of the primal method in FUJISHIGE [1977] is based on similar arguments which remain valid in modules, too. Optimality of the primal dual method is based on the classical duality principles. From chapter 11 we know how to apply similar arguments in modules. In the following we develop these arguments explicitly; for a detailed description of the primal dual method we refer to LAWLER [1976].

Let Ax ≤ b and Ãx ≤ b̃ be the constraint systems (cf. 13.2) with respect to the restricted matroids. Then

(13.33)  P_M = {x ∈ Z_+^M | Ax ≤ b, Ãx ≤ b̃}.

In modules it suffices to consider max-problems since a min-problem can be transformed into a max-problem replacing the cost coefficients by inverse cost coefficients. We assign dual variables u_i and v_k to the closed sets i and k of the restricted matroids. Then (u,v) is dual feasible if

(13.34)  A^T ⊙ u * Ã^T ⊙ v ≥ ã,  u_i ≥ e,  v_k ≥ e,

where ã = a(μ)_M and where μ denotes the respective index of the reduction. The set of all dual feasible (u,v) is denoted by D_M. Then the algebraic dual of (13.30) is

(13.35)  min{b^T ⊙ u * b̃^T ⊙ v | (u,v) ∈ D_M}.

In the usual manner we find weak duality, i.e.

x^T ⊙ ã ≤ b^T ⊙ u * b̃^T ⊙ v.
The corresponding complementarity conditions are

(13.36)  x_j = 1  ⟹  (A^T ⊙ u * Ã^T ⊙ v)_j = ã_j,
(13.37)  u_i > e  ⟹  (Ax)_i = b_i,
(13.38)  v_k > e  ⟹  (Ãx)_k = b̃_k,

for all j ∈ M and all closed sets i and k. If ã_j ≤ e for all j ∈ M then x ≡ 0, u ≡ e, v ≡ e is an optimal pair of primal and dual feasible solutions. Otherwise the initial solution x ≡ 0, v ≡ e, u_i = e for all i ≠ M and

u_M := max{ã_j | j ∈ M}

is primal and dual feasible. Further, all complementarity conditions with the possible exception of

(13.39)  u_M > e  ⟹  (Ax)_M = b_M

are satisfied. We remark that b_M is the maximum cardinality of an independent set in one of the restricted matroids. Such a pair (x;u,v) is called compatible, similarly to the primal dual method for network flows. Collecting all equations (13.36) after external composition with x_j we find

(13.40)  x^T ⊙ ã * (b_M − Σ_{j∈M} x_j) ⊙ u_M = b^T ⊙ u * b̃^T ⊙ v

for compatible pairs.
The primal dual method proceeds in stages. At each stage either the primal solution is augmented or the values of the dual variables are revised. At each stage no more than 2|M| dual variables are permitted to be non-zero. Throughout the performance of the method the current pair (x;u,v) is compatible. This is achieved by an alternate solution of

(13.41)  max{x^T ⊙ ã | x ∈ P_M, (x;u,v) compatible}

for a fixed dual feasible solution (u,v) and of

(13.42)  min{b^T ⊙ u * b̃^T ⊙ v | (u,v) ∈ D_M, (x;u,v) compatible}

for a fixed primal feasible solution x. From (13.40) we conclude that (13.41) is equivalent to

(13.43)  max{Σ_{j∈M} x_j | x ∈ P_M, (x;u,v) compatible}

and that (13.42) is equivalent to

(13.44)  min{u_M | (u,v) ∈ D_M, (x;u,v) compatible}.

An alternate solution of these problems is constructed in the same way as described in LAWLER [1976] for the classical case. In particular, the method is of polynomial time in the number of usual operations, *-compositions and ≤-comparisons. The final compatible pair (x;u,v) is complementary. Thus it satisfies x^T ⊙ ã = b^T ⊙ u * b̃^T ⊙ v; then weak duality shows that (x;u,v) is an optimal pair. Therefore this method provides a constructive proof of a duality theorem for algebraic matroid intersection problems.
Let Cx ≤ c and C̃x ≤ c̃ be the constraint systems (cf. 13.2) with respect to the two matroids considered. Then

(13.45)  P = {x ∈ Z_+^n | Cx ≤ c, C̃x ≤ c̃}.

A closed set I of one of the matroids with I ⊆ M is closed with respect to the restricted matroid, too. We remark that c_I = b_I (c̃_I = b̃_I) for all closed sets I ⊆ M.

We assign dual variables u_i, v_k to the closed sets i and k of the matroids. Then (u,v) is dual feasible with respect to

(13.4')  max{x^T ⊙ a(μ_2) | x ∈ P}

if

(13.46)  C^T ⊙ u * C̃^T ⊙ v ≥ a(μ_2),  u_i ≥ e,  v_k ≥ e.

The set of all dual feasible (u,v) is denoted by D_2. Then the algebraic dual of (13.4') is

min{c^T ⊙ u * c̃^T ⊙ v | (u,v) ∈ D_2}.
Algebraic dual programs with respect to

(13.3')  min{x^T ⊙ a(μ_1) | x ∈ P}

are considered in chapter 11. (u,v) is strongly dual feasible with respect to (13.3') if

C^T ⊙ u * C̃^T ⊙ v ≥ a(μ_1),  λ(u_i) ≤ μ_1,  λ(v_k) ≤ μ_1,  e ≤ u_i,  e ≤ v_k.

An objective function is defined according to (11.19).
(13.47) Theorem (Matroid intersection duality theorem)
Let H be a weakly cancellative d-monoid. Hence H is an extended semimodule over Z_+. There exist optimal feasible pairs (x;u,v) for the algebraic matroid intersection problems (13.3') and (13.4') which are complementary and satisfy

(1)  e = x^T ⊙ a * c^T ⊙ u * c̃^T ⊙ v  (for 13.3'),
(2)  (−c)^T ⊙ u(μ_1) * (−c̃)^T ⊙ v(μ_1) = x^T ⊙ a  (for 13.3'),
(3)  c^T ⊙ u * c̃^T ⊙ v = x^T ⊙ a  (for 13.4').

Proof: (1) and (2). The min-problem (13.29) is transformed into a max-problem of the form (13.30) replacing a_j(μ_1) by its inverse in the group G_{μ_1} for all j ∈ M = M(μ_1). Let (x̄;ū,v̄) denote the final pair generated in the application of the primal dual method to this max-problem. Then for j ∈ M

−a_j(μ_1) ≤ (A^T ⊙ ū * Ã^T ⊙ v̄)_j,

where A and Ã are the submatrices of C and C̃ with columns j ∈ M and with the rows of the closed sets I ⊆ M and the row σ_1(M) (σ_2(M), respectively), σ_1 and σ_2 denoting the closure functions of the matroids F_1 and F_2. Let

x_j := x̄_j if j ∈ M, and x_j := 0 otherwise;

then x ∈ P. Further let

u_i := ū_i if i ⊆ M,  u_i := ū_M if i = σ_1(M),  u_i := e otherwise,

and let

v_k := v̄_k if k ⊆ M,  v_k := v̄_M if k = σ_2(M),  v_k := e otherwise.

Then

a_j(μ_1) * (C^T ⊙ u * C̃^T ⊙ v)_j ≥ e  for all j ∈ M.

Now λ(a_j) > μ_1 for j ∉ M shows

a_j(μ_1) ≤ e ≤ (C^T ⊙ u * C̃^T ⊙ v)_j  for all j ∉ M.

Therefore (u,v) is strongly dual feasible, (x;u,v) is complementary and (1) and (2) are satisfied.
(3) Let (x;u,v) denote the final pair generated in the application of the primal dual method to (13.30). Since N = M(μ_2) we find that (u,v) satisfies (13.46). Therefore (u,v) is dual feasible, (x;u,v) is complementary and (3) is satisfied.
A valuable property of compatible pairs leads to the solution of algebraic matroid k-intersection problems. Again we consider a solution in the respective group. Then

(P_k)_M = {x ∈ Z_+^M | Ax ≤ b, Ãx ≤ b̃, Σ_{j∈M} x_j = k}.

We assign a further dual variable λ to Σ_j x_j = k. Then (u,v,λ) is dual feasible if

(13.48)  A^T ⊙ u * Ã^T ⊙ v * [λ] ≥ ã,  u_i ≥ e,  v_k ≥ e,

where [λ] denotes the vector with all components equal to λ. The dual variable λ is unrestricted in sign in the respective group. The set of all dual feasible (u,v,λ) is denoted by (D_k)_M. Then the algebraic dual of (13.32) is

(13.49)  min{b^T ⊙ u * b̃^T ⊙ v * (k ⊙ λ) | (u,v,λ) ∈ (D_k)_M}.

Again we find weak duality

x^T ⊙ ã ≤ b^T ⊙ u * b̃^T ⊙ v * (k ⊙ λ)

for all primal and dual feasible pairs (x;u,v,λ). We apply the primal dual method to (13.30), but now for μ := μ_4 and M := M(μ_4). A sequence of compatible pairs (x^ν; u^ν, v^ν), ν = 0,1,..., is generated with Σ_{j∈M} x_j^ν = ν. We modify the dual revision procedure slightly in order to admit negative u_M (cf. LAWLER [1976], p. 347: let δ = min{δ_v, δ_w}). The stop-condition u_M = 0 is replaced by Σ_j x_j = k.
Then it may happen that u_M becomes negative during the performance of the primal dual method. All other compatibility conditions remain valid. Now replace u by u' defined by

u'_i := u_i if i ≠ M, and u'_i := e if i = M,

and let λ := u_M. Hence (x;u',v,λ) satisfies (13.48), and (x;u',v,λ) is an optimal pair for (13.32) and (13.49) if x is primal feasible, i.e. if Σ_j x_j = k. Similarly to theorem (13.47) we find a strong duality theorem. From (13.45) we get

P_k = {x ∈ Z_+^n | Cx ≤ c, C̃x ≤ c̃, Σ_{j∈N} x_j = k}.
a n d vk t o t h e c l o s e d s e t s o f t h e
m a t r o i d s a n d two f u r t h e r d u a l v a r i a b l e s A t o Ex
d e f i n e d by
t o xx
I n t h e c a s e o f a max-problem w i t h
> k and A+
j -
= M\N(p4)
$
@
w e have t o a d j o i n t h e i n e q u a l i t y
< o
t - X
j -
M
with assigned dual variable Y. PI:= k {X€Pk[ 1 ; (u,v,A+,A-,Y)'is (13.6')
Then x,
f 01.
dual feasible with respect t o max{x
T
aa(p4)
I
XEP;I
if
ui where
Y
2
e , Vk
2
e l A-
2
e,
denotes t h e vector with j-th
A+
:e ,
Y
2
e
I
c o m p o n e n t y f o r j €;
and
328
Linear Algebraic Optimizarwn
j-th component e otherwise. Without this additional dual variable y which does not appear in the objective function it is necessary that at least one of the other dual variables has the index A(max(a.1 7
j
€:I)
> p4.
This leads to a duali-
ty gap which can be avoided by the introduction of y:= max{a j
€GI.
I
j
The set o f all dual feasible solutions according to
(13.50)
is denoted by D;.
f: Di
H can be defined with respect to (13.6') similarly as
+
in ( 1 1 . 1 9 ) .
Let a:= k O A -
6:= c T O u
Then a dual objective function
and
*
-T c Ov
*
(kOA+)
.
Then f is defined by
if a where
5 B or m
A(a) = A ( E ) = Ao.
Otherwise let f(u,v,A+,A-,y):=
-
denotes a possibly adjolnt maximum o f H. The dual of
(13.6') is
Dual programs with respect to
are defined as in chapter 1 1 .
(u,v) is strongly dual feasible
if
e
5 A- , e 5
A+
,
e
5 u i , e 5 vk
.
An objective function is defined according to ( 1 1 . 1 9 ) .
329
Algebmic Independent Set Pmblerns
(13.51) Theorem (Matroid k-intersection duality theorem)
Let H be a weakly cancellative d-monoid. Hence H is an extended semimodule over Z_+. There exist optimal feasible pairs (x;u,v,λ_+,λ_-) and (x;u,v,λ_+,λ_-,γ) for the algebraic matroid k-intersection problems (13.5') and (13.6') which are complementary and satisfy

(1)  k ⊙ λ_- = x^T ⊙ a * c^T ⊙ u * c̃^T ⊙ v * (k ⊙ λ_+)  (for 13.5'),
(2)  k ⊙ δ(μ_3) * (−c)^T ⊙ u(μ_3) * (−c̃)^T ⊙ v(μ_3) = x^T ⊙ a  (for 13.5'),
(3)  x^T ⊙ a * (k ⊙ λ_-) = c^T ⊙ u * c̃^T ⊙ v * (k ⊙ λ_+)  (for 13.6'),
(4)  x^T ⊙ a = c^T ⊙ u * c̃^T ⊙ v * (k ⊙ ε(μ_4))  (for 13.6'),

with δ(μ_3) := λ_- * (λ_+)^{-1} and ε(μ_4) := λ_+ * (λ_-)^{-1}.

Proof: The proof of (1) and (2) is quite similar to the proof of (1) and (2) in theorem (13.47). The value of the variable λ in the final pair generated by the primal dual method is assigned to λ_+ or λ_- in an obvious manner. The proof of (3) and (4) follows in the same manner as the proof of (3) in theorem (13.47) if we choose γ = max{a_j | j ∈ N̄}; the variable γ does not appear in the objective function.
If we are not interested in the determination of the solutions of the corresponding duals then we propose to use the augmenting path method instead of the primal dual method for a solution of the respective max-problem in a group. This method generates subsequently primal feasible solutions x^ν, ν = 1,2,..., of cardinality ν which are optimal among all intersections of the same cardinality. It should be noted that such solutions are not necessarily optimal among all intersections of the same cardinality with respect to the original (not reduced) problem; optimality with respect to the original problem is implied only for those ν for which no intersection of cardinality ν contains an element j with λ(a_j) > μ, where μ denotes the index used in the reduction considered. The solution of algebraic matroid intersection (k-intersection) problems has previously been discussed in ZIMMERMANN, U. [1978b] ([1976], [1978c], [1979a]) and DERIGS and ZIMMERMANN, U. [1978a].
A solution of the reduced problems in the case of matching problems can be determined in the same manner. Again it suffices to consider max-problems. For a detailed description of the primal dual method we refer to LAWLER [1976]. We may w.l.o.g. assume that the respective group G is divisible (cf. proposition 3.2). Hence G is a module over Q. The primal dual method remains valid and finite in such modules. Again, a reformulation of the classical primal dual method for such modules consists only in replacing usual additions and usual comparisons in the group of real numbers by the internal compositions and comparisons in the group considered. All arguments used for a proof of the validity and finiteness in the classical case carry over to the case of such modules. Optimality follows from similar arguments as in the case of matroid intersections.
Let A denote the incidence matrix of vertices and edges in the underlying graph (V,N). Let S_k be any subset of V of odd cardinality 2s_k + 1. Then

Σ_{(i,j) ∈ N: i,j ∈ S_k} x_{ij} ≤ s_k

is satisfied by the incidence vector x of any matching. We represent all these constraints in matrix form by Sx ≤ s. Then

(13.52)  P = {x ∈ Z_+^N | Ax ≤ 1, Sx ≤ s}.

For the reduced problem with M = M(μ) we find

(13.53)  P_M = {x ∈ Z_+^M | A_M x ≤ 1, S_M x ≤ s}.

We assign dual variables u_i to the vertices i ∈ V and dual variables v_k to the odd sets S_k. Then (u,v) is dual feasible if

(13.54)  A_M^T ⊙ u * S_M^T ⊙ v ≥ ã,  u_i ≥ e,  v_k ≥ e,

where ã_j := a(μ)_j for j ∈ M = M(μ). The set of all dual feasible solutions is denoted by D_M. In the usual manner we find weak duality, i.e.

x^T ⊙ ã ≤ 1^T ⊙ u * s^T ⊙ v

for primal feasible x and dual feasible (u,v). The corresponding complementarity conditions are

(13.55)  x_{ij} = 1  ⟹  (A_M^T ⊙ u * S_M^T ⊙ v)_{ij} = ã_{ij}  for all (i,j) ∈ M,
(13.56)  u_i > e  ⟹  Σ_j x_{ij} = 1  for all vertices i ∈ V,
(13.57)  v_k > e  ⟹  Σ_{i,j ∈ S_k} x_{ij} = s_k  for all odd sets S_k ⊆ V.

Let

(13.58)  V(x) := {i ∈ V | Σ_j x_{ij} < 1},

i.e. V(x) is the set of all vertices which are not an endpoint of some edge in the matching corresponding to x. If ã_{ij} ≤ e for all (i,j) ∈ M then x ≡ 0, u ≡ e, v ≡ e is an optimal pair of primal and dual feasible solutions. Otherwise let

η := (1/2) ⊙ max{ã_{ij} | (i,j) ∈ M},

which is well-defined in the module considered. The initial solution x ≡ 0, v ≡ e and u ≡ η is primal and dual feasible, satisfies (13.55) and (13.57), and all dual variables u_i for i ∈ V(x) have an identical value η. Such a pair is called compatible, similarly to previously discussed primal dual methods. From (13.55) we conclude

x^T ⊙ ã * (1 − A_M x)^T ⊙ u = 1^T ⊙ u * s^T ⊙ v

for a compatible pair (x;u,v). Since all variables u_i with (1 − A_M x)_i ≠ 0 have identical value η we get

(13.59)  x^T ⊙ ã * (|V| − 2 Σ_M x_{ij}) ⊙ η = 1^T ⊙ u * s^T ⊙ v.

Similar to the primal dual method for matroid intersections, the primal dual method for matchings alternately proceeds by revisions of the current matching and the current dual solution without violation of compatibility. This is achieved by an alternate solution of

(13.60)  max{x^T ⊙ ã | x ∈ P_M, (x;u,v) compatible}

for a fixed dual solution (u,v) and of

(13.61)  min{1^T ⊙ u * s^T ⊙ v | (u,v) ∈ D_M, (x;u,v) compatible}

for a fixed primal solution x. From (13.59) we conclude that (13.60) is equivalent to

(13.62)  max{Σ_M x_{ij} | x ∈ P_M, (x;u,v) compatible}

and that (13.61) is equivalent to

(13.63)  min{η | (u,v) ∈ D_M, (x;u,v) compatible}

where η is the common value of all variables u_i with i ∈ V(x) (cf. 13.58). Finiteness and validity of the method can be shown in the same manner as described in LAWLER [1976] for the classical case. In particular, the method is of polynomial time in the number of usual operations, *-compositions and ≤-comparisons. The final compatible pair is complementary. Thus it is optimal. Therefore the method provides a constructive proof of a duality theorem for algebraic matching problems.
We consider the algebraic matching problems

(13.3")  min{x^T ⊙ a(μ_1) | x ∈ P},
(13.4")  max{x^T ⊙ a(μ_2) | x ∈ P}.

The dual of (13.4") is

(13.64)  min{1^T ⊙ u * s^T ⊙ v | A^T ⊙ u * S^T ⊙ v ≥ a(μ_2), u_i ≥ e, v_k ≥ e}.

Strong dual feasibility with respect to (13.3") is defined by

(13.65)  A^T ⊙ u * S^T ⊙ v ≥ a(μ_1),  λ(u_i) ≤ μ_1,  λ(v_k) ≤ μ_1,  u_i ≥ e,  v_k ≥ e,

and an objective function is defined according to (11.19).

(13.66) Theorem (Matching duality theorem)
Let H be a weakly cancellative d-monoid. Hence w.l.o.g. H is an extended semimodule over Q_+. There exist optimal feasible pairs (x;u,v) for the algebraic matching problems (13.3") and (13.4") which are complementary and satisfy

(1)  e = x^T ⊙ a * 1^T ⊙ u * s^T ⊙ v  (for 13.3"),
(2)  (−1)^T ⊙ u(μ_1) * (−s)^T ⊙ v(μ_1) = x^T ⊙ a  (for 13.3"),
(3)  1^T ⊙ u * s^T ⊙ v = x^T ⊙ a  (for 13.4").

Proof: Similar to the proof of theorem (13.47).
WHITE [1967] develops a parametric approach for k-matching problems (the classical case of (13.6)) which remains valid for modules over Q. DERIGS [1978a] proposes a modification of the primal dual method which remains valid and finite in such modules. Compatibility is slightly modified, too. Then similar results as for the corresponding algebraic k-intersection problems (cf. theorem 13.51) can be derived. Algebraic matching problems have previously been discussed by DERIGS ([1978a], [1978b]). In particular, solution methods for the perfect matching problem (cf. 11.37) are proposed in DERIGS ([1978b], [1979a], [1979b]). A short joint discussion of algebraic problems is given in DERIGS and ZIMMERMANN, U. [1978a].

Matroid intersection problems for particular matroids are discussed in further papers. ZIMMERMANN, U. [1976] gives an extension of the Hungarian method for assignment problems using the concept of admissible transformations (cf. 12.59). If the set of feasible solutions consists of the intersections of two matroids then a method based on admissible transformations is valid if and only if the two matroids are partition matroids (cf. theorems 12.15 and 12.16 in ZIMMERMANN, U. [1976]). A further extension to three partition matroids is considered in BURKARD and FRÖHLICH [1980] within a branch and bound scheme. Algebraic matroid intersection problems in the particular case that one of the matroids is a partition matroid are solved by DETERING [1978]. Algebraic branching problems are solved by HAAG [1978] with generalizations of the various known methods. In particular, the derivation of EDMONDS' method (cf. EDMONDS [1967]) in KARP [1971] (cf. also CHU and LIU [1965]) remains valid in arbitrary d-monoids.