Fast Distributed Algorithms for Brooks–Vizing Colorings


Journal of Algorithms 37, 85–120 (2000). doi:10.1006/jagm.2000.1097, available online at http://www.idealibrary.com



Fast Distributed Algorithms for Brooks–Vizing Colorings

David A. Grable, Institut für Informatik, Humboldt-Universität zu Berlin, D-10099 Berlin, Germany. E-mail: [email protected]

and Alessandro Panconesi, Informatica, Università di Bologna, Mura Anteo Zamboni 7, 40127 Bologna, Italy. E-mail: [email protected]

Received July 29, 1998

Let G be a Δ-regular graph with n vertices and girth at least 4 such that Δ ≫ log n. We give very simple, randomized, distributed algorithms for vertex coloring G with Δ/k colors in O(k + log n) communication rounds, where k = O(log Δ). The algorithm may fail or exceed the above running time, but the probability that this happens is o(1), a quantity that goes to zero as n grows. The probabilistic analysis relies on a powerful generalization of Azuma's martingale inequality that we dub the Method of Bounded Variances. © 2000 Academic Press

1. INTRODUCTION

Vertex coloring is a much studied problem in combinatorics and computer science, for its theoretical as well as its practical aspects. In this paper we are concerned with the distributed version of a question stated by Vizing, concerning a Brooks-like theorem for sparse graphs. Roughly, the question asks whether there exist colorings using many fewer than Δ colors, where Δ denotes the maximum degree of the graph, provided that some sparsity conditions are satisfied. In this paper we show that such colorings not only exist, but that they can be quickly computed by extremely simple distributed, randomized algorithms. Before stating our results precisely, we review the relevant facts.


For any graph G of maximum degree Δ with n vertices, the following trivial algorithm computes a (Δ + 1) (list) coloring. Each vertex u initially has a list, or palette, of deg(u) + 1 colors. The computation proceeds in rounds. During each round, each uncolored vertex, in parallel, first performs a trivial attempt: it picks a tentative color at random from its palette and, if no neighbor picks the same color, the color becomes final and the algorithm stops for that vertex. Otherwise, the vertex's palette undergoes a trivial update (the colors successfully used by the neighbors are removed) and a new attempt is performed in the next round. Henceforth we shall refer to this as the trivial algorithm. The trivial algorithm always computes a valid coloring regardless of the composition of the initial lists and does so in O(log n) rounds with high probability, that is, with probability approaching 1 as the number of vertices increases [14]. It is apparent that the trivial algorithm is distributed, since each vertex only relies on information from the neighboring vertices.

The well-known distributed algorithm for the same problem given by Luby [19] amends the trivial algorithm in the following way: at the beginning of each round every uncolored vertex is asleep. Each such vertex awakes with probability p and executes a trivial attempt (in Luby's paper p = 1/2). Then, whether or not the vertex awoke, the palette undergoes a trivial update. At the end of the round the vertex goes back to sleep. We shall refer to this variant of the trivial algorithm as the dozing-off algorithm. The dozing-off algorithm has the same asymptotic performance as the trivial algorithm, but its analysis needs only pairwise independence. Luby used this fact to carry out a derandomization procedure in the PRAM model.

Can better colorings, i.e., colorings using fewer colors, be computed efficiently in a distributed setting?
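The trivial algorithm described above is easy to simulate. The following sketch is our own Python illustration (the adjacency-dictionary representation and the demo graph are illustrative choices, not from the paper):

```python
import random

def trivial_coloring(adj, seed=0):
    """Simulate the trivial algorithm: every vertex starts with a palette of
    deg(u)+1 colors; in each round every uncolored vertex picks a tentative
    color, keeps it if no neighbor picked the same one, and then removes the
    colors its neighbors successfully used (the trivial update)."""
    rng = random.Random(seed)
    palette = {u: set(range(len(adj[u]) + 1)) for u in adj}
    color = {}                      # final colors
    rounds = 0
    while len(color) < len(adj):
        rounds += 1
        uncolored = [u for u in adj if u not in color]
        tentative = {u: rng.choice(sorted(palette[u])) for u in uncolored}
        for u in uncolored:
            # the attempt succeeds iff no neighbor tried the same color
            if all(tentative.get(v) != tentative[u] for v in adj[u]):
                color[u] = tentative[u]
        for u in uncolored:
            if u not in color:      # trivial update
                palette[u] -= {color[v] for v in adj[u] if v in color}
    return color, rounds

# demo: a 5-cycle, so every vertex has a palette of 3 colors
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring, rounds = trivial_coloring(adj)
assert all(coloring[u] != coloring[v] for u in adj for v in adj[u])
```

Note that a palette can never empty out: a color is removed only when a neighbor keeps it, so at most deg(u) of the deg(u) + 1 initial colors ever disappear, which is why the trivial algorithm always terminates with a valid coloring.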
In 1941 Brooks gave a theorem that characterizes the graphs which are not Δ-colorable: a connected graph is Δ-colorable if and only if it is neither an odd cycle nor a clique on Δ + 1 vertices (see, for instance, [4]). The proof of Brooks' theorem is actually a polynomial time sequential algorithm. Δ-colorings can also be quickly (i.e., in polylogarithmic time) computed in the PRAM model [12, 15, 16]. In fact, a distributed version of Brooks' theorem can be derived from a certain locality property of Δ-colorings, yielding the following: There is no o(n) randomized, synchronous protocol to Δ-color paths, cycles, or cliques. For all other graphs, there is a randomized protocol which, with high probability, computes a Δ-coloring in polylogarithmically many rounds [22]. (The property in question, holding for graphs which are not cliques, paths, or cycles, is this: if G is Δ-colored except for one last vertex, it is possible to complete the coloring by a simple recoloring operation along an augmenting path of length O(log_Δ n) starting from the uncolored vertex [22].)


It is an open problem whether randomization is necessary in all of the above algorithmic results; the asymptotically best deterministic protocols known to date need O(n^{ε(n)}) rounds, where ε(n) tends (very slowly) to zero as the number of vertices grows [1, 23].

In a 1968 paper Vizing asked whether upper bounds for the chromatic number better than those given by Brooks' theorem existed, provided some sparsity conditions were satisfied. In particular, he asked what happens for triangle-free graphs. We shall refer to colorings of triangle-free graphs using significantly fewer than Δ colors as Brooks–Vizing colorings. This existential problem was settled about two decades later. Johansson [13] showed that every triangle-free graph has chromatic number O(Δ/log Δ). This is best possible up to a constant factor, since Bollobás had shown the existence of graphs with arbitrarily high girth such that χ(G) = Ω(Δ/log Δ) (the girth of a graph G is the size of the smallest cycle therein) [5]. Johansson's result, as well as an earlier result of Kim [18] which shows that graphs of girth at least 5 have chromatic number (1 + o(1)) Δ/log Δ, makes use of certain distributed coloring algorithms, but the results are only existential in the following sense. They show only that the probability that the algorithm produces a valid coloring is positive. Their analyses do not rule out the possibility of their algorithms failing with high probability. (But then, this was not their main concern.)

In this paper, we show that Brooks–Vizing colorings can be computed efficiently even in a distributed setting. We present very simple randomized, distributed algorithms, which are also easily implementable on a PRAM and in the sequential setting, and demonstrate that they produce the desired colorings with high probability. Our algorithms are variants of the dozing-off algorithm. In fact, when the input graph has no 4-cycles (girth 5 or greater), the algorithm is the dozing-off algorithm modified so that the probability that vertices awake is not constant but varies with the round. This probability, initially very low, quickly rises to one, causing the algorithm to behave as the trivial one from that point on. For the general triangle-free (girth 4) case, the algorithm adds a mechanism which forces the vertex degrees of the uncolored portion of the graph to remain roughly regular. This regularity is extremely useful in the analysis, but unfortunately gives the algorithm a somewhat high message and space complexity. It may be, however, that this mechanism is unnecessary. The simplicity of the algorithms is an appealing feature and we expect them to work quite well in practice.

Although these algorithms display similarities to those in Kim's work [18], we have striven for speed and simplicity. Moreover, our analyses are much simpler than those given there. It is perhaps worth remarking that our analyses demonstrate that Brooks–Vizing colorings are not rare, for


the randomized color assignments computed by the algorithm are almost always such colorings.

Our result may be conveniently stated as follows: We give an algorithm which, for any triangle-free, Δ-regular input graph G such that Δ ≥ log^{1+δ} n, where δ > 0 is any fixed constant, computes with probability 1 − o(1) a vertex coloring of G using Δ/k colors, for any k ≤ α log Δ, where α is a constant which depends on δ. Moreover, with probability 1 − o(1), the coloring will be completed within O(k + log n) rounds in the synchronous, message-passing distributed model of computation (with no shared memory). Both of the above o(1) terms are functions going to 0 with n, the number of vertices in the network.

Notice that the algorithm works also for k = Θ(log Δ), thereby matching the lower bound of Bollobás and the existential statements of Johansson and Kim. It should be pointed out, however, that our statement is weaker than their existential statements insofar as it needs the additional assumption Δ = Ω(log n). This, as well as the regularity assumption (also made in [13, 18]), might in fact be an artifact of our analysis, which relies on large deviation inequalities that cease to give strong enough bounds for lower values of Δ.

Our probabilistic analysis makes use of a powerful generalization of Azuma's inequality, dubbed the Method of Bounded Variances (MOBV), described and commented upon in Section 3, that is very well suited for the analysis of algorithms [7–9, 11]. Indeed, we regard this paper as an illustration of its power and usefulness.

2. PRELIMINARIES

We shall make use of the following definitions, notation, and conventions:

• Given a finite set A we shall often select a random subset S ⊆ A by "flipping a coin" for each element a ∈ A. The random variable B(A, p_a) is defined as

    B(A, p_a) := Σ_{a∈A} X_a,

where the X_a's are independent indicator random variables such that Pr[X_a = 1] = p_a and Pr[X_a = 0] = 1 − p_a. The event X_a = 1 denotes that a ∈ S. In general, the p_a's need not be equal. We shall say that B(A, p_a) is binomially distributed over the set A.

• We shall make use of the asymptotic notation that a ~ b means a = (1 + o(1)) b; f(n) ≫ g(n) means that g(n)/f(n) goes to zero as n grows;




• A graph G is almost D-regular if deg(u) ~ D for all vertices u;
• N_u^k denotes the set of vertices at distance exactly k from u; N_u stands for N_u^1;
• All logarithms are natural logarithms; e will always denote the base of the natural exponential.

The following fact will be used repeatedly (see, for instance, [21]).

Fact 2.1. Let A(n) and B(n) be such that A(n)² B(n) = o(1) (n tending to infinity, as always). Then,

    (1 − A(n))^{B(n)} = (1 + o(1)) e^{−A(n)B(n)}.

Of the many versions of the Chernoff–Hoeffding bounds, we shall use the following.

Fact 2.2. Let X = B(A, p) and μ := Ex[X]. Then, for all 0 ≤ t ≤ μ,

    Pr[|X − μ| ≥ t] ≤ 2 exp{−t²/(3μ)}.
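Fact 2.2 is easy to check empirically. The following snippet (our own illustration, with arbitrary parameter values) estimates the deviation probability by simulation and compares it against the stated bound:

```python
import math
import random

# Empirical sanity check of Fact 2.2 for X = B(A, p) with |A| = n:
# Pr[|X - mu| >= t] <= 2*exp(-t^2/(3*mu)) for all 0 <= t <= mu.
rng = random.Random(42)
n, p, trials = 1000, 0.3, 2000
mu = n * p
t = 30.0                                  # a deviation with 0 <= t <= mu
hits = sum(
    abs(sum(rng.random() < p for _ in range(n)) - mu) >= t
    for _ in range(trials)
)
empirical = hits / trials
bound = 2 * math.exp(-t * t / (3 * mu))
assert empirical <= bound                  # the observed tail respects the bound
```

With these values the bound is 2/e ≈ 0.74 while the observed frequency of a deviation of t = 30 (about four standard deviations) is essentially zero, consistent with the fact.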

3. THE METHOD OF BOUNDED VARIANCES

Recent years have seen the increased utilization of sophisticated tools from probability theory in the analysis of probabilistic algorithms. Of particular importance are tools which allow one to establish that the randomized algorithm under study obeys the concentration of measure phenomenon. Roughly speaking, this means that the random variables describing the behavior of the algorithm are sharply concentrated around their expectations. For an introduction to this important and fascinating subject see, for instance, [8, 21, 24].

In this paper we make use of a powerful generalization of Azuma's martingale inequality developed by the first author. This method was originally called query games, but since the Method of Bounded Variances seems more apt, we shall adopt that name here. For the genesis of the method and the proofs, see [9, 2]. For an introduction to the area of concentration of measure and a comparative analysis of the various existing methods, including recent ones such as Talagrand's inequalities, see [8, 24]. For applications of the MOBV to the analysis of algorithms see [7, 9, 11] and, obviously, the present paper!

In order to make the paper as self-contained as possible we now describe the MOBV. Assume we have a probability space generated by independent random variables X_i (choices), where choice X_i is from the


finite set A_i, and a function Y = f(X_1, …, X_n) on that probability space. We are interested in proving a sharp concentration result on Y, i.e., in bounding Pr[|Y − Ex[Y]| > a], for any a, as well as we can. The well-known Chernoff–Hoeffding bounds give essentially best possible estimates when Y = Σ_i X_i. The Method of Bounded Differences (MOBD) [20], a nicely packaged generalization of a martingale inequality known as Azuma's inequality, allows one to consider any function Y = f(X_1, …, X_n) which satisfies the bounded difference requirement that changing the choice of one of the variables does not affect the final value of Y by too much. More precisely, the result states that if, for all vectors A and B differing only in the ith coordinate,

    |f(A) − f(B)| ≤ c_i,    (1)

then

    Pr[|Y − Ex[Y]| > √(2φ Σ_i c_i²)] ≤ 2 e^{−φ}.    (2)

When using this inequality, the crux of the matter is to find values c_i which are as small as possible. In an extension of this result known as the MOBD in expected form, condition (1) is replaced by

    |Ex[f | X_1, …, X_i] − Ex[f | X_1, …, X_{i−1}]| ≤ c_i.    (3)

This might translate to much stronger bounds, but at the expense of involved computations. The MOBV offers a "quick-and-dirty" solution, by allowing one to find quite good estimates of the c_i essentially without computations (for a more comprehensive discussion of these issues, see [8]). We now describe the method precisely.

Consider the following procedure, the aim of which is to determine the value of Y. We can ask queries of the form "what was the ith choice?", i.e., "what was the choice of X_i?", in any order we want. The answer to the query is the random choice of X_i. A querying strategy for Y is a decision tree whose internal nodes designate queries to be made. Each node of the tree represents a query of the type "what was the random choice of X_i?". A node has as many children as there are random choices for X_i. It might be helpful to think of the edges as labeled with the particular a ∈ A_i corresponding to that random choice. In this fashion, every path from the root to a node which goes through vertices corresponding to X_{i_1}, …, X_{i_k} defines an assignment a_1, …, a_k to these random variables. We can think of each node as storing the value Ex[Y | X_{i_1} = a_1, …, X_{i_k} = a_k]. In particular, the leaves store the


possible values of Y, since by then all relevant random choices have been determined.

Define the variance of a query (internal node) q concerning choice X_i to be

    v_q = Σ_{a∈A_i} Pr[X_i = a] μ_{q,a}²,

where

    μ_{q,a} = Ex[Y | X_i = a and all previous queries] − Ex[Y | all previous queries].

By "all previous queries," we mean the condition imposed by the queried choices and exposed values determined by the path from the root of the strategy down to the node q. In words, μ_{q,a} measures the amount by which our expectation changes when the answer to query q is revealed to be a. Also define the maximum effect of query q as

    c_q = max_{a,b∈A_i} |μ_{q,a} − μ_{q,b}|.

As we shall see, in practice good upper bounds on c_q are very easy to obtain. A line of questioning l is a path in the decision tree from the root to a leaf, and the variance of a line of questioning is the sum of the variances of the queries along it. Finally, the variance of a strategy S is the maximum variance over all lines of questioning:

    V(S) = max_l Σ_{q∈l} v_q.

The use of the term variance is meant to be suggestive (but hopefully not confusing): V(S) is an upper bound on the variance of Y. The variance plays essentially the same role as the term Σ_i c_i² in the MOBD, but it is often a much better upper bound on the variance of Y.

PROPOSITION 3.1. Let S be a strategy for determining Y and let the variance of S be at most V. Then for every 0 ≤ φ ≤ V/max_q c_q²,

    Pr[|Y − Ex[Y]| > 2√(φV)] ≤ 2 e^{−φ}.

One reason why the term v_q is often much smaller than the c_i² term found in the MOBD is that the probabilities that X_i takes on value a for the various a ∈ A_i can be factored into the computation. This is especially apparent when the μ_{q,a}'s take on only two values (when holding q fixed


and varying a ∈ A_i). This partitions the space A_i into two regions, which we call the YES region and the NO region. These regions correspond to two mutually exclusive events, the YES and NO events. In this paper, for instance, the A_i will be a set of colors and the YES event will often be of the form "was choice X_i color γ?". Denote the probabilities of these two events by p_{q,YES} and p_{q,NO} = 1 − p_{q,YES} and the values of μ_{q,a} taken on within each of these regions by μ_{q,YES} and μ_{q,NO}. In [7] the following fact is proven.

Fact 3.1.

    v_q = Σ_{a∈A_i} Pr[X_i = a] μ_{q,a}² ≤ p_{q,YES} c_q².

We shall use this fact repeatedly. A simpler bound, which applies regardless of the values of the μ_{q,a}'s, is

    v_q ≤ c_q²/4.    (4)

The proof appears in [7]. A further improvement that makes the method particularly versatile is its ability to dispense with subspaces of very low probability where the c_i's are very large. In other words, there might be just a few, very low probability vectors of choices (outcomes) which lead to lines of questioning with very large variance. This of course drives up the variance of the whole strategy, possibly ruining the usefulness of the result. We'd like to be able to ignore these exceptional outcomes and the corresponding lines of questioning. The following corollary, also proven in [9], shows that we can.

COROLLARY 3.1. Let M and m be, respectively, the maximum and minimum values taken by Y over all possible outcomes. Let C be a set of exceptional outcomes. Consider a strategy for determining Y under the assumption that the actual outcome is not in C. If the variance of this strategy is at most V, then

    Pr[|Y − Ex[Y]| > 2√(φV) + (M − m) Pr[C]] ≤ 2 e^{−φ} + Pr[C],

for every 0 ≤ φ ≤ V/max_q c_q².

In this paper we shall apply this corollary systematically. A rule of thumb to decide whether the bounds on the variance are going to be good enough is this: if the variance is of the same order as the expectation, then the bounds given by the MOBV are comparable in strength to those of the Chernoff–Hoeffding bounds.
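The gain over the plain MOBD can be seen on a classical toy example of our own (not from the paper): let Y be the number of distinct values among m independent uniform draws from an n-element set, with m ≪ n. Each draw is one choice with maximum effect c_q = 1, so the MOBD works at scale √(Σ_q c_q²) = √m; but the YES event for a query ("this draw collided with another") has probability about m/n, so by Fact 3.1 the strategy variance is only about V = m·(m/n), giving a much smaller deviation scale √V = m/√n. A simulation confirms this:

```python
import math
import random
import statistics

def distinct_values(m, n, rng):
    """Draw m values u.a.r. from {0, ..., n-1}; return how many are distinct."""
    return len({rng.randrange(n) for _ in range(m)})

rng = random.Random(0)
m, n, trials = 100, 10_000, 2000
samples = [distinct_values(m, n, rng) for _ in range(trials)]
mean, sd = statistics.fmean(samples), statistics.pstdev(samples)

# Exact expectation: n * (1 - (1 - 1/n)^m)
assert abs(mean - n * (1 - (1 - 1 / n) ** m)) < 0.2
# Observed spread is far below the MOBD scale sqrt(m) = 10 ...
assert sd < math.sqrt(m) / 3
# ... but consistent with the MOBV scale sqrt(m^2/n) = 1
assert sd < 3 * math.sqrt(m * m / n)
```

Here the variance V ≈ m²/n is of the same order as the expected number of collisions, so by the rule of thumb the MOBV bound is of Chernoff–Hoeffding strength, whereas the plain bounded-differences bound is off by a factor of about √n/√m.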


4. GIRTH 5 OR MORE

In this section we assume the distributed network to be a Δ-regular graph with girth at least 5. We will give a coloring algorithm for this situation and analyze it. Later, we will add a mechanism for dealing with the presence of 4-cycles. Each node of the network knows the value of Δ and of a positive real k. The goal is a coloring with at most Δ/k colors. The protocol appears in Fig. 1. The algorithm is exactly the dozing-off algorithm where the probability of awakening increases from round to round until, after ek rounds, p = 1 for the remainder of the execution. From that point on the algorithm is just the trivial algorithm. We will refer to these two phases as the dozing-off and the trivial phase, respectively. Our goal is to establish the following theorem.

THEOREM 4.1. Algorithm Dozing-off computes a Δ/k coloring of the input graph G in O(k + log n) rounds with high probability. High probability means that the probability that the algorithm fails or that it exceeds the above running time is o(1), a quantity that goes to zero as the number of vertices of the graph increases.

FIG. 1. Algorithm Dozing-off executed by each vertex, denoted here as u.
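The following Python sketch is one plausible reading of the per-vertex protocol, reconstructed from the surrounding text (wake-up probability equal to the inverse of the ideal ratio d_i/s_i, which starts at k and drops by 1/e per round until it reaches 1); all names are ours, and for this small demo we keep safe palettes of size deg(u) + 1 rather than the Δ/k colors of the theorem:

```python
import math
import random

def dozing_off(adj, k=1.0, seed=0, max_rounds=500):
    """Sketch of the dozing-off algorithm: each uncolored vertex wakes with
    probability p = s_i/d_i, makes a trivial attempt, and performs a trivial
    palette update; once the ideal ratio d_i/s_i has dropped to 1, p = 1 and
    the algorithm proceeds as the trivial one."""
    rng = random.Random(seed)
    palette = {u: set(range(len(adj[u]) + 1)) for u in adj}
    color = {}
    ratio = float(k)            # ideal d_i/s_i: starts at k, drops by 1/e per round
    for rnd in range(max_rounds):
        if len(color) == len(adj):
            return color, rnd
        p = min(1.0, 1.0 / ratio)
        awake = [u for u in adj if u not in color and rng.random() < p]
        tentative = {u: rng.choice(sorted(palette[u])) for u in awake}
        for u in awake:
            # keep the tentative color iff no neighbor tried the same one
            if all(tentative.get(v) != tentative[u] for v in adj[u]):
                color[u] = tentative[u]
        for u in adj:           # trivial palette update for every uncolored vertex
            if u not in color:
                palette[u] -= {color[v] for v in adj[u] if v in color}
        ratio = max(1.0, ratio - 1.0 / math.e)
    return color, max_rounds

# demo: the Petersen graph (3-regular, girth 5, n = 10)
adj = {i: [(i + 1) % 5, (i - 1) % 5, i + 5] for i in range(5)}
adj.update({i + 5: [i, 5 + (i + 2) % 5, 5 + (i - 2) % 5] for i in range(5)})
coloring, rounds = dozing_off(adj, k=2.0)
assert len(coloring) == len(adj)
assert all(coloring[u] != coloring[v] for u in adj for v in adj[u])
```

With deg(u) + 1 colors per palette the sketch always terminates; the point of the theorem, which this toy demo does not reach, is that on large Δ-regular graphs of girth at least 5 the same schedule succeeds with only Δ/k colors in total.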


Remark. The only way the algorithm can fail is if some palette runs out of colors.

In the analysis of the algorithm we keep track of the following random variables:

• For each vertex u and round i, the size of u's palette at round i, denoted by s_i(u);
• For each vertex u, color c, and round i, the number of uncolored neighbors of u which have c in their palettes, denoted by d_i(u, c);
• For each vertex u and round i, the number of uncolored neighbors of u, denoted by deg_i(u).

Our goal here is to analyze the algorithm sufficiently to show that with high probability there exists a round i = O(k) such that, for every vertex u,

    s_i(u) > deg_i(u) + 1.    (5)

We will refer to this as the termination condition. It is known that if the graph satisfies condition (5) the trivial algorithm will with certainty complete the coloring (see [14]) and will do so within O(log n) rounds with high probability [17, 14]. Therefore, if we prove that the termination condition is attained in O(k) rounds with high probability, we can conclude that the dozing-off algorithm succeeds in O(k + log n) rounds with high probability, thereby establishing Theorem 4.1.

Our goal, then, is to establish the termination condition (5). In order to do so we shall show that the random variables s_i(u), d_i(u, c), and deg_i(u) are approximated very well by "ideal" values s_i, d_i, and D_i, which will be defined by means of suitable recurrences. After that is done, it is a simple matter to determine at what point s_i ≫ D_i, which implies the termination condition. Indeed, we do this first. Not surprisingly, the random variables evolve differently during the dozing-off and trivial phases of the algorithm, prompting two sets of recurrences for the ideal values. The first set applies for rounds i < ek during the dozing-off phase. Here,

    s_{i+1} = s_i exp(−1/e),
    d_{i+1} = d_i (1 − (s_i/d_i)(1/e)) exp(−1/e), and
    D_{i+1} = D_i (1 − (s_i/d_i)(1/e)),    (6)

with initial conditions D_0 = d_0 = Δ and s_0 = Δ/k.


The recurrences depend on the ratio d_i/s_i, which, during the dozing-off phase, satisfies the recurrence

    d_{i+1}/s_{i+1} = (d_i/s_i)(1 − (s_i/d_i)(1/e)) = d_i/s_i − 1/e.

Thus we immediately see that the ratio decreases in each step by the constant 1/e. After ek = O(k) steps, we will no longer be in the situation that s_i < d_i. Observe that this number of steps is constant if k is. Also notice that the wake-up probability p is in every round i defined to be exactly equal to the inverse ratio s_i/d_i, implying that the algorithm switches from the dozing-off to the trivial phase in ek = O(k) steps. This value of p was chosen in order to maximize the probability that a vertex colors itself during the dozing-off phase. After ek steps,

    d_ek = s_ek = s_0 exp(−ek/e) = Δ/(k e^k).    (7)
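Iterating the recurrences (6) numerically confirms both the constant 1/e drop of the ratio and the order of magnitude in Eq. (7); the parameter values below are illustrative choices of our own:

```python
import math

# Iterate the ideal dozing-off recurrences (6).
delta, k = 10_000.0, 5.0
s, d = delta / k, delta
ratios = [d / s]
while d / s > 1.0:               # the dozing-off phase lasts while s_i < d_i
    s, d = (s * math.exp(-1 / math.e),
            d * (1 - (s / d) / math.e) * math.exp(-1 / math.e))
    ratios.append(d / s)
steps = len(ratios) - 1

# the ratio d_i/s_i starts at k and drops by exactly 1/e each round ...
assert abs(ratios[0] - k) < 1e-12
assert all(abs((a - b) - 1 / math.e) < 1e-9 for a, b in zip(ratios, ratios[1:]))
# ... reaching 1 after about e*(k-1) ~ ek = O(k) rounds,
assert steps == math.ceil(math.e * (k - 1))
# at which point s_i = s_0*exp(-steps/e), of order delta/(k*e^k)  (cf. Eq. (7))
assert abs(s - (delta / k) * math.exp(-steps / math.e)) < 1e-6
```

The while loop stops at the first integer round where the ratio is at most 1, which is why the step count comes out as ⌈e(k − 1)⌉ rather than exactly ek; for large k the two agree up to lower-order terms.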

Since, at all times, the vertex palettes must be nonempty, Δ/(k e^k) ≥ 1 is a necessary condition (on k in terms of Δ). Asymptotically, this condition is satisfied for all k = (log Δ)/β with β > 1 and not satisfied for k = log Δ. Thus we see that k = O(log Δ) is a necessary condition for the algorithm to work. This is in accordance with the lower bound given by Bollobás for the existence of Brooks–Vizing colorings [5].

The ratio D_i/d_i is initially 1, but increases by a factor of exp(1/e) in each step. Therefore, D_ek/d_ek = e^k and we see that

    D_ek = Δ/k.    (8)

Thus at the end of the dozing-off phase, we are still quite far from attaining the termination condition, so we must continue our analysis into the trivial phase. During the trivial phase, the ideal values obey the following recurrences:

    s_{i+1} = s_i exp{−(d_i/s_i) e^{−d_i/s_i}},
    d_{i+1} = d_i (1 − e^{−d_i/s_i}) exp{−(d_i/s_i) e^{−d_i/s_i}}, and
    D_{i+1} = D_i (1 − e^{−d_i/s_i}).    (9)
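Iterating the trivial-phase recurrences (9) numerically, starting from the illustrative ratio 1 − 1/e (the value ρ obtained in the text after one trivial round), exhibits the behavior derived below: the ratio d_i/s_i collapses doubly exponentially while s_i loses only a constant factor. This sketch and its starting value are our own:

```python
import math

# Track the trivial-phase recurrences (9); only the ratio x_i = d_i/s_i
# drives them, so we normalize s_0 = 1 and start from d_0/s_0 = 1 - 1/e.
s, d = 1.0, 1.0 - 1.0 / math.e
ratios = [d / s]
for _ in range(6):
    x = d / s
    s, d = (s * math.exp(-x * math.exp(-x)),
            d * (1 - math.exp(-x)) * math.exp(-x * math.exp(-x)))
    ratios.append(d / s)

# Eq. (10): each round the ratio x becomes x*(1 - e^{-x}) < x^2, so after i
# rounds it is below rho^(2^(i-1)): a doubly exponential collapse ...
assert all(ratios[i] < ratios[0] ** (2 ** (i - 1)) for i in range(1, len(ratios)))
# ... while s_i shrinks only by a constant factor overall (cf. Eq. (12))
assert s > 0.5
```

After six rounds the ratio is already below 10^{-18}, while s has only dropped to about 0.53 of its starting value, which is exactly the separation between D_i and s_i that the termination condition needs.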


Once again, these recurrences depend on the ratio d_i/s_i. Since

    d_{i+1}/s_{i+1} = (d_i/s_i)(1 − e^{−d_i/s_i}) < (d_i/s_i)²,    (10)

we see that this ratio goes to zero at a doubly exponential rate! This implies that D_i decreases much faster than s_i, exactly what we need for condition (5). To see this, bound s_i and D_i as follows (for the second one, use the fact that e^{−x} ≥ 1 − x):

    s_{i+1} > s_i exp{−d_i/s_i} and
    D_{i+1} ≤ D_i (1 − (1 − d_i/s_i)) = D_i (d_i/s_i).    (11)

From these recurrences we see that while D_i also decreases doubly exponentially, the rate at which s_i decreases is much slower and ever decreasing. Since the ratio d_i/s_i may be identically one at the beginning of the trivial phase (i = ek), inequality (10) would not be very useful if we started there. Instead, take the values after one round of the trivial phase (i = ek + 1). At that point we have s_{ek+1} > Δ/(k e^{k+1}), D_{ek+1} ≤ Δ/k, and

    ρ := d_{ek+1}/s_{ek+1} ≤ 1 − 1/e < 1.

Now, using Eq. (10), we get

    d_{ek+i}/s_{ek+i} ≤ ρ^{2^{i−1}}.

Therefore, using (11) repeatedly, we see that

    D_{ek+i} ≤ D_{ek+1} ρ^{2^{i−1}−1} ≤ (Δ/k) ρ^{2^{i−1}−1},

while

    s_{ek+i} ≥ s_{ek+1} exp{−Σ_{j=0}^{i−2} ρ^{2^j}} ≥ s_{ek+1} exp{−Σ_{j=0}^{i−2} ρ^{j+1}} ≥ (Δ/(k e^{k+1})) exp{−ρ/(1−ρ)}.    (12)


Thus, within O(log k) additional rounds,

    D_i < c s_i    (13)

for all constants c < 1. Also, if k = O(log Δ) then s_i > 1 for every i, which intuitively means that the algorithm never runs out of colors.

To summarize, if we could show that the random variables deg_i(u), s_i(u), and d_i(u, c) were arbitrarily close to the ideal values D_i, s_i, and d_i, then Eq. (13) would imply that the termination condition held. Our aim for the rest of the section is to show that this is indeed the case. That is, we will prove the following theorem.

THEOREM 4.2. With high probability,

    s_i(u) = (1 ± e_i) s_i,
    d_i(u, c) = (1 ± e_i) d_i, and
    deg_i(u) = (1 ± e_i) D_i    (14)

for all rounds i, colors c, and vertices u, where the error factors e_i are defined by

    e_i = C^i √(k e^{2k} (log n)/Δ),

for a constant C > 1 and e_0 = 0. High probability means that the probability that Eq. (14) does not hold for any round, vertex, or color is o(1), a quantity that goes to zero as the number of vertices grows.

The proof of the theorem is rather involved and will take several pages. Before plunging into the proof, we show that the assumptions Δ = Ω(log^{1+δ} n) and k = O(log Δ) imply that the e_i's are o(1), thereby ensuring correct termination. Since we will need the statement of the theorem to be true with a small e_i for all rounds i ≤ ek + O(log k), we would like

    e_{ek+O(log k)} = C_1^k √((log n)/Δ) ≪ 1,

which is just the condition C_2^k log n ≪ Δ, for suitable constants C_1 and C_2. Assuming that Δ ≥ log^{1+δ} n, for any δ > 0, this condition becomes C_2^k ≪ Δ^{1−1/(1+δ)}, which is just that

    k ≤ (log Δ)/β

for an appropriately large, but constant, β.


So, if the e_i's behave as advertised in Theorem 4.2, the statements concerning the running time and the correctness of the algorithm (i.e., Theorem 4.1) follow.

Although the actual computations differ, the proof of Theorem 4.2 for the dozing-off phase and for the trivial phase is conceptually the same. The proof is by induction on the round number i. The base case i = 0 holds with equality. The induction step consists of two parts. First, it is shown that the approximate equality holds in expectation. That is, we first show that the expectations are as promised:

    Ex[s_{i+1}(u)] ~ s_{i+1},    (15)
    Ex[d_{i+1}(u, c)] ~ d_{i+1}, and    (16)
    Ex[deg_{i+1}(u)] ~ D_{i+1},    (17)

for all rounds i, vertices u, and colors c. Then it is shown that the random variables s_{i+1}(u), d_{i+1}(u, c), and deg_{i+1}(u) are sharply concentrated around their expectations, i.e.,

    s_{i+1}(u) ~ Ex[s_{i+1}(u)],    (18)
    d_{i+1}(u, c) ~ Ex[d_{i+1}(u, c)], and    (19)
    deg_{i+1}(u) ~ Ex[deg_{i+1}(u)],    (20)

for all rounds i, vertices u, and colors c. The use of the asymptotic equality in place of explicit error factors is justified provided that the e_i's behave as stated in Theorem 4.2.

We now sketch the road map that will bring us to prove Theorem 4.2. In the next section, we shall prove that, assuming Eqs. (15)–(20), Eqs. (15) through (17) still hold after the execution of one additional round of the algorithm. Then, in a separate section, again assuming that Eqs. (15)–(20) hold by induction, we shall show that after an additional round of the algorithm Eqs. (18) through (20) are also still valid. This does not quite prove the theorem because, round after round, the compounded error might grow too large. Therefore, in a final section we perform the error analysis, showing that the error can be kept under control. We feel that presenting the proof in this modular fashion offers the best compromise between the opposing needs of precision and intuition. Hopefully, the main thrust of the argument and the main ideas will emerge clearly, in spite of the unavoidable menagerie of cluttering technical details.

4.1. Deriving the Recurrences

The next step in our proof is to show that Eqs. (15)–(17) hold. The discussion will also clarify the genesis of the recurrences governing the behavior of the algorithm. We argue separately for the dozing-off and trivial phases.


In what follows, we focus on an arbitrary round during the dozing-off phase and assume by induction that the ith version of Eq. (14) holds. To simplify notation, s, d, and D will denote the values s_i, d_i, and D_i, and s′, d′, and D′ will denote the values s_{i+1}, d_{i+1}, and D_{i+1}. We will also omit explicit error factors such as (1 ± e_i) and just write asymptotic equality. By "u c-colors" we will refer to the event that vertex u colors itself successfully with color c during the current round, and by "c decays at u" we will refer to the event that color c is deleted from u's palette at the end of the current round. The current palette of a vertex u is denoted by A_u. In the rest of the section, we focus on an arbitrary uncolored vertex u.

During the dozing-off phase, s < d and the vertices awake with probability p = s/d.

LEMMA 4.1. During the dozing-off phase, for each uncolored vertex u and each color c in u's palette,

    Pr[u c-colors] ~ (p/s)(1 − p/s)^d ~ (1/d)(1/e).

Proof. Let N_u^c be the set of u-neighbors which have color c in their palettes. Recalling that, by definition, p = s/d and the induction hypotheses |A_u| ~ s and deg(u, c) = |N_u^c| ~ d, for all u and c,

    Pr[u c-colors] = (p/|A_u|) Π_{v∈N_u^c} (1 − p/|A_v|)
                   = ((1 + o(1))/d) (1 − (1 + o(1))/d)^{d(1+o(1))}
                   = (1 + o(1)) (1/d)(1/e).

The last equality follows from Fact 2.1 by taking A(n) := (1 + o(1))/d and B(n) := (1 + o(1)) d.

From the lemma, again using Fact 2.1 and the induction hypothesis, we see that

    Pr[u colors] = Σ_{c∈A_u} Pr[u c-colors] ~ (s/d)(1/e),    (21)


where A_u is the current palette of u. Denoting by N_u the current set of uncolored neighbors of u,

    Ex[deg′(u)] = Σ_{v∈N_u} (1 − Pr[v colors]) ~ D(1 − (s/d)(1/e)) = D′,    (22)

since by induction |N_u| = deg(u) ~ D. This shows that the expectation of deg′(u) is near the ideal value D′, thereby establishing Eq. (17). This was of course what suggested the recurrence

    D′ = D(1 − (s/d)(1/e))

in the first place.

We now show that the palette sizes also have expectation asymptotically equal to their ideal values. First, we need a statement similar to that of Lemma 4.1.

LEMMA 4.2. During the dozing-off phase, for each uncolored vertex u, each uncolored neighbor v of u, and each color c in u's palette,

    Pr[u c-colors | v does not color] ~ 1/(de).

Proof. Let us denote by v̄ the event "v does not color" and by t_v = c the event "v's tentative color is c" (the latter subsumes the fact that v wakes up at the current round). Then,

    Pr[u c-colors | v̄]
      = Pr[u c-colors, t_v = c | v̄] + Pr[u c-colors, t_v ≠ c | v̄]
      = Pr[u c-colors, t_v ≠ c | v̄]
      = Pr[u c-colors, t_v ≠ c, v̄] / Pr[v̄]
      = Pr[v̄ | u c-colors, t_v ≠ c] Pr[u c-colors, t_v ≠ c] / Pr[v̄]
      ~ (1 − s/(de)) (1/(de)) / (1 − s/(de))
      ~ 1/(de).

The first asymptotic equality follows because, by the definition of the algorithm and the induction hypothesis, and recalling that there are no

DISTRIBUTED BROOKS ᎐ VIZING COLORINGS

101

triangles in the graph, Pr v ¬ u c-colors, t¨ / c s Pr v ¬ ᭙ w g Nuc t w / c, t u s c, t¨ / c s Pr v ¬ t u s c, t¨ / c s 1 y Pr w ¨ colors ¬ t u s c, t¨ / c x s1y

Ý Pr w ¨ a-colors ¬ t u s c x a/c

;1y

s de

.

That Prw u c-colors, t¨ / c x ; 1rde follows directly from the definition of the algorithm and the induction hypothesis. To compute the new size of u’s palette we need to compute the probability that a given color c decays at u, that is, the probability that it is removed from u’s palette. As the neighbors of u color themselves, the colors they have used are removed from u’s palette. Again denoting by Nuc the set of neighbors of u whose palettes contain c at the current round, we see that for each color c in the palette, Pr w c decays at u ¬ u does not color x s Pr w some ¨ g Nuc c-colors ¬ u does not color x s 1 y Pr w no ¨ g Nuc c-colors ¬ u does not color x . The probability that neither of two neighbors’ c-colors is, in general, difficult to compute but, in the case where there are neither 3- nor 4-cycles, the 2-neighborhood of u is a tree. Thus we see that the events ‘‘ x does not c-color’’ and ‘‘ y does not c-color’’ are essentially independent, since c-coloring only depends on the Žindependent. tentative color choices made by each vertex and its neighbors and since any two vertices x and y have no neighbors in common except u. The same is true for any number of neighbors of u. Therefore, from the induction hypothesis, using the last lemma, Lemma 4.1, and Fact 2.1, Pr w c decays at u ¬ u does not color x s 1 y Pr w no ¨ g Nuc c-colors ¬ u does not color x ;1y

Ł

¨ gNuc

;1y 1y

ž

Pr w ¨ does not c-color ¬ u does not color x 1 ed

d

/

; 1 y exp Ž y1re . .

Ž 23 .

102

GRABLE AND PANCONESI

Then, again by induction,

Ý Ž 1 y Pr w c decays at u x . ; s exp Ž y1re .

Ex s⬘ Ž u . s

cgAŽ u .

which establishes Eq. Ž15. and suggests that palette decay during the dozing-off phase is governed by the recurrence s⬘ s s exp Ž y1re . . Now we turn to the d⬘Ž u, c .’s. For each color c, the graph Gc is defined as the graph induced by the vertices whose palettes contain c. The probability that a vertex u must be removed from Gc is, recalling Eqs. Ž21. and Ž23., Pr w u disappears from Gc x s Pr w u colors x q Pr w c decays at u and u does not color x s Pr w u colors x q Pr w c decays at u ¬ u does not color x = Ž 1 y Pr w u colors x . ;

s 1 d e

q 1y

ž

s1y 1y

ž

s 1 d e

s 1 d e

/



1 y exp Ž y1re . .

exp Ž y1re . .

By induction, Ex d⬘ Ž u, c . s

Ý Ž1 y Pr w u disappears from Gc x . ; d

¨ gNuc

ž

1y

1 1 d e

/

exp Ž y1re . ,

suggesting that the random variables d i Ž u, c .’s should be approximated well by the solution to the recurrence d⬘ s d 1 y

ž

s 1 d e

/

exp Ž y1re . .

Therefore we see that Eqs. Ž15. ᎐ Ž17. hold throughout the dozing-off phase. After the algorithm switches to the trivial phase, the recurrences for the ideal values take on new forms, reflecting the fact that pi s 1 instead of sird i . Their derivations are conceptually the same, however.
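As a numeric sanity check of these recurrences, one can simply iterate them (the starting values below are arbitrary illustrative choices, not values prescribed by the algorithm) and watch the ratio $s/d$ grow, which is what eventually triggers the switch to the trivial phase:

```python
import math

# Iterate the ideal-value recurrences of the dozing-off phase.
# Starting values are arbitrary illustrative choices with s < d.
s, d, D = 250.0, 1000.0, 1000.0
rounds = 0
while s < d:                       # the dozing-off phase runs while s_i < d_i
    color_pr = s / (d * math.e)    # Pr[u colors] ~ s/(de), Eq. (21)
    s, d, D = (s * math.exp(-1 / math.e),
               d * (1 - color_pr) * math.exp(-1 / math.e),
               D * (1 - color_pr))
    rounds += 1
    print(rounds, round(s / d, 3))  # s/d grows by a factor 1/(1 - s/(de)) each round
```

Note that $s'/d' = (s/d)\,/\,(1 - s/(de))$, so the ratio increases strictly in every round, which is why the loop above must terminate.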

Start again with the probability that a vertex colors itself with color $c$. Namely,
$$\Pr[u \text{ c-colors}] \sim \frac{1}{s}\left(1 - \frac{1}{s}\right)^{d} \sim \frac{1}{s}\,e^{-d/s}.$$
Thus,
$$\Pr[u \text{ colors}] = \sum_{c \in A_u} \Pr[u \text{ c-colors}] \sim e^{-d/s}.$$
Again both asymptotic equalities follow from Fact 2.1. This implies that the expected number of neighbors of $u$ which remain uncolored in the next round is asymptotically equal to $D(1 - e^{-d/s})$, which led to the recurrence
$$D' = D\,(1 - e^{-d/s}).$$
To derive the recurrence for the palette sizes we again use the fact that, when the girth is at least 5, the events "$v$ c-colors" are essentially independent for any set of neighbors of $u$:
$$\begin{aligned}
\Pr[c \text{ decays at } u \mid u \text{ does not color}] &= \Pr[\text{some } v \in N_{uc} \text{ c-colors} \mid u \text{ does not color}] \\
&= 1 - \Pr[\text{no } v \in N_{uc} \text{ c-colors} \mid u \text{ does not color}] \\
&\sim 1 - \prod_{v \in N_{uc}} \Pr[v \text{ does not c-color} \mid u \text{ does not color}] \\
&\sim 1 - \left(1 - \frac{1}{s}\,e^{-d/s}\right)^{d} \sim 1 - \exp\left\{-\frac{d}{s}\,e^{-d/s}\right\}.
\end{aligned}$$
Thus, the expected number of colors in $u$'s palette which survive is asymptotically equal to $s\,\exp\{-(d/s)e^{-d/s}\}$, which led to the recurrence
$$s' = s\,\exp\left\{-\frac{d}{s}\,e^{-d/s}\right\}.$$
Finally, for the $d_i(u, c)$'s we have
$$\begin{aligned}
\Pr[u \text{ disappears from } G_c] &= \Pr[u \text{ colors}] + \Pr[c \text{ decays at } u \text{ and } u \text{ does not color}] \\
&\sim e^{-d/s} + (1 - e^{-d/s})\left(1 - \exp\left\{-\frac{d}{s}\,e^{-d/s}\right\}\right) \\
&= 1 - (1 - e^{-d/s})\exp\left\{-\frac{d}{s}\,e^{-d/s}\right\},
\end{aligned}$$
which led to
$$d' = d\,(1 - e^{-d/s})\exp\left\{-\frac{d}{s}\,e^{-d/s}\right\}.$$
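The trivial-phase recurrences can be sanity-checked the same way (again with arbitrary illustrative starting values); note how the ratio $d/s$ shrinks from round to round:

```python
import math

# Iterate the trivial-phase recurrences (p_i = 1).
# Illustrative starting values with s slightly above d, as when the phase begins.
s, d, D = 100.0, 90.0, 400.0
for i in range(8):
    r = d / s
    color_pr = math.exp(-r)                    # Pr[u colors] ~ e^{-d/s}
    s, d, D = (s * math.exp(-r * color_pr),
               d * (1 - color_pr) * math.exp(-r * color_pr),
               D * (1 - color_pr))
    print(i + 1, round(d / s, 3))              # d/s shrinks round after round
```

Indeed $d'/s' = (d/s)(1 - e^{-d/s})$, which is always strictly smaller than $d/s$, so in this phase the palettes only get larger relative to the color degrees.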

4.2. Concentration Results

As in the previous section, the current palette size, vertex degree, and number of neighbors having a certain color $c$ in their palettes are denoted by $s$, $D$, and $d$, respectively. In what follows we analyze one step of the algorithm assuming by induction that the input graph $G$ satisfies the following:

• $G$ is almost $D$-regular with girth at least 4;
• For all vertices $u$ of $G$, $|A_u| \sim s$, where $A_u$ is the current palette of vertex $u$;
• For all colors $c$ and vertices $u$, the number of $u$-neighbors which have color $c$ in their palette is $\deg(u, c) \sim d$; i.e., $G_c$ is almost $d$-regular for all colors $c$.

Notice that we are considering the girth-4 case. That is, we will prove the concentration results directly for the most general case, paving the way for the extension of Section 5. Throughout, $t_u = c$ denotes the event "vertex $u$'s tentative color choice is $c$." We start with some preliminary facts that will come in handy later.

DEFINITION 4.1. Given a set of vertices $A$, $\mathrm{UP}(A)$ is the set of vertices in $A$ that wake up at the current round. When the context allows it, for notational convenience we shall denote the cardinality $|\mathrm{UP}(A)|$ simply by $\mathrm{UP}(A)$.

Fact 4.1. $\operatorname{Ex}[\mathrm{UP}(N_u)] \sim s\,\dfrac{D}{d}$ and, for all $\varphi \ge 0$,
$$\Pr\left[\,\bigl|\mathrm{UP}(N_u) - \operatorname{Ex}[\mathrm{UP}(N_u)]\bigr| \ge \sqrt{3\varphi s\,\frac{D}{d}}\,\right] \le 2e^{-\varphi}.$$

Proof. $\mathrm{UP}(N_u) = B(N_u, p)$ where, by definition of the algorithm, $p = s/d$ and, by the induction hypothesis, $|N_u| \sim D$. The claim follows from Fact 2.2.

There is no need to prove Fact 4.1 for the trivial phase, for then $\mathrm{UP}(N_u) = N_u$. Consider the set of neighbors of a vertex $v$ which have a particular color $c$ in their palettes, i.e., $N_{vc}$. The next fact says that the number of such neighbors which wake up is sharply concentrated around its expectation. Again, the proof is routine.

Fact 4.2. $\operatorname{Ex}[\mathrm{UP}(N_{vc})] \sim s$ and, for all $\varphi \ge 0$,
$$\Pr\left[\,\bigl|\mathrm{UP}(N_{vc}) - \operatorname{Ex}[\mathrm{UP}(N_{vc})]\bigr| \ge \sqrt{3\varphi s}\,\right] \le 2e^{-\varphi}.$$

Proof. $\mathrm{UP}(N_{vc}) = B(N_{vc}, p)$ where, by definition, $p = s/d$ and, by the induction hypothesis, $|N_{vc}| \sim d$ for all $v$ and $c$. The claim follows from Fact 2.2.
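Both facts are routine binomial tail bounds; the following Monte Carlo sketch (with arbitrary illustrative values for $d$, $s$, and $\varphi$) shows the deviation scale $\sqrt{3\varphi s}$ of Fact 4.2 at work:

```python
import random
import statistics

# |UP(N_vc)| is Binomial(|N_vc|, p) with |N_vc| ~ d and p = s/d,
# so its mean is ~ s and deviations of order sqrt(3*phi*s) are rare.
random.seed(1)
d, s, phi = 2000, 500, 3.0
p = s / d
samples = [sum(random.random() < p for _ in range(d)) for _ in range(200)]
dev = (3 * phi * s) ** 0.5        # the deviation scale of Fact 4.2
outside = sum(abs(x - s) > dev for x in samples)
print(statistics.mean(samples), outside)   # mean near s = 500; outside nearly always 0
```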

After these preliminaries we are set to prove the concentration results. For brevity, we will prove them only for the dozing-off phase. An essentially identical proof holds for the trivial phase. Intuitively, things improve during the trivial phase, for $s$ grows larger relative to both $D$ and $d$. The bounds will be strong enough to hold simultaneously for all vertices $u$, colors $c$, and rounds $i$ until the termination condition is satisfied.

4.2.1. Concentration of vertex degrees. The theorem we want to prove here is the following:

THEOREM 4.3. For a vertex $u$ let $h_u := |\{v \in N_u : v \text{ colors}\}|$. Then, for $\varphi \ge 5 \log n$ and $n \ge 2$,
$$\Pr\left[\,\bigl|h_u - \operatorname{Ex}[h_u]\bigr| > 3\sqrt{\varphi\,\frac{sD}{ed}}\,\right] \le 2e^{-\varphi}(2 + n + n^3).$$
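Before turning to the proof, a crude numeric illustration of the deviation scale in Theorem 4.3. The sketch treats each neighbor's success as an independent coin flip with probability $\sim p/e$ — an assumption made purely for illustration, since controlling the true dependencies is precisely what the proof is for:

```python
import math
import random

# Idealized model: each of D neighbors wakes with probability p = s/d and,
# if awake, colors successfully with probability ~ 1/e, independently.
random.seed(3)
D, d, s = 4000, 1000, 300
q = (s / d) / math.e                      # per-neighbor success probability
mean = D * q                              # ~ sD/(de), cf. Eq. (22)
samples = [sum(random.random() < q for _ in range(D)) for _ in range(100)]
# 3*sqrt(phi*sD/(ed)) with phi = 5 log n and n taken to be D (illustrative)
dev = 3 * math.sqrt(5 * math.log(D) * s * D / (math.e * d))
worst = max(abs(x - mean) for x in samples)
print(round(mean, 1), round(worst, 1), round(dev, 1))  # worst deviation well below dev
```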

We start with a definition.

DEFINITION 4.3. For a vertex $v$, let $F_v := \{c \in A_v : \forall w \in N_v,\; t_w \ne c\}$, i.e., the set of colors in $v$'s current palette which are not chosen tentatively by any $v$-neighbor.

$F_v$ is the set of colors which remain "free" for $v$ after $v$'s neighbors make their tentative color choices. The next lemma asserts that, almost surely, the number of colors from $v$'s palette not claimed as tentative colors by any neighbor is sharply concentrated around its expectation. Said differently, it states that, for each vertex $v$, the number of free colors is sharply concentrated around its expectation. Let $f_v := |F_v|$. Then,

LEMMA 4.3.
$$\operatorname{Ex}[f_v] \sim \frac{s}{e}$$
and, for all $\varphi \ge 4 \log n$ and $n \ge 2$,
$$\Pr\left[\,\bigl|f_v - \operatorname{Ex}[f_v]\bigr| > 3\sqrt{\varphi s}\,\right] \le 2e^{-\varphi}(1 + n^2).$$

Proof. To prove the claim we shall use the MOBV as stated in Corollary 3.1. For a vertex $w$ and a color $c$, consider the events
$$\mathcal{B}_{wc} := \left(\,\bigl|\mathrm{UP}(N_{wc}) - \operatorname{Ex}[\mathrm{UP}(N_{wc})]\bigr| \le \sqrt{3\varphi s}\,\right)$$
and let $\mathcal{B} := \bigcap_{w,c} \mathcal{B}_{wc}$. By Fact 4.2 we have $\Pr[\overline{\mathcal{B}}] \le 2n^2 e^{-\varphi}$ for all $\varphi \ge 0$.

First, we compute the expectation of $f_v$. Assuming $\mathcal{B}$, let
$$X_c := \begin{cases} 1 & \text{no neighbor in } \mathrm{UP}(N_{vc}) \text{ picks } c \text{ tentatively} \\ 0 & \text{otherwise} \end{cases}$$
so that
$$(f_v \mid \mathcal{B}) = \sum_{c \in A_v} X_c.$$
By the induction hypothesis and Facts 2.1 and 4.2,
$$\begin{aligned}
\operatorname{Ex}[f_v] &= \operatorname{Ex}[f_v \mid \mathcal{B}]\Pr[\mathcal{B}] + \operatorname{Ex}[f_v \mid \overline{\mathcal{B}}]\Pr[\overline{\mathcal{B}}] \sim \operatorname{Ex}[f_v \mid \mathcal{B}] \\
&= \sum_{c \in A_v} \operatorname{Ex}[X_c] = \sum_{c \in A_v} \prod_{w \in \mathrm{UP}(N_{vc})} \Pr[t_w \ne c] \\
&= \sum_{c \in A_v} \prod_{w \in \mathrm{UP}(N_{vc})} \left(1 - \frac{1}{|A_w|}\right) \\
&= (1 + o(1))\, s \left(1 - \frac{1 + o(1)}{s}\right)^{s(1 + o(1))} \sim s\,\frac{1}{e}.
\end{aligned}$$
The $X_c$'s are not independent, for if a $v$-neighbor $w \in N_{va} \cap N_{vb}$ wakes up then $X_a$ and $X_b$ will both be affected. Therefore we resort to Corollary 3.1 and study the variance of $(f_v \mid \mathcal{B})$.

To estimate the variance of $(f_v \mid \mathcal{B})$ we query, in any order, the vertices in the set
$$B := \bigcup_{c} \mathrm{UP}(N_{vc}),$$
asking for their tentative color choices. These are the only vertices whose tentative color choices can belong to $v$'s palette. For each such query, there are only two events to consider: either the tentative color is in $v$'s palette or it is not. For $w \in B$, this happens with probability
$$\frac{|A_w \cap A_v|}{|A_w|}.$$
The maximum effect of the query is 1 since at most one color can be affected. To estimate the global variance of this strategy, consider the following bipartite graph $H$ (we should actually write $H_v$ but we drop the subscript for notational convenience). The two sets of the bipartition are $A_v$ and $B$. An edge $cw \in E(H)$ if and only if $c \in A_w$ and $w$ wakes up. Assuming $\mathcal{B}$ implies that $d_H(c) = \mathrm{UP}(N_{vc}) \sim s$ for each color $c \in A_v$, so that $|E(H)| \sim s^2$. For $w \in B$,
$$V(w) \le \Pr[t_w \in A_v]\,c_w^2 = \frac{|A_w \cap A_v|}{|A_w|} = \frac{d_H(w)}{|A_w|} \sim \frac{d_H(w)}{s},$$
where, as remarked, $c_w = 1$ since $w$'s tentative choice can affect at most one color in $A_v$. Therefore,
$$V(f_v \mid \mathcal{B}) = \sum_{w \in B} V(w) \sim \frac{|E(H)|}{s} \sim s.$$
Hence, invoking Corollary 3.1,
$$\Pr\left[\,\bigl|f_v - \operatorname{Ex}[f_v]\bigr| > 2\sqrt{\varphi s} + s \Pr(\overline{\mathcal{B}})\,\right] \le 2e^{-\varphi}(1 + n^2),$$
which implies, for $\varphi \ge 4 \log n$ and $n \ge 2$,
$$\Pr\left[\,\bigl|f_v - \operatorname{Ex}[f_v]\bigr| > 3\sqrt{\varphi s}\,\right] \le 2e^{-\varphi}(1 + n^2).$$
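A small Monte Carlo experiment makes the expectation $\operatorname{Ex}[f_v] \sim s/e$ concrete. The per-color independence assumed below is purely for illustration — handling the dependencies between colors is exactly what the variance argument above is for:

```python
import math
import random

# Idealized model behind Lemma 4.3: for each color c in v's palette, each of
# the ~d vertices of N_vc wakes with probability p = s/d and, if awake,
# tentatively picks c with probability 1/s. Colors are treated as independent
# here purely for illustration.
random.seed(7)
s, d, trials = 100, 500, 50
p = s / d
total = 0
for _ in range(trials):
    free = 0
    for _color in range(s):
        awake = sum(random.random() < p for _ in range(d))
        if all(random.random() >= 1 / s for _ in range(awake)):
            free += 1
    total += free
print(round(total / trials, 1), round(s / math.e, 1))  # both near s/e
```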

We can now prove Theorem 4.3, again using Corollary 3.1. Let
$$\mathcal{F}_v := \left(\,\bigl|f_v - \operatorname{Ex}[f_v]\bigr| \le 3\sqrt{\varphi s}\,\right)$$
and
$$\mathcal{F} := \bigcap_{v} \mathcal{F}_v.$$
Then, from Lemma 4.3, $\Pr[\overline{\mathcal{F}}] \le 2e^{-\varphi}(n + n^3)$. Also, let
$$\mathcal{A} := \left(\,\bigl|\mathrm{UP}(N_u) - \operatorname{Ex}[\mathrm{UP}(N_u)]\bigr| \le \sqrt{3\varphi s\,\frac{D}{d}}\,\right)$$
so that, from Fact 4.1, $\Pr[\overline{\mathcal{A}}] \le 2e^{-\varphi}$.

We shall study the variance of $\hat h_u := (h_u \mid \mathcal{A} \cap \mathcal{F})$. The strategy is simply this: expose the tentative color choices of vertices in $\mathrm{UP}(N_u)$, in any order. Assuming $\mathcal{F}$ we have that a $u$-neighbor $v$ colors itself successfully if and only if $t_v \in F_v$. By Lemma 4.3 and the induction hypothesis, this happens with probability
$$\frac{|F_v|}{|A_v|} \sim \frac{1}{e}.$$
The effect of every such query is 1, since $v$ can affect only itself. Therefore, again by Fact 3.1, the variance of this strategy is given by
$$V(\hat h_u) \le \sum_{v \in \mathrm{UP}(N_u)} \Pr[t_v \in F_v]\,c_v^2 \sim \sum_{v \in \mathrm{UP}(N_u)} \frac{1}{e} \sim \frac{sD}{d}\,\frac{1}{e}. \qquad (24)$$
Applying Corollary 3.1 together with the above gives
$$\Pr\left[\,\bigl|h_u - \operatorname{Ex}[h_u]\bigr| > 2\sqrt{\varphi\,\frac{sD}{de}} + D \Pr\bigl(\overline{\mathcal{A} \cap \mathcal{F}}\bigr)\,\right] \le 2e^{-\varphi} + \Pr\bigl(\overline{\mathcal{A} \cap \mathcal{F}}\bigr).$$

Plugging in the figures and recalling that $\varphi \ge 5 \log n$ yields the claim. As remarked, a very similar argument holds for the trivial phase.

4.2.2. Concentration of palette sizes. Our aim here is to establish the following theorem.

THEOREM 4.4. For a vertex $u$ let $f_u$ be the number of colors that are removed from $A_u$ after one round of the algorithm (note that we are reusing the symbol $f_u$; i.e., we are not referring to the $f_v$ of Lemma 4.3). Let $\varphi \ge 5 \log n$ and $n \ge 2$. Then,
$$\Pr\left[\,\bigl|f_u - \operatorname{Ex}[f_u]\bigr| > 3\sqrt{\varphi s}\,\right] < 2e^{-\varphi}(1 + n + n^3).$$

We shall prove the statement for the dozing-off phase of the algorithm. The arguments for the trivial phase are essentially the same; only the computations differ slightly.

DEFINITION 4.4. Let $S := \{v \in N_u : t_v \in A_u\}$. This is the set of $u$-neighbors whose tentative color choices can affect $u$'s palette.

Our first goal is to show that $|S| \sim \operatorname{Ex}[|S|]$ with high probability, for all vertices $u$. Since there is a set $S$ for every $u$, one should write $S_u$, but the subscript is dropped for the sake of convenience.

LEMMA 4.4. For any $\varphi \ge 4 \log n$ and $n \ge 2$,
$$\Pr\left[\,\bigl||S| - \operatorname{Ex}[|S|]\bigr| > 3\sqrt{\varphi s}\,\right] \le 2e^{-\varphi}(1 + n^2).$$

Proof. Notice that we will be able to establish the claim without actually computing the value of $\operatorname{Ex}[|S|]$. Consider the events
$$\mathcal{B}_{uc} := \left(\,\bigl|\mathrm{UP}(N_{uc}) - \operatorname{Ex}[\mathrm{UP}(N_{uc})]\bigr| \le \sqrt{3\varphi s}\,\right)$$
and
$$\mathcal{B} := \bigcap_{u,c} \mathcal{B}_{uc}.$$
By Fact 4.2,
$$\Pr[\overline{\mathcal{B}}] \le 2n^2 e^{-\varphi}. \qquad (25)$$
To establish that $|S|$ is sharply concentrated we shall study the variance of $(|S| \mid \mathcal{B})$ and invoke Corollary 3.1. At this point, to estimate the variance it is sufficient to expose, in any order, the tentative color choices of vertices in the set
$$B := \bigcup_{c \in A_u} \mathrm{UP}(N_{uc}),$$
i.e., the set of vertices which wake up and whose palettes have nonempty intersection with $A_u$. Only these vertices can choose a tentative color in $A_u$; i.e., $B \supseteq S$. Once a tentative color choice of a vertex in $B$ is exposed, there are only two relevant events: either the tentative color belongs to $u$'s palette or it does not. For $v \in B$, this happens with probability
$$\frac{|A_v \cap A_u|}{|A_v|}.$$
The effect is one since at most one color can be affected. To estimate the variance of this strategy, define a bipartite graph $H$ as follows (we should really write $H_u$ but we drop the subscript). One side of the bipartition is $A_u$ and the other side is the set $B$. The edges of $H$ are defined as follows. Edge $vc$ is inserted if and only if $v \in \mathrm{UP}(N_{uc})$, i.e., if $v$ wakes up and $c \in A_v$. By definition, for a $c \in A_u$, $\deg_H(c) = \mathrm{UP}(N_{uc})$, which, assuming $\mathcal{B}$, is $\sim s$ for all colors (vertices) $c \in A_u$. This implies that
$$|E(H)| \sim s^2. \qquad (26)$$
We now estimate the variance of all vertices in $B$. Recalling that the maximum effect $c_v$ of every $v \in B$ is 1, for $v \in B$, we have
$$V(v) \le \Pr[t_v \in A_u]\,c_v^2 = \frac{\deg_H(v)}{|A_v|} \sim \frac{\deg_H(v)}{s},$$
which, together with Eq. (26), gives a total variance of
$$V(B) \sim \frac{1}{s} \sum_{v \in B} \deg_H(v) = \frac{|E(H)|}{s} \sim s.$$
Applying Corollary 3.1, together with Fact 4.2, gives
$$\Pr\left[\,\bigl||S| - \operatorname{Ex}[|S|]\bigr| > 2\sqrt{\varphi s} + D \Pr(\overline{\mathcal{B}})\,\right] \le 2e^{-\varphi} + \Pr(\overline{\mathcal{B}}).$$
From Eq. (25), the hypothesis $\varphi \ge 4 \log n$, and the fact that $D \le n$, it follows that
$$\Pr\left[\,\bigl||S| - \operatorname{Ex}[|S|]\bigr| > 3\sqrt{\varphi s}\,\right] \le 2e^{-\varphi}(1 + n^2). \qquad (27)$$

To conclude the proof of Theorem 4.4 we apply again Corollary 3.1, to the random variable $(f \mid \mathcal{S})$ where $\mathcal{S}$ is defined as follows. Let
$$\mathcal{S}_u := \left(\,\bigl||S| - \operatorname{Ex}[|S|]\bigr| \le 3\sqrt{\varphi s}\,\right)$$
and
$$\mathcal{S} := \bigcap_{u} \mathcal{S}_u.$$
It follows from Lemma 4.4 that
$$\Pr[\overline{\mathcal{S}}] \le 2e^{-\varphi}(n + n^3). \qquad (28)$$
Define a bipartite graph $L$ whose sides of the bipartition are the set $S$ and the set of "relevant" second neighbors of $u$ (as usual, it should be $L_u$ but we drop the subscript). The set in question is
$$T := \bigcup_{v \in S} \mathrm{UP}(N_{v\,t_v}).$$
This needs some explanation. After conditioning on $\mathcal{S}$ we know which $u$-neighbors woke up and which tentative colors they chose. If $t_v$ is $v$'s tentative color then, whether or not $v$ successfully colors, thereby depleting $u$'s palette, only depends on vertices in $\mathrm{UP}(N_{v\,t_v})$. Clearly then, $(f \mid \mathcal{S})$ is only a function of the color choices of vertices in $T$. Assuming $\mathcal{S}$, we have that for each $v \in S$, $\deg_L(v) = \mathrm{UP}(N_{v\,t_v}) \sim s$, so that $|E(L)| \sim s|S| \sim s^2$. For $w \in T$,
$$V(w) \le \frac{\deg_L(w)}{|A_w|}\,c_w^2 \sim \frac{\deg_L(w)}{s},$$
because $c_w = 1$ (at most one color can be affected). Therefore
$$V(T) \le \frac{1}{s} \sum_{w \in T} \deg_L(w) = \frac{|E(L)|}{s} \sim s.$$
Invoking Corollary 3.1 gives
$$\Pr\left[\,\bigl|f - \operatorname{Ex}[f]\bigr| > 2\sqrt{\varphi s} + 2se^{-\varphi}(n + n^3)\,\right] \le 2e^{-\varphi} + 2e^{-\varphi}(n + n^3).$$
Using the fact that $\varphi \ge 5 \log n$, the above implies
$$\Pr\left[\,\bigl|f - \operatorname{Ex}[f]\bigr| > 3\sqrt{\varphi s}\,\right] \le 2e^{-\varphi}(1 + n + n^3),$$

which concludes the proof of Theorem 4.4.

4.2.3. Concentration of $d(u, c)$. We now focus on $d_i(u, c)$. We shall adopt the same conventions and notation of the preceding sections. As before we shall give the proof only for the dozing-off phase, that for the trivial phase being essentially identical. Since the arguments are very similar to those already employed, the pace will be somewhat faster.

Focus on a vertex $u \in G_c$, the subgraph induced by all vertices that have $c$ in their palettes. There are two different reasons why a $u$-neighbor $v$ can disappear from $G_c$:

• $v$ can color itself during the current round;
• a $v$-neighbor can color itself with $c$ during the current round.

We already know that the first random variable, the number of neighbors that color themselves at the current round, is sharply concentrated. More precisely, it is within $3\sqrt{\varphi s}$ of its expectation with probability at least $1 - 2e^{-\varphi}(2 + n + n^3)$. We focus then on the second. The theorem we want to show is the following.

THEOREM 4.5. Let $g_u$ be the number of $u$-neighbors such that color $c$ is removed from their palettes after the current round. Then, for $\varphi \ge 3 \log n$ and $n \ge 2$,
$$\Pr\left[\,\bigl|g - \operatorname{Ex}[g]\bigr| > 3\sqrt{\varphi d}\,\right] \le 2e^{-\varphi}(1 + n^2).$$

During the proof, keep in mind that we are working in the graph $G_c$. The difficulty of the proof is, loosely speaking, that $g$ potentially depends on the tentative color choices of as many as $d^3$ vertices. To bound the effect, as in the previous sections, we shall condition on the event $\mathcal{B}$, defined previously, which says that, for all vertices $u$ and colors $c$, $\mathrm{UP}(N_{uc}) \sim s$. Recall that $\Pr[\overline{\mathcal{B}}] \le 2n^2 e^{-\varphi}$ (Fact 4.2).

We shall study the variance of $(g \mid \mathcal{B})$. Focus on a vertex $w \in N^2_{uc}$; under what circumstances can it affect $(g \mid \mathcal{B})$? First, $w$ must wake up, i.e., $w \in \mathrm{UP}(N^2_{uc})$. Second, after $w$ wakes up, there are two possible, mutually exclusive outcomes: (i) none of its neighbors picks tentative color $c$, or (ii) some $w$-neighbor picks tentative color $c$. Define $w$ to be live in the first case and killed in the second. Notice that if $w$ is killed, it cannot affect $g$. Third, for the live vertices, only two events are relevant: either $t_w = c$ or $t_w \ne c$. Therefore, $(g \mid \mathcal{B})$ is a function of the tentative color choices of the live vertices. To bound the variance, we consider the bipartite graph $H$ induced by the set of $u$-neighbors that can lose $c$, i.e., $N^1_{uc}$, and the set of their neighbors that wake up, i.e., $\mathrm{UP}(N^2_{uc})$. Assuming $\mathcal{B}$, it follows that, for all $v \in N^1_{uc}$, $\deg_H(v) \sim s$ and therefore $|E(H)| \sim sd$. The variance of a vertex $w \in \mathrm{UP}(N^2_{uc})$ is bounded from above by
$$V(w) \le \Pr[w \text{ live}, t_w = c]\,\deg_H(w) \le \Pr[t_w = c]\,\deg_H(w) \sim \frac{\deg_H(w)}{s}.$$
Therefore,
$$V\bigl(\mathrm{UP}(N^2_{uc})\bigr) \sim \frac{1}{s} \sum_{w \in \mathrm{UP}(N^2_{uc})} \deg_H(w) = \frac{|E(H)|}{s} \sim d.$$
Invoking Corollary 3.1,
$$\Pr\left[\,\bigl|g - \operatorname{Ex}[g]\bigr| > 2\sqrt{\varphi d} + d \Pr(\overline{\mathcal{B}})\,\right] \le 2e^{-\varphi} + \Pr(\overline{\mathcal{B}}),$$
which, for $\varphi \ge 3 \log n$ and $n \ge 2$, implies
$$\Pr\left[\,\bigl|g - \operatorname{Ex}[g]\bigr| > 3\sqrt{\varphi d}\,\right] \le 2e^{-\varphi}(1 + n^2).$$


4.3. Error Propagation

We can now go back to the problem of error propagation and derive the recurrence for the error terms $e_i$ given in the theorem. Assume by induction that at round $i$ each of $a(u)$, $d(u, c)$, and $d(u)$ has a cumulative error factor of $(1 \pm e_i)$. In each of the expectation computations, we did one of the following operations a (small) constant number of times: we used the induction hypothesis, we used Fact 2.1, or we made transformations like $1/(1 + o(1)) = (1 + o(1))$ and $e^{a(1 + o(1))} = e^a(1 + o(1))$. Altogether, these operations imply that each expectation has an error factor $(1 \pm c_1 e_i)$, for some constant $c_1$.

Next we apply the concentration results to show that, with high probability, each random variable deviates from its mean by no more than a certain amount, which may be interpreted as another error factor. Of all the concentration results, the one with the worst error factor is the one given by Theorem 4.3. We will use that error factor, namely $c_2\sqrt{\varphi s(D/d)}$, uniformly. In fact, we will bound this factor in each round by the value in the very last round, which gives the worst error. Recalling Eqs. (7) and (8), which, respectively, give the smallest value of $s$ and the worst (biggest) ratio $D/d$, we see that, choosing $\varphi = \Theta(\log n)$, the error is at most
$$c_2\sqrt{\frac{ke^{2k} \log n}{\Delta}}.$$
Thus,
$$e_{i+1} = c_1 e_i + c_2\sqrt{\frac{ke^{2k} \log n}{\Delta}}.$$
Solving this recurrence gives the error term stated in Theorem 4.2, namely,
$$e_i = C^i\sqrt{\frac{ke^{2k} \log n}{\Delta}},$$
for an appropriate constant $C > 1$. During the trivial phase things work out in the same way (in fact, things improve since the ratio $s/D$ becomes better, i.e., larger), giving an error factor of the same form, namely
$$e_i = C^i\sqrt{\frac{ke^{k} \log n}{\Delta}},$$
where only the constant $C$ differs.

This finishes the proof of the analysis of the algorithm down to the point where condition (5) holds and establishes Theorem 4.2. Thereafter, the analysis of the trivial algorithm completes the proof of Theorem 4.1 [14].
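The recurrence $e_{i+1} = c_1 e_i + \varepsilon$ used for the error terms has a simple closed form, and a bound of the type $e_i \le C^i \varepsilon$ can be checked mechanically (the constants below are arbitrary illustrative choices, not the ones from the analysis):

```python
# e_{i+1} = c1*e_i + eps with e_0 = 0 solves to e_i = eps*(c1^i - 1)/(c1 - 1),
# which is at most C^i * eps for any C >= c1 and i >= 1.
c1, eps, C = 2.0, 1e-4, 2.0
e = 0.0
for i in range(1, 25):
    e = c1 * e + eps
    closed_form = eps * (c1**i - 1) / (c1 - 1)
    assert abs(e - closed_form) < 1e-10
    assert e <= C**i * eps
print(e)
```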

5. GIRTH 4 Now we allow the input graph to have cycles of length 4, but we still require it to be triangle-free. Having cycles of length 4 affects the analysis in the following fundamental way: it does not affect the probability that a vertex colors itself, but it does affect the probability that colors decay from its palette. In order to justify rigorously the following argument, an analog of Lemma 4.2 would be required. But, since the present discussion is only needed to motivate our approach and the correctness of Lemma 5.1 below and, consequently, of our results remains unaffected, the proof is omitted for the sake of brevity. To see why a different approach is needed for the girth-4 case, consider an extreme example: the complete bipartite graph K d, d with bipartition Ž A, B .. If every vertex has a palette of size a, the probability that a particular color c decays from the palette of a particular vertex u g A can be computed as follows, denoting with E the event ‘‘u does not color.’’ Pr w c decays at u ¬ E x s 1 y Pr w no ¨ g B c-colors ¬ E x s 1 y Pr w some w g A picks t w s c ¬ E x y Pr w no vertex at all picks c ¬ E x s1y 1y 1y

ž ž

;

1 e

y

1 e2

p s

d

y 1y

// ž

p s

2d

/

f 0.233.

ŽHere we used p s s , as in the dozing-off phase.. This is strictly less than d the probability of the same event in the absence of 4-cycles, namely 1 y expŽy1re . f 0.308. In general, when 4-cycles are allowed, the probability of color decay is a function of the local topology. Since the graph may not have the same local topology everywhere, the rate of color decay may vary from vertex to vertex and even from color to color, since it is the topology of Gc , the graph induced by vertices with color c in their palettes, that counts. One thing is true, however, the presence of 4-cycles can only lower the probability of decay. This is proven in the following lemma.

116

GRABLE AND PANCONESI

LEMMA 5.1. In an Ž almost . d-regular triangle-free graph with palettes of size Ž almost . a, for e¨ ery ¨ ertex u and color c, Pr w c decays at u ¬ u does not color x F 1 y exp y

½

pd yp d r s s

e

5.

Proof. The proof compares the probability that the color c decays at vertex u in the given graph with the same probability in a graph with no 4-cycles. Since only the first and second neighborhoods of u can affect whether c decays at u, we ignore the rest of the graph. Similarly, edges between second neighbors play no role and may be ignored. With this local perspective, the graph consists of u, u’s neighbors Nu , and second neighbors w, each of which is adjacent to a fixed set of first neighbors. If the graph were to contain no 4-cycles, the graph seen from this local perspective would simply be a depth-2 tree rooted at u. Starting with the given graph, one can easily split each second neighbor into a set of degree 1 vertices Žleaves., producing a 4-cycle-free graph Žtree. where each first neighbor keeps the same degree. Vice versa, one could start with just such a tree and merge leaves one after another into composite second neighbors to produce the given graph. The proof follows this second process. We start with the tree where the degrees of the first neighbors of u, the root, are the same as in the given graph. We know already that for the tree the probability that c decays is pd asymptotic to 1 y exp y eyp d r s 4 . We show that as we merge leaves into s

composite second neighbors, the probability that c decays cannot increase. Thus, in the original graph, the probability that c decays is at most that in the 4-cycle-free graph. We can make one further simplification: We can ignore all neighbors of u which have not chosen c as their tentative color and all second neighbors which are only adjacent to such first neighbors. So, throughout we will condition on knowing that u does not color and knowing the set X of neighbors of u which tentatively color themselves c. This conditioning will be represented by the event A. If u’s tentative color is c, the fact that u does not color tells us that X is not empty and additionally that no vertex in X can color itself and therefore that c does not decay at u, regardless of the topology of the second neighborhood. Thus, we may assume that u’s tentative color was not c and hence that any of the vertices in X might color themselves c and thereby cause c to decay at u. So, consider an intermediate situation: some of the leaves have already been merged, and now we need to merge a leaf w 1 , adjacent to neighbor ¨ 1 , with a Žpossibly already composite. second neighbor w 2 . We can

DISTRIBUTED BROOKS ᎐ VIZING COLORINGS

117

assume that w 2 is not adjacent to ¨ 1 , since otherwise this operation would create a duplicate edge, none of which exist in the graph given originally. Pr w c decays at u ¬ A x s Pr Ž ¨ 1 c-colors . or

D Ž ¨ c-colors. ¬ A

¨ gX_¨ 1

s Pr w ¨ 1 c-colors ¬ A x q Pr y Pr Ž ¨ 1 c-colors . and

D Ž ¨ c-colors. ¬ A

¨ gX_¨ 1

D Ž ¨ c-colors. ¬ A

.

¨ gX_¨ 1

Considered individually, the merge operation changes neither the value of Prw ¨ 1 c-colors ¬ A x nor that of PrwD ¨ g X _¨ 1 Ž ¨ c-colors. ¬ A x. As for Pr Ž ¨ 1 c-colors . and

D Ž ¨ c-colors. ¬ A

¨ gX_¨ 1

s Pr w ¨ 1 c-colors ¬ A x Pr

D Ž ¨ c-colors . ¬ Ž ¨ 1 c-colors. and

A ,

¨ gX_¨ 1

again, the probability that ¨ 1 c-colors remains unchanged. But the last probability increases, since in the new merged situation, knowing that ¨ 1 c-colors tells us that w 1 ' w 2 does not tentatively choose c. This slightly increases the probability that one of the vertices in X adjacent to w 2 c-colors since it removes a possible obstruction. Thus, after merging w 1 with w 2 , the joint probability increases, decreasing the probability that c decays at u, as desired. Since the same is true after each merge operation, this probability in the given graph, where all leaves are merged as needed, is at most the probability of the same event in the 4-cycle-free graph. Although it would seem to help, the 4-cycles make the analysis very difficult because they systematically destroy the regularity of the graphs Gc and the uniformity of the palette sizes. One way to correct this unfortunate state of affairs is to apply the following approach, reminiscent of the agricultural policy of the European Union: Each vertex normalizes the probability of color decay of each color in its palette by artificially removing colors from its palette even if no neighbor has used the color successfully. By doing this with the correct probability, the effective decay probability can be made to be exactly 1 y exp ypdrs. eyp d r s 4 .

118

GRABLE AND PANCONESI

The algorithm given in the previous section is amended in the following way: each vertex maintains a complete description of its neighborhood out to distance 2. This allows each vertex u to compute for each color c the natural probability pu, c of decay. This is done before tentative colors are chosen. The round then proceeds as before. At the end, each color c which was not naturally removed is artificially removed with probability 1 y exp y Ž pdrs . eyp d r s 4 y pu , c 1 y pu , c

.

Finally, each vertex broadcasts its updated palettes to all neighbors out to distance 2. With this additional balancing mechanism, the expectations for the c-degrees and palette sizes are exactly as they were in the analysis of the previous section. The concentration results are also affected in no significant way by the independent artificial color removals. We therefore get identical performance bounds, with the exception that the communication and space complexity of the algorithm are somewhat increased.

6. CONCLUSIONS We have given a fast, distributed algorithm for vertex coloring ⌬-regular triangle-free graphs using ⌬rk colors, for k s O Žlog ⌬ ., as long as ⌬ G log 1q ␦ n. The analysis prompts several interesting questions. First of all, is the lower bound on ⌬ really necessary or is it an artifact of our analysis? Another possible improvement would be the removal of the regularity assumption to deal with irregular graphs, perhaps with some condition on the minimum degree. Another question is whether the balancing mechanism used in dealing with girth-4 graphs is necessary at all; it seems counterintuitive to remove colors from the palette when no neighbor has used them. Additionally, our methods can cope with a limited number of triangles in the graph. An interesting extension, both algorithmic and existential, would be to characterize classes of nontriangle-free graphs which admit Brooks᎐Vising colorings. For instance, we know that line graphs can be colored with roughly ⌬r2 colors by very fast distributed algorithms w11x, but a more complete characterization would be desirable. These questions might be tackled from a theoretical standpoint, but the best course of action might be a careful experimental study of the algorithm presented in this paper.

DISTRIBUTED BROOKS ᎐ VIZING COLORINGS

119

ACKNOWLEDGMENT David Grable’s financial support came from Deutsche Forschungsgemeinschaft project number Pr 296r4-2. Alessandro Panconesi gratefully acknowledges the financial support of the Alexander von Humboldt Foundation. This research started while he was visiting the ˚ Humboldt University in Berlin and was continued at Brics, University of Arhus, Denmark. Both authors thank Michał Karonski, Tomasz Łuczak and Andrzej Rucinski ´ ´ for comments helpful to the presentation of this result.

REFERENCES

1. B. Awerbuch, A. V. Goldberg, M. Luby, and S. Plotkin, Network decomposition and locality in distributed computing, in "Proceedings of the 30th Symposium on Foundations of Computer Science (FOCS 1989)," pp. 364-369, IEEE, Research Triangle Park, North Carolina.
2. N. Alon, J. H. Kim, and J. Spencer, Nearly perfect matchings in regular simple hypergraphs, Israel J. Math. 100 (1997), 171-187.
3. N. Alon, J. Spencer, and P. Erdős, "The Probabilistic Method," Wiley-Interscience, New York, 1992.
4. B. Bollobás, "Graph Theory," Springer-Verlag, New York, 1979.
5. B. Bollobás, Chromatic number, girth, and maximal degree, Discrete Math. 24 (1978), 311-314.
6. S. Chaudhuri and D. Dubhashi, Probabilistic recurrence relations revisited, Theoret. Comput. Sci., to appear.
7. D. Dubhashi, D. A. Grable, and A. Panconesi, Nearly-optimal, distributed edge-coloring via the nibble method, Theoret. Comput. Sci. 203 (1998), 225-251. [A special issue for ESA 95, the 3rd European Symposium on Algorithms.]
8. D. Dubhashi and A. Panconesi, "Concentration of Measure for Computer Scientists," draft available at http://www.cs.unibo.it/~pancones/papers.html.
9. D. A. Grable, A large deviation inequality for functions of independent, multi-way choices, Combin. Probab. Comput. 7 (1998), 57-64.
10. D. A. Grable, On random greedy triangle packing, Electron. J. Combin. 4 (1997), R11.
11. D. A. Grable and A. Panconesi, Nearly optimal distributed edge coloring in O(log log n) rounds, Random Structures Algorithms 10 (1997), 385-405. [Extended abstract in "Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 97)," pp. 278-285, New Orleans.]
12. P. Hajnal and E. Szemerédi, Brooks coloring in parallel, SIAM J. Discrete Math. 3 (1990), 74-80.
13. A. Johansson, Asymptotic choice number for triangle-free graphs, preprint, DIMACS, September 30, 1996.
14. Ö. Johansson, Simple distributed Δ+1-coloring of graphs, Inform. Process. Lett. 70(5) (1999), 229-232.
15. N. Karchmer and J. Naor, A faster parallel algorithm to color a graph with Δ colors, J. Algorithms 9 (1988), 83-91.
16. H. J. Karloff, An NC-algorithm for Brooks' theorem, Theoret. Comput. Sci. 68 (1989), 89-103.
17. R. M. Karp, Probabilistic recurrence relations, in "Proceedings of the 23rd Annual ACM Symposium on Theory of Computing (STOC 91), New Orleans," pp. 190-197.
18. J. H. Kim, On Brooks' theorem for sparse graphs, Combin. Probab. Comput. 4 (1995), 97-132.
19. M. Luby, Removing randomness in parallel without processor penalty, J. Comput. System Sci. 47 (1993), 250-286.
20. C. McDiarmid, On the method of bounded differences, in "Surveys in Combinatorics" (J. Siemons, Ed.), London Mathematical Society Lecture Note Series, Vol. 141, pp. 148-188, 1989.
21. R. Motwani and P. Raghavan, "Randomized Algorithms," Cambridge Univ. Press, Cambridge, UK, 1995.
22. A. Panconesi and A. Srinivasan, The local nature of Δ-coloring and its algorithmic applications, Combinatorica 15 (1995), 255-280.
23. A. Panconesi and A. Srinivasan, On the complexity of distributed network decomposition, J. Algorithms 20 (1996), 356-374.
24. J. M. Steele, "Probability Theory and Combinatorial Optimization," CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 69, SIAM, Philadelphia.