INFORMATION PROCESSING LETTERS    Volume 13, number 1    27 October 1981

A FAST ALGORITHM FOR FINDING ALL SHORTEST PATHS

Osamu WATANABE
Department of Information Science, Tokyo Institute of Technology, Ookayama, Meguro-ku, Tokyo, Japan

Received 26 January 1981; revised version received 28 July 1981

Keywords: Directed graph, all-pairs shortest-path problem, matrix multiplication

1. Introduction

Let G be a directed graph whose edges are assigned positive integers as their costs. Let {v_1, ..., v_n} be the set of vertices of G. The problem we consider here is to compute, for each pair v_i, v_j, one of the shortest paths from v_i to v_j and its cost (the all-pairs shortest-path problem). Yuval [4] and Romani [3] proposed a fast algorithm to compute the costs of shortest paths (but not the shortest paths themselves). Here we present an algorithm to compute both the shortest paths and their costs. The complexity of our algorithm is the same as Yuval and Romani's.

Let A = ((a_ij)), B = ((b_ij)) be two n × n matrices over N ∪ {+∞}, and let A * B = ((c_ij)) be the matrix defined by

  c_ij = min{a_ik + b_kj | 1 ≤ k ≤ n} .

To compute the costs of shortest paths, Yuval and Romani introduced a fast algorithm to compute this matrix A * B. We do not explain their algorithm here (see [3,4]), but it is an essential part of our algorithm.

Let L be the set {(a', a'') | a' ∈ N, a'' ∈ G*} ∪ {(+∞, #)}, where G* is the set of all finite paths of G including the empty path e (the path of length 0). Let M_n be the set of all n × n matrices over L. A matrix in M_n is used to represent the graph G. Let A_G denote the matrix in M_n whose ij-th entry is (+∞, #) if (v_i, v_j) is not an edge and (c, ℓ) if (v_i, v_j) is an edge, where c and ℓ are the cost and the label of the edge (v_i, v_j) respectively.

For two matrices A = ((a'_ij, a''_ij)), B = ((b'_ij, b''_ij)) in M_n, let A ⊕ B = ((c'_ij, c''_ij)) and A ⊗ B = ((d'_ij, d''_ij)) be the matrices in M_n that are defined as follows:

  c'_ij = min(a'_ij, b'_ij), and c''_ij is a''_ij if a'_ij ≤ b'_ij and b''_ij otherwise;

  d'_ij = min{a'_ik + b'_kj | 1 ≤ k ≤ n},  d''_ij = a''_ih ∘ b''_hj

(∘ is the concatenation operator of paths), where h = min{k | 1 ≤ k ≤ n, a'_ik + b'_kj = d'_ij}. For k ≥ 1 let A_G^k denote A_G ⊗ ··· ⊗ A_G (k multiplications ⊗), let I_n denote the matrix in M_n having (0, e) on the diagonal and (+∞, #) elsewhere, and let H_G be the matrix in M_n defined by

  H_G = I_n ⊕ A_G ⊕ A_G^2 ⊕ ··· .

Note that H_G is the transitive closure of A_G where ⊕, ⊗ are used instead of the usual matrix addition and multiplication. It is easy to see that (1) if there is a path from v_i to v_j in G then the ij-th entry of H_G is (c, ℓ), where c is the minimum cost of these paths and ℓ is one of the paths from v_i to v_j with cost c, and (2) if there is no path from v_i to v_j in G then the ij-th entry of H_G is (+∞, #). Therefore to find shortest paths and their costs in G it is sufficient to compute the transitive closure H_G of A_G with respect to ⊕ and ⊗.

A fast algorithm for the computation of the transitive closure of matrices over closed semirings is known (Theorem 5.7 of [1]). This algorithm has the same order of complexity as that of matrix multiplication. Moreover, it is easy to see that this algorithm works even if the usual matrix multiplication and addition are replaced by ⊗ and ⊕. So in the following we show how to compute the ⊗ product efficiently using fast algorithms for ordinary matrix multiplication such as Strassen's algorithm.

0020-0190/81/0000-0000/$02.75 © 1981 North-Holland

2. A fast algorithm for computing the ⊗ product

Let X, Y be two matrices in M_n. We show how to compute the product Z = X ⊗ Y. Let x_ij = (x'_ij, x''_ij), y_ij = (y'_ij, y''_ij), z_ij = (z'_ij, z''_ij) be the ij-th entries of X, Y, Z respectively. Let X', X'', Y', Y'', Z', Z'' be the matrices whose ij-th entries are x'_ij, x''_ij, y'_ij, y''_ij, z'_ij, z''_ij respectively. It is evident that Z' = X' * Y'. Hence we can compute Z' using Yuval and Romani's algorithm. To compute Z'' we need a lemma.

Lemma. Let h = min{k | 1 ≤ k ≤ n, a_k + b_k = c}, where a_k, b_k ∈ N ∪ {+∞} (1 ≤ k ≤ n), c = min{a_k + b_k | 1 ≤ k ≤ n} and c ∈ N. Then

  min{n·a_k + n·b_k + k − 1 | 1 ≤ k ≤ n} = n·a_h + n·b_h + h − 1 .

Proof. From the definition of h and c, we have

  a_h + b_h = c ≤ a_k + b_k  for all k, 1 ≤ k ≤ n .

If c = a_k + b_k then we have h ≤ k from the definition of h, so

  n·a_h + n·b_h + h − 1 ≤ n·a_k + n·b_k + k − 1 .

Otherwise we have c < a_k + b_k and

  a_h + b_h + 1 = c + 1 ≤ a_k + b_k ,
  n·a_h + n·b_h + n ≤ n·a_k + n·b_k ,
  n·a_h + n·b_h + h − 1 < n·a_k + n·b_k + k − 1 ,

since h − 1 < n. This completes the proof.
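As a sanity check (ours, not the paper's), the Lemma can be verified numerically for finite costs: the scaling by n makes the cost term dominate the index term k − 1, so the encoded minimum is attained exactly at the smallest witness h. The helper name lemma_sides below is our own, and indices are 0-based, so the 0-based k plays the role of k − 1:

```python
import random

def lemma_sides(a, b):
    """Return both sides of the Lemma for cost sequences a, b (0-based k)."""
    n = len(a)
    c = min(x + y for x, y in zip(a, b))                  # c = min{a_k + b_k}
    h = min(k for k in range(n) if a[k] + b[k] == c)      # smallest witness
    lhs = min(n * a[k] + n * b[k] + k for k in range(n))  # 0-based k is k - 1
    rhs = n * a[h] + n * b[h] + h
    return lhs, rhs

# Random check: both sides agree on every trial.
random.seed(1)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.randint(0, 6) for _ in range(n)]
    b = [random.randint(0, 6) for _ in range(n)]
    lhs, rhs = lemma_sides(a, b)
    assert lhs == rhs
```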

Let U = ((u_ij)), V = ((v_ij)) be the matrices defined by

  u_ij = n·x'_ij + j − 1 if x'_ij ∈ N, and u_ij = +∞ otherwise;
  v_ij = n·y'_ij if y'_ij ∈ N, and v_ij = +∞ otherwise.

We compute the product W = ((w_ij)) = U * V using Yuval and Romani's algorithm. Using this matrix W and the previously computed matrix Z' = X' * Y', we can compute Z'' = ((z''_ij)) in the following way. If z'_ij = +∞, then it is evident that z_ij = (+∞, #) and consequently z''_ij = #. Suppose that z'_ij ∈ N. In this case

  w_ij = min{u_ik + v_kj | 1 ≤ k ≤ n}
       = min{n·x'_ik + n·y'_kj + k − 1 | 1 ≤ k ≤ n}   (by W = U * V)
       = n·x'_ih + n·y'_hj + h − 1                     (by the Lemma)
       = n·(x'_ih + y'_hj) + h − 1
       = n·z'_ij + h − 1 .

Hence h = w_ij − n·z'_ij + 1. Using this formula we can compute the value of h, and hence z''_ij = x''_ih ∘ y''_hj. In this way we can calculate Z', Z'' and Z = X ⊗ Y.

To analyze the complexity of this algorithm, we assume the random access machine under the uniform cost criterion ([1, Section 1.2]) as the model of computation, except that it can use the real arithmetic operations (+, −, ×, ⌊x⌋, log₂ x, 2^x) ([4]). Let M(n) denote the complexity of the best algorithm for ordinary n × n matrix multiplication. The complexity of the computation of A * B from A, B is O(M(n)) ([4]). Hence the complexity of our algorithm is as follows: (1) O(M(n) + n²) for the computation of Z' = X' * Y' and W = U * V; (2) O(n²) for the computation of Z'' from Z' and W. Hence the complexity is O(M(n) + n²).
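The whole section can be sketched as follows. This is our own illustration, not the paper's code: minplus is a naive O(n³) stand-in for Yuval and Romani's fast * routine, otimes_via_encoding is a hypothetical name, and indices are 0-based, so the encoding becomes u_ij = n·x'_ij + j and the decoding h = w_ij − n·z'_ij:

```python
# Sketch of Section 2 (our own code, 0-based indices): the path part Z''
# is recovered from two ordinary (min, +) products and O(n^2) decoding.
INF = float("inf")

def minplus(A, B):
    """Naive stand-in for the fast A * B routine of Yuval and Romani."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def otimes_via_encoding(Xc, Xp, Yc, Yp):
    """Compute Z = X (x) Y from cost matrices Xc, Yc and path matrices
    Xp, Yp, using only two * products plus O(n^2) decoding work."""
    n = len(Xc)
    Zc = minplus(Xc, Yc)                                  # Z' = X' * Y'
    # 0-based encoding: u_ij = n x'_ij + j, v_ij = n y'_ij.
    U = [[n * Xc[i][j] + j if Xc[i][j] != INF else INF for j in range(n)]
         for i in range(n)]
    V = [[n * Yc[i][j] if Yc[i][j] != INF else INF for j in range(n)]
         for i in range(n)]
    W = minplus(U, V)
    Zp = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if Zc[i][j] != INF:
                h = W[i][j] - n * Zc[i][j]        # Lemma: smallest witness
                Zp[i][j] = Xp[i][h] + Yp[h][j]    # z''_ij = x''_ih o y''_hj
    return Zc, Zp
```

Because the index term j added to U never exceeds n − 1, the Lemma guarantees that the decoded h is the smallest witness of the minimum, so the concatenated path is well defined.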


3. Conclusion

Using this fast ⊗ product algorithm, the algorithm described in Theorem 5.7 of [1] finds the transitive closure H_G of A_G with respect to ⊕ and ⊗, and consequently all shortest paths and their costs. Its complexity is O(M(n) + n²). For example, if we use Strassen's algorithm ([2]) then M(n) = O(n^α), α = 2.81, and the complexity of our algorithm is also O(n^α). Smaller values of α are also known now. Although the algorithm presented here uses infinite-precision real arithmetic [4], it is possible to modify it to use only finite-precision arithmetic as described in [3].

Acknowledgement

The author wishes to thank Prof. Kojiro Kobayashi for his careful reading of the first drafts.

References

[1] A.V. Aho, J.E. Hopcroft and J.D. Ullman, The Design and Analysis of Computer Algorithms (Addison-Wesley, Reading, MA, 1974).
[2] V. Strassen, Gaussian elimination is not optimal, Numer. Math. 13 (1969) 354-356.
[3] F. Romani, Shortest-path problem is not harder than matrix multiplication, Information Processing Lett. 11(3) (1980) 134-136.
[4] G. Yuval, An algorithm for finding all shortest paths using N^2.81 infinite-precision multiplications, Information Processing Lett. 4(6) (1976) 155-156.
