Refined Analysis and Improvements on Some Factoring Algorithms*

JOURNAL OF ALGORITHMS 3, 101-127 (1982)

C. P. SCHNORR
Fachbereich Mathematik, Universität Frankfurt, Frankfurt, Germany

Received February 17, 1981; revised September 21, 1981

By combining the principles of known factoring algorithms we obtain some improved algorithms which by heuristic arguments all have a time bound O(exp(c√(ln n ln ln n))) for various constants c ≤ 3. In particular, Miller's method of solving index equations and Shanks' method of computing ambiguous quadratic forms with discriminant −n can be modified in this way. We show how to speed up the factorization of n by using preprocessed lists of those numbers in [−u, u] and [n − u, n + u], 0 < u ≪ n, which only have small prime factors. These lists can be uniformly used for the factorization of all numbers in [n − u, n + u]. Given these lists, factorization takes O(exp[2(ln n)^{1/3}(ln ln n)^{2/3}]) steps. We slightly improve Dixon's rigorous analysis of his Monte Carlo factoring algorithm. We prove that this algorithm with probability 1/2 detects a proper factor of every composite n within O(exp√(6 ln n ln ln n)) steps.

1. INTRODUCTION AND SUMMARY

Recently the interest in factoring integers dramatically increased since the security of the RSA public key cryptosystem mainly relies on the difficulty of factoring large integers [1]. The problem of factoring integers is one of the classical computational problems in mathematics. Gauss quoted it as one of the most important and most useful problems of arithmetic. Only modest progress has been made from the factoring methods known to Gauss and Legendre to the most efficient algorithms known today. In fact almost no new ideas came up; the progress mainly relies on more efficient programming and the use of faster computing machinery. Landmarks of this progress have been the factoring of the Fermat number F_7 = 2^{2^7} + 1

*This work started in summer 1980 during a stay at the Stanford Computer Science Department. Preparation of this report was supported in part by National Science Foundation grant MCS77-23738, and by the Bundesminister für Forschung und Technologie, Federal Republic of Germany, Grant 083 0108.

0196-6774/82/020101-27$02.00/0
Copyright © 1982 by Academic Press, Inc. All rights of reproduction in any form reserved.


by Morrison and Brillhart [2] and recently the factoring of F_8 = 2^{2^8} + 1 by Brent. The theoretical progress mainly concerns a better understanding and a more detailed analysis of the known methods. Also, with the evolution of the theory of computational complexity there evolved an increasing interest in asymptotical runtimes of algorithms. We will continue in this direction, too.

In order to factor n, or equivalently to solve x^2 ≡ a mod n, Gauss (Artikel 327) makes extensive use of the theory of quadratic forms. The usefulness of quadratic residues mod n which are small or only have small prime factors has been known for a long time. Gauss (Artikel 328) gives a method to construct such residues w with w = O(√n) by means of quadratic forms. Legendre already used the continued fraction expansion of √n. The more recent factoring algorithms of Morrison and Brillhart [2], Shanks [3], and Miller [4] are all refinements and variations of these old ideas. This will become clear by a comparative study of these algorithms, including proper modifications and improvements.

From the theoretical point of view, Dixon [5] achieved a major step. He proposed a probabilistic factoring algorithm and gave a rigorous proof that this algorithm, for every composite number n, with probability 1/2 detects a proper factor of n within O(exp√(16 ln n ln ln n)) steps. Section 2 contains an outline of Dixon's analysis together with some improvements. In fact we decrease the constant √16 to √6. If in addition quadratic residues mod n are constructed via Legendre's continued fraction method then, under reasonable assumptions, we obtain the time bound O(exp√(3 ln n ln ln n)) for a tuned up version of the Morrison-Brillhart algorithm.

In Section 3 we analyze Miller's method of using the solutions of index equations. We point out that this is not an independent method but rather a modification of solving x^2 ≡ y^2 mod n by combining congruences mod n. Under reasonable assumptions we obtain a time bound O(exp√(4.5 ln n ln ln n)). However this algorithm might be the most efficient one if one likes to factor many numbers in a small region. The reason is that this algorithm uses lists of those numbers in [−u, u] and [n − u, n + u] which only have small prime factors. These lists can be uniformly used for the factorization of all numbers in [n − u, n + u].

In Section 4 we modify Shanks' [3] method of factoring n via the construction of ambiguous quadratic forms with discriminant −n. Our modification relates this algorithm to the previous ones and in particular to the Morrison-Brillhart algorithm. Under reasonable assumptions we obtain the time bound O(exp√(3 ln n ln ln n)). This latter algorithm, the Morrison-Brillhart algorithm and the Schroeppel algorithm (see Monier [6]) are the asymptotically fastest known factoring algorithms. A first rough analysis of the Schroeppel algorithm by Monier [6] seemed to favor this algorithm. However, after this paper went

into press a revised analysis of the Schroeppel algorithm has been done independently by Pomerance [26] and Sattler and Schnorr [27]. The result is that the Schroeppel algorithm is inferior to the Morrison and Brillhart algorithm both asymptotically as well as for small numbers.

All the above time bounds are based on some knowledge of the function ψ(n, u), "the number of integers ≤ n and free of prime factors > u." In particular, a lower bound on ψ(n, n^{1/r}) is needed for r proportional to √(ln n/ln ln n). As this paper went into press, ψ(n, n^{1/r}) ≥ n(ln n)^{−r} was the best proved bound and we use it throughout the paper. Meanwhile it has been proved, see Pomerance [26], that ψ(n, n^{1/r}) ≈ n r^{−r}, and this implies that the constant √6 in the exponent of our time bound reduces to √3. We should at least mention the important algorithms of Pollard [7] and of Schroeppel (see Monier [6]), which are not included in this comparative study. For more complete surveys on factoring algorithms we recommend Guy [8], The Art of Computer Programming, Vol. 2 by D. Knuth (in particular the 1980 edition), and the thesis of Monier [6].

2. A REFINED ANALYSIS OF DIXON'S PROBABILISTIC FACTORING ALGORITHM

So far the asymptotically fastest run time of a factoring algorithm has been proved by Dixon [5]. Given a composite number n, this algorithm finds a proper factor of n with probability 1/2 within O(exp√(16 ln n ln ln n)) steps. ln denotes the "logarithmus naturalis" with the Eulerian number e as base and exp is the inverse function to ln. Dixon mainly applies the method of "combining congruences" to generate solutions of x^2 ≡ y^2 mod n. In Sections 3 and 4 we will see that this technique can well be combined with factoring algorithms proposed by Miller [4] and Shanks [3]. We give an outline of Dixon's algorithm with an improved analysis. We decrease the constant √16 in Dixon's bound to √6. The improved theoretical time bound results from a tighter lower bound on the number of quadratic residues mod n which can be completely factored over small primes (Lemma 1) and a specific method for detecting small prime factors. Here we do not focus on designing the most practical algorithm but we like to prove a rigorous asymptotical time bound as small as possible.

Dixon's Algorithm

begin input n
stage 1  u := ⌈n^{1/2r}⌉
  comment the optimal choice of r ∈ ℕ will be made below.


  Form the list P of all primes ≤ u: P = {p_1, ..., p_{π(u)}}.
  if ∃ p_i ∈ P: p_i | n then print p_i stop
  B := ∅
stage 2
  Choose z ∈ [1, n − 1] at random and independently from previous choices of z.
  if gcd(z, n) ≠ 1 then print gcd(z, n) stop
  w := z^2 mod n with 0 ≤ w < n
test 1  if w factors completely over P, say w = ∏_{i ≤ π(u)} p_i^{a_i}, then insert (z, a) with a := (a_1, ..., a_{π(u)}) into B, else goto stage 2
stage 3  Find a nontrivial solution (f_a ∈ {0, 1} | (z, a) ∈ B) of

    ∑_{(z,a)∈B} f_a · a ≡ 0 mod 2    (1)

test 2  if there is no nontrivial solution then goto stage 2
  x := ∏_{(z,a)∈B} z^{f_a} mod n,  y := ∏_{i ≤ π(u)} p_i^{(∑_{(z,a)∈B} f_a a_i)/2} mod n
  comment [The construction implies x^2 ≡ y^2 mod n; in case x ≢ ±y mod n, gcd(x ± y, n) are proper factors of n.]
test 3  if x ≢ ±y mod n then print gcd(x ± y, n) stop
  Choose the first a ∈ B such that f_a = 1. B := B − {a}, goto stage 2
end
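The structure above can be illustrated by a small self-contained sketch (Python). Purely for clarity, a deterministic scan of z replaces Dixon's random sampling, and a brute-force subset search replaces Gaussian elimination mod 2; the base and the test modulus are illustrative choices, not from the paper:

```python
from math import gcd, isqrt
from itertools import combinations

def factor_over_base(w, primes):
    """Exponent vector of w over `primes`, or None if w is not smooth."""
    exps = []
    for p in primes:
        e = 0
        while w % p == 0:
            w //= p
            e += 1
        exps.append(e)
    return exps if w == 1 else None

def dixon(n, primes):
    """Find a proper factor of n by combining congruences z^2 = w (mod n)."""
    relations = []                       # pairs (z, exponent vector of z^2 mod n)
    for z in range(isqrt(n) + 1, n):     # deterministic scan instead of random z
        if gcd(z, n) != 1:
            return gcd(z, n)
        a = factor_over_base(z * z % n, primes)
        if a is None:
            continue
        relations.append((z, a))
        # brute-force stand-in for solving the linear system (1) mod 2
        for k in range(1, len(relations) + 1):
            for subset in combinations(relations, k):
                if any(sum(col) % 2 for col in zip(*(v for _, v in subset))):
                    continue
                x = 1
                for z_i, _ in subset:
                    x = x * z_i % n
                half = [sum(col) // 2 for col in zip(*(v for _, v in subset))]
                y = 1
                for p, e in zip(primes, half):
                    y = y * pow(p, e, n) % n
                if x not in (y, (n - y) % n):        # test 3
                    return gcd(x - y, n)
    return None

print(dixon(4087, [2, 3, 5, 7, 11, 13]))   # 4087 = 61 * 67 -> 61
```

Here the very first scanned value z = 64 already gives 64^2 ≡ 9 = 3^2 (mod 4087), so test 3 succeeds with x = 64, y = 3 and gcd(64 − 3, 4087) = 61.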

Obviously a proper factor of n has been found as soon as test 3 succeeds. In the following analysis of the algorithm we suppose that n is an odd number with prime factor decomposition

n = ∏_{i=1}^{d} q_i^{l_i},  l_i ≥ 1 and d ≥ 2.

Clearly the cases that n is even or a pure prime power can easily be handled in advance. The following facts are due to Dixon.

FACT 1. prob(x ≡ ±y mod n within test 3) = 2^{1−d}, and the corresponding events for distinct passes of test 3 are mutually independent.

Proof. Consider the last chosen z and w = z^2 mod n. We prove that there are exactly 2^d distinct z_i, i = 1, ..., 2^d, such that z_i^2 ≡ w mod n. Clearly ℤ_n*, the multiplicative group mod n, is a direct sum

ℤ_n* = ⊕_{i=1}^{d} ℤ*_{q_i^{l_i}}.


For each i there are exactly two distinct solutions t_i = u_i, v_i of t_i^2 ≡ w mod q_i^{l_i}. Then by the Chinese remainder theorem the z_i correspond in one-to-one manner to the 2^d elements in {u_1, v_1} × ··· × {u_d, v_d}. Now each of z_1, ..., z_{2^d} is equally likely to be chosen for z. The values of f_a and y do not depend on the choice of z ∈ {z_1, ..., z_{2^d}}; only x = ∏_{(z,a)} z^{f_a} depends on this choice. Observe that the value f_a corresponding to the final z must be 1, otherwise the algorithm would pass test 3 without choosing this final z. Therefore the 2^d choices for z yield 2^d distinct values for x and exactly two of them imply x ≡ ±y mod n. This evaluates the probability that "x ≡ ±y mod n during test 3" to 2^{1−d}. Since our analysis is completely based on the last chosen z, it is clear that the distinct events of "test 3 succeeding" are mutually independent. ∎

Let T(n) be the total time of the algorithm and let T_3(n) be the time till the first pass of test 3. We count arithmetical steps mod n as single steps. T(n), T_3(n) are random variables depending on the random choices of z in stage 2. Fact 1 immediately implies:

FACT 2.

E[T(n)] = (1 − 2^{1−d})^{−1} E[T_3(n)] ≤ 2 E[T_3(n)].

Here E[X] denotes the expectation of the random variable X. Let T_1(n) (T_2(n), resp.) be the time spent from any entering of stage 2 till the first pass of test 1 (test 2, resp.), without counting the steps used to solve the various linear systems of equations (1). Since a linear dependence of the a with (z, a) ∈ B must exist as soon as #B ≥ π(u) + 1 = O(u/ln u), it follows that there are at most π(u) + 1 passes of test 2 before the first pass of test 3. Hence

FACT 3.

E[T_3(n)] ≤ (π(u) + 1) E[T_2(n)] + O(π(u)^3).

Here O(π(u)^3) bounds the steps to solve all the linear systems (1) occurring in the various passes of stage 3. Indeed this task amounts to solving one system of linear equations with π(u) + 1 unknowns. In order to analyze E[T_2(n)] we define

Q := {set of quadratic residues mod n} ∩ ℤ_n*
T(n, u) := {w ∈ [1, n]: all prime factors of w are ≤ u}
M(n, u) := {z ∈ [1, n]: z^2 mod n ∈ Q ∩ T(n, u)}.

Let φ(n) = #ℤ_n* be the Eulerian function.

FACT 4.

E[T_2(n)] ≤ O(E[T_1(n)] φ(n)/#M(n, u)).

Proof. We clearly have prob(z^2 mod n ∈ Q ∩ T(n, u)) = #M(n, u)/φ(n) for z chosen at random. Hence test 1 will on the average be passed about φ(n)/#M(n, u) times between two passes of test 2. ∎


T_1(n) depends on how the factorization of w over the prime base P is done. A crude way is as follows:

w* := w
for all p ∈ P do [while p | w* do w* := w*/p]

This yields

FACT 5. E[T_1(n)] ≤ π(u) + log n.

Here log n (= logarithm of n to base 2) bounds the number of multiple prime factors of w counted according to their multiplicity. So far Facts 1-5 yield, under the assumption log n ≤ π(u):

E[T(n)] = O(π(u)^2 φ(n)/#M(n, u) + π(u)^3),    (2)

and it remains to prove a sharp lower bound on #M(n, u). This will be our main improvement over Dixon's analysis.

Let κ: ℤ_n* → {±1}^d be the quadratic character, defined as follows. For (a_1, ..., a_d) ∈ ⊕_{i=1}^{d} ℤ*_{q_i^{l_i}}, let κ(a_1, ..., a_d) = (e_1, ..., e_d) with e_i = 1 (−1, resp.) if a_i is a quadratic residue (nonresidue) mod q_i^{l_i}. It is well known that κ: ℤ_n* → {±1}^d is a group homomorphism and a ∈ Q iff κ(a) is the group unit (1, 1, ..., 1) ∈ {±1}^d.

LEMMA 1. #M(n, u) ≥ π(u)^{2r}/(2r)! for all natural numbers r with u^{2r} ≤ n, provided all prime factors of n are > u.

Proof. Let T_r(m, u) := {w ∈ [1, m] | w = ∏_{p_i ≤ u} p_i^{a_i} ∧ ∑_i a_i = r}. Since all prime factors of n are > u we have T_r(√n, u) ⊂ ℤ_n*. We partition T_r(√n, u) into classes K_i, i = 1, ..., 2^d, according to the 2^d possible values of κ. Then

∪_{i=1}^{2^d} K_i · K_i ⊂ T_{2r}(n, u) ∩ Q.    (3)

Since for each w ∈ T_{2r}(n, u) ∩ Q, #{z ∈ ℤ_n* | z^2 ≡ w mod n} = 2^d, it follows that

#M(n, u) ≥ 2^d #(T_{2r}(n, u) ∩ Q) ≥ 2^d ∑_{i=1}^{2^d} (#K_i)^2 (r!)^2/(2r)!.    (4)

Here (#K_i)^2 counts the number of ordered pairs (w_1, w_2) ∈ K_i × K_i, and (2r)!/(r!)^2 bounds, for each w ∈ Q, the number of distinct pairs (w_1, w_2) ∈ ∪_i K_i × K_i that yield the product w_1 w_2 = w. The Cauchy-Schwarz inequality implies

∑_{i=1}^{2^d} (#K_i)^2 ≥ 2^{−d} (∑_{i=1}^{2^d} #K_i)^2 = 2^{−d} #T_r(√n, u)^2

(use ∑_i u_i^2 · ∑_i v_i^2 ≥ (∑_i u_i v_i)^2 with u_i = #K_i, v_i = 1). Obviously we have

#T_r(√n, u) = C(π(u) + r − 1, r) ≥ π(u)^r/r!,

since the binomial coefficient C(π(u) + r − 1, r) is the number of possibilities of choosing, with repetitions, r elements out of π(u). Finally we obtain from (3), (4):

#M(n, u) ≥ #T_r(√n, u)^2 (r!)^2/(2r)! ≥ π(u)^{2r}/(2r)!.  ∎
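The count #T_r(√n, u) = C(π(u) + r − 1, r) rests on unique factorization: distinct multisets of r primes always give distinct products. A quick empirical check (Python, with small hypothetical parameters u = 10, r = 3):

```python
from math import comb
from itertools import combinations_with_replacement

primes = [2, 3, 5, 7]          # the primes <= u for u = 10, so pi(u) = 4
r = 3
# every multiset of r primes yields a distinct product (unique factorization)
products = {p * q * s for p, q, s in combinations_with_replacement(primes, r)}
assert len(products) == comb(len(primes) + r - 1, r)
print(len(products))           # 20 = C(4 + 3 - 1, 3)
```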

Putting (2) and Lemma 1 together we obtain

E[T(n)] = O(π(u)^2 (2r)! n/π(u)^{2r} + π(u)^3),

provided log n ≤ π(u) and u^{2r} ≤ n. Using u = n^{1/2r}, u/ln u ≤ π(u) ≤ 2u/ln u (which follows from the prime number theorem) and (2r)! = O(√(4πr)(2r)^{2r} e^{−2r}) (which follows from Stirling's formula) we obtain

E[T(n)] = O((4r)^2 √(4πr) e^{−2r} n^{1/r}(ln n)^{2r} + n^{3/2r}).

We choose r ∈ ℕ so as to minimize n^{1/r}(ln n)^{2r}. This implies

r = ⌊√(ln n/(2 ln ln n)) + ε⌋  with |ε| ≤ 1/2

and

n^{1/r}(ln n)^{2r} = O(ln n · exp√(8 ln n ln ln n)).

This finally yields

E[T(n)] = O(√(4πr)(4r)^2 e^{−2r} ln n · exp√(8 ln n ln ln n)) = O(exp√(8 ln n ln ln n)).    (5)


The asymptotic behavior of this bound is quite attractive for excessively large n: n can be factored within n^{c(n)} steps with c(n) → 0 for n → ∞. However, for reasonably sized values the exponent c(n) is not much smaller than 0.5 and the algorithm therefore hardly beats straightforward factoring algorithms. For instance in the range n = e^{200} we choose r = 4 and (5) yields E[T(n)] ≤ e^{84} = n^{0.42}. Can the above analysis of Dixon's algorithm still be refined, leading to a constant in the exponent which is smaller than √8? We discuss two main points: (a) the tightness of our lower bound on #M(n, u) in Lemma 1, (b) the use of more sophisticated factoring algorithms for factoring w over the prime base P in stage 2.

(a) By the proof of Lemma 1 one obtains

#M(n, n^{1/2r}) ≥ ψ(√n, n^{1/2r})^2 (r!)^2/(2r)!,

where ψ(n, u) := the number of integers ≤ n and free of prime factors > u. The asymptotical behavior of ψ(n, u) has been analyzed for a long time, see de Bruijn [9, 10], Knuth and Trabb Pardo [11]. By de Bruijn [10], formula (1.4):

ψ(n, n^{1/r}) ≈ n r^{−r}  provided ln(n^{1/r}) > (ln n)^{2/3}.

Unfortunately we have for r ~ √(ln n/ln ln n):

ln(n^{1/r}) ~ √(ln n ln ln n) ≪ (ln n)^{2/3}.

In fact

ψ(n, n^{1/r}) ≥ n r^{−r}  for r ~ √(ln n/ln ln n)

cannot be proved by de Bruijn's method. The de Bruijn estimation for ψ(n, u) is inevitably charged with the error term of the prime number theorem:

π(n) = ∫_2^n dt/ln t + O(n exp(−c√(ln n))).

However for r ~ √(ln n/ln ln n),

n r^{−r} ≈ n exp(−(1/2)√(ln n ln ln n))

is smaller than this error term. We even know that ψ(n, n^{1/r}) < n r^{−r} for large r, e.g., r = ln n/ln ln n, i.e., n^{1/r} = ln n. By a result of Erdős, see de Bruijn [10, formula (1.7)], we have


for this r

ψ(n, ln n) = 2^{(1+o(1)) π(ln n)},

which is smaller than the lower bound in question.

(b) Instead of using within stage 2 the straightforward factoring algorithm that leads to Fact 5 we could use one of Pollard's algorithms that finds factors ≤ u of n in about O(√u) steps. By computational experience, Pollard's ρ-method [7] detects factors ≤ u of n in O(√u ln u) arithmetical steps mod n, see Guy [8] and Knuth [11]. This method is highly practical although no rigorous theoretical time bound is known so far. Recently Brent [21] succeeded in factoring F_8 = 2^{2^8} + 1 by a variant of this method. Pollard [12] also proposed a second method with a rigorous time bound. He computes for sufficiently many small a ∈ ℤ_n* the values gcd(∏_{ν ≤ √u}(a^{μ√u} − a^ν), n) for μ = 1, 2, ..., √u. For fixed a these gcd-values can be computed by the fast Fourier transform within O(√u (ln u)^2 ln ln u) steps. In total, Pollard obtains a worst case time bound O(u^{0.5+ε}) for arbitrarily small ε > 0, but the constant factor, expressed by O, increases in an unknown way as ε decreases. We give a similar but slightly stronger result, see also Strassen [13].
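Pollard's ρ-method mentioned above admits a very short sketch (Python, with Floyd cycle finding on the iteration x ↦ x^2 + c mod n; as the text notes, its speed is an empirical observation, not a proven bound):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho: iterate x -> x^2 + c (mod n) until a cycle mod some
    prime factor p is detected via gcd; expected roughly sqrt(p) steps."""
    x = y = 2
    while True:
        x = (x * x + c) % n
        y = (y * y + c) % n
        y = (y * y + c) % n          # y moves at double speed (Floyd)
        d = gcd(abs(x - y), n)
        if d == n:
            return None              # failure for this c; retry with another c
        if d > 1:
            return d

print(pollard_rho(1037))   # 1037 = 17 * 61; the cycle mod 17 is found first -> 17
```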

LEMMA 2. The smallest prime factor ≤ u of n can be found in O(√u (ln u)^2 ln ln u) arithmetical steps mod n, provided ln n = O((ln u)^2).

Proof. Without loss of generality we assume that √u is an integer. We also assume that n has no prime factors ≤ √u, since these prime factors can be removed in advance. Evaluate

f(x) = ∏_{i=1}^{√u} (x − i) mod n  at x = √u, 2√u, ..., u.

Then compute

t := min{t ≤ √u : gcd(f(t√u), n) > 1},
p := min{t√u − i : (t√u − i) | n, 1 ≤ i ≤ √u}.

Clearly p is the smallest factor ≤ u of n. Next we bound the number of steps of this procedure. We first claim that evaluating a polynomial f ∈ ℤ_n[x], deg f ≤ d, at d points can be done in O(d(ln d) ln ln d) arithmetical steps mod n. It is known that evaluating a polynomial f, deg f ≤ d, at d points can be reduced to O(ln d) arithmetical operations on polynomials of degree ≤ d, see e.g. Borodin and Munro [20, pp. 99ff]. Multiplication of two polynomials in ℤ_n[x] with degree ≤ d can be done in O(d ln d ln ln d) arithmetical steps mod n via the fast Fourier transform (FFT). Since 2 is a unit in ℤ_n such an


FFT algorithm can be derived from the Schönhage-Strassen multiplication scheme [25] in which we replace the base 2 by the indeterminate x. Then the division of polynomials of degree ≤ d can also be done in O(d ln d ln ln d) steps by using the algorithm of Sieveking [19]. It is important that divisions by nonunits of ℤ_n will not occur since, following Borodin and Munro, pp. 99ff, we only divide by monic polynomials, namely polynomials of the type ∏_j(x − x_j). This altogether proves the above claim. The final computation of t and p above can be done in O(√u ln n) steps. In total we have a step bound O(√u (ln u)^2 ln ln u) provided ln n = O((ln u)^2). ∎

The step bound in Lemma 2 even bounds the steps for finding all prime factors ≤ u of n. Suppose we are given the smallest prime p ≤ u of n and the values g_t = gcd(f(t√u), n) for t = 1, ..., √u. For simplicity assume that n ≢ 0 mod p^2. Then we compute

t := min{t : g_t ∉ {1, p}},
q := min{t√u − i : (t√u − i) | n, 1 ≤ i ≤ √u, t√u − i ≢ 0 mod p};

q is the second smallest prime ≤ u of n. The computation of q requires O(√u) additional steps. We leave it as an easy exercise to the reader to continue this procedure in a way that each further prime factor ≤ u of n can be detected within O(√u) additional steps. Using the above procedure in stage 2 of Dixon's algorithm for factoring w over primes ≤ u clearly improves Fact 5 to
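The blocked evaluate-and-gcd idea behind Lemma 2 can be sketched as follows (Python). This sketch computes each block product naively, so it spends O(u) multiplications instead of the lemma's O(√u (ln u)^2 ln ln u), which requires FFT-based multipoint evaluation; the structure — one gcd per block of about √u consecutive candidates — is the same:

```python
from math import gcd, isqrt

def smallest_factor_upto(n, u):
    """Smallest prime factor <= u of n, or None if there is none."""
    s = isqrt(u) + 1
    base = 2
    while base <= u:
        # product of one block of s consecutive candidates, reduced mod n
        prod = 1
        for i in range(s):
            prod = prod * (base + i) % n
        if gcd(prod, n) > 1:
            # the block hits a factor (or a multiple of one); scan it directly
            for cand in range(base, base + s):
                if n % cand == 0:
                    return cand
        base += s
    return None

print(smallest_factor_upto(1037, 100))      # 17
print(smallest_factor_upto(101 * 103, 50))  # None: both prime factors exceed 50
```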

FACT 6. E[T_1(n)] = O(√u (ln u)^2 ln ln u).

Now from Facts 1-4, 6, Lemma 1, u/ln u ≤ π(u) ≤ 2u/ln u and Stirling's formula, we obtain for u = n^{1/2r}:

E[T(n)] = O(π(u) √u (ln u)^2 ln ln(u) · n(2r)!/π(u)^{2r} + π(u)^3)
        = O(n^{3/4r}(ln u) ln ln(u) √(4πr) e^{−2r}(ln n)^{2r} + n^{3/2r}).    (6)

We choose r ∈ ℕ so as to minimize n^{3/4r}(ln n)^{2r} and obtain

r = ⌊√(3 ln n/(8 ln ln n)) + ε⌋  with |ε| < 1

and

n^{3/4r}(ln n)^{2r} ≤ ln n · exp√(6 ln n ln ln n),
n^{3/2r} = O(exp√(6 ln n ln ln n)).

This finally yields

E[T(n)] = O(e^{−2r}(ln n)^2 ln ln(u) · exp√(6 ln n ln ln n)) = O(exp√(6 ln n ln ln n)).    (7)

Thus we succeeded in decreasing the constant in the exponential term at the expense of increasing the low order factor. In the range n = e^{200} we have r = 6 and (7) yields E[T(n)] ≈ e^{81} = n^{0.41}, which is only marginally better than the conclusion from (5).

THEOREM 1. For each composite n let E[T(n)] be the expected time that the above algorithm takes to find a proper factor of n. Then for all n:

(1) E[T(n)] = O(exp√(6 ln n ln ln n)).
(2) The event that the algorithm does not find a proper factor of n within k E[T(n)] steps has probability ≤ 2^{−k}.

Statement (2) is an immediate consequence of the fact that the distinct events of "test 3 (test 1, resp.) failing" are mutually independent.

A more practical factoring algorithm is obtained if the quadratic residues w in stage 2 are produced via the continued fraction method [2], which implies w = O(√n), and if Pollard's ρ-method is used for detecting small prime factors of w. Under the assumption

(A0) the continued fraction expansion of √n generates quadratic residues mod n which are uniformly distributed in [1, O(√n)],

the time bound (6) transforms into a time bound

n^{3/4r} π(u) e^{−r}(ln n)^r + n^{3/2r},  with r even,

for the Morrison-Brillhart method. By choosing r even, r ≈ √(3 ln n/(4 ln ln n)), we obtain

n^{3/4r}(ln n)^r = O((ln n)^2 exp√(3 ln n ln ln n)),
n^{3/2r} = O(exp√(3 ln n ln ln n)).    (8)


By (8) this implies

COROLLARY 1. Under the assumption (A0) the Morrison-Brillhart method runs in average time O(exp√(3 ln n ln ln n)).

In particular (8) with r = 6 implies E[T(n)] ≈ e^{56} ≈ n^{0.28} for n ≈ e^{200}. However, by experience a well tuned version of the Morrison-Brillhart method behaves somewhat better for reasonably sized n. Wunderlich [14] obtained average run times 322 · n^{0.152} ≈ n^{0.21} for n ≈ 10^{40}. In fact there are several points where our worst case analysis is too pessimistic. The lower bound on #M(n, u) in Lemma 1 is somewhat too small. Under the assumption ψ(n, n^{1/r}) ≥ n r^{−r}, Theorem 1(1) transforms into E[T(n)] = O(exp√(3 ln n ln ln n)) and the bound of Corollary 1 transforms into O(exp√(1.5 ln n ln ln n)). Moreover, it is known that the quadratic residues generated by the continued fraction method can only have prime factors p with (n/p) = 1. Since only about half of the primes appear as factors of the w's, this has the effect of doubling the size π(u) of the prime base. We estimate that this increases the ratio of w's that are completely factorizable over the prime base by 2^{2r} and therefore causes a speed up factor of about 2^{12} = n^{0.04} for n = e^{200}. Assuming (A0) is only a first imperfect step towards an analysis of the Morrison-Brillhart method. Indeed the continued fraction expansion of √n behaves too uncivilized. It should be important for a more rigorous analysis to have a lower bound #{p ≤ u: p prime, (n/p) = 1} ≥ u/(c ln u) with c > 0 fixed. This would ensure a sufficiently large base of small primes for this method. It is also unclear whether this method finds each factor of n equally likely or whether some factors are harder to find than others. A similar situation will occur in the discussion of an analogous algorithm in Section 4.
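The continued fraction expansion of √n and the small quadratic residues it generates can be sketched as follows (Python; the recurrence is the standard one for √n, with A_k the convergent numerators and A_k^2 ≡ (−1)^{k+1} d_{k+1} (mod n), where the d's stay below 2√n):

```python
from math import gcd, isqrt

def cf_residues(n, count):
    """First `count` pairs (A_k mod n, residue) with A_k^2 = residue (mod n)."""
    a0 = isqrt(n)
    assert a0 * a0 != n, "n must not be a perfect square"
    m, d, a = 0, 1, a0
    A_prev, A = 1, a0              # convergent numerators A_{-1}, A_0
    out = []
    for k in range(count):
        m = d * a - m              # standard recurrence for the CF of sqrt(n)
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append((A, (-1) ** (k + 1) * d))   # A_k^2 = (-1)^(k+1) d_{k+1} (mod n)
        A_prev, A = A, (a * A + A_prev) % n
    return out

pairs = cf_residues(1037, 3)
print(pairs)                 # [(32, -13), (129, 49), (161, -4)]
# 49 = 7^2 is already a square, so 129^2 = 7^2 (mod 1037), and
print(gcd(129 - 7, 1037))    # 61, a proper factor
```

Note that the first residue, 32^2 ≡ −13 (mod 1037), is exactly the congruence 2^{10} ≡ −13 used in the example of Section 3.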

3. AN ANALYSIS AND REVISION OF MILLER'S FACTORING METHOD

Miller [4] proposed a factoring method based on the computation of indices. We shall develop a slightly improved version of Miller's method which turns out to be quite similar to the previously analyzed Dixon algorithm. Under reasonable heuristic assumptions the runtime of our version of Miller's algorithm will be O(exp√(4.5 ln n ln ln n)). In particular, Miller's method does not yield an independent factoring algorithm but merely a specific modification of the method of "combining congruences mod n". However, as we shall point out, this modification has some decisive advantages in the case that one likes to factor many numbers in the same range. So far all known factoring algorithms collect data which are only useful for factoring one specific number. For instance the congruences


collected in Dixon's algorithm cannot be used for different n's. This observation also applies to the factoring algorithms of Morrison-Brillhart [2], Schroeppel (unpublished, see Monier [6]), Shanks [3, 15], and Pollard [7, 12]. In our version of Miller's method we will collect products of small prime numbers which are near to the number n to be factored. These products of small primes can be uniformly used for factoring all numbers near to n.

For a ∈ ℤ_n*, ord(a, n) := min{ν | a^ν ≡ 1 mod n} is the order of a mod n. λ(n) := max{ord(a, n) | a ∈ ℤ_n*} is the maximal order of elements of ℤ_n*. Let h_1, h_2, ..., h_t be a system of independent generators of ℤ_n*; then for every a ∈ ℤ_n* there is a representation

a = ∏_{i=1}^{t} h_i^{m_i} mod n,

where m_i mod ord(h_i, n) is uniquely determined. Then ind(a) := (m_1, ..., m_t) is called a (multi-)index of a. Miller first tries to determine ord(a, n) for some small primes a as follows. Every solution x of

x · ind(a) ≡ 0 (mod λ(n))    (1)

is a multiple of ord(a, n). Linear index equations mod λ(n) are obtained from representations of n as a sum or a difference of products of small primes. These equations are solved by Gaussian elimination in order to obtain a solution x of (1). We have to factor x in order to determine ord(a, n). Let ord(a, n) = ∏_j u_j^{ν_j} with u_j prime; then eventually gcd(a^{ord(a,n)/u_j} − 1, n) will be a proper factor of n. As an example, let n = 1037.

stage 1. Search for many distinct representations of n or multiples of n as a sum or difference of two products of small primes. For instance we have

  1037 = 2^8·5 − 3^5      i.e.  2^8·5 ≡ 3^5 mod n
  1037 = 2^4·5·13 − 3           2^4·5·13 ≡ 3 mod n
* 1037 = 2·3·5^2·7 − 13         2·3·5^2·7 ≡ 13 mod n
* 1037 = 2^{10} + 13            2^{10} ≡ −13 mod n
  1037 = 2^2·3^5 + 5·13         2^2·3^5 ≡ −5·13 mod n
* 1037 = 3·7^3 + 2^3            3·7^3 ≡ −2^3 mod n

It follows that there exist multi-indices

z, a, b, c, d, e for −1, 2, 3, 5, 7, 13


such that

8a + c ≡ 5b mod λ(n)
4a + c + e ≡ b mod λ(n)
a + b + 2c + d ≡ e mod λ(n)
10a ≡ z + e mod λ(n)
2a + 5b ≡ z + c + e mod λ(n)
b + 3d ≡ z + 3a mod λ(n)

stage 2. Gaussian elimination yields

120a ≡ 0 mod λ(n).

Hence

2^{120} ≡ 1 mod n,

which means ord(2, n) | 120. The prime factors of 120 are 2, 3, 5 and since 2^{60}, 2^{40}, 2^{24} ≢ 1 mod n we know ord(2, n) = 120.

stage 3. Proper factors of n are found as

gcd(2^{60} − 1, n) = 61
gcd(2^{40} − 1, n) = 17
gcd(2^{24} − 1, n) = 17.

The main critical points of this algorithm

are the following:

stage 1. How can we generate sufficiently many congruences such that elimination works in stage 2?
stage 2. Suppose a multiple x of ord(a, n) has been found; what is the chance to find sufficiently many prime factors of x?
stage 3. will fail to find a proper factor of n = ∏_{i=1}^{d} p_i^{l_i} if the orders ord(a, p_i^{l_i}), i = 1, ..., d, all coincide.

The following modification circumvents the traps of stages 2 and 3. In our example for n = 1037 we obtain by multiplying the marked congruences:

2^{11}·3^2·5^2·7^4 ≡ 2^3·13^2 mod n.

Since no prime of our base divides n, this yields

2^8·3^2·5^2·7^4 ≡ 13^2 mod n.
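The congruences of this worked example for n = 1037 can all be checked mechanically (Python):

```python
from math import gcd

n = 1037
# the three marked congruences
assert (2 * 3 * 5**2 * 7 - 13) % n == 0      # 2*3*5^2*7 = 13 (mod n)
assert (2**10 + 13) % n == 0                 # 2^10 = -13 (mod n)
assert (3 * 7**3 + 2**3) % n == 0            # 3*7^3 = -2^3 (mod n)
# their product is a congruence between squares
assert (2**8 * 3**2 * 5**2 * 7**4 - 13**2) % n == 0
# the order computation of stage 2 and the factors of stage 3
assert pow(2, 120, n) == 1
assert gcd(pow(2, 60, n) - 1, n) == 61
assert gcd(pow(2, 40, n) - 1, n) == 17
assert gcd(pow(2, 24, n) - 1, n) == 17
print("all congruences check out")
```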


From 2^4·3·5·7^2 ≡ 353 mod n we obtain

353^2 ≡ 13^2 mod n,

which gives us the proper factors

gcd(353 − 13, n) = 17
gcd(353 + 13, n) = 61.

A formal description of our method is as follows.

begin input n

u*= n’/r u-z rid/r cornmen; the optimal choice of r and d will be made below Form the list P = {po, p , , . . . ,P,~,,} of all prunes I u, including p,, = - 1 if 3pi E P, i L 1: pi 1n then print pi stop stage 1 Compute the lists

1I

lw)Iu,w=nip;l

L:=

(w9g)

L:=

g=

(ai,Oli+u)) IwIIu,n+w=n,p~

(n + w, b) i

I

B:= {(cQ)pw: stage 2 Find a nontrivial

1

b=

(b,JOlilm(u))

1

( w, LZ) E L A(n + w, b) E L1) solution (&, 4j I (a, !I) E B) of

x J,,b)(~, EU411. (D,b)EB - - h)=0_mod2J&,t test

(2)

2 if no solution exists then increase 24got0 stage 1 x: = i~~“,P(E”.?‘,af(‘.?,.l)/* y:

=

fl p!2,4.1,EBfco.p,b,)/* iDr( 0)

comment the construction implies x2 = y*mod n. test 3 if x # ky mod n then print gcd(x r+ y, n) stop

Choose the first (a, b) E B such that f&, 6j = 1 B: = B - {(g, b)} goto stage 2. end
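A compact sketch of this method (Python): a brute-force subset search stands in for solving the linear system (2), and the parameters are chosen by hand for the toy modulus n = 1037 rather than as n^{1/r}, n^{d/r}:

```python
from math import gcd
from itertools import combinations

def exponents(w, primes):
    """Exponent vector of w over the base [-1] + primes, or None if not smooth."""
    exps = [1 if w < 0 else 0]
    w = abs(w)
    for p in primes:
        e = 0
        while w % p == 0:
            w //= p
            e += 1
        exps.append(e)
    return exps if w == 1 else None

def factor_with_lists(n, primes, radius):
    base = [-1] + primes
    # stage 1: keep w when both w and n + w factor completely over the base
    B = []
    for w in range(-radius, radius + 1):
        if w == 0:
            continue
        a, b = exponents(w, primes), exponents(n + w, primes)
        if a is not None and b is not None:
            B.append((a, b))            # the congruence: prod p^a = prod p^b (mod n)
    # stage 2: brute-force stand-in for the linear system (2) mod 2
    for k in range(1, len(B) + 1):
        for subset in combinations(B, k):
            total = [sum(a[i] + b[i] for a, b in subset) for i in range(len(base))]
            if any(t % 2 for t in total):
                continue
            x = y = 1
            for p, t in zip(base, total):
                x = x * pow(p, t // 2, n) % n
            for i, p in enumerate(base):
                y = y * pow(p, sum(b[i] for _, b in subset), n) % n
            if x not in (y, (n - y) % n):           # test 3
                return gcd(x - y, n)
    return None

print(factor_with_lists(1037, [2, 3, 5, 7, 13], 70))   # 61
```

For n = 1037 and radius 70 the lists contain exactly the smooth pairs used in the example above; the dependency found combines w = −13, −8 and 13 and yields the factor 61.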

This algorithm is very similar to the one of Dixon, and on the other hand it is an improved version of Miller's method. Clearly the linear


system (2) has a nontrivial solution as soon as #B > 2(π(u) + 1). Compare this with the use of the congruences in Miller's method: if the vectors in B are linearly independent, then Gaussian elimination in Miller's method works as soon as #B > π(u) + 1. However, linearly dependent vectors in B are useless in Miller's method and must be discarded. It is not easy to analyze the ratio of linear dependencies occurring in B. These linear dependencies will speed up our algorithm while they slow down Miller's method. Even if Gaussian elimination succeeds in Miller's method there are still further traps in stages 2 and 3 of this method; in particular the required factorization of x and ord(a, n) is a serious obstacle. On the other hand the only remaining trap in our algorithm after solving the linear system is the test "x ≢ ±y mod n?" Here the argument of Fact 1, Section 2 indicates that this test fails at a frequency 2^{1−d} when n has d distinct prime factors. However we are no longer able to provide a rigorous proof. The time analysis of our algorithm will be based on the following assumptions.

(A1) The ratio of the number of times of "test 3 failing" to "test 3 succeeding" is uniformly bounded.

(A2) The numbers which are completely factorizable over P are independently distributed in [−ū, ū] and [n − ū, n + ū]. These numbers have about the same frequency in [n − ū, n + ū] as in [0, n] for 0 < ū ≪ n.

In particular (A2) implies

#B ≥ ψ(n^{d/r}, n^{1/r}) · ψ(n, n^{1/r})/n ≥ n^{d/r}(ln n)^{−r−d}.

This follows from

ψ(n, n^{1/r}) := #{w ∈ [1, n]: all prime factors of w are ≤ n^{1/r}} ≥ C(π(n^{1/r}) + r − 1, r) ≥ n(ln n)^{−r}.

Let T(n) be the time of our algorithm. Then (A1), (A2) imply

FACT 7. T(n) = O(n^{d/r} ln n + n^{3/r}) provided n^{d/r}(ln n)^{−r−d} ≥ 2n^{1/r}.

Proof. According to (A1) and (A2), the relation n^{d/r}(ln n)^{−r−d} ≥ 2n^{1/r} implies #B > 2(π(u) + 1) and therefore implies the solvability of the linear system (2). O(n^{d/r} ln n) bounds the steps to generate L, L̄, and B, if we compute L (and similarly L̄) as follows. The prime factors ≤ n^{1/r} of w are collected in L_w as follows:

for all w with |w| ≤ n^{d/r} do L_w := ∅
for all p ∈ P and all v with |v| ≤ n^{d/r}/p do [insert p into L_{vp}]


for every w and every p_i ∈ L_w do [a_i(w) := max{ν : p_i^ν | w}]
L := {(w, (a_i(w) | i ≤ π(u))) : w = ∏_{i ≤ π(u)} p_i^{a_i(w)}}

O(n^{3/r}) bounds the number of steps to solve the linear system (2). ∎

In order to minimize our time bound we choose d, r such that n^{d/r} = 2(ln n)^{r+d} n^{1/r}, which yields r ≈ √((d − 1) ln n/ln ln n) provided d ≪ r. This yields for d = 3:

T(n) = O(exp√(4.5 ln n ln ln n)).

This means that our algorithm is asymptotically superior to Dixon's algorithm, but inferior to the Brillhart-Morrison method. So far we have proved:

THEOREM 2. [Assume (A1), (A2).] The above algorithm has time bound O(exp√(4.5 ln n ln ln n)).

One interesting feature of the above algorithm is that the main work in stage 1, namely the construction of the lists L, L̄, is almost independent of n. These lists can be used uniformly for the factorization of all numbers in [n − ū, n + ū], ū = n^{d/r}. In particular, if someone has factored n he has already collected the data to easily factor each number near to n. Considering the problem of factoring many numbers in [n − ū, n + ū] we will assume that the lists L, L̄ are built up once for ever and that they are sorted with respect to the first components of the elements (w, a) and (n + w, b). Under this assumption we will now bound the remaining number of steps. Given L and L̄ we can form a sufficiently large subset B̄ of B as follows:

B̄ := ∅
while #B̄ ≤ 2(π(u) + 1) do
begin
  choose (n + w, b) ∈ L̄ at random
  eliminate (n + w, b) from L̄
  if (w, a) ∈ L for some a then [insert (a, b) into B̄]
end

It follows from (A2) that this will take

O(π(u)(ln n)^{d+1}) = O(n^{1/r}(ln n)^{d+1})


steps. This yields a total time bound of

T(n) = O(n^{1/r}(ln n)^2 + n^{3/r})

for all r, d with n^{d/r}(ln n)^{-r-d} ≥ 2n^{1/r}. We choose r, d such that n^{d/r}(ln n)^{-r-d} = 2n^{1/r}, which yields r ≈ √((d - 1) ln n / ln ln n) provided d ≪ r. Then minimizing the time bound with respect to d yields d of order (ln n / ln ln n)^{1/3}, and the corresponding time bound is

T(n) = O(exp(2(ln n)^{1/3}(ln ln n)^{2/3})).

Thus we have proved:

THEOREM 3. [Assume (A1), (A2).] Given L, L̄, the time bound of the algorithm is

T(n) = O(exp(2(ln n)^{1/3}(ln ln n)^{2/3})).

This theorem can be interpreted as follows. Suppose we like to factor all numbers in [n - u, n + u], u = n^{d/r}/2, and let the cost of preprocessing the lists L, L̄ be uniformly distributed over the numbers in [n - u, n + u]. Then the factorization of each number in [n - u, n + u] accounts for O(exp[2(ln n)^{1/3}(ln ln n)^{2/3}]) steps. We observe that the improvement by preprocessing the lists L and L̄ can even be strengthened if we also preprocess, for small k, the lists of all numbers in [kn - u, kn + u] which are completely factorizable over P.
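The sieving behind the construction of the list L can be sketched in code. The following Python fragment (an illustration only; the function name and the flat data layout are ours, not the paper's) collects, for each number in [1, u], its exponent vector over a base of small primes and keeps exactly those numbers that factor completely over the base:

```python
def smooth_lists(u, primes):
    """Exponent vectors over `primes` of all w in [1, u] that factor
    completely over the base, collected by sieving (a sketch of the list L)."""
    rest = list(range(u + 1))               # unfactored part of each w
    expo = [[0] * len(primes) for _ in range(u + 1)]
    for i, p in enumerate(primes):
        pk = p
        while pk <= u:                      # sieve with p, p^2, p^3, ...
            for w in range(pk, u + 1, pk):
                rest[w] //= p
                expo[w][i] += 1
            pk *= p
    return {w: expo[w] for w in range(1, u + 1) if rest[w] == 1}

print(sorted(smooth_lists(20, [2, 3, 5])))  # the {2,3,5}-smooth numbers up to 20
```

Every w is touched once per prime power dividing it, which mirrors the O(n^{d/r} ln n) bound for building the lists; the list L̄ for [n - u, n + u] is built analogously.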

4. IMPROVEMENTS ON A METHOD OF SHANKS

Shanks [3] proposed a factoring method which starts by computing the group of equivalence classes of primitive quadratic forms with discriminant -n; in particular he computes the order h(-n) of this group. Then he factors n by constructing a nontrivial ambiguous class. Under the implicit


assumption that the entire group of classes is generated by small forms, and by neglecting log n factors, Shanks proves a time bound of about O(n^{1/4}). This time bound can be improved to O(n^{1/5}) under the assumption of the generalized Riemann hypothesis. Under this hypothesis the class number formula converges as (see Lenstra [16] for references):

(√n / π) Π_{p ≤ m, p prime} (1 - (-n/p) p^{-1})^{-1} = h(-n)(1 + O(ln(nm)/√m)).

This speeds up the evaluation of h(-n). We propose a way to construct ambiguous classes without evaluating h(-n) at all. We exploit the fact that ambiguous forms can be constructed in much the same way as we generate solutions of x² ≡ y² mod n by the method of combining congruences. Under reasonable assumptions this yields an asymptotical time bound O(exp √(3 ln n ln ln n)).

We summarize some basic facts on binary quadratic forms. We find it most convenient to follow the original presentation of Gauss (1801, 1889), which slightly differs from that of Shanks [3]. The form ax² + 2bxy + cy² with a, b, c ∈ ℤ will be described by the triple (a, b, c). Two forms (a, b, c) and (ā, b̄, c̄) are equivalent if there exists a linear transformation with integer coefficients and determinant 1 transforming the one form into the other, i.e., Tᵗ(a, b; b, c)T = (ā, b̄; b̄, c̄) for some integer matrix T with det T = 1. Equivalent forms have the same discriminant D := b² - ac. Gauss called b² - ac the determinant of the form (a, b, c); in modern language, -D = ac - b² is the determinant of the form ax² + 2bxy + cy² = (x, y)(a, b; b, c)(x, y)ᵗ. Our notion of the discriminant differs from the standard notion by the factor 4: when forms ax² + Bxy + cy² with B ∈ ℤ are considered, B² - 4ac is called the discriminant; in our case B = 2b, and we split off the square factor 4 from the discriminant. A form (a, b, c) is (properly) primitive if gcd(a, 2b, c) = 1. According to Gauss, the nonprimitive forms can all be derived from primitive ones; therefore it is most important to understand the structure of the primitive forms. Henceforth we will restrict all considerations to forms with negative discriminant D = b² - ac < 0. In this case the equivalence classes can be characterized by reduced forms. A form (a, b, c) is reduced if 2|b| ≤ |a| ≤ |c|.
There is a gcd-like algorithm which, given (a, b, c), computes an equivalent reduced form within O(ln|abc|) arithmetical steps:

while (a, b, c) is not reduced do
begin
  b := -b mod c with |b| ≤ c/2
  (a, b, c) := (c, b, (b² - D)/c)
end
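For a positive form of determinant -n (so that (b² - D)/c = (b² + n)/c), this loop can be transcribed directly; the following Python sketch mirrors the two assignments above (the function name is ours):

```python
def reduce_form(a, b, c, n):
    """Compute the reduced form equivalent to the positive form (a, b, c)
    with b^2 - a*c = -n, by the gcd-like reduction loop."""
    assert b * b - a * c == -n
    while not (2 * abs(b) <= a <= c):
        b = -b % c                  # b := -b mod c, normalized to |b| <= c/2
        if 2 * b > c:
            b -= c
        a, b, c = c, b, (b * b + n) // c
    return a, b, c

print(reduce_form(243, 85, 34, 1037))   # (34, 17, 39)
```

Each step replaces a by the smaller outer coefficient c, so the coefficients shrink roughly like the remainders in Euclid's algorithm.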


THEOREM 4 [Gauss, Artikel 172]. In every equivalence class H with D < 0 there is either exactly one reduced form (a, b, c) or exactly two reduced forms (a, ±b, c). In the latter case H is called ambiguous.

A form with D < 0 satisfies either a, c > 0 or a, c < 0. It is called positive in the first and negative in the second case. Positive (negative, resp.) forms ax² + 2bxy + cy² take only positive (negative, resp.) values for real x, y (which follows from ac > b²). Since this property is preserved under the equivalence relation, a class must be either positive, containing only positive forms, or negative, containing only negative forms. Moreover there is a one-to-one correspondence between the positive and the negative forms, namely (a, b, c) ↔ (-a, b, -c). Therefore we can w.l.o.g. restrict our considerations to positive forms, and these generate exactly half of the equivalence classes. The number of equivalence classes with discriminant D is finite since a reduced, positive form (a, b, c) always satisfies

2|b| ≤ a ≤ √(4|D|/3).

Gauss [17] introduced the composition of (binary) quadratic forms and proved that the equivalence classes with fixed discriminant D form an abelian group, say QF(D), under composition, which is defined by scheme (1) below. Given two classes H_1, H_2 represented by their reduced forms, the reduced form of H_1·H_2 can be computed within O(ln|D|) arithmetical steps over numbers ≤ |D|. The classes of forms which are primitive and positive generate a subgroup of QF(D) which we call QFP(D). The unit element I of the group is represented by (1, 0, -D). The following assertions are equivalent: (1) H is ambiguous, (2) H·H = I, (3) every form (a, b, c) in H is equivalent to (a, -b, c), (4) T(a, b; b, c)Tᵗ = (a, b; b, c) for some integer matrix T with det T = -1, (5) there is a form (a, b, c) in H such that a | 2b. The reduced form of an ambiguous class is of one of the following three types:

b = 0  or  a = 2b  or  a = c.

We call these forms ambiguous; they always represent ambiguous classes. These three types of ambiguous forms yield the following factorizations of the discriminant:

-D = ac,    -D = b(2c - b),    -D = (a - b)(a + b).
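Read off as code, the three cases give the factor pair directly (a small illustrative helper, assuming the input is already an ambiguous reduced form of determinant -n; the function name is ours):

```python
def factor_pair(form, n):
    """Factor pair of n read off an ambiguous reduced form (a, b, c)
    with b^2 - a*c = -n, following the three cases in the text."""
    a, b, c = form
    assert b * b - a * c == -n
    if b == 0:
        return a, c                 # -D = a*c
    if a == 2 * b:
        return b, 2 * c - b         # -D = b*(2c - b)
    if a == c:
        return a - b, a + b         # -D = (a - b)*(a + b)
    raise ValueError("form is not ambiguous")

print(factor_pair((34, 17, 39), 1037))   # (17, 61)
```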

In this way the problem of factoring n reduces to the construction of ambiguous forms with discriminant -n. It is important that Gauss has established a strong correspondence between the factor pairs of n and the ambiguous classes in QFP(-n). We only report the case n odd, since we like to factor only odd numbers.


A pair (n_1, n_2) ∈ ℤ² is an admissible factor pair for n if n = n_1·n_2, n_1 < n_2 and gcd(n_1, n_2) = 1. Suppose n has (exactly) l distinct prime factors; then there are (exactly) 2^{l-1} admissible factor pairs for n.

THEOREM 5 [Gauss, Artikel 257, 258]. Suppose n ∈ ℤ is odd and has l ≥ 1 distinct prime factors. Then there are 2^{l-1} or 2^l ambiguous classes in QFP(-n) according to whether n ≡ 3 mod 4 or n ≡ 1 mod 4. Each of the 2^{l-1} admissible factor pairs of n is obtained from the reduced form of exactly one (in case n ≡ 3 mod 4; two in case n ≡ 1 mod 4) of these ambiguous classes.

EXAMPLE. We list n; all ambiguous forms with discriminant -n and b ≥ 0 that are primitive, reduced and positive; and the corresponding list of admissible factor pairs.

n = 3: (1,0,3); (1,3)
n = 5: (1,0,5), (2,1,3); (1,5), (1,5)
n = 15: (1,0,15), (3,0,5); (1,15), (3,5)
n = 21: (1,0,21), (3,0,7), (2,1,11), (5,2,5); (1,21), (3,7), (1,21), (3,7)
n = 105: (1,0,105), (3,0,35), (5,0,21), (7,0,15), (2,1,53), (6,3,19), (10,5,13), (11,4,11); (1,105), (3,35), (5,21), (7,15), (1,105), (3,35), (5,21), (7,15)

The distinction between the cases n ≡ 1 mod 4 and n ≡ 3 mod 4 is explained as follows. The ambiguous and reduced form (2, 1, (n + 1)/2) is primitive in case n ≡ 1 mod 4, whereas it is imprimitive in case n ≡ 3 mod 4, since in the latter case gcd(2, 2, (n + 1)/2) = 2. Since the product of two ambiguous classes is again ambiguous, there are twice as many ambiguous classes in case n ≡ 1 mod 4 as there are in case n ≡ 3 mod 4.

The remaining point to be discussed for the factorization of n is how to generate ambiguous classes in QFP(-n). This will be done by exploiting the group structure of QFP(-n). Let H, H̄ ∈ QFP(-n) be represented by (a, b, c) and (ā, b̄, c̄), i.e., H = [(a, b, c)], H̄ = [(ā, b̄, c̄)]. Then by definition of the composition of classes a representative (A, B, C) for H·H̄ can be found as follows:

μ := gcd(a, ā, b + b̄)
compute α, β, γ ∈ ℤ such that αa + βā + γ(b + b̄) = μ
A := aā/μ²
B := (1/μ)[αab̄ + βāb + γ(bb̄ - n)] mod A
C := (n + B²)/A                                            (1)
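Scheme (1) can be transcribed directly. The following Python sketch (function names and the two-step extended-gcd bookkeeping are ours) produces the Bezout coefficients α, β, γ and composes two forms of determinant -n, leaving the result unreduced:

```python
def xgcd(x, y):
    """Extended gcd: returns (g, s, t) with s*x + t*y = g = gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, s, t = xgcd(y, x % y)
    return g, t, s - (x // y) * t

def compose(f, g, n):
    """Scheme (1): compose forms f = (a, b, c), g = (a2, b2, c2)
    of determinant -n; the result is primitive but not reduced."""
    a, b, _ = f
    a2, b2, _ = g
    g1, al, be = xgcd(a, a2)              # al*a + be*a2 = gcd(a, a2)
    mu, s, ga = xgcd(g1, b + b2)          # mu = gcd(a, a2, b + b2)
    al, be = al * s, be * s               # now al*a + be*a2 + ga*(b+b2) = mu
    A = a * a2 // mu**2
    B = ((al * a * b2 + be * a2 * b + ga * (b * b2 - n)) // mu) % A
    return A, B, (n + B * B) // A

print(compose((3, 1, 346), (13, 4, 81), 1037))   # (39, 4, 27)
```

Note that composing a form with its mirror image (a, -b, c) yields (1, 0, n), i.e. the unit class, as claimed below.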


In the special case gcd(a, ā) = 1 one obtains in this way (observe that we can choose γ = 0 and α, β such that αa + βā = 1):

A := aā
choose B such that B ≡ b mod a and B ≡ b̄ mod ā
C := (n + B²)/A.

(A, B, C) will be primitive but not necessarily reduced. [(A, B, C)] does not depend on the distinct possible choices for α, β, γ, B, and C. Since α, β, γ can be computed via Euclid's gcd-algorithm, it is clear that this multiplication scheme requires only O(ln n) arithmetical steps over numbers ≤ O(n), provided (a, b, c) and (ā, b̄, c̄) are reduced. It can easily be seen that [(a, b, c)]·[(a, -b, c)] = I: in this case μ = a, A = 1 and therefore (A, B, C) ~ (1, 0, n), which means [(A, B, C)] = I. The special case gcd(a, ā) = 1 of this multiplication scheme immediately implies the following.

FACT 8. Let [(a, b, c)] ∈ QFP(-n) and let a = Π_i p_i^{α_i} be the prime factorization of a; then [(a, b, c)] = Π_i [(p_i^{α_i}, b_i, c_i)] with b_i := b mod p_i^{α_i} and c_i := (b_i² + n)/p_i^{α_i}.

The possibly occurring factors [(p_i^{α_i}, b_i, c_i)] in Fact 8 can be characterized as follows.

LEMMA 3. Let p be prime, p ≠ 2, gcd(p, n) = 1 and α ∈ ℕ. There exists [(p^α, b, c)] ∈ QFP(-n) with b, c ∈ ℤ iff -n is a quadratic residue mod p, i.e., (-n/p) = 1. If (-n/p) = 1 there are exactly two forms (p^α, ±b, (n + b²)/p^α) with 0 < b < p^α and discriminant -n.
Proof. It is well known for p ≠ 2 that A is a quadratic residue mod p iff A is a quadratic residue mod p^α. In fact, since ℤ*_{p^α} is cyclic of order p^{α-1}(p - 1), we have

A is a quadratic residue mod p ⇔ A^{(p-1)/2} ≡ 1 mod p ⇔ A^{p^{α-1}(p-1)/2} ≡ 1 mod p^α ⇔ A is a quadratic residue mod p^α.

Now suppose that [(p^α, b, c)] ∈ QFP(-n); then -n = b² - p^α c, hence -n is a quadratic residue mod p^α. On the other hand, if -n is a quadratic residue mod p^α then there are exactly two square roots ±b of -n in ℤ*_{p^α}. Hence there are exactly two forms (p^α, ±b, (n + b²)/p^α) with 0 < b < p^α and discriminant -n. □

In case (-n/p) = 1 there is a unique b ∈ ℤ with b² ≡ -n mod p, 0 < b < p/2. We take this b and we denote

I_{p,n} := [(p, b, (n + b²)/p)],  then (I_{p,n})^{-1} = [(p, -b, (n + b²)/p)].

From the composition in QFP(-n), see scheme (1), we conclude that

(I_{p,n})^α = [(p^α, b_α, (n + b_α²)/p^α)]  for some b_α ∈ ℤ with b_α ≡ b_{α-1} mod p^{α-1},
((I_{p,n})^α)^{-1} = [(p^α, -b_α, (n + b_α²)/p^α)].

We denote one of the classes [(p^α, ±b, (n + b²)/p^α)] occurring in Lemma 3 as I_{p^α,n}; then the other class must be (I_{p^α,n})^{-1}. It is clear from the multiplication scheme that

I_{p^α,n} = (I_{p,n})^{±α}.

This implies that Fact 8 can be rewritten as follows.

LEMMA 4. Let [(a, b, c)] ∈ QFP(-n), a odd, and let a = Π_i p_i^{α_i} be the prime factorization of a. Then

[(a, b, c)] = Π_i (I_{p_i,n})^{α_i ε_i}  with ε_i = ±1.

In particular, factoring [(a, b, c)] ∈ QFP(-n) as in Lemma 4 can be done roughly in the time which is necessary to factor a. Since we know (I_{p_i,n})^{α_i ε_i} = [(p_i^{α_i}, b_i, c_i)] with b_i ≡ b mod p_i^{α_i}, c_i = (b_i² + n)/p_i^{α_i}, we can easily check whether ε_i = 1 or ε_i = -1. Also, in the case that a is even, c must be odd provided (a, b, c) is primitive. Hence, if a is even we can apply Lemma 4 to the form (c, -b, a), which is equivalent to (a, b, c). By means of Lemma 4 we can generate ambiguous forms with discriminant -n in much the same way as congruences x² ≡ y² mod n are produced by Dixon's factoring algorithm.
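The sign test of Lemma 4 can be sketched as follows: for each prime power p^α dividing a we lift the canonical square root of -n mod p (the b with 0 < b < p/2 that defines I_{p,n}) to a root mod p^α and compare it with b mod p^α. This is an illustrative fragment; the function names and the trial-division loop are ours, and a is assumed odd and completely factorizable over the given primes:

```python
def sqrt_lift(r, p, alpha, n):
    """Hensel-lift r with r*r = -n (mod p), p odd, to a root mod p^alpha."""
    pk = p
    for _ in range(alpha - 1):
        pk *= p
        while (r * r + n) % pk != 0:
            r += pk // p
    return r % pk

def epsilons(a, b, n, primes):
    """Signs e_i in [(a,b,c)] = prod_i I_{p_i,n}^(e_i*alpha_i) as in Lemma 4."""
    out = []
    for p in primes:
        alpha = 0
        while a % p == 0:
            a, alpha = a // p, alpha + 1
        if alpha == 0:
            continue
        r = next(x for x in range(1, p // 2 + 1) if (x * x + n) % p == 0)
        r = sqrt_lift(r, p, alpha, n)     # canonical root defining I_{p^alpha,n}
        out.append((p, alpha, 1 if b % p**alpha == r else -1))
    return out

print(epsilons(27, -4, 1037, [3, 13]))   # [(3, 3, -1)]: [(27,-4,39)] = I_3^(-3)
```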

Construction of Ambiguous Classes in QFP(-n)

Stage 1. Construct a factor base P := {p | 2 < p ≤ v, (-n/p) = 1}.

EXAMPLE. n = 1037:

I_3 = [(3, 1, 346)],  I_13 = [(13, 4, 81)].

One obtains the relation I_13² = I_3². Hence I_13·I_3^{-1} is ambiguous. The reduced form in this class is (34, 17, 39), which yields the factorization 1037 = 17(78 - 17) = 17·61. Observe that the factor base in this example is smaller than in the application of Miller's method in Section 3. Dixon's algorithm would require a


larger factor base, too. Indeed the factor base is so small since the primes p = 5, 7, 11 are excluded because (-n/p) = -1.

In our analysis of the algorithm we will use the following heuristic assumptions.

(A3) #{p ≤ v : p prime, (-n/p) = 1} ≥ v/(c ln v) with c > 0 fixed.

(A4) Every admissible factor pair of n corresponds to some ambiguous class which is generated by the I_p, p ≤ v.

Even if assumption (A3) fails for n it will certainly hold for the product nq with some small prime q. Let q_1, q_2 be primes with (q_1/p) ≠ (q_2/p) for at least half of the primes p ≤ v; for each such p exactly one of (-nq_1/p), (-nq_2/p) equals 1. Then (A3), with a somewhat larger constant c, certainly holds for nq with either q = q_1 or q = q_2. Thus we apply the factoring algorithm to nq. By (A4) each factor of n will be found with the same frequency as the factor q.

The assumption (A4) is still somewhat weaker than the assumption used by Shanks [3] that the whole group QFP(-n) is generated by the classes I_p with small p. Under the assumptions (A3), (A4) the analysis of the algorithm becomes very similar to the analysis of Dixon's algorithm. The main advantage over Dixon's algorithm is that we have to factor numbers a = O(√n), instead of numbers w = O(n), over the base of small primes. Therefore we can argue as in the case that quadratic residues w = O(√n) mod n are constructed by the continued fraction method; see the end of Section 2. We choose

v ≈ n^{1/r},  r = 2⌈√(ln n / ln ln n)⌉,

and obtain as a final result:

THEOREM 6. [Assume (A3), (A4).] Suppose we factor a composite n via the construction of ambiguous forms with discriminant -n as above; then for each n a proper factor of n will be found with probability 1/2 within O(exp √(3 ln n ln ln n)) steps.
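For the example n = 1037 the ambiguous class can be reproduced along these lines: Hensel-lifting the root b = 1 of b² ≡ -1037 mod 3 up to modulus 3^5 gives a form representing (I_3)^5 (equivalently I_13·I_3^{-1}, since an ambiguous class equals its own inverse), whose reduction is the ambiguous form (34, 17, 39). This is a self-contained sketch; the reduction loop is the gcd-like algorithm of this section:

```python
def reduce_form(a, b, c, n):
    """Gauss reduction for positive forms with b^2 - a*c = -n."""
    while not (2 * abs(b) <= a <= c):
        b = -b % c
        if 2 * b > c:
            b -= c
        a, b, c = c, b, (b * b + n) // c
    return a, b, c

n, p = 1037, 3
b, pk = 1, p                 # 1*1 = -1037 (mod 3)
for _ in range(4):           # lift the square root of -n up to modulus 3^5
    pk *= p
    while (b * b + n) % pk != 0:
        b += pk // p
form = reduce_form(pk, b, (b * b + n) // pk, n)   # reduce the form of I_3^5
print(form)                  # (34, 17, 39): ambiguous, a = 2b
a, b, c = form
print(b * (2 * c - b))       # 17 * 61 = 1037
```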

The above factoring method can be interpreted as the continued fraction method in the case of negative discriminants. Conversely, in the case of positive discriminants D = b² - ac > 0, there is a different concept of reduced forms and there are many equivalent reduced forms. According to Gauss, Artikel 183-187, the equivalent reduced forms can be developed into an even and symmetric period. The recursion for developing this period is the same as that for evaluating the period of the continued fraction of √D. Shanks exploited this coincidence and proposed an algorithm to factor n by


constructing an ambiguous form with positive discriminant n. Shanks has a way to make giant steps within the period of equivalent reduced forms which imposes a group-like structure on the period. This second algorithm of Shanks runs in about O(n^{1/4}) steps; see Monier [6] and Lenstra [16] for a more detailed exposition of this method.

ACKNOWLEDGMENTS

I am greatly indebted to the Stanford Computer Science Department whose support enabled this work. In particular I thank D. Knuth for many hints and his efficient cooperation. J. Vuillemin communicated to me the thesis of L. Monier. I thank A. Schönhage and H. De Groote for many useful comments.

REFERENCES

1. R. L. RIVEST, A. SHAMIR, AND L. ADLEMAN, A method for obtaining digital signatures and public-key cryptosystems, Comm. ACM 21 (1978), 120-126.
2. M. A. MORRISON AND J. BRILLHART, A method of factorization and the factorization of F_7, Math. Comput. 29 (1975), 183-205.
3. D. SHANKS, Class number, a theory of factorization and genera, in "1969 Number Theory Institute" (D. J. Lewis, Ed.), Proc. Symp. Pure Math. No. 20, pp. 415-440, Amer. Math. Soc., Providence, R.I., 1971.
4. J. C. P. MILLER, On factorization, with a suggested new approach, Math. Comput. 29 (1975), 155-172.
5. J. D. DIXON, Asymptotically fast factorization of integers, Math. Comp. 36 (1981), 255-260.
6. L. MONIER, Algorithmes de factorisation d'entiers, These d'informatique, Universite Paris-Sud, 1980.
7. J. M. POLLARD, A Monte Carlo method for factorization, BIT 15 (1975), 331-334.
8. R. K. GUY, How to factor a number, in "Proc. Fifth Manitoba Conference on Numerical Math.," 1975, pp. 49-90.
9. N. G. DE BRUIJN, On the number of positive integers ≤ x and free of prime factors > y, Indag. Math. 13 (1951), 50-60.
10. N. G. DE BRUIJN, On the number of positive integers ≤ x and free of prime factors > y, II, Nederl. Akad. Wetensch. Proc. Ser. A 69 (1966), 239-247.
11. D. E. KNUTH, "The Art of Computer Programming, Vol. 2, Seminumerical Algorithms," Addison-Wesley, 1969; 2nd ed., 1980.
12. J. M. POLLARD, Theorems on factorization and primality testing, Proc. Cambridge Philos. Soc. 76 (1974), 521-528.
13. V. STRASSEN, Einige Resultate über Berechnungskomplexität, Jber. Deutsch. Math.-Verein. 78 (1976), 1-8.
14. M. C. WUNDERLICH, A running time analysis of Brillhart's continued fraction method, in Lecture Notes in Mathematics, No. 751, pp. 328-342, Springer, Berlin, 1979.
15. D. SHANKS, The infrastructure of real quadratic fields and its applications, in "Proc. Boulder Number Theory Conference," University of Colorado, 1972, pp. 217-224.
16. H. W. LENSTRA, JR., "On the Calculation of Regulators and Class Numbers of Quadratic Fields," preprint, University of Amsterdam, 1980.
17. C. F. GAUSS, "Disquisitiones Arithmeticae," Leipzig, 1801; German translation: "Untersuchungen über höhere Arithmetik," Springer, Berlin, 1889.


18. M. O. RABIN, Probabilistic algorithms in finite fields, SIAM J. Comput. 9 (1980), 273-280.
19. M. SIEVEKING, An algorithm for division of power series, Computing 10 (1972), 153-156.
20. A. BORODIN AND I. MUNRO, "The Computational Complexity of Algebraic and Numeric Problems," American Elsevier, New York, 1975.
21. R. P. BRENT, "Analysis of Some New Cycle Finding and Factorization Algorithms," Department of Computer Science, Australian National University, 1979.
22. D. E. KNUTH AND L. TRABB PARDO, Analysis of a simple factorization algorithm, Theoret. Comput. Sci. 3 (1976), 321-348.
23. A. M. LEGENDRE, "Theorie des Nombres," Tome I, Paris, 1798; reprint Blanchard, Paris, 1955.
24. R. L. RIVEST AND R. Y. PINTER, "Using Hyperbolic Tangents in Integer Factoring," MIT Report, 1979.
25. A. SCHÖNHAGE AND V. STRASSEN, Schnelle Multiplikation großer Zahlen, Computing 7 (1971), 281-292.
26. C. POMERANCE, "Analysis and Comparison of Some Integer Factoring Algorithms," Report, Department of Mathematics, University of Georgia, 1981.
27. J. SATTLER AND C. P. SCHNORR, "Ein Effizienzvergleich der Faktorisierungsverfahren von Morrison-Brillhart und Schroeppel," Report, Universität Frankfurt, Sept. 1981.