Algorithms for adaptive factorization of polynomials


Signal Processing 35 (1994) 67-74 Elsevier


Short communication

Ari Kangas
Systems and Control Group, Uppsala University, Department of Technology, Box 27, S-751 03 Uppsala, Sweden

Received 2 December 1991; revised 10 July 1992 and 22 March 1993

Abstract. This correspondence treats adaptive factorization of polynomials with real-valued, possibly time-varying coefficients. We discuss and compare the properties of serial and parallel Newton algorithms which factorize the polynomial into first-order and second-order factors. The parallel methods are based on a coefficient matching principle and estimate all roots of the polynomial simultaneously, in contrast to the serial procedures, which determine the roots one by one. The algorithm requires approximately 4n² flops (floating-point operations) to determine the roots of a polynomial of degree n.


Keywords. Roots of polynomials; polynomial factors; adaptive Newton algorithms.

1. Introduction

The need to factorize polynomials arises naturally in numerous engineering problems. The main applications that we have in mind in the present correspondence are adaptive signal processing tasks such as frequency estimation, direction-of-arrival estimation and fault detection. In such applications, the first step consists of recursive estimation of polynomial coefficients. In the second step, the roots are determined in order to extract the interesting information. These two-stage procedures can be coupled into a one-stage method. This was done in [10] for the special case of AR models. The main advantage of using a two-stage procedure is that it can be applied to a variety of problems where it may not be feasible to estimate the roots directly.

Several factorization algorithms have been developed over the years. One of the simplest is Newton's method [2]. Within the numerical analysis community, Laguerre's method [11] is a favourite, with some nice properties such as guaranteed convergence for real roots. Another popular method is to compute the eigenvalues of the companion matrix associated with the polynomial. This is, for example, the way polynomial rooting is implemented in the software package MATLAB.

Correspondence to: Mr. A. Kangas, Automatic Control and Systems Analysis Group, Uppsala University, Department of Technology, Box 27, S-751 03 Uppsala, Sweden.

0165-1684/94/$07.00 © 1994 Elsevier Science B.V. All rights reserved. SSDI 0165-1684(93)E0044-L
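For illustration, the companion-matrix approach mentioned above can be sketched in a few lines. This is a minimal sketch in Python, not code from the correspondence; the helper name `companion_roots` and the use of NumPy are ours.

```python
import numpy as np

def companion_roots(a):
    """Root the monic polynomial z^n + a[0] z^(n-1) + ... + a[n-1]
    via the eigenvalues of its companion matrix, the approach used by
    e.g. MATLAB's roots and numpy.roots."""
    n = len(a)
    C = np.zeros((n, n))
    C[0, :] = -np.asarray(a, dtype=float)  # first row carries -a_1, ..., -a_n
    C[1:, :-1] = np.eye(n - 1)             # subdiagonal of ones
    return np.linalg.eigvals(C)
```

The characteristic polynomial of this matrix is exactly A(z), so its eigenvalues are the roots; the eigenvalue computation costs O(n³) per call, which motivates the cheaper adaptive updates studied below.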


Newton's method and Laguerre's method determine the zeros one by one. In order to ensure that all zeros are found, one usually reduces the order of the polynomial after finding one zero. This procedure is called deflation. One of the drawbacks of deflation-based methods is that each root has to be estimated with maximum accuracy before deflation; otherwise the accuracy of the remaining roots will be affected. As a result, it is necessary to perform several iteration steps, which increases the computational burden. In addition, the deflation procedure is in itself computationally intensive. These drawbacks may prohibit the use of deflation-based methods in some real-time applications.

In this correspondence we therefore focus on serial and parallel Newton-type algorithms which do not use deflation. The name serial refers to the fact that these algorithms determine the roots one by one. One potential drawback of the serial approach is that we cannot be sure that all zeros are found. The parallel Newton algorithms, on the other hand, estimate all roots simultaneously. Besides presenting algorithms of both serial and parallel type, this correspondence also shows that there is a close link between the serial and the parallel Newton algorithms.

2. Problem formulation

The basic relation between the direct-form polynomial coefficients and the factorized forms is given by

    A(z) = z^n + a_1 z^{n-1} + ... + a_n = ∏_{i=1}^{n} (z - λ_i)   (2.1a)
         = ∏_{i=1}^{m} (z² - α_i z - β_i),   (2.1b)

with m = n/2. If the polynomial is of odd order, say n - 1, then set a_n = 0 and β_m = 0. Given the vector of polynomial coefficients,

    a = (a_1 ... a_n)^T,   (2.2)

where a_i ∈ ℝ, we want to estimate the parameters of some factorized representation of the polynomial, for example the root vector

    λ = (λ_1 ... λ_n)^T,   (2.3)

where λ_i ∈ ℂ, or

    θ = (α_1 β_1 ... α_m β_m)^T,   (2.4)

where α_i, β_i ∈ ℝ. For later use, it is convenient to define

    θ_k = (α_k β_k)^T.   (2.5)

Usually, the coefficients of the given polynomial A(z) will be time-varying, but, for notational convenience, we have omitted the explicit time dependence in the vectors a, λ and θ. Note that since a is real-valued, we know that for some p ≤ n/2 we have λ_{2j-1} = λ*_{2j} = ρ_j e^{iω_j} for 1 ≤ j ≤ p (* denotes complex conjugation) and λ_j ∈ ℝ for j > 2p. It is assumed that λ_i ≠ λ_j for all i ≠ j. The assumption of distinct roots is not a serious restriction in many signal processing applications. If necessary, it is easy to modify the algorithms for the case of multiple zeros.

3. The serial Newton algorithm

Let us review the classical Newton procedure for determining the roots of a polynomial. This algorithm solves

    A(λ_k) = 0, k = 1, ..., n,   (3.1)

and the corresponding iterative procedure is

    λ̂_k(t) = λ̂_k(t-1) - γ(t) A(λ̂_k(t-1)) / A'(λ̂_k(t-1)), k = 1, ..., n.   (3.2)

This algorithm (which we denote SNA-1) is serial since it determines the roots one by one. Its properties are well documented [2]; in particular, it has a quadratic convergence rate close to the true roots. The integer-valued index t can be interpreted either as an iteration count or, in the case of time-varying polynomials, as a time step. In the latter case, we have A(z) = z^n + a_1(t) z^{n-1} + ... + a_n(t). The implementation of this method can be done by using the following way to evaluate a polynomial:

    A(z) = [A(q^{-1}) / (1 - z q^{-1})] δ(n),   (3.3)

where

    A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_n q^{-n},   (3.4)

q^{-1} denotes the shift operator (q^{-1} x(s) = x(s-1)) and δ(s) is the discrete Dirac function

    δ(s) = 1 for s = 0, δ(s) = 0 for s ≠ 0.   (3.5)

The definition (3.3) may appear strange, but it can easily be verified that an evaluation of (3.3) leads to Horner's rule for evaluating polynomials in an efficient manner [2]. Thus, the definition (3.3) is very natural, since this is the way polynomials are evaluated in practice. The derivative of A(z) can be evaluated by differentiating (3.3),

    A'(z) = [A(q^{-1}) / (1 - z q^{-1})²] q^{-1} δ(n),   (3.6)

which provides an alternative to 'standard' ways to compute the derivative in (3.2).

3.1. A serial Newton algorithm for second-order factors

The definition (3.3) shows that the problem of estimating the roots of a polynomial can be interpreted as a problem of cancellation of impulse responses. This observation will be used in order to derive a serial solver for the second-order polynomial factors (see (2.1)). Consider the following new function of a polynomial:

    B(θ) = [ [A(q^{-1})/H(q^{-1})] δ(n)
             [A(q^{-1})/H(q^{-1})] δ(n-1) ],   (3.7)

where

    H(q^{-1}) = 1 - α q^{-1} - β q^{-2}   (3.8)

and

    θ = (α β)^T.   (3.9)

Note that this definition is a very natural extension of the definition of A(z). The polynomial H_k(q^{-1}) = 1 - α_k q^{-1} - β_k q^{-2} is a factor of A(q^{-1}) iff B(θ_k) = 0. Hence, θ_k is a zero of the function B. The (serial) Newton algorithm for solving B(θ_k) = 0 is

    θ̂_k(t) = θ̂_k(t-1) - γ(t) [B'(θ̂_k(t-1))]^{-1} B(θ̂_k(t-1)), k = 1, ..., m.   (3.10)

The derivative is straightforward to evaluate,

    B'(θ̂_k) = [ [A(q^{-1})/Ĥ_k(q^{-1})²] δ(n-1)   [A(q^{-1})/Ĥ_k(q^{-1})²] δ(n-2)
                 [A(q^{-1})/Ĥ_k(q^{-1})²] δ(n-2)   [A(q^{-1})/Ĥ_k(q^{-1})²] δ(n-3) ],   (3.11)

with

    Ĥ_k(q^{-1}) = 1 - α̂_k q^{-1} - β̂_k q^{-2}.   (3.12)

This algorithm (denoted SNA-2) is believed to be new. Although it involves a matrix inversion, it is still serial in the sense that the θ_k, k = 1, ..., m, are determined one by one. In the next section, we show that the serial methods can easily be modified to yield parallel estimation methods.
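For illustration, the two serial updates can be sketched in Python. This is a minimal sketch, not code from the correspondence: the helper names (`horner_deriv`, `impulse_through`, `sna1_step`, `sna2_step`) are ours, and the impulse responses of A(q^{-1})/H(q^{-1}) are generated by the obvious second-order recursion implied by (3.7)-(3.8).

```python
import numpy as np

def horner_deriv(a, z):
    """Evaluate A(z) = z^n + a[0] z^(n-1) + ... + a[n-1] and A'(z)
    with a coupled Horner recursion (cf. (3.3) and (3.6))."""
    v, d = 1.0 + 0.0j, 0.0 + 0.0j
    for c in a:
        d = d * z + v
        v = v * z + c
    return v, d

def sna1_step(a, lam, gamma=1.0):
    """One sweep of the SNA-1 update (3.2) over all current root estimates."""
    lam = np.array(lam, dtype=complex)
    for k in range(len(lam)):
        v, d = horner_deriv(a, lam[k])
        lam[k] -= gamma * v / d
    return lam

def impulse_through(x, alpha, beta, smax):
    """Impulse response y(0), ..., y(smax) of X(q^-1)/H(q^-1), where
    H(q^-1) = 1 - alpha q^-1 - beta q^-2 and x holds the coefficients of X."""
    y = []
    for s in range(smax + 1):
        v = x[s] if s < len(x) else 0.0
        if s >= 1:
            v += alpha * y[s - 1]
        if s >= 2:
            v += beta * y[s - 2]
        y.append(v)
    return y

def sna2_step(a, alpha, beta, gamma=1.0):
    """One SNA-2 update (3.10)-(3.11) for one factor z^2 - alpha z - beta.
    a = [a1, ..., an] are the coefficients of A(z)."""
    n = len(a)
    af = [1.0] + list(a)                     # coefficients of A(q^-1)
    y = impulse_through(af, alpha, beta, n)  # A/H impulse response -> B, (3.7)
    w = impulse_through(y, alpha, beta, n)   # A/H^2 impulse response -> B', (3.11)
    g = lambda s: w[s] if s >= 0 else 0.0
    B = np.array([y[n], y[n - 1]])
    Bp = np.array([[g(n - 1), g(n - 2)],
                   [g(n - 2), g(n - 3)]])
    da, db = np.linalg.solve(Bp, B)
    return alpha - gamma * da, beta - gamma * db
```

For A(z) = z² - 3z + 2, for instance, `sna2_step` recovers the exact factor parameters (α, β) = (3, -2) within a few steps from a nearby starting point, since the Newton step is exact for this small example.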

4. Simultaneous estimation of polynomial factors

We now turn our attention to the development of parallel algorithms for estimating factors of polynomials. The algorithms are based on the simple fact that the polynomial coefficients are equal to the coefficients obtained by constructing a polynomial from the factors of the polynomial. This property was called the coefficient matching principle in [12]. Note that it is an easy task to construct a polynomial from a set of factors (use repeated convolution).

4.1. A parallel Newton algorithm for root estimation

The coefficient matching relation can be written

    â(λ) - a = 0.   (4.1)

Here, â(·) denotes the map induced by (2.1a), (2.2) and (2.3), while a are the coefficients of the given polynomial. The Newton algorithm for the recursive (iterative) solution of (4.1) is (see [2])

    λ̂(t) = λ̂(t-1) - γ(t) [â'(λ̂(t-1))]^{-1} {â(λ̂(t-1)) - a},   (4.2)

where γ(t) is a positive scalar that can be used in order to control the step size. The elements of the Jacobian â' of the transformation λ → a are given by [â']_{ij} = ∂a_i/∂λ_j. In the following, let Â(q^{-1}; λ̂) denote the polynomial corresponding to the coefficients â(λ̂). In [5], we showed that the algorithm (4.2) can be given the following very simple form:

    λ̂_k(t) = λ̂_k(t-1) - γ(t) A(λ̂_k(t-1)) / Â'_k(λ̂(t-1)), k = 1, ..., n,   (4.3)

where

    Â'_k(λ̂) = [Â(q^{-1}; λ̂) / (1 - λ̂_k q^{-1})²] δ(n-1).   (4.4)

The algorithm (4.3) will be denoted PNA-1. It can easily be verified that (4.4) is equivalent to

    Â'_k(λ̂) = ∂Â(z; λ̂)/∂z |_{z=λ̂_k}.   (4.5)

Using (2.1a) and (4.5), we can derive an alternative expression for Â'_k, which may be advantageous to use from a computational point of view:

    Â'_k(λ̂) = ∏_{i=1, i≠k}^{n} (λ̂_k - λ̂_i).   (4.6)

The structure of the recursions (4.3) is very nice, since the matrix inversion in (4.2) has been replaced with n scalar equations. Also, note the close correspondence with the SNA-1 method. The only difference between the methods is that for the SNA-1 the denominator is the derivative of the given polynomial, whereas the PNA-1 uses the derivative of the polynomial constructed from the estimated roots. The algorithm defined by (4.3) (or equivalently by (4.2)) is unfortunately not new. It was discovered independently by several researchers in various ways. One early reference is [13], who in 1903 used the iteration formula (4.3) in a constructive proof of the fundamental theorem of algebra. Börsch-Supan [1] arrived at the algorithm while deriving error bounds for estimated roots. Kerner [9] and Starer and Nehorai [12] derived the algorithm from the coefficient matching principle (4.1). The algorithm in [12] was, however, given in the (less elegant) form (4.2), with â' given as a product of two matrices. In [5], the author reformulated the algorithm of [12] in the form (4.3), based on a similar derivation in [8]. We will return to discuss some of the properties of the algorithm (4.3) in Section 5.

4.2. A parallel Newton algorithm for second-order factors

We are now interested in the factorization defined by

    A(z) = z^n + a_1 z^{n-1} + ... + a_n = ∏_{i=1}^{m} (z² - α_i z - β_i).   (4.7)

It should be clear that the 'coefficient matching' principle

    â(θ) - a = 0   (4.8)

holds also for this parametrization. The Newton algorithm for the recursive solution of (4.8) is

    θ̂(t) = θ̂(t-1) - γ(t) [â'(θ̂(t-1))]^{-1} {â(θ̂(t-1)) - a}.   (4.9)

The evaluation of the mapping â(θ) and the Jacobian â'(θ) can be done in a manner similar to the derivation of the PNA-1. Alternative expressions for the Jacobian can be found in [7] in the form of the product of two matrices. Equivalent, although not equally explicit, expressions can also be found in [10]. In [5], we proved the following simple explicit expression for the update equations (4.9):

    θ̂_k(t) = θ̂_k(t-1) - γ(t) [B̂'_k(θ̂(t-1))]^{-1} B(θ̂_k(t-1)), k = 1, ..., m,   (4.10)

where

    B(θ̂_k) = [ [A(q^{-1})/Ĥ_k(q^{-1})] δ(n)
               [A(q^{-1})/Ĥ_k(q^{-1})] δ(n-1) ]   (4.11)

and

    B̂'_k(θ̂) = [ [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-1)   [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-2)
                 [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-2)   [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-3) ].   (4.12)

This new algorithm will be denoted PNA-2. Compare (4.10)-(4.12) with (3.10)-(3.11). Once again, it turns out that the only difference between the serial and the parallel solver lies in how the derivative B' is constructed. The SNA-2 uses the derivative of the function B, which is a function of the given polynomial, whereas the PNA-2 uses the derivative of the function B̂, which is constructed from the estimated polynomial factors.

5. Algorithm properties

We will now discuss some properties which influence the global convergence of PNA-1 and PNA-2. An interesting general question is, of course, whether the algorithms will converge or not. Like most nonlinear algorithms, they are well behaved close to the true solution and, since the algorithms are of the Newton type, the convergence rate is quadratic in that case. Far away from the true value, however, divergence may easily occur [4]. This should not be surprising, since the problem is highly non-linear, especially for high-order polynomials. It is well known [2] that small changes in the polynomial coefficients may cause large changes in the root locations. One possible way to improve the global convergence is to use the scalar γ(t) (see (4.3)) in a proper way, for instance to ensure that ||â(λ̂(t)) - a|| decreases monotonically.

5.1. Analysis of PNA-1

It is easy to verify from (4.3) that the only possible convergence points of PNA-1 are given by

    A(λ̂_k) = 0 for all k,   (5.1)

which corresponds to the desired solution. From (4.3), we also note that it is important that Â'_k(λ̂) ≠ 0. Since Â'_k(λ̂) = 0 iff λ̂_k = λ̂_j for some j ≠ k (see (4.6)), we may formally require that

    λ̂ ∈ D_Δ = {λ : |λ_i - λ_j| > Δ for all i ≠ j}.   (5.2)

In order to avoid numerical problems, it is therefore recommended to introduce some monitoring in order to ensure (5.2). The step size γ(t) can serve as a useful tool for this purpose. By imposing such a restriction, it is guaranteed that the algorithm does not collapse due to a division by zero. However, it does not guarantee that the algorithm will converge to a true convergence point. It is still possible that the algorithm converges to a point on the boundary of the admissible region D_Δ. This may occur, for instance, when the number of real roots is not known in advance. In that case, the algorithm does not converge, as will be shown below. For notational convenience, let M(p) denote the set of root vectors consisting of 2p complex roots and n - 2p real roots.
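The PNA-1 update (4.3), (4.6), combined with this kind of step-size monitoring of the separation condition (5.2), can be sketched as follows. This is a minimal Python sketch, not code from the correspondence; the name `pna1_step`, the parameter `delta` and the step-halving strategy for γ(t) are our own illustrative choices.

```python
import numpy as np

def pna1_step(a, lam, gamma=1.0, delta=1e-8):
    """One parallel update (4.3): all root estimates move simultaneously.
    Numerator: the given polynomial A evaluated at lam_k. Denominator:
    (4.6), the derivative of the polynomial built from the current
    estimates. gamma is halved until the candidate stays inside the
    admissible region (5.2); assumes at least two estimated roots."""
    lam = np.asarray(lam, dtype=complex)
    n = len(lam)
    coeffs = [1.0] + list(a)                             # A(z), leading 1
    num = np.array([np.polyval(coeffs, z) for z in lam])
    den = np.array([np.prod([lam[k] - lam[i] for i in range(n) if i != k])
                    for k in range(n)])                  # (4.6)
    while True:
        cand = lam - gamma * num / den
        sep = min((abs(cand[i] - cand[j])
                   for i in range(n) for j in range(i + 1, n)),
                  default=np.inf)
        if sep > delta or gamma < 1e-12:
            return cand
        gamma *= 0.5  # simple monitoring device to enforce (5.2)
```

With γ(t) = 1 throughout, this is exactly the simultaneous iteration (4.3); the halving loop only activates when an update would push two estimates closer than Δ.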

THEOREM 1. Assume that the PNA-1 algorithm is applied in such a fashion that γ(t) is chosen to ensure that λ̂(t) ∈ D_Δ. Then λ̂(t-1) ∈ M(p) ⇒ λ̂(t) ∈ M(p).

PROOF. Examine the update equations (4.3). It can be verified from (2.1) and (4.6) that if λ̂_k = λ̂*_{k+1}, we have

    A(λ̂_k) = A*(λ̂_{k+1}),

    Â'_k(λ̂) = ∏_{i=1, i≠k}^{n} (λ̂_k - λ̂_i)
             = (λ̂_k - λ̂_{k+1}) ∏_{i=1, i≠k,k+1}^{n} (λ̂_k - λ̂_i)
             = (λ̂*_{k+1} - λ̂*_k) ∏_{i=1, i≠k,k+1}^{n} (λ̂*_{k+1} - λ̂*_i)
             = [Â'_{k+1}(λ̂)]*,

where the third equality uses the fact that the remaining roots λ̂_i, i ≠ k, k+1, form a set that is closed under conjugation since λ̂(t-1) ∈ M(p). It then follows that

    λ̂_k(t) = λ̂_k(t-1) - γ(t) A(λ̂_k(t-1)) / Â'_k(λ̂(t-1))
            = λ̂*_{k+1}(t-1) - γ(t) A*(λ̂_{k+1}(t-1)) / [Â'_{k+1}(λ̂(t-1))]*
            = λ̂*_{k+1}(t),

which means that complex conjugated pairs remain complex conjugated after the update. If λ̂_k ∈ ℝ, then A(λ̂_k) ∈ ℝ and Â'_k(λ̂) ∈ ℝ; thus λ̂_k(t-1) ∈ ℝ ⇒ λ̂_k(t) ∈ ℝ, and the proof is concluded. □

If the number of real roots has been guessed wrongly, or if the number is not constant with time, the parameters will very likely converge towards a double root on the real axis, that is, to the boundary of the domain D_Δ. If Δ is chosen too small, the update equations will be ill-conditioned and the root estimates will scatter away in an unpredictable way. This deficiency of PNA-1 can probably be resolved by using some clever monitoring device which transforms real roots into complex ones at an 'appropriate' stage. The author's experience is that this is not a trivial task. If this is a problem for the intended application, we recommend the use of PNA-2 instead, which solves the same problem without the need of such a rescue device.

5.2. Analysis of PNA-2

From (4.10) we see that the possible convergence points of PNA-2 satisfy

    B(θ̂_k) = 0 for all k,   (5.3)

which corresponds to the desired solution. Note, however, that it is also required that B̂'_k(θ̂) is non-singular at the convergence points. Therefore, the singularity properties of the matrix B̂'_k(θ̂) are interesting to examine. We have the following result.

THEOREM 2. The matrices B̂'_k(θ̂) in (4.12) are nonsingular ⇔ the polynomials Ĥ_i and Ĥ_j are coprime (have no common factors) for all i ≠ j.

PROOF. Recall the expression (4.12) for the matrix B̂'_k(θ̂). This matrix is singular iff we can find a vector (p_1 p_2)^T ≠ 0 such that

    [ [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-1)   [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-2) ] [p_1]
    [ [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-2)   [Â(q^{-1}; θ̂)/Ĥ_k(q^{-1})²] δ(n-3) ] [p_2] = 0.   (5.4)

Define

    S_k(q^{-1}) = ∏_{i≠k} Ĥ_i(q^{-1}),   (5.5)

    P(q^{-1}) = p_1 + p_2 q^{-1}.   (5.6)

Then, with (5.5)-(5.6), (5.4) can be rewritten as

    r_k(s) ≜ [P(q^{-1}) S_k(q^{-1}) / Ĥ_k(q^{-1})] δ(s) = 0, s = n-2, n-1.   (5.7)

Since the numerator P S_k is of degree n-1, it follows that if r_k(n-1) = r_k(n-2) = 0, then r_k(s) = 0 for s ≥ n. This can be verified by rewriting (5.7) as

    r_k(s) = α̂_k r_k(s-1) + β̂_k r_k(s-2) + (P S_k) δ(s).   (5.8)

The stated result follows since (P S_k) δ(s) = 0 for s ≥ n. Now, r_k(s) = 0, s ≥ n, implies that

    P(q^{-1}) S_k(q^{-1}) / Ĥ_k(q^{-1})   (5.9)

has a finite impulse response, which means that the denominator must be cancelled by factors of the numerator. But P(q^{-1}) can only cancel one factor and, since all Ĥ_i are coprime, it follows that the only solution to (5.4) is p_1 = p_2 = 0. Thus B̂'_k(θ̂) is nonsingular iff the Ĥ_j are coprime. □
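For completeness, the PNA-2 update (4.10)-(4.12) can be sketched in the same style. This is a minimal Python sketch with our own helper names (`impulse_resp`, `pna2_step`); the constructed polynomial Â(q^{-1}; θ̂) is formed from the estimated factors by the repeated convolution mentioned in Section 4.

```python
import numpy as np

def impulse_resp(x, alpha, beta, smax):
    """Impulse response y(0), ..., y(smax) of X(q^-1)/H(q^-1), where
    H(q^-1) = 1 - alpha q^-1 - beta q^-2 and x holds the coefficients of X."""
    y = []
    for s in range(smax + 1):
        v = x[s] if s < len(x) else 0.0
        if s >= 1:
            v += alpha * y[s - 1]
        if s >= 2:
            v += beta * y[s - 2]
        y.append(v)
    return y

def pna2_step(a, thetas, gamma=1.0):
    """One PNA-2 sweep (4.10): every second-order factor is updated using the
    same current estimate theta-hat. a = [a1, ..., an] (n even),
    thetas = [(alpha_1, beta_1), ..., (alpha_m, beta_m)]."""
    n = len(a)
    af = [1.0] + list(a)
    # A-hat(q^-1; theta-hat): repeated convolution of the estimated factors
    ahat = np.array([1.0])
    for alpha, beta in thetas:
        ahat = np.convolve(ahat, [1.0, -alpha, -beta])
    new = []
    for alpha, beta in thetas:
        y = impulse_resp(af, alpha, beta, n)    # A / H_k        -> B, (4.11)
        u = impulse_resp(ahat, alpha, beta, n)  # A-hat / H_k
        w = impulse_resp(u, alpha, beta, n)     # A-hat / H_k^2  -> B', (4.12)
        g = lambda s: w[s] if s >= 0 else 0.0
        B = np.array([y[n], y[n - 1]])
        Bp = np.array([[g(n - 1), g(n - 2)],
                       [g(n - 2), g(n - 3)]])
        da, db = np.linalg.solve(Bp, B)
        new.append((alpha - gamma * da, beta - gamma * db))
    return new
```

Per Theorem 2, the 2x2 solve is well posed as long as the estimated factors stay pairwise coprime; in a practical implementation one might monitor the conditioning of Bp and reduce γ(t) when it deteriorates.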

REMARK. A direct consequence of Theorem 2 is that the transition from a real root pair to a complex conjugate pair does not cause any problems, since a double root on the real axis is an admissible point (provided the double roots are collected in the same second-order section!). If the estimate includes more than two real roots, then it is wise to arrange the parameter vector so that the roots closest to each other are collected in one second-order section. The PNA-2 method is thus capable of factorizing polynomials with double roots on the real axis. This may be of little interest per se, but, rather surprisingly, it turns out to be an important advantage in practice. It is shown by numerical examples in [7] that PNA-2 may have better convergence properties than PNA-1 even in cases where the number of real roots is known in advance.

6. Conclusions

Two serial and two parallel algorithms for adaptive root estimation have been studied and compared. The first parallel algorithm was derived independently by several authors, see for example [1, 5, 9, 12, 13], and can be regarded as a modification of the classical serial Newton method for estimation of polynomial roots. Here, we showed that if this algorithm is applied to a polynomial with real coefficients, then it is necessary to know the number of real roots in advance, which may be unrealistic and difficult to verify. Furthermore, this makes the method difficult to use when the polynomials are time-varying and the number of real roots is not constant with time. The second parallel algorithm is based on a parametrization in terms of second-order polynomial factors and does not require a priori knowledge of the number of real-valued roots. This property turns out to give the second algorithm better convergence properties than the first, even in cases where the number of real roots is known in advance.

References

[1] W. Börsch-Supan, "A posteriori bounds for zeros of polynomials", Numer. Math., Vol. 5, 1963, pp. 380-398.
[2] G. Dahlquist and A. Björck, Numerical Methods, Prentice Hall, Englewood Cliffs, NJ, 1974.
[3] P. Gill, W. Murray and M.H. Wright, Practical Optimization, Academic Press, London, 1981.
[4] J.-C. Ho and J.-F. Yang, "Parallel Rayleigh quotient iterative algorithm for rooting nonstationary spectral polynomials", Proc. Internat. Conf. Acoust. Speech Signal Process., Toronto, 1991, pp. 2233-2236.
[5] A. Kangas, Algorithms for adaptive factorization of polynomials, Report UPTEC 91041R, Department of Technology, Uppsala University, 1991.
[6] A. Kangas, Contributions to temporal and spatial frequency estimation, PhD Thesis, Uppsala University, 1993.
[7] A. Kangas and E. Karlsson, "Adaptive root estimation algorithms", Preprints, 9th IFAC/IFORS Symposium on Identification and System Parameter Estimation, Budapest, Hungary, 1991, pp. 1311-1316.
[8] A. Kangas, P. Stoica and T. Söderström, "Adaptive instrumental variable methods for frequency estimation", Internat. J. Adaptive Control Signal Process., Vol. 6, September 1992, pp. 441-469.
[9] I.O. Kerner, "Ein Gesamtschrittverfahren zur Berechnung der Nullstellen von Polynomen", Numer. Math., Vol. 8, 1966, pp. 290-294.
[10] A. Nehorai and D. Starer, "Adaptive pole estimation", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-38, 1990, pp. 825-838.
[11] H.J. Orchard, "The Laguerre method for finding the zeros of polynomials", IEEE Trans. Circuits and Systems, Vol. CS-36, 1989, pp. 1377-1381.
[12] D. Starer and A. Nehorai, "Adaptive polynomial factorization by coefficient matching", IEEE Trans. Signal Process., Vol. SP-39, 1991, pp. 527-530.
[13] K. Weierstrass, "Neuer Beweis des Satzes, dass jede ganze rationale Funktion einer Veränderlichen dargestellt werden kann als ein Produkt aus linearen Funktionen derselben Veränderlichen", Ges. Werke 3, 1903, pp. 251-269.