Computation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method




Signal Processing 83 (2003) 1135 – 1140 www.elsevier.com/locate/sigpro

Fast communication

Konstantinos Slavakis*, Isao Yamada, Kohichi Sakaniwa

Department of Communications and Integrated Systems, Tokyo Institute of Technology, Tokyo 152-8552, Japan

Received 4 March 2002

Abstract

This paper studies the problem of finding the nearest symmetric positive definite Toeplitz matrix to a given symmetric one. Additional design constraints, which are also formed as closed convex sets in the real Hilbert space of all symmetric matrices, are imposed on the desired matrix. An algorithmic solution to the problem given by the hybrid steepest descent method is established also in the case of inconsistent design constraints. © 2003 Elsevier Science B.V. All rights reserved.

1. Introduction

Many signal processing applications, e.g. in the context of control theory [13], urge the calculation of the nearest (refer to Section 3) symmetric positive definite Toeplitz matrix to a given symmetric one. Moreover, new attributes of the spectrum estimation problem of a real scalar wide sense stationary process are revealed if this is viewed as a special case of the aforementioned problem where the given symmetric matrix is the unbiased estimate (a not in general positive definite matrix [19]) of the covariance matrix of the process [12,20]. A mathematically sound framework to tackle this important problem was originally studied in [12]. The framework was developed under the set theoretic estimation approach [5] where each one of the design

* Corresponding author. Tel.: +81-3-5734-2503; fax: +81-3-5734-2905. E-mail addresses: [email protected] (I. Yamada), [email protected] (K. Slavakis), [email protected] (K. Sakaniwa).

constraints (positive definiteness, being Toeplitz, as well as other ones) is represented as a closed convex set in the real Hilbert space [26] of all symmetric matrices. The present problem can thus be regarded as a convex feasibility-optimization one: find the nearest matrix to a given one (optimization) such that it belongs to the intersection of a finite collection of closed convex sets (feasibility). Provided that the intersection of the collection of the closed convex sets is nonempty, i.e. consistent constraints, an algorithmic solution to the problem was given in [12] by using the Boyle–Dykstra algorithm [2]. However, since this algorithm does not provide a solution in the case of inconsistent constraints, i.e. empty intersection of the closed convex sets, the authors in [12], although posing the question, do not resolve this case of great practical importance. A situation often met in engineering applications is that the finite collection of closed convex sets (constraints) has an empty intersection due, for example, to noisy measurements or tight design specifications [6–8,13,16]. The hybrid steepest descent method (HSDM) [9,23,24] was introduced as a

0165-1684/03/$ - see front matter © 2003 Elsevier Science B.V. All rights reserved. doi:10.1016/S0165-1684(03)00002-1



powerful tool for the minimization of certain convex functions on the intersection of the fixed point sets of nonexpansive mappings in a generally infinite dimensional real Hilbert space. A side effect of its efficient use of the fixed point theory [26] is that it gives novel extensions to the convex feasibility-optimization problem by providing meaningful solutions even when the intersection of the closed convex sets is empty, thus strictly covering the results obtained by the Boyle–Dykstra algorithm. This paper demonstrates the applicability of the HSDM to the important problem of finding the nearest symmetric positive definite Toeplitz matrix to a given symmetric one even when inconsistent design specifications are met.

2. Preliminaries

Let R denote the set of all real numbers. Let also N be a positive integer. We recall here that an N × N Toeplitz matrix is a matrix T := [t_{i,j}] ∈ R^{N×N} such that (s.t.) t_{i,j} = t_{i−j}, t_{i−j} ∈ R, 1 ≤ i, j ≤ N [11]. Moreover, we remind here that a matrix A ∈ R^{N×N} is called positive (semi)definite if v^* A v > 0 (≥ 0), for all nonzero v ∈ R^N [11] (the superscript * denotes transposition). Additionally, given the matrices B, C ∈ R^{N×N}, the notation B ≻ C (B ⪰ C) means that B − C is positive (semi)definite. We define, now, the real Hilbert space H [26] as the space of all N × N real symmetric matrices, i.e. H := {X ∈ R^{N×N} : X^* = X}, where the inner product is given by ⟨X, Y⟩ := Tr(XY), ∀X, Y ∈ H, and Tr(A) := Σ_{i=1}^N a_{i,i} stands for the trace of the matrix A := [a_{i,j}] ∈ R^{N×N}. The induced norm

‖X‖ := ⟨X, X⟩^{1/2}, ∀X ∈ H, becomes the Frobenius norm [11]. A subset C of a linear space is called convex if ∀X, Y ∈ C and ∀λ ∈ (0, 1), λX + (1 − λ)Y ∈ C. Given, now, a closed convex set C ⊆ H, the metric projection onto C is the mapping P_C : H → C : X ↦ P_C(X) s.t. d(X, C) := inf{‖X − Y‖ : Y ∈ C} = ‖X − P_C(X)‖. A set in H will be called simple if its associated projection mapping has a closed form expression. Any closed subspace of H, for example, is a simple closed convex set [21]. Now, given a pair of nonempty closed convex sets C1, C2 ⊆ H, define d(C1, C2) := inf{‖X − Y‖ : X ∈ C1, Y ∈ C2}.
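As a concrete check of the inner product ⟨X, Y⟩ := Tr(XY) and the induced Frobenius norm, consider the following small NumPy sketch (the matrices are arbitrary symmetric examples, not from the paper):

```python
import numpy as np

# Two arbitrary symmetric 3x3 matrices in H (illustrative values).
X = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
Y = np.array([[1., 0., 1.], [0., 2., 0.], [1., 0., 1.]])

inner = np.trace(X @ Y)          # <X, Y> := Tr(XY)
norm = np.sqrt(np.trace(X @ X))  # ||X|| = <X, X>^{1/2}

# For symmetric matrices this coincides with the Frobenius norm.
assert np.isclose(norm, np.linalg.norm(X, 'fro'))
```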

Finally, given a convex set C ⊆ H, a function φ : C → R ∪ {∞} will be called convex if ∀X, Y ∈ C, and ∀λ ∈ (0, 1), φ(λX + (1 − λ)Y) ≤ λφ(X) + (1 − λ)φ(Y).

3. Design constraints and formulation of the problem

Given a positive definite matrix Q ∈ H (e.g. Q := εI_N, where ε > 0, and I_N ∈ R^{N×N} denotes the identity matrix), the first constraint for the desired matrix is introduced by the following closed convex set (a closed convex cone [10, p. 62] with vertex at Q):

C1 := {X ∈ H : X ⪰ Q}.

(1)

The metric projection P1 := P_{C1} is given in Higham [15]: P1(X) = U Λ₊ U^* + Q, X ∈ H. Here, U Λ U^* is the eigenvalue–eigenvector decomposition of X − Q: U is the orthogonal N × N matrix whose columns are the eigenvectors of the matrix X − Q, and Λ is the diagonal matrix containing the eigenvalues of X − Q. The matrix Λ₊ is formed by replacing the negative entries in Λ by zeroes. As the second closed convex set we consider the following closed subspace of H:

C2 := {X ∈ H : X is Toeplitz}.

(2)
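The eigenvalue clipping that defines P1 above can be sketched numerically (a minimal NumPy sketch; the helper name and sample matrices are illustrative assumptions, not from the paper):

```python
import numpy as np

def project_C1(X, Q):
    """Metric projection onto C1 = {X in H : X >= Q} (Higham [15]).

    Eigendecompose X - Q = U diag(lam) U^T, zero out the negative
    eigenvalues, and shift back by Q.
    """
    lam, U = np.linalg.eigh(X - Q)       # symmetric eigendecomposition
    lam_plus = np.maximum(lam, 0.0)      # Lambda_+ : clip negatives
    return U @ np.diag(lam_plus) @ U.T + Q

# Example: project an indefinite symmetric matrix onto C1 with Q = 0.1*I.
X = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1
Q = 0.1 * np.eye(2)
P = project_C1(X, Q)
assert np.all(np.linalg.eigvalsh(P - Q) >= -1e-12)   # P - Q is PSD
```

Note that for X already in C1 the clipping is a no-op, so the mapping is idempotent, as a metric projection must be.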

If we let X = [x_{i,j}] ∈ R^{N×N}, 1 ≤ i, j ≤ N, then the metric projection onto C2 is given by P2(X) := P_{C2}(X) := [t_{i,j}], 1 ≤ i, j ≤ N, where t_{i,j} := t_{j−i} := (1/(N − j + i)) Σ_{k=1}^{N−j+i} x_{k,k+j−i}, if 1 ≤ i ≤ j ≤ N, and t_{i,j} := t_{j−i} := t_{j,i}, if 1 ≤ j < i ≤ N [12]. Given, now, the set of vectors {r_i}_{i=1}^q ⊆ R^N and the set of positive numbers {δ_i}_{i=1}^q, the following closed halfspaces are introduced as additional convex constraints: for i = 1, …, q,

C_{i+2} := {X ∈ H : r_i^* X r_i ≤ δ_i}.

(3)

The metric projections P_{i+2} := P_{C_{i+2}}, i = 1, …, q, are given in [12] as P_{i+2}(X) := X, if X ∈ C_{i+2}, and P_{i+2}(X) := X + ((δ_i − r_i^* X r_i)/‖r_i‖_{R^N}^4) r_i r_i^*, if X ∈ H \ C_{i+2}. Here ‖u‖_{R^N} := (u^* u)^{1/2}, ∀u ∈ R^N. Let us give, now, an interpretation of the design specification (3). Suppose that for some k, r_k := [1, 0, …, 0]^*. Then, for any X = [x_{i,j}] ∈ R^{N×N}, r_k^* X r_k ≤ δ_k iff x_{1,1} ≤ δ_k. This means that for a spectrum estimation problem and for a real scalar wide sense stationary
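The two projections just described, diagonal averaging for P2 and a rank-one correction for P_{i+2}, admit short implementations (a hedged NumPy sketch; the function names and sample data are illustrative, and the input is assumed symmetric, i.e. in H):

```python
import numpy as np

def project_C2(X):
    """P2: replace each diagonal of the symmetric matrix X by its
    average, yielding the nearest Toeplitz matrix [12]."""
    N = X.shape[0]
    T = np.empty_like(X, dtype=float)
    for d in range(N):
        avg = np.mean(np.diagonal(X, offset=d))
        i = np.arange(N - d)
        T[i, i + d] = avg          # upper diagonal j - i = d
        T[i + d, i] = avg          # symmetric lower diagonal
    return T

def project_halfspace(X, r, delta):
    """P_{i+2}: projection onto {X : r^T X r <= delta}; a rank-one
    correction along r r^T when the constraint is violated [12]."""
    v = r @ X @ r
    if v <= delta:
        return X
    return X + ((delta - v) / np.linalg.norm(r) ** 4) * np.outer(r, r)

X = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]])
T = project_C2(X)                      # diagonals averaged: Toeplitz
assert np.allclose(np.diagonal(T), np.mean(np.diag(X)))

r = np.array([1.0, 0.0, 0.0])
P = project_halfspace(X, r, delta=3.0) # forces x_{1,1} down to 3
assert np.isclose(r @ P @ r, 3.0)
```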


process with covariance matrix R_N (a positive definite, Toeplitz matrix in H) and variance σ² [19], letting r_k := [1, 0, …, 0]^*, for some k, implies that R_N ∈ C_{k+2} iff σ² ≤ δ_k. In control theory, constraint (3) can be interpreted as a bound on the output performance cost (or the L2 to L∞ gain) of the closed-loop system [12,13]. By defining a positive integer m ≥ q + 3, we allow the possible introduction of an additional set of design constraints as a set of necessarily simple closed convex sets {C_i}_{i=q+3}^m ⊆ H. Then, a convex feasibility-optimization problem can be posed.

Problem 1 (Original problem): Given R ∈ H and assuming that ∩_{i=1}^m C_i ≠ ∅, consider the problem of finding an X∗ ∈ ∩_{i=1}^m C_i s.t. ‖X∗ − R‖ = inf{‖X − R‖ : X ∈ ∩_{i=1}^m C_i}.

An algorithmic solution to Problem 1 was given in [12] by using the Boyle–Dykstra algorithm [2], which generates a sequence in H that converges to X∗ = P_{∩_{i=1}^m C_i}(R). Since the Boyle–Dykstra algorithm cannot deal with inconsistent design constraints, i.e. the situation where ∩_{i=1}^m C_i = ∅, the question already posed in [12] of solving this important engineering problem remains unanswered. The following section shows how the HSDM covers also these realistic cases.

4. The hybrid steepest descent method and the generalized convex feasible set

Although Problem 1 focuses on the finite dimensional Hilbert space H, the results of this section hold for any, not necessarily finite dimensional, real Hilbert space. We also denote the Hilbert space for this general case by the symbol H. Again, the notation ⟨·, ·⟩ stands for the inner product and ‖·‖ for the induced norm. Let us consider the mapping T : H → H. The set of all fixed points of T is defined as Fix(T) := {x ∈ H : T(x) = x}. The mapping T is called κ-Lipschitzian on S ⊆ H if there exists κ > 0 s.t. ‖T(x) − T(y)‖ ≤ κ‖x − y‖, ∀x, y ∈ S. In particular, T is called nonexpansive if ‖T(x) − T(y)‖ ≤ ‖x − y‖, ∀x, y ∈ H. If T is nonexpansive, then Fix(T) is closed and convex [21, Lemma 2.8-1]. A celebrated example of a nonexpansive mapping is the metric projection P_C onto the nonempty closed convex set C ⊆ H [21, Theorem 2.4-1.ii]. Clearly, Fix(P_C) = C. Given S ⊆ H, a mapping F : H → H is said to be monotone on S if ⟨F(x) − F(y), x − y⟩ ≥ 0, ∀x, y ∈ S. Moreover, F is said to be η-strongly monotone on S if there exists η > 0 s.t. ⟨F(x) − F(y), x − y⟩ ≥ η‖x − y‖², ∀x, y ∈ S. Let U be an open subset of H. Then, a function Θ : H → R ∪ {∞} is called Gâteaux differentiable [26, p. 135] on U if for each u ∈ U there exists a(u) ∈ H s.t. lim_{t→0} ((Θ(u + th) − Θ(u))/t) = ⟨a(u), h⟩, ∀h ∈ H. Then, Θ′ : U → H : u ↦ a(u) is called the Gâteaux derivative of Θ on U.

Example 1 (A κ-Lipschitzian and η-strongly monotone mapping [24]). Suppose that r ∈ H, β ∈ R and Q : H → H is a bounded linear [26], self-adjoint, i.e. ⟨Q(x), y⟩ = ⟨x, Q(y)⟩, ∀x, y ∈ H, and strongly positive mapping, i.e. ⟨Q(x), x⟩ ≥ τ‖x‖² for some τ > 0 and ∀x ∈ H (if H is finite dimensional, Q is nothing but a symmetric positive definite matrix). Define the quadratic function Θ : H → R by Θ(x) := (1/2)⟨Q(x), x⟩ − ⟨r, x⟩ + β, ∀x ∈ H. Then, the Gâteaux derivative Θ′(x) = Q(x) − r is ‖Q‖-Lipschitzian and τ-strongly monotone on H.

Theorem 2 (The hybrid steepest descent method [23]). Let T : H → H be a nonexpansive mapping with Fix(T) ≠ ∅. Suppose that the function Θ : H → R ∪ {∞} has a Gâteaux derivative Θ′ : H → H that is κ-Lipschitzian and η-strongly monotone on T(H). Then, with any u_0 ∈ H, and any sequence (λ_n)_{n≥1} satisfying (L1) lim_{n→∞} λ_n = 0, (L2) Σ_{n≥1} λ_n = ∞, (L3) lim_{n→∞} (λ_n − λ_{n+1}) λ_{n+1}^{−2} = 0, s.t. λ_n ∈ (0, ∞), or (W1) lim_{n→∞} λ_n = 0, (W2) Σ_{n≥1} λ_n = ∞, (W3) Σ_{n≥1} |λ_n − λ_{n+1}| < ∞, s.t. λ_n ∈ [0, ∞), the sequence (u_n)_{n≥0} generated by u_{n+1} := T(u_n) − λ_{n+1} Θ′(T(u_n)), n ≥ 0, converges strongly to the uniquely existing u∗ s.t. Θ(u∗) = inf{Θ(Fix(T))}. (An example of a sequence (λ_n)_{n≥1} satisfying (L1)–(L3) is λ_n = 1/n^θ for 0 < θ < 1. Moreover, an example of a sequence (λ_n)_{n≥1} satisfying (W1)–(W3) is λ_n = 1/n.)

By Example 1, Theorem 2 can be seen as a generalization of a fixed point theorem developed in [17,22]
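Theorem 2 can be illustrated with a small numerical sketch (a toy problem in R², not the paper's matrix setting; all names and values are illustrative): minimize Θ(x) := (1/2)‖x − r‖² over Fix(T), where T is the projection onto the box [0,1]², so the HSDM limit should be r clipped to the box.

```python
import numpy as np

# Nonexpansive mapping: projection onto the box [0,1]^2, so Fix(T) is the box.
def T(x):
    return np.clip(x, 0.0, 1.0)

r = np.array([2.0, -0.5])            # point to approximate
# Theta(x) = 0.5*||x - r||^2, so Theta'(x) = x - r
# (1-Lipschitzian and 1-strongly monotone).
u = np.zeros(2)
for n in range(1, 5000):
    lam = 1.0 / n                    # step sizes satisfying (W1)-(W3)
    Tu = T(u)
    u = Tu - lam * (Tu - r)          # u_{n+1} = T(u_n) - lam_{n+1} Theta'(T(u_n))

# The minimizer of Theta over the box is the clipped point (1, 0).
assert np.allclose(u, [1.0, 0.0], atol=1e-2)
```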



to minimize φ(x) := ‖x − a‖² on Fix(T), for some a ∈ H. Now, the concept of feasibility can be extended as follows.

Definition 3 (Generalized convex feasible set [23]). Given nonempty closed convex sets C_i ⊆ H, i = 1, …, p, and K ⊆ H, define the proximity function Φ : H → R by Φ(x) := (1/2) Σ_{i=1}^p w_i d(x, C_i)², where (w_i)_{i=1}^p ⊆ [0, 1] s.t. Σ_{i=1}^p w_i = 1. Then, the generalized convex feasible set K_Φ ⊆ K is defined by K_Φ := {u ∈ K : Φ(u) = inf{Φ(K)}}, i.e. it is the set of all minimizers of Φ on K.

Remark 4. It can be verified by [10, Proposition II.1.2] that K_Φ ≠ ∅ if at least one of C_i, i = 1, …, p, and K is bounded.

Remark 5. In the case where the closed convex sets C_i, i = 1, …, p, and K stand for the constraints of a convex feasibility problem, the set K is called the absolute constraint since K_Φ ⊆ K.

Remark 6. If K ∩ (∩_{i=1}^p C_i) ≠ ∅, then K_Φ = K ∩ (∩_{i=1}^p C_i).

Proposition 7 (Fixed point characterization of K_Φ [24]). For any α ≠ 0, K_Φ = Fix((1 − α)Id + αP_K Σ_{i=1}^p w_i P_{C_i}), where Id : H → H denotes the identity mapping. Moreover, the mapping (1 − α)Id + αP_K Σ_{i=1}^p w_i P_{C_i} becomes nonexpansive for any 0 ≤ α ≤ 3/2.

Another fixed point characterization of K_Φ can be found in [8]. For α = 1, Proposition 7 covers the classic result Fix(P_{C1} P_{C2}) = {x ∈ C1 : d(x, C2) = d(C1, C2)} for a pair of nonempty closed convex sets C1, C2 [4]. If one of C1, C2 is bounded, then by successive projections [1,3,14,21], the sequence ((P_{C1} P_{C2})^n(x_0))_{n≥0} converges to a point in Fix(P_{C1} P_{C2}), for some x_0 ∈ H [4]. This was used in an inconsistent covariance control optimization problem [13]: an x∗ ∈ C1 s.t. d(x∗, C2) = d(C1, C2) is desired (x∗ is a matrix in [13]), where C1 := ∩_{i=1}^{m1} C_{1,i}, C2 := ∩_{j=1}^{m2} C_{2,j}, and C1 ∩ C2 = ∅ (one of C1, C2 is bounded), given the closed convex sets {C_{1,i}}_{i=1}^{m1}, {C_{2,j}}_{j=1}^{m2} ⊆ H. Since, however, the intersection of simple closed convex sets is not necessarily simple itself, an approximation of P_{C1}, P_{C2} by means of the Boyle–Dykstra method must be employed, in general, for any n in ((P_{C1} P_{C2})^n(x_0))_{n≥0} [13]. On the contrary, HSDM uses Proposition 7 to characterize K_Φ by a finite collection of simple closed convex sets {C1, …, C_p, K}, thus avoiding any undesired approximation of the projection mappings.

5. The extended problem and its algorithmic solution

As already stated in Section 3 and by the definition of the metric projection mapping, the solution to Problem 1 is given by X∗ = P_{∩_{i=1}^m C_i}(R). Clearly, there exists ρ > 0 s.t. X∗ ∈ C_b(ρ) (e.g. let ρ = ‖P_{∩_{i=1}^m C_i}(R)‖), where

C_b(ρ) := {X ∈ H : ‖X‖ ≤ ρ}.

(4)

In order to extend Problem 1 and to guarantee the existence of a solution, by Remark 4, also in the situation of inconsistent design specifications, we impose as an additional constraint to the original problem the bounded closed convex set given in (4) for some ρ > 0. In practice and in the original problem setting, the choice of a very large ρ would be sufficient to guarantee the existence of a solution. More precisely, to show that there exists ρ > 0 s.t. at least C1 ∩ C2 ∩ C_b(ρ) ≠ ∅, choose ρ > √N λ_max(Q), where λ_max(Q) denotes the maximum eigenvalue of Q imposed in (1), and verify that λ_max(Q) I_N ∈ C1 ∩ C2 ∩ C_b(ρ). The following proposition demonstrates that the choice of C_b(ρ) as an additional design constraint is by no means unnatural to the original problem formulation.

Proposition 8. Suppose that for some k ∈ {1, …, q}, r_k := [1, 0, …, 0]^* ∈ R^N, and δ_k > 0. Then, ∩_{i=1}^m C_i = C_b(N δ_k) ∩ (∩_{i=1}^m C_i).

Proof. Given in the appendix.

A class of applications where the condition r_k := [1, 0, …, 0]^*, for some k ∈ {1, …, q}, takes place is the spectrum estimation problems and the case where one imposes bounds on the variance of the real scalar, wide


sense stationary process under study [12,20] (refer to the short discussion below (3)). The metric projection P_b := P_{C_b(ρ)} can be computed easily as follows: P_b(X) := X, if X ∈ C_b(ρ), and P_b(X) := (ρ/‖X‖)X, if X ∈ H \ C_b(ρ). Now, by following Remark 5, let C1, defined in (1), be the absolute constraint. Let also w_2, …, w_m, w_b ∈ [0, 1] s.t. Σ_{i=2}^m w_i + w_b = 1. Then, by (2), (3), and (4), define the proximity function Φ(X) := (1/2) Σ_{i=2}^m w_i d(X, C_i)² + (1/2) w_b d(X, C_b(ρ))², ∀X ∈ H. Clearly, in this case the generalized convex feasible set becomes C_{1,Φ} := {X ∈ C1 : Φ(X) = inf{Φ(C1)}}.

Remark 9. Since C_b(ρ) is bounded, Remark 4 clearly implies that C_{1,Φ} ≠ ∅.

Problem 2 (Extended problem): Given R ∈ H and Θ(X) := (1/2)‖X − R‖², X ∈ H, find an X∗ ∈ C_{1,Φ} s.t. Θ(X∗) = inf{Θ(C_{1,Φ})}.

Remark 10. Suppose that ∩_{i=1}^m C_i ≠ ∅. If the solution X∗ of Problem 1 belongs to C_b(ρ), which is very likely to happen in practice for a very large value of ρ, Remark 6 clearly implies that C_{1,Φ} = C_b(ρ) ∩ (∩_{i=1}^m C_i) ∋ X∗. Moreover, since ‖X∗ − R‖ = inf{‖X − R‖ : X ∈ ∩_{i=1}^m C_i}, we have that Θ(X∗) = inf{Θ(C_{1,Φ})}, because C_{1,Φ} ⊆ ∩_{i=1}^m C_i. Thus, Problems 1 and 2 are equivalent. In the case where r_k := [1, 0, …, 0]^*, for some k ∈ {1, …, q}, as in spectrum estimation problems, Proposition 8 implies that C_{1,Φ} = C_b(N δ_k) ∩ (∩_{i=1}^m C_i) = ∩_{i=1}^m C_i. Once again Problems 1 and 2 coincide. Since, however, C_{1,Φ} is well defined even in the case where ∩_{i=1}^m C_i = ∅ (cf. Remark 9), Problem 2 clearly extends Problem 1.

We consider, now, the nonexpansive mapping T := P1(Σ_{i=2}^m w_i P_i + w_b P_b). Then, by Proposition 7 and for α = 1, C_{1,Φ} = Fix(T). In addition, it is apparent by Example 1 that for the quadratic function given in Problem 2, we have Θ′(X) = X − R, ∀X ∈ H, which is 1-Lipschitzian and 1-strongly monotone on H. We now provide, by means of Example 1 and Theorem 2, an algorithmic solution to the problem of computing the nearest, in the sense of Θ, matrix X∗ ∈ C1 s.t. it least violates, in the sense of Φ, the constraints C2, …, C_m, C_b(ρ), i.e. X∗ ∈ C_{1,Φ}.

Proposition 11. For any X_0 ∈ H, and any sequence (λ_n)_{n≥1} satisfying (L1) lim_{n→∞} λ_n = 0, (L2) Σ_{n≥1} λ_n = ∞, (L3) lim_{n→∞} (λ_n − λ_{n+1}) λ_{n+1}^{−2} = 0, s.t. λ_n ∈ (0, ∞), or (W1) lim_{n→∞} λ_n = 0, (W2) Σ_{n≥1} λ_n = ∞, (W3) Σ_{n≥1} |λ_n − λ_{n+1}| < ∞, s.t. λ_n ∈ [0, ∞), the sequence (X_n)_{n≥0} generated by X_{n+1} := T(X_n) − λ_{n+1}(T(X_n) − R), n ≥ 0, converges to the uniquely existing solution of Problem 2, i.e. to X∗ ∈ C_{1,Φ} s.t. Θ(X∗) = inf{Θ(C_{1,Φ})}.
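The iteration of Proposition 11 translates into a short program. The sketch below assembles the projections of Sections 3 and 5; the helper names, the particular weights, the sample data, and the constraint values are illustrative assumptions, not from the paper:

```python
import numpy as np

def proj_C1(X, Q):                  # eq. (1): X >= Q (Higham [15])
    lam, U = np.linalg.eigh(X - Q)
    return U @ np.diag(np.maximum(lam, 0.0)) @ U.T + Q

def proj_C2(X):                     # eq. (2): average each diagonal
    N = X.shape[0]
    T = np.empty_like(X)
    for d in range(N):
        avg = np.mean(np.diagonal(X, offset=d))
        i = np.arange(N - d)
        T[i, i + d] = avg
        T[i + d, i] = avg
    return T

def proj_half(X, r, delta):         # eq. (3): r^T X r <= delta
    v = r @ X @ r
    if v <= delta:
        return X
    return X + ((delta - v) / np.linalg.norm(r) ** 4) * np.outer(r, r)

def proj_ball(X, rho):              # eq. (4): ||X|| <= rho
    nx = np.linalg.norm(X)
    return X if nx <= rho else (rho / nx) * X

def hsdm(R, Q, r, delta, rho, w=(0.4, 0.3, 0.3), iters=3000):
    """X_{n+1} = T(X_n) - lam_{n+1} (T(X_n) - R), where T is P1
    composed with the weighted average of P2, P3, Pb, and
    lam_n = 1/n satisfies (W1)-(W3) (Proposition 11)."""
    w2, w3, wb = w
    X = np.zeros_like(R)
    for n in range(1, iters + 1):
        TX = proj_C1(w2 * proj_C2(X) + w3 * proj_half(X, r, delta)
                     + wb * proj_ball(X, rho), Q)
        X = TX - (1.0 / n) * (TX - R)
    return X

# Illustrative data: an indefinite symmetric "covariance estimate" R.
N = 4
R = np.array([[1.0, 0.9, 0.8, 0.7],
              [0.9, 1.0, 0.9, 0.8],
              [0.8, 0.9, 1.0, 0.9],
              [0.7, 0.8, 0.9, -0.5]])   # not positive definite
Q = 1e-3 * np.eye(N)
X = hsdm(R, Q, r=np.array([1.0, 0, 0, 0]), delta=2.0, rho=10.0)
# After finitely many iterations X is only approximately in C1,
# so allow a small negative eigenvalue tolerance.
assert np.linalg.eigvalsh(X).min() >= -1e-2
```

The composition order in `hsdm` mirrors T := P1(Σ w_i P_i + w_b P_b): the absolute constraint C1 is applied last, so the fixed points stay inside C1 even when the remaining constraints are inconsistent.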

Example 1 and Theorem 2 imply that a not necessarily quadratic function Θ could be used in Problem 2 and Proposition 11, provided that Θ′ is κ-Lipschitzian and η-strongly monotone. A relaxation of the condition of Θ′ being η-strongly monotone in finite dimensional Hilbert spaces can be found in [18,25]. Different scenarios could also be employed by choosing each time a different closed convex set among {C1, …, C_m, C_b(ρ)} for the absolute constraint. The weights {w_2, …, w_m, w_b} in the definition of Φ could also be adjusted properly depending on the design. As a final concluding remark, we notice that the above proposed scheme can be straightforwardly extended to the case of block Toeplitz matrices (refer to [20] for an extension in the context of spectrum estimation problems).

Acknowledgements

We would like to express our deep gratitude to the anonymous reviewers for their invaluable comments on the original manuscript.

Appendix

Proof of Proposition 8. Notice that if ∩_{i=1}^m C_i = ∅, then the proposition holds trivially. Suppose, therefore, that X ∈ ∩_{i=1}^m C_i. This clearly implies that X ∈ C1 ∩ C2 ∩ C_{k+2}. Since X = [x_{i,j}]_{1≤i,j≤N} is a symmetric positive definite Toeplitz matrix, we have |x_{i,j}| ≤ (1/2)(x_{i,i} + x_{j,j}), and max_{1≤i,j≤N} |x_{i,j}| = max_{1≤i≤N} |x_{i,i}| [11, Theorem 4.2.6]. Moreover, by X ∈ C2 ∩ C_{k+2}, 0 < x_{i,i} ≤ δ_k, i = 1, …, N. We deduce therefore from the above that |x_{i,j}| ≤ δ_k, 1 ≤ i, j ≤ N. One can easily verify now that ‖X‖² = Σ_{i,j=1}^N |x_{i,j}|² ≤ N² δ_k². Thus, X ∈ C_b(N δ_k) and ∩_{i=1}^m C_i ⊆ C_b(N δ_k) ∩ (∩_{i=1}^m C_i). Clearly, C_b(N δ_k) ∩ (∩_{i=1}^m C_i) ⊆ ∩_{i=1}^m C_i. This completes the proof.
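The chain of bounds in the proof can be spot-checked numerically (an illustrative symmetric positive definite Toeplitz matrix; `delta_k` is a hypothetical bound on the diagonal entries):

```python
import numpy as np

# Symmetric positive definite Toeplitz matrix built from its first row
# (strictly diagonally dominant, hence positive definite).
c = np.array([2.0, 0.9, 0.4, 0.1])
N = len(c)
X = np.array([[c[abs(i - j)] for j in range(N)] for i in range(N)])
delta_k = 2.0                                # x_{1,1} <= delta_k

assert np.all(np.linalg.eigvalsh(X) > 0)     # positive definite
# |x_{i,j}| is maximized on the diagonal, and 0 < x_{i,i} <= delta_k ...
assert np.max(np.abs(X)) == np.max(np.diag(X)) <= delta_k
# ... hence ||X||^2 <= N^2 delta_k^2, i.e. X lies in C_b(N delta_k).
assert np.linalg.norm(X) <= N * delta_k
```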



References

[1] H.H. Bauschke, J.M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev. 38 (3) (1996) 367–426.
[2] J.P. Boyle, R.L. Dykstra, A method for finding projections onto the intersection of convex sets in Hilbert space, Lecture Notes in Statist. 37 (1986) 28–47.
[3] L.M. Bregman, The method of successive projections for finding a common point of convex sets, Soviet Math. Dokl. 6 (1965) 688–692.
[4] W. Cheney, A.A. Goldstein, Proximity maps for convex sets, Proc. Amer. Math. Soc. 10 (3) (1959) 448–450.
[5] P.L. Combettes, The foundations of set theoretic estimation, Proc. IEEE 81 (2) (1993) 182–208.
[6] P.L. Combettes, Inconsistent signal feasibility problems: least-squares solutions in a product space, IEEE Trans. Signal Process. 42 (11) (1994) 2955–2966.
[7] P.L. Combettes, Strong convergence of block-iterative outer approximation methods for convex optimization, SIAM J. Control Optim. 38 (2) (2000) 538–565.
[8] P.L. Combettes, P. Bondon, Hard-constrained inconsistent signal feasibility problems, IEEE Trans. Signal Process. 47 (9) (1999) 2460–2468.
[9] F. Deutsch, I. Yamada, Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings, Numer. Funct. Anal. Optim. 19 (1998) 33–56.
[10] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North-Holland, Amsterdam, 1976.
[11] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.
[12] K.M. Grigoriadis, A.E. Frazho, R.E. Skelton, Application of alternating convex projection methods for computation of positive Toeplitz matrices, IEEE Trans. Signal Process. 42 (7) (1994) 1873–1875.
[13] K.M. Grigoriadis, R.E. Skelton, Alternating convex projection methods for discrete-time covariance control design, J. Optim. Theory Appl. 88 (2) (1996) 399–432.
[14] L.G. Gubin, B.T. Polyak, E.V. Raik, The method of projections for finding the common point of convex sets, USSR Comput. Math. Math. Phys. 7 (1967) 1–24.

[15] N.J. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra Appl. 103 (1988) 103–118.
[16] A.N. Iusem, A. De Pierro, On the convergence of Han's method for convex programming with quadratic objective, Math. Programming 52 (1991) 265–284.
[17] P.L. Lions, Approximation de points fixes de contractions, C.R. Acad. Sci. Paris Série A–B 284 (1977) 1357–1359.
[18] N. Ogura, I. Yamada, Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping, Numer. Funct. Anal. Optim. 23 (1&2) (2002) 113–137.
[19] B. Porat, Digital Processing of Random Signals: Theory and Methods, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[20] K. Slavakis, I. Yamada, K. Sakaniwa, Spectrum estimation of real vector wide sense stationary processes by the hybrid steepest descent method, in: Proceedings of the ICASSP, Orlando, 2002.
[21] H. Stark, Y. Yang, Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics, Wiley, New York, 1998.
[22] R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486–491.
[23] I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, in: D. Butnariu, Y. Censor, S. Reich (Eds.), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, North-Holland, Amsterdam, 2001, pp. 473–504.
[24] I. Yamada, N. Ogura, Y. Yamashita, K. Sakaniwa, Quadratic optimization of fixed points of nonexpansive mappings in Hilbert space, Numer. Funct. Anal. Optim. 19 (1998) 165–190.
[25] I. Yamada, N. Ogura, N. Shirakawa, A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems, Contemp. Math. 313 (2002) 269–305.
[26] E. Zeidler, Nonlinear Functional Analysis and Its Applications I: Fixed Point Theorems, Springer, New York, 1986.