Acta Mathematica Scientia 2009, 29B(2): 341–348
http://actams.wipm.ac.cn

A FAST CONVERGENT METHOD OF ITERATED REGULARIZATION*

Huang Xiaowei    Wu Chuansheng
School of Sciences, Wuhan University of Technology, Wuhan 430070, China

Wu Di
College of Arts and Science, Yangtze University, Jingzhou 434020, China

Abstract  This article presents a fast convergent method of iterated regularization based on the idea of Landweber iterated regularization, together with an a-posteriori parameter choice rule based on the Morozov discrepancy principle, and the optimal asymptotic convergence order of the regularized solution is obtained. A numerical test shows that the method of iterated regularization accelerates convergence and reduces the computational burden efficiently.

Key words  Ill-posed problems, iterated regularization, Morozov discrepancy principle

2000 MR Subject Classification  47A52, 65R30, 65J20

1  Introduction

In this article, we consider the linear ill-posed problem

$$Tx = y, \tag{1}$$

where $T : X \to Y$ is a linear compact operator, and $X, Y$ are infinite-dimensional real Hilbert spaces with inner products $(\cdot,\cdot)$ and norms $\|\cdot\|$. We consider the Moore–Penrose generalized solution $x^\dagger = T^\dagger y$ of problem (1), where $T^\dagger$ denotes the Moore–Penrose generalized inverse of $T$ and $D(T^\dagger) = R(T) + R(T)^\perp$. $T^\dagger y$ exists if and only if $y \in D(T^\dagger)$, or equivalently $Py \in R(T)$, where $P : Y \to \overline{R(T)}$ is the orthogonal projector onto $\overline{R(T)}$. In practice, instead of (1) we usually have a perturbed version of the equation

$$Tx = y_\delta. \tag{2}$$

Generally, problem (2) is ill-posed in the sense that noisy data $y_\delta$ with small noise level $\delta$, $\|y - y_\delta\| \le \delta$, can lead to disproportionately large deviations in the solution. For the stable solution of linear ill-posed problems, regularization methods are necessary. At present, there is a variety of regularization methods; see [1–7].

* Received December 1, 2006


For large-scale and nonlinear inverse problems, regularized solutions are typically constructed by iterative regularization methods. A main disadvantage of iterative regularization methods is that, especially for nonsmooth solutions, a large number of iterations has to be performed to guarantee the optimal convergence rates. To overcome this problem, certain acceleration strategies have been proposed for the iterative solution of linear inverse problems, such as the ν-methods or the method of conjugate gradients. Unless stated otherwise, the notation used in this article can be found in any introductory work on inverse problems, e.g., [1]. The main accomplishment of this article is to present a fast convergent method of iterated regularization based on the idea of Landweber iterated regularization. The method reduces the required number of iterations efficiently, and the optimal asymptotic convergence order of the regularized solution is also obtained.
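For orientation, classical Landweber iteration applies $x_{k+1} = x_k + \varepsilon T^*(y_\delta - Tx_k)$ until a stopping rule fires. The following minimal Python sketch (an illustration added here, not code from the article; the matrix T and data y_delta are assumed discretized inputs) shows the baseline whose slow convergence motivates the acceleration studied below.

    import numpy as np

    def landweber(T, y_delta, eps, tau, delta, max_iter=100000):
        """Classical Landweber iteration x_{k+1} = x_k + eps*T^T(y_delta - T x_k),
        stopped by the discrepancy principle ||T x_k - y_delta|| <= tau*delta."""
        x = np.zeros(T.shape[1])
        for k in range(max_iter):
            residual = y_delta - T @ x
            if np.linalg.norm(residual) <= tau * delta:
                return x, k      # discrepancy principle satisfied
            x = x + eps * (T.T @ residual)
        return x, max_iter       # budget exhausted; typical for nonsmooth solutions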

2  Iterated Regularization Methods

Assume $(\mu_j, x_j, y_j)$ is a singular system of $T$, with $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_j \ge \mu_{j+1} \ge \cdots > 0$. If $y \in D(T^\dagger)$, then

$$x^\dagger = \sum_{j=1}^{\infty} \frac{(y, y_j)}{\mu_j}\, x_j.$$

Define the operator $R_n : D(T^\dagger) \to X$ by

$$R_n = \Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon (T^*T)^{\frac{1}{a}}\big)^j\Big)^a\, T^*, \tag{3}$$

where $a \ge 2$ is an integer constant and $0 < \varepsilon < \frac{1}{\|T^*T\|}$ is a relaxation factor.

Theorem 2.1  If $y \in D(T^\dagger)$, then

$$R_n y = \sum_{j=1}^{\infty} \frac{\big[1 - \big(1 - \varepsilon(\mu_j^2)^{\frac{1}{a}}\big)^n\big]^a}{\mu_j}\,(y, y_j)\, x_j.$$

Proof of Theorem 2.1  If $y \in D(T^\dagger)$, then $T^*Tx^\dagger = T^*y$, and

$$R_n y = \Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j\Big)^a T^* y = \Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j\Big)^a T^*T x^\dagger$$
$$= \Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j\Big)^a T^*T \sum_{i=1}^{\infty} \frac{(y, y_i)}{\mu_i}\, x_i = \sum_{i=1}^{\infty} \frac{(y, y_i)}{\mu_i} \Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j\Big)^a T^*T x_i.$$

Note that

$$\Big(\varepsilon \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j\Big)^a T^*T x_i = \Big(\varepsilon \sum_{j=0}^{n-1} \big(1 - \varepsilon(\mu_i^2)^{\frac{1}{a}}\big)^j\Big)^a \mu_i^2\, x_i = \Big(\frac{1 - \big(1 - \varepsilon(\mu_i^2)^{\frac{1}{a}}\big)^n}{(\mu_i^2)^{\frac{1}{a}}}\Big)^a \mu_i^2\, x_i$$
$$= \frac{\big[1 - \big(1 - \varepsilon(\mu_i^2)^{\frac{1}{a}}\big)^n\big]^a}{\mu_i^2}\, \mu_i^2\, x_i = \big[1 - \big(1 - \varepsilon(\mu_i^2)^{\frac{1}{a}}\big)^n\big]^a\, x_i.$$

We obtain

$$R_n y = \sum_{i=1}^{\infty} \frac{\big[1 - \big(1 - \varepsilon(\mu_i^2)^{\frac{1}{a}}\big)^n\big]^a}{\mu_i}\,(y, y_i)\, x_i.$$

Define

$$q(n, \mu) = \big[1 - \big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n\big]^a; \tag{4}$$

then

$$R_n y = \sum_{j=1}^{\infty} \frac{q(n, \mu_j)}{\mu_j}\,(y, y_j)\, x_j.$$
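To make the filter concrete, the following Python sketch (an illustration added here, not from the article; T is an assumed matrix discretization) evaluates $q(n, \mu)$ and applies $R_n$ through the singular value decomposition, in which the columns of V and U play the roles of $x_j$ and $y_j$.

    import numpy as np

    def q_filter(n, mu, eps, a):
        """Regularizing filter q(n,mu) = [1 - (1 - eps*(mu^2)^(1/a))^n]^a from (4)."""
        return (1.0 - (1.0 - eps * (mu**2)**(1.0 / a))**n)**a

    def apply_Rn(T, y, n, eps, a):
        """Apply R_n via the SVD of T: R_n y = sum_j q(n,mu_j)/mu_j (y,y_j) x_j."""
        U, mu, Vt = np.linalg.svd(T, full_matrices=False)  # T = U diag(mu) Vt
        coeffs = q_filter(n, mu, eps, a) / mu * (U.T @ y)
        return Vt.T @ coeffs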

Theorem 2.2  The function $q(n, \mu)$ defined by (4) has the following properties:
(1) $0 < q(n, \mu) < 1$;
(2) $q(n+1, \mu) > q(n, \mu)$ and $\lim\limits_{n \to \infty} q(n, \mu) = 1$;
(3) $q(n, \mu) \le \varepsilon^{\frac{a}{2}} n^{\frac{a}{2}} \mu$;
(4) $1 - q(n, \mu) \le a\big[1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big]^n$.

Proof of Theorem 2.2  (1) Since $0 < \varepsilon < \frac{1}{\|T^*T\|}$ and $0 < \mu^2 < 1$, we have $0 < 1 - \varepsilon(\mu^2)^{\frac{1}{a}} < 1$, and hence

$$0 < \big[1 - \big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n\big]^a < 1.$$

(2) The result is obvious.

(3) By the Bernoulli inequality, $\big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n \ge 1 - n\varepsilon(\mu^2)^{\frac{1}{a}}$, so

$$q(n, \mu) \le \big[n\varepsilon(\mu^2)^{\frac{1}{a}}\big]^a = \varepsilon^a n^a \mu^2.$$

Noting $0 < q(n, \mu) < 1$, we have $q(n, \mu) < \sqrt{q(n, \mu)}$, and we obtain

$$q(n, \mu) < \varepsilon^{\frac{a}{2}} n^{\frac{a}{2}} \mu.$$

(4) By the Bernoulli inequality, $\big[1 - \big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n\big]^a \ge 1 - a\big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n$, so

$$1 - q(n, \mu) \le a\big(1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big)^n.$$

Theorem 2.2 shows that $q(n, \mu)$ is a regularizing filter: as $n \to \infty$, $R_n y \to x^\dagger$, so $R_n$ is an approximation of $T^\dagger$. Here $n$ serves as the regularization parameter. If $y_\delta \in D(T^\dagger)$, then $x_n^\delta = R_n y_\delta$ is an approximation of the solution $x^\dagger = T^\dagger y$. When $a = 1$, $R_n$ in (3) is just the Landweber iterated regularization.

Theorem 2.3  $\|R_n\| \le \varepsilon^{\frac{a}{2}} n^{\frac{a}{2}}$.

Proof of Theorem 2.3  We have

$$\|R_n y\|^2 = \Big\|\sum_{j=1}^{\infty} \frac{q(n, \mu_j)}{\mu_j}(y, y_j)x_j\Big\|^2 = \sum_{j=1}^{\infty} \Big(\frac{q(n, \mu_j)}{\mu_j}\Big)^2 |(y, y_j)|^2,$$

and from Theorem 2.2(3) we obtain

$$\|R_n y\|^2 \le \varepsilon^a n^a \sum_{j=1}^{\infty} |(y, y_j)|^2 \le \varepsilon^a n^a \|y\|^2.$$

Then $\|R_n\| \le \varepsilon^{\frac{a}{2}} n^{\frac{a}{2}}$.

Theorem 2.4  $\|R_n y_\delta - x^\dagger\| \le \varepsilon^{\frac{a}{2}} n^{\frac{a}{2}} \delta + \|R_n y - x^\dagger\|$.

Proof of Theorem 2.4  $\|R_n y_\delta - x^\dagger\| \le \|R_n y_\delta - R_n y\| + \|R_n y - x^\dagger\| \le \|R_n\|\,\delta + \|R_n y - x^\dagger\|$.
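As a quick numerical sanity check (not part of the article; the values of eps, a, and mu below are arbitrary illustrative choices), one can verify the bounds of Theorem 2.2 directly:

    import numpy as np

    eps, a = 0.5, 2            # illustrative values with eps < 1/||T*T|| assumed
    mu = 0.05                  # a small singular value, 0 < mu^2 < 1
    for n in (1, 10, 100):
        q = (1 - (1 - eps * (mu**2)**(1/a))**n)**a
        bound3 = eps**(a/2) * n**(a/2) * mu           # Theorem 2.2(3)
        bound4 = a * (1 - eps * (mu**2)**(1/a))**n    # Theorem 2.2(4)
        assert 0 < q < 1 and q <= bound3 and 1 - q <= bound4
        print(n, q)            # q increases monotonically toward 1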

3  The A-Posteriori Choice of the Regularization Parameter

One important problem in the application of a regularization method is the proper choice of the regularization parameter $n$. We consider the choice of $n$ by Morozov's discrepancy principle. Assume $y_\delta \in D(T^\dagger)$, and define

$$\rho(n) = \|T R_n y_\delta - y_\delta\|.$$
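In computational form, the discrepancy principle increases $n$ until $\rho(n)$ falls below a threshold $\tau\delta$, $\tau > 1$ (made precise in Lemma 3.2 below). A minimal sketch, reusing the hypothetical apply_Rn helper from the earlier illustration:

    import numpy as np

    def choose_n(T, y_delta, delta, eps, a, tau=1.1, n_max=10000):
        """Morozov discrepancy principle: return the first n with
        ||T R_n y_delta - y_delta|| <= tau * delta."""
        for n in range(1, n_max + 1):
            x_n = apply_Rn(T, y_delta, n, eps, a)   # from the earlier sketch
            if np.linalg.norm(T @ x_n - y_delta) <= tau * delta:
                return n, x_n
        raise RuntimeError("discrepancy level not reached within n_max steps")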

Lemma 3.1  If $y \in D(T^\dagger)$, then

$$\|T R_n y - y\|^2 = \sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 |(y, y_j)|^2 + \|Py - y\|^2.$$

Proof of Lemma 3.1  If $y \in D(T^\dagger)$, then $Py = TT^\dagger y$ and

$$\|T R_n y - y\|^2 = \|T R_n y - Py + Py - y\|^2 = \|T R_n y - Py\|^2 + \|Py - y\|^2.$$

On the other hand,

$$\|T R_n y - Py\|^2 = \|T R_n y - TT^\dagger y\|^2 = \Big\|T\sum_{j=1}^{\infty} \frac{q(n, \mu_j)}{\mu_j}(y, y_j)x_j - T\sum_{j=1}^{\infty} \frac{1}{\mu_j}(y, y_j)x_j\Big\|^2$$
$$= \sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 |(y, y_j)|^2.$$

From Lemma 3.1 we obtain that $\rho(n+1) < \rho(n)$ and that $\rho(n) \to \|Py_\delta - y_\delta\|$ as $n \to \infty$. Therefore, if $\|Py_\delta - y_\delta\| \le \delta$, a choice strategy can be given by choosing the regularization parameter $n = n(\delta)$ such that $\|T R_n y_\delta - y_\delta\| \le \tau\delta$ ($\tau > 1$) occurs for the first time.

Lemma 3.2  Assume $\tau > 1$ is given and $y, y_\delta \in D(T^\dagger)$, and choose the regularization parameter $n = n(\delta)$ such that $\|T R_n y_\delta - y_\delta\| \le \tau\delta$ ($\tau > 1$) occurs for the first time. Then

$$\|T R_{n-1} y - y\| \ge (\tau - 1)\delta.$$

Proof of Lemma 3.2  According to the choice strategy of the regularization parameter $n$, we know that $\|T R_{n-1} y_\delta - y_\delta\| > \tau\delta$. From Lemma 3.1, we obtain

$$\|(T R_{n-1} - I)(y - y_\delta)\|^2 = \sum_{j=1}^{\infty} |1 - q(n-1, \mu_j)|^2 |(y - y_\delta, y_j)|^2 + \|P(y - y_\delta) - (y - y_\delta)\|^2$$
$$\le \sum_{j=1}^{\infty} |(y - y_\delta, y_j)|^2 + \|(P - I)(y - y_\delta)\|^2 \le \|P(y - y_\delta)\|^2 + \|(P - I)(y - y_\delta)\|^2 = \|y - y_\delta\|^2 \le \delta^2.$$

Then we have

$$\|T R_{n-1} y - y\| = \|T R_{n-1} y_\delta - y_\delta - (T R_{n-1} - I)(y_\delta - y)\| \ge \|T R_{n-1} y_\delta - y_\delta\| - \|(T R_{n-1} - I)(y - y_\delta)\| \ge (\tau - 1)\delta.$$

Lemma 3.3  $t^x (1 - t)^{n-1} \le x^x n^{-x}$ $(n \ge 2,\ 0 \le t \le 1,\ x \ge 1)$.

Proof of Lemma 3.3  Let $f(t) = t^x (1-t)^{n-1}$; then $f'(t) = t^{x-1}(1-t)^{n-2}\big[x - (x+n-1)t\big]$, so $f$ attains its maximum at $t = \frac{x}{x+n-1}$ and

$$t^x (1-t)^{n-1} \le \Big(\frac{x}{x+n-1}\Big)^x \Big(\frac{n-1}{x+n-1}\Big)^{n-1}.$$

Since $\big(\frac{n-1}{x+n-1}\big)^{n-1} \le 1$ and $\big(\frac{x}{x+n-1}\big)^x \le \big(\frac{x}{n}\big)^x$, we obtain $t^x (1-t)^{n-1} \le x^x n^{-x}$.

Lemma 3.4  $\mu^{2\nu+1} |1 - q(n-1, \mu)| \le a\,\varepsilon^{-\frac{(2\nu+1)a}{2}} \Big(\frac{(2\nu+1)a}{2}\Big)^{\frac{(2\nu+1)a}{2}} n^{-\frac{(2\nu+1)a}{2}}$ $(\nu > 0)$.

Proof of Lemma 3.4  From Theorem 2.2(4), we know that

$$\mu^{2\nu+1} |1 - q(n-1, \mu)| \le a\,\mu^{2\nu+1} \big[1 - \varepsilon(\mu^2)^{\frac{1}{a}}\big]^{n-1}.$$

Let $t = \varepsilon(\mu^2)^{\frac{1}{a}}$, so that $\mu^{2\nu+1} = \varepsilon^{-\frac{(2\nu+1)a}{2}} t^{\frac{(2\nu+1)a}{2}}$. Using Lemma 3.3 with $x = \frac{(2\nu+1)a}{2}$, we obtain

$$\mu^{2\nu+1} |1 - q(n-1, \mu)| \le a\,\varepsilon^{-\frac{(2\nu+1)a}{2}} \Big(\frac{(2\nu+1)a}{2}\Big)^{\frac{(2\nu+1)a}{2}} n^{-\frac{(2\nu+1)a}{2}}.$$

Theorem 3.1  Assume $\tau > 1$ is given, $x^\dagger = (T^*T)^\nu z \in R\big((T^*T)^\nu\big)$, $z \in X$, and $\|z\| \le E$. Choose the regularization parameter $n = n(\delta)$ such that $\|T R_n y_\delta - y_\delta\| \le \tau\delta$ ($\tau > 1$) occurs for the first time. If $\overline{R(T)} = Y$, then
(1) $n = O\big(\delta^{-\frac{2}{(2\nu+1)a}}\big)$;
(2) $\|R_n y_\delta - x^\dagger\| = O\big(\delta^{\frac{2\nu}{2\nu+1}}\big)$.

Proof of Theorem 3.1  First,

$$\|T R_n y - y\|^2 = \sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 |(y, y_j)|^2 = \sum_{j=1}^{\infty} \frac{|1 - q(n, \mu_j)|^2}{\mu_j^2} |(y, Tx_j)|^2$$
$$= \sum_{j=1}^{\infty} \frac{|1 - q(n, \mu_j)|^2}{\mu_j^2} |(T^*y, x_j)|^2 = \sum_{j=1}^{\infty} \frac{|1 - q(n, \mu_j)|^2}{\mu_j^2} \big|(T^*T x^\dagger, x_j)\big|^2 = \sum_{j=1}^{\infty} \frac{|1 - q(n, \mu_j)|^2}{\mu_j^2} \big|(x^\dagger, T^*T x_j)\big|^2$$
$$= \sum_{j=1}^{\infty} \mu_j^2 |1 - q(n, \mu_j)|^2 \big|(x^\dagger, x_j)\big|^2 = \sum_{j=1}^{\infty} \mu_j^2 |1 - q(n, \mu_j)|^2 \big|\big((T^*T)^\nu z, x_j\big)\big|^2 = \sum_{j=1}^{\infty} \mu_j^{4\nu+2} |1 - q(n, \mu_j)|^2 |(z, x_j)|^2.$$

From Lemma 3.2 and Lemma 3.4, we know

$$(\tau - 1)^2\delta^2 \le \|T R_{n-1} y - y\|^2 \le \Big[a\,\varepsilon^{-\frac{(2\nu+1)a}{2}} \Big(\frac{(2\nu+1)a}{2}\Big)^{\frac{(2\nu+1)a}{2}} n^{-\frac{(2\nu+1)a}{2}}\Big]^2 \sum_{j=1}^{\infty} |(z, x_j)|^2,$$

hence

$$(\tau - 1)\delta \le \|T R_{n-1} y - y\| \le E\,a\,\varepsilon^{-\frac{(2\nu+1)a}{2}} \Big(\frac{(2\nu+1)a}{2}\Big)^{\frac{(2\nu+1)a}{2}} n^{-\frac{(2\nu+1)a}{2}}.$$

Then

$$n \le a^{\frac{2}{(2\nu+1)a}}\, \varepsilon^{-1}\, \frac{(2\nu+1)a}{2} \Big(\frac{E}{\tau - 1}\Big)^{\frac{2}{(2\nu+1)a}} \delta^{-\frac{2}{(2\nu+1)a}}. \tag{5}$$

Next,

$$\|R_n y - x^\dagger\|^2 = \sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 \big|(x^\dagger, x_j)\big|^2 = \sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 \big|\big((T^*T)^\nu z, x_j\big)\big|^2 = \sum_{j=1}^{\infty} \mu_j^{4\nu} |1 - q(n, \mu_j)|^2 |(z, x_j)|^2.$$

Let

$$a_j = \mu_j^{4\nu} |1 - q(n, \mu_j)|^{\frac{2}{p}} |(z, x_j)|^{\frac{2}{p}}, \qquad b_j = |1 - q(n, \mu_j)|^{\frac{2}{q}} |(z, x_j)|^{\frac{2}{q}},$$

with $\frac{1}{p} = \frac{2\nu}{2\nu+1}$ and $\frac{1}{q} = \frac{1}{2\nu+1}$. Using the Hölder inequality $\sum\limits_{j=1}^{\infty} a_j b_j \le \Big(\sum\limits_{j=1}^{\infty} a_j^p\Big)^{\frac{1}{p}} \Big(\sum\limits_{j=1}^{\infty} b_j^q\Big)^{\frac{1}{q}}$, we obtain

$$\|R_n y - x^\dagger\|^2 = \sum_{j=1}^{\infty} a_j b_j \le \Big(\sum_{j=1}^{\infty} \mu_j^{4\nu+2} |1 - q(n, \mu_j)|^2 |(z, x_j)|^2\Big)^{\frac{1}{p}} \Big(\sum_{j=1}^{\infty} |1 - q(n, \mu_j)|^2 |(z, x_j)|^2\Big)^{\frac{1}{q}}$$
$$\le E^{\frac{2}{q}}\, \|T R_n y - y\|^{\frac{2}{p}}.$$

Note that $\|T R_n y - y\| \le \|T R_n y_\delta - y_\delta\| + \|(T R_n - I)(y - y_\delta)\| \le (\tau + 1)\delta$; then

$$\|R_n y - x^\dagger\| \le E^{\frac{1}{2\nu+1}} (\tau + 1)^{\frac{2\nu}{2\nu+1}} \delta^{\frac{2\nu}{2\nu+1}}. \tag{6}$$

From (5), (6), and Theorem 2.4, we obtain

$$\|R_n y_\delta - x^\dagger\| \le \Big[a^{\frac{1}{2\nu+1}} \Big(\frac{(2\nu+1)a}{2}\Big)^{\frac{a}{2}} \Big(\frac{E}{\tau-1}\Big)^{\frac{1}{2\nu+1}} + E^{\frac{1}{2\nu+1}} (\tau + 1)^{\frac{2\nu}{2\nu+1}}\Big] \delta^{\frac{2\nu}{2\nu+1}}.$$

Theorem 3.2  Assume $\tau > 1$ is given, $x^\dagger = T^* z \in R(T^*)$, $z \in Y$, and $\|z\| \le E$. Choose the regularization parameter $n = n(\delta)$ such that $\|T R_n y_\delta - y_\delta\| \le \tau\delta$ ($\tau > 1$) occurs for the first time. If $\overline{R(T)} = Y$, then
(1) $n = O\big(\delta^{-\frac{1}{a}}\big)$;
(2) $\|R_n y_\delta - x^\dagger\| = O\big(\delta^{\frac{1}{2}}\big)$.

Proof of Theorem 3.2  The proof of Theorem 3.2 is similar to that of Theorem 3.1.

4  Iterative Algorithm and Numerical Implementation

Consider the iteration scheme

$$A_n = I + \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)A_{n-1}, \qquad A_1 = I.$$

Obviously,

$$A_n = \sum_{j=0}^{n-1} \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)^j.$$

The regularized solution $R_n y_\delta$ can then be written as $\varepsilon^a A_n^a (T^* y_\delta)$. To obtain the regularized solution of equation (2), we use the following iterative algorithm:

1) Let $A_1 = I$;
2) compute $A_n = I + \big(I - \varepsilon(T^*T)^{\frac{1}{a}}\big)A_{n-1}$;
3) compute $R_n y_\delta = \varepsilon^a A_n^a (T^* y_\delta)$; if $\|T R_n y_\delta - y_\delta\| \le \tau\delta$, then stop; otherwise return to step 2).

In fact, the regularization parameter $n$ is just the iteration index. From Theorem 3.1 and Theorem 3.2, when $a \ge 2$ the optimal asymptotic convergence order of the regularized solution is obtained, while the number of iterations is reduced greatly. This is exactly the advantage of our method of iterated regularization, as the sketch below illustrates.
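In a finite-dimensional discretization, the scheme can be implemented directly with matrices. The following Python sketch (an illustration under assumed inputs, not code from the article) realizes steps 1)–3); for integer a the operator root $(T^*T)^{1/a}$ is computed here through the SVD.

    import numpy as np

    def iterated_regularization(T, y_delta, delta, eps, a=2, tau=1.1, n_max=10000):
        """Steps 1)-3): A_n = I + (I - eps*(T^T T)^(1/a)) A_{n-1}, A_1 = I,
        x_n = eps^a * A_n^a * (T^T y_delta), stopped when ||T x_n - y_delta|| <= tau*delta."""
        U, mu, Vt = np.linalg.svd(T, full_matrices=False)
        B = Vt.T @ np.diag(mu ** (2.0 / a)) @ Vt      # (T^T T)^(1/a) via the SVD
        I = np.eye(T.shape[1])
        M = I - eps * B
        A = I.copy()                                  # step 1): A_1 = I
        Ty = T.T @ y_delta
        for n in range(2, n_max + 1):
            A = I + M @ A                             # step 2)
            x = eps**a * np.linalg.matrix_power(A, a) @ Ty   # step 3)
            if np.linalg.norm(T @ x - y_delta) <= tau * delta:
                return x, n                           # discrepancy level reached
        return x, n_max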

To test our method of iterated regularization, we choose the following example of a first-kind integral equation. As in [6] or [7], let us consider the integral equation with logarithmic kernel

$$\int_0^1 \ln(t - s)\,x(s)\,ds = y(t), \qquad 2 \le t \le 3. \tag{7}$$

We choose $y(t) = y_k(t)$, $k = 1, 2$, given by

$$y_1(t) = \frac{t^2}{2}\ln t - \frac{t^2 - 1}{2}\ln(t-1) - \frac{t}{2} - \frac{1}{4},$$

$$y_2(t) = \frac{t^2}{2}\ln t + \frac{(t-1)^2}{2}\ln(t-1) - \Big(t^2 - t + \frac{1}{4}\Big)\ln\Big(t - \frac{1}{2}\Big) - \frac{3}{8}.$$

The functions $x(s) = x_k(s)$, $k = 1, 2$, given by

$$x_1(s) = s, \qquad x_2(s) = \begin{cases} s, & 0 \le s \le 0.5, \\ 1 - s, & 0.5 \le s \le 1, \end{cases}$$

are the "true" solutions of (7) for $y(t) = y_k(t)$, $k = 1, 2$. In practice, we consider the following equation with noisy data $y_\delta(t)$ instead of (7):

$$\int_0^1 \ln(t - s)\,x(s)\,ds = y_\delta(t), \qquad 2 \le t \le 3. \tag{8}$$
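As a consistency check (an illustration added here, not from the article), the closed forms of $y_1$ and $y_2$ can be compared against numerical quadrature of the left-hand side of (7); the kink of $x_2$ at $s = 0.5$ is passed to the integrator explicitly.

    import numpy as np
    from scipy.integrate import quad

    x1 = lambda s: s
    x2 = lambda s: np.where(s <= 0.5, s, 1.0 - s)
    y1 = lambda t: t**2/2*np.log(t) - (t**2-1)/2*np.log(t-1) - t/2 - 0.25
    y2 = lambda t: (t**2/2*np.log(t) + (t-1)**2/2*np.log(t-1)
                    - (t**2 - t + 0.25)*np.log(t - 0.5) - 0.375)

    for t in (2.0, 2.5, 3.0):
        lhs1, _ = quad(lambda s: np.log(t - s)*x1(s), 0.0, 1.0)
        lhs2, _ = quad(lambda s: np.log(t - s)*x2(s), 0.0, 1.0, points=[0.5])
        assert abs(lhs1 - y1(t)) < 1e-6 and abs(lhs2 - y2(t)) < 1e-6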

Note that regularized problems are usually defined in an infinite-dimensional setting and have to be discretized for an implementation. In practice, even noisy observations are only available in discretized form. So we discretize equation (8) to

$$\frac{1}{N}\sum_{j=1}^{N} \ln(t_i - s_j)\,x(s_j) = y_{\delta,i}, \qquad i = 1, 2, \cdots, M, \tag{9}$$

where $t_i = 2 + \frac{i-1}{M}$, $i = 1, 2, \cdots, M$, $s_j = \frac{j-1}{N}$, $y_{\delta,i} = y(t_i) + \delta z_i$, and the $z_i$ are random numbers with $|z_i| \le 1$. Taking $M = 500$, $N = 400$, $\tau = 1.1$, and $\delta = 10^{-6}$ or $10^{-5}$, we obtain


the numerical results using our iterative algorithm. The numerical results are shown in Table 1–Table 4. In the tables, $\mathrm{err} = \|\hat{x}_{n,k}^\delta - \hat{x}_k\|/\|\hat{x}_k\|$, where $\hat{x}_{n,k}^\delta$ denotes the regularized solution of (9) and $\hat{x}_k = (x_k(s_1), x_k(s_2), \cdots, x_k(s_N))^T$.
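A complete end-to-end run matching this setup can be sketched as follows (illustration only; it combines the hypothetical helpers iterated_regularization, x1, and y1 from the earlier sketches, and computes err as in the tables):

    import numpy as np

    M, N, tau, delta = 500, 400, 1.1, 1e-5
    t = 2 + np.arange(M) / M                      # t_i = 2 + (i-1)/M
    s = np.arange(N) / N                          # s_j = (j-1)/N
    T = np.log(t[:, None] - s[None, :]) / N       # kernel matrix of (9)

    z = np.random.uniform(-1, 1, M)               # random noise with |z_i| <= 1
    y_delta = y1(t) + delta * z                   # noisy data for k = 1

    eps = 0.9 / np.linalg.norm(T.T @ T, 2)        # relaxation factor < 1/||T*T||
    x_n, n = iterated_regularization(T, y_delta, delta, eps, a=2, tau=tau)

    x_true = x1(s)
    err = np.linalg.norm(x_n - x_true) / np.linalg.norm(x_true)
    print(n, err)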

The numerical results suggest that the number of iterations and the computing time are reduced greatly. Our method of iterated regularization inherits the advantages of the Landweber method and can reduce the computational burden efficiently.

References

[1] Engl H W, Hanke M, Neubauer A. Regularization of Inverse Problems. Dordrecht: Kluwer, 1996
[2] Hanke M, Neubauer A, Scherzer O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer Math, 1995, 72: 21–37
[3] Ramlau R. Modified Landweber methods for inverse problems. Numer Funct Anal Optim, 1999, 20: 79–98
[4] Hanke M. Accelerated Landweber iterations for the solution of ill-posed equations. Numer Math, 1991, 60: 341–373
[5] Neubauer A. On Landweber iteration for nonlinear ill-posed problems in Hilbert scales. Numer Math, 2000, 85: 309–328
[6] Bruckner G, Cheng J. Tikhonov regularization for an integral equation of the first kind with logarithmic kernel. J Inverse Ill-Posed Probl, 2000, 8: 665–675
[7] Bruckner G, Pereverzev S. Self-regularization of projection methods with a posteriori discretization level choice for severely ill-posed problems. Inverse Problems, 2003, 19: 147–151