A class of parameter choice rules for stationary iterated weighted Tikhonov regularization scheme

Applied Mathematics and Computation 347 (2019) 464–476

G.D. Reddy, Department of Mechanical and Aerospace Engineering, Indian Institute of Technology Hyderabad, Kandi, Sangareddy, Telangana 502285, India

MSC: 65F22, 65R30, 65R32

Keywords: Ill-posed problems; Regularization; Stationary iterated weighted Tikhonov; Parameter choice rules

Abstract. Regularization procedures involve a regularization parameter that plays a crucial role in the convergence analysis of the regularization scheme. Recently, Reddy (2017) proposed two a posteriori parameter choice rules for the regularization parameter in the weighted Tikhonov regularization scheme. The primary purpose of this article is to introduce a class of parameter choice rules for the regularization parameter in the stationary iterated weighted Tikhonov (SIWT) regularization scheme and to derive the optimal rate of convergence $O\big(\delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}}\big)$ for the stationary iterated method based on these proposed rules. Numerical experiments support our theoretical results. © 2018 Elsevier Inc. All rights reserved.

1. Introduction

Many problems in science and engineering lead to inverse problems, and they are ill-posed in nature. These inverse problems can be modeled as an operator equation of the form

$$Kx = y, \tag{1.1}$$

where K is a linear compact operator between two Hilbert spaces X and Y with infinite dimensional range ($\dim(R(K)) = \infty$). The problem (1.1) is ill-posed in the sense that, even if a unique solution exists, the solution does not depend continuously on the data y. The minimal X-norm solution of (1.1) is given by $x^\dagger = K^\dagger y$, where $K^\dagger$ is the Moore–Penrose inverse of K, which is unbounded. In many applications the right-hand side y of (1.1) is contaminated by noise. Thus, instead of y, contaminated data $\tilde{y}$ is available with

$$\|y - \tilde{y}\| \le \delta, \tag{1.2}$$

where δ > 0 is the noise level. Due to the unboundedness of $K^\dagger$, $K^\dagger \tilde{y}$ is not a good approximation to $x^\dagger$. Popular approaches to solve this ill-posed problem are the ordinary and iterated Tikhonov regularization schemes, which can be found in [1–4]. Approximations of $x^\dagger$ provided by these schemes are too smooth, i.e., many details of $x^\dagger$ are not represented by these approximations [5,6]. Recently, variants (weighted and fractional) of Tikhonov schemes [5–10] have been introduced in order to dampen the over-smoothing effect on the approximations of $x^\dagger$ provided by Tikhonov schemes. Convergence analysis and regularization properties of the iterated weighted Tikhonov schemes can be found in [5], and further extensions of these iterated schemes are investigated in [7].


For the sake of notational brevity, we fix the following notation: the semi-norm $\|\cdot\|_W$ is induced by the operator $W = (KK^*)^{\frac{\alpha-1}{4}}$, and $\|\cdot\|$ denotes the $L^2$-norm. For $0 \le \alpha < 1$ (the fractional parameter), we define W with the aid of the Moore–Penrose (pseudo) inverse of the operator $KK^*$. Huang et al. [11] investigated the choice of the solution subspace in iterative methods for nonstationary iterated Tikhonov regularization of large-scale problems to improve the quality of the approximation of $x^\dagger$. Nonstationary iterated Tikhonov of order j in general form [11] replaces the problem (1.1) by the minimization problem

$$\tilde{x}_{\beta_j} = \operatorname*{argmin}_{x \in X} \|Kx - \tilde{y}\|^2 + \beta_j \|L(x - \tilde{x}_{\beta_{j-1}})\|^2 \quad \text{with } \tilde{x}_{\beta_0} = 0 \text{ and } j = 1, 2, \ldots, \tag{1.3}$$

where L is a linear operator acting on suitable Hilbert spaces, $\beta_j$ is the regularization parameter, and $j \in \mathbb{N}$ is the order of iteration. If $\beta_j = \beta$ independently of j, then (1.3) is referred to as the stationary iterated Tikhonov of order j in general form [12]. If $L = I$, then (1.3) is referred to as the nonstationary iterated Tikhonov of order j in standard form [13]. Further details of schemes of this type, such as convergence analysis and regularization properties, are reported in [11–17]. Stationary iterated weighted Tikhonov (SIWT) of order j replaces the problem (1.3) by the weighted minimization problem [5]

$$\tilde{x}^j_{\beta,\alpha} = \operatorname*{argmin}_{x \in X} \|Kx - \tilde{y}\|^2_W + \beta \|x - \tilde{x}^{j-1}_{\beta,\alpha}\|^2 \quad \text{with } \tilde{x}^0_{\beta,\alpha} = 0 \text{ and } j = 1, 2, \ldots, \tag{1.4}$$

where β > 0 is the regularization parameter, 0 ≤ α ≤ 1 is the fractional parameter, and $j \in \mathbb{N}$ is the order of iteration. The value of the regularization parameter β > 0 determines how sensitive $\tilde{x}^j_{\beta,\alpha}$ is to the noise δ in $\tilde{y}$ and how close $\tilde{x}^j_{\beta,\alpha}$ is to $x^\dagger$ [6,11,12]. Hence, the determination of the regularization parameter β > 0 plays an important role in this method. The selection of the parameter β > 0 can be made in two ways: a priori and a posteriori. A parameter choice rule in which a priori knowledge of the unknown solution $x^\dagger$ is required is called an a priori parameter choice rule. Such a priori information on the unknown solution $x^\dagger$ may not always be available; hence, it is preferable to have an a posteriori parameter choice rule, in which the parameter β > 0 is chosen based only on the available data $\tilde{y}$ and the noise level δ. The SIWT scheme with an a priori parameter choice rule was studied by Bianchi et al. [5], and in [18] Reddy proposed a posteriori parameter choice rules for the regularization parameter in the weighted Tikhonov scheme and established the optimal rate of convergence. The main objective of this article is to extend the a posteriori parameter choice rules studied in [18] to the SIWT scheme. The normal equations associated with (1.4), obtained by setting the derivative of the functional in (1.4) to zero and using the identity $(K^*K)^{\frac{\alpha-1}{2}}K^* = K^*(KK^*)^{\frac{\alpha-1}{2}}$, read

$$(K^*K)^{\frac{\alpha+1}{2}} \tilde{x}^j_{\beta,\alpha} + \beta \tilde{x}^j_{\beta,\alpha} = (K^*K)^{\frac{\alpha-1}{2}} K^* \tilde{y} + \beta \tilde{x}^{j-1}_{\beta,\alpha}. \tag{1.5}$$

An iterated solution of order j of (1.5) is given by

$$\tilde{x}^j_{\beta,\alpha} = \sum_{l=0}^{j-1} \beta^l \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-(l+1)} (K^*K)^{\frac{\alpha-1}{2}} K^* \tilde{y}. \tag{1.6}$$

Similarly,

$$x^j_{\beta,\alpha} = \sum_{l=0}^{j-1} \beta^l \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-(l+1)} (K^*K)^{\frac{\alpha-1}{2}} K^* y \tag{1.7}$$

is the iterated solution of order j of (1.5) when $\tilde{y} = y$. It is known that [5]
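For a discretized problem, this iteration is straightforward to carry out in the coordinates of the singular value decomposition. The following minimal Python sketch is ours (not the paper's code; all names are illustrative) and assumes a matrix factorization K = U @ diag(s) @ Vt; it computes the order-j iterate of (1.5), and unrolling the recursion reproduces the sum (1.6) term by term.

```python
# Minimal sketch (ours): the SIWT iteration (1.5) in the SVD coordinates of a
# discretized operator K = U @ np.diag(s) @ Vt; all names are illustrative.
import numpy as np

def siwt_solution(U, s, Vt, y_noisy, beta, alpha, j):
    c = U.T @ y_noisy              # coefficients of y~ in the left singular basis
    d = s**(alpha + 1) + beta      # spectrum of (K*K)^((alpha+1)/2) + beta*I
    z = np.zeros_like(s)           # x~^0_{beta,alpha} = 0
    for _ in range(j):             # each pass solves one step of (1.5)
        z = (s**alpha * c + beta * z) / d
    return Vt.T @ z                # back to the original coordinates
```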

$$\|\tilde{x}^j_{\beta,\alpha} - x^\dagger\| \le \beta^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-j} x^\dagger\big\| + c\,\frac{\delta}{\beta^{\frac{1}{\alpha+1}}}. \tag{1.8}$$

It is easy to see that $\tilde{x}^j_{\beta(\delta),\alpha} \to x^\dagger$ as δ → 0 if we choose β in terms of δ such that

$$\lim_{\delta \to 0} \beta(\delta) = 0 \quad \text{and} \quad \lim_{\delta \to 0} \frac{\delta}{\beta(\delta)^{\frac{1}{\alpha+1}}} = 0. \tag{1.9}$$

Moreover, we can obtain the rate of convergence using the a priori parameter choice rule [5]. If $x^\dagger \in R\big((K^*K)^{\frac{\nu}{2}}\big)$ with $\nu = j(\alpha+1)$ and $\beta(\delta) = c_1 \delta^{\frac{\alpha+1}{1+j(\alpha+1)}}$, then

$$\|\tilde{x}^j_{\beta(\delta),\alpha} - x^\dagger\| = O\big(\delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}}\big). \tag{1.10}$$
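For intuition, the a priori choice of β(δ) above balances the two terms on the right-hand side of (1.8); the following short calculation is our own supplement and is not taken from [5].

```latex
% With \nu = j(\alpha+1) and x^\dagger = (K^*K)^{\nu/2} u, the bound (1.8) becomes
\|\tilde{x}^{j}_{\beta,\alpha} - x^{\dagger}\| \le \|u\|\,\beta^{j} + c\,\delta\,\beta^{-\frac{1}{\alpha+1}} .
% Minimizing the right-hand side over \beta > 0 gives
\beta \sim \delta^{\frac{\alpha+1}{1+j(\alpha+1)}},
% for which both terms are of the same order,
\beta^{j} \sim \delta\,\beta^{-\frac{1}{\alpha+1}} \sim \delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}} .
```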

In this paper, we propose a class of a posteriori parameter choice rules for choosing the regularization parameter for the SIWT scheme defined in (1.5) in such a way that the optimal rate (1.10) is established.


2. Parameter choice rules

In this section, we consider a class of a posteriori parameter choice rules to choose the regularization parameter β > 0 for the SIWT scheme of order j, which includes the a posteriori parameter choice rules introduced by Reddy [18] and Engl [19,20]. For any $\tilde{y} \in Y$ fulfilling (1.2), the class of a posteriori parameter choice rules is defined by

$$G_j(\beta, \tilde{y}) := \big\|\beta^j \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^* \tilde{y}\big\|^2 = \tau \frac{\delta^p}{\beta^q}, \tag{2.11}$$

where τ > 1 is a constant, γ ∈ [0, 1] is independent of α, and p, q > 0. For the rest of this paper, we assume that

$$K^* y \ne 0 \quad \text{and} \quad K^* \tilde{y} \ne 0. \tag{2.12}$$
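In a discrete setting the functional $G_j$ is cheap to evaluate once an SVD is available. Here is a small Python sketch (ours; the names are illustrative, and the SVD form anticipates (3.34) below):

```python
# Sketch (ours): evaluating G_j(beta, y~) of (2.11) for a discretized operator
# with singular values s, where Uty = U.T @ y_noisy.
import numpy as np

def G_j(beta, s, Uty, alpha, gamma, j):
    w = (beta / (s**(alpha + 1) + beta))**j * s**(1.0 - gamma)
    return np.sum((w * Uty)**2)

def rule_residual(beta, s, Uty, alpha, gamma, j, delta, p, q, tau):
    # f(beta) = beta^q G_j(beta, y~) - tau * delta^p; the rule (2.11) selects
    # its unique positive root (Theorem 2.1 below).
    return beta**q * G_j(beta, s, Uty, alpha, gamma, j) - tau * delta**p
```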

We use the following spectral results in this study:

$$\beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j}\big\| \le 1 \ \text{for all } j \in \mathbb{N}, \quad \text{and} \quad \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-1} (K^*K)^{\frac{\alpha+1}{2}}\big\| \le 1.$$
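Both bounds are standard consequences of the spectral calculus for the self-adjoint operator $A := (K^*K)^{\frac{\alpha+1}{2}} \ge 0$; we record a one-line justification (ours) for completeness.

```latex
\beta^{j}\,\bigl\|(A+\beta I)^{-j}\bigr\|
  = \sup_{\lambda \in \sigma(A)} \Bigl(\frac{\beta}{\lambda+\beta}\Bigr)^{j} \le 1,
\qquad
\bigl\|(A+\beta I)^{-1}A\bigr\|
  = \sup_{\lambda \in \sigma(A)} \frac{\lambda}{\lambda+\beta} \le 1 .
```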

In the following theorem, we discuss the properties of the functional $G_j(\beta, \tilde{y})$ and consequently establish the existence and uniqueness of the solution of (2.11) for any δ > 0.

Theorem 2.1. The functional $\beta \mapsto \beta^q G_j(\beta, \tilde{y})$ defined in (2.11) possesses the following properties.
1. $\beta^q G_j(\beta, \tilde{y})$ is continuous and strictly increasing in β.
2. $$\lim_{\beta \to 0} \beta^q G_j(\beta, \tilde{y}) = 0 \quad \text{and} \quad \lim_{\beta \to \infty} \beta^q G_j(\beta, \tilde{y}) = \infty \ \text{for all } j \in \mathbb{N}. \tag{2.13}$$
3. There exists a unique β(δ) satisfying (2.11).
4. For all j ≥ 2 and β > 0, we have $\beta^q G_j(\beta, \tilde{y}) \le \beta^q G_{j-1}(\beta, \tilde{y})$.

Proof. Let $\{E_\lambda\}$ be the spectral family of $K^*K$. From (2.11), together with the spectral representation theorem,

$$G_j(\beta, \tilde{y}) = \int_0^\infty \Big(\frac{\beta}{\lambda^{\alpha+1} + \beta}\Big)^{2j} \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2. \tag{2.14}$$

Since $\beta^{\frac{q}{2j}+1}/(\lambda^{\alpha+1}+\beta)$ (for λ > 0) is continuous and strictly increasing in β, $\beta^q G_j(\beta, \tilde{y})$ is continuous and strictly increasing in β. It is easy to see that

$$\lim_{\beta \to 0} \frac{\beta}{\lambda^{\alpha+1}+\beta} = 0 \quad \text{and} \quad \lim_{\beta \to \infty} \frac{\beta}{\lambda^{\alpha+1}+\beta} = 1. \tag{2.15}$$

By the monotone convergence theorem,

$$\lim_{\beta \to 0} \beta^q G_j(\beta, \tilde{y}) = 0 \quad \text{and} \quad \lim_{\beta \to \infty} G_j(\beta, \tilde{y}) = \|(K^*K)^{-\frac{\gamma}{2}} K^* \tilde{y}\|^2 \ \text{for all } j \in \mathbb{N}, \tag{2.16}$$

which implies

$$\lim_{\beta \to \infty} \beta^q G_j(\beta, \tilde{y}) = \lim_{\beta \to \infty} \beta^q \lim_{\beta \to \infty} G_j(\beta, \tilde{y}) = \infty \ \text{for all } j \in \mathbb{N}. \tag{2.17}$$

Hence, the unique solvability of (2.11) follows from the intermediate value theorem. Observe that

$$\begin{aligned} \beta^q G_j(\beta, \tilde{y}) &= \beta^q \big\|\beta^j \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2 \\ &\le \beta^q \big\|\beta \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-1}\big\|^2\, \big\|\beta^{j-1} \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-(j-1)} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2 \\ &\le \beta^q \big\|\beta^{j-1} \big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-(j-1)} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2. \end{aligned}$$

Therefore, $\beta^q G_j(\beta, \tilde{y}) \le \beta^q G_{j-1}(\beta, \tilde{y})$. □

We now discuss the asymptotic behavior of $\beta_j(\delta)$.

Lemma 2.2. Suppose that $\beta_j := \beta_j(\delta)$ is the unique solution of (2.11). Then $\lim_{\delta \to 0} \beta_j(\delta) = 0$.

Proof. Assume that there is a sequence $(\delta_n) \to 0$ such that $(\beta_n) := (\beta_j(\delta_n)) \to \infty$ as $n \to \infty$. From (2.11), combined with property (2) in Theorem 2.1, we obtain the contradiction

$$0 = \lim_{n \to \infty} \tau \delta_n^p = \lim_{n \to \infty} \beta_n^q G_j(\beta_n, \tilde{y}) = \infty. \tag{2.18}$$


Hence, $\beta_j(\delta)$ is bounded as δ → 0. Now suppose there is a sequence $(\delta_n) \to 0$ such that $(\beta_n) := (\beta_j(\delta_n)) \to c_2 > 0$ as n → ∞. Then

$$\lim_{n \to \infty} \big\|\beta_n^j \big((K^*K)^{\frac{\alpha+1}{2}} + \beta_n I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^* \tilde{y}_n\big\| = \big\|c_2^j \big((K^*K)^{\frac{\alpha+1}{2}} + c_2 I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^* y\big\|. \tag{2.19}$$

Consequently,

$$c_2^q\, \big\|c_2^j \big((K^*K)^{\frac{\alpha+1}{2}} + c_2 I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^* y\big\|^2 = \lim_{n \to \infty} \beta_n^q G_j(\beta_n, \tilde{y}_n) = \lim_{n \to \infty} \tau \delta_n^p = 0. \tag{2.20}$$

Thus $K^* y = 0$, which contradicts (2.12). Therefore, $\lim_{\delta \to 0} \beta_j(\delta) = 0$. □

Theorem 2.3. Assume that $\beta_j(\delta)$ is the unique solution of (2.11). Then $\beta_{j-1}(\delta) \le \beta_j(\delta)$ for all j ≥ 2 and δ > 0.

Proof. Combining property (4) in Theorem 2.1 and (2.11), we obtain

$$\beta_{j-1}(\delta)^q\, G_j(\beta_{j-1}(\delta), \tilde{y}) \le \beta_{j-1}(\delta)^q\, G_{j-1}(\beta_{j-1}(\delta), \tilde{y}) = \tau \delta^p. \tag{2.21}$$

By (2.11) and (2.21), we deduce that

$$\beta_{j-1}(\delta)^q\, G_j(\beta_{j-1}(\delta), \tilde{y}) \le \beta_j(\delta)^q\, G_j(\beta_j(\delta), \tilde{y}). \tag{2.22}$$

Suppose that $\beta_{j-1}(\delta) > \beta_j(\delta)$ for some j ≥ 2 and δ > 0. Then, by the monotonicity of $\beta^q G_j(\beta, \tilde{y})$, we get

$$\beta_{j-1}(\delta)^q\, G_j(\beta_{j-1}(\delta), \tilde{y}) > \beta_j(\delta)^q\, G_j(\beta_j(\delta), \tilde{y}), \tag{2.23}$$

which contradicts (2.22). Therefore, $\beta_{j-1}(\delta) \le \beta_j(\delta)$ for all j ≥ 2 and δ > 0. □

Lemma 2.4. If $0 < p \le (\alpha+1)q$ and $\beta_j(\delta)$ is determined by (2.11), then $\displaystyle\lim_{\delta \to 0} \frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}} = 0$.

Proof. Due to (2.11),

$$\tau \frac{\delta^p}{\beta_j(\delta)^{\frac{p}{\alpha+1}}} = \tau \frac{\delta^p}{\beta_j(\delta)^q}\, \beta_j(\delta)^{q - \frac{p}{\alpha+1}} = G_j(\beta_j(\delta), \tilde{y})\, \beta_j(\delta)^{q - \frac{p}{\alpha+1}}. \tag{2.24}$$

Since $\lim_{\delta \to 0} \beta_j(\delta) = 0$ and $\lim_{\delta \to 0} G_j(\beta_j(\delta), \tilde{y}) = 0$, we have $\lim_{\delta \to 0} \delta^p / \beta_j(\delta)^{\frac{p}{\alpha+1}} = 0$. From this, we can conclude the proof of the lemma. □

Now we prove that, for $0 < p \le (\alpha+1)q$, the parameter $\beta_j(\delta)$ determined by (2.11) always leads to the convergence of $\tilde{x}^j_{\beta_j(\delta),\alpha}$ to $x^\dagger$.

Lemma 2.5. Suppose that $\beta_j(\delta)$ is the unique solution of (2.11) and $0 < p \le (\alpha+1)q$. Then $\lim_{\delta \to 0} \tilde{x}^j_{\beta_j(\delta),\alpha} = x^\dagger$.

Proof. Due to (1.8) we have

$$\|\tilde{x}^j_{\beta_j(\delta),\alpha} - x^\dagger\| \le \beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} x^\dagger\big\| + c\,\frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}}.$$

By Lemma 2.4,

$$\lim_{\delta \to 0} \frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}} = 0. \tag{2.25}$$

In order to complete the proof, we now show that

$$\beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} x^\dagger\big\| \to 0 \ \text{as } \delta \to 0 \ \text{for } x^\dagger \in N(K)^\perp.$$

Let $x^\dagger \in R(K^*K)$; then there exists a $u \in X$ such that $x^\dagger = (K^*K)u$. Then

$$\begin{aligned} \beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} x^\dagger\big\| &= \beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} (K^*K) u\big\| \\ &\le \beta_j(\delta)^{j-1} \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-(j-1)}\big\|\, \big\|\beta_j(\delta) \big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-1} (K^*K) u\big\| \\ &\le \big\|\beta_j(\delta) \big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-1} (K^*K)^{\frac{\alpha+1}{2}} (K^*K)^{1-\frac{\alpha+1}{2}} u\big\| \\ &\le \beta_j(\delta)\, \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-1} (K^*K)^{\frac{\alpha+1}{2}}\big\|\, \big\|(K^*K)^{1-\frac{\alpha+1}{2}} u\big\| \\ &\le \beta_j(\delta)\, \big\|(K^*K)^{1-\frac{\alpha+1}{2}} u\big\|. \end{aligned}$$

Hence, $\beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} x^\dagger\big\| \to 0$ as δ → 0 for $x^\dagger \in R(K^*K)$.


Combining $N(K)^\perp = \overline{R(K^*K)}$ and Theorem 2.8 in [3], we have

$$\lim_{\delta \to 0} \beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} x^\dagger\big\| = 0 \ \text{for } x^\dagger \in N(K)^\perp. \tag{2.26}$$

From (2.25) and (2.26) the lemma follows immediately. □

Lemma 2.6. Assume that $0 < p \le (\alpha+1)q$. Then there exists a constant $c_3 = \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2 > 0$ such that

$$\frac{c_3}{2} \le \tau \frac{\delta^p}{\beta_j(\delta)^{q+2j}} \le \frac{3 c_3}{2} \tag{2.27}$$

holds whenever δ > 0 is sufficiently small.

Proof. In order to establish the relation (2.27), we first prove that

$$\lim_{\delta \to 0} \tau \frac{\delta^p}{\beta_j(\delta)^{q+2j}} = \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2. \tag{2.28}$$

Let $\{E_\lambda\}$ be the spectral family of $K^*K$. From (2.11), we have

$$\begin{aligned} \lim_{\delta \to 0} \tau \frac{\delta^p}{\beta_j(\delta)^{q+2j}} &= \lim_{\delta \to 0} \beta_j(\delta)^{-2j}\, G_j(\beta_j(\delta), \tilde{y}) \\ &= \lim_{\delta \to 0} \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^* \tilde{y}\big\|^2 \\ &= \lim_{\delta \to 0} \int_0^\infty \Big(\frac{\lambda^{-\gamma}}{(\lambda^{\alpha+1} + \beta_j(\delta))^j}\Big)^2\, d\|E_\lambda K^* \tilde{y}\|^2 \\ &= \int_0^\infty \lambda^{-2j(\alpha+1) - 2\gamma}\, d\|E_\lambda K^* y\|^2 \qquad \text{(by the monotone convergence theorem)} \\ &= \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2 > 0. \end{aligned}$$

As a consequence of this convergence, for a given ε > 0 there exists a $\bar{\delta} > 0$ such that, for $0 < \delta < \bar{\delta}$,

$$-\varepsilon + \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2 < \tau \frac{\delta^p}{\beta_j(\delta)^{q+2j}} < \varepsilon + \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2.$$

By choosing $\varepsilon = \big\|(K^*K)^{\frac{-j(\alpha+1)-\gamma}{2}} K^* y\big\|^2 / 2 = c_3/2$, we obtain (2.27). □

Now, we derive the optimal rate of convergence using the estimates in (2.27) and $x^\dagger \in R\big((K^*K)^{\frac{\nu}{2}}\big)$.

Theorem 2.7. Suppose that $\beta_j(\delta)$ is chosen according to (2.11), $x^\dagger \in R\big((K^*K)^{\frac{\nu}{2}}\big)$ with $\nu = j(\alpha+1)$, and

$$\frac{p\,(1 + j(\alpha+1))}{\alpha+1} - 2j = q \ge \frac{2}{\alpha+1}.$$

If $\tilde{x}^j_{\beta_j(\delta),\alpha}$ and $x^j_{\beta_j(\delta),\alpha}$ are defined in (1.6) and (1.7) respectively, then

$$\|\tilde{x}^j_{\beta_j(\delta),\alpha} - x^\dagger\| = O\big(\delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}}\big). \tag{2.29}$$

For instance, j = 1 and α = 1 recover the classical optimal Tikhonov rate $O(\delta^{2/3})$ under the source condition $x^\dagger \in R(K^*K)$.

Proof. From (1.8) we obtain, with $x^\dagger = (K^*K)^{\frac{\nu}{2}} u$ and $c_4 = \|u\|$,

$$\begin{aligned} \|\tilde{x}^j_{\beta_j(\delta),\alpha} - x^\dagger\| &\le \beta_j(\delta)^j \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta_j(\delta) I\big)^{-j} (K^*K)^{\frac{\nu}{2}} u\big\| + c\,\frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}} \\ &\le \sup_{\lambda > 0} \Big(\frac{\beta_j(\delta)}{\lambda^{\alpha+1} + \beta_j(\delta)}\Big)^j \lambda^{\nu}\, \|u\| + c\,\frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}} \\ &\le c_4\, \beta_j(\delta)^{\frac{\nu}{\alpha+1}} + c\,\frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}} = c_4\, \beta_j(\delta)^j + c\,\frac{\delta}{\beta_j(\delta)^{\frac{1}{\alpha+1}}}, \quad \text{as } \nu = j(\alpha+1). \end{aligned}$$

Using (2.27), we have $\beta_j(\delta)^{q+2j} \le \frac{2\tau}{c_3}\,\delta^p$ and $\beta_j(\delta)^{q+2j} \ge \frac{2\tau}{3 c_3}\,\delta^p$, and since $q + 2j = \frac{p\,(1+j(\alpha+1))}{\alpha+1}$,

$$\|\tilde{x}^j_{\beta_j(\delta),\alpha} - x^\dagger\| \le c_4 \Big(\frac{2\tau}{c_3}\Big)^{\frac{j}{q+2j}} \delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}} + c \Big(\frac{3 c_3}{2\tau}\Big)^{\frac{1}{(q+2j)(\alpha+1)}} \delta\, \delta^{-\frac{1}{1+j(\alpha+1)}} = O\big(\delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}}\big).$$

Hence, the theorem is proved. □

To solve (2.11) numerically, we use Newton's method. Let

$$f(\beta) := \beta^q G_j(\beta, \tilde{y}) - \tau \delta^p, \tag{2.30}$$

and Newton's method for $f(\beta) = 0$ is given by

$$\beta_{n+1} = \beta_n - \frac{f(\beta_n)}{f'(\beta_n)} \quad \text{for all } n \in \mathbb{N}, \quad \beta_1 > 0. \tag{2.31}$$
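In the discrete SVD setting, f and f' have closed-form expressions, so the iteration (2.31) is easy to implement. The following Python sketch is ours (illustrative names; it assumes the singular values s and Uty = U.T @ y_noisy of a discretized problem):

```python
# Sketch (ours): Newton's method (2.31) for f(beta) = beta^q G_j(beta) - tau*delta^p,
# written out with the discrete spectral sums; fp below is f'(beta).
import numpy as np

def choose_beta_newton(s, Uty, alpha, gamma, j, delta, p, q, tau,
                       beta0=1.0, tol=1e-12, maxit=100):
    w2 = (s**(1.0 - gamma) * Uty)**2          # weights s^(2(1-gamma)) (u_i^T y~)^2
    beta = beta0
    for _ in range(maxit):
        d = s**(alpha + 1) + beta
        f = beta**(q + 2*j) * np.sum(w2 / d**(2*j)) - tau * delta**p
        fp = ((q + 2*j) * beta**(q + 2*j - 1) * np.sum(w2 / d**(2*j))
              - 2*j * beta**(q + 2*j) * np.sum(w2 / d**(2*j + 1)))
        step = f / fp                          # fp > 0 for beta > 0 (Proposition 2.8)
        beta -= step
        if abs(step) <= tol * abs(beta):
            break
    return beta
```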

Next, we will see that the sequence $(\beta_n)$ generated by (2.31) is positive and converges to $\beta_j(\delta)$.

Proposition 2.8. Let $q \ge \frac{2}{\alpha+1}$. The sequence $(\beta_n)$ defined in (2.31) converges quadratically to $\beta_j(\delta)$, the unique solution of (2.11).

Proof. Due to Theorem 2.1, the function f defined in (2.30) is monotonically increasing, twice differentiable, and $\beta_j(\delta)$ is the unique solution of $f(\beta) = 0$. Let $\{E_\lambda\}$ be the spectral family of $K^*K$. Due to (2.30), we have

$$f(\beta) = \beta^{q+2j} \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2 - \tau\delta^p = \beta^{q+2j} \int_0^\infty \Big(\frac{1}{\lambda^{\alpha+1}+\beta}\Big)^{2j} \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2 - \tau\delta^p.$$

Differentiating f with respect to β, and writing $\beta = (\lambda^{\alpha+1}+\beta) - \lambda^{\alpha+1}$ in the second integral, gives

$$\begin{aligned} f'(\beta) &= (q+2j)\,\beta^{q+2j-1} \int_0^\infty \Big(\frac{1}{\lambda^{\alpha+1}+\beta}\Big)^{2j} \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2 - 2j\,\beta^{q+2j} \int_0^\infty \Big(\frac{1}{\lambda^{\alpha+1}+\beta}\Big)^{2j+1} \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2 \\ &= q\,\beta^{q+2j-1} \int_0^\infty \Big(\frac{1}{\lambda^{\alpha+1}+\beta}\Big)^{2j} \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2 + 2j\,\beta^{q+2j-1} \int_0^\infty \Big(\frac{1}{\lambda^{\alpha+1}+\beta}\Big)^{2j+1} \lambda^{\alpha+1}\, \lambda^{-2\gamma}\, d\|E_\lambda K^*\tilde{y}\|^2 \\ &= q\,\beta^{q+2j-1} \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-j} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2 + 2j\,\beta^{q+2j-1} \big\|\big((K^*K)^{\frac{\alpha+1}{2}} + \beta I\big)^{-\frac{2j+1}{2}} (K^*K)^{\frac{\alpha+1}{4}} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\big\|^2 > 0. \end{aligned}$$

We define $T := \big((K^*K)^{\frac{\alpha+1}{2}} + \beta_n I\big)^{-j}$ and $S := \big((K^*K)^{\frac{\alpha+1}{2}} + \beta_n I\big)^{-\frac{2j+1}{2}}$. Using f and f' in (2.31), we get

$$\begin{aligned} \beta_{n+1} &= \beta_n - \frac{\beta_n^{q+2j}\, \|T (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2 - \tau\delta^p}{\beta_n^{q+2j-1} \big(q\,\|T (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2 + 2j\,\|S (K^*K)^{\frac{\alpha+1}{4}} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2\big)} \\ &= \frac{\beta_n^{q+2j} \big((q-1)\,\|T (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2 + 2j\,\|S (K^*K)^{\frac{\alpha+1}{4}} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2\big) + \tau\delta^p}{\beta_n^{q+2j-1} \big(q\,\|T (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2 + 2j\,\|S (K^*K)^{\frac{\alpha+1}{4}} (K^*K)^{-\frac{\gamma}{2}} K^*\tilde{y}\|^2\big)}, \end{aligned}$$

Table 1. β_j's and E_{β_j,α}'s of the Foxgood example for δ = 0.1 and n = 100.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 3.8571e-02 | 9.8345e-02 | 1.6935e-01 | 2.4315e-01 | 3.0230e-01 |
|     |     | E_{β_j,α} | 5.5997e-02 | 5.1330e-02 | 5.5068e-02 | 5.8970e-02 | 5.8331e-02 |
|     | 0.6 | β_j       | 3.1122e-02 | 8.1731e-02 | 1.1310e-01 | 1.2323e-01 | 1.4006e-01 |
|     |     | E_{β_j,α} | 9.0032e-02 | 8.0229e-02 | 6.1572e-02 | 4.7938e-02 | 5.4619e-02 |
|     | 0.8 | β_j       | 2.6148e-02 | 7.3814e-02 | 8.7433e-02 | 5.7681e-02 | 6.4081e-02 |
|     |     | E_{β_j,α} | 1.3775e-01 | 1.5156e-01 | 1.1686e-01 | 4.5756e-02 | 5.4929e-02 |
|     | 1   | β_j       | 2.2514e-02 | 7.0489e-02 | 9.6226e-02 | 4.2713e-02 | 3.3596e-02 |
|     |     | E_{β_j,α} | 1.9434e-01 | 2.2036e-01 | 2.1350e-01 | 1.1558e-01 | 8.1079e-02 |
| 0.5 | 0.4 | β_j       | 3.8084e-02 | 9.1122e-02 | 1.3046e-01 | 1.5895e-01 | 1.8174e-01 |
|     |     | E_{β_j,α} | 1.1110e-01 | 9.8719e-02 | 8.6463e-02 | 7.4050e-02 | 6.4725e-02 |
|     | 0.6 | β_j       | 3.0712e-02 | 7.4328e-02 | 6.9785e-02 | 7.1167e-02 | 8.2011e-02 |
|     |     | E_{β_j,α} | 1.7543e-01 | 1.7340e-01 | 8.9971e-02 | 6.0133e-02 | 5.2001e-02 |
|     | 0.8 | β_j       | 2.5777e-02 | 6.5249e-02 | 4.3829e-02 | 3.4433e-02 | 3.9171e-02 |
|     |     | E_{β_j,α} | 1.8087e-01 | 1.8637e-01 | 1.0450e-01 | 4.5688e-02 | 3.3742e-02 |
|     | 1   | β_j       | 2.2181e-02 | 6.2640e-02 | 3.6404e-02 | 1.9602e-02 | 1.8465e-02 |
|     |     | E_{β_j,α} | 2.2470e-01 | 2.3501e-01 | 1.6913e-01 | 9.2342e-02 | 7.0747e-02 |
| 1−α | 0.4 | β_j       | 3.7841e-02 | 8.7153e-02 | 1.1392e-01 | 1.3590e-0  | 1.5550e-03 |
|     |     | E_{β_j,α} | 1.1982e-01 | 1.0428e-01 | 8.6728e-02 | 7.7618e-02 | 7.4181e-02 |
|     | 0.6 | β_j       | 3.0766e-02 | 7.4583e-02 | 7.2369e-02 | 7.6336e-02 | 8.8896e-02 |
|     |     | E_{β_j,α} | 1.2370e-01 | 1.1797e-01 | 6.7752e-02 | 5.8995e-02 | 6.2120e-02 |
|     | 0.8 | β_j       | 2.6057e-02 | 7.1890e-02 | 7.1140e-02 | 4.6748e-02 | 5.2381e-02 |
|     |     | E_{β_j,α} | 1.1410e-01 | 1.6282e-01 | 1.1338e-01 | 3.7247e-02 | 3.9596e-02 |
|     | 0.9 | β_j       | 2.4185e-02 | 7.1395e-02 | 8.4639e-02 | 4.2108e-02 | 4.2495e-02 |
|     |     | E_{β_j,α} | 1.7488e-01 | 1.9654e-01 | 1.7373e-01 | 6.6391e-02 | 4.8160e-02 |

Table 2. β_j's and E_{β_j,α}'s of the Foxgood example for δ = 0.1 and n = 2000.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 1.1400e-02 | 5.0275e-02 | 1.0096e-01 | 1.3914e-01 | 1.6371e-01 |
|     |     | E_{β_j,α} | 2.8995e-02 | 5.8825e-02 | 7.9594e-02 | 7.8757e-02 | 6.9027e-02 |
|     | 0.6 | β_j       | 6.0205e-03 | 2.9858e-02 | 4.9712e-02 | 6.1945e-02 | 7.3231e-02 |
|     |     | E_{β_j,α} | 3.3559e-02 | 4.7776e-02 | 4.8115e-02 | 3.5056e-02 | 2.5478e-02 |
|     | 0.8 | β_j       | 4.6847e-03 | 1.4890e-02 | 2.2041e-02 | 2.8303e-02 | 3.4330e-02 |
|     |     | E_{β_j,α} | 3.9181e-02 | 4.4043e-02 | 3.2058e-02 | 2.3187e-02 | 1.7498e-02 |
|     | 1   | β_j       | 4.0483e-03 | 6.5402e-03 | 9.7275e-03 | 1.2977e-02 | 1.6146e-02 |
|     |     | E_{β_j,α} | 7.1557e-02 | 3.1501e-02 | 2.3642e-02 | 2.1314e-02 | 2.0572e-02 |
| 0.5 | 0.4 | β_j       | 9.5090e-03 | 3.9055e-02 | 5.8861e-02 | 7.4064e-02 | 8.8450e-02 |
|     |     | E_{β_j,α} | 2.1961e-02 | 3.5610e-02 | 2.7054e-02 | 2.0134e-02 | 1.8586e-02 |
|     | 0.6 | β_j       | 5.6731e-03 | 1.7540e-02 | 2.5700e-02 | 3.3212e-02 | 4.0332e-02 |
|     |     | E_{β_j,α} | 3.8163e-02 | 1.8559e-02 | 1.6835e-02 | 2.1573e-02 | 2.6241e-02 |
|     | 0.8 | β_j       | 2.1044e-03 | 7.5165e-03 | 1.1186e-02 | 1.4399e-02 | 1.7121e-02 |
|     |     | E_{β_j,α} | 4.9088e-02 | 3.0033e-02 | 2.1250e-02 | 1.7379e-02 | 1.6465e-02 |
|     | 1   | β_j       | 1.0163e-03 | 3.2991e-03 | 5.1825e-03 | 6.8172e-03 | 8.1276e-03 |
|     |     | E_{β_j,α} | 4.5321e-02 | 3.8665e-02 | 3.3001e-02 | 2.9370e-02 | 2.6483e-02 |
| 1−α | 0.4 | β_j       | 9.3719e-03 | 3.5561e-02 | 5.2140e-02 | 6.5954e-02 | 7.9077e-02 |
|     |     | E_{β_j,α} | 4.0674e-02 | 3.1604e-02 | 2.1104e-02 | 1.7548e-02 | 1.9081e-02 |
|     | 0.6 | β_j       | 5.7971e-03 | 2.0829e-02 | 3.0666e-02 | 3.9532e-02 | 4.8084e-02 |
|     |     | E_{β_j,α} | 3.7523e-02 | 4.4191e-02 | 3.4735e-02 | 2.9852e-02 | 2.7709e-02 |
|     | 0.8 | β_j       | 4.6160e-03 | 1.1547e-02 | 1.7210e-02 | 2.2548e-02 | 2.7713e-02 |
|     |     | E_{β_j,α} | 4.3666e-02 | 3.0370e-02 | 1.9602e-02 | 1.3883e-02 | 1.1185e-02 |
|     | 0.9 | β_j       | 4.3036e-03 | 8.7957e-03 | 1.3195e-02 | 1.7431e-02 | 2.1548e-02 |
|     |     | E_{β_j,α} | 6.5937e-02 | 4.1279e-02 | 3.2221e-02 | 2.7582e-02 | 2.4938e-02 |

which implies $\beta_{n+1} > 0$ for all $n \in \mathbb{N}$ (note that $q \ge \frac{2}{\alpha+1} \ge 1$, so $q - 1 \ge 0$). Since $f'(\beta) > 0$ for all β > 0, we have $f'(\beta_j(\delta)) > 0$. Therefore, the sequence $(\beta_n)$ defined by (2.31) converges quadratically to $\beta_j(\delta)$. □

3. Numerical examples

In this section, we consider two examples to illustrate the performance of the SIWT scheme of order j defined in (1.5) when the regularization parameter $\beta_j(\delta)$ is determined by the parameter choice rule (2.11). The discretization of the operator K is taken from the MATLAB package Regularization Tools by Hansen [21]. We introduce the singular value decomposition (SVD)

$$K = U \Sigma V^T, \tag{3.32}$$


Table 3. β_j's and E_{β_j,α}'s of the Foxgood example for δ = 0.01 and n = 100.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 1.1520e-02 | 2.5657e-02 | 2.5526e-02 | 3.2847e-02 | 4.1478e-02 |
|     |     | E_{β_j,α} | 8.1457e-02 | 6.0929e-02 | 3.3786e-02 | 2.8793e-02 | 1.6722e-01 |
|     | 0.6 | β_j       | 9.8951e-03 | 2.2096e-02 | 1.5556e-02 | 1.7543e-02 | 2.1574e-02 |
|     |     | E_{β_j,α} | 8.4982e-02 | 6.6502e-02 | 1.3164e-02 | 4.7524e-02 | 3.9370e-02 |
|     | 0.8 | β_j       | 1.6375e-03 | 1.8048e-02 | 1.0389e-02 | 1.1734e-02 | 1.1468e-02 |
|     |     | E_{β_j,α} | 1.0989e-01 | 8.9890e-02 | 2.0316e-02 | 1.1982e-02 | 1.0631e-02 |
|     | 1   | β_j       | 7.6309e-03 | 1.3459e-02 | 6.7444e-03 | 7.6707e-03 | 9.8052e-03 |
|     |     | E_{β_j,α} | 1.3232e-01 | 1.0225e-01 | 3.2474e-02 | 2.9789e-02 | 2.9714e-02 |
| 0.5 | 0.4 | β_j       | 1.1415e-02 | 1.9786e-02 | 1.9360e-02 | 2.4498e-02 | 2.9586e-02 |
|     |     | E_{β_j,α} | 7.1498e-02 | 3.7086e-02 | 1.5394e-02 | 1.2993e-02 | 1.2425e-02 |
|     | 0.6 | β_j       | 9.7144e-03 | 1.4146e-02 | 1.3001e-02 | 1.6548e-02 | 1.9998e-02 |
|     |     | E_{β_j,α} | 9.6258e-02 | 5.2798e-02 | 2.4423e-02 | 1.9902e-02 | 1.7423e-02 |
|     | 0.8 | β_j       | 8.3335e-03 | 9.3938e-03 | 8.1536e-03 | 1.0205e-02 | 1.2248e-02 |
|     |     | E_{β_j,α} | 1.1644e-01 | 5.6478e-02 | 2.4924e-02 | 2.0090e-02 | 1.7689e-02 |
|     | 1   | β_j       | 7.1570e-03 | 6.1086e-03 | 5.6416e-03 | 7.4985e-03 | 9.7083e-03 |
|     |     | E_{β_j,α} | 1.3822e-01 | 6.3612e-02 | 4.1923e-02 | 3.9681e-02 | 3.8873e-02 |
| 1−α | 0.4 | β_j       | 1.1344e-02 | 1.8083e-02 | 1.8772e-02 | 2.3772e-02 | 2.8557e-02 |
|     |     | E_{β_j,α} | 6.7437e-02 | 2.9315e-02 | 1.2337e-02 | 1.1336e-02 | 1.4999e-03 |
|     | 0.6 | β_j       | 9.7770e-03 | 1.5549e-02 | 1.3319e-02 | 1.6800e-02 | 2.0464e-02 |
|     |     | E_{β_j,α} | 9.0904e-02 | 5.0666e-02 | 1.7013e-02 | 1.2263e-02 | 1.0124e-02 |
|     | 0.8 | β_j       | 8.5675e-03 | 1.3621e-02 | 9.3857e-03 | 1.1539e-02 | 1.4478e-02 |
|     |     | E_{β_j,α} | 1.0228e-02 | 6.0820e-02 | 1.0305e-02 | 8.6803e-03 | 9.7658e-03 |
|     | 0.9 | β_j       | 8.0718e-03 | 1.3308e-02 | 7.9379e-03 | 9.4823e-03 | 1.2058e-02 |
|     |     | E_{β_j,α} | 1.2052e-01 | 7.8714e-02 | 1.9281e-02 | 1.6567e-02 | 1.7055e-02 |

Table 4. β_j's and E_{β_j,α}'s of the Foxgood example for δ = 0.01 and n = 2000.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 6.0191e-03 | 1.2793e-02 | 1.4671e-02 | 2.0473e-02 | 2.6873e-02 |
|     |     | E_{β_j,α} | 4.7920e-02 | 2.7454e-02 | 1.4314e-02 | 1.3010e-02 | 1.2787e-02 |
|     | 0.6 | β_j       | 1.0864e-03 | 2.6755e-03 | 5.2611e-03 | 8.2043e-03 | 1.1234e-02 |
|     |     | E_{β_j,α} | 1.7951e-02 | 1.1028e-02 | 1.3412e-02 | 1.5228e-02 | 1.6279e-02 |
|     | 0.8 | β_j       | 9.5614e-04 | 1.8697e-02 | 2.8508e-03 | 3.4425e-03 | 2.6174e-03 |
|     |     | E_{β_j,α} | 2.6073e-02 | 1.6393e-02 | 1.5658e-02 | 1.3901e-02 | 5.2229e-03 |
|     | 1   | β_j       | 8.4655e-04 | 1.4687e-03 | 2.3729e-03 | 3.0989e-03 | 3.5020e-03 |
|     |     | E_{β_j,α} | 3.8033e-02 | 2.5542e-02 | 2.4479e-02 | 2.4481e-02 | 2.3587e-02 |
| 0.5 | 0.4 | β_j       | 1.2546e-03 | 4.5694e-03 | 8.3522e-03 | 1.1356e-02 | 1.3068e-02 |
|     |     | E_{β_j,α} | 3.6919e-02 | 1.5405e-02 | 9.9114e-03 | 9.6751e-03 | 1.2239e-02 |
|     | 0.6 | β_j       | 1.0650e-03 | 2.1405e-03 | 2.2851e-03 | 2.1857e-03 | 2.4357e-03 |
|     |     | E_{β_j,α} | 1.7166e-02 | 9.1562e-03 | 3.2777e-03 | 9.4636e-03 | 1.3105e-02 |
|     | 0.8 | β_j       | 9.0348e-04 | 1.8234e-03 | 2.7721e-03 | 2.8375e-03 | 8.5172e-04 |
|     |     | E_{β_j,α} | 2.8998e-02 | 2.5980e-02 | 2.5217e-02 | 2.5000e-02 | 1.0072e-02 |
|     | 1   | β_j       | 7.3794e-04 | 1.4321e-03 | 2.2700e-03 | 2.5971e-03 | 1.0024e-03 |
|     |     | E_{β_j,α} | 2.9491e-02 | 2.5663e-02 | 2.6445e-02 | 2.6041e-02 | 1.5304e-02 |
| 1−α | 0.4 | β_j       | 1.2515e-03 | 4.1675e-03 | 7.5436e-03 | 9.9108e-03 | 1.0690e-02 |
|     |     | E_{β_j,α} | 2.2544e-02 | 1.0775e-02 | 9.0413e-03 | 8.5599e-03 | 8.9749e-03 |
|     | 0.6 | β_j       | 1.0731e-03 | 2.2202e-03 | 3.0870e-03 | 3.3790e-03 | 3.6951e-03 |
|     |     | E_{β_j,α} | 1.7088e-02 | 9.1697e-03 | 7.0401e-03 | 4.8184e-03 | 5.9035e-03 |
|     | 0.8 | β_j       | 9.4610e-04 | 1.8420e-03 | 2.8123e-03 | 3.1693e-03 | 1.4527e-03 |
|     |     | E_{β_j,α} | 3.1353e-02 | 2.2638e-02 | 2.1854e-02 | 1.9936e-02 | 8.0366e-03 |
|     | 0.9 | β_j       | 7.1703e-04 | 1.4262e-03 | 2.2156e-03 | 2.5523e-03 | 9.0191e-04 |
|     |     | E_{β_j,α} | 2.8910e-02 | 2.2292e-02 | 2.2086e-02 | 2.0628e-02 | 8.4171e-03 |

where $U = [u_1, u_2, \ldots, u_n] \in \mathbb{R}^{n \times n}$ and $V = [v_1, v_2, \ldots, v_n] \in \mathbb{R}^{n \times n}$ are orthogonal matrices, and $\Sigma = \operatorname{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] \in \mathbb{R}^{n \times n}$, whose singular values are ordered according to

$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > \lambda_{r+1} = \cdots = \lambda_n = 0.$$

Substituting the SVD (3.32) into (1.6) and (2.11) yields

$$\tilde{x}^j_{\beta,\alpha} = \sum_{l=0}^{j-1} \beta^l\, V \big(\Sigma^{\alpha+1} + \beta I\big)^{-(l+1)} \Sigma^{\alpha}\, U^T \tilde{y} \tag{3.33}$$
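Equivalently, (3.33) can be evaluated as a filtered SVD expansion. A small Python sketch (ours, with Σ stored as the vector s of singular values) accumulates the partial geometric sum $\sum_{l=0}^{j-1} \beta^l (\sigma^{\alpha+1}+\beta)^{-(l+1)}$ term by term:

```python
# Sketch (ours): (3.33) as a filtered SVD expansion, with Sigma stored as the
# vector of singular values s.
import numpy as np

def siwt_filter_solution(U, s, Vt, y_noisy, beta, alpha, j):
    c = U.T @ y_noisy
    d = s**(alpha + 1) + beta
    filt = np.zeros_like(s)
    term = 1.0 / d
    for _ in range(j):             # sum_{l=0}^{j-1} beta^l / d^(l+1)
        filt += term
        term *= beta / d
    return Vt.T @ (filt * s**alpha * c)
```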


Fig. 1. (a) Computed and exact solutions of the Foxgood example with α = 0.8, δ = 0.01, γ = 0, j = 5, and n = 100. (b) Exact and noisy data of the Foxgood example with noise level δ = 0.01.

Fig. 2. (a) Computed and exact solutions of the Foxgood example with α = 0.4, δ = 0.01, γ = 0.5, j = 5, and n = 100. (b) Computed and exact solutions of the Foxgood example with α = 0.8, δ = 0.01, γ = 1 − α, j = 4, and n = 100.

and

$$G_j(\beta, \tilde{y}) := \big\|\beta^j \big(\Sigma^{\alpha+1} + \beta I\big)^{-j} \Sigma^{1-\gamma}\, U^T \tilde{y}\big\|^2 = \tau \frac{\delta^p}{\beta^q}. \tag{3.34}$$

The regularization parameter β is determined by (3.34) with $q = \frac{2}{\alpha+1}$, $p = 3(\alpha+1)$, and τ = 1.1. Setting α = 1 and γ = 0 in (3.34) leads to the parameter choice rule studied by Engl [20] for the SIWT scheme; the parameter choice rules studied by Reddy [18] are recovered from (3.34) with j = 1 and either γ = 0 or γ = 1 − α.
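A hypothetical end-to-end run (ours), combining the sketches above — siwt_solution from Section 1 and choose_beta_newton from Section 2, both assumed to be in scope — with the same settings q = 2/(α+1), p = 3(α+1), and τ = 1.1, might look as follows.

```python
# Hypothetical driver (ours): build noisy data with ||y - y~|| = delta, pick
# beta by the rule (2.11)/(3.34), and report beta and the relative error.
import numpy as np

def run_siwt(K, x_true, delta, alpha, gamma, j, tau=1.1, seed=0):
    rng = np.random.default_rng(seed)
    y = K @ x_true
    e = rng.standard_normal(y.size)
    y_noisy = y + delta * e / np.linalg.norm(e)     # one way to realize (1.2)
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    p, q = 3 * (alpha + 1), 2 / (alpha + 1)
    beta = choose_beta_newton(s, U.T @ y_noisy, alpha, gamma, j, delta, p, q, tau)
    x = siwt_solution(U, s, Vt, y_noisy, beta, alpha, j)
    return beta, np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```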


Table 5. β_j's and E_{β_j,α}'s of the Shaw example for δ = 0.1 and n = 100.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 5.7486e-02 | 1.2434e-01 | 1.8325e-01 | 2.6075e-01 | 3.3288e-01 |
|     |     | E_{β_j,α} | 1.6275e-01 | 1.5133e-01 | 1.4921e-01 | 1.4826e-01 | 1.4785e-01 |
|     | 0.6 | β_j       | 4.9634e-02 | 1.0597e-01 | 1.3251e-01 | 1.6450e-01 | 2.0796e-01 |
|     |     | E_{β_j,α} | 1.4037e-01 | 1.3747e-01 | 1.3227e-01 | 1.3027e-01 | 1.3087e-01 |
|     | 0.8 | β_j       | 4.3289e-02 | 9.2834e-02 | 1.0989e-01 | 1.2569e-01 | 1.5199e-01 |
|     |     | E_{β_j,α} | 1.7280e-01 | 1.7034e-01 | 1.7026e-01 | 1.7054e-01 | 1.7068e-01 |
|     | 1   | β_j       | 3.7989e-02 | 8.0567e-02 | 9.0879e-02 | 1.0124e-01 | 1.1802e-01 |
|     |     | E_{β_j,α} | 1.7233e-01 | 1.6817e-01 | 1.6602e-01 | 1.6522e-01 | 1.6494e-01 |
| 0.5 | 0.4 | β_j       | 5.5414e-02 | 1.1315e-01 | 1.5427e-01 | 1.9878e-01 | 2.3857e-01 |
|     |     | E_{β_j,α} | 1.6295e-01 | 1.5637e-01 | 1.5416e-01 | 1.5374e-01 | 1.5342e-01 |
|     | 0.6 | β_j       | 4.6782e-02 | 9.0543e-02 | 1.1043e-01 | 1.2889e-01 | 1.4403e-01 |
|     |     | E_{β_j,α} | 1.7058e-01 | 1.6844e-01 | 1.6965e-01 | 1.7036e-01 | 1.7071e-01 |
|     | 0.8 | β_j       | 9.2829e-03 | 2.3023e-02 | 3.8938e-02 | 3.2021e-02 | 3.4763e-02 |
|     |     | E_{β_j,α} | 1.2234e-01 | 1.2631e-01 | 1.1910e-01 | 1.1129e-01 | 1.0517e-01 |
|     | 1   | β_j       | 7.5883e-03 | 7.6727e-03 | 7.7746e-03 | 9.3431e-03 | 1.0780e-02 |
|     |     | E_{β_j,α} | 1.4926e-01 | 1.4030e-01 | 1.5174e-01 | 1.5409e-01 | 1.5618e-01 |
| 1−α | 0.4 | β_j       | 1.6762e-02 | 7.8331e-02 | 1.3228e-01 | 1.7117e-01 | 1.9811e-01 |
|     |     | E_{β_j,α} | 1.9595e-01 | 1.7354e-01 | 1.7214e-01 | 1.7274e-01 | 1.7409e-01 |
|     | 0.6 | β_j       | 1.1380e-02 | 4.2442e-02 | 5.5183e-02 | 6.0801e-02 | 6.5598e-02 |
|     |     | E_{β_j,α} | 8.2374e-02 | 7.2327e-02 | 7.4618e-02 | 8.2786e-02 | 9.3901e-02 |
|     | 0.8 | β_j       | 9.4734e-03 | 2.8831e-02 | 4.1080e-02 | 4.5392e-02 | 4.8441e-02 |
|     |     | E_{β_j,α} | 1.1522e-01 | 1.2012e-01 | 1.1859e-01 | 1.1508e-01 | 1.1424e-01 |
|     | 0.9 | β_j       | 8.9041e-03 | 2.0980e-02 | 2.6060e-02 | 2.9403e-02 | 3.2419e-02 |
|     |     | E_{β_j,α} | 1.3774e-01 | 1.2578e-01 | 1.2813e-01 | 1.3164e-01 | 1.3531e-01 |

Table 6. β_j's and E_{β_j,α}'s of the Shaw example for δ = 0.1 and n = 2000.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 5.7322e-02 | 1.2094e-01 | 1.5224e-01 | 1.7321e-01 | 2.0514e-01 |
|     |     | E_{β_j,α} | 1.4884e-01 | 1.4189e-01 | 1.3529e-01 | 1.2991e-01 | 1.2790e-01 |
|     | 0.6 | β_j       | 3.6297e-03 | 3.2040e-02 | 1.0600e-01 | 1.8141e-01 | 2.2929e-01 |
|     |     | E_{β_j,α} | 7.6869e-02 | 1.2600e-01 | 1.4721e-01 | 1.5187e-01 | 1.5189e-01 |
|     | 0.8 | β_j       | 1.5511e-03 | 1.8750e-02 | 3.5929e-02 | 4.6040e-02 | 5.1031e-02 |
|     |     | E_{β_j,α} | 5.7058e-02 | 1.1978e-01 | 1.2804e-01 | 1.2630e-01 | 1.2153e-01 |
|     | 1   | β_j       | 1.0116e-03 | 7.6203e-03 | 1.0090e-02 | 1.1996e-02 | 1.3757e-02 |
|     |     | E_{β_j,α} | 6.5127e-02 | 1.0920e-01 | 1.0320e-01 | 9.8007e-02 | 9.4158e-02 |
| 0.5 | 0.4 | β_j       | 5.5338e-02 | 1.0908e-01 | 1.3045e-01 | 1.3309e-01 | 1.2234e-01 |
|     |     | E_{β_j,α} | 1.4954e-01 | 1.4168e-01 | 1.3471e-01 | 1.2663e-01 | 1.1664e-01 |
|     | 0.6 | β_j       | 3.4223e-03 | 1.7834e-02 | 2.2923e-02 | 2.6859e-02 | 3.0431e-02 |
|     |     | E_{β_j,α} | 5.5265e-02 | 8.2174e-02 | 7.2929e-02 | 6.6434e-02 | 6.2364e-02 |
|     | 0.8 | β_j       | 1.7161e-03 | 5.7360e-03 | 7.2187e-03 | 8.5135e-03 | 9.6966e-03 |
|     |     | E_{β_j,α} | 7.0830e-02 | 7.4790e-02 | 6.8777e-02 | 6.7323e-02 | 6.8023e-02 |
|     | 1   | β_j       | 9.3375e-04 | 2.2119e-03 | 3.0162e-03 | 3.7423e-03 | 4.4158e-03 |
|     |     | E_{β_j,α} | 8.3909e-02 | 8.3567e-02 | 8.0049e-02 | 7.8104e-02 | 7.6881e-02 |
| 1−α | 0.4 | β_j       | 6.6513e-03 | 2.9973e-02 | 3.7957e-02 | 4.4962e-02 | 6.1658e-02 |
|     |     | E_{β_j,α} | 1.2542e-01 | 6.9891e-02 | 7.8495e-02 | 8.9296e-02 | 8.2298e-02 |
|     | 0.6 | β_j       | 3.3252e-03 | 2.2300e-02 | 3.0753e-02 | 3.6192e-02 | 4.3501e-02 |
|     |     | E_{β_j,α} | 6.5145e-02 | 9.8852e-02 | 9.4172e-02 | 8.8697e-02 | 8.6890e-02 |
|     | 0.8 | β_j       | 1.6704e-03 | 1.4675e-02 | 2.0642e-02 | 2.3960e-02 | 2.6928e-02 |
|     |     | E_{β_j,α} | 6.4227e-02 | 1.1408e-01 | 1.1026e-01 | 1.0371e-01 | 9.8541e-02 |
|     | 0.9 | β_j       | 1.0117e-03 | 7.5912e-03 | 1.0130e-02 | 1.2144e-02 | 1.4013e-02 |
|     |     | E_{β_j,α} | 7.0318e-02 | 1.1272e-01 | 1.0772e-01 | 1.0351e-01 | 1.0042e-01 |

Relative errors, $E_{\beta_j,\alpha} = \|\tilde{x}^j_{\beta_j,\alpha} - x^\dagger\| / \|x^\dagger\|$, and the parameters $\beta_j$ are documented in the tables for a few selected values of α, γ, and the noise level δ. The smallest restoration error in each table is highlighted in bold for each α and γ. In the plots, "C. Sol." and "Exact Sol." denote the computed solution and the exact solution, respectively.

3.1. Example 1

We consider the Foxgood example from the Regularization Tools package by Hansen [21] with different grid sizes (n = 100 and 2000), defined as follows:

$$[K_1 x_1](s) := \int_0^1 \sqrt{s^2 + t^2}\; x_1(t)\, dt = y_1(s), \quad 0 \le s \le 1, \tag{3.35}$$
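As a stand-in for Hansen's foxgood routine, a simple midpoint-rule discretization of (3.35) can be sketched as follows (ours; the experiments use the analytic exact data given after Tables 7 and 8, whereas this sketch generates data as K @ x_true):

```python
# Sketch (ours): midpoint-rule discretization of the Foxgood operator (3.35).
import numpy as np

def foxgood(n):
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h                      # midpoints of [0, 1]
    K = h * np.sqrt(t[:, None]**2 + t[None, :]**2)    # kernel sqrt(s^2 + t^2)
    x_true = t                                        # exact solution x1(t) = t
    return K, x_true, K @ x_true
```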

Table 7. β_j's and E_{β_j,α}'s of the Shaw example for δ = 0.01 and n = 100.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 6.2410e-03 | 1.4630e-02 | 2.4709e-02 | 3.0662e-02 | 3.5855e-02 |
|     |     | E_{β_j,α} | 6.1238e-02 | 5.8444e-02 | 6.0831e-02 | 5.5765e-02 | 5.1726e-02 |
|     | 0.6 | β_j       | 5.4268e-03 | 1.1498e-02 | 6.5940e-03 | 7.6350e-03 | 9.0961e-03 |
|     |     | E_{β_j,α} | 8.6779e-02 | 8.2560e-02 | 4.2661e-02 | 4.1694e-02 | 4.2281e-02 |
|     | 0.8 | β_j       | 4.7894e-03 | 9.4501e-03 | 3.3913e-03 | 2.8123e-03 | 3.1699e-03 |
|     |     | E_{β_j,α} | 1.0684e-01 | 1.0157e-01 | 1.8911e-02 | 5.4056e-02 | 5.7396e-02 |
|     | 1   | β_j       | 4.2691e-03 | 7.6716e-03 | 2.9851e-03 | 2.2277e-03 | 2.3091e-03 |
|     |     | E_{β_j,α} | 1.3145e-01 | 1.2651e-01 | 3.7375e-02 | 7.1320e-02 | 6.6714e-02 |
| 0.5 | 0.4 | β_j       | 6.1899e-03 | 9.5881e-03 | 1.1309e-02 | 1.4458e-02 | 1.7422e-02 |
|     |     | E_{β_j,α} | 8.4808e-02 | 6.8943e-02 | 5.9199e-02 | 5.6736e-02 | 5.5009e-02 |
|     | 0.6 | β_j       | 5.3312e-03 | 4.7979e-03 | 3.6635e-03 | 4.3786e-03 | 5.1996e-03 |
|     |     | E_{β_j,α} | 8.8482e-02 | 5.4110e-02 | 5.1091e-02 | 5.3515e-02 | 5.5027e-02 |
|     | 0.8 | β_j       | 4.6404e-03 | 3.3100e-03 | 1.9885e-03 | 2.1382e-03 | 2.5108e-03 |
|     |     | E_{β_j,α} | 1.1461e-01 | 7.0906e-02 | 4.6703e-02 | 4.6402e-02 | 4.6882e-02 |
|     | 1   | β_j       | 3.9905e-03 | 2.0127e-03 | 1.2615e-03 | 1.3017e-03 | 1.5193e-03 |
|     |     | E_{β_j,α} | 1.2615e-01 | 8.2882e-02 | 6.5168e-02 | 6.4246e-02 | 6.4392e-02 |
| 1−α | 0.4 | β_j       | 6.1622e-03 | 7.2220e-03 | 8.1370e-03 | 1.0184e-02 | 1.2040e-02 |
|     |     | E_{β_j,α} | 6.8231e-02 | 4.1121e-02 | 3.7370e-02 | 3.8181e-02 | 3.9361e-02 |
|     | 0.6 | β_j       | 5.3623e-03 | 5.8447e-03 | 4.0295e-03 | 4.7698e-03 | 5.6879e-03 |
|     |     | E_{β_j,α} | 8.5524e-02 | 5.4589e-02 | 4.2641e-02 | 4.4622e-02 | 4.5975e-02 |
|     | 0.8 | β_j       | 4.7551e-03 | 6.5728e-03 | 2.6786e-03 | 2.5837e-03 | 3.0208e-03 |
|     |     | E_{β_j,α} | 1.1065e-01 | 9.0992e-02 | 5.0972e-02 | 5.2904e-02 | 5.4289e-02 |
|     | 0.9 | β_j       | 4.9989e-03 | 6.9693e-03 | 2.5976e-03 | 2.1568e-03 | 2.3797e-03 |
|     |     | E_{β_j,α} | 1.2004e-01 | 1.0758e-01 | 5.7470e-02 | 5.3824e-02 | 5.5160e-02 |

Table 8. β_j's and E_{β_j,α}'s of the Shaw example for δ = 0.01 and n = 2000.

| γ   | α   |           | j = 1      | j = 2      | j = 3      | j = 4      | j = 5      |
|-----|-----|-----------|------------|------------|------------|------------|------------|
| 0   | 0.4 | β_j       | 1.1364e-03 | 6.9198e-03 | 1.2193e-02 | 1.5683e-02 | 1.9542e-02 |
|     |     | E_{β_j,α} | 3.8946e-02 | 5.4041e-02 | 5.5626e-02 | 5.3004e-02 | 5.1993e-02 |
|     | 0.6 | β_j       | 9.8811e-04 | 2.0328e-03 | 3.1762e-03 | 4.2500e-03 | 5.2740e-03 |
|     |     | E_{β_j,α} | 5.2347e-02 | 4.6663e-02 | 4.5634e-02 | 4.5071e-02 | 4.4721e-02 |
|     | 0.8 | β_j       | 9.7128e-04 | 9.8243e-04 | 1.1950e-03 | 1.5072e-03 | 1.7883e-03 |
|     |     | E_{β_j,α} | 6.4891e-02 | 4.5424e-02 | 4.3243e-02 | 4.2996e-02 | 4.2894e-02 |
|     | 1   | β_j       | 8.6956e-04 | 5.9121e-04 | 6.8749e-04 | 8.8458e-04 | 1.0562e-03 |
|     |     | E_{β_j,α} | 8.4881e-02 | 5.1509e-02 | 4.8097e-02 | 4.7661e-02 | 4.7441e-02 |
| 0.5 | 0.4 | β_j       | 1.2590e-03 | 3.4882e-03 | 5.3786e-03 | 7.1091e-03 | 8.7194e-03 |
|     |     | E_{β_j,α} | 4.4118e-02 | 4.1435e-02 | 4.0130e-02 | 3.9425e-02 | 3.8989e-02 |
|     | 0.6 | β_j       | 1.0822e-03 | 1.1931e-03 | 1.6637e-03 | 2.0146e-03 | 2.1977e-03 |
|     |     | E_{β_j,α} | 5.2873e-02 | 3.8214e-02 | 3.6949e-02 | 3.6024e-02 | 3.4978e-02 |
|     | 0.8 | β_j       | 9.2811e-04 | 7.3832e-04 | 1.0526e-03 | 1.3136e-03 | 1.5145e-03 |
|     |     | E_{β_j,α} | 6.9714e-02 | 4.7544e-02 | 4.5981e-02 | 4.5069e-02 | 4.4347e-02 |
|     | 1   | β_j       | 7.7098e-04 | 3.9646e-04 | 5.7953e-04 | 7.5519e-04 | 9.1162e-04 |
|     |     | E_{β_j,α} | 8.3361e-02 | 4.9082e-02 | 4.8078e-02 | 4.7792e-02 | 4.7694e-02 |
| 1−α | 0.4 | β_j       | 1.2554e-03 | 2.9290e-03 | 4.5431e-03 | 6.0128e-03 | 7.3513e-03 |
|     |     | E_{β_j,α} | 3.9985e-02 | 3.6823e-02 | 3.6002e-02 | 3.5470e-02 | 3.5051e-02 |
|     | 0.6 | β_j       | 1.0906e-03 | 1.3140e-03 | 1.8862e-03 | 2.4445e-03 | 2.9711e-03 |
|     |     | E_{β_j,α} | 6.0664e-02 | 4.5406e-02 | 4.3395e-02 | 4.2527e-02 | 4.1990e-02 |
|     | 0.8 | β_j       | 9.6638e-04 | 8.4470e-04 | 1.1165e-03 | 1.4012e-03 | 1.6493e-03 |
|     |     | E_{β_j,α} | 6.7878e-02 | 4.5588e-02 | 4.3620e-02 | 4.3060e-02 | 4.2737e-02 |
|     | 0.9 | β_j       | 8.6978e-04 | 5.9062e-04 | 6.7348e-04 | 8.6504e-04 | 1.0319e-03 |
|     |     | E_{β_j,α} | 8.6657e-02 | 5.1823e-02 | 4.7625e-02 | 4.7050e-02 | 4.6749e-02 |

with noise-free data $y_1(s) = \frac{1}{3}\big((1+s^2)^{3/2} - s^3\big)$ and solution $x_1^\dagger(t) = t$. We have perturbed the exact data by introducing random noise with levels δ = 0.1 and 0.01. Numerical results for the Foxgood example are presented in Tables 1–4; from the reported errors it is evident that the error decreases with the noise level δ > 0 and the grid size n, although the rate of decrease need not match the one predicted by the theory. Plots for the Foxgood example are shown in Figs. 1 and 2.

For the noise level δ = 0.1, we report $E_{\beta_j,\alpha}$ and $\beta_j$ for several values of α and γ in Tables 1 and 2 on different grid sizes (n = 100 and 2000). The smallest restoration error is attained at α = 0.8, γ = 0.5, and j = 5 in Table 1, and at α = 0.8, γ = 1 − α, and j = 5 in Table 2. Tables 3 and 4 report $E_{\beta_j,\alpha}$ and $\beta_j$ for various values of α and γ with δ = 0.01 on different grid sizes (n = 100 and 2000). The results of Table 3 and plot (b) in Fig. 2 show that the best reconstruction of the solution is achieved at α = 0.8, γ = 1 − α, and j = 4. The smallest reconstruction error is obtained at α = 0.6, γ = 0.5, and j = 3 in Table 4.

Fig. 3. (a) Computed and exact solutions of the Shaw example with α = 0.8, δ = 0.01, γ = 0, j = 3, and n = 100. (b) Exact and noisy data of the Shaw example with noise level δ = 0.01.

Fig. 4. (a) Computed and exact solutions of the Shaw example with α = 0.8, δ = 0.01, γ = 0.5, j = 4, and n = 100. (b) Computed and exact solutions of the Shaw example with α = 0.4, δ = 0.01, γ = 1 − α, j = 3, and n = 100.

Computational results in Tables 1–4 indicate that the iterative process improves the quality of the solution.

3.2. Example 2

We take the Shaw example from the Regularization Tools package by Hansen [21] with different grid sizes (n = 100 and 2000), defined as follows:

$$[K_2 x_2](s) := \int_{-\pi}^{\pi} k(s, t)\, x_2(t)\, dt = y_2(s), \quad -\pi \le s \le \pi, \tag{3.36}$$


where $k(s,t) = (\cos(s) + \cos(t))^2 \big(\frac{\sin(u)}{u}\big)^2$ with $u = \pi(\sin(s) + \sin(t))$. The solution $x_2$ is given by $x_2^\dagger(t) = a_1 \exp(-c_1 (t - t_1)^2) + a_2 \exp(-c_2 (t - t_2)^2)$. As in the previous example, we have contaminated the exact data by incorporating random noise with levels δ = 0.1 and 0.01.
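A corresponding midpoint-rule sketch for (3.36) follows (ours; np.sinc evaluates sin(u)/u with u = π(sin s + sin t) safely at u = 0, and the Gaussian parameters below are hypothetical placeholders, since the text does not specify a_i, c_i, t_i):

```python
# Sketch (ours): midpoint-rule discretization of the Shaw operator (3.36).
import numpy as np

def shaw(n, a1=2.0, a2=1.0, c1=6.0, c2=2.0, t1=0.8, t2=-0.5):
    # a_i, c_i, t_i are illustrative defaults, not values from the paper
    h = 2 * np.pi / n
    t = -np.pi + (np.arange(n) + 0.5) * h
    S, T = np.meshgrid(t, t, indexing="ij")
    K = h * (np.cos(S) + np.cos(T))**2 * np.sinc(np.sin(S) + np.sin(T))**2
    x_true = a1 * np.exp(-c1 * (t - t1)**2) + a2 * np.exp(-c2 * (t - t2)**2)
    return K, x_true, K @ x_true
```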

Numerical results for the Shaw example are given in Tables 5–8. The errors in Tables 5–8 decrease with the noise level δ > 0, and this rate of decrease is less than the one obtained in Theorem 2.7. It is observed from Tables 5 and 6 that the errors decrease with the grid size n. The errors in Tables 7 and 8 decrease with the grid size n except for γ = 0 and α = 0.6, 0.8, and 1. Plots for the Shaw example are shown in Figs. 3 and 4.

We report $E_{\beta_j,\alpha}$ and $\beta_j$ in Tables 5 and 6 on different grid sizes (n = 100 and 2000) for several values of α and γ with δ = 0.1. The smallest restoration error is achieved at α = 0.6, γ = 1 − α, and j = 2 in Table 5; in Table 6 it occurs at α = 0.6, γ = 0.5, and j = 1. For δ = 0.01, we display $E_{\beta_j,\alpha}$ and $\beta_j$ in Tables 7 and 8 on different grid sizes (n = 100 and 2000) for selected values of α and γ. The results of Table 7 and plot (a) in Fig. 3 indicate that the best reconstruction of the solution is accomplished at α = 0.8, γ = 0, and j = 3. The smallest reconstruction error in Table 8 is attained at α = 0.6, γ = 0.5, and j = 5. The computational results presented in Tables 1–8 suggest that γ > 0 may be a better choice in the parameter choice rules (2.11) for computing the parameter $\beta_j(\delta)$.

4. Conclusion

We proposed a class of a posteriori parameter choice rules for choosing the regularization parameter in the SIWT scheme and achieved the optimal rate of convergence $O\big(\delta^{\frac{j(\alpha+1)}{1+j(\alpha+1)}}\big)$ for this scheme. The numerical implementation of the SIWT scheme is discussed for two examples along with their relative errors and the corresponding plots.

Acknowledgments

Dr. G. Damodar Reddy worked on this article during his NBHM post-doctoral program at IIT Hyderabad (Grant no. 2/40(56)/2015/R & D-II/12987); the grant was supported by the National Board for Higher Mathematics, Department of Atomic Energy, 1st floor, O.Y.C. Building, Anushakti Bhavan, C.S.M. Marg, Mumbai 400 001, Maharashtra, India. The support of the NBHM is gratefully acknowledged. I am thankful to the anonymous reviewers for their constructive comments on this manuscript. I am also thankful to my post-doctoral advisor Dr. C. P. Vyasarayani for his valuable suggestions on this work.

References

[1] H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, vol. 375, Springer Science & Business Media, 1996.
[2] V.A. Morozov, Methods for Solving Incorrectly Posed Problems, Springer Science & Business Media, 2012.
[3] N.M. Thamban, Linear Operator Equations: Approximation and Regularization, World Scientific, 2009.
[4] A.N. Tikhonov, V.Y. Arsenin, Solutions of Ill-posed Problems, Wiley, New York, 1977.
[5] D. Bianchi, A. Buccini, M. Donatelli, S. Serra-Capizzano, Iterated fractional Tikhonov regularization, Inverse Probl. 31 (5) (2015) 055005.
[6] D. Gerth, E. Klann, R. Ramlau, L. Reichel, On fractional Tikhonov regularization, J. Inverse Ill-posed Probl. 23 (6) (2015) 611–625.
[7] D. Bianchi, M. Donatelli, On generalized iterated Tikhonov regularization with operator-dependent seminorms, Electron. Trans. Numer. Anal. 47 (2017) 73–99.
[8] M.E. Hochstenbach, S. Noschese, L. Reichel, Fractional regularization matrices for linear discrete ill-posed problems, J. Eng. Math. 93 (1) (2015) 113–129.
[9] M.E. Hochstenbach, L. Reichel, Fractional Tikhonov regularization for linear discrete ill-posed problems, BIT Numer. Math. 51 (1) (2011) 197–215.
[10] E. Klann, R. Ramlau, Regularization by fractional filter methods and data smoothing, Inverse Probl. 24 (2) (2008) 025018.
[11] G. Huang, L. Reichel, F. Yin, On the choice of solution subspace for nonstationary iterated Tikhonov regularization, Numer. Algorithms 72 (4) (2016) 1043–1063.
[12] A. Buccini, M. Donatelli, L. Reichel, Iterated Tikhonov regularization with a general penalty term, Numer. Linear Algebra Appl. 24 (4) (2017) e2089.
[13] M. Hanke, C.W. Groetsch, Nonstationary iterated Tikhonov regularization, J. Optim. Theory Appl. 98 (1) (1998) 37–53.
[14] A. Buccini, Regularizing preconditioners by non-stationary iterated Tikhonov with general penalty term, Appl. Numer. Math. 116 (2017) 64–81.
[15] M.E. Hochstenbach, L. Reichel, An iterative method for Tikhonov regularization with a general linear regularization operator, J. Integral Equ. Appl. 22 (3) (2010) 465–482.
[16] G. Huang, L. Reichel, F. Yin, Projected nonstationary iterated Tikhonov regularization, BIT Numer. Math. 56 (2) (2016) 467–487.
[17] L. Reichel, F. Sgallari, Q. Ye, Tikhonov regularization based on generalized Krylov subspace methods, Appl. Numer. Math. 62 (9) (2012) 1215–1228.
[18] G.D. Reddy, The parameter choice rules for weighted Tikhonov regularization scheme, Comput. Appl. Math. 37 (2) (2018) 2039–2052.
[19] H.W. Engl, Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates, J. Optim. Theory Appl. 52 (2) (1987) 209–215.
[20] H.W. Engl, On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems, J. Approx. Theory 49 (1) (1987) 55–63.
[21] P.C. Hansen, Regularization Tools version 4.0 for Matlab 7.3, Numer. Algorithms 46 (2) (2007) 189–194.