Split Bregman method for the modified LOT model in image denoising

Applied Mathematics and Computation 217 (2011) 5392-5403

Yu-Fei Yang (a), Zhi-Feng Pang (b,*), Bao-Li Shi (a), Zhi-Guo Wang (b)

(a) College of Mathematics and Econometrics, Hunan University, Changsha 410082, China
(b) College of Mathematics and Information Science, Henan University, Kaifeng 475000, China


Abstract: In this paper a split Bregman iteration is proposed for the modified LOT model in image denoising. We first use the split Bregman method to solve the ROF model, which can be seen as an approximate form of the first step of the original LOT model. Then we use a modified split Bregman method to solve the second step of the LOT model and prove the convergence of the proposed split Bregman method. Several numerical examples show the effectiveness of the proposed method.

Keywords: image restoration; ROF model; LLT model; LOT model; split Bregman method

1. Introduction

Image denoising is one of the most important inverse problems in image processing and computer vision. The objective is to find the unknown true image u from an observed image f defined by

  f = u + g,   (1.1)

where g is noise with standard deviation \sigma. The challenging aspect of this problem is to design methods which can selectively smooth a noisy image without losing significant features such as edges and textures. For preserving image edges, some approaches are based on statistical methods, such as nonparametric estimation of a discontinuous surface [19,26,27] and wavelet methods [7,8]. Other approaches are based on the Tikhonov regularization method in the PDE setting [7,25]; the most successful and popular of these techniques is the total variation model, first proposed by Rudin, Osher and Fatemi (called the ROF model) [29]:

  \min_{u \in BV(\Omega)} E_1(u) = \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \int_\Omega |Du| \, dx,   (1.2)

where \lambda > 0 is the regularization parameter, \Omega is a bounded domain with Lipschitz boundary and BV(\Omega) is defined by

  BV(\Omega) = \left\{ u \in L^1(\Omega) : \int_\Omega |Du| \, dx < \infty \right\},

where

  \int_\Omega |Du| \, dx = \sup \left\{ \int_\Omega u \, \mathrm{div}\,\varphi(x) \, dx : \varphi(x) \in C_0^1(\Omega; \mathbb{R}^2), \; |\varphi(x)| \le 1 \right\},   (1.3)

The work is supported by the NNSF of China (Nos. 60872129 and 60835004) and the Science and Technology Project of Hunan Province (No. 2009SK3027). * Corresponding author. E-mail addresses: [email protected] (Y.-F. Yang), [email protected] (Z.-F. Pang), [email protected] (B.-L. Shi), [email protected] (Z.-G. Wang).


\varphi(x) = (\varphi_1(x), \varphi_2(x))^T \in \mathbb{R}^2 and |\varphi(x)| = \sqrt{\varphi_1^2(x) + \varphi_2^2(x)}. For more details on BV(\Omega), we refer the readers to [16]. Since the ROF model is strictly convex and coercive, it has a unique solution u^* which satisfies the Euler-Lagrange equation

  \lambda (u - f) - \mathrm{div}\left( \frac{\nabla u}{|\nabla u|} \right) = 0   (1.4)

with the homogeneous Neumann boundary condition. In order to solve the optimization problem (1.2), several different variational frameworks have been proposed, most of which fall into one of two categories. The first is directly based on the Euler-Lagrange equation (1.4) and considers the following approximate equation:

  \lambda (u - f) - \mathrm{div}\left( \frac{\nabla u}{|\nabla u|_\epsilon} \right) = 0,   (1.5)

where |\nabla u|_\epsilon = \sqrt{|\nabla u|^2 + \epsilon} avoids division by |\nabla u| = 0 for a small positive number \epsilon > 0. Formally, (1.5) can be solved by standard optimization techniques, e.g., the gradient descent algorithm [29], the fixed point iteration algorithm [34] or a primal-dual algorithm [6]. The second follows the Legendre-Fenchel transform in the sense of convex analysis. The solution of (1.2) is then given by

  u = f - \frac{1}{\lambda} \mathrm{div}\,\varphi,   (1.6)

where \varphi denotes the projection of \lambda f on the closed convex set

  S = \{ \mathrm{div}\,\varphi(x) : \varphi(x) \in C_0^1(\Omega; \mathbb{R}^2), \; |\varphi(x)| \le 1 \},

and can be computed by the semi-implicit gradient descent (or fixed point) algorithm [2] or the modified gradient descent algorithm [3]. Unfortunately, TV-norm regularization suffers from the so-called staircase effect, which can produce undesirable blocky images. In order to overcome this effect, some higher order models have been proposed in [21,31,36], such as the LLT model [21]:

  \min_{u \in BV^2(\Omega)} \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \int_\Omega |\nabla^2 u| \, dx,   (1.7)

where the definitions of \int_\Omega |\nabla^2 u| \, dx and BV^2(\Omega) are similar to those of \int_\Omega |\nabla u| \, dx and BV(\Omega) in the ROF model. The numerical approaches are also similar to those for the ROF model. However, this class of models is more difficult to solve than the ROF model because of the higher order derivatives involved. In addition, these models have drawbacks of their own, e.g., they tend to introduce blurring in regions of image edges. In order to avoid the staircase effect while alleviating edge blurring, a two-step method was proposed by Lysaker-Osher-Tai (LOT) [22] for image denoising. In their method, the first step smooths the unit normal vector of the noisy image by solving

  \min_{|n| = 1} E_2(n) = \frac{\alpha}{2} \int_\Omega \left| n - \frac{\nabla f}{|\nabla f|} \right|^2 dx + \int_\Omega |\nabla n| \, dx,   (1.8)

where \alpha is a regularization parameter. In the second step, the restored image u is reconstructed as the solution of the following minimization problem:

  \min_u E_3(u) = \frac{\beta}{2} \|u - f\|_{L^2(\Omega)}^2 + \int_\Omega ( |\nabla u| - n_0 \cdot \nabla u ) \, dx,   (1.9)

where n_0 is the solution of (1.8) and \beta is a regularization parameter. Motivated by the ideas in [22], several variants of the LOT model have been considered in [11,14,20,28,32]. The authors in [20,22,28,32] proposed a time marching explicit scheme to solve (1.8) and (1.9), but this scheme suffers from slow convergence and instability, as in [29]. Recently, a dual strategy following Chambolle's ideas [2] was employed in [14]. However, this dual scheme is very complex, especially for the first step. Thus, using the fact that 1 - \cos(\theta_1 - \theta_2) and (\theta_1 - \theta_2)^2/2 are equivalent infinitesimal quantities when \theta_1 is close to \theta_2, we use the following form

  \min_{\theta \in BV(\Omega)} E_4(\theta) = \frac{\gamma}{2} \|\theta - \theta_0\|_{L^2(\Omega)}^2 + \int_\Omega |\nabla\theta| \, dx   (1.10)

to replace the first step of the LOT method, where \theta_0 = \arctan(f_y / f_x) with (f_x, f_y)^T = \nabla f / |\nabla f|. On the other hand, following the LOT model, the split Bregman method based on the Bregman iterative method [24] has received much attention in image restoration [5,18], especially for problems including an L^1 regularization term, since the split Bregman method requires a relatively small memory footprint and is easy to code. So we naturally employ the split Bregman method to deal with (1.9) and (1.10). Furthermore, the convergence of the proposed method is also discussed.


The organization of the paper is as follows. In Section 2, we first review the Bregman iterative method and then present a generalized split Bregman method. Furthermore, we use the proposed split Bregman method to solve the modified LOT model; some properties of this method are also given in this section. We present numerical experiments with the proposed algorithm in Section 3. Some concluding remarks are given in the last section.

2. Denoising algorithm

The split Bregman method introduced in [18], derived from the Bregman method [24], has been demonstrated to be an efficient tool for solving various problems involving the L^1 norm in image processing. The Bregman method is based on the generalized Bregman distance, which can protect the information of image edges, so that it gives more efficient restoration than the original ROF model [29]. We start with an introduction to the Bregman method in Section 2.1; in Section 2.2 we give the framework of a general split Bregman method; finally, in Section 2.3, we propose the split Bregman method for the modified LOT model in image denoising.

2.1. Bregman iterative method

The Bregman distance was first proposed to solve convex programming problems in [1,10]. Recently, it was generalized to image restoration in [24], where the objective function is only required to be subdifferentiable rather than differentiable.

Definition 2.1. Let H be a Hilbert space, V \subset H, V^* the topological dual of V, and \langle \cdot, \cdot \rangle the canonical bilinear pairing over V \times V^*. A function \phi : V \to \mathbb{R} is said to be subdifferentiable at the point x if there exists an element \xi^* \in V^* such that

  \phi(y) \ge \phi(x) + \langle \xi^*, y - x \rangle   (2.1)

for every y \in V. The subdifferential of \phi at x is multivalued when \phi is not differentiable. It is known that the subdifferential of a convex function at a point is nonempty. The set of subgradients of \phi at x is denoted by \partial\phi(x), with

  \partial\phi(x) = \{ \xi^* \in V^* \mid \phi(y) - \phi(x) \ge \langle \xi^*, y - x \rangle, \; \forall y \in V \}.

Clearly, x^* is a minimizer of the convex function \phi if and only if 0 \in \partial\phi(x^*). For more properties of the subdifferential, we refer the readers to [12,13]. With \xi^* \in \partial\phi(x), the generalized Bregman distance is defined as follows [24].

Definition 2.2. Let H be a Hilbert space and V \subset H. Let \phi : V \to \mathbb{R} be a convex function. For every y \in V, the generalized Bregman distance associated with \phi at the point x is defined by

  D_\phi^{\xi^*}(y, x) = \phi(y) - \phi(x) - \langle \xi^*, y - x \rangle,   (2.2)

where \xi^* \in \partial\phi(x). The generalized Bregman distance is not a metric, since in general it is neither symmetric nor satisfies the triangle inequality. Moreover, in some sense it can be interpreted as a measure of the convexity of \phi. It is easy to visualize in the one-dimensional case when \phi is differentiable at x_0: drawing the tangent line to the graph of \phi at x_0, the generalized Bregman distance D_\phi^{\xi^*}(y, x_0) (note here \xi^* = \phi'(x_0)) is the vertical distance between this line and the point \phi(y). For instance, for \phi(u) = \|u\|^2/2 the Bregman distance reduces to D(y, x) = \|y - x\|^2/2. Based on the above generalized Bregman distance, a Bregman iterative method [24] was proposed to solve the ROF model (1.2) and has been proved to improve the quality of the regularized solution. Set

  J(u) = \int_\Omega |\nabla u| \, dx;

then the Bregman iterative method for solving the ROF model can be written as follows.

Algorithm 2.1. Bregman iterative method for the ROF model.
- Set u^0 = f, p^0 = 0;
- Perform the following steps:
  (1) Compute u^{n+1} by

    u^{n+1} = \arg\min_u \left\{ \frac{\lambda}{2} \|f - u\|_{L^2(\Omega)}^2 + J(u) - J(u^n) - \langle p^n, u - u^n \rangle \right\}.   (2.3)

  (2) Update the dual variable

    p^{n+1} = p^n - \lambda (u^{n+1} - f) \in \partial J(u^{n+1}).   (2.4)

- Set n := n + 1 and continue.


Combining p^0 = 0 with (2.4), it is not difficult to get p^{n+1} = \lambda \sum_{l=1}^{n+1} (f - u^l). Therefore, setting b^n := -\sum_{l=1}^{n} (f - u^l), Algorithm 2.1 can be rewritten in the following compact form.

Algorithm 2.2. Compact form of the Bregman iterative method.
- Set u^0 = f, b^0 = 0;
- Compute (u^{n+1}, b^{n+1}) by

    u^{n+1} = \arg\min_u \frac{\lambda}{2} \|f - u - b^n\|_{L^2(\Omega)}^2 + J(u),
    b^{n+1} = b^n + u^{n+1} - f;   (2.5)

- Set n := n + 1 and continue.
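To make the compact form concrete, here is a minimal MATLAB sketch of Algorithm 2.2. It assumes an external routine rof_denoise(g, lambda) that returns the minimizer of \frac{\lambda}{2}\|g - u\|^2 + J(u) (for instance, Chambolle's projection method used in Theorem 2.1 below); the routine name and the fixed iteration count are our own illustrative choices, not part of the paper.

```matlab
function u = bregman_rof(f, lambda, nIter)
% Compact Bregman iteration (Algorithm 2.2) for the ROF model.
% rof_denoise(g, lambda) is assumed to solve the inner problem
%   min_u  lambda/2 * ||g - u||^2 + J(u).
u = f;                    % u^0 = f
b = zeros(size(f));       % b^0 = 0
for n = 1:nIter
    % u^{n+1} = argmin_u  lambda/2 * ||f - u - b^n||^2 + J(u)
    u = rof_denoise(f - b, lambda);
    % b^{n+1} = b^n + u^{n+1} - f
    b = b + u - f;
end
end
```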

From the definition of the generalized Bregman distance, we can deduce that the objective in the first formula of (2.5) is bounded from below by zero, proper and strictly convex, so it has a unique solution. On the other hand, using the method in [2], we have the following assertion.

Theorem 2.1. Let \varphi(x) be the dual variable in (1.3) associated with u. Then the solution of (2.5) satisfies

  u^{n+1} = f + \frac{1}{\lambda} \mathrm{div}\,\varphi^{n+1} - b^n,
  b^{n+1} = b^n + u^{n+1} - f,   (2.6)

where \varphi^{n+1} can be attained by the iterative sequence

  \varphi^{n+1,j+1} = \frac{ \varphi^{n+1,j} + \rho \nabla( \mathrm{div}\,\varphi^{n+1,j} - \lambda (f - u^n - b^n) ) }{ 1 + \rho | \nabla( \mathrm{div}\,\varphi^{n+1,j} - \lambda (f - u^n - b^n) ) | }   (2.7)

with \rho \le 1/8 and \varphi^{n+1,0} = 0.

Comparing the two formulas of (2.6) and using the fact that p^{n+1} = -\lambda b^{n+1}, it is not difficult to get p^{n+1} = -\mathrm{div}\,\varphi^{n+1}. This means that -\mathrm{div}\,\varphi^{n+1} can be regarded as a subgradient of J at u^{n+1}. On the other hand, from (1.6), -\frac{1}{\lambda} \mathrm{div}\,\varphi can be regarded as the noise in the ROF model. Furthermore, the first formula of (2.6) can also be rewritten as u^{n+1} = f + \frac{1}{\lambda}( \mathrm{div}\,\varphi^{n+1} - \mathrm{div}\,\varphi^n ). Therefore, u^{n+1} can be interpreted as the result of using the ROF model to restore the observed image f - \frac{1}{\lambda} \mathrm{div}\,\varphi^n. In other words, in the course of the Bregman iteration, -\frac{1}{\lambda} \mathrm{div}\,\varphi^n, viewed as noise, is added back to f. This method can overcome some disadvantages of the ROF model due to improper restoration. For example, let f(x, y) = a \chi_R(x, y), where \chi_R(x, y) = 1 if \sqrt{x^2 + y^2} \le R and \chi_R(x, y) = 0 otherwise. If a\lambda R \le 2, the image restored by the ROF model is u = 0. But if a\lambda R > 2, then u = ( a - \frac{2}{\lambda R} ) \chi_R(x, y) and v = \frac{2}{\lambda R} \chi_R(x, y). It is obvious that we cannot get a proper solution by using the ROF model to restore f(x, y). The Bregman method, however, can overcome this drawback; see [5,23] for details.

From [17,24], we have the following properties of the above Bregman iterative method.

Theorem 2.2. Let u^0 = 0, b^0 = 0. If the sequence {(u^n, b^n)} is generated by (2.6), then the following assertions hold:
(i) \|u^{n+1} - f\|_{L^2(\Omega)} \le \|u^n - f\|_{L^2(\Omega)};
(ii) \|u^n - f\|_{L^2(\Omega)} \le \|u^{n+1} - f\|_{L^2(\Omega)} + J(u^{n+1})/n.

We can deduce from Theorem 2.2 that \|u^n - f\|_{L^2(\Omega)} \to 0 as n \to \infty. This implies that the restored image tends to the noisy image as the number of iterations increases, so the choice of a termination criterion becomes very important. In [24], the authors proposed to use the so-called generalized discrepancy principle [15].

2.2. Split Bregman method

The Bregman iterative method for solving the ROF model also suffers from some numerical difficulties, since J(u) is not differentiable at the origin. In [18], by employing auxiliary variables in the framework of the Bregman iterative method, Goldstein and Osher decoupled the original ROF model into two subproblems, each involving only one of the L^1 and L^2 portions of the objective functional; the resulting algorithm is called the split Bregman method. The goal of this strategy is to extend the utility of the Bregman iteration to the minimization of more general L^1 norm regularization problems. Furthermore, this algorithm has been proved to be equivalent to the augmented Lagrangian method and also belongs to the framework of the Douglas-Rachford splitting algorithm, cf. [30,33,35].


Now we consider the following minimization problem:

  \min_{u,d} E_5(u, d) = \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \tau \langle \nu, d \rangle_{L^2(\Omega)} + |d|_{L^1(\Omega)} + \frac{\mu}{2} \|\nabla u - d\|_{L^2(\Omega)}^2,   (2.8)

where \lambda, \tau, \mu are the regularization parameters, d = (d_1, d_2)^T and \nu = (\nu_1, \nu_2)^T. Here and in what follows, the superscript T denotes the transpose of a matrix. Since (2.8) has two variables u and d, it can be solved by alternately minimizing over u and d:

  F(d^n) := u^{n+1} = \arg\min_u \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|\nabla u - d^n\|_{L^2(\Omega)}^2,
  G(u^{n+1}) := d^{n+1} = \arg\min_d \tau \langle \nu, d \rangle_{L^2(\Omega)} + |d|_{L^1(\Omega)} + \frac{\mu}{2} \|\nabla u^{n+1} - d\|_{L^2(\Omega)}^2.   (2.9)
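The theorems below phrase the alternation entirely through the maps F and G. As a notational aid, the loop reads as follows in MATLAB, with F and G abstract function handles for the two subproblems (our own illustrative framing; concrete versions are given in Section 2.3):

```matlab
% Alternating minimization (2.9): d^n = G(F(d^{n-1})), u^n = F(G(u^{n-1})).
% F and G are function handles for the u- and d-subproblems (assumed given).
d = zeros([size(f) 2]);          % d^0
for n = 1:nIter
    u = F(d);                    % u^{n+1}: quadratic problem, a linear solve
    d = G(u);                    % d^{n+1}: shrinkage-type problem
end
```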

Theorem 2.3. Let d^n be given by the iterative scheme d^n = G(F(d^{n-1})) in (2.9). Then the following property holds:

  \lim_{n \to \infty} \|d^n - d^{n-1}\|_{L^2(\Omega)}^2 = 0.

Proof. Let J_1(u, d) = \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|\nabla u - d\|_{L^2(\Omega)}^2 and J_2(d) = \tau \langle \nu, d \rangle_{L^2(\Omega)} + |d|_{L^1(\Omega)}. Since the quadratic function J_1(u, d) is strictly convex and J_2(d) is convex in d, we have

  E_5(u^n, d^n) - E_5(u^n, d^{n+1}) = J_1(u^n, d^n) - J_1(u^n, d^{n+1}) + J_2(d^n) - J_2(d^{n+1})
    \ge \left\langle d^n - d^{n+1}, \frac{\partial J_1}{\partial d}(u^n, d^{n+1}) + \frac{\partial J_2}{\partial d}(d^{n+1}) \right\rangle + \frac{\mu}{2} \|d^{n+1} - d^n\|_{L^2(\Omega)}^2.

Since d^{n+1} is the minimizer of E_5(u^n, \cdot), we deduce that 0 \in \frac{\partial E_5}{\partial d}(u^n, d^{n+1}), i.e.,

  0 \in \frac{\partial J_1}{\partial d}(u^n, d^{n+1}) + \frac{\partial J_2}{\partial d}(d^{n+1}).

Then it is not difficult to find that

  E_5(u^n, d^n) - E_5(u^n, d^{n+1}) \ge \frac{\mu}{2} \|d^{n+1} - d^n\|_{L^2(\Omega)}^2.

On the other hand, since E_5(u^n, d^{n+1}) \ge E_5(u^{n+1}, d^{n+1}), we can deduce

  E_5(u^n, d^n) - E_5(u^{n+1}, d^{n+1}) \ge \frac{\mu}{2} \|d^{n+1} - d^n\|_{L^2(\Omega)}^2.

Summing the above inequality from n = 1 to \infty, we see that \sum_{n=1}^\infty \|d^n - d^{n-1}\|_{L^2(\Omega)}^2 is bounded and thus converges. This shows that \lim_{n \to \infty} \|d^n - d^{n-1}\|_{L^2(\Omega)}^2 = 0.

Corollary 2.1. Let {u^n} be generated by the iterative scheme u^n = F(G(u^{n-1})) in (2.9). Then we have

  \lim_{n \to \infty} \|u^n - u^{n-1}\|_{L^2(\Omega)}^2 = 0.

Theorem 2.4. Assume \Omega is a closed, convex subset of \mathbb{R}^2 and {(u^n, d^n)} is generated by (2.9). Also assume F and G satisfy Lipschitz conditions, i.e., \|F(d_1) - F(d_2)\|_{L^2(\Omega)} < L_1 \|d_1 - d_2\|_{L^2(\Omega)} and \|G(u_1) - G(u_2)\|_{L^2(\Omega)} < L_2 \|u_1 - u_2\|_{L^2(\Omega)} with L_1, L_2 \in (0, 1), respectively. Then {(u^n, d^n)} has a unique accumulation point (u^*, d^*), which is a fixed point of (F(G(u)), G(F(d))) = (u, d). Moreover, (u^*, d^*) is also the unique solution of (2.8).

Proof. From Theorem 2.3 and Corollary 2.1, it is easy to deduce that {(u^n, d^n)} generated by (2.9) has an accumulation point (u^*, d^*) in L^2(\Omega). Using the fact that E_5(u, d) is strictly convex, it is easy to deduce that (u^*, d^*) is unique. Simultaneously, since F and G satisfy the Lipschitz conditions, (u^*, d^*) is a fixed point of (F(G(u)), G(F(d))) = (u, d), i.e., F(G(u^*)) = u^* and G(F(d^*)) = d^*.

It remains to show that (u^*, d^*) is the unique solution of (2.8). It is easy to deduce that E_5(u, d) is weakly lower semicontinuous, coercive and strictly convex. Thus we can find a convergent subsequence {(u^{n_j}, d^{n_j})} of the minimizing sequence {(u^n, d^n)} of E_5(u, d), with limit point (u^w, d^w). By the strict convexity of E_5(u, d), (u^w, d^w) is unique. On the other hand, since 0 = \frac{\partial E_5}{\partial u}(u^{n_j}, d^{n_j}) and 0 \in \frac{\partial E_5}{\partial d}(u^{n_j}, d^{n_j}) for every n_j, it is not difficult to deduce that (u^w, d^w) solves the following minimization problems:

  \arg\min_u \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|\nabla u - d\|_{L^2(\Omega)}^2,
  \arg\min_d \tau \langle \nu, d \rangle_{L^2(\Omega)} + |d|_{L^1(\Omega)} + \frac{\mu}{2} \|\nabla u - d\|_{L^2(\Omega)}^2.   (2.10)


Therefore, due to the uniqueness of (u^*, d^*) and (u^w, d^w), we have (u^*, d^*) = (u^w, d^w). That is to say, (u^*, d^*) is the unique solution of (2.8).

Remark 2.1. In fact, if (u^{n+1}, d^{n+1}) is the solution of (2.9), it must satisfy the following Euler-Lagrange equations

  \mu ( \mathrm{div}\,d^n - \Delta u ) + \lambda (u - f) = 0 \quad \text{for every } d^n,
  0 \in \tau\nu + \partial(|d|) + \mu ( d - \nabla u^{n+1} ) \quad \text{for every } u^{n+1},   (2.11)

with homogeneous Neumann boundary conditions. The second equation of (2.9) includes the nondifferentiable term |d|_{L^1(\Omega)}, so we can apply the Bregman iteration to solve it. Because (2.9) is formally based on decoupling (2.8) into a differentiable and a nondifferentiable portion, we call the resulting algorithm the generalized split Bregman algorithm. For (2.9), the generalized split Bregman method reads as follows:

(i) Initialization: Set u^0 = f, d^0 = 0, c^0 = 0. Set n := 0;
(ii) Compute (u^{n+1}, d^{n+1}, c^{n+1}) by

  u^{n+1} := \arg\min_u \frac{\lambda}{2} \|u - f\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|d^n - \nabla u - c^n\|_{L^2(\Omega)}^2,   (2.12)

  d^{n+1} := \arg\min_d \tau \langle \nu, d \rangle_{L^2(\Omega)} + |d|_{L^1(\Omega)} + \frac{\mu}{2} \|d - \nabla u^{n+1} - c^n\|_{L^2(\Omega)}^2,   (2.13)

  c^{n+1} := c^n + \nabla u^{n+1} - d^{n+1};   (2.14)

(iii) Set n := n + 1 and go to (ii).

(iii) Set n :¼ n + 1 and go to (ii). The above generalized split Bregman algorithm is formally different from that in [5,18] in that an extra term sðm; dÞL2 ðXÞ is included in (2.13). Actually, if we set s = 0 in (2.13), it reduces to the split Bregman method in [18]. In the course of numerical implementation, it is obvious that it is easy to solve (2.12) and (2.14) using the related numerical methods. However, it is difficult to solve (2.13) by the common numerical method because there exists a nondifferentiable term jdjL1 ðXÞ in (2.13). r ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2  2 Therefore, we use the shrinking operators to compute the value of d. Set s ¼ u2x þ c21  ls m1 þ u2y þ c22  ls m2 , then qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 2 based on the Euler–Lagrange equation of (2.13) as shown in (2.11), we also can deduce that s satisfies s ¼ d1 þ d2 þ l1 . So we can compute the value of d by using the generalized shrinking formulas as follows:

  d_1 = \max\left( s - \frac{1}{\mu}, 0 \right) \frac{ u_x^{n+1} + c_1^n - \frac{\tau}{\mu}\nu_1 }{ s },
  d_2 = \max\left( s - \frac{1}{\mu}, 0 \right) \frac{ u_y^{n+1} + c_2^n - \frac{\tau}{\mu}\nu_2 }{ s }.   (2.15)
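The shrinkage step (2.15) vectorizes directly. A short MATLAB sketch, with our own variable names (ux, uy hold the forward differences of u^{n+1}, c1, c2 the Bregman variables, nu1, nu2 the components of \nu); the eps guard is an extra safeguard, even though s \ge 1/\mu > 0 holds by the identity above:

```matlab
% Generalized shrinkage (2.15): shrink the point w = grad u + c - (tau/mu)*nu.
w1 = ux + c1 - (tau/mu)*nu1;
w2 = uy + c2 - (tau/mu)*nu2;
s  = sqrt(w1.^2 + w2.^2);
coef = max(s - 1/mu, 0) ./ max(s, eps);   % componentwise shrinkage factor
d1 = coef .* w1;
d2 = coef .* w2;
```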

2.3. Split Bregman method for the modified LOT model

The split Bregman method introduces a shrinkage operator to avoid numerical difficulties in handling the L^1 norm, which makes it well suited to image restoration problems, since this class of problems usually involves the L^1 norm. In [11], the authors noted that (1.10) is approximately equivalent to (1.8), so we take (1.10) as the first step of the modified LOT model. Actually, it is not difficult to see that (1.8) can formally be regarded as the original ROF model, so there are many methods for solving it. On the other hand, (1.9) and (1.10) can also be seen as special forms of (2.8). Therefore we propose the split Bregman method to solve (1.10) as the first step of the modified LOT model; for the second step, we also use the split Bregman method to solve (1.9). That is to say, we consider the following formulas:

  \theta^* := \arg\min_{\theta \in BV(\Omega)} \frac{\gamma}{2} \|\theta - \theta_0\|_{L^2(\Omega)}^2 + \int_\Omega |\nabla\theta| \, dx,
  u^* := \arg\min_u \frac{\beta}{2} \|u - f\|_{L^2(\Omega)}^2 + \int_\Omega \left( |\nabla u| - (\cos\theta^*)\,u_x - (\sin\theta^*)\,u_y \right) dx.   (2.17)

Before using the split Bregman method to solve (2.17), we consider the discretization of the related model by finite differences. From now on until the end of this paper, we restrict our attention to the discrete setting unless otherwise stated. Let D be the forward difference operator, which can be generated by the MATLAB function spdiags; then D^T denotes the backward difference operator. Therefore

  \nabla u = ( u D_x^T, D_y u )^T, \quad \Delta u = -u D_x^T D_x - D_y^T D_y u \quad \text{and} \quad \mathrm{div}(\varphi_1, \varphi_2) = -\varphi_1 D_x - D_y^T \varphi_2.   (2.18)
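The following MATLAB fragment sketches one way to assemble the operators in (2.18) with spdiags for an m-by-n image; zeroing the last row of each difference matrix is our reading of the homogeneous Neumann boundary condition, so the sizes and boundary handling here are illustrative assumptions.

```matlab
m = 128; n = 128;                                  % image size (illustrative)
ex = ones(n,1); ey = ones(m,1);
Dx = spdiags([-ex ex], [0 1], n, n); Dx(n,n) = 0;  % forward difference, Neumann
Dy = spdiags([-ey ey], [0 1], m, m); Dy(m,m) = 0;

u  = rand(m,n);                                    % test image
ux = u*Dx';                                        % grad u = (u*Dx', Dy*u)
uy = Dy*u;
p1 = rand(m,n); p2 = rand(m,n);
divp = -p1*Dx - Dy'*p2;                            % div(p1,p2) = -p1*Dx - Dy'*p2
lap  = -u*(Dx'*Dx) - (Dy'*Dy)*u;                   % Laplacian = div(grad u)
```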


In the sense of the Bregman iteration, setting d_x = \theta D_x^T and d_y = D_y \theta, the discrete form of the first formula in (2.17) can be rewritten as

  \min_{\theta, d_x, d_y} E_4^d(\theta, d_x, d_y) = \|(d_x, d_y)\|_{l^2(\Omega)} + \frac{\gamma}{2} \|\theta - \theta_0\|_{l^2(\Omega)}^2 + \frac{\gamma_1}{2} \|d_x - \theta D_x^T - c_x\|_{l^2(\Omega)}^2 + \frac{\gamma_1}{2} \|d_y - D_y\theta - c_y\|_{l^2(\Omega)}^2,   (2.19)

where \|(d_x, d_y)\|_{l^2(\Omega)} = \sum_{i,j=1}^n \sqrt{ d_{x,i,j}^2 + d_{y,i,j}^2 }. Furthermore, setting h_x = u D_x^T and h_y = D_y u, the discretization of the second formula in (2.17) can be reformulated as

  \min_{u, h_x, h_y} E_3^d(u, h_x, h_y) = \|(h_x, h_y)\|_{l^2(\Omega)} - \|h_x \cos\theta + h_y \sin\theta\|_{l^2(\Omega)} + \frac{\beta}{2} \|u - f\|_{l^2(\Omega)}^2 + \frac{\beta_1}{2} \|h_x - u D_x^T - r_x\|_{l^2(\Omega)}^2 + \frac{\beta_1}{2} \|h_y - D_y u - r_y\|_{l^2(\Omega)}^2,   (2.20)

where \|(h_x, h_y)\|_{l^2(\Omega)} = \sum_{i,j=1}^n \sqrt{ h_{x,i,j}^2 + h_{y,i,j}^2 } and \|h_x \cos\theta + h_y \sin\theta\|_{l^2(\Omega)} = \sum_{i,j=1}^n ( h_x \cos\theta + h_y \sin\theta )_{i,j}.

Combining (2.19) with (2.20), and performing only one iteration of the inner loop in the split Bregman method, we can employ this algorithm to solve the discrete modified LOT model as follows.

Algorithm 2.3. Split Bregman method for the modified LOT model.

Step 1: Split Bregman method for (2.19). Denote d = (d_x, d_y)^T and c = (c_x, c_y)^T.
(i) Initialization: Set \theta^0 = \theta_0 = \tan^{-1}( D_y f / ( f D_x^T ) ), d_x^0 = d_y^0 = c_x^0 = c_y^0 = 0. Let n := 1;
(ii) Compute (\theta^{n+1}, d^{n+1}, c^{n+1}) by

  \theta^{n+1} = G_1(\gamma, \gamma_1, \theta^n, d^n, c^n, \theta_0),   (2.21)
  d_x^{n+1} = \max\left( s^n - \frac{1}{\gamma_1}, 0 \right) \frac{ \theta^{n+1} D_x^T + c_x^n }{ s^n },   (2.22)
  d_y^{n+1} = \max\left( s^n - \frac{1}{\gamma_1}, 0 \right) \frac{ D_y \theta^{n+1} + c_y^n }{ s^n },   (2.23)
  c_x^{n+1} = c_x^n + \theta^{n+1} D_x^T - d_x^{n+1},   (2.24)
  c_y^{n+1} = c_y^n + D_y \theta^{n+1} - d_y^{n+1},   (2.25)

where s^n = \sqrt{ ( \theta^n D_x^T + c_x^n )^2 + ( D_y \theta^n + c_y^n )^2 } and G_1 is defined in (2.32);
(iii) If the stopping criterion holds, output \theta := \theta^{n+1}. Otherwise, set n := n + 1 and go to (ii).

Step 2: Split Bregman method for (2.20). Denote n_0 = (\cos\theta, \sin\theta)^T, h = (h_x, h_y)^T and r = (r_x, r_y)^T.
(0) Initialization: Set h^0 = 0 and r^0 = 0. Let k := 0;
(1) Compute (u^{k+1}, h^{k+1}, r^{k+1}) by

  u^{k+1} = G_2(\beta, \beta_1, u^k, h^k, r^k, f),   (2.26)
  h_x^{k+1} = \max\left( t^k - \frac{1}{\beta_1}, 0 \right) \frac{ u^{k+1} D_x^T + r_x^k + \frac{1}{\beta_1}\cos\theta }{ t^k },   (2.27)
  h_y^{k+1} = \max\left( t^k - \frac{1}{\beta_1}, 0 \right) \frac{ D_y u^{k+1} + r_y^k + \frac{1}{\beta_1}\sin\theta }{ t^k },   (2.28)
  r_x^{k+1} = r_x^k + u^{k+1} D_x^T - h_x^{k+1},   (2.29)
  r_y^{k+1} = r_y^k + D_y u^{k+1} - h_y^{k+1},   (2.30)

where t^k = \sqrt{ \left( u^{k+1} D_x^T + r_x^k + \frac{1}{\beta_1}\cos\theta \right)^2 + \left( D_y u^{k+1} + r_y^k + \frac{1}{\beta_1}\sin\theta \right)^2 } and G_2 is defined in (2.34);

(2) If the stopping criterion holds, output u. Otherwise, set k := k + 1 and go to (1).

In (2.21) and (2.26), \theta^{n+1} and u^{k+1} can be attained by employing the Gauss-Seidel iterative method. More explicitly, \theta^{n+1} and u^{k+1} can be obtained from the following formulas:

  \theta_{i,j}^{n+1} = G_1(\gamma, \gamma_1, \theta^n, d^n, c^n, \theta_0)   (2.31)
    := \frac{\gamma_1}{\gamma + 4\gamma_1} \left( \theta_{i+1,j}^n + \theta_{i-1,j}^n + \theta_{i,j+1}^n + \theta_{i,j-1}^n + d_{x,i-1,j}^n - d_{x,i,j}^n + d_{y,i,j-1}^n - d_{y,i,j}^n - c_{x,i-1,j}^n + c_{x,i,j}^n - c_{y,i,j-1}^n + c_{y,i,j}^n \right) + \frac{\gamma}{\gamma + 4\gamma_1} \theta_{0,i,j},   (2.32)


  u_{i,j}^{k+1} = G_2(\beta, \beta_1, u^k, h^k, r^k, f)   (2.33)
    := \frac{\beta_1}{\beta + 4\beta_1} \left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k + h_{x,i-1,j}^k - h_{x,i,j}^k + h_{y,i,j-1}^k - h_{y,i,j}^k - r_{x,i-1,j}^k + r_{x,i,j}^k - r_{y,i,j-1}^k + r_{y,i,j}^k \right) + \frac{\beta}{\beta + 4\beta_1} f_{i,j}.   (2.34)
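For illustration, one in-place Gauss-Seidel sweep of (2.32) can be coded as below. We update only interior pixels and keep the one-pixel border fixed, which is a simplification of the boundary treatment rather than the paper's exact scheme; the sweep for (2.34) is identical in form, with (u, h, r, f, \beta, \beta_1) in place of (\theta, d, c, \theta_0, \gamma, \gamma_1).

```matlab
function theta = gs_sweep_G1(theta, dx, dy, cx, cy, theta0, gamma, gamma1)
% One Gauss-Seidel sweep of (2.32). theta is overwritten entry by entry,
% so already-updated neighbors enter the stencil, as in Gauss-Seidel.
[m, n] = size(theta);
w1 = gamma1/(gamma + 4*gamma1);
w2 = gamma /(gamma + 4*gamma1);
for i = 2:m-1
    for j = 2:n-1
        theta(i,j) = w1*( theta(i+1,j) + theta(i-1,j) ...
                        + theta(i,j+1) + theta(i,j-1) ...
                        + dx(i-1,j) - dx(i,j) + dy(i,j-1) - dy(i,j) ...
                        - cx(i-1,j) + cx(i,j) - cy(i,j-1) + cy(i,j) ) ...
                   + w2*theta0(i,j);
    end
end
end
```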

On the other hand, we notice that in Algorithm 2.3 the quantities s and t also satisfy s = \sqrt{ ( \theta D_x^T )^2 + ( D_y \theta )^2 } + \frac{1}{\gamma_1} and t = \sqrt{ ( u D_x^T )^2 + ( D_y u )^2 } + \frac{1}{\beta_1}, respectively, so it is easy to see that s \ge \frac{1}{\gamma_1} and t \ge \frac{1}{\beta_1}.

Concerning the convergence of Algorithm 2.3, it is not difficult to deduce that (1.10) and (1.9) have unique solutions \theta^* and u^*, based on the fact that (1.10) and (1.9) are coercive and strictly convex. We also note that Algorithm 2.3 is an implementable version of (2.12)-(2.14). So, similar to the results in [5], we have the following assertions.

Theorem 2.5. Let \theta^* and u^* be the unique solutions of (1.10) and (1.9), respectively. Furthermore, let \theta^n and u^k be generated by Algorithm 2.3. Then we have

  \lim_{n \to \infty} \|\theta^n - \theta^*\| = 0 \quad \text{and} \quad \lim_{k \to \infty} \|u^k - u^*\| = 0.
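Putting the pieces together, one pass of Step 1 of Algorithm 2.3 might look as follows in MATLAB, using the difference matrices Dx, Dy sketched after (2.18) and the gs_sweep_G1 routine above. We use atan2 instead of the arctangent of the quotient to avoid division by zero (the angle of the normalized gradient equals that of the raw gradient), and we evaluate s^n with the updated \theta^{n+1}; both are our own simplifications, as are the parameter values.

```matlab
% Step 1 of Algorithm 2.3 (sketch): smooth the angle field theta.
gamma = 0.06; gamma1 = 0.004; nIter = 30;     % illustrative parameters
theta0 = atan2(Dy*f, f*Dx');                  % theta_0 from the gradient direction
theta  = theta0;
dx = zeros(size(f)); dy = dx; cx = dx; cy = dx;
for it = 1:nIter
    % (2.21): one Gauss-Seidel sweep of (2.32) for the theta-subproblem
    theta = gs_sweep_G1(theta, dx, dy, cx, cy, theta0, gamma, gamma1);
    % (2.22)-(2.23): shrinkage of (grad theta + c)
    w1 = theta*Dx' + cx;  w2 = Dy*theta + cy;
    s  = max(sqrt(w1.^2 + w2.^2), eps);
    coef = max(s - 1/gamma1, 0) ./ s;
    dx = coef .* w1;  dy = coef .* w2;
    % (2.24)-(2.25): Bregman updates
    cx = cx + theta*Dx' - dx;
    cy = cy + Dy*theta - dy;
end
```

Step 2 follows the same pattern with (2.26)-(2.30), the only differences being the extra terms (1/\beta_1)\cos\theta and (1/\beta_1)\sin\theta in the shrinkage and the use of G_2 from (2.34).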

3. Numerical results

In this section, we present several numerical experiments to show the effectiveness of the proposed two-step algorithm. All simulations are performed in MATLAB (2009a) on a PC with an Intel Core (TM)2 Duo CPU at 2.20 GHz and 2.00 GB of memory. To evaluate the related algorithms, we report the signal-to-noise ratio (SNR) of the restored image and the mean squared error (MSE), defined respectively by

  SNR = 10 \log_{10} \left( \int_\Omega (u - \bar{u})^2 \, dx \, dy \Big/ \int_\Omega (g - \bar{g})^2 \, dx \, dy \right) \quad \text{and} \quad MSE = \frac{1}{|\Omega|} \int_\Omega (u - u_0)^2 \, dx \, dy,

where u is the restored image, u_0 is the original clean image, g denotes the noise, |\Omega| = \int_\Omega dx \, dy, \bar{u} = \frac{1}{|\Omega|} \int_\Omega u \, dx \, dy and \bar{g} = \frac{1}{|\Omega|} \int_\Omega g \, dx \, dy. For our experiments, the parameters are chosen to give the best visual results.

The first example, shown in Fig. 1, is a synthetic image with additive Gaussian noise, with SNR = 13.6540 and MSE = 144.1727. In order to test the effectiveness of the two-step method, we also show the images restored by using the semi-implicit descent method in [2,9] to solve the ROF model and the LLT model, where the step-size parameters are set as t_rof = 0.125 and t_llt = 0.0125. From the restored images in Fig. 2, it is clear that much of the noise is suppressed. However, some staircase effect remains for the ROF model and some edge blurring for the LLT model, while for the two-step method, as expected, the staircase effect and the edge blurring are clearly alleviated. These improvements can also be seen in the differences shown on the right of Fig. 2. Furthermore, comparing our proposed two-step method with that in [22], we can deduce that our method is considerably more effective, as shown in Table 1; this is because the two-step method in [22] requires many iterations to obtain the solution of the state equation.
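In the discrete setting, the two quality measures defined above reduce to a few MATLAB lines; here u is the restored image, u0 the clean image and g the noise realization, recovered as f - u0 under our own naming:

```matlab
g   = f - u0;                              % noise realization
SNR = 10*log10( sum((u(:) - mean(u(:))).^2) / sum((g(:) - mean(g(:))).^2) );
MSE = mean((u(:) - u0(:)).^2);
```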

Fig. 1. The original image and the noisy image.


Fig. 2. Left: restored image; right: difference between the clean image and the left image.


Table 1
The related data in example 1.

  Method   Iterations in step 1/2   MSE       SNR
  ROF      200/0                    12.1507   24.0870
  LLT      200/0                    21.7856   21.6707
  LOT      150/150                  11.6414   24.3395
  Our      25/150                   11.4394   24.3794

Fig. 3. The related images and curves for 150 iterations in example 2.

In the second example, for a more specific comparison between our method and the original two-step method in [22], we consider the real Lena image shown in Fig. 3, where the noisy image is contaminated by Gaussian white noise with standard deviation \sigma = 20. In the first step of the original two-step method, we set the time step dt = 0.1 and \alpha = 0.05, and we find that it gives a good result after 200 iterations. For our algorithm, since the split Bregman algorithm restores the image quickly, the number of iterations can be set to 30 when we choose \gamma = 0.06 and \gamma_1 = 0.004. For the second step of the two algorithms, we set the time step dt = 0.1 and \beta = 0.205 for the original two-step algorithm, and \beta = 0.0675 and \beta_1 = 0.002 for our algorithm. Both algorithms are terminated after 150 iterations. It can be seen from the data in Table 2 that our proposed algorithm saves 26.65% of the CPU time compared with the original two-step algorithm.

Table 2
The related data in example 2.

  Method   Average time of step 1/2   Sum      MSE       SNR
  LOT      0.0630/0.0488              0.1118   74.8849   15.4667
  Our      0.0303/0.0517              0.0820   74.6331   15.3554

Fig. 4. SNR and MSE curves (our algorithm vs. the original algorithm) for 500 iterations in example 2.

On the other hand, concerning the convergence of these two algorithms, we also plot the curves of SNR and MSE in Fig. 3. The curves generated by our algorithm change little after 60 iterations, while the original two-step algorithm requires about 120 iterations; this implies that our algorithm converges quickly. Moreover, if we extend the iterations of the second step to 500, it is easy to see from Fig. 4 that our algorithm is stable while the original method for the LOT model is not, because the solution of the original method is affected by the Courant-Friedrichs-Lewy (CFL) condition [4].

4. Concluding remarks

The LOT model was proposed in [22] to overcome some drawbacks of the ROF model and the LLT model. Based on the advantages of the split Bregman method, we apply this method to solve the modified LOT model. To be more specific, we first derived an approximate form to replace the first step of the original LOT model; this replacement allows us to use the split Bregman method for the first step. Furthermore, we extended the split Bregman method to solve the second step of the LOT model. Our experimental results show that the proposed method is quite efficient. Moreover, we note that the TV-Stokes model, an equivalent form of the LOT model, has been proposed in [28]. The dual method based on Chambolle's trick has been used in [14], but the computational cost of the first step of this strategy is large. Seeking efficient methods for the TV-Stokes model is thus one direction of our future research.

References

[1] L. Bregman, The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming, U.S.S.R. Comput. Math. Math. Phys. 7 (1967) 200-217.
[2] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision 20 (1-2) (2004) 89-97.
[3] A. Chambolle, Total variation minimization and a class of binary MRF models, in: EMMCVPR, vol. 3757, 2005, pp. 136-152.
[4] R. Courant, K. Friedrichs, H. Lewy, On the partial difference equations of mathematical physics, IBM J. 3 (1976) 215-234.
[5] J.F. Cai, S. Osher, Z.W. Shen, Split Bregman methods and frame based image restoration, Multiscale Model. Simul. 8 (2) (2009) 337-369.
[6] T. Chan, G. Golub, P. Mulet, A nonlinear primal-dual method for total variation-based image restoration, SIAM J. Sci. Comput. 20 (6) (1999) 1964-1977.
[7] T. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, Philadelphia, 2005.
[8] T. Chan, H.M. Zhou, Total variation wavelet thresholding, J. Sci. Comput. 32 (2) (2007) 315-341.
[9] H.Z. Chen, J.P. Song, X.-C. Tai, A dual algorithm for minimization of the LLT model, Adv. Comput. Math. 31 (1-3) (2009) 115-130.
[10] G. Chen, M. Teboulle, Convergence analysis of a proximal-like minimization algorithm using Bregman functions, SIAM J. Optim. 3 (3) (1993) 538-543.
[11] F.F. Dong, Z. Liu, D.X. Kong, K. Liu, An improved LOT model for image restoration, J. Math. Imaging Vision 34 (1) (2009) 89-97.
[12] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North-Holland, Amsterdam, 1976.
[13] I. Ekeland, T. Turnbull, Infinite Dimensional Optimization and Convexity, The University of Chicago Press, Chicago, 1983.
[14] C. Elo, A. Malyshev, T. Rahman, A dual formulation of the TV-Stokes algorithm for image denoising, in: Scale Space and Variational Methods in Computer Vision, vol. 5567, Springer, 2009, pp. 307-318.


[15] H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, 1996.
[16] L. Evans, R. Gariepy, Measure Theory and Fine Properties of Functions, CRC Press, Boca Raton, 1992.
[17] R.Q. Jia, H.Q. Zhao, W. Zhao, Convergence analysis of the Bregman method for the variational model of image denoising, Appl. Comput. Harmon. Anal. 27 (3) (2009) 367-379.
[18] T. Goldstein, S. Osher, The split Bregman method for L1 regularized problems, SIAM J. Imaging Sci. 2 (2) (2009) 323-343.
[19] I. Gijbels, A. Lambert, P.H. Qiu, Edge-preserving image denoising and estimation of discontinuous surfaces, IEEE Trans. Pattern Anal. Mach. Intell. 28 (7) (2006) 1075-1087.
[20] W. Litvinov, T. Rahman, X.C. Tai, A modified TV-Stokes model for image processing, Technical report, Institut für Mathematik der Universität Augsburg, 2009.
[21] M. Lysaker, A. Lundervold, X.C. Tai, Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time, IEEE Trans. Image Process. 12 (12) (2003) 1579-1590.
[22] M. Lysaker, S. Osher, X.C. Tai, Noise removal using smoothed normals and surface fitting, IEEE Trans. Image Process. 13 (10) (2004) 1345-1357.
[23] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, vol. 22, American Mathematical Society, Providence, RI, 2001.
[24] S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul. 4 (2) (2005) 460-489.
[25] N. Paragios, Y. Chen, O. Faugeras, Handbook of Mathematical Models in Computer Vision, Springer, Heidelberg, 2005.
[26] P.H. Qiu, P.S. Mukherjee, Edge structure preserving image denoising, Signal Process. 90 (10) (2010) 2851-2862.
[27] P.H. Qiu, Jump surface estimation, edge detection, and image restoration, J. Am. Statist. Assoc. 102 (478) (2007) 745-756.
[28] T. Rahman, X.C. Tai, S. Osher, A TV-Stokes denoising algorithm, in: F. Sgallari, A. Murli, N. Paragios (Eds.), Scale Space and Variational Methods in Computer Vision, Springer, 2007, pp. 473-482.
[29] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992) 259-268.
[30] S. Setzer, Operator Splittings and Bregman Methods in Image Processing, PhD thesis, University of Mannheim, 2009.
[31] G. Steidl, A note on the dual treatment of higher order regularization functionals, Computing 76 (2006) 135-148.
[32] X.C. Tai, S. Borok, J. Hahn, Image denoising using TV-Stokes equation with an orientation-matching minimization, in: SSVMCV, vol. 5567, 2009, pp. 490-501.
[33] X.C. Tai, C.L. Wu, Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model, in: SSVMCV, vol. 5567, 2009, pp. 502-513.
[34] C. Vogel, M. Oman, Iterative methods for total variation denoising, SIAM J. Sci. Comput. 17 (1) (1996) 227-238.
[35] C.L. Wu, X.C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sci. 3 (3) (2010) 300-339.
[36] Y. You, M. Kaveh, Fourth-order partial differential equation for noise removal, IEEE Trans. Image Process. 9 (10) (2000) 1723-1730.