Digital Signal Processing 20 (2010) 1656–1676

New total variation regularized L^1 model for image restoration

Yuying Shi *, Xiaozhong Yang

Department of Mathematics and Physics, North China Electric Power University, Beijing 102206, China

Article history: Available online 4 January 2010

Abstract

In this paper, we propose three new time dependent models, based on the total variation regularized L^1 model, for solving total variation minimization problems in image restoration. The main idea is to apply a priori smoothness on the solution image and to regularize the parabolic term. We also propose a kind of continuous boundary conditions, i.e. mean boundary conditions. Numerical experimental results demonstrate that the three new models need far fewer iterations than the original total variation L^1 model, and that mean boundary conditions are efficient for image restoration.

Keywords: Image restoration; Total variation; Mean BCs; TVL1 model

1. Introduction

In a typical image restoration problem, g denotes an observed or measured incomplete portion of a clean "good" image f on the image domain Ω. A simplified but already very powerful data model in various digital applications is blurring followed by noise degradation:

$$ g = h * f + r, $$

where h is a continuous blur kernel, often assumed to be linear or even shift-invariant, and r an additive noise. The goal of image restoration is to reconstruct f as faithfully as possible from g. The data are subject to

$$ \| h * f - g \|_{L^2}^2 = |\Omega| \sigma^2. \qquad (1.1) $$

Minimization of total variation (TV) under L^1 data fidelity amounts to solving the following model:

$$ \inf_{f \in BV} \mathrm{TV}(f) + \lambda \| g - h * f \|_{L^1}, \qquad (1.2) $$

for the minimizer f_λ and r_λ = g − h ∗ f_λ, where BV is the space of functions of bounded variation, TV(f) is the total variation of f, and g ∈ L^1(Ω). From (1.2), the following time dependent model can be derived:

$$ f_t = \nabla \cdot \left( \frac{\nabla f}{|\nabla f|} \right) - \lambda h * \frac{h * f - g}{\sqrt{(h * f - g)^2 + \epsilon}}, \qquad (1.3) $$

with f(x, 0) given as initial data and homogeneous Neumann boundary conditions (BCs), where ε is a small constant. We call this the TVL1 model.

* Corresponding author. E-mail addresses: [email protected] (Y. Shi), [email protected] (X. Yang).

doi:10.1016/j.dsp.2009.12.007


The standard Rudin–Osher–Fatemi (ROF) model [15] is known to have certain limitations. One important issue is the loss of contrast. For example, suppose the observed image is f = c 1_{B_r(0)}, which has intensity c > 0 at all points (x_1, x_2) inside the disk B_r(0) ≡ {(x_1, x_2): x_1^2 + x_2^2 ≤ r^2} and intensity 0 everywhere else. The solution of the ROF model is of the form c' f(x), where c' ∈ [0, 1) is a constant. We never get c' = 1, no matter how the value of λ is chosen. Many variants of the original ROF model therefore replace the fidelity term with weaker norms. One of them replaces the L^2 norm by the L^1 norm [6].

Previous work on the TVL1 model for image processing includes Nikolova's [14,13] discovery of the usefulness of this model for removing impulsive noise, a series of applications of this model in computer vision by Chen et al. [7], Yin et al.'s multiscale decomposition [23] and the references therein. In [1–3], S. Alliney restricts his studies to the one-dimensional and discrete cases. J. Darbon et al. [8,9] point out that the TVL1 model yields a self-dual and contrast invariant filter, and reformulate the problem into independent binary Markov Random Fields (MRFs) attached to each level set of an image. The major advantages of the TVL1 model over the ROF model are as follows: firstly, the TVL1 model is totally geometric with respect to the shape of the level sets [8,9,24]. Secondly, small features in the image maintain their contrast even as the fidelity parameter λ is lowered. Thirdly, a data driven scale selection technique can be derived.

The contributions of this paper are the following. We propose two fast methods to reduce the number of iterations. One applies a prior smoothness on the solution image before every iteration. The other regularizes the parabolic term to accelerate the convergence rate and improve the restored image quality.

The rest of this paper is organized as follows. We first discuss the BCs in Section 2. In Section 3 we present three new models based on the TVL1 model. In Section 4, discretization schemes for the three new models and the TVL1 model are presented. We give the numerical experiments and test the effectiveness of the three new models in Section 5. Some concluding remarks are given in Section 6.

2. Different BCs

The purpose of this section is to discuss the choice of the BCs with regard to the precision of the restoration, especially close to the boundaries (presence of ringing effects). For a blurred and noisy image, we cannot exactly know the BCs in the process of convolution, but we must predetermine reference BCs in the process of deconvolution. Hence we want to recover

$$ f = \begin{pmatrix} f(1,1) & f(1,2) & \cdots & f(1,n) \\ f(2,1) & f(2,2) & \cdots & f(2,n) \\ \vdots & \vdots & & \vdots \\ f(n,1) & f(n,2) & \cdots & f(n,n) \end{pmatrix} \qquad (2.1) $$

from

$$ g = \begin{pmatrix} g(1,1) & g(1,2) & \cdots & g(1,n) \\ g(2,1) & g(2,2) & \cdots & g(2,n) \\ \vdots & \vdots & & \vdots \\ g(n,1) & g(n,2) & \cdots & g(n,n) \end{pmatrix} \qquad (2.2) $$

in case of knowing

$$ h = \begin{pmatrix} h(-m,-m) & h(-m,-m+1) & \cdots & h(-m,m) \\ h(-m+1,-m) & h(-m+1,-m+1) & \cdots & h(-m+1,m) \\ \vdots & \vdots & & \vdots \\ h(m,-m) & h(m,-m+1) & \cdots & h(m,m) \end{pmatrix} \qquad (2.3) $$

and r. However, the values g(i, j), 1 ≤ i, j ≤ n, depend on the unknown f(i, j) for 1 − m ≤ i, j ≤ n + m, i.e., not only on (2.1) but also on f(i, j) with

$$ -m+1 \le i \le 0, \quad -m+1 \le j \le n+m, \qquad (2.4) $$
$$ n+1 \le i \le n+m, \quad -m+1 \le j \le n+m, \qquad (2.5) $$
$$ 1 \le i \le n, \quad -m+1 \le j \le 0, \qquad (2.6) $$
$$ 1 \le i \le n, \quad n+1 \le j \le n+m. \qquad (2.7) $$

The choice of BCs amounts to making certain assumptions on the unknown boundary data f(i, j) in (2.4)–(2.7). A classical choice is to use zero (Dirichlet) BCs [14], where we assume that the data of the image f outside the domain of consideration are zero. But if the true image is not close to zero at the boundaries, the Dirichlet BCs introduce an artificial discontinuity, which in turn implies a Gibbs phenomenon. R. Gonzalez et al. [10] introduce periodic BCs. However, the image at the right boundary generally has nothing to do with the image at the left boundary, so we are again introducing an artificial discontinuity, which in turn leads to ringing effects. M. Ng et al. [12] consider reflecting BCs, which means that the data outside the domain of consideration are taken as a reflection of the data inside. The reflection guarantees continuity, but generally fails to guarantee the continuity of the normal derivative except in the nongeneric case, where the normal derivative at the boundary is zero.


Fig. 1. Original kitten and a 100 × 100 part of original kitten.

Fig. 2. Original linhaifa and a 100 × 100 part of original linhaifa.

S. Serra-Capizzano [16] describes antireflective BCs, which ensure C^1 continuity in the case of one-dimensional signals, and C^0 continuity plus continuity of the normal derivative in the case of images. Y. Shi et al. [17,19] introduce mean BCs, which likewise ensure C^1 continuity in the case of one-dimensional signals and C^0 continuity plus continuity of the normal derivative in the case of images, but can attain a smaller error than the antireflective BCs. On the other hand, while the periodic, reflective, and antireflective BCs are related to fast O(n^2 log n) transforms, this good computational feature is not shared by the mean BCs (see the introduction of [19]). The experiments there show the effectiveness of the mean BCs. A detailed analysis of the five kinds of BCs is given in [12,16,19]. A sketch of these boundary extensions is given below.
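To make the discussion concrete, here is a minimal Python sketch of how the classical extensions above pad a one-dimensional signal by m samples. Zero, periodic and reflecting BCs map directly onto numpy's pad modes; the antireflective extension of [16] is built by hand. The mean BCs of [17,19] are not reproduced here, since their construction is given in those references; function and variable names are illustrative.

import numpy as np

def pad_signal(f, m, bc="reflective"):
    """Extend the 1-D signal f by m samples on each side under the given BCs."""
    if bc == "zero":                      # Dirichlet: data outside the domain is 0
        return np.pad(f, m, mode="constant", constant_values=0)
    if bc == "periodic":                  # wrap-around extension
        return np.pad(f, m, mode="wrap")
    if bc == "reflective":                # mirror the data inside the domain [12]
        return np.pad(f, m, mode="symmetric")
    if bc == "antireflective":            # 2*f(edge) - mirror, C^1 at the edge [16]
        left = 2.0 * f[0] - f[m:0:-1]
        right = 2.0 * f[-1] - f[-2:-m - 2:-1]
        return np.concatenate([left, f, right])
    raise ValueError(bc)

f = np.array([4.0, 5.0, 7.0, 6.0])
print(pad_signal(f, 2, "antireflective"))   # [1. 3. 4. 5. 7. 6. 5. 7.]

Note how the antireflective extension continues the boundary slope instead of mirroring it, which is what removes the artificial kink that the reflecting BCs leave in the normal derivative.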


Fig. 3. Original oldpainting and a 100 × 100 part of original oldpainting.

Fig. 4. Noisy kitten image corrupted with salt and pepper noise (2%) (left); a 100 × 100 part of noisy kitten image (right).

3. Three new models

Because the computational process of Eq. (1.3) is slow, we introduce three new models based on the TVL1 model in this section.

3.1. The RTVL1 model

The famous ROF/TVL2 model [15] is given by

$$ f_t = \nabla \cdot \left( \frac{\nabla f}{|\nabla f|} \right) - \lambda h * (h * f - g), \qquad (3.1) $$

with f(x, 0) given as initial data and homogeneous Neumann BCs. In order to regularize the parabolic term, the authors of [11] multiply the right-hand side of Eq. (3.1) by the magnitude of the gradient and obtain another model:


Fig. 5. Noisy linhaifa image corrupted with salt and pepper noise (20%) (left); a 100 × 100 part of noisy linhaifa image (right).

Fig. 6. Noisy linhaifa image corrupted with Laplace noise (b = 1/20) (left); a 100 × 100 part of noisy linhaifa image (right).


Fig. 7. Noisy oldpainting image corrupted with Laplace noise (b = 1/30) (left); a 100 × 100 part of noisy oldpainting image (right).

Fig. 8. Blurry and noisy oldpainting image corrupted with Gaussian kernel (kn = 5, σ = 5) and Laplace noise (b = 1/10) (left); a 100 × 100 part of blurry and noisy oldpainting image (right).

$$ f_t = |\nabla f| \, \nabla \cdot \left( \frac{\nabla f}{|\nabla f|} \right) - |\nabla f| \, \lambda h * (h * f - g), \qquad (3.2) $$

with f(x, 0) given as initial data and homogeneous Neumann BCs as above. We call this the RTVL2 model. Numerical results [11] show that the RTVL2 model needs a smaller number of iterations than the TVL2 model. Motivated by this, we propose a new model in the same way, starting from the TVL1 model:

$$ f_t = |\nabla f| \, \nabla \cdot \left( \frac{\nabla f}{|\nabla f|} \right) - |\nabla f| \, \lambda h * \frac{h * f - g}{\sqrt{(h * f - g)^2 + \epsilon}}, \qquad (3.3) $$

with f(x, 0) given as initial data (the original blurred and noisy image g is used as the initial guess) and mean BCs, where ε is a small constant. We call this the RTVL1 model.


Fig. 9. Noisy oldpainting image corrupted with random Gaussian noise (σ = 40; 100 × 100 pixels are corrupted) (left); a 100 × 100 part of noisy oldpainting image (right).

3.2. The NTVL1 model

Witkin [22] notices that convolving a signal with a Gaussian at each scale is equivalent to solving the heat equation with the signal as initial datum. Because the noise is often assumed to be additive, Y. Shi et al. [17] apply a priori smoothness on the solution image of (3.1) and propose a new model derived from the ROF model:

$$ f_t = \nabla \cdot \left( \frac{\nabla G_\sigma * f}{|\nabla G_\sigma * f|} \right) - \lambda h * (h * G_\sigma * f - g), \qquad (3.4) $$

with f(x, 0) given as initial data and different kinds of BCs. We call this the NTVL2 model. In effect, a denoising technique is applied before the restoration problem (3.1) is solved. The term (G_σ ∗ ∇f)(x, t) = (∇G_σ ∗ f)(x, t), which appears inside the divergence term of (3.4), is simply the gradient of the solution at time σ of the heat equation with f(x, t) as initial datum. In order to preserve the notion of scale in the gradient estimate, it is convenient that the kernel G_σ depends on a scale parameter. According to Witkin's model, time is interpreted as a "scale factor." (More precisely, the solution f(x, t) at time t corresponds to a scale t^{1/2}. Indeed, roughly speaking, f(x, t) appears in the Witkin model as a smooth version of g obtained by convolving it with a filter of spatial width t^{1/2}.) Thus we should choose the scale parameter (spatial width of the filtering) to be σ^{1/2} [4,5]. Here, we apply a priori smoothness on the solution image of (1.3) and get another new model:

$$ f_t = \nabla \cdot \left( \frac{\nabla G_\sigma * f}{|\nabla G_\sigma * f|} \right) - \lambda h * \frac{h * G_\sigma * f - g}{\sqrt{(h * G_\sigma * f - g)^2 + \epsilon}}, \qquad (3.5) $$

with f(x, 0) given as initial data and mean BCs, where ε is a small constant. We call this the NTVL1 model. A sketch of this prior smoothing step is given below.
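The following is a minimal Python sketch of the prior-smoothness idea behind (3.5) for the pure denoising case (h taken as the identity): each evaluation first convolves the current iterate with a Gaussian G_σ before the curvature and fidelity terms are formed. The function name and default parameter values are illustrative, not taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def ntvl1_rhs(f, g, lam, sigma=2.0, beta=1e-2, eps=1e-4):
    """Right-hand side of (3.5) with h = identity (denoising only)."""
    fs = gaussian_filter(f, sigma)              # G_sigma * f, the prior smoothing
    fy, fx = np.gradient(fs)                    # components of grad(G_sigma * f)
    mag = np.sqrt(fx**2 + fy**2 + beta)         # |grad| regularized by beta
    div = np.gradient(fx / mag, axis=1) + np.gradient(fy / mag, axis=0)
    fidelity = lam * (fs - g) / np.sqrt((fs - g)**2 + eps)
    return div - fidelity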

3.3. The RNTVL1 model

Y. Shi et al. [18] regularize the parabolic term of (3.4) as in Section 3.1 and get a new model:

$$ f_t = |\nabla G_\sigma * f| \, \nabla \cdot \left( \frac{\nabla G_\sigma * f}{|\nabla G_\sigma * f|} \right) - |\nabla G_\sigma * f| \, \lambda h * (h * G_\sigma * f - g), \qquad (3.6) $$

with f(x, 0) given as initial data and mean BCs. We call this the RNTVL2 model. In [18], Y. Shi et al. present a proof of the existence, uniqueness and stability of the viscosity solutions of (3.3) and (3.6). To reduce the number of iterations, we regularize the parabolic term of (3.5) and get the third new model:

$$ f_t = |\nabla G_\sigma * f| \, \nabla \cdot \left( \frac{\nabla G_\sigma * f}{|\nabla G_\sigma * f|} \right) - |\nabla G_\sigma * f| \, \lambda h * \frac{h * G_\sigma * f - g}{\sqrt{(h * G_\sigma * f - g)^2 + \epsilon}}, \qquad (3.7) $$


Fig. 10. From top-left: restored part of kitten image (Fig. 4 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

where ε is a small constant, with f(x, 0) given as initial data and mean BCs. We call this the RNTVL1 model. Since (1.3), (3.3), (3.5) and (3.7) are not well defined at points where ∇f = 0, due to the presence of the term 1/|∇f|, it is common to slightly perturb the TV functional to become

$$ \int_\Omega |\nabla f|_\beta \, dx = \int_\Omega \sqrt{|\nabla f|^2 + \beta} \, dx. $$

The parameter β > 0 is a regularization parameter and is usually small. This completes the introduction of the three new models (the RTVL1, NTVL1 and RNTVL1 models).

4. Discretization of the four models

In the following sections, we discretize the four models (the TVL1, RTVL1, NTVL1 and RNTVL1 models). We still write G_σ ∗ f as f. Let f_{ij}^n be the approximation of the value f(x_i, y_j, t_n), where x_i = i Δx, y_j = j Δy, and t_n = n Δt, n ≥ 1, and where Δx, Δy, and Δt are the spatial stepsizes and the time stepsize, respectively.


Fig. 11. From top-left: modified restored error of kitten image (Fig. 4 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

Table 1
ISNR, iterative numbers and max(|e_mr|) using the TVL1, RTVL1, NTVL1 and RNTVL1 models for all examples.

Example                  Quantity           TVL1       RTVL1     NTVL1     RNTVL1
Example 1                ISNR               5.03981    5.91956   2.88041   2.70624
                         iterative number   180        9         2         2
                         max(|e_mr|)        111.8416   65.9507   77.7242   78.5075
Example 1 (adaptive λ)   ISNR               4.83015    5.49727   –         –
                         iterative number   157        7         –         –
                         max(|e_mr|)        145.1243   71.2896   –         –
Example 2                ISNR               6.04375    8.09957   2.86414   2.95240
                         iterative number   400        12        2         2
                         max(|e_mr|)        90.9696    52.6569   82.2654   75.2842
Example 3                ISNR               5.58952    5.62587   3.96296   4.06148
                         iterative number   150        10        1         1
                         max(|e_mr|)        82.3183    51.0669   61.7479   62.6601
Example 4                ISNR               8.88168    8.92143   8.76081   8.86866
                         iterative number   360        50        1         1
                         max(|e_mr|)        114.2011   64.1318   53.6468   53.6274
Example 5                ISNR               3.19676    3.28080   3.18934   3.23113
                         iterative number   80         10        1         1
                         max(|e_mr|)        75.2404    54.4138   55.1278   54.9163
Example 6                ISNR               5.83256    8.45396   6.00636   6.18769
                         iterative number   100        10        1         1
                         max(|e_mr|)        97.3015    34.9188   41.5568   43.2947

The TVL1 and NTVL1 models, written in terms of explicit partial derivatives, can be expressed as

$$ f_t = \frac{f_{xx}(f_y^2 + \beta) - 2 f_{xy} f_x f_y + f_{yy}(f_x^2 + \beta)}{(f_x^2 + f_y^2 + \beta)^{3/2}} - \lambda h * \frac{h * f - g}{\sqrt{(h * f - g)^2 + \beta}}, \qquad (4.1) $$


Fig. 12. From top-left: restored part of kitten image (Fig. 4 (right)) by TVL1 model; by RTVL1 model; modified restored error of kitten image (Fig. 4 (right)) by TVL1 model; by RTVL1 model.

using g as initial guess and mean BCs. The RTVL1 and RNTVL1 models, written in terms of explicit partial derivatives, can be expressed as

$$ f_t = \frac{f_{xx}(f_y^2 + \beta) - 2 f_{xy} f_x f_y + f_{yy}(f_x^2 + \beta)}{f_x^2 + f_y^2 + \beta} - \sqrt{f_x^2 + f_y^2 + \beta} \; \lambda h * \frac{h * f - g}{\sqrt{(h * f - g)^2 + \beta}}, \qquad (4.2) $$

using g as initial guess and mean BCs. We define the discrete derivative terms as (see [20,21,19])

$$ f_{ij}^x = \frac{f_{i+1,j}^n - f_{i-1,j}^n}{2 \Delta x}, \qquad f_{ij}^y = \frac{f_{i,j+1}^n - f_{i,j-1}^n}{2 \Delta y}, $$

$$ f_{ij}^{xx} = \frac{f_{i+1,j}^n - 2 f_{i,j}^n + f_{i-1,j}^n}{\Delta x^2}, \qquad f_{ij}^{yy} = \frac{f_{i,j+1}^n - 2 f_{i,j}^n + f_{i,j-1}^n}{\Delta y^2}, $$


Fig. 13. From top-left: restored part of linhaifa image (Fig. 5 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

$$ f_{ij}^{xy} = \frac{f_{i+1,j+1}^n - f_{i-1,j+1}^n - f_{i+1,j-1}^n + f_{i-1,j-1}^n}{4 \Delta x \Delta y}, \qquad f_{ij}^t = \frac{f_{i,j}^{n+1} - f_{i,j}^n}{\Delta t}. $$

We let

$$ s_{ij}^n = f_{ij}^{xx} \big( (f_{ij}^y)^2 + \beta \big) - 2 f_{ij}^{xy} f_{ij}^x f_{ij}^y + f_{ij}^{yy} \big( (f_{ij}^x)^2 + \beta \big), $$

and

$$ v_{ij}^n = \sqrt{ (f_{ij}^x)^2 + (f_{ij}^y)^2 + \beta }. $$

Then (4.1) reads as follows:


Fig. 14. From top-left: modified restored error of linhaifa image (Fig. 5 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

$$ f_{ij}^t = \frac{s_{ij}^n}{(v_{ij}^n)^3} - \lambda h * \frac{h * f_{ij}^n - g_{ij}}{\sqrt{(h * f_{ij}^n - g_{ij})^2 + \epsilon}}. \qquad (4.3) $$

Then (4.2) reads as follows:

$$ f_{ij}^t = \frac{s_{ij}^n}{(v_{ij}^n)^2} - v_{ij}^n \, \lambda h * \frac{h * f_{ij}^n - g_{ij}}{\sqrt{(h * f_{ij}^n - g_{ij})^2 + \epsilon}}. \qquad (4.4) $$
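As an illustration, here is a minimal Python sketch of one explicit time step of schemes (4.3) and (4.4), for the pure denoising case h = identity and Δx = Δy = 1. The border handling via np.roll is effectively periodic and is an assumption made here for brevity; the paper uses mean BCs [19].

import numpy as np

def explicit_step(f, g, lam, dt=0.1, beta=1e-2, eps=1e-4, regularized=False):
    """One forward Euler step of (4.3) (TVL1) or (4.4) (RTVL1), h = identity."""
    # central first differences f_x, f_y (x along axis 1, y along axis 0)
    fx = (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / 2.0
    fy = (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / 2.0
    # second differences f_xx, f_yy and the mixed difference f_xy
    fxx = np.roll(f, -1, 1) - 2.0 * f + np.roll(f, 1, 1)
    fyy = np.roll(f, -1, 0) - 2.0 * f + np.roll(f, 1, 0)
    fxy = (np.roll(np.roll(f, -1, 1), -1, 0) - np.roll(np.roll(f, 1, 1), -1, 0)
           - np.roll(np.roll(f, -1, 1), 1, 0) + np.roll(np.roll(f, 1, 1), 1, 0)) / 4.0
    s = fxx * (fy**2 + beta) - 2.0 * fxy * fx * fy + fyy * (fx**2 + beta)
    v = np.sqrt(fx**2 + fy**2 + beta)
    fid = lam * (f - g) / np.sqrt((f - g)**2 + eps)
    if regularized:                   # RTVL1 step, scheme (4.4)
        ft = s / v**2 - v * fid
    else:                             # TVL1 step, scheme (4.3)
        ft = s / v**3 - fid
    return f + dt * ft                # f^{n+1} = f^n + dt * f_t

Iterating f = explicit_step(f, g, lam) from the initial guess f = g marches the time dependent model forward; the regularized variant simply scales both terms by the gradient magnitude v, which is what accelerates the convergence discussed below.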

5. Numerical experiments and discussions

In this section, we deal with three original images with range [0, 255] (denoted by f; see Fig. 1 (left)–Fig. 3 (left)). We consider noisy images with salt and pepper noise (Fig. 4 (left) and Fig. 5 (left)), noisy images with Laplace noise (Fig. 6 (left) and Fig. 7 (left)), and a noisy image with Gaussian noise (Fig. 9 (left)). In addition, we consider a Gaussian blurring kernel, defined as

$$ h(x, y) = \frac{1}{2 \pi \alpha^2} \, e^{-(x^2 + y^2)/(2 \alpha^2)}. \qquad (5.1) $$

The probability density function of Laplace(θ, b) is F(x|θ, b) = (1/(2b)) exp(−|x − θ|/b). In real life, we cannot obtain the whole blurred and noisy image but only part of it. From this part of the blurred and noisy image, we want to reconstruct an approximation of the true image. When we deconvolve the part of the blurred and noisy image, we must make some assumptions on the values outside it. Our mean BCs will demonstrate their efficiency here. To compare the four models (TVL1, RTVL1, NTVL1 and RNTVL1), we select a part (100 × 100; see the parts of the blurred and noisy images in Fig. 4 (right)–Fig. 9 (right)) from each blurred and noisy image (Fig. 4 (left)–Fig. 9 (left)). A sketch of this degradation process is given below.
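For concreteness, here is a minimal Python sketch of the degradation pipeline just described: the discrete Gaussian blur kernel (5.1) and additive Laplace noise. Normalizing the discrete kernel to unit sum, the stand-in image and the parameter values are assumptions for illustration.

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(kn, alpha):
    """Discrete version of (5.1) on a kn x kn grid; odd kn = 2m+1 assumed."""
    m = (kn - 1) // 2
    x, y = np.meshgrid(np.arange(-m, m + 1), np.arange(-m, m + 1))
    h = np.exp(-(x**2 + y**2) / (2.0 * alpha**2)) / (2.0 * np.pi * alpha**2)
    return h / h.sum()                 # normalize the discrete kernel to sum 1

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=(100, 100))   # stand-in for a true image
h = gaussian_kernel(5, alpha=5.0)          # kn = 5, as in Example 5
g = convolve(f, h, mode="nearest") + rng.laplace(0.0, 1.0 / 10.0, size=f.shape)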


Fig. 15. From top-left: restored part of linhaifa image (Fig. 6 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

We need to reconstruct these parts of the blurred and noisy images (Fig. 4 (right)–Fig. 9 (right)), and then compare the restored partial images with the corresponding partial true images (Fig. 1 (right)–Fig. 3 (right)), respectively. Experimental results are then discussed. We use the signal to noise ratio (SNR) of the image f to measure the level of noise,

$$ \mathrm{SNR} = 10 \cdot \log_{10} \frac{\sum_{i,j} (f_{ij})^2}{\sum_{i,j} (r_{ij})^2}. $$

The blurred signal to noise ratio (BSNR) is used to measure the ratio of the level of the blur kernel to the level of the noise,

$$ \mathrm{BSNR} = 10 \cdot \log_{10} \frac{\sum_{i,j} [h * f_{ij} - \bar{f}]^2}{\sum_{i,j} r_{ij}^2}, $$

where \bar{f} = \sum_{i,j} f_{ij} / n^2 is the mean of the image f = (f_{ij}), 1 ≤ i ≤ n, 1 ≤ j ≤ n.
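These two measures translate directly into code; a minimal Python sketch (array names are illustrative; r is the noise realization and hf the blurred, noise-free image h ∗ f):

import numpy as np

def snr(f, r):
    """Signal to noise ratio of image f with noise r."""
    return 10.0 * np.log10(np.sum(f**2) / np.sum(r**2))

def bsnr(hf, f, r):
    """Blurred SNR; fbar is the mean of the image f, as defined in the text."""
    fbar = f.mean()
    return 10.0 * np.log10(np.sum((hf - fbar)**2) / np.sum(r**2))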


Fig. 16. From top-left: modified restored error of linhaifa image (Fig. 6 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

It is sometimes difficult to judge with the naked eye which restored image is better than the others. To show the differences between the images restored by the different models more clearly, we choose a quantitative measure, the improvement in signal to noise ratio (ISNR). The ISNR value is computed from the partial true images, the partial blurred and noisy images, and the images restored by the four models. The ISNR measures the goodness of a restored image:

$$ \mathrm{ISNR} = 10 \cdot \log_{10} \frac{\sum_{i,j} [f_{ij} - g_{ij}]^2}{\sum_{i,j} [f_{ij} - (f_{new})_{ij}]^2}, $$

where f_new is the restored image. That is, the larger the value of ISNR, the better the restored image [19]. To show the effectiveness of mean BCs, we let the restored error be e_r = f − f_r [19], where f is the true image and f_r is the restored image. Assume the size of the Gaussian kernel is kn = 2m + 1 or kn = 2m + 2 in the blur kernel (5.1), so that the Gaussian kernel can be represented by a (2m + 1) × (2m + 1) matrix. We observe the controlled image (100 × 100), so we need to consider a ((100 + 2m) × (100 + 2m)) image in the process of deconvolution. Thus only m values along the boundary are impacted by the mean BCs. In order to see more clearly the differences of the restored part that relate to the mean BCs, the values of the part of e_r ((100 − 2m) × (100 − 2m)) that is foreign to the BCs are set to zero. The modified restored error is denoted by e_mr; a sketch of both quantities follows.
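A minimal Python sketch of the ISNR computation and of the masking that produces e_mr (the frame width m and the array names are assumptions for illustration):

import numpy as np

def isnr(f, g, f_new):
    """Improvement in SNR of the restoration f_new over the observation g."""
    return 10.0 * np.log10(np.sum((f - g)**2) / np.sum((f - f_new)**2))

def modified_restored_error(f, f_r, m):
    """Keep only the m-pixel frame of e_r that the BCs actually influence."""
    e_r = f - f_r                       # restored error
    e_mr = e_r.copy()
    e_mr[m:-m, m:-m] = 0.0              # zero the interior foreign to the BCs
    return e_mr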

5.1. Details for choice of parameters

The choice of parameters is important for the results. Here we discuss their impact. β is a regularization parameter guarding against |∇f| = 0. ε is a regularization parameter guarding against division by zero. If β or ε is chosen too large, the restored image may be too smooth; but if β or ε is too small, the convergence may become slow, i.e. the computational process needs many iterations. We let β = 0.01 and ε = 10^{-4}. Alternatively, one can choose β = 10^{-32} [13], the smallest positive machine number.


Fig. 17. From top-left: restored part of oldpainting image (Fig. 7 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

Δt impacts the iteration speed. For large Δt, the iterations converge faster, but the scheme becomes unstable. However, if Δt is chosen too small, the convergence is very slow. We let Δt = 0.1. The Lagrange multiplier λ impacts the restoration effect. Large λ yields a blurry, oversmoothed restored image, and small λ corresponds to very little noise removal. We choose λ according to the image at hand, in order to obtain the best ISNR. The spatial width of the kernel G_σ impacts the number of assumed pixels outside the part of the blurred and noisy image in the deconvolution process. If the spatial width is chosen too large, the restored image is oversmoothed; but if it is chosen too small, the restored image is noisy. Here the spatial width is chosen to be √σ (σ is the standard deviation of the noise). To show the restored boundary clearly, we select m = 2 when showing e_mr. The initial guess is the blurred and noisy (or noisy) image g. The stopping rule for determining the "optimal" iterate f^{k̄} among the iterates {f^k} is to seek the best visual result and the largest ISNR using the fewest iterations.


Fig. 18. From top-left: modified restored error of oldpainting image (Fig. 7 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

5.2. Numerical experiments

Example 1 (Kitten image with salt and pepper noise). This example denoises a kitten image with small salt and pepper noise (2%). The SNR is 15.9832. The variance of the noise is nearly 20, so the spatial width of the kernel G_σ is chosen to be 4. We choose the parameter λ = 0.085. The restored parts of the images by the four models (TVL1, RTVL1, NTVL1 and RNTVL1) are shown in Fig. 10. The modified restored errors e_mr by the four models are shown in Fig. 11. We can distinctly see where the eye of the kitten is in all four restored images. It should be noticed that the eyeball of the kitten restored by the TVL1 model and the RTVL1 model can be seen, but the eyeball restored by the NTVL1 model and the RNTVL1 model cannot be clearly seen. So we can say that the NTVL1 model and the RNTVL1 model oversmooth the restored images. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute value of e_mr. The max(|e_mr|) of the TVL1 model is the largest.

Different from the fixed λ above, we also apply an adaptive value in every iteration. If we assume that the variance of the noise is known, we can adjust λ so that the energy contained in the residual corresponds to the energy of the noise (a hedged sketch of such an update appears after Example 2). Because the iterative number is usually 1 when the prior smoothness is applied (Table 1), the value of λ would then not be adjusted; hence we only test the TVL1 model and the RTVL1 model to compare the results between fixed λ and adaptive λ. The initial λ is set to 0.01. Fig. 12 shows the restored images and the corresponding modified restored errors by the TVL1 model and the RTVL1 model. Table 1 shows the results by the TVL1 model and the RTVL1 model. The max(|e_mr|) of the TVL1 model is larger than that of the RTVL1 model. Compared with the fixed λ, this adaptive choice of λ does not guarantee the best possible restored image.

Example 2 (Linhaifa image with salt and pepper noise). This example denoises the linhaifa image with large salt and pepper noise (20%). The SNR is 17.3763. The standard deviation of the noise is nearly 18, so the spatial width of the kernel G_σ is chosen to be 4. We choose the parameter λ = 0.2. Fig. 13 depicts the restored parts of the noisy images using the TVL1, RTVL1, NTVL1 and RNTVL1 models. All of the restored images show the details and edges clearly. The restored images by the NTVL1 model and the RNTVL1 model are a little oversmoothed. The modified restored errors e_mr by the four models are shown in Fig. 14. The e_mr of the RTVL1 model is the smallest. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute value of e_mr. The max(|e_mr|) of the TVL1 model is the largest. We have tested these models on images with larger salt and pepper noise; the results are also satisfactory.
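The paper does not spell out its adaptive update formula for λ. The following is a hedged Python sketch of one natural reading of the description above: after each iteration, λ is rescaled by the ratio of the current residual energy to the known noise energy, cf. (1.1). Both the update rule and the function name are assumptions for illustration.

import numpy as np

def update_lambda(lam, residual, noise_var):
    """Rescale lambda so the residual energy is driven toward |Omega|*sigma^2.

    This multiplicative rule is an assumed reading of the text, not the
    paper's stated formula."""
    target = noise_var * residual.size      # |Omega| * sigma^2, cf. (1.1)
    return lam * np.sum(residual**2) / target

Under this rule, a residual energy larger than the noise energy increases λ (more fidelity weight), and a smaller one decreases it.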


Fig. 19. From top-left: restored part of oldpainting image (Fig. 8 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

Example 3 (Linhaifa image with Laplace noise). This example denoises the linhaifa image with small Laplace noise (θ = 0, b = 1/20). The SNR is 16.6230. The standard deviation of the noise is nearly 20, so the spatial width of the Gaussian kernel is chosen to be 4. We choose λ = 0.005. The restored parts of the images by the four models (TVL1, RTVL1, NTVL1 and RNTVL1) are shown in Fig. 15. The modified restored errors e_mr by the four models are shown in Fig. 16. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute value of e_mr. The max(|e_mr|) of the TVL1 model is the largest. All of the restored images show the details and edges clearly.

Example 4 (Oldpainting image with Laplace noise). This example denoises the oldpainting image with large Laplace noise (θ = 0, b = 1/30). The SNR is 12.0437. The standard deviation of the noise is nearly 10, so the spatial width of the Gaussian kernel is chosen to be 3. We choose λ = 0.85. The restored parts of the images by the four


Fig. 20. From top-left: modified restored error of oldpainting image (Fig. 8 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

models (TVL1, RTVL1, NTVL1 and RNTVL1) are shown in Fig. 17. All of the restored images are almost the same. Only several points in the restored image by the TVL1 model are not well regularized. The modified restored errors e_mr by the four models are shown in Fig. 18. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute value of e_mr.

Example 5 (Oldpainting image with Gaussian blur and Laplace noise). To show that the three new models are efficient for blurred and noisy images, this example recovers the oldpainting image corrupted with Gaussian blur (kn = 5) and small Laplace noise (θ = 0, b = 1/10). The SNR is 21.4531 and the BSNR is 15.7378. The standard deviation of the noise is nearly 10, so the spatial width of the Gaussian kernel is chosen to be 3. We choose λ = 1. In the process of convolution, we assume Dirichlet BCs. Because we only control a part of the blurred and noisy image, the assumed BCs do not impact the results. The restored parts of the images by the four models (TVL1, RTVL1, NTVL1 and RNTVL1) are shown in Fig. 19. All of the restored images are almost the same. The modified restored errors e_mr by the four models are shown in Fig. 20. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute values of e_mr. The max(|e_mr|) of the TVL1 model is the largest.

Example 6 (Oldpainting image with random Gaussian noise). It is well known that the function imnoise in Matlab adds Gaussian noise to the whole image, so that every pixel is corrupted. Here, we add random Gaussian noise, i.e., Gaussian noise added randomly to some pixels of the original image. The detailed steps are as follows (see also the sketch below): we first randomly select 100 × 100 pixels (the number of corrupted pixels can be changed) among the 256 × 256 pixels of the original image; then we generate the Gaussian noise n (n = randn(100, 100) ∗ 40; here, the standard deviation of the Gaussian noise is chosen to be 40). At last, we add n to the selected pixels. In this way, some pixels are corrupted two times (three times or more) and some pixels are not corrupted. The random Gaussian noise can be seen as a kind of impulsive noise.

This example denoises the oldpainting image with random Gaussian noise (σ = 40; 100 × 100 points are corrupted). The SNR is 20.0401. The part around the eyes is corrupted badly (see Fig. 9). The standard deviation of the noise is nearly 10, so the spatial width of the Gaussian kernel is chosen to be 3. We choose λ = 0.2. The restored parts of the images by the four models (TVL1, RTVL1, NTVL1 and RNTVL1) are shown in Fig. 21.
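A minimal Python sketch of the random Gaussian noise generation described in Example 6; np.add.at accumulates noise at repeated indices, matching the remark that some pixels may be corrupted more than once. The seed and the stand-in image are illustrative.

import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=(256, 256))        # stand-in for the original image
rows = rng.integers(0, 256, size=100 * 100)     # repeated picks corrupt a pixel twice
cols = rng.integers(0, 256, size=100 * 100)
noise = rng.normal(0.0, 40.0, size=100 * 100)   # sigma = 40, as in the example
g = f.copy()
np.add.at(g, (rows, cols), noise)               # accumulate noise at selected pixels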


Fig. 21. From top-left: restored part of oldpainting image (Fig. 9 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

All the details of the restored images are clear, but the restored image by the RTVL1 model visually best approaches the part of the true oldpainting image. The modified restored errors e_mr by the four models are shown in Fig. 22. Table 1 shows the ISNR values, the iterative numbers and the maximum absolute value of e_mr. The max(|e_mr|) of the TVL1 model is the largest.

5.3. Discussions

From the values in Table 1, it can be seen that the ISNR value obtained with the RTVL1 model is larger than that obtained with the TVL1 model, and the corresponding iterative number is smaller. This shows that the regularization technique improves the restored quality and lessens the computational cost. It should be noticed that the iterative number of the NTVL1 model and the RNTVL1 model (only 1 or 2) is smaller than that of the RTVL1 model (7 or more) and considerably smaller than that of the TVL1 model (80 or more). Thus we can say that the prior smoothness lessens the computational cost by a long way, while the restored quality also improves.

For the oldpainting image (mostly composed of smooth parts), the differences in ISNR among the four models are not large, but the iterative numbers of the three new models are smaller than that of the TVL1 model. For the linhaifa and kitten images (mostly composed of textures, details and edges), the RTVL1 model (regularization of the parabolic term only)


Fig. 22. From top-left: modified restored error of oldpainting image (Fig. 9 (right)) by TVL1 model; by RTVL1 model; by NTVL1 model; by RNTVL1 model.

seems to give the best restored results (the best visual result, larger ISNR values and smaller max(|e_mr|)), and its iterative number is not large. The prior smoothness technique slightly oversmooths the linhaifa and kitten images.

The maximum absolute values max(|e_mr|) using the four models and mean BCs are shown in Table 1. From these values, we can clearly see that the max(|e_mr|) of the TVL1 model is the largest. The max(|e_mr|) of the RTVL1 model is almost the smallest in all of the experiments. It should be noticed, however, that the three new models (the RTVL1, NTVL1 and RNTVL1 models) do not differ much from one another.

6. Concluding remarks

In this paper, we have proposed three new time dependent models based on the TVL1 model to solve the image restoration problem. The main idea is to apply a priori smoothness on the solution image and to regularize the parabolic term. Applying a priori smoothness can be considered as adding "a low pass filter" before every iteration, i.e., a denoising technique is used before every iteration. It accelerates the convergence process and improves the restored quality. However, it cannot be used for a blurred image without noise. Regularizing the parabolic term also improves the restored quality and accelerates the convergence process. Nonlinear explicit schemes are used to discretize the TVL1, RTVL1, NTVL1 and RNTVL1 models. According to the numerical results, we think that if the image is mainly composed of large smooth parts with very small textures and details, the RNTVL1 model is the better choice; but if the image is mainly composed of textures, edges and details, it is good to use the RTVL1 model. To reduce the computational time, we will apply algebraic multigrid methods to solve these models in future work.

Acknowledgments

The work has been supported by NSFC (Nos. 10726035, 10771065, 10801049), the Foundation of North China Electric Power University and Hebei Province Natural Science (No. A2007001027). We thank the referees for very useful suggestions.


References

[1] S. Alliney, Digital filters as absolute norm regularizers, IEEE Trans. Signal Process. 40 (1992) 1548–1562.
[2] S. Alliney, Recursive median filters of increasing order: a variational approach, IEEE Trans. Signal Process. 44 (1996) 1346–1354.
[3] S. Alliney, A property of the minimum vectors of a regularizing functional defined by means of the absolute norm, IEEE Trans. Signal Process. 45 (1997) 913–917.
[4] L. Alvarez, P.-L. Lions, J.-M. Morel, Image selective smoothing and edge detection by nonlinear diffusion II, SIAM J. Numer. Anal. 29 (1992) 845–866.
[5] F. Catte, P.-L. Lions, J.-M. Morel, T. Coll, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Numer. Anal. 29 (1992) 182–193.
[6] T. Chan, S. Esedoglu, Aspects of total variation regularized L1 function approximation, SIAM J. Appl. Math. 65 (2005) 1817–1837.
[7] T. Chen, W. Yin, X. Zhou, D. Comaniciu, T. Huang, Total variation models for variable lighting face recognition, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2006) 1519–1524.
[8] J. Darbon, Total variation minimization with L1 data fidelity as a contrast invariant filter, in: Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (ISPA 2005), 15–17 September 2005, pp. 221–226.
[9] J. Darbon, M. Sigelle, Image restoration with discrete constrained total variation. Part I: Fast and exact optimization, J. Math. Imaging Vision 26 (2006) 261–276.
[10] R. Gonzalez, R. Woods, Digital Image Processing, Addison–Wesley, Reading, MA, 1992.
[11] A. Marquina, S. Osher, Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal, SIAM J. Sci. Comput. 22 (2000) 387–405.
[12] M.K. Ng, R.H. Chan, W.C. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput. 21 (1999) 851–866.
[13] M. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vision 20 (2004) 99–120.
[14] M. Nikolova, Minimizers of cost-functions involving nonsmooth data-fidelity terms, SIAM J. Numer. Anal. 40 (2004) 965–994.
[15] L. Rudin, S. Osher, Total variation based image restoration with free local constraints, in: Proceedings of the IEEE International Conference on Image Processing, Austin, TX, IEEE Press, Piscataway, NJ, 1994, pp. 31–35.
[16] S. Serra-Capizzano, A note on antireflective boundary conditions and fast deblurring models, SIAM J. Sci. Comput. 25 (2003) 1307–1325.
[17] Y. Shi, Q. Chang, A model for image deblurring and denoising based on the Rudin–Osher–Fatemi model, Chinese J. Comput. Phys. 23 (2006) 551–558.
[18] Y. Shi, Q. Chang, New time dependent model for image restoration, Appl. Math. Comput. 179 (2006) 121–134.
[19] Y. Shi, Q. Chang, Acceleration methods for image restoration problem with different boundary conditions, Appl. Numer. Math. 58 (2008) 602–614.
[20] G. Strang, Accurate partial difference methods II. Non-linear problems, Numer. Math. 6 (1964) 37–46.
[21] G. Strang, On the construction and comparison of difference schemes, SIAM J. Numer. Anal. 5 (1968) 506–517.
[22] A.P. Witkin, Scale-space filtering, in: Proceedings of IJCAI, Karlsruhe, 1983, pp. 1019–1021.
[23] W. Yin, T. Chen, X. Zhou, A. Chakraborty, Background correction for cDNA microarray images using the TV + L1 model, Bioinformatics 21 (2005) 2410–2416.
[24] W. Yin, D. Goldfarb, S. Osher, The total variation regularized L1 model for multiscale decomposition, Multiscale Model. Simul. 6 (2006) 190–211.

Yuying Shi received the Ph.D. degree in Applied Mathematics from the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China, in 2006. Since then she has been working at the Department of Mathematics and Physics, North China Electric Power University. Her research interests include the algebraic multigrid method, image restoration, and edge detection.

Xiaozhong Yang received the Ph.D. degree from Xi'an Jiaotong University, China, in 1997. He worked as a postdoc at the Institute of Atmospheric Physics, Chinese Academy of Sciences, from 1997 to 1999. He is currently a professor at the Department of Mathematics and Physics, North China Electric Power University. His research interests include image processing, PDEs, scientific computing and mathematical modeling.