A modified spectral conjugate gradient projection algorithm for Total Variation image restoration∗

Benxin Zhang, Zhibin Zhu†, Shuang'an Li

School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin, 541004, P. R. China

∗ This work was supported in part by NNSF (No. 11061011, 11361018) of China and the Guangxi Fund for Distinguished Young Scholars (2012GXSFFA060003).
† Corresponding author: Zhibin Zhu, [email protected].
Abstract

In this study, a modified spectral conjugate gradient projection method is presented to solve total variation image restoration, which is reformulated as a nonlinear constrained optimization problem over a closed convex constraint set. The global convergence of the proposed scheme is analyzed. Finally, some numerical results illustrate the efficiency of this method.

Keywords: Image restoration, total variation, constrained optimization, conjugate gradient projection, global convergence

1. Introduction
The total variation model, introduced by Rudin, Osher and Fatemi (ROF) in [11], is well known in image processing. The ROF model can preserve sharp discontinuities in an image while removing noise by solving the following minimization problem:

    min_u P(u) := ∫_Ω |∇u| dx + (λ/2) ‖u − f‖₂²,    (1.1)
where ∇ is the gradient operator and ∫_Ω |∇u| dx stands for the total variation of u, Ω is the image domain, a convex Lipschitz open set in R², f is the degraded image to restore, u of (1.1) is the restored image (see for instance [8] for a thorough mathematical analysis of this problem), λ is a weighting parameter which controls the amount of denoising, and |·| represents the Euclidean norm on R². The total variation of u has the following equivalent dual form:

    ∫_Ω |∇u| dx = max_{ω ∈ C¹_c, |ω| ≤ 1} ∫_Ω ∇u · ω = max_{|ω| ≤ 1} ∫_Ω −u ∇·ω,    (1.2)
where ω : Ω → R² is the dual variable and ∇· is the divergence operator. Some ideas from this duality were proposed first by Chan, Golub and Mulet [5], later by Carter [3] and Chambolle [6, 7]. Chambolle's projection algorithm [6] has become very popular, since many total variation minimization algorithms are based on it. In [15], the authors proposed nonmonotone Chambolle gradient projection algorithms for total variation image restoration. They used the well-known Barzilai-Borwein stepsize [10] instead of the constant time stepsize in Chambolle's method. Further, they adopted the adaptive nonmonotone line search proposed by Dai and Fletcher [16] to guarantee global convergence. Birgin and Martinez [9] proposed a spectral conjugate gradient method for unconstrained optimization. In [12], the authors proposed a modified conjugate gradient method for constrained optimization problems. Because of the large-scale data of an image, we hope that the conjugate gradient method can be introduced into image processing. In this paper, we propose a modified spectral conjugate gradient method for constrained optimization, based on the results of [9] and [12]. We apply the conjugate gradient method to solve the image restoration problem, and the global convergence is proved. In Section 2, we present the spectral conjugate gradient algorithm. In Section 3, some numerical results are given to illustrate the efficiency of the method.
2. The Main Results
Firstly, we review some definitions and basic results from [6]. We assume that the images are matrices of size N × N. X denotes the Euclidean space R^(N×N), and Y = X × X. The spaces X and Y are endowed with the scalar products

    ⟨s, t⟩_X = Σ_{1≤i,j≤N} s_{i,j} t_{i,j},    s, t ∈ X,

    ⟨p, q⟩_Y = Σ_{1≤i,j≤N} (p¹_{i,j} q¹_{i,j} + p²_{i,j} q²_{i,j}),    p = (p¹, p²), q = (q¹, q²) ∈ Y.
To define a discrete total variation, we introduce a discrete version of the gradient operator ∇ : X → Y, which is given by (∇u)_{i,j} = ((∇u)¹_{i,j}, (∇u)²_{i,j}), with

    (∇u)¹_{i,j} = u_{i+1,j} − u_{i,j} if i < N, and 0 if i = N,
    (∇u)²_{i,j} = u_{i,j+1} − u_{i,j} if j < N, and 0 if j = N,

for all i, j = 1, ..., N. Then, the total variation of u is defined by

    TV(u) := Σ_{1≤i,j≤N} |(∇u)_{i,j}|.

Therefore, the discretization of minimization problem (1.1) is defined as follows:

    min_{u∈X} TV(u) + (λ/2) ‖u − f‖²_X,    (2.1)
where u, f ∈ X are discretization vectors, and ‖·‖_X is the Euclidean norm. Chambolle gave the solution of (2.1) by

    u = f + π_{(1/λ)K}(−f),    (2.2)

where π_{(1/λ)K}(·) is the orthogonal projection onto the convex set (1/λ)K, with

    K := {∇·ω : ω ∈ Y, |ω_{i,j}| ≤ 1, ∀ i, j = 1, ..., N}.

We introduce a discrete divergence operator ∇· : Y → X which is defined, by analogy with the continuous setting, by ∇· = −∇*. That is, for every ω ∈ Y and u ∈ X, ⟨−∇·ω, u⟩_X = ⟨ω, ∇u⟩_Y. One checks easily that ∇· is given by

    (∇·ω)_{i,j} = { ω¹_{i,j} − ω¹_{i−1,j} if 1 < i < N;  ω¹_{i,j} if i = 1;  −ω¹_{i−1,j} if i = N }
                + { ω²_{i,j} − ω²_{i,j−1} if 1 < j < N;  ω²_{i,j} if j = 1;  −ω²_{i,j−1} if j = N }.
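For concreteness, these two operators take only a few lines of code. The following is a minimal NumPy sketch (the experiments of Section 3 were run in Matlab, so this is an illustrative translation with our own function names, not the authors' code):

```python
import numpy as np

def grad(u):
    """Discrete gradient X -> Y: forward differences, zero at the boundary."""
    g = np.zeros((2,) + u.shape)
    g[0, :-1, :] = u[1:, :] - u[:-1, :]   # (grad u)^1_{i,j} = u_{i+1,j} - u_{i,j}, 0 at i = N
    g[1, :, :-1] = u[:, 1:] - u[:, :-1]   # (grad u)^2_{i,j} = u_{i,j+1} - u_{i,j}, 0 at j = N
    return g

def div(w):
    """Discrete divergence Y -> X, the negative adjoint of grad (div = -grad*)."""
    d = np.zeros(w.shape[1:])
    d[0, :]    += w[0, 0, :]                     # i = 1
    d[1:-1, :] += w[0, 1:-1, :] - w[0, :-2, :]   # 1 < i < N
    d[-1, :]   -= w[0, -2, :]                    # i = N
    d[:, 0]    += w[1, :, 0]                     # j = 1
    d[:, 1:-1] += w[1, :, 1:-1] - w[1, :, :-2]   # 1 < j < N
    d[:, -1]   -= w[1, :, -2]                    # j = N
    return d

# sanity check of the adjoint relation <-div w, u>_X = <w, grad u>_Y
u, w = np.random.rand(8, 8), np.random.rand(2, 8, 8)
assert np.isclose(-(div(w) * u).sum(), (w * grad(u)).sum())
```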
Hence computing the nonlinear projection π_{(1/λ)K}(−f) amounts to solving the following problem:

    min F(ω) = ‖f + (1/λ) ∇·ω‖²_X
    s.t. K := {ω : ω ∈ Y, |ω_{i,j}| ≤ 1, ∀ i, j = 1, ..., N}.    (2.3)

The main advantage of the dual formulation (2.3) is that the objective function is quadratic, so there is no issue with non-differentiability as in the primal formulation. Consider the constrained optimization problem

    min_{ω∈K} F(ω),  where K := {ω : ω ∈ Y, |ω| ≤ 1}.    (2.4)
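Since F is quadratic in ω, its gradient is available in closed form: a short calculation using ∇· = −∇* gives ∇F(ω) = −(2/λ) ∇(f + (1/λ) ∇·ω). Below is a hedged sketch built on the grad/div helpers above (lam stands for λ; the names are ours):

```python
def F(w, f, lam):
    """Dual objective F(w) = ||f + (1/lam) div w||_X^2 of problem (2.3)."""
    r = f + div(w) / lam
    return (r * r).sum()

def grad_F(w, f, lam):
    """Gradient of F; follows from <div w, u>_X = -<w, grad u>_Y."""
    return -(2.0 / lam) * grad(f + div(w) / lam)
```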
In [2], Calamai and Moré proposed a generalized gradient projection algorithm for (2.4) and gave some significant convergence properties; we refer the reader to [2] for more details. The gradient projection algorithm is defined by ω_{k+1} = ω(α_k) = p(ω_k − α_k g_k), where the projection operator p(·) : Rⁿ → K is defined by p(ω) = argmin{‖z − ω‖² : z ∈ K}, α_k is a step-size which satisfies the generalized Armijo line search conditions, and g_k is the gradient of F(ω) at ω_k, i.e., g_k = ∇F(ω_k). Considering the large-scale data of the image restoration model (2.3), and since the conjugate gradient method is an effective numerical method for unconstrained optimization, we consider applying the conjugate gradient method to the image restoration model. The conjugate gradient method for unconstrained optimization has the form ω_{k+1} = ω_k + α_k d_k, where

    d_k = −g_k if k = 1,  d_k = −g_k + β_k d_{k−1} if k > 1,

α_k is determined by some line search, and β_k is a scalar, for which two well-known formulas are

    β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²,    β_k^{PRP} = g_k^T (g_k − g_{k−1}) / ‖g_{k−1}‖².
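Over the set K of (2.4), the projection p(·) is inexpensive: it acts pointwise, rescaling any ω_{i,j} ∈ R² with |ω_{i,j}| > 1 back to the unit disk. A sketch (project_K and gp_step are our own names):

```python
def project_K(w):
    """Pointwise projection onto K = {w in Y : |w_{i,j}| <= 1 for all i, j}."""
    norm = np.sqrt(w[0] ** 2 + w[1] ** 2)   # |w_{i,j}| at each pixel
    return w / np.maximum(1.0, norm)        # shrink only where |w_{i,j}| > 1

def gp_step(w, f, lam, alpha):
    """One gradient projection step w(alpha) = p(w - alpha * g), g = grad F(w)."""
    return project_K(w - alpha * grad_F(w, f, lam))
```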
In addition, Birgin and Martinez [9] proposed a spectral conjugate gradient method by combining the spectral gradient method [10] with conjugate gradient ideas in the following way:

    d_k = −g_k if k = 1,  d_k = −θ_k g_k + β_k d_{k−1} if k > 1,

where β_k is a scalar. Using a geometric interpretation for quadratic function minimization, Birgin and Martinez suggested the following expressions for the update parameter β_k:

    β_k^{SFR} = (θ_{k−1}/θ_k) ‖g_k‖² / ‖g_{k−1}‖²,    β_k^{SPRP} = (θ_{k−1}/θ_k) g_k^T (g_k − g_{k−1}) / ‖g_{k−1}‖²,

and θ_k is a spectral quotient such as

    θ_k = s_{k−1}^T s_{k−1} / (s_{k−1}^T y_{k−1}),
where s_{k−1} = ω_k − ω_{k−1} and y_{k−1} = g_k − g_{k−1}. In fact, θ_k is derived from an approximate secant equation underlying quasi-Newton methods, and can be obtained by minimizing ‖θ_k⁻¹ s_{k−1} − y_{k−1}‖. In this paper, we restrict θ_k ∈ [10⁻⁵, 10⁵].
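The safeguarded spectral quotient is straightforward to compute; here is a small sketch (the fallback value 1.0 when s^T y ≤ 0 is our own choice and is not specified in the paper):

```python
def spectral_theta(s, y, lo=1e-5, hi=1e5):
    """theta_k = <s,s>/<s,y>, clipped to [1e-5, 1e5] as in the paper."""
    sy = (s * y).sum()
    if sy <= 0:
        return 1.0   # assumed fallback when the secant quotient is undefined
    return float(np.clip((s * s).sum() / sy, lo, hi))
```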
Because the model (2.4) is a constrained optimization problem, the general conjugate gradient method cannot be used to solve the image restoration model (2.3). For constrained optimization problems with a closed convex constraint set, [12] proposed a modified conjugate gradient method: a special parameter gives a revised direction of the negative gradient, and a feasible descent direction is then obtained by projecting the revised direction onto the feasible set. Considering the circulant structure and sparse characteristic of the constraints of the model (2.4), and based on the ideas of [9] and [12], we propose a modified spectral conjugate gradient method to solve the model (2.4). The analysis of the spectral conjugate gradient projection method requires some basic properties of the projection operator.

Lemma 2.1 Let p be the projection onto K.
(1) For all x ∈ K and y ∈ Rⁿ, ⟨p(y) − y, x − p(y)⟩ ≥ 0.
(2) For all y, z ∈ Rⁿ, p is a nonexpansive operator, that is, ‖p(y) − p(z)‖ ≤ ‖y − z‖.

Let x ∈ K, d ∈ Rⁿ, x(α) = p(x + αd). We have the following results:

Lemma 2.2
(1) ⟨x(α) − (x + αd), y − x(α)⟩ ≥ 0, ∀ y ∈ K, α > 0;
(2) ⟨−d, x − x(α)⟩ ≥ ‖x − x(α)‖²/α, ∀ α > 0.

If ω* ∈ K and ⟨g(ω*), ω − ω*⟩ ≥ 0 for all ω ∈ K, then ω* is a stationary point of (2.4). Set ω_k ∈ K, ω̄_k(1) = p(ω_k − θ_k g_k), v_k = −θ_k g_k + β_k d_{k−1}, and let

    d_k = ω_k(1) − ω_k,

where ω_k(1) = p(ω_k + v_k). In order to guarantee that d_k is a descent direction, we restrict the parameter β_k to satisfy

    ‖ω̄_k(1) − ω_k‖² ≥ (1 + Δ) θ_k |β_k| ‖g_k‖ ‖d_{k−1}‖,

where Δ > 0 is a scalar. From the above inequality, we get the range of the parameter β_k as β_k ∈ [−β_k(Δ), β_k(Δ)], where

    β_k(Δ) = ‖ω̄_k(1) − ω_k‖² / ((1 + Δ) θ_k ‖g_k‖ ‖d_{k−1}‖).

Hence we can obtain the direction v_k, and then d_k is determined. We give our algorithm as follows.

The modified spectral conjugate gradient projection algorithm (SCG):

Step 1: Given ω_0 ∈ Y, let Δ > 0, d_0 = 0, β_1 = 0, k := 0.

Step 2: Let d_k = ω_k(1) − ω_k, where β_k ∈ [−β_k(Δ), β_k(Δ)].

Step 3: Take ω_{k+1} = ω_k + α_k d_k, where α_k satisfies the generalized Armijo line search conditions

    F(ω_k + α_k d_k) ≤ F(ω_k) + µ1 α_k ⟨g_k, d_k⟩    (2.5)

and

    α_k ≥ γ1  or  α_k ≥ γ2 α_k*,    (2.6)

where α_k* satisfies

    F(ω_k + α_k* d_k) > F(ω_k) + µ2 α_k* ⟨g_k, d_k⟩.    (2.7)

Here µ1, µ2 ∈ (0, 1) and γ1, γ2 > 0 are constants. If ‖ω_k(1) − ω_k‖ = 0, then ω_k is a stationary point of (2.4); STOP.

Step 4: Set k := k + 1, and go to Step 2.

Note: In Step 2, β_k can be taken as β_k = argmin{|β − β_k^{SFR}| : β ∈ [−β_k(Δ), β_k(Δ)]} or β_k = argmin{|β − β_k^{SPRP}| : β ∈ [−β_k(Δ), β_k(Δ)]}; this gives the corresponding modified spectral conjugate gradient projection methods with the scalars β_k^{SFR} and β_k^{SPRP}. Obviously, if β_k = 0, the algorithm is the spectral gradient projection algorithm (SPG). If θ_k = θ_{k−1} = 1, the algorithm is the conjugate gradient projection algorithm [12].
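Putting the pieces together, the following is a hedged end-to-end sketch of algorithm (SCG) with the SFR choice of β_k, built from the helpers above. The backtracking loop is one simple way to realize conditions (2.5)-(2.7) when µ1 = µ2 (as in the experiments of Section 3): an accepted first trial gives α_k = γ1 ≥ γ1, and otherwise each rejected trial plays the role of α_k* in (2.7), so the accepted step satisfies α_k = α_k*/2 ≥ γ2 α_k* for γ2 ≤ 1/2.

```python
def scg_denoise(f, lam=0.045, delta=1.067, mu1=0.8, gamma1=0.5, eps=1.0, max_iter=500):
    """Sketch of the SCG algorithm for problem (2.3); returns the restored image."""
    w = np.zeros((2,) + f.shape)                   # feasible start: |w_{i,j}| = 0 <= 1
    d_prev = np.zeros_like(w)
    g_prev = theta_prev = s = None
    for _ in range(max_iter):
        g = grad_F(w, f, lam)
        if g_prev is None:                         # Step 1: d_0 = 0, beta_1 = 0
            theta, beta = 1.0, 0.0
        else:
            theta = spectral_theta(s, g - g_prev)
            wbar = project_K(w - theta * g)        # \bar{w}_k(1)
            gn, dn = np.linalg.norm(g), np.linalg.norm(d_prev)
            bmax = np.linalg.norm(wbar - w) ** 2 / ((1 + delta) * theta * gn * dn + 1e-30)
            beta_sfr = (theta_prev / theta) * gn ** 2 / np.linalg.norm(g_prev) ** 2
            beta = np.clip(beta_sfr, -bmax, bmax)  # closest point of [-beta_k(D), beta_k(D)]
        v = -theta * g + beta * d_prev             # Step 2: v_k, then d_k = w_k(1) - w_k
        d = project_K(w + v) - w
        if np.linalg.norm(d) < eps:                # stopping rule ||w_k(1) - w_k|| < eps
            break
        # Step 3: backtracking realization of the generalized Armijo conditions
        alpha, gd, F0 = gamma1, (g * d).sum(), F(w, f, lam)
        while F(w + alpha * d, f, lam) > F0 + mu1 * alpha * gd and alpha > 1e-12:
            alpha *= 0.5
        s, g_prev, theta_prev, d_prev = alpha * d, g, theta, d
        w = w + alpha * d                          # stays in K since 0 < alpha <= 1
    return f + div(w) / lam                        # u = f + pi_{(1/lam)K}(-f), cf. (2.2)
```

With the defaults above, which mirror the parameter choices reported in Section 3, u = scg_denoise(noisy) realizes the reconstruction (2.2) for a noisy image noisy.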
Lemma 2.3 ω_k ∈ K is a stationary point of (2.4) if and only if ω̄_k(1) = ω_k.

Lemma 2.4 ω_k ∈ K is a stationary point of (2.4) if and only if ω_k(1) = ω_k.

Proof Assume that ω_k is a stationary point of (2.4). By Lemma 2.3, we get ω̄_k(1) = ω_k, and hence β_k = 0. By Lemma 2.2(2) and the stationarity of ω_k, ‖ω_k(1) − ω_k‖² ≤ ⟨θ_k g_k, ω_k − ω_k(1)⟩ ≤ 0, so ω_k(1) = ω_k.
Conversely, let ω_k(1) = ω_k. By Lemma 2.2(1) with y = ω̄_k(1), we get ⟨ω_k(1) − (ω_k + v_k), ω̄_k(1) − ω_k(1)⟩ ≥ 0, that is, ⟨−v_k, ω̄_k(1) − ω_k⟩ ≥ 0. By Lemma 2.2(2) and the definition of β_k, we obtain

    0 ≤ ⟨−v_k, ω̄_k(1) − ω_k⟩
      = ⟨θ_k g_k, ω̄_k(1) − ω_k⟩ − β_k ⟨d_{k−1}, ω̄_k(1) − ω_k⟩
      ≤ ⟨θ_k g_k, ω̄_k(1) − ω_k⟩ + |β_k| |⟨d_{k−1}, ω̄_k(1) − ω_k⟩|
      ≤ ⟨θ_k g_k, ω̄_k(1) − ω_k⟩ + (‖ω̄_k(1) − ω_k‖² / ((1 + Δ) θ_k ‖g_k‖ ‖d_{k−1}‖)) ‖d_{k−1}‖ ‖ω̄_k(1) − ω_k‖
      ≤ ⟨θ_k g_k, ω̄_k(1) − ω_k⟩ + (1/(1 + Δ)) ‖ω̄_k(1) − ω_k‖²
      ≤ −‖ω̄_k(1) − ω_k‖² + (1/(1 + Δ)) ‖ω̄_k(1) − ω_k‖²
      = −(Δ/(1 + Δ)) ‖ω̄_k(1) − ω_k‖²
      ≤ 0.

Hence ‖ω̄_k(1) − ω_k‖ = 0, i.e., ω̄_k(1) = ω_k. By Lemma 2.3, the conclusion holds.

Lemma 2.5 If ω_k ∈ K is not a stationary point of (2.4), then we have the following facts:
(1) ‖d_k‖ ≤ ((2 + Δ)/(1 + Δ)) θ_k ‖g_k‖;
(2) ⟨θ_k g_k, ω_k − ω_k(1)⟩ ≥ (Δ/(1 + Δ)) ‖ω̄_k(1) − ω_k‖² > 0;
(3) |⟨−v_k, ω_k − ω_k(1)⟩ − ⟨θ_k g_k, ω_k − ω_k(1)⟩| ≤ ((2 + Δ)/((1 + Δ)Δ)) ⟨θ_k g_k, ω_k − ω_k(1)⟩.
Proof (1) By Lemma 2.1(2), the definition of β_k, and ‖ω̄_k(1) − ω_k‖ ≤ θ_k ‖g_k‖, we get

    ‖d_k‖ ≤ ‖v_k‖ = ‖−θ_k g_k + β_k d_{k−1}‖
          ≤ θ_k ‖g_k‖ + |β_k| ‖d_{k−1}‖
          ≤ θ_k ‖g_k‖ + ‖ω̄_k(1) − ω_k‖² / ((1 + Δ) θ_k ‖g_k‖)
          ≤ θ_k ‖g_k‖ + (θ_k/(1 + Δ)) ‖g_k‖
          = ((2 + Δ)/(1 + Δ)) θ_k ‖g_k‖.

(2) By Lemma 2.2(2), Lemma 2.1(2) and the definition of β_k, we get

    ⟨θ_k g_k, ω_k − ω_k(1)⟩ = ⟨θ_k g_k, ω_k − ω̄_k(1)⟩ + ⟨θ_k g_k, ω̄_k(1) − ω_k(1)⟩
        ≥ ‖ω̄_k(1) − ω_k‖² − θ_k ‖g_k‖ ‖ω̄_k(1) − ω_k(1)‖
        ≥ ‖ω̄_k(1) − ω_k‖² − θ_k ‖g_k‖ ‖d_{k−1}‖ · ‖ω̄_k(1) − ω_k‖² / ((1 + Δ) θ_k ‖g_k‖ ‖d_{k−1}‖)
        = ‖ω̄_k(1) − ω_k‖² − (1/(1 + Δ)) ‖ω̄_k(1) − ω_k‖²
        = (Δ/(1 + Δ)) ‖ω̄_k(1) − ω_k‖²
        > 0.

(3) By the definition of d_k and β_k, together with parts (1) and (2), we get

    |⟨−v_k, ω_k − ω_k(1)⟩ − ⟨θ_k g_k, ω_k − ω_k(1)⟩|
        = |β_k| |⟨d_{k−1}, ω_k − ω_k(1)⟩|
        ≤ |β_k| ‖d_{k−1}‖ ‖ω_k(1) − ω_k‖
        ≤ (‖ω̄_k(1) − ω_k‖² / ((1 + Δ) θ_k ‖g_k‖ ‖d_{k−1}‖)) ‖d_{k−1}‖ ((2 + Δ)/(1 + Δ)) θ_k ‖g_k‖
        = ((2 + Δ)/(1 + Δ)²) ‖ω̄_k(1) − ω_k‖²
        ≤ ((2 + Δ)/((1 + Δ)Δ)) ⟨θ_k g_k, ω_k − ω_k(1)⟩.
Theorem 2.1 If F(ω) ∈ C¹ and the algorithm (SCG) generates an infinite sequence {ω_k}, then F(ω_k) → −∞, or lim inf_{k→∞} ‖ω_k(1) − ω_k‖ = 0.
Proof By Lemma 2.5(2), we know that g_k^T d_k < 0 for all k. The inequality (2.5) then implies F(ω_{k+1}) < F(ω_k), so {F(ω_k)} is a monotonically decreasing sequence. If F(ω_k) → −∞, the conclusion holds, so we assume that {F(ω_k)} is bounded. For the sake of contradiction, suppose that lim inf_{k→∞} ‖ω_k(1) − ω_k‖ = 0 does not hold. Then there exists a constant ε > 0 such that

    ‖ω_k(1) − ω_k‖ ≥ ε,  ∀ k.    (2.8)

By (2.5), Lemma 2.5(3), Lemma 2.2(2) and (2.8), we have

    F(ω_k) − F(ω_k + α_k d_k) ≥ −µ1 α_k ⟨g_k, d_k⟩
        ≥ (µ1 α_k Δ(1 + Δ) / ((Δ² + 2Δ + 2) θ_k)) ⟨−v_k, ω_k − ω_k(1)⟩
        ≥ (µ1 α_k Δ(1 + Δ) / ((Δ² + 2Δ + 2) θ_k)) ‖ω_k − ω_k(1)‖²
        ≥ (µ1 α_k Δ(1 + Δ) ε / ((Δ² + 2Δ + 2) θ_k)) ‖ω_k − ω_k(1)‖.    (2.9)

By (2.9), the boundedness of θ_k and the fact that {F(ω_k)} is bounded, we get

    Σ_{k=1}^∞ α_k ‖ω_k − ω_k(1)‖ < +∞.    (2.10)

Since ‖ω_{k+1} − ω_k‖ = α_k ‖d_k‖ = α_k ‖ω_k(1) − ω_k‖, we know that Σ_{k=1}^∞ ‖ω_{k+1} − ω_k‖ < +∞, so the sequence {ω_k} is convergent; assume that {ω_k} converges to ω*. By (2.8) and (2.10), we have

    lim_{k→∞} α_k = 0.    (2.11)

Then {d_k} is bounded by Lemma 2.5(1). Assume that there exists an infinite subset K̄ ⊂ {1, 2, ...} such that lim_{k→∞, k∈K̄} d_k = d* and lim_{k→∞, k∈K̄} v_k = v*. When k is sufficiently large, by (2.6) and (2.11), we have α_k < γ1, and then α_k ≥ γ2 α_k*, where α_k* satisfies

    (F(ω_k + α_k* d_k) − F(ω_k)) / α_k* > µ2 ⟨g_k, d_k⟩.

Letting k → ∞ with k ∈ K̄ (note that α_k* ≤ α_k/γ2 → 0), we get g(ω*)^T d* ≥ µ2 g(ω*)^T d*. Since µ2 ∈ (0, 1) and g(ω*)^T d* ≤ 0, this gives g(ω*)^T d* = 0. By Lemma 2.5(3), we then have ⟨v*, ω*(1) − ω*⟩ = 0. But by Lemma 2.2(2), ⟨v*, ω*(1) − ω*⟩ ≥ ‖ω*(1) − ω*‖². So ‖ω*(1) − ω*‖ = 0, which contradicts (2.8). Hence lim inf_{k→∞} ‖ω_k(1) − ω_k‖ = 0.
3. Numerical experiments
In this section, we carry out some numerical experiments based on the proposed algorithms; the results show that they are effective. The simulations are performed in Matlab 7.12 (R2011a) on a laptop with a 2 GHz processor and 2 GB of RAM. The noisy images are generated by adding Gaussian noise to the clean images using the Matlab function imnoise, with the variance parameter set to 0.01. We fix µ1 = µ2 = 0.8, γ1 = 0.5, γ2 = 0.25, λ = 0.045, Δ = 1.067. To evaluate the quality of the images restored by the different methods, we employ the root-mean-square error (rmse) between the restored and the original image:

    rmse = ‖u − u_k‖₂ / ‖u‖₂,

where u and u_k are the original and restored images. Clearly, the smaller the rmse, the better the quality of the restored image.
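A code sketch of this measure, together with an illustrative test run of the scg_denoise sketch above, might look as follows (the synthetic ramp image and direct noise injection are our own stand-ins for the imnoise setup):

```python
def rmse(u, u_k):
    """The paper's rmse: ||u - u_k||_2 / ||u||_2 (a relative error)."""
    return np.linalg.norm(u - u_k) / np.linalg.norm(u)

# example run on a synthetic clean image (assumed pixel range [0, 1])
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
noisy = clean + np.sqrt(0.01) * np.random.randn(*clean.shape)  # Gaussian noise, variance 0.01
print(rmse(clean, scg_denoise(noisy)))
```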
The stopping criterion is ‖ω_k(1) − ω_k‖ < ε. In the numerical tests, we find that ε cannot be taken too small because of rounding errors: very small values of ε cost much more computational time without reducing the rmse or improving the quality of the restored image. Here, we choose ε = 1. We test the conjugate gradient projection algorithms of [12] and the spectral conjugate gradient projection algorithms that we propose.

Conjugate gradient projection algorithms [12]:
• GM: GLP gradient projection algorithm with β_k = 0.
• NMCG: Conjugate gradient projection algorithm with β_k = β_k(Δ).
• NMFR: Conjugate gradient projection algorithm with β_k = β_k^{FR}.
• NMPRP: Conjugate gradient projection algorithm with β_k = β_k^{PRP}.

Modified spectral conjugate gradient projection algorithms:
• SPG: Spectral gradient projection algorithm with β_k = 0.
• SCG: Spectral conjugate gradient projection algorithm with β_k = β_k(Δ).
• SFR: Spectral conjugate gradient projection algorithm with β_k = β_k^{SFR}.
• SPRP: Spectral conjugate gradient projection algorithm with β_k = β_k^{SPRP}.

In order to compare the speed of the algorithms fairly, each experiment is repeated 10 times, and the averaged results are given in Table 1 and Table 2.
We report the number of iterations (Niter), the CPU time (in seconds) required for the whole denoising process, and the rmse of the recovered image.

Table 1  Numerical results of CG

image                    Algorithm   Niter   Time     rmse
shape (128 × 128)        GM          152     0.47     0.0739
                         NMCG        149     0.53     0.0415
                         NMFR        160     0.52     0.0412
                         NMPRP       155     0.51     0.0414
cameraman (256 × 256)    GM          270     6.50     0.0829
                         NMCG        269     7.48     0.0221
                         NMFR        256     6.85     0.0214
                         NMPRP       280     7.08     0.0200
lena (512 × 512)         GM          450     70.71    0.0525
                         NMCG        545     82.14    0.0079
                         NMFR        400     71.05    0.0082
                         NMPRP       453     78.16    0.0081
boat (1024 × 1024)       GM          735     521.24   0.0078
                         NMCG        616     483.95   0.0079
                         NMFR        654     473.39   0.0077
                         NMPRP       737     555.49   0.0078
From Table 1 and Table 2, it is obvious that the spectral conjugate gradient projection algorithms are much faster than the conjugate gradient projection algorithms, especially for large-size images, while the rmse between the restored and the original image for the proposed conjugate gradient methods is smaller than that of the gradient projection algorithm. Next, we show some restoration results for shape (128 × 128), cameraman (256 × 256), lena (512 × 512) and boat (1024 × 1024). Fig. 1 displays the restoration results of our method. The numerical experiments show that the spectral conjugate gradient projection method can restore corrupted images well in an efficient manner.

Table 2  Numerical results of SCG

image                    Algorithm   Niter   Time     rmse
shape (128 × 128)        SPG         39      0.19     0.0419
                         SCG         38      0.18     0.0419
                         SFR         42      0.18     0.0415
                         SPRP        36      0.16     0.0413
cameraman (256 × 256)    SPG         53      1.74     0.0216
                         SCG         61      2.00     0.0219
                         SFR         61      2.01     0.0218
                         SPRP        57      2.13     0.0223
lena (512 × 512)         SPG         116     25.66    0.0080
                         SCG         110     25.90    0.0081
                         SFR         83      16.41    0.0080
                         SPRP        85      20.46    0.0081
boat (1024 × 1024)       SPG         178     150.75   0.0080
                         SCG         174     164.55   0.0078
                         SFR         136     140.77   0.0079
                         SPRP        154     147.55   0.0078
Fig. 1 Denoising results: "shape", "cameraman", "lena", "boat".
Fig. 2 Relative norm ‖ω_k(1) − ω_k‖ (log scale) vs. CPU time for denoising "lena" (SPRP, NMPRP) and "boat" (SFR, NMFR).
Fig. 3 Relative norm ‖ω_k(1) − ω_k‖ (log scale) vs. CPU time for denoising "lena" (SPG vs. SCG; SPG vs. SFR).
Fig. 4 Relative norm ‖ω_k(1) − ω_k‖ (log scale) vs. CPU time for denoising "boat" (SFR, SPRP, SPG, SCG).
Fig. 2 shows the results of SPRP and NMPRP for "lena", and of SFR and NMFR for "boat". Obviously, the spectral conjugate gradient projection method that we propose is much faster than the conjugate gradient projection method. Fig. 3 plots the relative norm against CPU time for SCG and SFR compared with SPG for "lena". Fig. 4 shows the numerical results of our spectral conjugate gradient projection methods for "boat". From Fig. 3 and Fig. 4, we can see that SFR and SPRP are faster than SPG. According to Fig. 2, Fig. 3 and Fig. 4, the residual norm decreases non-monotonically during the iterations. Indeed, non-monotonicity is an important feature, as pointed out in [15].
Conclusion

In this study, a modified spectral conjugate gradient projection method is presented to solve total variation image restoration. Some numerical experiments are carried out, which show that the spectral conjugate gradient projection algorithm is much faster than the conjugate gradient projection algorithm, especially for large-size images, while the restoration error of the proposed conjugate gradient method is smaller than that of the gradient projection algorithm.

Acknowledgement: The authors would like to thank the anonymous referee, whose constructive comments led to a considerable revision of the original paper.
References

[1] Bresson, X., Chan, T. Fast dual minimization of the vectorial total variation norm and applications to color image processing. Technical report, UCLA CAM Report 07-25 (2007).
[2] Calamai, P. H., Moré, J. J. Projected gradient methods for linearly constrained problems. Mathematical Programming, 39, 93-116 (1987).
[3] Carter, J. L. Dual methods for total variation-based image restoration. Ph.D. thesis, University of California at Los Angeles (Advisor: T. F. Chan) (2001).
[4] Chan, T., Shen, J. Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM, Philadelphia (2005).
[5] Chan, T., Golub, G., Mulet, P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput., 20, 1964-1977 (1999).
[6] Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis., 20, 89-97 (2004).
[7] Chambolle, A. Total variation minimization and a class of binary MRF models. In: EMMCVPR 2005. Lecture Notes in Computer Science, vol. 3757, pp. 136-152. Springer, Berlin (2005).
[8] Chambolle, A., Lions, P. L. Image recovery via total variation minimization and related problems. Numer. Math., 76, 167-188 (1997).
[9] Birgin, E. G., Martinez, J. M. A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43, 117-128 (2001).
[10] Barzilai, J., Borwein, J. M. Two point step size gradient methods. IMA Journal of Numerical Analysis, 8, 141-148 (1988).
[11] Rudin, L., Osher, S., Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D, 60, 259-268 (1992).
[12] Sun, Q. Y., Gao, B., Wang, C. Y. Modified conjugate gradient projected methods for nonlinear constrained optimization. Acta Mathematicae Applicatae Sinica, 33(4), 640-651 (2001).
[13] Sun, Q. Y. Generalized three-term memory gradient projection method for nonlinear programming with nonlinear equality and inequality constraints. OR Transactions, 7(2), 35-44 (2003).
[14] Sun, Q. Y. Generalized super-memory gradient projection method with arbitrary initial point and conjugate gradient scalar for nonlinear programming with inequality constraints. Mathematica Numerica Sinica, 26(4), 401-412 (2004).
[15] Yu, G. H., Qi, L., Dai, Y. H. On nonmonotone Chambolle gradient projection algorithms for total variation image restoration. J. Math. Imaging Vis., 35, 143-154 (2009).
[16] Dai, Y. H., Fletcher, R. Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming. Numer. Math., 100, 21-47 (2005).