A simplified proof of weak convergence in Douglas–Rachford method


Operations Research Letters 47 (2019) 291–293


Benar F. Svaiter, IMPA, Estrada Dona Castorina 110, 22460-320 Rio de Janeiro, Brazil

Article history: Received 24 October 2018; Received in revised form 18 April 2019; Accepted 23 April 2019; Available online 3 May 2019.

Keywords: Douglas–Rachford method; Weak convergence; Monotone operators

Abstract. The Douglas–Rachford method is a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Weak convergence in this method to a solution of the underlying monotone inclusion problem in the general case remained an open problem for 30 years and was proved by the author 7 years ago. That proof was cluttered with technicalities because we considered the inexact version with summable errors. In this short communication we present a streamlined proof of this result. © 2019 Elsevier B.V. All rights reserved.

1. Introduction

The Douglas–Rachford method is an iterative splitting algorithm for finding a zero of a sum of two maximal monotone operators. It was originally proposed by Douglas and Rachford for solving the discretized Poisson equation

A(w) + B(w) = f,  w ∈ W,

where A and B are, respectively, the discretizations of −∂²/∂x² and −∂²/∂y² in the interior of the domain, f is the problem's datum, and W is the family of functions on the discretized domain which satisfy the prescribed boundary condition. With this notation, the method proposed by Douglas–Rachford [1, Part II, eq. (7.4)] writes

λn A(yn+1/2) + yn+1/2 = xn − λn B(xn),
λn B(xn+1) + xn+1 = xn − λn A(yn+1/2),

with f = 0 or with f incorporated into A. Lions and Mercier [3] extended this procedure, with λ = λn fixed, to arbitrary maximal monotone operators, although in their analysis of the general case weak convergence of an associated sequence, but not of {xn}, was proved. The solution has to be retrieved by applying the resolvent of B to the weak limit of the associated sequence. Weak convergence in this method to a solution of the underlying inclusion problem was proved by the author [6], 30 years after Lions and Mercier's seminal work. That proof was cluttered with technicalities because the inexact case with summable errors was considered. The aim of this note is to present a streamlined version of that proof.

New or simplified proofs of known results have long been a topic of interest for many researchers. We believe that a simplified proof of weak convergence in Douglas–Rachford methods emphasizes the new ideas used for obtaining this result and may lead to new applications of them.

2. Basic definitions and results

From now on, X is a real Hilbert space with inner product ⟨·, ·⟩ and associated norm ∥·∥. Strong and weak convergence of a sequence {xn} in X to x will be denoted by xn → x and xn ⇀ x, respectively. We consider in X × X the canonical inner product and norm of Hilbert space products,

⟨(z, w), (z′, w′)⟩ = ⟨z, z′⟩ + ⟨w, w′⟩,  ∥(z, w)∥ = √⟨(z, w), (z, w)⟩,

which make it also a Hilbert space. A point-to-set operator T : X ⇒ X is a relation in X, that is, T ⊂ X × X; for any x ∈ X,

T(x) = {y : (x, y) ∈ T},  T⁻¹(y) = {x : (x, y) ∈ T}.

For S : X ⇒ X, T : X ⇒ X, and λ ∈ R, the operators S + T : X ⇒ X and λT : X ⇒ X are defined, respectively, as

(S + T)(x) = {y + y′ : y ∈ S(x), y′ ∈ T(x)},  (λT)(x) = {λy : y ∈ T(x)}.

A function f : X ⊃ D → X is identified with the point-to-set operator F = {(x, y) : x ∈ D, y = f(x)}. An operator T : X ⇒ X is monotone if

⟨x − x′, y − y′⟩ ≥ 0  ∀ (x, y), (x′, y′) ∈ T,

and it is maximal monotone if it is a maximal element in the family of monotone operators in X, with respect to the partial order of inclusion, when regarded as a subset of X × X.
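As a minimal numerical sketch (ours, not part of the paper), a point-to-set operator on X = R can be represented as a finite set of pairs (x, y), and the monotonicity inequality checked by brute force. The operator sampled below is the subdifferential of |x|; the function name `is_monotone` and the sampled graphs are our illustrative choices.

```python
# A point-to-set operator as a finite relation T ⊂ R x R, with a brute-force
# check of the monotonicity inequality <x - x', y - y'> >= 0 over all pairs.

def is_monotone(graph):
    """Return True iff (x - x')*(y - y') >= 0 for all elements of the graph."""
    return all((x - xp) * (y - yp) >= 0 for (x, y) in graph for (xp, yp) in graph)

# Sampled graph of the subdifferential of |x|: y = -1 for x < 0, y = +1 for
# x > 0, and any y in [-1, 1] at x = 0 (two such elements sampled here).
T = {(-2.0, -1.0), (-1.0, -1.0), (0.0, -0.5), (0.0, 0.5), (1.0, 1.0), (2.0, 1.0)}
print(is_monotone(T))  # True: this sampled operator is monotone

# A non-monotone relation, e.g. a sample of the graph of y = -x:
S = {(-1.0, 1.0), (1.0, -1.0)}
print(is_monotone(S))  # False
```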

Minty's Theorem [4] states, among other things, that if T is maximal monotone, then for each z ∈ X there is a unique (x, y) ∈ X × X such that

x + y = z,  y ∈ T(x).

The resolvent, proximal mapping, or proximal operator of a maximal monotone operator T with stepsize λ > 0 is

JλT = (I + λT)⁻¹.  (1)

In view of Minty's theorem, the proximal mapping (of a maximal monotone operator) is a function whose domain is the whole underlying Hilbert space X. Opial's Lemma [5], which we state next, will be used in our proof.

Lemma 2.1 (Opial's Lemma). Let {un} be a sequence in a Hilbert space Z. If un ⇀ u, then for any v ≠ u

lim inf n→∞ ∥un − v∥ > lim inf n→∞ ∥un − u∥.

We will also need the following result.

Lemma 2.2 ([2, Proposition 4]). Suppose T1 and T2 are maximal monotone operators in X and T1 + T2 is also maximal monotone. If (zk,i, vk,i) ∈ Ti for i = 1, 2, k = 1, 2, . . . , and

zk,1 − zk,2 → 0,  zk,1 ⇀ z,
vk,1 + vk,2 → 0,  vk,1 ⇀ v,  as k → ∞,

then (z, v) ∈ T1 and (z, −v) ∈ T2.

3. Douglas–Rachford method

From now on A, B : X ⇒ X are maximal monotone operators. We are concerned with the problem

0 ∈ A(x) + B(x).  (2)

Lions and Mercier's extension [3] of Douglas–Rachford is

vn+1 = JλA(2JλB − I)vn + (I − JλB)vn,  un = JλB(vn)  (n = 1, 2, . . .).  (3)

Lions and Mercier proved [3], for the case where A and B are general maximal monotone operators, that {vn} converges weakly to a point v such that JλB(v) is a solution of (2). However, weak convergence of {un} to a solution was an open problem because, in general, the resolvent is not sequentially weakly continuous; therefore, weak convergence of {vn} does not imply weak convergence of {un = JλB(vn)} to a solution.

Douglas–Rachford method, as defined by Lions and Mercier, can be described as follows: choose λ > 0, x0 ∈ X, b0 ∈ B(x0) and for n = 1, 2, . . . compute yn, an such that

an ∈ A(yn),  λan + yn = xn−1 − λbn−1,

and compute xn, bn such that

bn ∈ B(xn),  λbn + xn = yn + λbn−1.  (4)

As stated next, (3) and (4) are the same method.

Proposition 3.1. If {(xn, bn)} and {(yn, an)} are a pair of sequences generated by (4) and

un = xn−1,  vn = xn−1 + λbn−1  (n = 1, 2, . . .),  (5)

then the sequences {vn}, {un} satisfy (3).

Proof. Suppose {(xn, bn)} and {(yn, an)} are a pair of sequences generated by (4). It follows from the resolvent definition (1) that

xn = JλB(xn + λbn)  (n = 0, 1, . . .),
yn = JλA(xn−1 − λbn−1)  (n = 1, 2, . . .).

Let {un}, {vn} be as in (5). In view of the above equalities and the second equality in (5),

un = JλB(vn),  vn+1 = JλA(xn−1 − λbn−1) + λbn−1  (n = 1, 2, . . .).

Since λbn−1 = vn − un and xn−1 − λbn−1 = un − (vn − un) = 2un − vn,

vn+1 = JλA(2un − vn) + vn − un
= JλA(2JλB vn − vn) + vn − JλB vn
= JλA((2JλB − I)(vn)) + (I − JλB)(vn),

and {un}, {vn} satisfy (3). □

Let us write some of Lions and Mercier's results on this method with the notation (4).

Theorem 3.2 ([3, Proposition 2]). If (2) has a solution and {(xn, bn)} and {(yn, an)} are a pair of sequences generated by Douglas–Rachford method (4), then {xn + λbn} and {xn} are bounded,

xn − xn−1 = −λ(an + bn) → 0,  yn − xn−1 = −λ(an + bn−1) → 0,  as n → ∞,

and {xn + λbn} converges weakly to a point v such that JλB(v) is a solution of this inclusion problem.

Proof. The two equalities follow trivially from the equalities in (4); the remainder follows from [3, Proposition 2], the second line of its proof, and Proposition 3.1. □

The extended solution set of (2), which was defined and studied in [2], is

Se(A, B) = {(z, w) : w ∈ B(z), −w ∈ A(z)}.  (6)

It is trivial to verify that z is a solution of (2) if and only if (z, w) ∈ Se(A, B) for some w. Our aim is to prove the following theorem.

Theorem 3.3. Let {(xn, bn)} and {(yn, an)} be sequences generated by Douglas–Rachford method (4). If the solution set of (2) is nonempty and A + B is maximal monotone, then the sequences {(xn, bn)}, {(yn, −an)} converge weakly to a point in Se(A, B) and, consequently, {xn} converges weakly to a solution of (2).

Weak convergence of {(xn, bn)} and {(yn, −an)} to a point in the extended solution set Se(A, B) was proved in [6, Theorem 1] in a more general context; namely, without assuming maximal monotonicity of the sum A + B and in the case where the proximal subproblems in (4) are solved inexactly within a summable error tolerance. Since here we assume that there are no errors in the solution of the proximal subproblems in (4), the basic ideas of the proof of weak convergence [6] can be used without the technicalities which attend the use of summable error criteria.

4. Proof of Theorem 3.3

Observe that Se(λA, λB) = {(z, λw) : (z, w) ∈ Se(A, B)} and, for any sequence {(zn, wn)} in X × X,

(zn, wn) ⇀ (z, w) as n → ∞  ⟺  (zn, λwn) ⇀ (z, λw) as n → ∞.
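As a concrete, entirely optional illustration (ours, not part of the paper), the recursion (4) can be run numerically on X = R. The sketch below takes A(x) = x − 3 (single-valued, maximal monotone) and B = ∂|·|, whose resolvents are available in closed form; the unique solution of 0 ∈ A(x) + B(x) is x = 2. All names and the choice of toy operators are our assumptions, made only for this example.

```python
# Toy run of Douglas-Rachford iteration (4) on X = R (illustration only).
# A(x) = x - 3 (maximal monotone, single-valued), B = subdifferential of |x|.
# The unique zero of A + B is x = 2, since 2 - 3 + 1 = 0 with 1 in B(2).

lam = 1.0  # stepsize lambda > 0

def J_A(z, lam=lam):
    # Resolvent of A: solve x + lam*(x - 3) = z for x.
    return (z + 3.0 * lam) / (1.0 + lam)

def J_B(z, lam=lam):
    # Resolvent of B = d|.|: soft-thresholding with parameter lam.
    return max(abs(z) - lam, 0.0) * (1.0 if z > 0 else -1.0)

x, b = 0.0, 0.0  # x0 and a choice of b0 in B(x0) = [-1, 1]
for n in range(60):
    # y_n, a_n with a_n in A(y_n) and lam*a_n + y_n = x_{n-1} - lam*b_{n-1}:
    y = J_A(x - lam * b)
    a = ((x - lam * b) - y) / lam
    # x_n, b_n with b_n in B(x_n) and lam*b_n + x_n = y_n + lam*b_{n-1}:
    x_new = J_B(y + lam * b)
    b = ((y + lam * b) - x_new) / lam
    x = x_new

print(x)  # close to the solution 2.0
```

On this example the iterates contract toward the solution geometrically, consistent with the (merely weak, in general) convergence of {xn} asserted by Theorem 3.3.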


Moreover, since one can define Ã = λA and B̃ = λB, and apply (4) to Ã and B̃ instead of A and B, without loss of generality we assume from now on that λ = 1. Under this assumption, (4) writes

an ∈ A(yn),  an + yn = xn−1 − bn−1,
bn ∈ B(xn),  bn + xn = yn + bn−1,  (n = 1, 2, . . .).  (7)

Let

pn = (xn, bn)  (n = 1, 2, . . .).  (8)

Our aim is to prove that {pn} converges weakly to a point in Se(A, B). First we will prove that this sequence is Fejér convergent to Se(A, B). The next result was proved in [6, Lemma 2] in a more general context. Since this result is the key for proving weak convergence of {pn} (together with the results of Lions and Mercier summarized in Theorem 3.2), we repeat here its proof.

Lemma 4.1 ([6, Lemma 2]). If p ∈ Se(A, B), then for all n,

∥p − pn−1∥² ≥ ∥p − pn∥² + ∥an + bn−1∥² = ∥p − pn∥² + ∥xn−1 − yn∥².

Proof. Fix p = (z, w) ∈ Se(A, B). It follows from the inclusions in (6) and (7) and from the monotonicity of A and B that

⟨z − xn, xn − xn−1⟩ = ⟨z − xn, −an − bn⟩
= ⟨z − xn, w − bn⟩ + ⟨z − yn, −w − an⟩ + ⟨yn − xn, −w − an⟩
≥ ⟨yn − xn, −w − an⟩.

Direct combination of this inequality with the second equality in (7) and (8) yields

⟨p − pn, pn − pn−1⟩ = ⟨z − xn, xn − xn−1⟩ + ⟨w − bn, bn − bn−1⟩
≥ ⟨yn − xn, −w − an⟩ + ⟨w − bn, yn − xn⟩ = ⟨xn − yn, an + bn⟩.

Therefore, since (7) gives xn − xn−1 = −(an + bn) and bn − bn−1 = yn − xn,

∥p − pn−1∥² = ∥p − pn∥² + ∥pn − pn−1∥² + 2⟨p − pn, pn − pn−1⟩
≥ ∥p − pn∥² + ∥an + bn∥² + ∥xn − yn∥² + 2⟨xn − yn, an + bn⟩
= ∥p − pn∥² + ∥(xn − yn) + (an + bn)∥² = ∥p − pn∥² + ∥xn−1 − yn∥²,

which proves the lemma. □

Observe that this lemma gives an explicit estimate of how fast xn−1 − yn = an + bn−1 → 0. Indeed, if p ∈ Se(A, B), then addition of the inequality stated in the lemma over all n yields

∥p − p0∥² ≥ Σ_{n=1}^{∞} ∥an + bn−1∥² = Σ_{n=1}^{∞} ∥yn − xn−1∥².  (9)

Proof of Theorem 3.3. It follows from Lemma 4.1 that {pn} is Fejér convergent to Se(A, B). Suppose the solution set of (2) is nonempty. Since {pn} is Fejér convergent to Se(A, B), this sequence is bounded and, consequently, it has weak limit points. Let p = (z, w) be a weak limit point of {pn}; that is, there exists a subsequence {pnk} which converges weakly to (z, w),

xnk ⇀ z and bnk ⇀ w as k → ∞.

Let rn = yn+1 − xn and sn = an+1 + bn for n = 1, 2, . . . . It follows from Lions and Mercier's results in [3], summarized in Theorem 3.2, that {rn} and {sn} converge to 0. Therefore, the subsequences {rnk} and {snk} also converge to 0; that is, ynk+1 − xnk → 0 and ank+1 + bnk → 0 as k → ∞. Using Lemma 2.2 with

zk,1 = xnk,  zk,2 = ynk+1,  vk,1 = bnk,  vk,2 = ank+1,  k = 1, 2, . . . ,

we conclude that (z, w) ∈ Se(A, B). Hence, all weak limit points of {pn} belong to Se(A, B). Since {pn} is Fejér convergent to Se(A, B), it follows from Opial's Lemma that this sequence has at most one weak limit point in that set. Therefore, (z, w) ∈ Se(A, B) is the unique weak limit point of the bounded sequence {pn}, which is equivalent to weak convergence of {pn} to such a point p = (z, w). Weak convergence of {(yn, −an)} to p follows trivially again from Lions and Mercier's results summarized in Theorem 3.2. Weak convergence of {xn} to z follows trivially. □

Acknowledgements

We thank the anonymous referee for his/her corrections and suggestions. The author was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico – CNPq grants 306247/2015-1 and 430868/2018-9 and by Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro – FAPERJ grant Cientistas de Nosso Estado E-26/203.318/2017.

References

[1] Jim Douglas Jr., H.H. Rachford Jr., On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc. 82 (1956) 421–439.
[2] Jonathan Eckstein, B.F. Svaiter, A family of projective splitting methods for the sum of two maximal monotone operators, Math. Program. 111 (1–2, Ser. B) (2008) 173–199.
[3] P.-L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal. 16 (6) (1979) 964–979.
[4] George J. Minty, Monotone (nonlinear) operators in Hilbert space, Duke Math. J. 29 (1962) 341–346.
[5] Zdzisław Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967) 591–597.
[6] B.F. Svaiter, On weak convergence of the Douglas–Rachford method, SIAM J. Control Optim. 49 (1) (2011) 280–287.