Stabilization of stochastic parabolic equations with boundary-noise and boundary-control

Ionuţ Munteanu
Alexandru Ioan Cuza University, Department of Mathematics, and Octav Mayer Institute of Mathematics (Romanian Academy), 700506 Iaşi, Romania
E-mail address: [email protected].
Article info

Article history: Received 7 September 2016; Available online xxxx; Submitted by U. Stadtmueller.

Keywords: Heat equation; Neumann boundary conditions; Boundary noise; Feedback controller; Eigenfunctions
Abstract

We consider a state equation of parabolic type on a bounded real interval, with Neumann boundary conditions consisting of the sum of the control and of a white noise in time. We design a feedback law of simple form, easily implementable numerically, that ensures the exponential stability in probability of a given equilibrium solution. The form of the feedback involves only the eigenfunctions of the linear operator of the linearized equation.

© 2016 Elsevier Inc. All rights reserved.
1. Introduction

The main concern of the present work is the stabilization problem associated to a parabolic type equation on a bounded real interval, with Neumann boundary conditions in which the derivative of the unknown is equal to the sum of the control and of a white noise in time, namely:

\[
\begin{cases}
\partial_t z(t,x) = z''(t,x) + f(x,z(t,x)), & t>0,\ x\in(0,\pi),\\
z'(t,0) = v(t) + e^{-\delta t}\dot W,\quad z'(t,\pi)=0, & t>0,\\
z(0,x) = z_o(x), & x\in(0,\pi).
\end{cases}
\tag{1.1}
\]
Here $\{W = W(t),\ t\ge 0\}$ is a standard real Wiener process on a probability space $(\Omega,\mathcal F,\mathbb P)$; the unknown $z = z(t,x,\omega)$ is a real-valued process; the initial datum $z_o$ belongs to $L^2(0,\pi)$; $f$ is a nonlinear function; and $v = v(t)$ is the control. The symbol $'$ stands for the spatial partial derivative, i.e., $\frac{\partial}{\partial x}$.

The problem of boundary stabilization of parabolic type equations has been widely studied, both in the deterministic case (see, for instance, [21,12]) and in the stochastic case (see, for instance, [11], [1, Section 2.4.1] and the references therein). Regarding the latter, in all those works the equation
always contains noise as a forcing term, which makes the problem easier, since the presence of enough noise guarantees the stability of the system. However, [7] considers the limiting and more difficult situation of a noise acting only at the boundary, corresponding to the realistic situation where the control itself is perturbed by a noise $\dot W$. Here, we assume a fading noise, namely $e^{-\delta t}\dot W$ for some $\delta > 0$ (the noise perturbation does not keep the same intensity permanently but vanishes exponentially fast). The exponential decay requirement on the noise is mandatory, since we intend to construct a feedback $u$ that guarantees that the solution of the corresponding closed-loop translated system (1.2) decays exponentially fast to zero. In [7] the problem of optimal control associated to equation (1.1) is addressed. Those results were improved in [9], while in [22] the authors considered, in addition, some delays in equation (1.1).

In the present paper, we study the problem of exponential stabilization of equilibrium solutions to (1.1). More precisely, let $z_e \in C^2([0,\pi])$ be any solution to the equation
\[
z_e''(x) + f(x, z_e(x)) = 0 \ \text{ in } (0,\pi); \qquad z_e'(\pi) = 0.
\]
Translating $z_e$ into zero by introducing the fluctuation variable $y := z - z_e$, we equivalently rewrite (1.1) as

\[
\begin{cases}
\partial_t y(t,x) = y''(t,x) + f(x, y(t,x)+z_e(x)) - f(x, z_e(x)), & t>0,\ x\in(0,\pi),\\
y'(t,0) = u + e^{-\delta t}\dot W,\quad y'(t,\pi)=0, & t>0,\\
y(0,x) = y^o(x) := z_o(x) - z_e(x), & x\in(0,\pi),
\end{cases}
\tag{1.2}
\]
where $u = v - z_e'(0)$. Then, we look for a feedback law $u = u(y)$ such that, once inserted into (1.2), the corresponding solution of the closed-loop equation satisfies

\[
\lim_{t\to\infty} e^{\mu t}\int_0^\pi |y(t,x)|^2\,dx < \infty, \quad \mathbb{P}\text{-a.s.},
\]
for some positive constant $\mu$.

In the present work, the design of the stabilizing controller $u$ uses the ideas in [2]. There, a simple feedback law, of so-called proportional type, is proposed (the stochastic version of this feedback is developed in [1]). The procedure in [2] is associated to Dirichlet boundary conditions, but it can easily be reformulated for Neumann boundary conditions to suit our case. However, it is of conditional nature, requiring the linear independence of the normal derivatives of the eigenfunctions of the linear operator of the linearized equation on the part of the boundary where the control is applied. In the Neumann boundary case the requirement becomes the linear independence of the eigenfunctions themselves on that part of the boundary. It is easy to see that in the one-dimensional case (which is our case) this requirement cannot be fulfilled, since the values of the eigenfunctions on the boundary are nothing but real numbers. To overcome this problem, we shall use the ideas in [18], which extend those in [2]. (Roughly speaking, [18] drops the hypothesis of linear independence and proposes a stabilizing feedback similar to the one in [2].) In the literature, similar proportional-type feedbacks were constructed for stabilizing other important parabolic-type equations, such as the Navier–Stokes equations in [16,20], the phase-field equations in [15], the heat equation with delays in [17], and the magnetohydrodynamic equations in a channel in [19]. Equations of type (1.1) arise naturally as models describing chemical reactions, for instance. However, the higher-dimensional case and, more importantly, the case of Dirichlet boundary conditions are left for subsequent work (see the Conclusions section).

The organization of the paper is as follows: in Section 2, after introducing the main assumptions and notations, we give a priori the form of the stabilizing feedback and state the main stabilization result of this work, Theorem 2.1 below; then, in Section 3, we focus on its proof. Finally, in Section 4 we end with some conclusions.
2. Main assumptions, notations and the feedback law

Everywhere in the following, we shall assume that

(i) $f, f_z \in C([0,\pi]\times\mathbb{R})$,

where $f_z = \frac{\partial}{\partial z} f$. Besides this, when needed, we shall strengthen assumption (i) to

(ii) $|f_z(x,z)| \le C(|z|^m + 1), \quad \forall x\in[0,\pi],\ z\in\mathbb{R}$,

where $0 < m < \infty$.

We denote by $L^2(0,\pi)$ the space of Lebesgue square-integrable functions on $(0,\pi)$, with the norm $\|y\| := \left(\int_0^\pi |y(x)|^2\,dx\right)^{1/2}$ and the scalar product $\langle y, z\rangle := \int_0^\pi y(x)z(x)\,dx$. We consider the linear self-adjoint operator in $L^2(0,\pi)$

\[
Ay := y'' + f_z(x, z_e)y, \quad \forall y \in D(A), \qquad D(A) = \left\{ y \in H^2(0,\pi) : y'(0) = y'(\pi) = 0 \right\},
\tag{2.1}
\]

where $H^2(0,\pi)$ stands for the standard Sobolev space on $(0,\pi)$. It is known that $-A$ has a countable set of real eigenvalues, denoted by $\{\lambda_j\}_{j=1}^\infty$, with corresponding eigenfunctions denoted by $\{\phi_j\}_{j=1}^\infty$, that is, $-A\phi_j = \lambda_j\phi_j$, $j = 1, 2, 3, \ldots$. The system $\{\phi_j\}_{j\in\mathbb{N}^*}$ may be chosen to be orthonormal. Besides this, given $\rho > 0$, there is an $N \in \mathbb{N}$ such that

\[
\lambda_j \le \rho \ \text{ for } j = 1, 2, \ldots, N \quad \text{and} \quad \lambda_j > \rho \ \text{ otherwise}.
\tag{2.2}
\]

(For more details see [2].)
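For readers who wish to experiment numerically, the following sketch (not part of the paper) approximates the eigenpairs of the operator $A$ in (2.1) by a standard finite-difference discretization with Neumann boundary conditions and selects $N$ as in (2.2). The grid size, the value of $\rho$ and the sample potential $a(x) = f_z(x, z_e(x))$ are illustrative assumptions.

```python
import numpy as np

def neumann_eigenpairs(a, n=400, rho=1.0):
    """Approximate the eigenpairs of A y = y'' + a(x) y on (0, pi) with Neumann
    conditions y'(0) = y'(pi) = 0 by central finite differences.  Returns the grid x,
    the eigenvalues lam of -A (sorted increasingly), the (approximately L2-normalized)
    eigenfunctions phi[:, j] sampled on x, and the number N of modes with lam <= rho."""
    h = np.pi / (n - 1)
    x = np.linspace(0.0, np.pi, n)
    L = np.zeros((n, n))                      # discrete second derivative
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
    L[0, 0], L[0, 1] = -2.0, 2.0              # ghost-point Neumann row at x = 0
    L[-1, -1], L[-1, -2] = -2.0, 2.0          # ghost-point Neumann row at x = pi
    L /= h ** 2
    Aop = L + np.diag(a(x))                   # discretization of A = d^2/dx^2 + a(x)
    lam, V = np.linalg.eig(-Aop)              # spectrum of -A (real for this matrix)
    lam, V = lam.real, V.real
    idx = np.argsort(lam)
    lam, phi = lam[idx], V[:, idx] / np.sqrt(h)   # columns roughly L2(0,pi)-normalized
    return x, lam, phi, int(np.sum(lam <= rho))

# Illustrative run with an assumed potential a(x) = f_z(x, z_e(x)) and rho = 1:
x, lam, phi, N = neumann_eigenpairs(lambda s: 2.0 - np.cos(s), n=400, rho=1.0)
print("first eigenvalues of -A:", np.round(lam[:4], 3), " N =", N)
```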
We fix $\rho$ (and so we fix $N$ as well) such that $2\rho - \delta < 0$. In order to simplify the presentation, we shall assume further that the first $N$ eigenvalues are mutually distinct. The general case can also be considered and treated as in [18]. Next, we denote by $B$ the Gram matrix corresponding to the system $\{\phi_1(0), \phi_2(0), \ldots, \phi_N(0)\}$ in $\mathbb{R}$, namely

\[
B := \begin{pmatrix}
(\phi_1(0))^2 & \phi_1(0)\phi_2(0) & \ldots & \phi_1(0)\phi_N(0)\\
\phi_2(0)\phi_1(0) & (\phi_2(0))^2 & \ldots & \phi_2(0)\phi_N(0)\\
\ldots & \ldots & \ldots & \ldots\\
\phi_N(0)\phi_1(0) & \phi_N(0)\phi_2(0) & \ldots & (\phi_N(0))^2
\end{pmatrix}.
\tag{2.3}
\]
It is important to emphasize that $\phi_i(0) \ne 0$, $i = 1, \ldots, N$, since otherwise we would have $\phi_i \equiv 0$, which is absurd. It is also clear that, for $N$ greater than $1$, this matrix is not invertible. As we shall need an invertible one in the sequel, let us perturb it. More precisely, let us introduce the matrices

\[
\Lambda_{\gamma_k} := \begin{pmatrix}
\frac{1}{\gamma_k - \lambda_1} & 0 & \ldots & 0\\
0 & \frac{1}{\gamma_k - \lambda_2} & \ldots & 0\\
\ldots & \ldots & \ldots & \ldots\\
0 & 0 & \ldots & \frac{1}{\gamma_k - \lambda_N}
\end{pmatrix}, \quad k = 1, \ldots, N,
\tag{2.4}
\]
(here $\gamma_k$, $k = 1, \ldots, N$, are mutually distinct positive numbers, chosen as in relation (3.4) below) and define

\[
B_k := \Lambda_{\gamma_k} B \Lambda_{\gamma_k}, \quad k = 1, \ldots, N.
\tag{2.5}
\]
Also, denote by

\[
T := \begin{pmatrix}
\frac{1}{\gamma_1 - \lambda_1}\phi_1(0) & \frac{1}{\gamma_1 - \lambda_2}\phi_2(0) & \ldots & \frac{1}{\gamma_1 - \lambda_N}\phi_N(0)\\
\frac{1}{\gamma_2 - \lambda_1}\phi_1(0) & \frac{1}{\gamma_2 - \lambda_2}\phi_2(0) & \ldots & \frac{1}{\gamma_2 - \lambda_N}\phi_N(0)\\
\ldots & \ldots & \ldots & \ldots\\
\frac{1}{\gamma_N - \lambda_1}\phi_1(0) & \frac{1}{\gamma_N - \lambda_2}\phi_2(0) & \ldots & \frac{1}{\gamma_N - \lambda_N}\phi_N(0)
\end{pmatrix}
\tag{2.6}
\]

and

\[
\mathbf{A} = (B_1 + B_2 + \ldots + B_N)^{-1}.
\tag{2.7}
\]
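As a concrete illustration of (2.3)–(2.7) (an assumption-based sketch, not part of the paper), the matrices $B$, $\Lambda_{\gamma_k}$, $B_k$, $\mathbf{A}$ and $T$ can be assembled directly from the boundary values $\phi_j(0)$, the eigenvalues $\lambda_j$ and a chosen set of mutually distinct $\gamma_k$; the sample numbers below are made up for the demonstration.

```python
import numpy as np

def design_matrices(phi0, lam, gammas):
    """Assemble the Gram matrix B of (2.3), the matrices B_k of (2.5),
    A = (B_1 + ... + B_N)^{-1} of (2.7) and T of (2.6) from the boundary values
    phi0[j] = phi_{j+1}(0), the eigenvalues lam[j] = lambda_{j+1} and the gamma_k."""
    phi0, lam, gammas = (np.asarray(v, dtype=float) for v in (phi0, lam, gammas))
    N = len(phi0)
    B = np.outer(phi0, phi0)                                   # (2.3), rank one
    Lams = [np.diag(1.0 / (g - lam)) for g in gammas]          # (2.4)
    Bks = [L @ B @ L for L in Lams]                            # (2.5)
    A = np.linalg.inv(sum(Bks))        # (2.7); the sum is invertible by [18, Appendix]
    T = np.array([[phi0[j] / (g - lam[j]) for j in range(N)] for g in gammas])   # (2.6)
    return B, Bks, A, T

# Quick standalone check with assumed sample data (N = 3):
phi0_demo = np.array([0.8, -0.6, 0.7])      # phi_j(0), all nonzero
lam_demo  = np.array([-1.0, 0.2, 0.9])      # first eigenvalues of -A, all <= rho
gam_demo  = np.array([3.0, 4.0, 5.0])       # mutually distinct, > delta > rho, cf. (3.4)
_, _, A_demo, _ = design_matrices(phi0_demo, lam_demo, gam_demo)
print(np.linalg.eigvalsh(A_demo))           # all positive: A is symmetric positive definite
```

Incidentally, with these definitions one has $\sum_k B_k = T^\top T$, since $B_k = (\Lambda_{\gamma_k}\phi(0))(\Lambda_{\gamma_k}\phi(0))^\top$ and the $k$-th row of $T$ is precisely $(\Lambda_{\gamma_k}\phi(0))^\top$; this gives an elementary way to see the invertibility used in (2.7) when the $\gamma_k$ and $\lambda_j$ are mutually distinct and $\phi_j(0)\ne 0$.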
We note that, even though $B$ is singular, and hence each $B_k$ is singular, their sum is nonsingular by [18, Appendix], so the matrix $\mathbf{A}$ is well defined.

Let us now pause to comment further on the matrices introduced above. Since $B$ is a Gram matrix, it is symmetric and positive semidefinite. It follows that each $B_k$ is symmetric and positive semidefinite, that is,

\[
\langle B_k q, q\rangle_N \ge 0, \quad \forall q \in \mathbb{R}^N
\tag{2.8}
\]

(here and in what follows $\langle\cdot,\cdot\rangle_N$ stands for the standard scalar product in $\mathbb{R}^N$), and their sum is symmetric and positive definite, since it is invertible. Thus, the matrix $\mathbf{A}$ is symmetric and positive definite, and one can define another positive definite symmetric matrix, denoted by $\mathbf{A}^{\frac12}$, such that

\[
\mathbf{A}^{\frac12}\mathbf{A}^{\frac12} = \mathbf{A}
\tag{2.9}
\]
(the square root of $\mathbf{A}$; for more details see [3]).

Bearing in mind the above notations, we may now give the form of the feedback which we claim exponentially stabilizes equation (1.2). More precisely,

\[
u(t) := \left\langle T\mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix}1\\1\\ \ldots\\1\end{pmatrix}\right\rangle_N.
\tag{2.10}
\]
For later purposes, it is convenient to equivalently write the feedback $u$ as the sum of $N$ feedbacks $u_k$, that is, $u = u_1 + u_2 + \ldots + u_N$, where

\[
u_k(t) := \left\langle \mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix} \frac{1}{\gamma_k - \lambda_1}\phi_1(0)\\ \frac{1}{\gamma_k - \lambda_2}\phi_2(0)\\ \ldots\\ \frac{1}{\gamma_k - \lambda_N}\phi_N(0) \end{pmatrix}\right\rangle_N, \quad t \ge 0,\ k = 1, \ldots, N.
\tag{2.11}
\]
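Continuing the sketches above (and reusing the assumed helpers `neumann_eigenpairs` and `design_matrices` together with their illustrative data), the feedback (2.10) and its components (2.11) reduce to scalar products of the modal coefficients $\langle y(t),\phi_j\rangle$; the grid-based inner products and the numerical choices below are assumptions of the demonstration, not the paper's prescription.

```python
def feedback(y, x, phi, A, T):
    """Evaluate the feedback u of (2.10) and its components u_k of (2.11) for a state
    snapshot y sampled on the grid x; phi[:, j] holds the samples of phi_{j+1} and
    N = T.shape[0] is the number of retained modes."""
    N = T.shape[0]
    w = np.full_like(x, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
    Y = phi[:, :N].T @ (w * y)     # modal coefficients Y_j = <y, phi_j>
    uk = T @ (A @ Y)               # uk[k-1] = <A Y, (phi_j(0)/(gamma_k-lambda_j))_j>, cf. (2.11)
    return float(uk.sum()), uk     # (2.10): u = u_1 + ... + u_N

# Tie the sketches together (all numerical choices are illustrative):
phi0, lamN = phi[0, :N], lam[:N]
gammas = 3.0 + np.arange(N, dtype=float)  # mutually distinct, above an assumed delta = 2.5, cf. (3.4)
B, Bks, A, T = design_matrices(phi0, lamN, gammas)
u, uk = feedback(np.cos(x), x, phi, A, T) # assumed state snapshot y(x) = cos(x)
print("u =", u)
```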
Our goal is to show the following result.

Theorem 2.1. Under assumptions (i) and (ii), the solution to the closed-loop equation

\[
\begin{cases}
\partial_t z(t,x) = z''(t,x) + f(x, z(t,x)), & t>0,\ x\in(0,\pi),\\[1mm]
z'(t,0) = \left\langle T\mathbf{A}\begin{pmatrix} \langle z(t)-z_e, \phi_1\rangle\\ \langle z(t)-z_e, \phi_2\rangle\\ \ldots\\ \langle z(t)-z_e, \phi_N\rangle \end{pmatrix},\ \begin{pmatrix}1\\1\\ \ldots\\1\end{pmatrix}\right\rangle_N + z_e'(0) + e^{-\delta t}\dot W(t), \quad z'(t,\pi) = 0, & t\ge 0,\\[1mm]
z(0,x) = z_o(x), & x\in(0,\pi),
\end{cases}
\tag{2.12}
\]

satisfies

\[
\lim_{t\to\infty} e^{\frac{\rho}{2}t}\|z(t) - z_e\|^2 < \infty, \quad \mathbb{P}\text{-a.s.},
\]

provided that $\|z_o - z_e\| \le \theta$ for some $\theta > 0$ sufficiently small. Here $T$ is introduced in relation (2.6), while $\mathbf{A}$ is introduced in relation (2.7); $\phi_1, \ldots, \phi_N$ are the first $N$ eigenfunctions of the operator $-A$ introduced in (2.1).

Before ending this section, we state the following lemma, which is given without proof (we refer to [14] for its proof). It is the key result that allows us to obtain convergence in probability of stochastic processes.

Lemma 2.1. Let $I$ and $I_1$ be nondecreasing adapted processes, $Z$ a nonnegative semimartingale and $M$ a local martingale such that $\mathbb{E}(Z(t)) < \infty$, $\forall t \ge 0$, $I_1(\infty) < \infty$, $\mathbb{P}$-a.s., and
\[
Z(t) + I(t) = Z(0) + I_1(t) + M(t), \quad \forall t \ge 0.
\]
Then $\lim_{t\to\infty} Z(t)$ exists and is finite $\mathbb{P}$-a.s., and $I(\infty) < \infty$, $\mathbb{P}$-a.s.
3. Main results

In order to achieve our goal, we follow the classical steps, considering first the stabilization problem associated to the linearization of equation (1.2). More precisely, we plug the proposed feedback law (2.10) into the first-order approximation of equation (1.2) and derive the following result.

Theorem 3.1. Under assumption (i), the solution to the closed-loop equation

\[
\begin{cases}
\partial_t y(t,x) = y''(t,x) + f_z(x, z_e)y(t,x), & t>0,\ x\in(0,\pi),\\[1mm]
y'(t,0) = \left\langle T\mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix}1\\1\\ \ldots\\1\end{pmatrix}\right\rangle_N + e^{-\delta t}\dot W(t), \quad y'(t,\pi) = 0, & t\ge 0,\\[1mm]
y(0,x) = y^o(x), & x\in(0,\pi),
\end{cases}
\tag{3.1}
\]

satisfies

\[
\lim_{t\to\infty} e^{\frac{\rho}{2}t}\|y(t)\|^2 < \infty, \quad \mathbb{P}\text{-a.s.}
\]
Here $T$ is introduced in relation (2.6), while $\mathbf{A}$ is introduced in relation (2.7); $\phi_1, \ldots, \phi_N$ are the first $N$ eigenfunctions of the operator $-A$ introduced in (2.1).

Proof. Using the ideas in [6], we rewrite equation (3.1) as an internal-control-type problem. To this aim, given some $\gamma > 0$, we denote by $D_\gamma = D_\gamma(x)$, $x\in(0,\pi)$, the solution to the equation

\[
\begin{cases}
-D_\gamma''(x) - a(x)D_\gamma(x) - 2\displaystyle\sum_{j=1}^{N}\lambda_j\langle D_\gamma, \phi_j\rangle\phi_j(x) + \gamma D_\gamma(x) = 0, & x\in(0,\pi),\\
D_\gamma'(0) = 1, \quad D_\gamma'(\pi) = 0
\end{cases}
\tag{3.2}
\]

(here and below $a(x) := f_z(x, z_e(x))$). It is known that, for $\gamma > 0$ sufficiently large, equation (3.2) has a unique solution belonging to the space $H^{\frac12}(0,\pi)$ (see, e.g., [13]). In the sequel we shall need the scalar products $\langle D_\gamma, \phi_i\rangle$, $i = 1, \ldots, N$. To this end, let us multiply equation (3.2) scalarly by $\phi_i$ to obtain, after some straightforward computations, that

\[
\langle D_\gamma, \phi_i\rangle = -\frac{\phi_i(0)}{\gamma - \lambda_i}, \quad i = 1, \ldots, N.
\tag{3.3}
\]
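The lifting functions $D_\gamma$ of (3.2) can also be approximated numerically. The sketch below (an assumption-based illustration reusing the grid, eigenpairs and $\gamma_k$ of the previous sketches, with the same assumed potential $a(x)$) solves the nonlocal boundary-value problem by finite differences and checks identity (3.3) approximately.

```python
def solve_lifting(gamma, x, a_vals, phi, lam, N):
    """Finite-difference solve of the nonlocal boundary-value problem (3.2):
    -D'' - a D - 2 sum_{j<=N} lambda_j <D, phi_j> phi_j + gamma D = 0 on (0, pi),
    with D'(0) = 1 and D'(pi) = 0 (ghost-point treatment of the Neumann data)."""
    n, h = len(x), x[1] - x[0]
    w = np.full(n, h); w[0] *= 0.5; w[-1] *= 0.5          # trapezoidal weights
    M, rhs = np.zeros((n, n)), np.zeros(n)
    for i in range(1, n - 1):                             # rows of -D''
        M[i, i - 1] = M[i, i + 1] = -1.0 / h ** 2
        M[i, i] = 2.0 / h ** 2
    M[0, 0], M[0, 1], rhs[0] = 2.0 / h ** 2, -2.0 / h ** 2, -2.0 / h   # encodes D'(0) = 1
    M[-1, -1], M[-1, -2] = 2.0 / h ** 2, -2.0 / h ** 2                 # encodes D'(pi) = 0
    M += np.diag(gamma - a_vals)                          # (gamma - a(x)) D term
    M -= 2.0 * (phi[:, :N] * lam[:N]) @ (phi[:, :N].T * w)   # nonlocal low-rank term
    return np.linalg.solve(M, rhs)

# Approximate check of identity (3.3) for gamma = gamma_1 (illustrative values only):
a_vals = 2.0 - np.cos(x)                                  # same assumed potential as above
Dg = solve_lifting(gammas[0], x, a_vals, phi, lam, N)
w = np.full_like(x, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
print(phi[:, :N].T @ (w * Dg))                            # approx -phi_i(0)/(gamma_1 - lambda_i)
print(-phi[0, :N] / (gammas[0] - lam[:N]))
```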
Now, let

\[
\rho < \delta < \gamma_1 < \gamma_2 < \ldots < \gamma_N
\tag{3.4}
\]
be $N$ constants such that for each of them equation (3.2) has a unique solution, denoted by $D_{\gamma_1}, \ldots, D_{\gamma_N}$, respectively. For later purposes, we show that

\[
\begin{pmatrix} \langle D_{\gamma_k}u_k, \phi_1\rangle\\ \langle D_{\gamma_k}u_k, \phi_2\rangle\\ \ldots\\ \langle D_{\gamma_k}u_k, \phi_N\rangle \end{pmatrix}
= -B_k\mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},
\tag{3.5}
\]

where the $u_k$ are introduced in (2.11) and the $B_k$ in (2.5), for $k = 1, \ldots, N$. This is indeed so. We have

\[
\langle D_{\gamma_k}u_k, \phi_i\rangle = \left\langle \mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix} \frac{1}{\gamma_k-\lambda_1}\phi_1(0)\langle D_{\gamma_k}, \phi_i\rangle\\ \frac{1}{\gamma_k-\lambda_2}\phi_2(0)\langle D_{\gamma_k}, \phi_i\rangle\\ \ldots\\ \frac{1}{\gamma_k-\lambda_N}\phi_N(0)\langle D_{\gamma_k}, \phi_i\rangle \end{pmatrix}\right\rangle_N, \quad i = 1, \ldots, N.
\]

It then follows by relation (3.3) that

\[
\langle D_{\gamma_k}u_k, \phi_i\rangle = \left\langle \mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix} -\frac{1}{(\gamma_k-\lambda_1)(\gamma_k-\lambda_i)}\phi_1(0)\phi_i(0)\\ -\frac{1}{(\gamma_k-\lambda_2)(\gamma_k-\lambda_i)}\phi_2(0)\phi_i(0)\\ \ldots\\ -\frac{1}{(\gamma_k-\lambda_N)(\gamma_k-\lambda_i)}\phi_N(0)\phi_i(0) \end{pmatrix}\right\rangle_N, \quad i = 1, \ldots, N,
\]
from which we immediately obtain (3.5).

Besides this, let $D = D(x)$, $x\in(0,\pi)$, be the solution to an equation similar to (3.2), namely

\[
-D''(x) - a(x)D(x) + \gamma D(x) = 0, \quad x\in(0,\pi); \qquad D'(0) = 1, \quad D'(\pi) = 0,
\tag{3.6}
\]
for some sufficiently large $\gamma > 0$. Similarly as above, we have

\[
\langle D, \phi_i\rangle = -\frac{\phi_i(0)}{\gamma + \lambda_i}, \quad i = 1, 2, 3, \ldots.
\tag{3.7}
\]

Further, set $l(t,x) = y(t,x) - \sum_{k=1}^{N} D_{\gamma_k}(x)u_k(t) - D(x)e^{-\delta t}\dot W$. It is easy to see that $l$ belongs to $D(A)$. Formally differentiating $l$ with respect to time yields

\[
\partial_t l(t,x) = y''(t,x) + a(x)y(t,x) - \partial_t\left(\sum_{k=1}^{N} D_{\gamma_k}(x)u_k(t) + D(x)e^{-\delta t}\dot W\right)
= Al + \alpha(t,x) - \partial_t\beta(t,x),
\]

where

\[
\alpha(t,x) := -2\sum_{j=1}^{N}\lambda_j\sum_{k=1}^{N}\langle D_{\gamma_k}u_k, \phi_j\rangle\phi_j(x) + \sum_{k=1}^{N}\gamma_k D_{\gamma_k}(x)u_k(t) + \gamma D(x)e^{-\delta t}\dot W
\]

and

\[
\beta(t,x) := \sum_{k=1}^{N} D_{\gamma_k}(x)u_k(t) + D(x)e^{-\delta t}\dot W.
\]

Therefore

\[
y(t,x) = l(t,x) + \beta(t,x) = \beta(t,x) + e^{tA}(y_o - \beta_o) + \int_0^t e^{(t-s)A}\alpha(s,x)\,ds - \int_0^t e^{(t-s)A}\partial_s\beta(s,x)\,ds
= e^{tA}y_o + \int_0^t e^{(t-s)A}\alpha(s,x)\,ds - \int_0^t Ae^{(t-s)A}\beta(s,x)\,ds.
\]

It follows that equation (3.1) may be equivalently rewritten as

\[
\begin{cases}
dy(t) = \left(Ay(t) - 2\displaystyle\sum_{j=1}^{N}\lambda_j\sum_{k=1}^{N}\langle D_{\gamma_k}u_k, \phi_j\rangle\phi_j(x) + \sum_{k=1}^{N}(\gamma_k - A)D_{\gamma_k}(x)u_k(t)\right)dt + (\gamma - A)D(x)e^{-\delta t}\,dW,\\
y(0) = y^o.
\end{cases}
\tag{3.8}
\]

(For a detailed proof of the equivalence see [6].)

To show the exponential decay of the solution of equation (3.8), we invoke a classical technique of decomposition into a stable and an unstable part, due to [21]. More exactly, let $X_u := \mathrm{linspan}\{\phi_j\}_{j=1}^{N}$ and $X_s := \mathrm{linspan}\{\phi_j\}_{j=N+1}^{\infty}$, let $P_N$ be the algebraic projection of $L^2(0,\pi)$ onto $X_u$, and set $y_u := P_N y$, $y_s := (I-P_N)y$. Then, applying the projectors $P_N$ and $I-P_N$, respectively, we may split (3.8) into two parts, as follows: on $X_u$,

\[
\begin{cases}
dy_u(t) = \left(A_u y_u(t) - 2\displaystyle\sum_{j=1}^{N}\lambda_j\sum_{k=1}^{N}\langle D_{\gamma_k}u_k, \phi_j\rangle\phi_j(x) + \sum_{k=1}^{N}(\gamma_k - A_u)D_{\gamma_k}(x)u_k(t)\right)dt + (\gamma - A_u)D(x)e^{-\delta t}\,dW,\\
y_u(0) = y_u^o := P_N y^o,
\end{cases}
\tag{3.9}
\]
and on $X_s$,

\[
\begin{cases}
dy_s(t) = \left(A_s y_s(t) + \displaystyle\sum_{k=1}^{N}(\gamma_k - A_s)D_{\gamma_k}(x)u_k(t)\right)dt + (\gamma - A_s)D(x)e^{-\delta t}\,dW,\\
y_s(0) = y_s^o := (I - P_N)y^o.
\end{cases}
\tag{3.10}
\]

Here $A_u := P_N A$ and $A_s := (I - P_N)A$.

First, we focus on the so-called unstable part (3.9). Let us denote the right-hand side of the equation by

\[
R(y_u) := \left[A_u y_u(t) - 2\sum_{j=1}^{N}\lambda_j\sum_{k=1}^{N}\langle D_{\gamma_k}u_k, \phi_j\rangle\phi_j(x) + \sum_{k=1}^{N}(\gamma_k - A_u)D_{\gamma_k}(x)u_k(t)\right]dt + (\gamma - A_u)D(x)e^{-\delta t}\,dW.
\]

For $i = 1, \ldots, N$, we have for the scalar product

\[
\langle R, \phi_i\rangle = \left(-\lambda_i\langle y, \phi_i\rangle - \lambda_i\sum_{k=1}^{N}\langle D_{\gamma_k}u_k, \phi_i\rangle + \sum_{k=1}^{N}\gamma_k\langle D_{\gamma_k}u_k, \phi_i\rangle\right)dt + (\gamma + \lambda_i)\langle D, \phi_i\rangle e^{-\delta t}\,dW,
\]

where, taking into account relations (3.5) and (3.7) and recalling that $\mathbf{A} = (B_1 + B_2 + \ldots + B_N)^{-1}$, we obtain

\[
\begin{pmatrix} \langle R, \phi_1\rangle\\ \langle R, \phi_2\rangle\\ \ldots\\ \langle R, \phi_N\rangle \end{pmatrix}
= \left(-\Lambda Y + \Lambda\sum_{k=1}^{N}B_k\mathbf{A}Y - \sum_{k=1}^{N}\gamma_k B_k\mathbf{A}Y\right)dt + \Phi e^{-\delta t}\,dW
= \left(-\gamma_1 Y + \sum_{k=2}^{N}(\gamma_1 - \gamma_k)B_k\mathbf{A}Y\right)dt + \Phi e^{-\delta t}\,dW.
\tag{3.11}
\]

Here

\[
\Lambda := \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N); \qquad
Y := \begin{pmatrix} \langle y, \phi_1\rangle\\ \langle y, \phi_2\rangle\\ \ldots\\ \langle y, \phi_N\rangle \end{pmatrix}
\quad\text{and}\quad
\Phi := \begin{pmatrix} -\phi_1(0)\\ -\phi_2(0)\\ \ldots\\ -\phi_N(0) \end{pmatrix}.
\]

So, by (3.11), multiplying equation (3.9) scalarly by $\phi_i$, $i = 1, \ldots, N$, we get

\[
\begin{cases}
dY = \left(-\gamma_1 Y + \displaystyle\sum_{k=2}^{N}(\gamma_1 - \gamma_k)B_k\mathbf{A}Y\right)dt + \Phi e^{-\delta t}\,dW, \quad t > 0,\\
Y(0) = Y_o.
\end{cases}
\tag{3.12}
\]
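Before carrying out the Itô computation, it may help to see (3.12) numerically. The sketch below (illustrative only, reusing $\mathbf{A}$, $B_k$, $\gamma_k$ and $\phi_j(0)$ from the earlier sketches, with an assumed value of $\delta$) integrates the $N$-dimensional SDE by the Euler–Maruyama scheme and monitors the quantity $e^{\delta t}\langle\mathbf{A}Y(t), Y(t)\rangle_N$ that the proof estimates.

```python
rng = np.random.default_rng(0)

def simulate_unstable_part(A, Bks, Phi, gammas, delta, Y0, dt=1e-3, T_end=10.0):
    """Euler-Maruyama integration of the N-dimensional SDE (3.12),
    dY = (-gamma_1 Y + sum_{k>=2} (gamma_1 - gamma_k) B_k A Y) dt + Phi e^{-delta t} dW,
    returning the monitored values e^{delta t} <A Y(t), Y(t)>_N along the path."""
    drift = -gammas[0] * np.eye(len(Y0))
    for k in range(1, len(gammas)):
        drift += (gammas[0] - gammas[k]) * (Bks[k] @ A)
    Y, t, monitored = np.array(Y0, dtype=float), 0.0, []
    for _ in range(int(T_end / dt)):
        dW = np.sqrt(dt) * rng.standard_normal()
        Y = Y + (drift @ Y) * dt + Phi * np.exp(-delta * t) * dW
        t += dt
        monitored.append(np.exp(delta * t) * float(Y @ (A @ Y)))
    return np.array(monitored)

# Illustrative run with the matrices assembled earlier; delta = 2.5 is an assumed value
# satisfying rho < delta < gamma_1 and 2*rho - delta < 0 (rho = 1 in the sketches above).
delta = 2.5
Phi = -phi[0, :N]                 # Phi = (-phi_1(0), ..., -phi_N(0)), as in (3.11)
vals = simulate_unstable_part(A, Bks, Phi, gammas, delta, Y0=np.ones(N))
print("max of e^{delta t} <A Y, Y> along the path:", vals.max())
```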
Applying Itô's formula to $e^{\delta t}\langle\mathbf{A}Y, Y\rangle_N$ in (3.12) yields

\[
\begin{aligned}
e^{\delta t}\|\mathbf{A}^{\frac12}Y\|_N^2 = \|\mathbf{A}^{\frac12}Y_o\|_N^2
&+ \int_0^t\left[(\delta - 2\gamma_1)e^{\delta s}\|\mathbf{A}^{\frac12}Y\|_N^2 + 2e^{\delta s}\sum_{k=2}^{N}(\gamma_1 - \gamma_k)\langle B_k\mathbf{A}Y, \mathbf{A}Y\rangle_N\right]ds\\
&+ \int_0^t e^{-\delta s}\langle\mathbf{A}\Phi, \Phi\rangle_N\,ds + 2\int_0^t\langle\mathbf{A}\Phi, Y\rangle_N\,dW.
\end{aligned}
\tag{3.13}
\]
Here, $\mathbf{A}^{\frac12}$ is the square root of $\mathbf{A}$ (see (2.9)) and $\|\cdot\|_N$ denotes the standard norm in $\mathbb{R}^N$. Taking advantage of (2.8) and of the fact that $\gamma_1 - \gamma_k \le 0$, $k = 2, \ldots, N$, we see that

\[
2e^{\delta s}\sum_{k=2}^{N}(\gamma_1 - \gamma_k)\langle B_k\mathbf{A}Y, \mathbf{A}Y\rangle_N \le 0, \quad s \ge 0.
\]

Also, recall that $\gamma_1$ was taken such that $\gamma_1 > \delta$, which implies that $\delta - 2\gamma_1 < 0$. Finally, notice that, since $\mathbf{A}$ is symmetric and positive definite, we have $\langle\mathbf{A}\Phi, \Phi\rangle_N \ge 0$. Hence, taking the expectation in (3.13) yields
\[
\mathbb{E}\left(e^{\delta t}\|\mathbf{A}^{\frac12}Y\|_N^2\right) \le \|\mathbf{A}^{\frac12}Y_o\|_N^2 + \frac{1}{\delta}\langle\mathbf{A}\Phi, \Phi\rangle_N < \infty, \quad \forall t \ge 0.
\tag{3.14}
\]
Now, let us denote by

\[
Z(t) := e^{\delta t}\|\mathbf{A}^{\frac12}Y\|_N^2, \qquad
I_1(t) := \langle\mathbf{A}\Phi, \Phi\rangle_N\int_0^t e^{-\delta s}\,ds, \qquad
M(t) := 2\int_0^t\langle\mathbf{A}\Phi, Y\rangle_N\,dW,
\]

and

\[
I(t) := -\int_0^t\left[(\delta - 2\gamma_1)e^{\delta s}\|\mathbf{A}^{\frac12}Y\|_N^2 + 2e^{\delta s}\sum_{k=2}^{N}(\gamma_1 - \gamma_k)\langle B_k\mathbf{A}Y, \mathbf{A}Y\rangle_N\right]ds.
\]
Taking into account that $M$ is a local martingale and that $I, I_1$ are nondecreasing, adapted processes of finite variation, we conclude that $Z(t) = Z(0) + I_1(t) - I(t) + M(t)$ is a semimartingale. By (3.14) we may apply Lemma 2.1 to $Z, I, I_1$ and $M$ (noticing also the obvious fact that $I_1(\infty) < \infty$) to obtain that the limit

\[
\lim_{t\to\infty} e^{\delta t}\|\mathbf{A}^{\frac12}Y\|_N^2 < \infty, \quad \mathbb{P}\text{-a.s.},
\]

exists. Using that $\mathbf{A}^{\frac12}$ is an invertible, positive definite, symmetric matrix, it follows that $\lim_{t\to\infty} e^{\delta t}\|Y\|_N^2 < \infty$, $\mathbb{P}$-a.s. This yields

\[
\lim_{t\to\infty} e^{\delta t}\|y_u(t)\|^2 < \infty, \quad \mathbb{P}\text{-a.s.},
\tag{3.15}
\]

since $\|y_u(t)\|^2 = \sum_{j=1}^{N}|\langle y(t), \phi_j\rangle|^2 = \|Y\|_N^2$.
Concerning the stable part (3.10), in order to show the exponential stability in probability of its solution we shall apply a classical technique developed in [10]. The spectrum of the operator $A_s$ consists of $\{-\lambda_j\}_{j=N+1}^{\infty}$, with $-\lambda_j < -\rho$ for $j \ge N+1$; therefore $A_s$ generates a $\rho$-exponentially stable $C_0$-semigroup on $X_s$. By the Lyapunov theorem there is $Q \in L(X_s, X_s)$, $Q = Q^* \ge 0$, such that

\[
-\langle Qy, (A_s + \rho)y\rangle = \frac{1}{2}\|y\|^2, \quad \forall y \in X_s.
\tag{3.16}
\]
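For a finite-dimensional truncation of the stable part (an illustration only, with an assumed sample of eigenvalues $\lambda_j > \rho$), the operator identity (3.16), read through the symmetric part of $Q(A_s+\rho I)$, becomes a standard matrix Lyapunov equation that can be solved, for instance, with SciPy.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed finite truncation of the stable part: A_s ~ diag(-lambda_j) with lambda_j > rho.
rho = 1.0
lam_stable = np.array([2.0, 7.0, 14.0, 23.0])    # assumed sample of stable eigenvalues
As_trunc = np.diag(-lam_stable)

# (3.16) in matrix form: (A_s + rho I)^T Q + Q (A_s + rho I) = -I.
Ashift = As_trunc + rho * np.eye(len(lam_stable))
Q = solve_continuous_lyapunov(Ashift.T, -np.eye(len(lam_stable)))
print(np.allclose(Ashift.T @ Q + Q @ Ashift, -np.eye(len(lam_stable))))   # True
print(np.linalg.eigvalsh(Q))    # positive entries: Q = Q^* >= 0, as the Lyapunov theorem asserts
```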
As in the unstable case, since $Q$ is positive definite and symmetric, we deduce the existence of a symmetric, positive definite operator $Q^{\frac12}$ such that $Q^{\frac12}Q^{\frac12} = Q$, namely the square root of $Q$. Let us denote by $F(u(t)) := \sum_{k=1}^{N}(\gamma_k - A_s)D_{\gamma_k}(x)u_k(t)$ and by $G(x) := (\gamma - A_s)D(x)$. So, equation (3.10) may be rewritten as
\[
\begin{cases}
dy_s(t) = \left(A_s y_s(t) + F(u(t))\right)dt + e^{-\delta t}G(x)\,dW,\\
y_s(0) = y_s^o.
\end{cases}
\tag{3.17}
\]
By (3.15) and the definition of $u_k$ in (2.11), we easily see from the definition of $F$ that

\[
\|F(u(t))\|^2 \le Ce^{-\delta t}, \quad t \ge 0, \ \mathbb{P}\text{-a.s.}
\tag{3.18}
\]
Applying Itô's formula in (3.17) to the function $\langle Qy, y\rangle$, we get

\[
d\|Q^{\frac12}y_s(t)\|^2 = \left[2\langle Qy_s, A_s y_s\rangle + 2\langle Qy_s, F(u(t))\rangle + e^{-2\delta t}\langle QG, G\rangle\right]dt + 2e^{-\delta t}\langle Qy_s, G\rangle\,dW
\tag{3.19}
\]

(making use of (3.16))

\[
= \left[-2\rho\langle Qy_s, y_s\rangle - \|y_s\|^2 + 2\langle Qy_s, F(u(t))\rangle + e^{-2\delta t}\langle QG, G\rangle\right]dt + 2e^{-\delta t}\langle Qy_s, G\rangle\,dW,
\]

or, equivalently,

\[
\begin{aligned}
\|Q^{\frac12}y_s(t)\|^2 = \|Q^{\frac12}y_s(p)\|^2
&- \int_p^t\left(2\rho\|Q^{\frac12}y_s\|^2 + \|y_s\|^2\right)d\tau + 2\int_p^t\langle Q^{\frac12}y_s, Q^{\frac12}F(u(\tau))\rangle\,d\tau\\
&+ \int_p^t e^{-2\delta\tau}\langle QG, G\rangle\,d\tau + 2\int_p^t e^{-\delta\tau}\langle Q^{\frac12}y_s, Q^{\frac12}G\rangle\,dW, \quad \forall\, 0 \le p \le t.
\end{aligned}
\tag{3.20}
\]
It follows from (3.19) that

\[
d\left(e^{2\rho t}\|Q^{\frac12}y_s\|^2\right) = \left[-e^{2\rho t}\|y_s\|^2 + 2e^{2\rho t}\langle Qy_s, F(u(t))\rangle + e^{2(\rho-\delta)t}\langle QG, G\rangle\right]dt + 2e^{(2\rho-\delta)t}\langle Qy_s, G\rangle\,dW,
\tag{3.21}
\]

or, equivalently,

\[
e^{2\rho t}\|Q^{\frac12}y_s\|^2 = \|Q^{\frac12}y_s^o\|^2 + \int_0^t\left[-e^{2\rho\tau}\|y_s\|^2 + 2e^{2\rho\tau}\langle Qy_s, F(u(\tau))\rangle + e^{2(\rho-\delta)\tau}\langle QG, G\rangle\right]d\tau + 2\int_0^t e^{(2\rho-\delta)\tau}\langle Qy_s, G\rangle\,dW.
\tag{3.22}
\]
First, we show that

\[
\mathbb{E}\left(\|Q^{\frac12}y_s(t)\|^2\right) \le Ce^{-\rho t} \quad\text{and}\quad \mathbb{E}\left(\|y_s(t)\|^2\right) \le Ce^{-\rho t}, \quad \forall t \ge 0,
\tag{3.23}
\]

for some positive constant $C$. To this end, taking the expectation in (3.22) and recalling that $2\rho - \delta < 0$, we deduce that

\[
\mathbb{E}\left(e^{2\rho t}\|Q^{\frac12}y_s\|^2\right) \le C + 2\,\mathbb{E}\left\{\int_0^t e^{2\rho\tau}\langle Q^{\frac12}y_s, Q^{\frac12}F(u(\tau))\rangle\,d\tau\right\}, \quad \forall t \ge 0,
\tag{3.24}
\]

where $C = \|Q^{\frac12}y_s^o\|^2 + \frac{1}{2(\delta-\rho)}\|Q^{\frac12}G\|^2$. Using the Schwarz inequality, the stochastic Fubini theorem and estimate (3.18), we get
\[
\begin{aligned}
\mathbb{E}\left(e^{2\rho t}\|Q^{\frac12}y_s\|^2\right)
&\le C + 2\int_0^t e^{2\rho\tau}\,\mathbb{E}\left(\|Q^{\frac12}y_s\|\,\|Q^{\frac12}F(u(\tau))\|\right)d\tau\\
&\le C + \rho\int_0^t \mathbb{E}\left(e^{2\rho\tau}\|Q^{\frac12}y_s\|^2\right)d\tau + C\int_0^t e^{(2\rho-\delta)\tau}\,d\tau\\
&\quad\text{(recall that } 2\rho - \delta < 0\text{)}\\
&\le C + \int_0^t \rho\,\mathbb{E}\left(e^{2\rho\tau}\|Q^{\frac12}y_s\|^2\right)d\tau.
\end{aligned}
\]
Via the Gronwall inequality and the fact that $Q^{\frac12}$ is a symmetric positive definite operator, (3.23) follows immediately.

Next, from (3.20) we have

\[
\begin{aligned}
\mathbb{P}\left(\sup_{t\in[p,p+1]}\|Q^{\frac12}y_s(t)\|^2 \ge \epsilon_p\right)
&\le \mathbb{P}\left(\|Q^{\frac12}y_s(p)\|^2 \ge \frac{\epsilon_p}{5}\right)
+ \mathbb{P}\left(\int_p^{p+1}\left(2\rho\|Q^{\frac12}y_s\|^2 + \|y_s\|^2\right)d\tau \ge \frac{\epsilon_p}{5}\right)\\
&\quad + \mathbb{P}\left(2\sup_{t\in[p,p+1]}\left|\int_p^t\langle Q^{\frac12}y_s, Q^{\frac12}F(u(\tau))\rangle\,d\tau\right| \ge \frac{\epsilon_p}{5}\right)\\
&\quad + \mathbb{P}\left(\sup_{t\in[p,p+1]}\left|\int_p^t e^{-2\delta\tau}\langle QG, G\rangle\,d\tau\right| \ge \frac{\epsilon_p}{5}\right)\\
&\quad + \mathbb{P}\left(\sup_{t\in[p,p+1]}\left|\int_p^t e^{-\delta\tau}\langle Q^{\frac12}y_s, Q^{\frac12}G\rangle\,dW\right| \ge \frac{\epsilon_p}{5}\right)
\end{aligned}
\]
p
p
+
10
p
5 +
p 5 +
p
p+1 ! 1 E 2ρQ 2 ys 2 + ys 2 dτ p
p+1
+$ 1 %+! 1 + + E + Q 2 ys , Q 2 F (u(τ )) + dτ
p p+1 E e−2δτ | QG, G | dτ p p+1 , +$ 1 %+2 1 + −2δτ + 2 2 E e + Q ys , Q G + dτ p
(making use of the estimates (3.18) and (3.23)) ≤ Ce−ρp
1 ,
p
for some p > 0. Taking p = e− 2 ρp , we get from above that 1
\[
\mathbb{P}\left(\sup_{t\in[p,p+1]}\|Q^{\frac12}y_s(t)\|^2 \ge e^{-\frac12\rho p}\right) \le Ce^{-\frac12\rho p}, \quad \forall p \in \mathbb{N}^*.
\tag{3.25}
\]

The Borel–Cantelli lemma now implies that there is $p(\omega)$ such that, if $p > p(\omega)$, then

\[
\sup_{t\in[p,p+1]}\|Q^{\frac12}y_s(t)\|^2 \le Ce^{-\frac12\rho p},
\]

which implies that

\[
\lim_{t\to\infty} e^{\frac12\rho t}\|y_s(t)\|^2 < \infty, \quad \mathbb{P}\text{-a.s.}
\tag{3.26}
\]
Recalling that $y = y_u + y_s$ and invoking (3.15) and (3.26), we are led to the desired conclusion of the theorem, thereby completing the proof. $\Box$

Theorem 3.1 provides a global exponential stability result for the linearization of equation (1.2) under the action of the feedback (2.10). So, one may expect that this leads to local exponential stability of the nonlinear equation (1.2) under the action of the same feedback (2.10). This is indeed the case. More exactly, we have the following result, whose proof will not be given here since, because of the similarities, it can easily be deduced from the proof of [2, Theorem 4.1].

Theorem 3.2. Under assumptions (i) and (ii), the solution to the closed-loop equation

\[
\begin{cases}
\partial_t y(t,x) = y''(t,x) + f(x, y(t,x)+z_e(x)) - f(x, z_e(x)), & t>0,\ x\in(0,\pi),\\[1mm]
y'(t,0) = \left\langle T\mathbf{A}\begin{pmatrix} \langle y(t), \phi_1\rangle\\ \langle y(t), \phi_2\rangle\\ \ldots\\ \langle y(t), \phi_N\rangle \end{pmatrix},\ \begin{pmatrix}1\\1\\ \ldots\\1\end{pmatrix}\right\rangle_N + e^{-\delta t}\dot W(t), \quad y'(t,\pi) = 0, & t\ge 0,\\[1mm]
y(0,x) = y^o(x), & x\in(0,\pi),
\end{cases}
\tag{3.27}
\]

satisfies

\[
\lim_{t\to\infty} e^{\frac{\rho}{2}t}\|y(t)\|^2 < \infty, \quad \mathbb{P}\text{-a.s.},
\]

provided that $\|y^o\| \le \theta$ for some $\theta > 0$ sufficiently small. Here $T$ is introduced in relation (2.6), while $\mathbf{A}$ is introduced in relation (2.7); $\phi_1, \ldots, \phi_N$ are the first $N$ eigenfunctions of the operator $-A$ introduced in (2.1).

To end this section, we note that, going back to the initial variable $z$, Theorem 3.2 immediately implies Theorem 2.1.

4. Conclusions

We discussed here the problem of Neumann boundary stabilization of a semilinear heat equation in the case when the control is perturbed by a noise. The designed feedback is easily implementable in practice due to its simple form.

Of course, another interesting case would be that of a Dirichlet boundary controller perturbed by noise. The main difficulties that appear in that case are related to the fact that the solution is not $L^2$-valued, unlike in the case of Neumann boundary conditions. More precisely, the solution lies in a negative Sobolev space $H^\alpha$, $\alpha < -\frac14$. The reason is that the smoothing properties of the heat equation are not strong enough
to regularize a rough term such as white noise. However, one may think of reconsidering the problem in the new framework proposed in [8], namely in weighted $L^2$-spaces. The problem is that, in order to apply the eigenbasis decomposition method we use here, one should consider, as a basis of the weighted $L^2$-space, the eigenfunction system of the weighted Laplace operator. But, since equation (1.1) is governed by the Laplace operator and not by the weighted Laplace operator, this approach fails. Another idea would be to consider the solution in a space of distributions, as in [4]; this is left for subsequent work.

Besides this, even in the case of Neumann boundary conditions, it would be interesting to consider a space dimension higher than one. The main difficulty in this case is that the solution $D$ of (3.6) should satisfy noise boundary conditions of the type

\[
\frac{\partial}{\partial\nu}D(x) = e^{-\delta t}\dot W(t,x), \quad x\in\Gamma,
\]

where $\Gamma$ is a part of the boundary of the domain in which the equation is considered, and $\nu$ is the outward unit normal to it. In order to overcome this, one may rely on the existing results in [5], after imposing some additional conditions, to define the solution $D$ to (3.6); then the present algorithm may be applied. However, this too is left for subsequent work.

Acknowledgments

The author would like to gratefully acknowledge the helpful comments of the anonymous referees. This paper was supported by an Alexander von Humboldt Foundation post-doctoral fellowship.

References

[1] V. Barbu, Stabilization of Navier–Stokes Flows, Springer-Verlag, New York, 2010.
[2] V. Barbu, Boundary stabilization of equilibrium solutions to parabolic equations, IEEE Trans. Automat. Control 58 (9) (2013) 2416–2420.
[3] N. Bourbaki, Théories spectrales, Springer, 2007.
[4] Z. Brzezniak, B. Goldys, S. Peszat, F. Russo, Second order PDEs with Dirichlet white noise boundary conditions, J. Evol. Equ. 15 (1) (2013).
[5] G. Da Prato, J. Zabczyk, Evolution equations with white-noise boundary conditions, Stoch. Stoch. Rep. 42 (1993) 167–182.
[6] G. Da Prato, J. Zabczyk, Ergodicity for Infinite-Dimensional Systems, London Mathematical Society Lecture Notes Series, vol. 229, Cambridge University Press, 1996.
[7] A. Debussche, M. Fuhrman, G. Tessitore, Optimal control of a stochastic heat equation with boundary-noise and boundary-control, ESAIM Control Optim. Calc. Var. 13 (2007) 178–205.
[8] G. Fabbri, B. Goldys, An LQ problem for the heat equation on the halfline with Dirichlet boundary control and noise, SIAM J. Control Optim. 48 (3) (2009) 1473–1488.
[9] G. Guatteri, F. Masiero, On the existence of optimal controls for SPDEs with boundary noise and boundary control, SIAM J. Control Optim. 51 (13) (2013) 1909–1939.
[10] U.G. Haussmann, Asymptotic stability of the linear Ito equation in infinite dimensions, J. Math. Anal. Appl. 65 (1978) 219–235.
[11] A. Ichikawa, Stability of parabolic equations with boundary and pointwise noise, in: Stochastic Differential Systems Filtering and Control, in: Lecture Notes in Control and Information Sciences, vol. 69, Springer-Verlag, Berlin, 2005.
[12] I. Lasiecka, R. Triggiani, Differential and Algebraic Riccati Equations with Application to Boundary/Point Control Problems: Continuous Theory and Approximation Theory, Lecture Notes in Control and Information Sciences, vol. 164, Springer-Verlag, Berlin, 1991.
[13] I. Lasiecka, R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories, Cambridge Univ. Press, Cambridge, U.K., 2000.
[14] R.S. Liptser, A.N. Shiryayev, Theory of Martingales, Kluwer, Dordrecht, 1989.
[15] I. Munteanu, Boundary stabilization of the phase field system by finite-dimensional feedback controllers, J. Math. Anal. Appl. 412 (2014) 964–975.
[16] I. Munteanu, Boundary stabilization of the Navier–Stokes equation with fading memory, Internat. J. Control 88 (3) (2015) 531–542.
[17] I. Munteanu, Stabilization of semilinear heat equations, with fading memory, by boundary feedbacks, J. Differential Equations 259 (2015) 454–472.
[18] I. Munteanu, Stabilisation of parabolic semilinear equations, Internat. J. Control (2016), http://dx.doi.org/10.1080/00207179.2016.1200747.
[19] I. Munteanu, Boundary stabilization of a 2-D periodic MHD channel flow, by proportional feedbacks, ESAIM Control Optim. Calc. Var. (2016), http://dx.doi.org/10.1051/cocv/2016025.
[20] I. Munteanu, Stabilization of a 3-D periodic channel flow by explicit normal boundary feedbacks, J. Dyn. Control Syst. (2016), http://dx.doi.org/10.1007/s10883-016-9332-9.
[21] R. Triggiani, Boundary feedback stabilization of parabolic equations, Appl. Math. Optim. 6 (1980) 201–220.
[22] J. Zhou, Optimal control of a stochastic delay heat equation with boundary-noise and boundary-control, Internat. J. Control 87 (9) (2014).