Accepted Manuscript

Optimal control of measure dynamics
Oleg A. Kuzenkov, Alexey V. Novozhenin

PII: S1007-5704(14)00408-0
DOI: http://dx.doi.org/10.1016/j.cnsns.2014.08.024
Reference: CNSNS 3324
To appear in: Communications in Nonlinear Science and Numerical Simulation

Please cite this article as: Kuzenkov, O.A., Novozhenin, A.V., Optimal control of measure dynamics, Communications in Nonlinear Science and Numerical Simulation (2014), doi: http://dx.doi.org/10.1016/j.cnsns.2014.08.024
Optimal control of measure dynamics

Oleg A. Kuzenkov (a), Alexey V. Novozhenin (b)
Lobachevsky State University of Nizhni Novgorod, Prospekt Gagarina 23, Nizhni Novgorod, Russia, 603950
(a) [email protected], (b) [email protected]
Abstract

The optimal control problem is considered in the space of measures. General principles of its solution are presented. On the basis of these principles, feedback control is suggested for satisfying a state constraint in the form of an equality. The principles are developed for specific problems with a state constraint; in particular, optimal control problems of heat conductivity with state constraints in the form of equalities are considered. Feedback is constructed by using bilinear control and an integral transformation. A numerical solution is proposed on the basis of the method of moments.
Keywords: optimal control, measure, heat conductivity.
1. Introduction

Optimal control of parabolic systems occurs in many application fields, such as chemical reaction simulation and the biomedical sciences [1]. In particular, the problem of optimal control of heat conductivity processes remains relevant [2-6]. The problem becomes more difficult with state constraints. If the state functions are considered in the set of continuous functions, the Lagrange multipliers associated with these problems are known to be regular Borel measures. Therefore, when setting up the optimality conditions, measures appear on the right-hand side of the associated adjoint state equations. This causes difficulties in the numerical approximation of the problems. The solution approach to state-constrained optimal control problems through Lagrange multipliers associated with the state constraint leads to technical difficulties [7-10]. Moreover, the state constraints are often violated in the case of open-loop control due to inaccurate approximations and outside disturbances. It is possible to use feedback control to overcome these difficulties. Feedback control has certain advantages: it is steadier with respect to disturbances. Feedback is successfully constructed on the basis of bilinear or quadratic control and integral transformations [11-14].

In optimal control theory there is a fruitful approach of considering not only a particular problem but general classes of problems in an abstract mathematical form. For example, optimal control is very often considered in a Banach space [7, 15]. In this article the optimal control problem is considered in the more concrete Banach space of measures. The study of measure dynamics has continued since the appearance of the basic works of L. Schwartz [16], who introduced the notions of the generalized function, or distribution, and considered differential equations in distributions or measures. These equations are often used for the description of various natural processes, for example, stochastic processes and processes of mathematical physics or quantum mechanics. Measure-valued right-hand sides or boundary conditions in partial differential equations have attracted recent interest due to their role in the adjoint equation for optimal control problems with pointwise state constraints [17-22]. Impulse control and point control are often described by measures. The dynamic equations of measures are considered as adjoint equations for the optimal control problem in non-reflexive Banach spaces [6]; the dynamics of a Lyapunov measure is used for the optimal stabilization of nonlinear systems [23].

In this article the general principles of solving the optimal control problem for measure dynamics are presented. On the basis of these principles, feedback control is suggested for satisfying the state constraint in the form of an equality. The principles are developed for specific problems with a state constraint. In particular, optimal control problems of heat conductivity with state constraints as equalities are considered. Feedback is constructed by using bilinear control and an integral transformation. A numerical solution is proposed on the basis of the method of moments [24-26].
2. Motivation example: controlled process of heat conductivity for a rod

Consider a classical optimal control problem for the process of heat conductivity with a state constraint. Let there be a heat-insulated rod; z(x,t) is the heat distribution of the rod at time t, 0 < x < l. The initial heat distribution is a smooth function φ that satisfies the requirements

∂φ/∂x(0) = ∂φ/∂x(l) = 0, φ(0) = 1. (1)

The controlled process of heat conductivity is described by the differential equation with boundary conditions

∂z/∂t = Δz + w(x,t), ∂z/∂x(0,t) = ∂z/∂x(l,t) = 0. (2)

Here Δ is the Laplace operator and w(x,t) is the control. The control time T is fixed.
The process is required to be controlled to minimize the deviation of the final distribution z(x,T) from a desirable distribution h(x):

∫_0^l (h(x) − z(x,T))² dx → min.
Moreover, the state constraint z(0,t) = 1 is given at any time t. It means that the temperature on the left end of the rod is equal to 1 at any time. This is a classical problem of optimal control theory that has been considered since the 1960s. In particular, A.G. Butkovsky suggested the method of moments [24], based on the decomposition of controlled systems in Fourier series. In our case the equations for the Fourier coefficients ζ_n of the function z and the state constraint are presented as

ζ̇_n = −λ_n ζ_n + η_n, n = 1, 2, ...; ζ_n(0) = φ_n, Σ_{n=1}^∞ ζ_n(t) = 1, (3)

where λ_n are eigenvalues, and η_n and φ_n are the Fourier coefficients of the functions w and φ. The functions v_n = cos((n−1)πx/l), n = 1, 2, ..., are eigenfunctions.

The method of moments assumes transition to a finite-dimensional optimal control problem with the state constraint. However, the analytical solution of this problem meets considerable difficulties, because the Hamilton-Pontryagin function contains undefined measures. Moreover, the state constraint is not maintained under small disturbances. In this work another control method is suggested: feedback control. The control is chosen in the following form
w(x,t) = u∗z − z·(Δz + u∗z)|_{x=0}, u∗z = ∫_0^l u(y,t) z(x−y,t) dy,

where ∗ is the convolution operator and u(x,t) is a tuning of the feedback. The function u is an element of an admissible set; for example, u can be a piecewise continuous function of time that satisfies the inequality

‖u‖² = ∫_0^l u²(x,t) dx ≤ c²

at any time t. The control is the sum of a linear convolution operator and a nonlinear term. It is a simple form of control, easy to implement in practice. The linear convolution operator is a particular case of the linear control operators that are widely used in modern control systems [12]. The linear convolution can be obtained by using the Fourier transform of the rod temperature and the tuning function. The second term is defined by the value of some function at one point; this value can be obtained by installing a sensor at this point. In this case the controlled system has the form

∂z/∂t = Δz + u∗z − z·(Δz + u∗z)|_{x=0}, ∂z/∂x|_{x=0} = ∂z/∂x|_{x=l} = 0, z(x,0) = φ(x). (4)
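The constraint-preservation property of this feedback can be sanity-checked numerically in the truncated Fourier model (3). The following sketch is our own illustration (the truncation order, eigenvalue formula and constant tunings u_n are assumptions, not the paper's experiment): since every eigenfunction equals 1 at x = 0, the feedback makes the coefficients obey ζ̇_n = (−λ_n + b_n u_n)ζ_n − ζ_n Σ_m (−λ_m + b_m u_m)ζ_m, and the sum Σ ζ_n(t) = z(0,t) should stay equal to 1.

```python
import numpy as np

# Truncated Fourier model of system (4): with feedback, each coefficient obeys
#   d zeta_n/dt = a_n zeta_n - zeta_n * sum_m a_m zeta_m,  a_n = -lam_n + b_n u_n,
# so d/dt (sum zeta) = (1 - sum zeta) * sum_m a_m zeta_m and the constraint
# sum_n zeta_n = z(0, t) = 1 is preserved.  All parameters are illustrative.
N, l = 8, 10.0
lam = (np.arange(N) * np.pi / l) ** 2      # lam_n = ((n-1) pi / l)^2
b = np.full(N, l / 2.0); b[0] = l          # convolution weights, as in (7)
u = 0.05 * np.ones(N)                      # a constant admissible tuning
zeta = np.zeros(N)
zeta[0] = 0.5; zeta[1] = 0.5               # initial coefficients, sum = 1

a = -lam + b * u
dt = 1e-4
for _ in range(20000):                     # explicit Euler up to t = 2
    zeta = zeta + dt * (a * zeta - zeta * np.dot(a, zeta))

print(zeta.sum())                          # stays equal to 1
```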
It can be shown that the state constraint for this system is automatically satisfied at any time. Indeed, as follows from (4),

∂z(0,t)/∂t = (Δz + u∗z)|_{x=0} − z(0,t)·(Δz + u∗z)|_{x=0},

or

∂z(0,t)/∂t = (Δz + u∗z)|_{x=0} (1 − z(0,t)).

Integrating this equation, we get

z(0,t) = 1 − c exp(−∫_0^t (Δz + u∗z)|_{x=0} dτ).

Since z(0,0) = φ(0) = 1, it follows that c = 0 and z(0,t) = 1. Thus the optimal control problem is reduced to finding the optimal tuning of the feedback. It is possible to prove that the solution of initial-boundary value problem (4) is associated with the solution of the auxiliary problem
∂v/∂t = Δv + u∗v, ∂v/∂x|_{x=0} = ∂v/∂x|_{x=l} = 0, v(x,0) = φ(x) (5)

in the form

z(x,t) = v(x,t)/v(0,t).

Really,

∂z/∂t = [∂v/∂t · v(0,t) − v · ∂v(0,t)/∂t] / v²(0,t) = (Δv + u∗v)/v(0,t) − (v/v(0,t)) · (Δv + u∗v)|_{x=0}/v(0,t) = Δz + u∗z − z·(Δz + u∗z)|_{x=0},

∂z/∂x|_{x=0,x=l} = (1/v(0,t)) · ∂v/∂x|_{x=0,x=l} = 0,

z(x,0) = v(x,0)/v(0,0) = φ(x)/φ(0) = φ(x).
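The substitution z = v/v(0,t) can also be checked at the level of Fourier coefficients. A sketch with illustrative parameters of our own choosing (eigenvalues, tunings, truncation): the linear auxiliary system decouples and is solved explicitly, and since all eigenfunctions equal 1 at x = 0 we have v(0,t) = Σ_n ξ_n(t); normalizing the explicit solution should reproduce the Euler-integrated nonlinear feedback dynamics.

```python
import numpy as np

# Check of z = v / v(0,t) in Fourier coordinates: for the auxiliary linear
# system xi_n' = a_n xi_n the solution is explicit, and v(0,t) = sum_n xi_n(t).
# Normalizing xi should match the nonlinear feedback dynamics
#   zeta_n' = a_n zeta_n - zeta_n * sum_m a_m zeta_m.
N, l, T = 8, 10.0, 1.0
lam = (np.arange(N) * np.pi / l) ** 2
b = np.full(N, l / 2.0); b[0] = l
u = 0.05 * np.ones(N)
a = -lam + b * u

phi = np.zeros(N); phi[0] = 0.5; phi[1] = 0.5   # initial data, sum = 1

z_lin = phi * np.exp(a * T)                      # xi_n(T), closed form
z_lin = z_lin / z_lin.sum()                      # coefficients of v(., T)/v(0, T)

zeta, dt = phi.copy(), 1e-4                      # nonlinear system, explicit Euler
for _ in range(int(T / dt)):
    zeta = zeta + dt * (a * zeta - zeta * np.dot(a, zeta))

print(np.max(np.abs(zeta - z_lin)))              # small discretization error
```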
This fact gives a chance to solve an optimal control problem for the linear auxiliary system without a state constraint: to find an admissible control u to minimize the functional

∫_0^l (h(x) − v(x,T)/v(0,T))² dx → min.

We can use the Pontryagin maximum principle to solve this optimization problem [15]. In accordance with it, the adjoint function ψ satisfies the equations

∂ψ/∂t = −Δψ − u∗ψ, ∂ψ/∂x|_{x=0} = ∂ψ/∂x|_{x=l} = 0. (6)

The Hamilton-Pontryagin function H has the form

H = ∫_0^l (ψ∗v) u dx.

Since

∂(v∗ψ)/∂t = ∂v/∂t ∗ ψ + v ∗ ∂ψ/∂t, (u∗ψ)∗v = ψ∗(u∗v), (Δψ)∗v = ψ∗(Δv),

it can be seen that the function ψ∗v does not depend on t. From the Pontryagin maximum principle it follows that the optimal function u does not depend on time.
Next, the auxiliary problem can be solved numerically, for example, by the method of moments, which assumes transition to a finite-dimensional optimal control problem based on the decomposition of the controlled system in a Fourier series. Here the equations for the Fourier coefficients ξ_n of the function v are presented as

ξ̇_1 = −λ_1 ξ_1 + l u_1 ξ_1, ξ̇_n = −λ_n ξ_n + (l/2) u_n ξ_n, n = 2, 3, ...; ξ_n(0) = φ_n, (7)

where u_n are the constant Fourier coefficients of the control function u. The problem is reduced to finding the constants (u_1, ..., u_N) minimizing the function

Σ_{n=1}^N (h_n − ξ_n(T)/Σ_{m=1}^N ξ_m(T))²

under the condition

l u_1² + (l/2) Σ_{n=2}^N u_n² ≤ c²,

where h_n are the Fourier coefficients of the desired distribution h. In the work [14] it is proved that finite-dimensional approximations of the auxiliary problem converge to the exact solution of this problem. The results of the numerical experiment are given in Fig. 1. Here h = cos(3πx/10), l = 10, T = 4, c = 1.
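This finite-dimensional problem is easy to attack directly, because the decoupled equations (7) have the closed-form solution ξ_n(T) = φ_n exp((−λ_n + b_n u_n) T). The sketch below is our own illustration, not the authors' code: the initial data, the projected-gradient solver and its step size are assumptions. Note that a mode with φ_n = 0 cannot be excited by this multiplicative control, so the sketch spreads initial mass over all modes.

```python
import numpy as np

# Finite-dimensional moment problem for the rod: minimize the terminal misfit
# over constant tunings (u_1, ..., u_N) subject to the weighted norm bound
#   l*u_1^2 + (l/2)*sum_{n>=2} u_n^2 <= c^2.
N, l, T, c = 6, 10.0, 4.0, 1.0
lam = (np.arange(N) * np.pi / l) ** 2
wgt = np.full(N, l / 2.0); wgt[0] = l
h = np.zeros(N); h[3] = 1.0                       # target: cos(3 pi x / l)
phi = np.array([0.5, 0.1, 0.1, 0.1, 0.1, 0.1])    # initial coefficients, sum = 1

def misfit(u):
    xi_T = phi * np.exp((-lam + wgt * u) * T)      # closed-form xi_n(T)
    return np.sum((h - xi_T / xi_T.sum()) ** 2)

def project(u):                                    # onto the ellipsoidal constraint
    s = np.sqrt(np.dot(wgt, u * u))
    return u if s <= c else u * (c / s)

u = np.zeros(N)
best_u, best_f = u, misfit(u)
for _ in range(2000):                              # normalized projected gradient
    g = np.array([(misfit(u + 1e-6 * e) - misfit(u)) / 1e-6 for e in np.eye(N)])
    u = project(u - 0.02 * g / (np.linalg.norm(g) + 1e-12))
    f = misfit(u)
    if f < best_f:
        best_f, best_u = f, u

print(misfit(np.zeros(N)), best_f)                 # misfit drops under control
```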
Figure 1. 3D graph of heat distributions of the rod with respect to time for h = cos(2πx/10).
It can be seen that the final distribution coincides almost exactly with the desired distribution. Moreover, the obtained solution meets the state constraint: the temperature on the left end of the rod is equal to 1. The graph of the solution with other initial conditions is presented in Fig. 2.
Figure 2. 3D graph of heat distributions for h = cos(πx/10), T = 10, c = 0.7, l = 10.

The graph of the solution in the case of optimal open-loop control w is shown in Fig. 3 for comparison. Here the initial condition is the same as in Fig. 2,

‖w‖² = ∫_0^l w²(x,t) dx ≤ b², b = 19.74.

The optimal control problem was numerically solved by the method of moments.
Figure 3. 3D graph of heat distributions for optimal open-loop control w.
A comparison of the temperatures on the left end of the rod for optimal feedback control and optimal control without feedback is given in Fig. 4.
Figure 4. Comparison of the temperature on the left end of the rod.
The state constraint is seen to be violated due to inaccurate approximations in the case of control without feedback. Feedback control has the advantage of being steadier with respect to inaccurate approximations.

Using the example of this particular problem we can observe the general principles of the solution:
1. The use of a special form of feedback control for satisfying the state constraint.
2. The transition to a linear differential equation.
3. The use of the Pontryagin maximum principle to establish the optimal feedback tuning.

In the most general form these principles can be presented for the equation of measure dynamics. What is the relation between the considered problem and measure dynamics? To answer, consider the set X = {1, 2, ...}, Σ = 2^X. Let the Fourier coefficient ζ_n be the measure of the element with number n. Then the measure of a set A ∈ Σ is the sum μ(A) = Σ_{j∈A} ζ_j, and the measure of the set X is the sum of the series μ(X) = Σ_{n=1}^∞ ζ_n = 1. So we have a normed measure that changes over time. The equations (3) for the Fourier coefficients determine the process of measure dynamics. The considered problem was to find the optimal control of the normed measure dynamics.
3. General principles of optimal control in measure space

Let X be a topological space, Σ a σ-algebra of sets from X, and M(X,Σ) the Banach space of finite measures on X; μ ∈ M(X,Σ), ξ ∈ M(X,Σ); μ[t] is a function of time with values in M(X,Σ); μ̇ is the derivative of the function μ[t] with respect to t; μ[t](A) is the value of the measure μ at time t on the set A ∈ Σ; F is an operator defined on a set D[F] dense in M(X,Σ) and acting from D[F] into M(X,Σ); F[μ](A) is the value of the operator F on the set A; ν is a normed measure, ν ∈ M(X,Σ), ν(X) = 1. Consider the Cauchy problem

μ̇ = F[μ], (8)
μ[0] = ν. (9)

The first principle can be stated in the form of the following theorem.

Theorem 1. For the solution of the Cauchy problem to be a normed measure at any time, it is necessary and sufficient that differential equation (8) be representable on the set of normed measures in the form

μ̇ = Φ[μ] − μΦ[μ](X), (10)

where Φ is a homogeneous operator.

Proof. Necessity. Let the solution μ[t] of Cauchy problem (8), (9) be a normed measure at any time under any normed initial measure ν. Then μ[t](X) = 1 and μ̇[t](X) = F[μ](X) = 0, so F[ν](X) = 0 for any normed measure ν. Consider the operator

Φ[μ] = μ(X) F[μ/μ(X)].

It can be seen that Φ is a homogeneous operator, Φ[μ] = F[μ] for any normed measure μ, and Φ[ν](X) = F[ν](X) = 0 for any normed ν. Then F[μ] = Φ[μ] = Φ[μ] − μΦ[μ](X) for any normed measure μ.

Sufficiency. As follows from (10),

dμ[t](X)/dt = Φ[μ](X) − μ[t](X)Φ[μ](X) = Φ[μ](X)(1 − μ[t](X)).

Integrating this equation, we get

μ[t](X) = 1 − c exp(−∫_0^t Φ[μ](X) dτ).

Since μ[0](X) = ν(X) = 1, we have c = 0 and μ[t](X) = 1. Thus μ[t] is a normed measure.

Theorem 1 has a corollary for controlled processes.

Corollary. Consider a controlled process of measure dynamics μ̇ = F[μ,U], μ[0] = ν_0, ν_0(X) = 1, with the state constraint

μ[t](X) = 1, (11)

where U is a control. For the state constraint to be satisfied, it is necessary and sufficient that the operator F have the form F[μ,U] = Φ[μ,U] − μΦ[μ,U](X), where Φ is a homogeneous operator with respect to μ.
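Theorem 1 is easy to test numerically in a finite-dimensional setting. The sketch below is our own illustration, not taken from the paper: on X = {1,...,n} a measure is a vector, and we deliberately choose a nonlinear operator Φ that is positively homogeneous of degree one; the normalized dynamics (10) should then conserve total mass.

```python
import numpy as np

# Finite-dimensional check of Theorem 1.  Phi below is nonlinear but positively
# homogeneous of degree one: Phi[t*mu] = t * Phi[mu] for t > 0.  Integrating
#   d mu/dt = Phi[mu] - mu * Phi[mu](X)
# by explicit Euler, the total mass mu(X) = sum_i mu_i must stay equal to 1.
n = 5
rng = np.random.default_rng(0)
mu = rng.random(n)
mu /= mu.sum()                         # normed initial measure nu

def Phi(m):                            # homogeneous, nonlinear example operator
    return np.sqrt(m * np.roll(m, -1))

dt = 1e-4
for _ in range(50000):                 # integrate up to t = 5
    p = Phi(mu)
    mu = mu + dt * (p - mu * p.sum())

print(mu.sum())                        # total mass stays 1
```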
For example, Φ may be a linear operator with respect to μ: Φ[μ,U] = Lμ + Uμ. Here L is a linear bounded operator or an unbounded closed operator with everywhere dense domain, and U is a control, a linear bounded operator. In the considered example L was the Laplace operator and U was the convolution operator. This control method is a generalization of bilinear or quadratic control [12, 13].
It is possible to present different specific state constraints in the form of equality (11). Therefore, the first theorem is very useful in optimal control theory. As a rule, general principles of this theory are stated in an abstract Banach space. However, the first theorem has no generalization in an abstract Banach space; this property is stated in the more concrete Banach space of measures. Thus the use of measure dynamics is a fruitful approach in optimal control. Consider an auxiliary Cauchy problem

ξ̇ = Φ[ξ], (12)
ξ[0] = ν. (13)

The second principle can be stated in the form of the following theorem.

Theorem 2. If the solution of the auxiliary Cauchy problem satisfies the inequality ξ(X) ≠ 0, then the solutions of problems (9)-(10) and (12)-(13) are associated in the form

μ = ξ/ξ(X).

Proof. Let ξ[t] be the solution of the auxiliary Cauchy problem (12)-(13), ξ[t](X) ≠ 0, and μ = ξ[t]/ξ[t](X). Then

μ[0] = ξ[0]/ξ[0](X) = ν/ν(X) = ν,

μ̇ = [ξ̇[t] ξ[t](X) − ξ[t] ξ̇[t](X)] / ξ²[t](X) = Φ[ξ]/ξ[t](X) − ξ[t]Φ[ξ](X)/ξ²[t](X) = Φ[μ] − μΦ[μ](X).
It can be seen that the function μ is the solution of Cauchy problem (9)-(10).

Consider the differential equation with operator control in measures

μ̇ = Lμ + Uμ − μ(Lμ(X) + Uμ(X)). (14)

Here U is a control, a measurable function of time whose values are bounded linear operators at any time; L is a linear bounded operator or an unbounded closed operator with everywhere dense domain. In the case of an unbounded operator L it is assumed that the resolvent R(λ,L) of the operator L satisfies the condition ‖[R(λ,L)]^n‖ ≤ K λ^{−n} for λ > 0, n = 1, 2, ..., where the values of λ belong to the resolvent set of the operator L and K is a constant. Moreover, it is assumed that the same is true for the adjoint operator L*. From the first principle it follows that the solution of this equation is a normed measure; it satisfies the state constraint (11). From the second principle it follows that it is possible to pass to the simpler linear equation

ξ̇ = Lξ + Uξ. (15)
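Theorem 2 can be checked in the same finite-dimensional spirit (again a sketch with an operator and data of our own choosing): integrate the auxiliary equation ξ̇ = Φ[ξ] and the normalized equation (10) separately, with the same homogeneous nonlinear Φ, and compare ξ/ξ(X) with μ.

```python
import numpy as np

# Check of Theorem 2 on X = {1,...,n}: integrate the auxiliary equation
# d xi/dt = Phi[xi] and the normalized equation (10) with the same positively
# homogeneous nonlinear Phi and the same normed initial measure; then
# xi / xi(X) and mu should agree up to discretization error.
n, dt, steps = 5, 1e-4, 20000

def Phi(m):                              # positively homogeneous of degree one
    return np.sqrt(m * np.roll(m, -1))

rng = np.random.default_rng(1)
nu = rng.random(n)
nu /= nu.sum()

xi, mu = nu.copy(), nu.copy()
for _ in range(steps):                   # explicit Euler for both systems
    xi = xi + dt * Phi(xi)
    p = Phi(mu)
    mu = mu + dt * (p - mu * p.sum())

print(np.max(np.abs(xi / xi.sum() - mu)))   # small discretization error
```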
Let the operator U satisfy the condition ‖U(t)‖ ≤ c. Let f_1(μ,t), f_2(U,t), f_3(μ) be continuous functionals, and let f_1 and f_3 have continuous weak derivatives (Gateaux derivatives) ∂f_1/∂μ, ∂f_3/∂μ satisfying Lipschitz conditions with respect to μ. The optimal control problem is stated: the process is required to be controlled to minimize the functional

J = ∫_0^T (f_1(μ[t],t) + f_2(U(t),t)) dt + f_3(μ[T]),

where μ[t] is the solution of Cauchy problem (14), (9). The third principle is a necessary condition of optimal control in the form of the Pontryagin maximum principle.

Theorem 3. Let U⁰(t) be an optimal control in this problem, μ⁰[t] the corresponding solution of equations (14), (9), and ξ⁰[t] the corresponding solution of equations (15), (13). Let L* be the adjoint operator for L, and let ψ be the adjoint function satisfying the equations

ψ̇ = −L*ψ − ∂f_1/∂ξ (ξ⁰/ξ⁰(X), t), ψ(T) = ∂f_3/∂ξ (ξ⁰[T]/ξ⁰[T](X)).

Then the Hamilton-Pontryagin function

H_t[U] = ψUμ⁰ + f_2(U,t)

has a minimum at the point U⁰(t) almost everywhere on [0,T]:

H_t[U⁰(t)] = min_{‖U‖≤c} H_t[U].
Proof. As follows from the second principle, it is possible to pass to the equivalent optimal control problem for the auxiliary linear equation: to find an operator function U(t) minimizing the functional

J = ∫_0^T [f_1(ξ/ξ(X), t) + f_2(U(t),t)] dt + f_3(ξ[T]/ξ[T](X)),

where ξ[t] is the solution of auxiliary Cauchy problem (13), (15). The optimal control in this auxiliary problem coincides with the optimal control for the system (9), (14). This problem is a special case of the optimal operator control problem in a Banach space considered in [15]. Let B be a Banach space, z(t) a function of time with values in B, and φ ∈ B. Assume that L is a bounded linear operator; U(t) is a measurable function of time whose values are bounded linear operators acting from B to B, with ‖U(t)‖ ≤ c; g_1(z,t), g_2(U,t), g_3(z) are continuous functionals; g_1 and g_3 have continuous weak derivatives (Gateaux derivatives) ∂g_1/∂z, ∂g_3/∂z satisfying Lipschitz conditions with respect to z. The process

ż = Lz + U(t)z, z(0) = φ (16)

is required to be controlled to minimize the functional

J = ∫_0^T (g_1(z(t),t) + g_2(U(t),t)) dt + g_3(z(T)).

The maximum principle for this problem has been proved [15]: let U⁰(t) be the optimal control in this problem and z⁰[t] the corresponding solution of equation (16). Let L* be the adjoint operator for L and ψ the adjoint function satisfying the equations

ψ̇ = −L*ψ − ∂g_1/∂z (z⁰, t), ψ(T) = ∂g_3/∂z (z⁰[T]).

Then the Hamilton-Pontryagin function H_t[U] = ψUz⁰ + g_2(U,t) has a minimum at the point U⁰(t) almost everywhere on [0,T]: H_t[U⁰(t)] = min_{‖U‖≤c} H_t[U].

If we set B = M(X,Σ), z = ξ, g_1(z,t) = f_1(ξ/ξ(X), t), g_2(U,t) = f_2(U(t),t), g_3 = f_3(ξ[T]/ξ[T](X)), we obtain a necessary optimality condition for the auxiliary problem: let U⁰(t) be an optimal control in this problem and ξ⁰[t] the corresponding solution of equations (13), (15); then ψ satisfies the equations

ψ̇ = −L*ψ − ∂f_1/∂ξ (ξ⁰/ξ⁰(X), t), ψ(T) = ∂f_3/∂ξ (ξ⁰[T]/ξ⁰[T](X)),

and the Hamilton-Pontryagin function H_t[U] = ψUξ⁰ + f_2(U,t) has a minimum at the point U⁰(t) almost everywhere on [0,T]. Hence the maximum principle for the original problem is valid. The theorem is proved similarly in the case of an unbounded operator L.
It is possible to solve various specific problems based on these principles (optimal control for systems on a countable simplex [27], hyperbolic system on a simplex [28], optimal separation of a specified harmonic with state constraints [14, 29], optimal cooling of a section of a solid body [6] and others [30]).
4. Applications of the theory to optimization problems of mathematical physics. Numerical examples

Consider the following problem: the heat conductivity process (1), (2) for the rod is required to be controlled to maximally cool a segment A of the rod,

∫_A z²(x,T) dx → min,

over a fixed time T with the state constraint

∫_0^l z(x,t) dx = 1.

The state constraint means that the energy of the rod is equal to 1 at any time. The function φ(x) satisfies the requirement

∫_0^l φ(x) dx = 1.
Present the controlled process as normed measure dynamics. Let X be the segment [0,l], Σ the σ-algebra of Lebesgue measurable subsets of the segment, and let the measure of a set A ∈ Σ be equal to the Lebesgue integral

μ[t](A) = ∫_A z(x,t) dx.

Then z(x,t) is the measure density, and equations (2) determine the process of the measure dynamics. The state constraint means that μ[t] is a normed measure:

μ[t](X) = ∫_0^l z(x,t) dx = 1.
It is possible to solve this problem on the basis of the three principles. According to the first principle we can use feedback control in the form

w(x,t) = u∗z − z ∫_0^l (u∗z) dx

for satisfying the state constraint. In accordance with the second principle we can pass to the same auxiliary initial-boundary value problem (5) as in the first optimal control problem for the rod. Then we can use the Pontryagin maximum principle for solving the optimal control problem for the auxiliary system. According to it, the adjoint function ψ satisfies the same differential equations (6) as in the first problem. Then from the Pontryagin maximum principle it also follows that the optimal function u does not depend on time. The optimal control problem for the auxiliary system can be numerically solved by the method of moments. The results of the numerical experiment are given in Fig. 5. Here l = 7, T = 7, c = 5, A = [0,1]. The initial distribution is uniform. It can be seen that the temperature on the segment [0,1] converges to 0 over time. Moreover, the obtained solution meets the state constraint: the energy of the rod is equal to 1 at any time.
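The energy constraint can be checked in the cosine basis (a sketch of our own; tuning and truncation are illustrative). Only the first mode carries mass, ∫_0^l z dx = l ζ_1, so the constraint reads l ζ_1 = 1; the feedback subtracts the scalar K = ∫_0^l (u∗z) dx = l² u_1 ζ_1 times z from the dynamics.

```python
import numpy as np

# Sketch check of the energy constraint for the rod: in the cosine basis only
# the first mode carries mass, Int_0^l z dx = l * zeta_1, so the constraint
# reads l * zeta_1 = 1.  The feedback w = u*z - z * Int(u*z)dx subtracts the
# scalar K = Int(u*z)dx = l^2 * u_1 * zeta_1 from every mode's growth rate.
N, l = 6, 7.0
lam = (np.arange(N) * np.pi / l) ** 2
b = np.full(N, l / 2.0); b[0] = l          # convolution weights
u = 0.1 * np.ones(N)                       # illustrative constant tuning
zeta = 0.02 * np.ones(N)
zeta[0] = 1.0 / l                          # initial data with l * zeta_1 = 1

dt = 1e-4
for _ in range(20000):                     # explicit Euler up to t = 2
    K = l * b[0] * u[0] * zeta[0]          # Int (u*z) dx
    zeta = zeta + dt * (-lam * zeta + b * u * zeta - zeta * K)

print(l * zeta[0])                         # energy stays 1
```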
Figure 5. Time history of heat distributions.
Consider an optimal control problem for a round membrane Ω = {x = (x_1, x_2): x_1² + x_2² ≤ 1}:

∂z/∂t = Δz + w(x,t), z(x,0) = φ(x), ∫_Ω φ²(x) dΩ = 1,

∂z/∂n|_{∂Ω} = 0, ∫_Ω (h(x) − z(x,T))² dΩ → min.

Here ∂z/∂n is the derivative with respect to the normal of the boundary ∂Ω. The state constraint is nonlinear:

∫_Ω z² dΩ = 1.
This means that the mean-square value of the heat distribution is constant. In this case the equations for the Fourier coefficients ζ_n of the function z and the state constraint are presented as

ζ̇_n = −λ_n ζ_n + η_n, n = 1, 2, ...; ζ_n(0) = φ_n, Σ_{n=1}^∞ ζ_n² = 1,

where λ_n are eigenvalues, η_n and φ_n are the Fourier coefficients of the functions w and φ, and the system of eigenfunctions is orthonormal. Let X be the set {1, 2, ...}, Σ = 2^X. Let ζ_n² be the measure of the element with number n; then the measure of a set A ∈ Σ is the sum μ(A) = Σ_{j∈A} ζ_j², and the measure of the set X is the sum of the series μ(X) = Σ_{n=1}^∞ ζ_n² = 1.
So we have a probability measure that changes over time. Moreover,

(ζ_n²)˙ = −2λ_n ζ_n² + 2η_n ζ_n, ζ_n²(0) = φ_n².

The equations for the Fourier coefficients determine the process of the measure dynamics. Let the functions η_n have the form

η_n = u_n ζ_n − ζ_n Σ_{i=1}^∞ (−λ_i + u_i) ζ_i², Σ_{i=1}^∞ u_i² ≤ c².

Then the controlled system has the form

(ζ_n²)˙ = −2λ_n ζ_n² + 2u_n ζ_n² − 2ζ_n² Σ_{i=1}^∞ (−λ_i + u_i) ζ_i².

Let χ_n = ζ_n²; then

χ̇_n = −2λ_n χ_n + 2u_n χ_n − 2χ_n Σ_{i=1}^∞ (−λ_i + u_i) χ_i, χ_n(0) = φ_n².
According to the first principle the solution of this system determines a probability measure. The corresponding feedback control in the form

w(x,t) = u∗z − z ∫_Ω z(Δz + u∗z) dΩ, ∫_Ω u² dΩ ≤ c²,

satisfies the state constraint. In accordance with the second principle it is possible to pass to the auxiliary controlled system

ξ̇_n = −2λ_n ξ_n + 2u_n ξ_n, ξ_n(0) = φ_n².

The auxiliary system coincides with the system of differential equations (7) for the Fourier coefficients of the auxiliary system solution in the first example. From the Pontryagin maximum principle it follows that the optimal functions u_n do not depend on time.
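The χ-dynamics can be sanity-checked numerically (our own sketch; the sample disk eigenvalues and the constant tunings are illustrative placeholders): the sum Σ χ_n should stay equal to 1, and normalizing the explicit solution of the auxiliary system should reproduce the same trajectory.

```python
import numpy as np

# Membrane example in Fourier coordinates: chi_n = zeta_n^2 obeys
#   d chi_n/dt = 2(-lam_n + u_n) chi_n - 2 chi_n * sum_i (-lam_i + u_i) chi_i,
# which preserves sum_n chi_n = 1, and normalizing the explicit solution of the
# auxiliary system xi_n' = 2(-lam_n + u_n) xi_n reproduces the same trajectory.
N, T, dt = 6, 1.0, 1e-4
lam = np.array([0.0, 3.39, 3.39, 9.33, 14.68, 14.68])  # sample Neumann disk eigenvalues
u = 0.3 * np.ones(N)                                   # illustrative tuning
chi = np.ones(N) / N                                   # sum chi = 1

a = 2.0 * (-lam + u)
xi_T = chi * np.exp(a * T)                             # auxiliary solution at T

for _ in range(int(T / dt)):                           # explicit Euler
    chi = chi + dt * (a * chi - chi * np.dot(a, chi))

print(chi.sum(), np.max(np.abs(chi - xi_T / xi_T.sum())))
```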
Next, the auxiliary problem can be solved by the method of moments as in the first example. The finite-dimensional approximations coincide with the approximation of the first problem. The results of the numerical experiment are given in the following graphs.
Here

h = J_1(1.841 √(x_1² + x_2²)) · (0.003x_1 + 0.005x_2)/√(x_1² + x_2²),

where J_1 is the first Bessel function, c² = 10, T = 7.
The initial distribution is shown in Fig. 6, the desired distribution in Fig. 7, and the final distribution in Fig. 8. The graphs in Fig. 7 and Fig. 8 almost coincide.
Figure 6. The initial distribution.

Figure 7. The desired distribution.

Figure 8. The final distribution.
It is possible to solve an optimal stabilization problem with a state constraint using the same approach. For example, consider the controlled heat conductivity process for the rod with boundary and initial conditions

∂z/∂t = Δz + λ_0 z + w, ∂z/∂x|_{x=0} = ∂z/∂x|_{x=l} = 0, z(x,0) = φ(x).

Here ∂φ/∂x|_{x=0} = ∂φ/∂x|_{x=l} = 0, φ(0) = 0, and λ_0 is some constant. The process is required to be controlled to stabilize during a fixed time T, i.e. to minimize the deviation of the final distribution from 0:

∫_0^l z²(x,T) dx → min.

Moreover, the state constraint z(0,t) = 0 is given at any time t. Let the control w be chosen in the following form:

w = u∗z − (z+1)(Δz + λ_0 z + u∗z)|_{x=0},

where ‖u‖² ≤ c². Let v = z + 1; then

∂v/∂t = Δv + λ_0 v − λ_0 + u∗v − u∗1 − v(Δv + λ_0 v − λ_0 + u∗v − u∗1)|_{x=0}.

If u∗1 = −λ_0, then

∂v/∂t = Δv + λ_0 v + u∗v − v(Δv + λ_0 v + u∗v)|_{x=0}.

According to the first principle the function v satisfies the equality v(0,t) = 1; then v satisfies the equation

∂v/∂t = Δv + u∗v − v(Δv + u∗v)|_{x=0},

and the function z satisfies the state constraint. The optimal stabilization problem is reduced to the first optimal control problem for the rod in the motivation example with h = 1. The results of the numerical experiment are given in Fig. 9.
Figure 9. 3D graph of heat distributions.
Here λ_0 = 10, φ(x) = 2e^{−2x} sin(πx), l = 1, T = 2. It can be seen that the heat distribution converges to 0 very quickly and meets the state constraint: the temperature on the left end of the rod is equal to 0. Moreover, the heat distribution coincides almost exactly with zero after t = 0.3. It is possible to compare the results with the solution of a similar problem [31]. That solution was obtained for the same equation with the same parameters and initial conditions, without a state constraint, using boundary control. The graph of the solution (the temperature evolution in the rod) was presented in the paper [31] in Fig. 4 on page 620. There the solution converges to zero more slowly than in Fig. 9: the heat distribution coincides almost exactly with zero only at the moment T = 2.

We can use these principles not only in physical models but also in biological and other models. Consider the controlled process with inheritance [32]

μ̇ = (k(x) + u(x,t))μ − ((k+u)μ(X))μ (17)

with the initial condition (9). Here X is a topological space, x ∈ X, k(x) is a self-reproduction coefficient, a continuous function on X; u(x,t) is a control, |u(x,t)| ≤ c. The limiting coefficient −(k+u)μ(X) is equal to the general reproduction. For example, μ is the distribution of the population biomass on the set X of genotypes. The process is required to be controlled to maximize μ[T](A) on a measurable set A. According to the first principle the solution of Cauchy problem (9), (17) is a probability measure at any time. As follows from the second principle, it is possible to pass to the auxiliary system

ξ̇ = (k(x) + u(x,t))ξ.

This fact gives a chance to solve an optimal control problem for the linear auxiliary system: to find an admissible control u minimizing the functional

−ξ[T](A)/ξ[T](X).

It is possible to use the Pontryagin maximum principle to solve this optimization problem. According to it, the adjoint function ψ satisfies the equations

ψ̇ = −(k(x) + u(x,t))ψ, ψ(x,T) = −[χ_A(x) ξ[T](X) − ξ[T](A)]/ξ²[T](X),

where χ_A is the characteristic function of A:

χ_A(x) = 1 if x ∈ A, χ_A(x) = 0 if x ∉ A.

The Hamilton-Pontryagin function H has the form

H(u) = (k(x) + u)ψξ.

It can be seen that the Hamilton-Pontryagin function has a minimum if u = c for x ∈ A and u = −c for x ∉ A. As follows from the third principle, the optimal control has the form

u⁰(x,t) = c if x ∈ A, u⁰(x,t) = −c if x ∉ A.
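The bang-bang structure of this control can be illustrated on a finite set of genotypes. The discretization below is our own (the coefficients k, the set A, and the comparison controls are arbitrary choices): for controls constant in time the auxiliary system is solved in closed form, and the control u = c on A, u = −c off A should dominate other admissible controls.

```python
import numpy as np

# Sketch of the inheritance example on a finite set X = {1,...,n}: with the
# bang-bang control u = c on A and u = -c off A, the normalized terminal mass
#   mu[T](A) = xi[T](A) / xi[T](X),  xi[T] = nu * exp((k + u) T),
# is compared against u = 0 and random admissible constant controls.
n, c, T = 10, 0.5, 2.0
rng = np.random.default_rng(2)
k = rng.uniform(-1.0, 1.0, n)                 # self-reproduction coefficients
A = np.arange(n) < n // 2                     # the set whose mass is maximized
nu = np.ones(n) / n                           # normed initial measure

def mass_A(u):                                # closed-form auxiliary solution
    xi = nu * np.exp((k + u) * T)
    return xi[A].sum() / xi.sum()

u_opt = np.where(A, c, -c)                    # bang-bang control from the text
best = mass_A(u_opt)
others = [mass_A(rng.uniform(-c, c, n)) for _ in range(200)]
others.append(mass_A(np.zeros(n)))

print(best, max(others))                      # bang-bang control dominates
```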
5. Conclusions

In this work the following results are obtained. The general optimal control problem has been considered in the space of measures; the general solution principles have been established; necessary conditions for the optimality of normed measure dynamics have been deduced; the methods of solution have been developed for specific optimal control problems of distributed parameter systems with a state constraint in the form of an equality; feedback control has been suggested for satisfying the state constraint; optimal control problems of heat conductivity with state constraints have been considered as examples; feedback has been constructed by using bilinear control and an integral transformation; a numerical solution has been proposed based on the method of moments. It can be seen that differential equations in measures are useful for solving optimal control problems. It is possible to solve various specific problems on the basis of the principles stated for measure dynamics.
Acknowledgement

This research was supported by the Russian Foundation for Basic Research (RFBR grant No. 13-01-12452).
References
[1] P. Neittaanmäki, D. Tiba, Optimal Control of Nonlinear Parabolic Systems: Theory, Algorithms and Applications, Marcel Dekker, New York, 1994
[2] F. Tröltzsch, A. Unger, Fast solution of optimal control problems in the selective cooling of steel, Journal of Applied Mathematics and Mechanics 81(7) (2001), pp. 447-456
[3] P. Benner, J. Saak, Efficient numerical solution of the LQR-problem for the heat equation, Proc. Appl. Math. Mech. 4(1) (2004), pp. 648-649
[4] A. H. Borzabadi, A. V. Kamyad, M. H. Farahi, Optimal control of the heat equation in an inhomogeneous body, J. Appl. Math. & Computing 15(1-2) (2004), pp. 127-146
[5] M. Farag, M. Al-Manthari, Optimal control of a second order parabolic heat equation, Information Theories & Applications 13(3) (2006), pp. 215-221
[6] O.A. Kuzenkov, Optimal cooling of a section of a solid body, Computational Mathematics and Modeling 7(4) (1996), pp. 393-398
[7] H. O. Fattorini, Infinite Dimensional Optimization and Control Theory, Encyclopedia of Mathematics and its Applications, vol. 62, Cambridge University Press, 1999
[8] I. Neitzel, F. Tröltzsch, On regularization methods for the numerical solution of parabolic control problems with pointwise state constraints, ESAIM: Control, Optimisation and Calculus of Variations 15 (2009), pp. 426-453
[9] F. Tröltzsch, Optimal Control of Partial Differential Equations, AMS, USA, 2010
[10] A. Borzì, S. G. Andrade, Multigrid solution of a Lavrentiev-regularized state-constrained parabolic control problem, Numer. Math. Theor. Meth. Appl. 5(1) (2012), pp. 1-18
[11] B. S. Mordukhovich, Optimal control and feedback design of state-constrained parabolic systems in uncertainty conditions, Applicable Analysis 90(6) (2011), pp. 1075-1109
[12] A. Smyshlyaev, M. Krstic, Adaptive Control of Parabolic PDEs, Princeton University Press, New Jersey, 2010
[13] M. Ouzahra, Feedback stabilization of parabolic systems with bilinear controls, Electronic Journal of Differential Equations 2011(38) (2011), pp. 1-10
[14] O.A. Kuzenkov, Optimal separation of a specified harmonic using feedback in the presence of phase constraints, Computational Mathematics and Modeling 9(2) (1998), pp. 147-152
[15] O.A. Kuzenkov, A.V. Novozhenin, Optimal operator control of systems in a Banach space, Differential Equations 48(1) (2012), pp. 136-146
[16] L. Schwartz, Théorie des Distributions, Hermann, Paris, 1966
[17] E. Casas, Control of an elliptic problem with pointwise state constraints, SIAM J. Control Optim. 24(6) (1986), pp. 1309-1318
[18] L. Boccardo, T. Gallouët, Nonlinear elliptic and parabolic equations involving measure data, J. Funct. Anal. 87(1) (1989), pp. 149-169
[19] C. Meyer, L. Panizzi, A. Schiela, Uniqueness criteria for solutions of the adjoint equation in state-constrained optimal control, Numerical Functional Analysis and Optimization 32(9) (2011), pp. 983-1007
[20] C. Clason, K. Kunisch, A measure space approach to optimal source placement, Computational Optimization and Applications 53(1) (2012), pp. 155-171
[21] E. Casas, C. Clason, K. Kunisch, Parabolic control problems in measure spaces with sparse solutions, SIAM J. Control Optim. 51(4) (2013), pp. 2788-2808
[22] K. Pieper, B. Vexler, A priori error analysis for discretization of sparse elliptic optimal control problems in measure space, SIAM J. Control Optim. 51(1) (2013), pp. 28-63
[23] U. Vaidya, P. G. Mehta, Lyapunov measure for almost everywhere stability, IEEE Transactions on Automatic Control 53(1) (2008), pp. 307-323
[24] A.G. Butkovsky, Distributed Control Systems, Elsevier, New York, 1969
[25] R. Meziat, The method of moments in global optimization, Journal of Mathematical Sciences 116(3) (2003), pp. 3303-3324
[26] D. Patino, R. Meziat, P. Pedregal, An alternative approach for non-linear optimal control problems based on the method of moments, Computational Optimization and Applications 38(1) (2007), pp. 147-171
[27] O.A. Kuzenkov, A.V. Novozhenin, Necessary optimality conditions for systems on a countable simplex, XII International Conference "Stability and Oscillations of Nonlinear Control Systems", Book of Abstracts, Moscow (2012), pp. 201-203
[28] O.A. Kuzenkov, E.A. Ryabova, Optimal control of a hyperbolic system on a simplex, Journal of Computer and Systems Sciences International 42(2) (2003), pp. 227-233
[29] O.A. Kuzenkov, Optimal selection algorithm of the specified harmonic for the parabolic system with the maintenance of the constant sum of state coordinates, Proceedings of the First
International Conference “Mathematical Algorithms”, Nizhny Novgorod, August 15-19, 1994, Nizhny Novgorod, (1995), pp. 63-66 [30] O.A. Kuzenkov Optimal control in families of measures, XV International Conference “Dynamical system modeling and stability investigation”, May 25-27, 2011, Kyiv, Ukraine, Abstracts of conference reports, Kyiv (2011), p. 362 [31] A. Smyshlyaev, M. Krstic, Backstepping observers for a class of parabolic PDEs, Systems & Control Letters 54 (2005), pp. 613-625 [32] A. Gorban, Selection Theorem for Systems with Inheritance, Math. Model. Nat. Phenom. 2(4) (2007), pp. 1-45