Stochastic optimal structural control: Stochastic optimal open-loop feedback control


Advances in Engineering Software 44 (2012) 26–34

K. Marti
Federal Armed Forces University Munich, Aero-Space Engineering and Technology, 85577 Neubiberg/Munich, Germany

Article history: Available online 10 August 2011

Keywords: Stochastic structural control; Active control of structures; Robust open-loop feedback control; Robust model predictive control; Stochastic optimization methods; H-minimal control; Two-point boundary value problems

Abstract

In order to stabilize mechanical structures under dynamic applied loads, active control strategies are taken into account. The structures are usually stationary, safe and stable without external dynamic disturbances, such as strong earthquakes, wind turbulence, and water waves; thus, in case of dynamic disturbances, additional control elements can be installed enabling active control actions. Active control strategies for mechanical structures are applied in order to counteract heavy applied dynamic loads which would lead to large vibrations causing possible damage to the structure. Modeling the structural dynamics by means of a system of first-order random differential equations for the state vector (displacement vector q and time derivative \dot q of q), robust optimal controls are determined in order to cope with the stochastic uncertainty in the applied loadings.

© 2011 Civil-Comp Ltd and Elsevier Ltd. All rights reserved.

1. Structural control systems under stochastic uncertainty

1.1. Optimal structural control: active control under stochastic uncertainty

In order to stabilize mechanical structures under dynamic applied loads, active control strategies are taken into account. The structures are usually stationary, safe and stable without external dynamic disturbances. Thus, in case of dynamic disturbances, additional control elements can be installed that enable active control actions. Active control strategies for mechanical structures are applied in order to counteract heavy applied dynamic loads, such as earthquakes, wind turbulence, and water waves, which would lead to large vibrations causing possible damage to the structure. Robust optimal controls are determined in order to cope with the stochastic uncertainty involved in the dynamic parameters, the initial values and the applied loadings.

Basically, active control depends on the supply of external energy to counteract the dynamic response of a structure. Current research is directed towards the design of active control to reduce the mean-square response, i.e. the displacements and their velocities, of the system to a desired level within a reasonable span of time. While the actual time path of the random external load is not known at the planning stage, we assume here that the probability distribution, or at least the occurring moments, of the applied load is known. The performance of the stochastic dynamic system is evaluated by means of a convex, quadratic cost

function along the trajectory and at the terminal point: costs for displacements and feedback control. The problem is then to determine an optimal feedback control law minimizing the expected total costs. In active control of dynamic structures, cf. [5,14,17–21], the behavior of the m-vector q = q(t) of displacements with respect to time t is described by a system of second-order linear differential equations for q(t), whose right-hand side is the sum of the stochastic applied load process and the control force depending on a control n-vector function u(t):

M \ddot q(t) + D \dot q(t) + K q(t) = f(t,\omega,u(t)), \quad t_0 \le t \le t_f.   (1a)

Hence, the force vector f = f(t,\omega,u(t)) on the right-hand side of the dynamic Eq. (1a) is given by the sum

f(t,\omega,u) = f_0(t,\omega) + f_a(t,\omega,u)   (1b)

of the applied load f_0 = f_0(t,\omega), a vector-valued stochastic process describing e.g. external loads or excitation of the structure caused by earthquakes, wind turbulence, water waves, etc., and the actuator or control force vector f_a = f_a(t,\omega,u), depending on an input or control n-vector function u = u(t), t_0 \le t \le t_f. Here, \omega denotes the random element, lying in a certain probability space (\Omega, \mathcal{A}, P), used to represent random variations. Furthermore, M, D, K denote the m \times m mass, damping and stiffness matrices, respectively. In many cases the actuator or control force f_a is linear, i.e.

f_a = C u, \quad t_0 \le t \le t_f,   (1c)

with a certain m \times n matrix C.

E-mail address: [email protected]

0965-9978/$ - see front matter © 2011 Civil-Comp Ltd and Elsevier Ltd. All rights reserved. doi:10.1016/j.advengsoft.2011.05.040


By introducing appropriate matrices, the linear system of second-order differential Eqs. (1a) and (1b) can be represented by a system of first-order differential equations as follows:

\dot z = g(t,\omega,z(t,\omega),u) := A z(t,\omega) + B u + b(t,\omega),   (2a)

with

A := \begin{pmatrix} 0 & I \\ -M^{-1}K & -M^{-1}D \end{pmatrix}, \qquad B := \begin{pmatrix} 0 \\ M^{-1}C \end{pmatrix},   (2b)

b(t,\omega) := \begin{pmatrix} 0 \\ M^{-1} f_0(t,\omega) \end{pmatrix}.   (2c)

Moreover, z = z(t) is the 2m-state vector defined by

z := \begin{pmatrix} q \\ \dot q \end{pmatrix},   (2d)

fulfilling a certain initial condition

z(t_0) = \begin{pmatrix} q(t_0) \\ \dot q(t_0) \end{pmatrix} := \begin{pmatrix} q_0 \\ \dot q_0 \end{pmatrix},   (2e)

with deterministic or stochastic initial values q_0 = q_0(\omega), \dot q_0 = \dot q_0(\omega).
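As an illustration of the reduction (2b), (2c), the block matrices A, B and the load term b can be assembled numerically as follows (a minimal sketch assuming NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def first_order_form(M, D, K, C):
    """Assemble A and B of the first-order system (2a), (2b) from the
    mass, damping, stiffness and actuator matrices M, D, K, C."""
    m = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((m, m)), np.eye(m)],
                  [-Minv @ K,        -Minv @ D]])
    B = np.vstack([np.zeros((m, C.shape[1])), Minv @ C])
    return A, B

def load_term(M, f0):
    """b(t, omega) = (0, M^{-1} f0(t, omega))^T, cf. Eq. (2c)."""
    m = M.shape[0]
    return np.concatenate([np.zeros(m), np.linalg.solve(M, f0)])
```

For a single-degree-of-freedom system with M = 2, D = 0.1, K = 3 and C = 1 this reproduces the familiar companion form with rows (0, 1) and (−K/M, −D/M).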

1.2. Stochastic optimal open-loop feedback control

The aim of the present paper is to determine an (approximate) feedback control law for structural control systems as described in Section 1.1, taking into account stochastic parameter variations. Stochastic parameter variations arise in the following ways, see e.g. [6]:

- The initial conditions of a system may not be accurately specified or completely known.
- Systems experience disturbances from their environment; in fact, the separation between system and environment is always an idealization. Also, system commands are typically not known a priori.
- Uncertainty in the accuracy of a system model itself is a central source. Any dynamical model of a system will neglect some physical phenomena, which means that any analytical control approach based solely on this model will neglect some regimes of operation.

Many of the current control design methods are based on some chosen nominal set of model parameters. Obviously, controls of this type are not robust in general: in a parametric system or structure, a decision variable, such as a control input function u = u(t), a controller or regulator, is called robust with respect to a certain set of possible system or model parameters if it has satisfactory performance for all parameters under consideration. Thus, robust controls and regulators should be insensitive with respect to parameter variations. Obviously, this is a decision-theoretical problem which depends essentially on the amount of information available about the unknown parameters, cf. [11].

Usually, robust control methods are designed to function properly as long as the uncertain parameters lie within some compact set, e.g. a multidimensional interval. Robust methods aim to achieve or maintain satisfactory performance, such as controllability, reachability and/or a certain type of stability, for each parameter vector in the given set of model parameters. Describing the desired property by a scalar criterion, e.g. by H∞- and H2-functions, cf. [4], minimax decision rules are also applied. Since the minimax criterion and the "holds for all parameters" criterion are very pessimistic decision criteria, and in many practical cases more detailed information than box-information alone is available, more recent techniques should also be applied in optimal control design under uncertainty.

One of these new tools is Stochastic Optimization for the case of stochastic uncertainty, which means that the unknown parameters can be modeled by random variables/vectors having a given (joint) distribution. Here, the performance of the controlled system or structure is evaluated first by a certain total cost function. E.g., in regulator optimization problems, using the weighted sum of the costs for the tracking error and the costs for the control, see Section 2, one obtains feedback controls that are robust in terms of stability properties, hence eigenvalues of the homogeneous system lying in the left half of the complex plane, cf. [1]. The different realizations of the random parameter vector with their probability distribution are then incorporated into the design process by taking expectations with respect to the probabilistic information about the plant and its working neighborhood at the time of decision. Thus, by determining stochastic optimal controls, i.e. minimizing the (conditional) total expected costs, parameter-insensitive, hence robust, controls are obtained.

Summarizing the above considerations, we get the following design criterion for robust optimal control: the major objective of (feedback) control is to minimize the effects of unknown initial conditions and external influences on system behavior, subject to the constraint of not having a complete representation of the system, cf. [6]. Thus, in case of stochastic uncertainty this can be realized by computing stochastic optimal controls based on new techniques from Stochastic Optimization, see [13].

Assuming here that at each time point t the state z_t := z(t) is available, the control force f_a = Cu is generated by means of a PD controller. Hence, for the input n-vector function u = u(t), we have

u(t) := u(t, q(t), \dot q(t)) = u(t, z(t)),   (3a)

with a feedback control law u = u(t, q, \dot q). Finding optimal feedback control laws that are insensitive as far as possible with respect to parameter variations means that, besides determining an optimal control law, i.e. an optimal function from the state space into the control space, its insensitivity with respect to stochastic parameter variations should also be guaranteed. Efficient approximate feedback control laws are now constructed by using the concept of open-loop feedback control. Open-loop feedback control is the main tool in model predictive control, cf. [2,12,15], which is very often used to solve optimal control problems in practice. The idea of open-loop feedback control is to construct a feedback control law quasi argument-wise, see [3,8]. Having to cope with stochastic uncertainty, we proceed with the following new version of open-loop feedback control: at each intermediate time point t_b \in [t_0, t_f], based on the observed state z_b = z(t_b) at t_b, a stochastic optimal open-loop control u^* = u^*(t \mid (t_b, z_b)), t_b \le t \le t_f, is first determined on the remaining time interval [t_b, t_f], see Fig. 1, by stochastic optimization methods, cf. [11]. Then, at this time t = t_b, only the "first" control value u^*(t_b, z_b) := u^*(t_b \mid (t_b, z_b)) is used. The same construction is applied for each other argument (t, z_t := z(t)). Having a stochastic optimal open-loop control u^* = u^*(t \mid (t_b, z_b)), t_b \le t \le t_f, on each remaining time interval [t_b, t_f] with an arbitrary starting time t_b, t_0 \le t_b \le t_f, a stochastic optimal open-loop feedback control law is then defined by

u = u(t_b, z(t_b)) := u^*(t_b \mid (t_b, z_b)), \quad t_0 \le t_b \le t_f.   (3b)

Fig. 1. Remaining time interval.
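The open-loop feedback scheme of (3b) can be sketched as a generic receding-horizon loop (an illustrative skeleton only; `solve_open_loop` stands for any routine computing the stochastic optimal open-loop control on the remaining interval, and `step` for the observed state transition — both are hypothetical placeholders, not part of the paper):

```python
import numpy as np

def open_loop_feedback(solve_open_loop, step, z0, t0, tf, dt):
    """Open-loop feedback scheme of Eq. (3b): at each time t_b, solve
    the open-loop problem on the remaining interval [t_b, t_f], apply
    only the first control value, observe the next state, and repeat.

    solve_open_loop(t_b, z_b, t_f) must return a function t -> u*(t),
    the open-loop control on [t_b, t_f]; step(z, u, t, dt) advances the
    (simulated or observed) state.  Both are user-supplied."""
    t, z, history = t0, np.asarray(z0, dtype=float), []
    while t < tf:
        u_star = solve_open_loop(t, z, tf)   # open-loop control on [t, tf]
        u = u_star(t)                        # use only the "first" value
        history.append((t, z.copy(), u))
        z = step(z, u, t, dt)                # next observed state
        t += dt
    return z, history
```

For instance, with the scalar integrator \dot z = u and the open-loop rule u(s) = −z_b/(t_f − t_b), the loop steers the state to zero at the final time.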


Remark 1.1. Due to the linear-quadratic structure of the underlying control problem, using a new stochastic version of the Hamilton–Jacobi approach, see [7], the state and costate trajectories of the optimal control problem under stochastic uncertainty can be determined analytically to a large extent. Inserting these trajectories into the H-minimal control, see [7], stochastic optimal open-loop controls are found on an arbitrary remaining time interval. These controls then immediately yield a stochastic optimal open-loop feedback control law. Moreover, the obtained controls can be realized in real-time, which has already been shown for applications in optimal control of industrial robots, cf. [16].

Remark 1.2. The construction described above for PD controllers can also be transferred to PID controllers.

2. Expected total cost function

The performance function F for active structural control systems is defined, cf. [9–11], by the conditional expectation of the total costs, being the sum of the costs L along the trajectory, arising from the displacements z = z(t,\omega) and the control input u = u(t,\omega), and possible terminal costs G arising at the final state z_f. Hence, on the remaining time interval t_b \le t \le t_f we have the following conditional expectation of the total cost function:

F := E\Big( \int_{t_b}^{t_f} L(t,\omega,z(t,\omega),u(t,\omega))\,dt + G(t_f,\omega,z(t_f,\omega)) \,\Big|\, \mathcal{A}_{t_b} \Big).   (4a)

Here, \mathcal{A}_{t_b} denotes the set of information available up to time t_b. Supposing quadratic costs along the trajectory, the function L is given by

L(t,\omega,z,u) := \frac{1}{2} z^T Q(t,\omega) z + \frac{1}{2} u^T R(t,\omega) u,   (4b)

with positive (semi)definite 2m \times 2m and n \times n matrix functions Q = Q(t,\omega) and R = R(t,\omega), respectively. In the simplest case the weight matrices Q, R are fixed. A special selection for Q reads

Q = \begin{pmatrix} Q_q & 0 \\ 0 & Q_{\dot q} \end{pmatrix},   (4c)

with positive (semi)definite weight matrices Q_q, Q_{\dot q} for q, \dot q, respectively. Furthermore, G = G(t_f,\omega,z(t_f,\omega)) describes possible terminal costs. In case of endpoint control, G is defined by

G(t_f,\omega,z(t_f,\omega)) := \frac{1}{2} \big( z(t_f,\omega) - z_f(\omega) \big)^T S \big( z(t_f,\omega) - z_f(\omega) \big),   (4d)

where S is a positive (semi)definite weight matrix, and z_f = z_f(\omega) denotes the (possibly random) final state.

Remark 2.1. Instead of \frac{1}{2} u^T R u, in the following we also use a more general convex control cost function C = C(u).

3. Open-loop control problem on the remaining time interval [t_b, t_f]

Having the differential equation with random coefficients describing the behavior of the dynamic mechanical structure under uncertainty, as well as the costs arising from the displacements and at the terminal state, on a given remaining time interval [t_b, t_f] a stochastic optimal open-loop control u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f, is a solution of the following optimal control problem under stochastic uncertainty:

\min\; E\Big( \int_{t_b}^{t_f} \frac{1}{2} \big( z(t,\omega)^T Q z(t,\omega) + u(t)^T R u(t) \big)\,dt + G(t_f,\omega,z(t_f,\omega)) \,\Big|\, \mathcal{A}_{t_b} \Big)   (5a)

s.t.

\dot z(t,\omega) = A z(t,\omega) + B u(t) + b(t,\omega), \quad t_b \le t \le t_f, \ \text{a.s.},   (5b)
z(t_b,\omega) = z_b \ \text{(given)},   (5c)
u(t) \in D_t, \quad t_b \le t \le t_f.   (5d)

An important property of (5a)–(5d) is stated next, see [12]:

Lemma 3.1. If the terminal cost function G = G(t_f,\omega,z) is convex in z, and the feasible domain D_t is convex for each time point t, t_0 \le t \le t_f, then the stochastic optimal control problem (5a)–(5d) is a convex optimization problem.

4. The stochastic Hamiltonian of (5a)–(5d)

According to [12], the stochastic Hamiltonian H related to the optimal control problem (5a)–(5d) under stochastic uncertainty reads:

H(t,\omega,z,y,u) := L(t,\omega,z,u) + y^T g(t,\omega,z,u) = \frac{1}{2} z^T Q z + C(u) + y^T \big( A z + B u + b(t,\omega) \big).   (6a)

4.1. Expected Hamiltonian (with respect to the time interval [t_b, t_f] and information \mathcal{A}_{t_b})

For the definition of a H-minimal control, the conditional expectation of the stochastic Hamiltonian is needed:

H^{(b)} := E( H(t,\omega,z,y,u) \mid \mathcal{A}_{t_b} ) = E\Big( \frac{1}{2} z^T Q z + y^T (A z + b(t,\omega)) \,\Big|\, \mathcal{A}_{t_b} \Big) + C(u) + E( y^T B u \mid \mathcal{A}_{t_b} ) = C(u) + h(t)^T u + \dots   (6b)

with

h(t) := E\big( B(\omega)^T y(t,\omega) \mid \mathcal{A}_{t_b} \big) = h(t \mid t_b, \mathcal{A}_{t_b}), \quad t \ge t_b.   (6c)
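Under the quadratic cost assumptions (4b) and (4d), the running and terminal costs can be evaluated as follows (a minimal sketch; the function names are ours):

```python
import numpy as np

def running_cost(z, u, Q, R):
    """L(t, omega, z, u) = 0.5 z^T Q z + 0.5 u^T R u, cf. Eq. (4b)."""
    return 0.5 * z @ Q @ z + 0.5 * u @ R @ u

def terminal_cost(z_tf, z_f, S):
    """G = 0.5 (z(t_f) - z_f)^T S (z(t_f) - z_f), cf. Eq. (4d)."""
    d = z_tf - z_f
    return 0.5 * d @ S @ d
```

The conditional expectation (4a) would then be approximated by averaging these costs over sample trajectories z(·, ω), u(·, ω).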

4.2. H-minimal control on [t_b, t_f]

In order to formulate the two-point boundary value problem for a stochastic optimal open-loop control u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f, we first need a H-minimal control

u^* = \tilde u^*(t, z(t,\cdot), y(t,\cdot)), \quad t_b \le t \le t_f,

defined, cf. [12], for t_b \le t \le t_f as a solution of the following convex stochastic optimization problem, cf. [11]:

\min\; E\big( H(t,\omega,z(t,\omega),y(t,\omega),u) \mid \mathcal{A}_{t_b} \big),   (7a)

s.t.

u \in D_t,   (7b)

where z = z(t,\omega), y = y(t,\omega) are certain trajectories, see also [7]. According to (6a)–(6c) and (7a), (7b), the H-minimal control

u^* = \tilde u^*(t, z(t,\cdot), y(t,\cdot)) = \tilde u^*(t, h(t))   (8a)

is defined by

\tilde u^*(t, h(t)) := \arg\min_{u \in D_t} \big( C(u) + h(t)^T u \big) \quad \text{for } t \ge t_b.   (8b)

For strictly convex, differentiable cost functions C = C(u), as e.g. C(u) = \frac{1}{2} u^T R u with positive definite matrix R, the necessary and sufficient condition for \tilde u^* reads, in case D_t = \mathbb{R}^n,

\nabla C(u) + h(t) = 0.   (9a)
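For quadratic control costs C(u) = ½ uᵀRu and D_t = Rⁿ, the stationarity condition (9a) gives the H-minimal control in closed form, u = −R⁻¹h(t); a minimal numerical sketch:

```python
import numpy as np

def h_minimal_control(R, h):
    """H-minimal control (8b) for C(u) = 0.5 u^T R u and D_t = R^n:
    the stationarity condition (9a), R u + h = 0, gives u = -R^{-1} h."""
    return -np.linalg.solve(R, h)
```

The returned point is the unique minimizer of C(u) + hᵀu, since R is positive definite.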


If u \mapsto \nabla C(u) is a 1-1 operator, then the solution of (9a) reads

u = v(h(t)) := \nabla C^{-1}(-h(t)).   (9b)

With (6c) and (8b) we then have

\tilde u^*(t,h(t)) = v(h(t)) = \nabla C^{-1}\big( -E( B(\omega)^T y(t,\omega) \mid \mathcal{A}_{t_b} ) \big).   (9c)

Conditional expectations as in (9c) are in the following also denoted by an overbar marked with the index (b), see e.g. (11a).

5. Canonical (Hamiltonian) system

Assume here that D_t = \mathbb{R}^n, t_0 \le t \le t_f. In the following we suppose that a H-minimal control \tilde u^* = \tilde u^*(t, z(t,\cdot), y(t,\cdot)), t_b \le t \le t_f, i.e. a solution \tilde u^* = \tilde u^*(t,h(t)) = v(h(t)) of the stochastic optimization problem (7a), (7b), is available. According to [12], a stochastic optimal open-loop control u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f,

u^*(t \mid (t_b,z_b)) := \tilde u^*(t, z^*(t,\cdot), y^*(t,\cdot)), \quad t_b \le t \le t_f,   (10)

of the stochastic optimal control problem (5a)–(5d) can be obtained by solving the following stochastic two-point boundary value problem related to (5a)–(5d) and inserting this solution into a H-minimal control:

Theorem 5.1. Suppose that D_t = \mathbb{R}^n for all time points t under consideration, and assume that the inverse \nabla C^{-1} of the gradient function \nabla C exists. If z^* = z^*(t,\omega), y^* = y^*(t,\omega), t_0 \le t \le t_f, is a solution of

\dot z(t,\omega) = A z(t,\omega) + B(\omega)\, \nabla C^{-1}\big( -\overline{B(\omega)^T y(t,\omega)}^{(b)} \big) + b(t,\omega), \quad t_b \le t \le t_f,   (11a)
z(t_b,\omega) = z_b,   (11b)
\dot y(t,\omega) = -A^T y(t,\omega) - Q z(t,\omega),   (11c)
y(t_f,\omega) = \nabla_z G(t_f,\omega,z(t_f,\omega)),   (11d)

then the function u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f, defined by (10) is a stochastic optimal open-loop control for the remaining time interval t_b \le t \le t_f.

6. Minimal energy control

In this section we consider the case Q = 0, i.e. there are no costs for the displacements z = \binom{q}{\dot q}. In this case the solution of (11c) and (11d) reads

y(t,\omega) = e^{A^T(t_f-t)}\, \nabla_z G(t_f,\omega,z(t_f,\omega)), \quad t_b \le t \le t_f,   (12a)

and (12b) reads

\tilde u^*(t,h(t)) = v(h(t)) = \nabla C^{-1}\big( -B^T e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \big), \quad t_b \le t \le t_f.   (12b)

Having (12a) and (12b), for the state trajectory z = z(t,\omega) we get, see (11a) and (11b), the following system of ordinary differential equations:

\dot z(t,\omega) = A z(t,\omega) + B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \big) + b(t,\omega), \quad t_b \le t \le t_f,   (13a)
z(t_b,\omega) = z_b.   (13b)

The solution of system (13a) and (13b) reads

z(t,\omega) = e^{A(t-t_b)} z_b + \int_{t_b}^{t} e^{A(t-s)} \Big( b(s,\omega) + B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \big) \Big)\,ds, \quad t_b \le t \le t_f.   (14)

For the final state z = z(t_f,\omega) we get the relation:

z(t_f,\omega) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} \Big( b(s,\omega) + B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \big) \Big)\,ds.   (15)

6.1. Endpoint control

In the case of endpoint control, the terminal cost function is given by the following definition (16a), where z_f = z_f(\omega) denotes the desired, possibly random, final state:

G(t_f,\omega,z(t_f,\omega)) := \frac{1}{2} \| z(t_f,\omega) - z_f(\omega) \|^2.   (16a)

Hence,

\nabla_z G(t_f,\omega,z(t_f,\omega)) = z(t_f,\omega) - z_f(\omega),   (16b)

and therefore

\overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} = \overline{z(t_f,\omega)}^{(b)} - \overline{z_f}^{(b)} = E( z(t_f,\omega) \mid \mathcal{A}_{t_b} ) - E( z_f \mid \mathcal{A}_{t_b} ).   (16c)

Thus, for a deterministic matrix B this yields

z(t_f,\omega) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} \Big( b(s,\omega) + B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)} \big( \overline{z(t_f,\omega)}^{(b)} - \overline{z_f}^{(b)} \big) \big) \Big)\,ds.   (17a)

Taking conditional expectations E( \dots \mid \mathcal{A}_{t_b} ) in (17a), we get the following condition for \overline{z(t_f,\omega)}^{(b)}:

\overline{z(t_f,\omega)}^{(b)} = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + \int_{t_b}^{t_f} e^{A(t_f-s)} B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)} \big( \overline{z(t_f,\omega)}^{(b)} - \overline{z_f}^{(b)} \big) \big)\,ds.   (17b)

6.1.1. Quadratic control costs

Here, the control cost function C = C(u) reads

C(u) = \frac{1}{2} u^T R u,   (18a)

hence,

\nabla C(u) = R u,   (18b)

and therefore

\nabla C^{-1}(w) = R^{-1} w.   (18c)

Consequently, (17b) reads

\overline{z(t_f,\omega)}^{(b)} = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds - \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)}\,ds\; \overline{z(t_f,\omega)}^{(b)} + \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)}\,ds\; \overline{z_f(\omega)}^{(b)}.   (19)
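Relation (19) is a linear equation for the conditional mean of the terminal state. A numerical sketch (assuming SciPy's `expm` for the matrix exponential; the trapezoidal discretization and function names are our own choices):

```python
import numpy as np
from scipy.linalg import expm

def gramian_U(A, B, Rinv, tb, tf, n=200):
    """Trapezoidal approximation of
    U = int_tb^tf e^{A(tf-s)} B R^{-1} B^T e^{A^T(tf-s)} ds."""
    d = A.shape[0]
    h = (tf - tb) / n
    U = np.zeros((d, d))
    for k in range(n + 1):
        E = expm(A * (tf - (tb + k * h)))
        w = 0.5 if k in (0, n) else 1.0
        U += w * h * (E @ B @ Rinv @ B.T @ E.T)
    return U

def expected_terminal_state(A, B, Rinv, zb, zf_bar, b_bar, tb, tf, n=200):
    """Solve (I + U) zbar_tf = e^{A(tf-tb)} zb
       + int_tb^tf e^{A(tf-s)} b_bar(s) ds + U zf_bar, cf. Eq. (19);
    b_bar(s) is the conditional mean of the load term b(s, omega)."""
    d = A.shape[0]
    U = gramian_U(A, B, Rinv, tb, tf, n)
    h = (tf - tb) / n
    forcing = np.zeros(d)
    for k in range(n + 1):
        s = tb + k * h
        w = 0.5 if k in (0, n) else 1.0
        forcing += w * h * (expm(A * (tf - s)) @ b_bar(s))
    rhs = expm(A * (tf - tb)) @ zb + forcing + U @ zf_bar
    return np.linalg.solve(np.eye(d) + U, rhs)
```

For the scalar system A = 0, B = 1, R = 1 on [0, 1], one gets U = 1, so a start at z_b = 2 with zero mean load and z̄_f = 0 gives z̄(t_f) = 1.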


Putting

U := \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)}\,ds,   (20)

we find

(I + U)\, \overline{z(t_f,\omega)}^{(b)} = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + U\, \overline{z_f}^{(b)},   (21a)

hence,

\overline{z(t_f,\omega)}^{(b)} = (I+U)^{-1} e^{A(t_f-t_b)} z_b + (I+U)^{-1} \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + (I+U)^{-1} U\, \overline{z_f}^{(b)}.   (21b)

The regularity of the matrix I + U is guaranteed by the following lemma:

Lemma 6.1. I + U is regular.

Proof. Due to the previous considerations, U is a positive semidefinite 2m \times 2m matrix. Hence, U has only nonnegative eigenvalues. Assuming that the matrix I + U is singular, there is a 2m-vector w \ne 0 such that

(I + U) w = 0.

Consequently,

U w = -I w = -w = (-1) w,

which means that \lambda = -1 is an eigenvalue of U. Since this contradicts the above-mentioned property of U, the matrix I + U must be regular.

Now, (21b) and (16b) yield

\overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} = \overline{z(t_f,\omega)}^{(b)} - \overline{z_f}^{(b)} = (I+U)^{-1} e^{A(t_f-t_b)} z_b + (I+U)^{-1} \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + \big( (I+U)^{-1} U - I \big)\, \overline{z_f}^{(b)}.   (22)

Thus, a stochastic optimal open-loop control u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f, on [t_b,t_f] is given by, cf. (12b),

u^*(t \mid (t_b,z_b)) = -R^{-1} B^T e^{A^T(t_f-t)} \Big( (I+U)^{-1} e^{A(t_f-t_b)} z_b + (I+U)^{-1} \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + \big( (I+U)^{-1} U - I \big)\, \overline{z_f}^{(b)} \Big), \quad t_b \le t \le t_f.   (23)

Finally, the open-loop feedback control law u = u(t,z(t)) is then given by

u(t_b,z(t_b)) := u^*(t_b \mid (t_b,z_b)) = -R^{-1} B^T e^{A^T(t_f-t_b)} \Big( (I+U)^{-1} e^{A(t_f-t_b)} z_b + (I+U)^{-1} \int_{t_b}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(b)}\,ds + \big( (I+U)^{-1} U - I \big)\, \overline{z_f}^{(b)} \Big),   (24)

with z_b := z(t_b). Replacing t_b \to t and z_b \to z (also in the integration limits of U), we find this result:

Theorem 6.1. The open-loop feedback control law u = u(t,z) is given by

u(t,z) = \underbrace{-R^{-1} B^T e^{A^T(t_f-t)} (I+U)^{-1} e^{A(t_f-t)}}_{G_0(t)}\, z \;\underbrace{-\, R^{-1} B^T e^{A^T(t_f-t)} (I+U)^{-1} \int_{t}^{t_f} e^{A(t_f-s)}\, \overline{b(s,\omega)}^{(t)}\,ds}_{G_1(t,\,\overline{b(\cdot,\omega)}^{(t)})} \;\underbrace{-\, R^{-1} B^T e^{A^T(t_f-t)} \big( (I+U)^{-1} U - I \big)}_{G_2(t)}\, \overline{z_f}^{(t)},   (25a)

hence,

u(t,z) = G_0(t)\, z + G_1\big(t, \overline{b(\cdot,\omega)}^{(t)}\big) + G_2(t)\, \overline{z_f}^{(t)}.   (25b)

Remark 6.1. Note that the open-loop feedback law z \mapsto u(t,z) is in general not linear, but affine-linear.

6.2. Endpoint control with more general terminal cost functions

In this subsection we consider more general terminal cost functions G. Hence, suppose

G(t_f,\omega,z(t_f,\omega)) := g\big( z(t_f,\omega) - z_f(\omega) \big),   (26a)

which yields

\nabla_z G(t_f,\omega,z(t_f,\omega)) = \nabla g\big( z(t_f,\omega) - z_f(\omega) \big).   (26b)

Thus,

\tilde u^*(t,h(t)) = v(h(t)) = \nabla C^{-1}\big( -B^T e^{A^T(t_f-t)}\, \overline{\nabla g(z(t_f,\omega) - z_f(\omega))}^{(b)} \big), \quad t_b \le t \le t_f,   (27a)

and therefore, see (15),

z(t_f,\omega) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} b(s,\omega)\,ds + \int_{t_b}^{t_f} e^{A(t_f-s)} B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)}\, \overline{\nabla g(z(t_f,\omega) - z_f(\omega))}^{(b)} \big)\,ds, \quad t_b \le t \le t_f.   (27b)

Special case: Now, a special terminal cost function is considered in more detail:

g(z - z_f) := \sum_{i=1}^{2m} (z_i - z_{f i})^4,   (28a)

hence,

\nabla g(z - z_f) = 4\big( (z_1 - z_{f 1})^3, \dots, (z_{2m} - z_{f\,2m})^3 \big)^T.   (28b)

Here,

\overline{\nabla g(z - z_f)}^{(b)} = 4\big( E((z_1 - z_{f 1})^3 \mid \mathcal{A}_{t_b}), \dots, E((z_{2m} - z_{f\,2m})^3 \mid \mathcal{A}_{t_b}) \big)^T = 4\big( m_3^{(b)}(z_1(t_f,\cdot), z_{f 1}(\cdot)), \dots, m_3^{(b)}(z_{2m}(t_f,\cdot), z_{f\,2m}(\cdot)) \big)^T =: 4\, m_3^{(b)}\big( z(t_f,\cdot), z_f(\cdot) \big).   (29)

Thus,

z(t_f,\omega) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} b(s,\omega)\,ds + \underbrace{\int_{t_b}^{t_f} e^{A(t_f-s)} B\, \nabla C^{-1}\big( -4 B^T e^{A^T(t_f-s)}\, m_3^{(b)}(z(t_f,\cdot), z_f(\cdot)) \big)\,ds}_{=: J\big( m_3^{(b)}(z(t_f,\cdot),\, z_f(\cdot)) \big)}.   (30)
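The gain matrices G0(t) and G2(t) of the affine feedback law (25a), (25b) can be evaluated numerically, e.g. as follows (a sketch for the case b = 0, so that the load term G1 vanishes; SciPy's `expm` is assumed and the trapezoidal discretization is our own choice):

```python
import numpy as np
from scipy.linalg import expm

def feedback_gains(A, B, Rinv, t, tf, n=200):
    """Gain matrices G0(t) and G2(t) of the affine open-loop feedback
    law (25a), (25b), u(t, z) = G0(t) z + G2(t) zbar_f, for b = 0."""
    d = A.shape[0]
    h = (tf - t) / n
    # U(t) = int_t^tf e^{A(tf-s)} B R^{-1} B^T e^{A^T(tf-s)} ds
    U = np.zeros((d, d))
    for k in range(n + 1):
        E = expm(A * (tf - (t + k * h)))
        w = 0.5 if k in (0, n) else 1.0
        U += w * h * (E @ B @ Rinv @ B.T @ E.T)
    IU_inv = np.linalg.inv(np.eye(d) + U)      # regular by Lemma 6.1
    Et = expm(A.T * (tf - t))
    G0 = -Rinv @ B.T @ Et @ IU_inv @ expm(A * (tf - t))
    G2 = -Rinv @ B.T @ Et @ (IU_inv @ U - np.eye(d))
    return G0, G2
```

For the scalar system A = 0, B = 1, R = 1 on [0, 1] this gives U = 1, hence G0 = −1/2 and G2 = 1/2.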


Eq. (30) then yields

\big( z(t_f,\omega) - z_f(\omega) \big)^3 \Big|_{\text{c.-by-c.}} = \Big( e^{A(t_f-t_b)} z_b - z_f + \int_{t_b}^{t_f} e^{A(t_f-s)} b(s,\omega)\,ds + J\big( m_3^{(b)}(z(t_f,\cdot), z_f(\cdot)) \big) \Big)^3 \Big|_{\text{c.-by-c.}},   (31a)

where "c.-by-c." means "component-by-component". Taking expectations in (31a), we get the following relation for the moment vector m_3^{(b)}:

m_3^{(b)}\big( z(t_f,\cdot), z_f(\cdot) \big) = \Psi\big( m_3^{(b)}( z(t_f,\cdot), z_f(\cdot) ) \big).   (31b)

Remark 6.2.

E\big( (z(t_f,\omega) - z_f(\omega))^3 \mid \mathcal{A}_{t_b} \big) \Big|_{\text{c.-by-c.}} = E^{(t_b)}\big( ( z(t_f,\omega) - \bar z^{(b)}(t_f) + \bar z^{(b)}(t_f) - z_f(\omega) )^3 \big)
= E^{(t_b)}\Big( ( z(t_f,\omega) - \bar z^{(b)}(t_f) )^3 + 3 ( z(t_f,\omega) - \bar z^{(b)}(t_f) )^2 \big( \bar z^{(b)}(t_f) - z_f(\omega) \big) + 3 ( z(t_f,\omega) - \bar z^{(b)}(t_f) ) \big( \bar z^{(b)}(t_f) - z_f(\omega) \big)^2 + \big( \bar z^{(b)}(t_f) - z_f(\omega) \big)^3 \Big).   (31c)

Assuming that z(t_f,\omega) and z_f(\omega) are stochastically independent, then

E\big( (z(t_f,\omega) - z_f(\omega))^3 \mid \mathcal{A}_{t_b} \big) = m_3^{(b)}\big( z(t_f,\cdot) \big) + 3\, \sigma^{2\,(b)}\big( z(t_f,\cdot) \big) \big( \bar z^{(b)}(t_f) - \bar z_f^{(b)} \big) + \overline{\big( \bar z^{(b)}(t_f) - z_f(\omega) \big)^3}^{(b)}.   (31d)

6.3. Weighted quadratic terminal costs

With a certain (possibly random) weight matrix \Gamma = \Gamma(\omega), we consider the following terminal cost function:

G(t_f,\omega,z(t_f,\omega)) := \frac{1}{2} \big\| \Gamma(\omega) \big( z(t_f,\omega) - z_f \big) \big\|^2.   (32a)

Hence,

\nabla_z G(t_f,\omega,z(t_f,\omega)) = \Gamma(\omega)^T \Gamma(\omega) \big( z(t_f,\omega) - z_f(\omega) \big).   (32b)

This yields

y(t,\omega) = e^{A^T(t_f-t)}\, \nabla_z G(t_f,\omega,z(t_f,\omega)) = e^{A^T(t_f-t)}\, \Gamma(\omega)^T \Gamma(\omega) \big( z(t_f,\omega) - z_f(\omega) \big),   (33a)

hence,

\bar y^{(b)}(t) = e^{A^T(t_f-t)} \Big( \overline{\Gamma(\omega)^T \Gamma(\omega)\, z(t_f,\omega)}^{(b)} - \overline{\Gamma(\omega)^T \Gamma(\omega)\, z_f(\omega)}^{(b)} \Big).   (33b)

Thus, for the H-minimal control we find

\tilde u^*(t,h(t)) = v(h(t)) = \nabla C^{-1}\big( -B^T \bar y^{(b)}(t) \big) = \nabla C^{-1}\Big( -B^T e^{A^T(t_f-t)} \big( \overline{\Gamma(\omega)^T \Gamma(\omega)\, z(t_f,\omega)}^{(b)} - \overline{\Gamma(\omega)^T \Gamma(\omega)\, z_f(\omega)}^{(b)} \big) \Big).   (34)

We obtain therefore, see (14),

z(t,\omega) = e^{A(t-t_b)} z_b + \int_{t_b}^{t} e^{A(t-s)} \Big( b(s,\omega) + B\, \nabla C^{-1}\big( -B^T e^{A^T(t_f-s)} ( \overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} - \overline{\Gamma^T \Gamma\, z_f(\omega)}^{(b)} ) \big) \Big)\,ds.   (35a)

6.3.1. Quadratic control costs

Assume that the control cost function and its gradient are given by

C(u) = \frac{1}{2} u^T R u, \qquad \nabla C(u) = R u.   (35b)

Here, (35a) yields

z(t_f,\omega) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} b(s,\omega)\,ds - \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)} \big( \overline{\Gamma(\omega)^T \Gamma(\omega)\, z(t_f,\omega)}^{(b)} - \overline{\Gamma(\omega)^T \Gamma(\omega)\, z_f(\omega)}^{(b)} \big)\,ds.   (35c)

Multiplying with \Gamma^T \Gamma = \Gamma(\omega)^T \Gamma(\omega) and taking expectations, from (35c) we get

\overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} = \overline{\Gamma^T \Gamma}^{(b)} e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} \overline{\Gamma^T \Gamma\, e^{A(t_f-s)} b(s,\omega)}^{(b)}\,ds - \overline{\Gamma^T \Gamma}^{(b)} \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)}\,ds\; \big( \overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} - \overline{\Gamma^T \Gamma\, z_f(\omega)}^{(b)} \big).   (36a)

According to a former lemma, the matrix U = \int_{t_b}^{t_f} e^{A(t_f-s)} B R^{-1} B^T e^{A^T(t_f-s)}\,ds is positive semidefinite; consequently, I + U is regular. From (36a) we obtain

\big( I + \overline{\Gamma^T \Gamma}^{(b)} U \big)\, \overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} = \overline{\Gamma^T \Gamma}^{(b)} e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} \overline{\Gamma^T \Gamma\, e^{A(t_f-s)} b(s,\omega)}^{(b)}\,ds + \overline{\Gamma^T \Gamma}^{(b)} U\, \overline{\Gamma(\omega)^T \Gamma(\omega)\, z_f(\omega)}^{(b)}.   (36b)

Assuming that the matrix I + \overline{\Gamma^T \Gamma}^{(b)} U is regular, we get, cf. (21a) and (21b),

\overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} = \big( I + \overline{\Gamma^T \Gamma}^{(b)} U \big)^{-1} \Big( \overline{\Gamma^T \Gamma}^{(b)} e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} \overline{\Gamma^T \Gamma\, e^{A(t_f-s)} b(s,\omega)}^{(b)}\,ds \Big) + \big( I + \overline{\Gamma^T \Gamma}^{(b)} U \big)^{-1} \overline{\Gamma^T \Gamma}^{(b)} U\, \overline{\Gamma^T \Gamma\, z_f(\omega)}^{(b)}.   (36c)

Putting (36c) into (34), corresponding to (23) we get the optimal open-loop control

u^*(t) = -R^{-1} B^T e^{A^T(t_f-t)} \big( \overline{\Gamma^T \Gamma\, z(t_f,\omega)}^{(b)} - \overline{\Gamma^T \Gamma\, z_f(\omega)}^{(b)} \big), \quad t_b \le t \le t_f,   (37)

which then yields the related stochastic optimal open-loop feedback control law u = u(t,z(t)) corresponding to Theorem 6.1.

7. Nonzero costs for displacements

Suppose now that Q \ne 0. According to (11a)–(11d), for the adjoint trajectory y = y(t,\omega) we have the system of differential equations

\dot y(t,\omega) = -A^T y(t,\omega) - Q z(t,\omega),
y(t_f,\omega) = \nabla_z G(t_f,\omega,z(t_f,\omega)).

For given z(t,\omega) and \nabla_z G(t_f,\omega,z(t_f,\omega)) we have the solution:

y(t,\omega) = \int_{t}^{t_f} e^{A^T(s-t)} Q z(s,\omega)\,ds + e^{A^T(t_f-t)}\, \nabla_z G(t_f,\omega,z(t_f,\omega)).   (38)


Indeed, (38) yields

y(t_f,\omega) = \nabla_z G(t_f,\omega,z(t_f,\omega))

and

\dot y(t,\omega) = -e^{A^T \cdot 0} Q z(t,\omega) - \int_{t}^{t_f} A^T e^{A^T(s-t)} Q z(s,\omega)\,ds - A^T e^{A^T(t_f-t)}\, \nabla_z G(t_f,\omega,z(t_f,\omega))
= -Q z(t,\omega) - A^T \Big( \int_{t}^{t_f} e^{A^T(s-t)} Q z(s,\omega)\,ds + e^{A^T(t_f-t)}\, \nabla_z G(t_f,\omega,z(t_f,\omega)) \Big)
= -A^T y(t,\omega) - Q z(t,\omega).

Taking conditional expectations in (38), we find

\bar y^{(b)}(t) = E( y(t,\omega) \mid \mathcal{A}_{t_b} ) = \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds + e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)}.   (39)

This yields, see (6c), (9b), (9c),

h(t) = \overline{B^T y(t,\omega)}^{(b)} = B^T \bar y^{(b)}(t) = B^T \Big( \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds + e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \Big),   (40a)

and therefore

\tilde u^*(t,h(t)) = v(h(t)) = \nabla C^{-1}(-h(t)) = \nabla C^{-1}\Big( -B^T \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds - B^T e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \Big).   (40b)

Thus, in order to determine an open-loop feedback control, we need the function and vector, respectively,

a) \bar z^{(b)} = \bar z^{(b)}(t) = E( z(t,\omega) \mid \mathcal{A}_{t_b} ),
b) \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)},

where for z = z(t,\omega) we have, cf. (11a) and (11b), the equations

\dot z(t,\omega) = A z(t,\omega) + B\, \nabla C^{-1}\big( -B^T \bar y^{(b)}(t) \big) + b(t,\omega) = A z(t,\omega) + B\, \nabla C^{-1}\Big( -B^T \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds - B^T e^{A^T(t_f-t)}\, \overline{\nabla_z G(t_f,\omega,z(t_f,\omega))}^{(b)} \Big) + b(t,\omega),   (41a)

z(t_b,\omega) = z_b.   (41b)

7.1. Quadratic control and terminal costs

Corresponding to (16a), (16b) and (18a), (18b), suppose

\nabla_z G(t_f,\omega,z(t_f,\omega)) = z(t_f,\omega) - z_f(\omega), \qquad \nabla C^{-1}(w) = R^{-1} w.

Taking expectations in (41a) and (41b), we obtain the following integro-differential equation:

\frac{d}{dt} \bar z^{(b)}(t) = A \bar z^{(b)}(t) - B R^{-1} B^T \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds - B R^{-1} B^T e^{A^T(t_f-t)} \big( \bar z^{(b)}(t_f) - \bar z_f^{(b)} \big) + \bar b^{(b)}(t), \quad t_b \le t \le t_f,   (42a)

\bar z^{(b)}(t_b) = z_b,   (42b)

where \bar z^{(b)}(t) := E( z(t,\omega) \mid \mathcal{A}_{t_b} ). Interpreting (42a) and (42b) as an initial value problem for w(t) := \bar z^{(b)}(t) of the type

\dot w(t) = A w(t) + \tilde b(t), \quad w(t_b) = w_b, \quad t_b \le t \le t_f,

with a given function \tilde b = \tilde b(t), from (42a) and (42b) we obtain the following relation:

\bar z^{(b)}(t) = e^{A(t-t_b)} z_b + \int_{t_b}^{t} e^{A(t-s)} \Big( \bar b^{(b)}(s) - B R^{-1} B^T \Big( \int_{s}^{t_f} e^{A^T(\sigma-s)} Q\, \bar z^{(b)}(\sigma)\,d\sigma + e^{A^T(t_f-s)} \big( \bar z^{(b)}(t_f) - \bar z_f^{(b)} \big) \Big) \Big)\,ds, \quad t_b \le t \le t_f.   (43a)

This is a condition for the function \bar z^{(b)} = \bar z^{(b)}(t), t_b \le t \le t_f. Putting t = t_f, we then get the following condition for the expected terminal state vector \bar z^{(b)}(t_f):

\bar z^{(b)}(t_f) = e^{A(t_f-t_b)} z_b + \int_{t_b}^{t_f} e^{A(t_f-s)} \Big( \bar b^{(b)}(s) - B R^{-1} B^T \Big( \int_{s}^{t_f} e^{A^T(\sigma-s)} Q\, \bar z^{(b)}(\sigma)\,d\sigma + e^{A^T(t_f-s)} \big( \bar z^{(b)}(t_f) - \bar z_f^{(b)} \big) \Big) \Big)\,ds.   (43b)

The conditional expectation \bar z^{(b)}(t_f) can then be obtained from (43b) as in (21b). Having \bar z^{(b)}(t_f), from (43a) we get a fixed-point condition for \bar z^{(b)}(t), t_b \le t \le t_f. Fixed-point conditions can be solved e.g. iteratively. With \bar z^{(b)}(t), t_b \le t \le t_f, the stochastic optimal open-loop control u^* = u^*(t \mid (t_b,z_b)), t_b \le t \le t_f, follows from (40b), cf. (23):

u^*(t \mid (t_b,z_b)) = -R^{-1} B^T \Big( \int_{t}^{t_f} e^{A^T(s-t)} Q\, \bar z^{(b)}(s)\,ds + e^{A^T(t_f-t)} \big( \bar z^{(b)}(t_f) - \bar z_f^{(b)} \big) \Big), \quad t_b \le t \le t_f.   (44a)

Moreover,

u(t_b,z_b) := u^*(t_b \mid (t_b,z_b)), \quad t_0 \le t_b \le t_f,   (44b)

is then the stochastic optimal open-loop feedback control law.

Remark 7.1. Putting Q = 0 in (44a) and (44b), we again obtain the stochastic optimal open-loop feedback control law (24) of Section 6.
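The fixed-point iteration suggested for (43a) can be sketched numerically as follows (an illustrative discretization with our own choices — trapezoidal quadrature for the adjoint mean (39), explicit Euler for (42a), and a damped update for stability; SciPy's `expm` is assumed):

```python
import numpy as np
from scipy.linalg import expm

def solve_zbar(A, B, Rinv, Q, zb, zf_bar, b_bar, tb, tf, n=20, iters=10):
    """Fixed-point iteration for the mean trajectory zbar(t) of
    (42a)-(43b): evaluate the adjoint mean via Eq. (39) from the
    previous iterate, integrate (42a) forward by explicit Euler, and
    damp the update for stability."""
    ts = np.linspace(tb, tf, n + 1)
    h = (tf - tb) / n
    zbar = np.tile(np.asarray(zb, dtype=float), (n + 1, 1))
    for _ in range(iters):
        ybar = np.empty_like(zbar)
        for i, t in enumerate(ts):        # Eq. (39), trapezoidal rule
            acc = expm(A.T * (tf - t)) @ (zbar[-1] - zf_bar)
            for j in range(i, n + 1):
                if i == n:
                    break                 # empty integration interval
                w = h * (0.5 if j in (i, n) else 1.0)
                acc += w * (expm(A.T * (ts[j] - t)) @ (Q @ zbar[j]))
            ybar[i] = acc
        new = np.empty_like(zbar)         # explicit Euler for Eq. (42a)
        new[0] = zb
        for i in range(n):
            rhs = A @ new[i] - B @ Rinv @ B.T @ ybar[i] + b_bar(ts[i])
            new[i + 1] = new[i] + h * rhs
        zbar = 0.5 * (zbar + new)         # damped update
    return ts, zbar
```

For A = 0, B = 1, R = 1, Q = 0 and z_b = 1, z̄_f = 0 on [0, 1], the iteration converges to z̄(t) = 1 − t/2, hence z̄(t_f) = 1/2, consistent with (21b).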

8. Example

7.1. Quadratic control and terminal costs

We consider the structure according to Fig. 2, see [5], where we want to control the supplementary active system while minimizing the expected total costs for the control and the terminal costs. The behavior of the vector of displacements q = q(t, x) can be described by the following system of differential equations of second order:



Corresponding to (16a), (16b) and (18a), (18b), suppose

M

rGðtf ; x; zðtf ; xÞÞ ¼ zðtf ; xÞ  zf ðxÞ;

€0 ðt; xÞ q €z ðt; xÞ q



 þD

q_ 0 ðt; x; t q_ z ðt; xÞ



 þK

q0 ðt; xÞ qz ðt; xÞ



¼ f0 ðt; xÞ þ fa ðtÞ ð45Þ

1

1

rC ðwÞ ¼ R w:

with the 2  2 matrices and 2-vectors

Taking expectations in (41a) and (41b), we obtain the following integro-differential equation

d ðbÞ z ðtÞ ¼ AzðbÞ ðtÞ  BR1 BT dt 1 T AT ðt f tÞ

 BR B e

Z

tf

eA

T

ðstÞ

 M¼

Q zðbÞ ðsÞds

t

ðbÞ

ðz ðtf Þ  zf

ðbÞ

ðbÞ ðtÞ; Þþb

tb 6 t 6 tf ;



ð42aÞ zðbÞ ðtb Þ ¼ zb ;

ð42bÞ







m0

0

0

mz

 mass matrix

d0 þ dz

dz

dz

dz

k0 þ kz

kz

kz

kz





ð46aÞ

damping matrix

ð46bÞ

stiffness matrix

ð46cÞ
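For concreteness, the matrices (46a)-(46e) and the corresponding first-order system matrices of (47) below can be assembled numerically. The following is a minimal sketch; the parameter values are purely illustrative (the actual data of the example stem from [5] and are not reproduced in this paper).

```python
import numpy as np

# Illustrative (hypothetical) parameters; the example's actual data
# come from [5] and are not listed here.
m0, mz = 1.0e3, 5.0e1   # masses of main and auxiliary system
d0, dz = 2.0e2, 3.0e1   # damping coefficients
k0, kz = 5.0e4, 2.0e3   # stiffness coefficients

# Structural matrices (46a)-(46c)
M = np.array([[m0, 0.0], [0.0, mz]])
D = np.array([[d0 + dz, -dz], [-dz, dz]])
K = np.array([[k0 + kz, -kz], [-kz, kz]])

# Actuator and load patterns (46d)-(46e):
# f_a(t) = e_a * u(t),  f_0(t, w) = e_0 * f01(t, w)
e_a = np.array([-1.0, 1.0])
e_0 = np.array([1.0, 0.0])

# First-order system matrices of (47), state z = (q0, qz, q0', qz')
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ D]])
B = np.concatenate([np.zeros(2), Minv @ e_a])   # = (0, 0, -1/m0, 1/mz)
```

The entries of A and B reproduce the pattern displayed in (47); in particular B = (0, 0, -1/m0, 1/mz)^T.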


Fig. 2. Principle of active structural control.

Here we have n = 1, i.e. the control inputs u(·) ∈ C(T, ℝ) are real-valued continuous functions on the corresponding time interval T, and the weight matrix R becomes a positive real number. To represent the equation of motion (45) by a first order differential equation we set

z(t,\omega) := \big( q(t,\omega)^T, \dot q(t,\omega)^T \big)^T
            = \big( q_0(t,\omega),\, q_z(t,\omega),\, \dot q_0(t,\omega),\, \dot q_z(t,\omega) \big)^T.

This yields the dynamic equation

\dot z(t,\omega) = \begin{pmatrix} 0 & I_2 \\ -M^{-1}K & -M^{-1}D \end{pmatrix} z(t,\omega)
  + \begin{pmatrix} 0 \\ M^{-1} f_a(t) \end{pmatrix}
  + \begin{pmatrix} 0 \\ M^{-1} f_0(t,\omega) \end{pmatrix}
= \underbrace{\begin{pmatrix}
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \\
    -\frac{k_0+k_z}{m_0} & \frac{k_z}{m_0} & -\frac{d_0+d_z}{m_0} & \frac{d_z}{m_0} \\
    \frac{k_z}{m_z} & -\frac{k_z}{m_z} & \frac{d_z}{m_z} & -\frac{d_z}{m_z}
  \end{pmatrix}}_{=:A} z(t,\omega)
  + \underbrace{\begin{pmatrix} 0 \\ 0 \\ -\frac{1}{m_0} \\ \frac{1}{m_z} \end{pmatrix}}_{=:B} u(t)
  + \underbrace{\begin{pmatrix} 0 \\ 0 \\ \frac{f_{01}(t,\omega)}{m_0} \\ 0 \end{pmatrix}}_{=:b(t,\omega)},    (47)

where I_p denotes the p x p identity matrix. Furthermore, we have the optimal control problem under stochastic uncertainty:

\min\, F(u(\cdot)) := E\left[ \frac{1}{2} \left( \int_{t_b}^{t_f} R\, (u(s))^2\, ds
  + z(t_f,\omega)^T G_f\, z(t_f,\omega) \right) \right]    (48a)

s.t.

z(t,\omega) = z_b + \int_{t_b}^{t} \big( A z(s,\omega) + B u(s) + b(s,\omega) \big)\, ds,    (48b)

u(\cdot) \in C(T, \mathbb{R}),    (48c)

with a 4 x 4 weight matrix G_f. Note that this problem is of the "Minimal-Energy Control" type, as we apply no extra costs for the displacements, i.e. Q = 0. The two-point boundary value problem to be solved reads then, cf. (11a)-(11d),

\dot z(t,\omega) = A z(t,\omega) - \frac{1}{R} B B^T \bar y^{(b)}(t) + b(t,\omega),    (49a)

\dot y(t,\omega) = -A^T y(t,\omega),    (49b)

z(t_b,\omega) = z_b,    (49c)

y(t_f,\omega) = G_f z(t_f,\omega).    (49d)

Hence, the solution of (49a)-(49d), i.e. the stochastic optimal trajectories, are given by, cf. (35a),

y(t,\omega) = e^{A^T(t_f-t)} G_f z(t_f,\omega),    (50a)

z(t,\omega) = e^{A(t-t_b)} z_b + \int_{t_b}^{t} e^{A(t-s)} \left( b(s,\omega)
  - \frac{1}{R} B B^T e^{A^T(t_f-s)} G_f\, \bar z^{(b)}(t_f) \right) ds.    (50b)

Finally, we get the stochastic optimal control, see (36c) and (37):

u^*(t \mid (t_b,z_b)) = -\frac{1}{R} B^T e^{A^T(t_f-t)} \big( I_4 + G_f U \big)^{-1} G_f\, e^{A t_f}
  \left( e^{-A t_b} z_b + \int_{t_b}^{t_f} e^{-A s}\, \bar b^{(b)}(s)\, ds \right)    (51a)

with

U := \frac{1}{R} \int_{t_b}^{t_f} e^{A(t_f-s)} B B^T e^{A^T(t_f-s)}\, ds.    (51b)
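Formulas (51a) and (51b) are directly computable. The sketch below evaluates them under the simplifying, illustrative assumption of a vanishing conditional mean load, E(b(s,·)|A_{t_b}) = 0; the matrix exponential is approximated by a plain Taylor series (in practice one would use, e.g., scipy.linalg.expm), and the integral in (51b) by the trapezoidal rule. All names and numbers are assumptions for illustration, not data from the paper.

```python
import numpy as np

def mexp(X, terms=60):
    # Taylor-series matrix exponential; adequate for the small, well-scaled
    # matrices of this sketch (use scipy.linalg.expm in real computations).
    E = np.eye(X.shape[0])
    T = np.eye(X.shape[0])
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    return E

def open_loop_control(A, B, Gf, R, zb, tb, tf, t, n=200):
    """u*(t|(tb, zb)) according to (51a)/(51b), under the illustrative
    simplification that the conditional mean load vanishes."""
    dim = A.shape[0]
    s = np.linspace(tb, tf, n)
    ds = (tf - tb) / (n - 1)
    # U = (1/R) * int_tb^tf e^{A(tf-s)} B B^T e^{A^T(tf-s)} ds    (51b)
    vals = []
    for si in s:
        v = mexp(A * (tf - si)) @ B
        vals.append(np.outer(v, v))
    U = (0.5 * (vals[0] + vals[-1]) + sum(vals[1:-1])) * ds / R
    # With zero conditional mean load, the bracket in (51a) reduces to
    # e^{A tf} e^{-A tb} zb = e^{A(tf - tb)} zb.
    rhs = Gf @ (mexp(A * (tf - tb)) @ zb)
    w = np.linalg.solve(np.eye(dim) + Gf @ U, rhs)   # (I + Gf U)^{-1} Gf (...)
    return float(-(1.0 / R) * (B @ (mexp(A.T * (tf - t)) @ w)))   # (51a)
```

For G_f = 0 the terminal costs vanish and the control is identically zero; for large R the control effort is penalized heavily and u* tends to zero, which gives two cheap sanity checks.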

9. Conclusion

Active control strategies are considered for stabilizing dynamic mechanical structures under stochastic applied loadings. The problem has been modeled in the framework of optimal control under stochastic uncertainty, minimizing the expected total costs arising from the displacements of the structure and from the regulation costs. Based on the concept of open-loop feedback control, the so-called Model Predictive Control (MPC) has become very popular in recent years for solving optimal control problems in practice. Hence, due to the great advantages of open-loop feedback controls, stochastic optimal open-loop feedback controls have been constructed by taking into account the random parameter variations in the structural control problem. For finding stochastic optimal open-loop controls on the remaining time intervals t_b ≤ t ≤ t_f with t_0 ≤ t_b ≤ t_f, the stochastic Hamilton function of the control problem has been introduced. The class of H-minimal controls can then be determined by solving a finite-dimensional stochastic optimization problem, minimizing the conditional expectation of the stochastic Hamiltonian subject to the remaining deterministic control constraints at each time point t. Having an H-minimal control, the related two-point boundary value problem with random parameters can be formulated for the computation of the stochastic optimal state and costate trajectories. Due to the linear-quadratic structure of the underlying control problem, these trajectories can be determined analytically to a large extent. Inserting them into the H-minimal control, stochastic optimal open-loop controls are obtained on an arbitrary remaining time interval; these controls then immediately yield a robust, hence stochastic optimal, open-loop feedback control law.
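The open-loop feedback construction summarized above can be sketched as a receding-horizon loop: at each intermediate time t_b the open-loop problem on [t_b, t_f] is re-solved for the currently observed state z_b, and only the first control value u(t_b, z_b) = u*(t_b|(t_b, z_b)) is applied. The callables below are hypothetical interfaces introduced for illustration only.

```python
import numpy as np

def receding_horizon(z0, t0, tf, dt, solve_open_loop, plant_step):
    """Schematic open-loop feedback (model predictive) loop.
    `solve_open_loop(tb, zb, tf)` returns u*(tb|(tb, zb));
    `plant_step(zb, u, dt)` advances the true (disturbed) plant.
    Both are user-supplied, hypothetical interfaces."""
    tb = t0
    zb = np.asarray(z0, dtype=float)
    times, controls = [], []
    while tb < tf:
        u = solve_open_loop(tb, zb, tf)   # re-solve on the remaining interval
        times.append(tb)
        controls.append(u)
        zb = plant_step(zb, u, dt)        # plant moves on under disturbances
        tb += dt
    return np.array(times), np.array(controls)
```

Because the loop re-observes z_b before each solve, disturbances entering between decision times are automatically fed back, which is the source of the robustness discussed above.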


References

[1] Ackermann J. Robuste Regelung. Berlin-Heidelberg-New York [etc.]: Springer-Verlag; 1993.
[2] Allgöwer F, et al., editors. Nonlinear model predictive control. Basel: Birkhäuser Verlag; 2000.
[3] Aoki M. Optimization of stochastic systems - topics in discrete-time systems. New York-London: Academic Press; 1967.
[4] Basar T, Bernhard P. H∞-optimal control and related minimax design problems: a dynamic approach. Boston: Birkhäuser; 1991.
[5] Block C. Aktive Minderung personeninduzierter Schwingungen an weit gespannten Strukturen im Bauwesen. Fortschrittberichte VDI, Reihe 11, Schwingungstechnik, Nr. 336. Düsseldorf: VDI-Verlag GmbH; 2008.
[6] Dullerud GE, Paganini F. A course in robust control theory - a convex approach. New York [etc.]: Springer-Verlag; 2000.
[7] Kalman RE, Falb PL, Arbib MA. Topics in mathematical system theory. New York [etc.]: McGraw-Hill Book Company; 1969.
[8] Ku R, Athans M. On the adaptive control of linear systems using the open-loop-feedback-optimal approach. IEEE Trans Automat Control 1973;AC-18:489-93.
[9] Marti K. Stochastic optimization methods in robust adaptive control of robots. In: Grötschel M, et al., editors. Online optimization of large scale systems. Berlin-Heidelberg-New York: Springer-Verlag; 2001. p. 545-77.
[10] Marti K. Adaptive optimal stochastic trajectory planning and control (AOSTPC) for robots. In: Marti K, Ermoliev Y, Pflug G, editors. Dynamic stochastic optimization. Berlin-Heidelberg: Springer-Verlag; 2004. p. 155-206.
[11] Marti K. Stochastic optimization problems. 2nd ed. Berlin-Heidelberg: Springer-Verlag; 2008.
[12] Marti K. Approximate solutions of stochastic control problems by means of convex approximations. In: Topping BHV, Papadrakakis M, editors. Proceedings of the 9th international conference on computational structures technology (CST08). Stirlingshire, UK: Civil-Comp Press; 2008. paper no. 52.
[13] Marti K. Optimal control of dynamical systems and structures under stochastic uncertainty: stochastic optimal feedback control. Adv Eng Softw 2010. doi:10.1016/j.advengsoft.2010.09.008.
[14] Nagarajaiah S, Narasimhan S. Optimal control of structures. In: Arora JS, editor. Optimization of structural and mechanical systems. New Jersey [etc.]: World Scientific; 2007. p. 221-44.
[15] Richalet J, Rault A, Testud JL, Papon J. Model predictive heuristic control: applications to industrial processes. Automatica 1978;14:413-28.
[16] Schacher M. Stochastisch optimale Regelung von Robotern. PhD thesis, Faculty for Aerospace Engineering and Technology, Federal Armed Forces University Munich; submitted for publication.
[17] Soong TT. Active structural control in civil engineering. Eng Struct 1988;10:74-84.
[18] Soong TT. Active structural control: theory and practice. New York: Longman Scientific and Technical, J. Wiley; 1990.
[19] Soong TT, Costantinou MC. Passive and active structural vibration control in civil engineering. CISM courses and lectures no. 345. Wien-New York: Springer-Verlag; 1994.
[20] Spencer BF, Nagarajaiah S. State of the art of structural control. J Struct Eng ASCE 2003;129(7):845-56.
[21] Yang JN, Soong TT. Recent advances in active control of civil engineering structures. Probab Eng Mech 1988;3(4):179-88.