Reduced order controllers for distributed parameter systems: LQG balanced truncation and an adaptive approach




Mathematical and Computer Modelling 43 (2006) 1136–1149 www.elsevier.com/locate/mcm

Belinda Batten King (a), Naira Hovakimyan (b), Katie A. Evans (a), Michael Buhl (c)

a Department of Mechanical Engineering, Oregon State University, Corvallis, OR 97331-6001, United States
b Department of Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, United States
c Department of Aerospace Engineering, Technical University of Munich, Munich, Germany

Received 10 March 2005; accepted 4 May 2005

Abstract

In this paper, two methods are reviewed and compared for designing reduced order controllers for distributed parameter systems. The first involves a reduction method known as LQG balanced truncation followed by MinMax control design and relies on the theory and properties of the distributed parameter system. The second is a neural network based adaptive output feedback synthesis approach, designed for the large scale discretized system and dependent upon the relative degree of the regulated outputs. Both methods are applied to a problem concerning control of vibrations in a nonlinear structure with a bounded disturbance.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Reduced order control; Balanced truncation; LQG balancing; Neural networks; Output feedback; Distributed parameter systems

1. Introduction

Control design for distributed parameter systems (DPSs), such as those modeled by partial differential equations (PDEs), is an area of research that has received much attention in the last fifteen years (see, for example, [1–8]). Many important military applications are modeled by DPSs, including fluid dynamics pertaining to aircraft and combustion, electromagnetics for surveillance and interrogation, and structural vibrations in airframes, satellites and refueling structures, to mention but a few. In this paper, we present two approaches to low-order control design for such applications. One utilizes the distributed parameter structure of the system, while the other focuses on the nonlinearity in the system. We consider this paper to be a motivation for blending the two approaches in future work.

Although a significant theory has been developed, in particular, for control of linear PDE systems via methods such as linear quadratic Gaussian (LQG) or MinMax design, this theory has not been widely implemented. A primary reason is that derivation of DPS controllers requires approximation of the system, and the resulting approximate controllers are typically large scale. Techniques for such approximations can be found, for example, in [6,9–11]. Since these controllers are large scale, their real-time implementation is not possible.



To address this difficulty, model and control reduction techniques have been a topic of active research over the last decade. There are many approaches to model reduction, and to adequately discuss the ones that have been widely used in the last ten years would require a dedicated paper. In this work, we focus on comparing two approaches. A method that is often used in industrial applications is known as balanced truncation and can be found in many textbooks on modern control, e.g., [12]. This approach replaces the large scale model of the system by a reduced order one that captures characteristics that are important for control design, specifically controllability and observability. This reduced order model is then used to design a low order controller, by any number of methods, that is then applied to the physical problem. A related method that we use here is LQG balanced truncation [13–16]. It involves computing an LQG controller for the large scale model, and replacing the large scale model with a reduced order one that captures the characteristics of the large scale controller. Then, the reduced order model is used to design a low order controller, again by any number of standard methods. The difference between these two truncation methods is the way in which the reduced order model is obtained. The same controller design methods can be applied to reduced order models from either balancing technique to obtain a low order controller. In this paper, we apply the MinMax control methodology to the reduced order system that we obtain through LQG balanced truncation. A thorough discussion of these two balancing methods followed by various control designs can be found in [17].

One of the challenges in reduction methods is ensuring that the resulting low order controller will control the DPS. At the first stage of approximating the PDE to compute the large-scale controller as described above, theory exists for LQG and MinMax design to guarantee that the finite dimensional approximating controller converges to the PDE controller, e.g., [9–11]. However, even at this stage, nonlinearities and disturbances in the PDE system can present difficulties. For the low order control approach described above, nonlinearities can be addressed by including an extended compensator; details are discussed in Section 3. Disturbances can be mitigated by using control designs that are typically robust, such as MinMax, H∞ and central controllers. This is one motivation for using the MinMax controller for our first low order design. Another way to address these challenges is presented in this paper, which involves coupling an adaptive controller with controllers designed for PDE systems. Results reported on adaptive control of DPSs have been primarily for systems with linear dynamics [18–21] and only more recently for those with nonlinear dynamics [22]. In [18], the authors analyze adaptive control of infinite dimensional parabolic systems and include a stability proof ensuring parameter convergence. In [19], a discontinuous model reference adaptive control synthesis is developed for a certain hyperbolic distributed parameter system. The synthesis is carried out in an infinite dimensional setting, the numerical approximation of which can be considered at the implementation stage. Here, we first discuss adaptive augmentation of a large-scale finite dimensional LQG controller, which is known to converge to the distributed parameter LQG controller in the absence of nonlinearities.
We show that the error dynamics can be written in a form for which the proof of ultimate boundedness presented in [23,24] is valid. Next, using the approach in [24], we develop a reduced order controller design for the high order dynamics by writing the original dynamics in normal form and acknowledging the fact that the zeros are stable for the model problem considered in this paper. We characterize the conditions and assumptions on the reduced order model and the discarded dynamics under which the adaptive output feedback approach developed in [23,24] can be applied to augment a low order LQG controller designed for the reduced order dynamics. Taking advantage of the fact that the normal form for the problem of interest in this paper involves only measured states, one can consider an LQR controller for the reduced system. We point out, however, that the relationship to the infinite dimensional system, i.e., to the true PDE, is not addressed in this paper.

The remainder of this paper is organized as follows. In Section 2, we give a motivating example that leads to the problem formulation. In Section 3, we introduce the general PDE control framework and define the related infinite dimensional system dynamics along with its finite dimensional approximation. In Section 4, we discuss the LQG balanced truncation method for reduced order model formulation, followed by MinMax controller design for the low order controllers. In Section 5, we formulate the adaptive output feedback control approach specialized to our problem of interest. Section 6 contains simulation results for both low-order controllers as applied to the PDE system. Section 7 summarizes the paper.

2. A motivating example

We consider the nonlinear structural dynamics problem discussed in [25] that models control of the vibrations of a cable–mass system subject to disturbances.


Fig. 1. Cable–mass structure.

Specifically, this problem involves the vibrations of an elastic cable which is fixed at one end and attached to a mass at the other, as shown in Fig. 1. The mass is suspended by a spring which has nonlinear stiffening terms and is forced by a sinusoidal disturbance. The equations describing this system are

\[
\rho \frac{\partial^2}{\partial t^2} w(t,s) = \frac{\partial}{\partial s}\left[ \tau \frac{\partial}{\partial s} w(t,s) + \gamma \frac{\partial^2}{\partial t\,\partial s} w(t,s) \right], \qquad 0 < s < \ell,\; t > 0, \tag{1}
\]
\[
m \frac{\partial^2}{\partial t^2} w(t,\ell) = -\left[ \tau \frac{\partial}{\partial s} w(t,\ell) + \gamma \frac{\partial^2}{\partial t\,\partial s} w(t,\ell) \right] - \alpha_1 w(t,\ell) - \alpha_3 [w(t,\ell)]^3 + \varphi(t) + u(t), \tag{2}
\]

with boundary condition w(t, 0) = 0. Initial conditions are chosen of the form

\[
w(0,s) = w_0(s), \qquad \frac{\partial}{\partial t} w(0,s) = w_1(s). \tag{3}
\]

Here, w(t, s) represents the displacement of the cable at time t and position s, and w(t, ℓ) represents the position of the mass at time t; ρ is the density of the cable, m is the mass of the attached body, τ is Young's modulus for the cable, and γ is a damping coefficient. The constants α1 and α3 are the linear and nonlinear spring stiffnesses associated with the mass. The term ϕ(t) is viewed as a disturbance and u(t) is a control input. We assume that the only available measurements of the system are the mass position and velocity. This leads to the equation for the sensed measurement

\[
\eta(t) = \begin{bmatrix} \eta_1(t) \\ \eta_2(t) \end{bmatrix} = \begin{bmatrix} w(t,\ell) \\ \dfrac{\partial}{\partial t} w(t,\ell) \end{bmatrix}. \tag{4}
\]

3. General PDE control design framework

Due to the special form of the physics of this problem, specifically that the control and disturbance inputs and the nonlinearity enter the system via forces on the mass, the infinite dimensional system dynamics can be presented in the following abstract form:

\[
\dot w(t) = A w(t) + B\big(u(t) + F(w(t)) + \varphi(t)\big), \qquad w(0) = w_0, \tag{5}
\]

where w(t) = (w(t, ·), w(t, ℓ), ∂w/∂t(t, ·), ∂w/∂t(t, ℓ)) lies in the state space W = L²(0, ℓ) × ℝ × L²(0, ℓ) × ℝ, and u(t) and ϕ(t) are in ℝ; F(w(t)) represents the nonlinearity in the system. The state measurement is given by

\[
\eta(t) = C w(t), \tag{6}
\]

where C : W → ℝ². Notice that, given (6), the vector relative degree of the measurement is r = (2, 1) (for the definition of vector relative degree refer to [26]).




Under standard assumptions, using the LQG theory for DPS (see, for example, [11]), a feedback controller can be designed for the linear system obtained by linearizing (5) around an equilibrium. The resulting feedback control law is given by

\[
u(t) = K w_c(t), \tag{7}
\]

where w_c(t) is an estimate of the state obtained from the dynamical system

\[
\dot w_c(t) = A_c w_c(t) + L \eta(t), \qquad w_c(0) = w_{c_0}. \tag{8}
\]

The feedback gain operator K achieves the desired set-point regulation. For this system, the equilibrium is the origin of the state space W, and this equilibrium is globally exponentially stable (see [25]). The operators K, A_c and L are obtained through the solution of two algebraic Riccati equations:

\[
A^* \Pi + \Pi A - \Pi B B^* \Pi + C^* C = 0, \tag{9}
\]
\[
A P + P A^* - P C^* C P + B B^* = 0. \tag{10}
\]

Given the solutions Π and P, one defines

\[
K = -B^* \Pi, \qquad L = P C^*, \qquad A_c = A - B K - L C. \tag{11}
\]

These operators cannot, in general, be derived analytically, as the Riccati equations are functional differential equations for PDE systems. Therefore, to compute the control law and compensator equation in (7) and (8), a finite dimensional approximation of the linearization of (5) and (6) is derived using a convergent approximation scheme. Using, for example, a finite-element method as in [25], one can obtain an observable and controllable finite-dimensional approximation of (5) and (6):

\[
\dot w^N(t) = A^N w^N(t) + B^N\big(u(t) + F^N(w^N(t)) + \varphi(t)\big), \qquad w^N(0) = w_0^N, \tag{12}
\]
\[
\eta(t) = C^N w^N(t), \tag{13}
\]

where N is the number of basis elements in the approximation scheme. We note that the vector relative degree is preserved in this approximation process. In [25], the disturbance ϕ(t) was taken to be modeled by a cosine function. Although we will again use such a disturbance in the numerical results, we make the following more general assumption regarding ϕ(t).

Assumption 3.1. Bounded disturbances ϕ(t) belong to a class of continuous time functions describable by differential equations:

\[
\dot w_\varphi^N(t) = f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big), \qquad \varphi(t) = g_\varphi^N\big(w_\varphi^N(t), w^N(t)\big). \tag{14}
\]

The dynamics in (14) are input-to-state stable with w^N(t) viewed as the input [27], and in addition the overall system

\[
\begin{aligned}
\dot w^N(t) &= A^N w^N(t) + B^N\big(u(t) + F^N(w^N(t)) + g_\varphi^N(w_\varphi^N(t), w^N(t))\big) \\
\dot w_\varphi^N(t) &= f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big) \\
\eta(t) &= C^N w^N(t)
\end{aligned} \tag{15}
\]

is observable from its output η(t).

Remark 3.1. Assumption 3.1 imposes only a mild restriction on the class of disturbances for which the adaptive control approach developed in Section 5 is applicable. It also ensures that the framework of nonlinear control and its design tools can be applied for regulation of the dynamics in (12); see, for example, [28].


The approximations introduced in (12) and (13) are used to compute K^N, L^N, A_c^N by solving the algebraic Riccati equations in (9) and (10). By careful choice of an approximation scheme, the matrices K^N, L^N, A_c^N can be shown to converge to the PDE controller given by K, L, A_c in (7) and (8). Thus, the finite dimensional LQG controller for stabilization of the nominal linear system dynamics is given by

\[
u_{\mathrm{LQG}}(t) = -K^N w_c^N(t), \tag{16}
\]
\[
\dot w_c^N(t) = A_c^N w_c^N(t) + L^N \eta(t), \qquad w_c^N(0) = w_{c_0}^N. \tag{17}
\]
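For a finite dimensional approximation whose matrices A^N, B^N, C^N are available as NumPy arrays, the gains above can be computed directly from the two algebraic Riccati equations (9) and (10). The following is a minimal sketch, not code from the paper; it uses the common sign convention K = B*Π with u = −K w_c, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C):
    """LQG gains from the algebraic Riccati equations (9)-(10).

    Convention used here: u(t) = -K @ w_c(t) and Ac = A - B K - L C,
    which matches (11) up to the sign convention on K.
    """
    # Control Riccati equation (9): A' Pi + Pi A - Pi B B' Pi + C' C = 0
    Pi = solve_continuous_are(A, B, C.T @ C, np.eye(B.shape[1]))
    # Filter Riccati equation (10): A P + P A' - P C' C P + B B' = 0
    P = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(C.shape[0]))
    K = B.T @ Pi      # state-feedback gain
    L = P @ C.T       # observer (filter) gain
    Ac = A - B @ K - L @ C
    return K, L, Ac
```

For the cable–mass example, A, B and C would come from the finite element approximation of size N described in [25].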

We assume that N is chosen large enough so that the matrices in (12)–(17) have converged to the respective operators in (5)–(8). The finite dimensional approximation to the closed-loop LQG system without nonlinearity and disturbance (which henceforth will be referred to as the full-order LQG system) is given by:

\[
\begin{bmatrix} \dot w^N(t) \\ \dot w_c^N(t) \end{bmatrix}
= \begin{bmatrix} A^N & -B^N K^N \\ L^N C^N & A_c^N \end{bmatrix}
\begin{bmatrix} w^N(t) \\ w_c^N(t) \end{bmatrix}, \tag{18}
\]
\[
\begin{bmatrix} w^N(0) \\ w_c^N(0) \end{bmatrix} = \begin{bmatrix} w_0^N \\ w_{c_0}^N \end{bmatrix}. \tag{19}
\]

At this stage, the full-order LQG control is impractical for implementation: for many applications of interest, the state estimate is quite large, making the controller intractable for real-time computation. There are many approaches to obtaining low order controllers for both infinite and finite dimensional systems. Some involve reducing the state space model before computing the controllers, while others involve computing a control for the full-order system (one that is known to converge to the controller for the PDE system) and then reducing the controller in some way. In the next section, we discuss one reduction technique that has been explored for low order control design.

4. A reduced order controller via LQG balanced truncation and MinMax design

4.1. LQG balanced truncation

Balanced truncation is a common procedure that can be found in standard references on control, e.g., [5] for PDE systems or [29] for systems of ordinary differential equations. It is based on the premise that a low order approximation to the system in (5) and (6) can be obtained by eliminating any states that are difficult to control and to observe. A special realization of the system, the balanced realization, is used to determine these states. Although this method often works well, some of the physics in the model is lost in the reduction phase before computation of the controller is done. To address that difficulty, we consider a related approach, LQG balanced truncation, which is based on the Riccati operators Π and P [13,15,16,30]. This process allows for the computation of a controller before any physical information is discarded. The LQG balancing procedure can be considered as follows. If S is a bounded, nonsingular operator, then the system

\[
\dot x(t) = S^{-1} A S\, x(t) + S^{-1} B u(t), \qquad y(t) = C S\, x(t) \tag{20}
\]

is another realization of the system given by (5) and (6). In this particular realization, the operators (A, B, C) are mapped to (S^{-1}AS, S^{-1}B, CS), and the transformed Riccati operators are Π̂ = SΠS* and P̂ = (S^{-1})*PS^{-1}. Additionally,

\[
\hat\Pi = \hat P = \operatorname{diag}(\mu_1, \mu_2, \ldots, \mu_n, \ldots), \qquad \mu_1 \ge \mu_2 \ge \cdots \ge \mu_n \ge \cdots \ge 0,
\]

where the µ_i are the LQG characteristic values and are realization invariant [31,32]. If we write

\[
\hat A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad
\hat B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \qquad
\hat C = \begin{bmatrix} C_1 & C_2 \end{bmatrix},
\]


then Â, B̂, Ĉ are partitioned so that the states corresponding to the q most "significant" LQG characteristic values are given by the truncated system defined by A_{11}, B_1, C_1. Specifically, the reduced order system

\[
\dot w^q(t) = A_{11} w^q(t) + B_1 u(t), \qquad w^q(0) = w_0^q, \tag{21}
\]
\[
\eta^q(t) = C_1 w^q(t) \tag{22}
\]

will be used as the foundation for low order controller design. Notice that the number of states in the reduced order system is q, corresponding to the number of most significant LQG characteristic values.

4.2. MinMax control design

The MinMax control design is considered to be more robust to disturbances than the LQG design and is discussed in detail in [33]. In terms of implementation, the difference between the computation of LQG and MinMax controllers is the Riccati equations that are solved and how the control design operators are then defined. For MinMax, one must solve

\[
A^* \Pi + \Pi A - \Pi\big[(1 - \theta^2) B B^*\big]\Pi + C^* C = 0, \tag{23}
\]
\[
A P + P A^* - P\big[(1 - \theta^2) C^* C\big]P + B B^* = 0. \tag{24}
\]







For sufficiently small θ, we are guaranteed minimal solutions Π and P such that [I − θ²PΠ] is positive definite, and which stabilize the system given by (5) and (6). We then define the control operators to be

\[
K = B^* \Pi, \qquad F = [I - \theta^2 P \Pi]^{-1} P C^*, \qquad A_c = A - B K - F C + \theta^2 B B^* \Pi. \tag{25}
\]

Note that θ = 0 results in the LQG controller. To account for nonlinearities in the system, e.g., as in our motivating example, we can use an extended compensator. In this case, rather than use a compensator of the form (8), we include a nonlinear term to account for dynamics that were neglected when linearizing the state dynamics. Specifically, the compensator equation takes the form

\[
\dot w_{c_{\mathrm{ext}}}(t) = A_c w_{c_{\mathrm{ext}}}(t) + L \eta(t) + \hat F\big(w_{c_{\mathrm{ext}}}(t)\big). \tag{26}
\]
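Numerically, the MinMax Riccati equations (23)–(24) can be handled by a standard Riccati solver if the factor (1 − θ²) is folded into the weight on the quadratic term. The sketch below assumes 0 ≤ θ < 1 and dense matrices; it is an illustration, not the paper's implementation, and the function and variable names are ours.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def minmax_gains(A, B, C, theta):
    """MinMax gains from the theta-parameterized Riccati equations (23)-(25);
    theta = 0 recovers the LQG design."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    scale = 1.0 - theta**2
    # (23): A' Pi + Pi A - Pi [(1-theta^2) B B'] Pi + C' C = 0
    Pi = solve_continuous_are(A, B, C.T @ C, np.eye(m) / scale)
    # (24): A P + P A' - P [(1-theta^2) C' C] P + B B' = 0
    P = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(p) / scale)
    K = B.T @ Pi
    F = np.linalg.solve(np.eye(n) - theta**2 * (P @ Pi), P @ C.T)   # (25)
    Ac = A - B @ K - F @ C + theta**2 * (B @ B.T @ Pi)
    return K, F, Ac
```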

To compute a full-order MinMax control law and extended compensator, one would use the finite dimensional approximating dynamics and measurement in (12) and (13) and compute solutions to the MinMax Riccati equations (23) and (24). This would yield the approximations to the MinMax control operators in (25), which could then form a control law and observer. To compute a reduced order MinMax controller, one could instead use the reduced order dynamics generated by LQG balanced truncation in (21) and (22) to compute solutions to the MinMax Riccati equations, thus resulting in a reduced order controller. The following is an algorithm for the reduced order control design described in this section; a computational sketch of the balancing and truncation step is given after the equations below.

• Compute Π^N and P^N from A^N, B^N, and C^N via solution of the algebraic Riccati equations (9) and (10).
• Π^N is symmetric positive definite, so the Cholesky decomposition yields Π^N = R*R.
• R*P^N R can be diagonalized by a unitary matrix U such that R*P^N R = U Λ² U*, where Λ is a diagonal matrix containing the LQG characteristic values.
• Define S = Λ^{1/2} U* R. It follows that

\[
\hat P^N = (S^{-1})^* P^N S^{-1} = \operatorname{diag}(\mu_1, \mu_2, \ldots, \mu_N) = S \Pi^N S^* = \hat\Pi^N.
\]

• Based on the magnitude of the LQG characteristic values µ_i, determine a truncated system A_{11}, B_1, C_1 of size q, as in (21) and (22).
• Use A_{11}, B_1, C_1 to solve the MinMax Riccati equations in (23) and (24), yielding a reduced order controller

\[
u^q(t) = -K^q w_{c_{\mathrm{ext}}}^q(t), \tag{27}
\]
\[
\dot w_{c_{\mathrm{ext}}}^q(t) = A_c^q w_{c_{\mathrm{ext}}}^q(t) + L^q \eta(t) + \hat F^q\big(w_{c_{\mathrm{ext}}}^q(t)\big). \tag{28}
\]
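The balancing and truncation step above can be carried out numerically as sketched below. This is one common way to realize the transformation, written in the state-coordinate convention x̂ = Tx (so the operator S in the text corresponds to T^{-1} here); the exact factorization used in the paper may differ, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, eigh

def lqg_balanced_truncation(A, B, C, Pi, P, q):
    """Transform (A, B, C) so that the Riccati solutions Pi and P become
    equal and diagonal, then keep the q states with the largest LQG
    characteristic values mu_i."""
    L = cholesky(P, lower=True)                 # P = L L^T
    lam, U = eigh(L.T @ Pi @ L)                 # eigenvalues of Pi P (ascending)
    idx = np.argsort(lam)[::-1]                 # sort descending
    lam, U = lam[idx], U[:, idx]
    mu = np.sqrt(np.maximum(lam, 0.0))          # LQG characteristic values
    T = np.diag(np.sqrt(mu)) @ U.T @ np.linalg.inv(L)   # balancing transform
    Tinv = np.linalg.inv(T)
    Ab, Bb, Cb = T @ A @ Tinv, T @ B, C @ Tinv  # balanced realization
    return Ab[:q, :q], Bb[:q, :], Cb[:, :q], mu
```

Calling this with the full-order solutions Π^N, P^N returns (A_{11}, B_1, C_1), which can then be passed to a MinMax design such as the minmax_gains sketch above.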


For simulation purposes, the reduced order controller can be applied to the full order, nonlinear dynamics. This results in a closed loop system of the form

\[
\begin{bmatrix} \dot w^N(t) \\ \dot w_{c_{\mathrm{ext}}}^q(t) \end{bmatrix}
= \begin{bmatrix} A^N & -B^N K^q \\ L^q C^N & A_c^q \end{bmatrix}
\begin{bmatrix} w^N(t) \\ w_{c_{\mathrm{ext}}}^q(t) \end{bmatrix}
+ \begin{bmatrix} F^N(w^N(t)) \\ \hat F^q(w_{c_{\mathrm{ext}}}^q(t)) \end{bmatrix}. \tag{29}
\]

5. A reduced order control design via adaptive output feedback

5.1. Closed-loop system dynamics

Although the previous approach to reduced order controller design addresses nonlinearities by using an extended compensator, that approach is rather limited. Specifically, the controller and nonlinearity are determined by linearizing the system dynamics around a particular solution, and thus may not be particularly robust over a variety of operating conditions. In this second approach to reduced order control design, which utilizes adaptive output feedback, we consider, for regulation of the full order nonlinear dynamics in (12), the controller

\[
u(t) = u_{\mathrm{LQG}}(t) - u_{\mathrm{ad}}(t) = -K^N w_c^N(t) - u_{\mathrm{ad}}(t), \tag{30}
\]

where u_LQG(t) is the full-order LQG controller in (16) and (17) and u_ad(t) ∈ ℝ is an adaptive signal that will be designed to approximately cancel the nonlinearities and disturbances. With this addition, the full order closed-loop system will take the form:

\[
\begin{bmatrix} \dot w^N(t) \\ \dot w_c^N(t) \end{bmatrix}
= \begin{bmatrix} A^N & -B^N K^N \\ L^N C^N & A_c^N \end{bmatrix}
\begin{bmatrix} w^N(t) \\ w_c^N(t) \end{bmatrix}
- \begin{bmatrix} B^N \\ 0 \end{bmatrix}
\big(u_{\mathrm{ad}}(t) - F^N(w^N(t)) - g_\varphi^N(w_\varphi^N(t), w^N(t))\big), \tag{31}
\]
\[
\dot w_\varphi^N(t) = f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big), \qquad \eta(t) = C^N w^N(t).
\]

In the subsequent subsections we introduce the adaptive output feedback control approach, developed in [23], and specialize it for the dynamics in (31). For the purposes of our further analysis, we introduce the following notation:

\[
\bar w^N(t) = \begin{bmatrix} w^N(t) \\ w_c^N(t) \end{bmatrix}, \qquad
\bar A^N = \begin{bmatrix} A^N & -B^N K^N \\ L^N C^N & A_c^N \end{bmatrix}, \qquad
\bar B^N = \begin{bmatrix} B^N \\ 0 \end{bmatrix}, \qquad
\bar C^N = \begin{bmatrix} C^N & 0 \\ 0 & I \end{bmatrix}, \tag{32}
\]

and write the dynamics in (31) in the following compact form:

\[
\dot{\bar w}^N(t) = \bar A^N \bar w^N(t) - \bar B^N\big(u_{\mathrm{ad}}(t) - F^N(w^N(t)) - g_\varphi^N(w_\varphi^N(t), w^N(t))\big), \qquad
\dot w_\varphi^N(t) = f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big), \qquad
\bar\eta(t) = \bar C^N \bar w^N(t), \tag{33}
\]

where the definition of C̄^N implies that the estimates of the states (the controller states) are available for feedback and are added to the available measurements. Notice that the LQG design ensures that Ā^N is Hurwitz. This implies that for an arbitrary positive definite matrix Q^N, there exists a unique symmetric positive definite matrix P^N = (P^N)^T that solves the Lyapunov equation

\[
(\bar A^N)^T P^N + P^N \bar A^N = -Q^N. \tag{34}
\]
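As a concrete illustration of the notation in (32) and of the Lyapunov equation (34), the augmented matrices and P^N can be assembled as below. This is only a sketch under our own naming; the choice Q = I is an arbitrary positive definite example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def augmented_closed_loop(A, B, C, K, L, Ac):
    """Build the block matrices of (32) and solve the Lyapunov equation (34)."""
    n, m, p, nc = A.shape[0], B.shape[1], C.shape[0], Ac.shape[0]
    A_bar = np.block([[A, -B @ K], [L @ C, Ac]])
    B_bar = np.vstack([B, np.zeros((nc, m))])
    C_bar = np.block([[C, np.zeros((p, nc))],
                      [np.zeros((nc, n)), np.eye(nc)]])
    Q = np.eye(n + nc)                                   # positive definite
    P_bar = solve_continuous_lyapunov(A_bar.T, -Q)       # (A_bar)' P + P A_bar = -Q
    return A_bar, B_bar, C_bar, P_bar
```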


5.2. Neural network approximation of nonlinearity

Following the development in [34], given arbitrary ε* > 0 and a continuous function f(·), f : D ⊂ ℝⁿ → ℝᵐ, where D is compact, there exists a set of radial basis functions Φ(·) and bounded constant weights W such that the following representation holds for all x ∈ D:

\[
f(x) = W^T \Phi(x) + \varepsilon(x), \qquad \|\varepsilon(x)\| < \varepsilon^*. \tag{35}
\]

Here, the structure W^T Φ(x) is called a radial basis function (RBF) neural network (NN), and ε(x) is the function reconstruction error. In [35,36], it was shown that for an observable system, such an approximation can be achieved using a finite sample of the output history. We recall the main theorem from [36] in the form of the following existence theorem.

Theorem 5.1. Given ε* > 0, there exist bounded weights W and a positive time delay d > 0, such that the nonlinearity F^N(w^N(t)) + g_φ^N(w_φ^N(t), w^N(t)) in (33) can be approximated over a compact set D of the argument (w_φ^N, w^N) by an RBF NN,

\[
F^N(w^N(t)) + g_\varphi^N(w_\varphi^N(t), w^N(t)) = W^T \Phi(\mu(t)) + \varepsilon(\mu(t)), \qquad \|\varepsilon\| < \varepsilon^*, \tag{36}
\]

using the input vector

\[
\mu(t) = \begin{bmatrix} 1 & \bar\eta_d^T(t) & \bar u_d^T(t) \end{bmatrix}^T, \qquad \|\mu(t)\| \le \mu^*, \tag{37}
\]

where η̄_d(t), ū_d(t) are vectors of difference quotients of the measurement η and the control variable u in (12) and (13), respectively, and µ* is a known uniform bound on D.

5.3. Adaptive control and adaptation laws

The adaptive element is designed as

\[
u_{\mathrm{ad}}(t) = \hat W^T(t)\, \Phi(\mu(t)), \tag{38}
\]

where Ŵ(t) are estimates of the neural network weights (see (36)) that will be adapted online. Using the representation in (36) and the definition of the adaptive control signal (38), the closed loop dynamics in (33) take the form

\[
\dot{\bar w}^N(t) = \bar A^N \bar w^N(t) - \bar B^N\big(\hat W^T(t)\Phi(\mu(t)) - W^T \Phi(\mu(t)) - \varepsilon(\mu(t))\big), \qquad
\bar\eta(t) = \bar C^N \bar w^N(t). \tag{39}
\]

The NN weight adaptation laws are similar to the ones proposed in [37]:

\[
\dot{\hat W}(t) = -F\big[\Phi(\mu(t))\,(\hat w^N(t))^T P^N \bar B^N + k \hat W(t)\big], \tag{40}
\]

where ŵ^N(t) is the output of the following linear observer for the dynamics in (39):

\[
\dot{\hat w}^N(t) = \bar A^N \hat w^N(t) + M^N\big(\bar\eta(t) - \hat\eta(t)\big), \qquad
\hat\eta(t) = \bar C^N \hat w^N(t),
\]

where M^N is chosen to make Ã^N ≜ Ā^N − M^N C̄^N Hurwitz, F is a positive definite adaptation gain matrix, and k > 0 is a constant. This choice of adaptive law is based on Lyapunov's direct method and ensures bounded set point regulation for the dynamics in (33), as discussed in [23,24,37–39]. For reasons of space the proof is not repeated here.
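A minimal discrete-time sketch of the adaptive element (38) and the weight update (40) is given below, using a forward-Euler step and Gaussian features of the form (55). The adaptation gain is taken as F = Γ·I with Γ = 6 (the value used in Section 6), while the leakage gain k_w, the step size dt, the centers and the width are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

def rbf_features(mu, centers, sigma):
    """Phi(mu): Gaussian activations as in (55), with a leading bias term."""
    act = np.exp(-np.sum((mu - centers) ** 2, axis=1) / sigma)
    return np.concatenate(([1.0], act))

def adaptive_update(W_hat, mu, w_hat, P_bar, B_bar, centers, sigma,
                    Gamma=6.0, k_w=0.1, dt=1e-3):
    """One Euler step of the adaptation law (40) and the signal u_ad of (38)."""
    Phi = rbf_features(mu, centers, sigma)
    u_ad = float(W_hat @ Phi)                 # scalar control channel
    s = float(w_hat @ P_bar @ B_bar)          # (w_hat)' P B_bar
    W_dot = -Gamma * (Phi * s + k_w * W_hat)  # weight adaptation law (40)
    return W_hat + dt * W_dot, u_ad
```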


5.4. Reduced order adaptive output feedback control design

The adaptive output feedback control approach described above is presented in the context of the full-scale (high order) controller, and is therefore not practical for implementation purposes. As an alternative, notice that η_1(t) has relative degree 2 and, following the approach presented in [24], consider the dynamics in (12) and (13) in normal form [26]:

\[
\begin{bmatrix} \dot\eta_1(t) \\ \dot\eta_2(t) \end{bmatrix}
= \underbrace{\begin{bmatrix} 0 & 1 \\ a_1 & a_2 \end{bmatrix}}_{A_n}
\underbrace{\begin{bmatrix} \eta_1(t) \\ \eta_2(t) \end{bmatrix}}_{\eta_n}
+ \underbrace{\begin{bmatrix} 0 \\ b \end{bmatrix}}_{B_n}
\big(u(t) + \Delta_1(u(t), w^N(t), w_\varphi^N(t))\big), \tag{41}
\]
\[
\dot z(t) = C_z z(t) + C_\eta \eta_n(t), \tag{42}
\]
\[
\dot w_\varphi^N(t) = f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big), \tag{43}
\]
\[
\eta(t) = \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}_{C_n} \eta_n(t), \tag{44}
\]

where the constant matrices C_z, C_η are such that the dynamics ż = C_z z, representing the zero-dynamics of the system in (12) and (13), are exponentially stable. Note that the exponential stability of the zero dynamics follows easily from the global exponential stability proven in [25]. Also notice that the zero-dynamics are linear due to the structure of the problem. Therefore, any stabilizing controller for the dynamics in (41) will ensure boundedness of the z(t) states of the system [27]. Thus, the original control problem formulation for the dynamics in (12) and (13) is reduced to defining a controller for the second order system in (41). To this end, consider the approach outlined in [24] and rewrite the dynamics in (41)–(44), along with the dynamics for the disturbances in (14) subject to Assumption 3.1, in the following compact form:

\[
\begin{aligned}
\dot\eta_n(t) &= A_n \eta_n(t) + B_n\big(u(t) + F^N(w^N(t)) + g_\varphi^N(w_\varphi^N(t), w^N(t))\big) \\
\dot z(t) &= C_z z(t) + C_\eta \eta_n(t) \\
\dot w_\varphi^N(t) &= f_\varphi^N\big(w_\varphi^N(t), w^N(t)\big) \\
\eta(t) &= C_n \eta_n(t).
\end{aligned} \tag{45}
\]

Since the only part of the state of the DPS that is used for control design in (45) is the part that is measured, a compensator is not needed to provide a state estimate for the controller (as described in Section 3, (8)). Therefore a second order linear quadratic regulator (LQR) controller that involves full-state feedback can be designed for the dynamics in (45) to stabilize the nominal linear dynamics

\[
\dot\eta_n(t) = A_n \eta_n(t) + B_n u(t), \qquad \eta(t) = C_n \eta_n(t), \tag{46}
\]

in the absence of nonlinearities and disturbances. Let this LQR controller be defined by the following system

\[
u_n(t) = -K_n \eta_n(t), \tag{47}
\]

where K_n is obtained from the solution of the algebraic Riccati equation (9) with A = A_n, B = B_n, C = C_n. Following the approach presented previously for the full order controller, we augment this second order LQR controller with an adaptive element for stabilization of the dynamics in (45). That is,

\[
u(t) = u_n(t) - u_{\mathrm{ad}}(t), \tag{48}
\]

where u_ad is designed as in (38), u_ad(t) = Ŵ^T(t)Φ(µ(t)), and the adaptive laws take the form

\[
\dot{\hat W}(t) = -F\big[\Phi(\mu(t))\,\eta_n^T(t) P_n B_n + k \hat W(t)\big]. \tag{49}
\]


Table 1. System parameters

ρ = 1.0, τ = 1.0, γ = 0.005, m = 1.5, ℓ = 2, α1 = 0.01, α3 = 3

Fig. 2. Cable position, w(t, s).

Here P_n solves the Lyapunov equation (34) for the Hurwitz Ā_n that defines the closed loop nominal linear dynamics formed by using the LQR control (in the absence of nonlinearities and disturbances):

\[
\dot\eta_n(t) = \underbrace{(A_n + B_n K_n)}_{\bar A_n}\, \eta_n(t). \tag{50}
\]

The stability proof, developed in [23,24,37–39], ensures bounded set point regulation for the η_n states in (45), guaranteeing boundedness of the internal z(t) states.

6. Simulations

The numerical results throughout this section are based on a finite element approximation of the cable–mass system, resulting in the finite dimensional approximation of the system given in (12) and (13). Specific details of the approximation scheme can be found in [25].

6.1. System with LQG balanced, MinMax controller

The full order dynamics of the large scale approximating system in (12) and (13) are reduced according to the algorithm in Section 4. Specifically, LQG balanced truncation is applied to the full-order system of size N = 160, and two states are retained so that the reduced order system is of dimension q = 2. The MinMax controller with extended compensator is then computed with θ = 0.45. The periodic disturbance is chosen to be ϕ(t) = cos(2.23318t). The values used for the system parameters in (1) are given in Table 1.

The results shown depict the full order system with the reduced order MinMax controller, as in (29). In Fig. 2, we show the behavior of the cable–mass system with the reduced order control applied, starting from the initial position and velocity of the state, w(0, s) = s, w_t(0, s) = −2, and initial position and velocity of the state estimate, w_c(0, s) = −0.5s, w_{c,t}(0, s) = 0. Attenuation of the initial condition in the presence of a persistent disturbance can be observed over time. In Fig. 3, the positions of the mass and of the mid-cable over time are depicted. In Fig. 4, the effort to control the structure, as given by the magnitude of u(t), is shown.
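For illustration, the closed loop of the full-order nonlinear plant (12) with the reduced order MinMax controller (27)–(28) and the disturbance ϕ(t) = cos(2.23318t) could be integrated as sketched below. This is a hypothetical harness under our own naming: the matrices and the nonlinear maps FN and Fq_hat are assumed to be supplied by the reduction and design steps above, and the control input is treated as scalar.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_reduced_minmax(AN, BN, CN, Kq, Lq, Acq, FN, Fq_hat,
                            w0, wq0, t_final=50.0):
    """Integrate the full-order nonlinear plant (12) driven by the reduced-order
    MinMax extended compensator, with the disturbance used in Section 6.1."""
    N = AN.shape[0]
    b = BN.ravel()                          # single scalar input channel

    def rhs(t, x):
        w, wc = x[:N], x[N:]
        phi = np.cos(2.23318 * t)           # periodic disturbance
        u = -float(Kq @ wc)                 # reduced-order control law (27)
        dw = AN @ w + b * (u + FN(w) + phi)
        dwc = Acq @ wc + Lq @ (CN @ w) + Fq_hat(wc)
        return np.concatenate([dw, dwc])

    sol = solve_ivp(rhs, (0.0, t_final), np.concatenate([w0, wq0]),
                    rtol=1e-6, atol=1e-8)
    return sol.t, sol.y
```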


Fig. 3. Mass position, w(t, `) (left); Mid-cable position, w(t, `/2) (right).

Fig. 4. Control effort, u(t).

6.2. System with adaptive controller

The nominal LQR controller, u_n(t), in (47) is constructed for the normal form (46) of the spring–mass system at the end of the cable:

\[
\begin{bmatrix} \dot\eta_1(t) \\ \dot\eta_2(t) \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -\dfrac{k}{m} & 0 \end{bmatrix}
\begin{bmatrix} \eta_1(t) \\ \eta_2(t) \end{bmatrix}
+ \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix} u(t). \tag{51}
\]

This controller can be constructed by solving the algebraic Riccati equation in (9) with A_n, B_n as in (51) and C_n being the two-by-two identity matrix. The LQR controller in (47) is then given by K_n = −B_n^T Π_n. The adaptive part of the controller, u_ad(t), is formed by a linearly parameterized RBF network:

\[
u_{\mathrm{ad}}(t) = \hat W^T(t)\begin{bmatrix} 1 \\ \Phi(t) \end{bmatrix}, \tag{52}
\]

where 1 stands for the bias term, and

\[
\Phi(t) = \begin{bmatrix} \phi_1(t) & \cdots & \phi_{l-1}(t) \end{bmatrix}^T. \tag{53}
\]
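The nominal gain K_n for (51) can be computed in a few lines. The sketch below assumes that the linear spring constant k in (51) is α1 from Table 1 and that the Riccati weights are Q = C_n^T C_n and R = 1, mirroring (9); it also uses the convention K_n = B_n^T Π_n with u_n = −K_n η_n, which differs from the paper's sign convention on K_n only by the stated sign.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m_mass, k_spring = 1.5, 0.01       # m and alpha_1 from Table 1 (assumed k = alpha_1)
An = np.array([[0.0, 1.0], [-k_spring / m_mass, 0.0]])
Bn = np.array([[0.0], [1.0 / m_mass]])
Cn = np.eye(2)

Pi_n = solve_continuous_are(An, Bn, Cn.T @ Cn, np.eye(1))
Kn = Bn.T @ Pi_n                   # u_n = -Kn @ eta_n, cf. (47)
A_bar_n = An - Bn @ Kn             # closed-loop matrix entering (50) and (34)
```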


Fig. 5. Cable position, w(t, s).

Fig. 6. Mass position, w(t, `) (left); Mid-cable position, w(t, `/2) (right).

The activation functions φ_i are selected as

\[
\phi_i(\mu(t)) = \exp\big(-(\mu(t) - c_i)^T(\mu(t) - c_i)/\sigma\big), \qquad i = 1, \ldots, (l-1). \tag{55}
\]

In (55), µ(t) ∈ ℝᵐ is the input vector to the network, c_i ∈ ℝᵐ are the centers of the radial basis functions, σ is the variance of the basis functions, l is the size of the network, and m is the number of inputs to the NN. The simulations are performed with the following parameters: F = 6I, σ = 1, l = 40, m = 82. In order to compare performance, simulations with only the linear part of the controller (that is, without the adaptive element) are also carried out. Fig. 5 shows the state of the system over 50 s; the same initial condition is used as for the previous simulation results. Fig. 6 shows the deflection of the mass and the mid-cable positions. Furthermore, the control effort, u(t), for both cases is presented in Fig. 7. The results show that the adaptive controller leads to much better performance at the mass position, while the mid-cable behavior is not improved significantly. The overall behavior of the system under this reduced order control is similar to the behavior under the DPS-based reduced order control; however, the adaptive controller requires less control effort.

7. Conclusions

This paper presents two methods to obtain low order controllers for distributed parameter systems.


Fig. 7. Control effort, u(t).

The first, LQG balanced truncation followed by MinMax control design, relies on DPS theory and addresses nonlinearities in a limited fashion through an extended compensator. The second focuses on the nonlinearity in the system and relies only on the output measurement of the system to design the controller. The reduction by balanced truncation provides a systematic approach to model reduction of DPS systems for control design. Although the adaptive method presented here outperforms the balanced truncation method, at least in terms of required control effort, the model on which the nominal controller is based is highly specific, and this idea may not apply broadly to general DPS control problems. In particular, the fact that the controller was second order was tied to the fact that there were two sensed states. Thus, in future work, we propose a coupling of the two approaches: using LQG balanced truncation to produce a reduced order model, followed by a standard control design to serve as the nominal controller. This nominal design can then be augmented by an adaptive NN controller as considered here. We believe that this approach holds great promise for many applications.

Acknowledgements

The research of Batten King and Evans was supported in part by NSF under grant DMS-0072629; King's was additionally supported by AFOSR under Grant No. F49620-52-01-KINGB. The research of Hovakimyan is supported by AFOSR under Grant No. F49620-03-1-0443, while Buhl's research is sponsored under a Kurzstipendium zum Anfertigen einer Abschlussarbeit im Ausland from the German Academic Exchange Service.

References

[1] H.T. Banks, K. Kunisch, The linear regulator problem for parabolic systems, SIAM Journal on Control and Optimization 22 (1984) 684–698.
[2] H.T. Banks, R.C. Smith, Y. Wang, Smart Material Structures, Masson and Wiley, Paris, 1996.
[3] A. Bensoussan, G. Da Prato, M. Delfour, S. Mitter, Representation and Control of Infinite Dimensional Systems, Volume I & II, Birkhauser, Boston, 1992.
[4] J.A. Burns, S. Kang, A control problem for Burgers' equation with bounded input/output, Nonlinear Dynamics 2 (1991) 235–262.
[5] R.F. Curtain, H.J. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory, Springer-Verlag, New York, 1995.
[6] M.A. Demetriou, R.C. Smith (Eds.), Research Directions in Distributed Parameter Systems, SIAM Publications, Philadelphia, 2003.
[7] I. Lasiecka, R. Triggiani, Differential and Algebraic Riccati Equations with Application to Boundary/Point Control Problems: Continuous Theory and Approximation Theory, Springer-Verlag, Heidelberg, 1991.
[8] J.L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer-Verlag, Berlin, 1971.
[9] H.T. Banks, K. Kunisch, An approximation theory for nonlinear partial differential equations with applications to identification and control, SIAM Journal on Control and Optimization 16 (1982) 815–859.
[10] J.S. Gibson, A. Adamian, Approximation theory for linear quadratic Gaussian control of flexible structures, SIAM Journal on Control and Optimization 29 (1991) 1–37.
[11] I. Lasiecka, Finite element approximations of compensator design for analytic generators with fully unbounded controls/observations, SIAM Journal on Control and Optimization 33 (1) (1995) 67–88.
[12] S. Skogestad, I. Postlethwaite, Multivariable Feedback Control, John Wiley & Sons, Chichester, 1996.
[13] E.A. Jonckheere, L.M. Silverman, A new set of invariants for linear systems: application to reduced order compensator design, IEEE Transactions on Automatic Control 28 (1983) 953–964.


[14] A. Yousuff, R.E. Skelton, A note on balanced controller reduction, IEEE Transactions on Automatic Control 29 (3) (1984) 254–257.
[15] E.I. Verriest, Low sensitivity design and optimal order reduction for the LQG-problem, in: 24th Midwest Symposium on Circuits and Systems, 1981, pp. 365–369.
[16] E.I. Verriest, Suboptimal LQG-design via balanced realizations, in: 20th IEEE Conference on Decision and Control, 1981, pp. 686–687.
[17] K.A. Evans, Reduced order controllers for distributed parameter systems, Ph.D. Dissertation, Virginia Polytechnic Institute & State University, 2003.
[18] K.S. Hong, J. Bentsman, Direct adaptive control of parabolic systems: algorithm synthesis and convergence and stability analysis, IEEE Transactions on Automatic Control 39 (1994) 2018–2033.
[19] Yu. Orlov, Model reference adaptive control of distributed parameter systems, in: Conference on Decision and Control, 1997.
[20] M. Demetriou, Model reference adaptive control of slowly varying parabolic distributed parameter systems, in: Conference on Decision and Control, 1994.
[21] Y. Miyasato, Model reference adaptive control for distributed parameter systems of parabolic type by finite dimensional controller, in: Conference on Decision and Control, 1990.
[22] R. Padhi, S.N. Balakrishnan, Proper orthogonal decomposition based feedback optimal control synthesis of distributed parameter systems using neural networks, in: Proc. of the American Control Conference, 2002.
[23] N. Hovakimyan, F. Nardi, A. Calise, N. Kim, Adaptive output feedback control of uncertain systems using single hidden layer neural networks, IEEE Transactions on Neural Networks 13 (6) (2002) 1420–1431.
[24] N. Hovakimyan, B.-J. Yang, A. Calise, Robust adaptive output feedback control methodology for multivariable non-minimum phase nonlinear systems, Automatica 42 (4) (2006).
[25] J.A. Burns, B.B. King, A reduced basis approach to the design of low order compensators for nonlinear partial differential equation systems, Journal of Vibration and Control 4 (1998) 297–323.
[26] A. Isidori, Nonlinear Control Systems, Springer, Berlin, 1995.
[27] E.D. Sontag, Smooth stabilization implies coprime factorization, IEEE Transactions on Automatic Control 34 (4) (1989) 435–443.
[28] A. Teel, L. Praly, Global stabilizability and observability imply semi-global stabilizability by output feedback, Systems & Control Letters 22 (1994) 313–325.
[29] K. Zhou, J. Doyle, Essentials of Robust Control, Prentice Hall, Upper Saddle River, NJ, 1998.
[30] R.F. Curtain, A new approach to model reduction in finite-dimensional control design for distributed parameter systems, 2001 (preprint).
[31] R.F. Curtain, K. Glover, Balanced realisations for infinite-dimensional systems, in: Operator Theory and Systems: Proc. Workshop Amsterdam, 1985, pp. 87–104.
[32] B.C. Moore, Principal component analysis in linear systems: controllability, observability, and model reduction, IEEE Transactions on Automatic Control 26 (1) (1981) 17–32.
[33] I. Rhee, J. Speyer, A game theoretic controller and its relationship to H∞ and linear-exponential-Gaussian synthesis, in: Proceedings of the 28th Conference on Decision and Control, 1989, pp. 909–915.
[34] R. Sanner, J.J. Slotine, Gaussian networks for direct adaptive control, IEEE Transactions on Neural Networks 3 (6) (1992) 837–864.
[35] N. Hovakimyan, H. Lee, A. Calise, On approximate NN realization of an unknown dynamic function from its input–output history, in: American Control Conference, 2000, pp. 3153–3157.
[36] E. Lavretsky, N. Hovakimyan, A. Calise, Upper bounds for approximation of continuous-time dynamics using delayed outputs and feedforward neural networks, IEEE Transactions on Automatic Control 48 (9) (2003) 1606–1610.
[37] N. Hovakimyan, F. Nardi, A. Calise, A novel error observer based adaptive output feedback approach for control of uncertain systems, IEEE Transactions on Automatic Control 47 (8) (2002) 1310–1314.
[38] N. Hovakimyan, F. Nardi, A. Calise, A novel observer based adaptive output feedback approach for control of uncertain systems, in: American Control Conference, 2001, pp. 2444–2449.
[39] N. Hovakimyan, A. Calise, N. Kim, Adaptive output feedback control of uncertain multi-input multi-output systems using single hidden layer neural networks, International Journal of Control 77 (15) (2004) 1318–1329.