Fast and Robust Iterative Learning Control for Lifted Systems




Proceedings of the 18th World Congress of the International Federation of Automatic Control, Milano (Italy), August 28 - September 2, 2011

A. Haber ∗   P.R. Fraanje ∗   M. Verhaegen ∗

∗ Delft University of Technology, Delft, 2628 CD, The Netherlands (e-mail: [email protected]).

Abstract: In this paper we propose a new methodology to synthesize and implement a robust ILC controller for lifted systems. The control law is obtained as the solution of a bounded data uncertainty (BDU) least squares problem, using the bounded model uncertainty in the trial domain. The uncertainty bounds in the trial domain are obtained directly from the bounds on the model uncertainties of the local sample-to-sample LTI or LTV models, using a randomized algorithm. In this way we avoid introducing unnecessary conservatism in the lifted robust ILC design. Efficient solutions, with complexity linear in the trial length, are presented both for the BDU least squares problem and for the computation of the uncertainty bounds in the trial domain. The linear complexity is achieved by exploiting the sequentially semiseparable structure of the lifted system matrices. The proposed framework is therefore especially suitable for uncertain LTI and LTV systems with a large number of samples in the trial.

Keywords: Learning control; Efficient algorithms; Robust control; Robust estimation; Uncertain dynamic systems

⋆ This research is supported by the Dutch Ministry of Economic Affairs and the Provinces of Noord-Brabant and Limburg in the frame of the "Pieken in de Delta" program.

1. INTRODUCTION

Iterative Learning Control (ILC) has proven to be an effective method for achieving high tracking performance of systems that have to execute the same task a number of times. Each execution of the task is referred to as a trial. The ILC strategy derives the control action for the next trial by updating the control action from the previous trial(s) with signals that are derived on the basis of the measured data from the previous trial(s). A detailed and recent overview of ILC is available in Bristow et al. (2006).

The system controlled by the ILC evolves during the trial, a domain that we refer to as the local time domain, and from one trial to another, a domain that we refer to as the trial domain. A recent trend in the analysis and design of ILC is to formulate this 2D system as a 1D system, evolving only over the trial domain. This trial domain system representation is also referred to as the lifted system representation Phan et al. (2000).

More recently, the robustness of ILC laws and robust ILC controller synthesis, for the system representation in the trial domain, has been studied in van de Wijdeven et al. (2009) using µ analysis, in Lee et al. (2000) using convex optimization techniques, in Yang et al. (2008) using Linear Matrix Inequalities (LMIs), and in Harte et al. (2005) by inverting the nominal plant in the trial domain. Although these works represent a contribution to robust ILC theory, they do not address two important aspects that might hamper their real-life application.

The first aspect is the modeling of the uncertainties in the trial domain. In the cited literature, the connection between the


local uncertainty description and the uncertainty description in the trial domain is not considered. The prevailing approach is to assume a certain uncertainty description (usually affine parametric or weighted additive) in the trial domain, and to design the trial domain controller on the basis of this description. This approach may result in a conservative design with a slow convergence rate of the ILC system (the effect of the uncertainties on the system is overemphasized), or may even lead to unstable behavior (the effect of the uncertainties on the system is underestimated).

The second aspect is the computational complexity of the lifted ILC design methods. In the trial domain representation, the dimensions of the lifted system matrices (the system matrices in the trial domain) are proportional to the number of sampling steps in the trial (the trial length). This implies that for large trial lengths, or for a relatively high sampling frequency with respect to the trial length, the lifted system matrices have extremely large dimensions. This is the case in robotic applications, where the lifted system matrices can have several million elements Hakvoort et al. (2008). Consequently, a straightforward application of the analysis and controller synthesis methods proposed in these works may turn out to be computationally infeasible in the case of large trial lengths. For example, the computational cost of LMI solvers, Nesterov and Nemirovsky (1994), is at least O(l^3), where l is the number of decision variables. Further, the LMI problems defined in the trial domain result in the order of l ≈ kN^2 decision variables, where N is the number of samples in a trial and k << N is a constant (see the simulation section of Yang et al. (2008)). From this we conclude that the cost of LMI solvers for ILC design problems in the trial domain is around O(N^6). Beside the computational




complexity, another difficulty that might hinder the design of the ILC for large trial lengths is an increased demand for memory Barton et al. (2010).

The connection between the local time domain uncertainty description and the uncertainty description in the trial domain has been established in Ahn et al. (2007). However, this method is restricted to a class of local time sample-to-sample dynamics that are LTI, stable and have a diagonalizable transition matrix. Further, this method may turn out to be computationally infeasible when trials are long. Another important issue in ILC is the effect of non-repeating disturbances and measurement noise Chin et al. (2004).

In this paper we propose a new methodology to synthesize and implement a robust ILC controller for lifted systems. The control law is obtained as the solution of a bounded data uncertainty (BDU), Chandrasekaran et al. (1998); Sayed et al. (2002), least squares problem, using the bounded model uncertainty in the trial domain. The uncertainty bounds in the trial domain are obtained directly from the bounds on the model uncertainties of the local sample-to-sample LTI or LTV models, using a randomized algorithm. In this way we avoid introducing unnecessary conservatism in the lifted robust ILC design. Efficient solutions, with complexity linear in the trial length (O(N)), are presented both for the BDU least squares problem and for the computation of the uncertainty bounds in the trial domain. The linear complexity is achieved by exploiting the sequentially semiseparable (SSS) structure Chandrasekaran et al. (2003) of the lifted system matrices. The proposed framework is therefore especially suitable for uncertain LTI and LTV systems with a large number of samples in the trial.

The SSS structure of the lifted system matrices has been exploited in Rice and Verhaegen (2010) to derive an LQG (H2) repetitive controller with O(N) complexity. That work, however, did not consider robustness with respect to model uncertainties.

This work is organized as follows. In Section 2 the robust ILC problem is formulated as a BDU least squares problem. In Section 3 a new ILC algorithm is proposed on the basis of a computationally efficient implementation of the solution of the BDU least squares problem. In Section 4 a sufficient condition is derived that guarantees the convergence of the robust solution. In Section 5 a computationally efficient method for converting the uncertainty bounds from the local to the trial domain is presented. Finally, simulation results are presented in Section 6.

2. PROBLEM STATEMENT

Let the local time domain, LTV sample-to-sample dynamics be described by the model:

x(t + 1) = A(t, δ)x(t) + B(t, δ)u(t) + F(t, δ)d(t)    (1)
y(t) = C(t, δ)x(t) + D(t, δ)u(t) + G(t, δ)n(t)    (2)

where t ∈ N_0 is the time, x(t) ∈ R^{n_x} is the state, y(t) ∈ R^{n_y} is the measured output, u(t) ∈ R^{n_u} is the control input, d(t) ∈ R^{n_d} is the process disturbance and n(t) ∈ R^{n_n} is the measurement noise. The mismatch between the physical system and the model (1)-(2) is

represented by the vector δ ∈ R^r, contained in the r-dimensional ball with radius ρ, Calafiore et al. (2007), denoted as:

B_2(ρ, R^r) = { δ ∈ R^r : ‖δ‖_2 ≤ ρ }    (3)

We assume that the system matrices in (1)-(2) continuously depend on δ and that:

A(t, δ) = A(t) + Ã(t, δ),   B(t, δ) = B(t) + B̃(t, δ),   C(t, δ) = C(t) + C̃(t, δ),   D(t, δ) = D(t) + D̃(t, δ)    (4)

where A(t) denotes the nominal part and Ã(t, δ) the uncertain part of the matrix A(t, δ), and the same notation holds for the other system matrices. The system matrices F(·) and G(·) can depend on δ in an arbitrary way. The state transition matrix of the system (1)-(2) is:

Φ_(N,N0)(δ) = A(N − 1, δ) A(N − 2, δ) ... A(N0, δ)   if N > N0,
Φ_(N,N0)(δ) = I                                      if N = N0

For the trial length of N + 1 we have:

y^k = C(δ) x_0^k + D(δ) u^k + H(δ) d^k + G(δ) n^k    (5)

where A(δ) = Φ_(N+1,0)(δ) and

D(δ) =
[ D(0, δ)                        0                              ...   0
  C(1, δ)Φ_(1,1)(δ)B(0, δ)       D(1, δ)                        ...   0
  ...                            ...                            ...   ...
  C(N, δ)Φ_(N,1)(δ)B(0, δ)       C(N, δ)Φ_(N,2)(δ)B(1, δ)       ...   D(N, δ) ]

H(δ) =
[ 0                              0                              ...   0
  C(1, δ)Φ_(1,1)(δ)F(0, δ)       0                              ...   0
  ...                            ...                            ...   ...
  C(N, δ)Φ_(N,1)(δ)F(0, δ)       C(N, δ)Φ_(N,2)(δ)F(1, δ)       ...   0 ]

C(δ) = [ C(0, δ)Φ_(0,0)(δ);  C(1, δ)Φ_(1,0)(δ);  ... ;  C(N, δ)Φ_(N,0)(δ) ]

G(δ) = diag( G(0, δ), G(1, δ), ..., G(N, δ) )

y^k = [ y(0); y(1); ...; y(N) ],   u^k = [ u(0); u(1); ...; u(N) ],
d^k = [ d(0); d(1); ...; d(N) ],   n^k = [ n(0); n(1); ...; n(N) ]    (6)

In (5), x_0^k is the initial state of the k-th trial, and an underline indicates a lifted system matrix or a lifted vector. We define the difference operator ∆z^k := z^{k+1} − z^k, where z^k is an arbitrary lifted vector, and suppose the following Chin et al. (2004):

d^k = w^k + v^k,   w^k = w^{k−1} + ∆w^{k−1}    (7)

where v^k and ∆w^{k−1} are lifted random vectors. The lifted measurement noise vector n^k is also assumed to be a lifted random vector. In addition, we assume that there exist trial-independent real numbers η_w, η_v and η_n such that:

‖∆v^k‖_2 ≤ η_v,   ‖∆w^k‖_2 ≤ η_w,   ‖∆n^k‖_2 ≤ η_n    (8)

With the process disturbance formulation (7) we take into account both repetitive and non-repetitive disturbances that influence the system during the trial. The lifted tracking error of the k-th trial is defined as e^k = y_ref − y^k, where y_ref is the lifted reference trajectory vector. Assuming that each trial starts from the same initial state, that is, x_0^k = x_0 for all k ∈ N, and taking into account the previous assumptions, from (5) we have:


e^{k+1} = e^k − D(δ)∆u^k − H(δ)(∆w^k + ∆v^k) − G(δ)∆n^k    (9)


Due to (4), D(δ) can be separated into the part that does not depend on the uncertainty, the nominal part D, and the part that does, the perturbed part D̃(δ):

D(δ) = D + D̃(δ)    (10)

where:

D =
[ D(0)                 0                     ...   0
  C(1)Φ_(1,1)B(0)      D(1)                  ...   0
  ...                  ...                   ...   ...
  C(N)Φ_(N,1)B(0)      C(N)Φ_(N,2)B(1)       ...   D(N) ]    (11)

By introducing the notation ẽ^k(δ) = −H(δ)(∆w^k + ∆v^k) − G(δ)∆n^k and taking into account (10), equation (9) can be written compactly as:

e^{k+1} = e^k + ẽ^k(δ) − (D + D̃(δ))∆u^k    (12)

Due to the assumption that the local time domain system matrices in (1)-(2) continuously depend on δ, the fact that the set B_2(ρ, R^r) is closed and bounded, and because of (8), the following bounds can be established:

‖D̃(δ)‖_2 ≤ η,   ‖ẽ^k(δ)‖_2 ≤ η_e,   ∀δ ∈ B_2(ρ, R^r)    (13)

The main goal of the ILC strategy is to reach a relatively small value of the tracking error in as few trials as possible. At the end of the k-th trial we know e^k, the lifted nominal matrix D, and the bounds η and η_e defined in (13). We are interested in deriving the ILC update ∆u^k that minimizes the tracking error in trial k + 1, knowing that the effect of the uncertainties, non-repeating process disturbances and measurement noise, expressed in terms of norms, cannot exceed the bounds η and η_e. These facts motivate us to formulate the problem of obtaining the ILC update ∆u^k that minimizes the tracking error in the next trial, i.e. e^{k+1}, as the BDU estimation problem Chandrasekaran et al. (1998); Sayed et al. (1998):

For a given D ∈ R^{(N+1)n_y × (N+1)n_u}, where n_y ≥ n_u, and nonnegative real numbers (η, η_e), determine, if possible, a ∆u^k that solves:

min_{∆u^k}  max_{‖D̃(δ)‖_2 ≤ η, ‖ẽ^k(δ)‖_2 ≤ η_e}  ‖ e^k + ẽ^k(δ) − (D + D̃(δ)) ∆u^k ‖_2    (14)

In the next section we present a new ILC algorithm that is based on an efficient implementation of the solution to the BDU estimation problem (14).

3. LINEAR COMPUTATIONAL COMPLEXITY ILC ALGORITHM

The extensive analysis and solution of the "min-max" problem (14) is presented in Chandrasekaran et al. (1998) and Sayed et al. (1998). The solution relies on computationally demanding steps like the SVD, matrix inversion or the pseudo-inverse of D, and thus it is not computationally feasible when this matrix has large dimensions, that is, for large trial lengths. As an illustration, the computational cost of the deterministic SVD of D is O(N^3 (n_y^2 n_u + n_u^2 n_y)) Burden and Faires (2001). However, the BDU solution can be formulated without the need to introduce the SVD of D. Since the structure of D is defined by (11), algorithms with O(N) computational complexity are available for computing vector-matrix products and performing the basic matrix operations (+, −, ×, inversion). Matrices with this type of structure are called Sequentially Semi-Separable (SSS) matrices Chandrasekaran et al. (2003), where it is also proved that the algebra of SSS matrices is closed under the basic matrix operations. By exploiting these properties of the SSS structure, we formulate a new computationally efficient ILC algorithm.

Algorithm 1. For the observed e^k ∈ R^{(N+1)n_y}, for D ∈ R^{(N+1)n_y × (N+1)n_u} with (N+1)n_y ≥ (N+1)n_u and D of full rank, and for a given M (the number of iterations of the recursion (19)) and η defined by (13), the ILC update ∆û^k, obtained as the solution to (14), is constructed as follows. For each trial k define

τ_1^k = ‖D^† e^k‖_2 / ‖D^{†T} D^† e^k‖_2,   τ_2^k = ‖D^T e^k‖_2 / ‖e^k‖_2,   r^k = (I − D D^†) e^k

If r^k ≠ 0 (e^k does not belong to the column space of D):
(1) If η ≥ τ_2^k, the ILC control update is ∆û^k = 0.
(2) If η < τ_2^k, the ILC control update is

∆û^k = (D^T D + α̂^k I)^{−1} D^T e^k    (15)

where α̂^k is determined using the recursion (19).

If r^k = 0 (e^k belongs to the column space of D):
(1) If η ≥ τ_2^k, the ILC control update is ∆û^k = 0.
(2) If τ_1^k < η < τ_2^k, the ILC control update is

∆û^k = (D^T D + α̂^k I)^{−1} D^T e^k    (16)

where α̂^k is determined using the recursion (19).
(3) If η ≤ τ_1^k, the ILC control update is

∆û^k = D^† e^k    (17)

(4) If η = τ_1^k = τ_2^k, there are infinitely many ILC control updates, determined by

∆û^k = ϕ D^† e^k    (18)

for any 0 ≤ ϕ ≤ 1.

The parameter α̂^k is computed by randomly choosing the initial value α_0^k > 0 and performing M steps of the following recursion:

for m = 1 : M
    α_m^k = η ‖ D (D^T D + α_{m−1}^k I)^{−1} D^T e^k − e^k ‖_2 / ‖ (D^T D + α_{m−1}^k I)^{−1} D^T e^k ‖_2    (19)
end

Return α̂^k = α_M^k. All vector-matrix multiplications and basic matrix operations (+, −, ×, inversion) are computed using the algorithms presented in Chandrasekaran et al. (2003), that is, by exploiting the SSS structure of D.
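The following is a minimal dense NumPy sketch of Algorithm 1 (the naming is ours, and the degenerate case (18) is omitted). It uses explicit pseudo-inverses and solves where the paper uses the O(N) SSS routines; the case logic and the fixed-point recursion (19) are the same.

```python
import numpy as np

def bdu_ilc_update(D, e, eta, M=40, alpha0=1.0, tol=1e-9):
    """ILC update of Algorithm 1 (dense stand-in for the SSS implementation).

    D   : nominal lifted matrix (assumed full column rank)
    e   : current lifted tracking error e^k
    eta : uncertainty bound from (13)
    M   : number of iterations of the recursion (19)
    """
    if np.linalg.norm(e) == 0.0:
        return np.zeros(D.shape[1])
    tau2 = np.linalg.norm(D.T @ e) / np.linalg.norm(e)
    if eta >= tau2:                                  # uncertainty too large: do not learn
        return np.zeros(D.shape[1])

    D_pinv = np.linalg.pinv(D)
    x = D_pinv @ e                                   # D^† e^k
    tau1 = np.linalg.norm(x) / np.linalg.norm(D_pinv.T @ x)
    r = e - D @ x                                    # residual (I - D D^†) e^k
    in_range = np.linalg.norm(r) <= tol * np.linalg.norm(e)
    if in_range and eta <= tau1:                     # case (17): plain pseudo-inverse update
        return x

    # cases (15)/(16): regularized update, alpha^k from the fixed-point recursion (19)
    DtD, Dte = D.T @ D, D.T @ e
    alpha = alpha0
    for _ in range(M):
        z = np.linalg.solve(DtD + alpha * np.eye(D.shape[1]), Dte)
        alpha = eta * np.linalg.norm(D @ z - e) / np.linalg.norm(z)
    return np.linalg.solve(DtD + alpha * np.eye(D.shape[1]), Dte)
```

In the paper every product and solve above is carried out with the SSS algorithms of Chandrasekaran et al. (2003) instead of dense linear algebra, which is what makes the update feasible for long trials.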

Originally in Chandrasekaran et al. (1998), the bound τ_1^k is expressed in terms of the Σ and U of the SVD D = U [Σ^T 0]^T V^T, that is, τ_1^k = ‖Σ^{−1} e_1^k‖_2 / ‖Σ^{−2} e_1^k‖_2, where U [e_1^{kT} e_2^{kT}]^T = e^k. Due to the fact that:

D^† e^k = V Σ^{−1} e_1^k,   D^{†T} D^† e^k = U [ Σ^{−2} e_1^k ; 0 ]    (20)

and because multiplication of a vector by U or V is an orthogonal (length preserving) transformation, we have ‖D^† e^k‖_2 = ‖Σ^{−1} e_1^k‖_2 and ‖D^{†T} D^† e^k‖_2 = ‖Σ^{−2} e_1^k‖_2. Therefore

τ_1^k = ‖D^† e^k‖_2 / ‖D^{†T} D^† e^k‖_2 = ‖Σ^{−1} e_1^k‖_2 / ‖Σ^{−2} e_1^k‖_2

This shows that the bound τ_1^k can be expressed in a form that does not depend on the SVD of D.

In the proof of the BDU solution Chandrasekaran et al. (1998); Sayed et al. (1998), the parameter α̂^k is defined as the solution of the nonlinear equation α^k = f(α^k) (the secular equation), where:

f(α^k) = η ‖ D (D^T D + α^k I)^{−1} D^T e^k − e^k ‖_2 / ‖ (D^T D + α^k I)^{−1} D^T e^k ‖_2    (21)

Next, in Sayed et al. (1997), it is proved that the secular equation (21) represents a contraction mapping with respect to α^k. That is, by selecting any positive initial value α_0^k of the recursion α_{i+1}^k = f(α_i^k), it is guaranteed that lim_{i→∞} α_i^k = α^k, where α^k is the unique positive solution of the secular equation (21). It is also indicated there that the contraction mapping converges relatively quickly to the solution. This fact has been exploited in Algorithm 1 to approximate the solution of (21) by the recursion (19).

The O(N) computational complexity of Algorithm 1 follows from the following facts. The ILC update determined by Algorithm 1 takes two forms: the pseudo-inverse form (17)-(18), or the regularized pseudo-inverse form (15)-(16). On the one hand, the closedness of the SSS matrix algebra under the basic matrix operations (+, −, ×, inversion) implies that the matrix expressions in the numerator and the denominator of the recursion (19) satisfy the SSS structure. Because of this, and because the recursion converges relatively quickly, we can approximate the regularization parameter with O(N) complexity. On the other hand, the pseudo-inverse and the regularized pseudo-inverse of the matrix D satisfy the SSS structure and can also be determined with O(N) complexity Chandrasekaran et al. (2003), once α̂^k is determined. Similarly, the bounds τ_1^k and τ_2^k can be computed with linear computational cost. As a consequence, the computational cost of Algorithm 1 is linear in the trial length.

Remark 1. Instead of the recursion (19) in Algorithm 1, a simple bisection algorithm can be used to solve (21) with O(N) complexity.

Remark 2. We now show that, under some assumptions, the whole class of norm-optimal ILC laws satisfies the SSS structure. In norm-optimal ILC Bristow et al. (2006), the ILC law is obtained by minimizing the following cost with respect to u^{k+1}:

(e^{k+1})^T W_e e^{k+1} + (∆u^k)^T W_∆u ∆u^k + (u^{k+1})^T W_u u^{k+1}    (22)

where in minimizing the cost (22) it is assumed that e^{k+1} = e^k − D∆u^k, that is, the local time domain system (1)-(2) is known accurately. The resulting ILC law is u^{k+1} = Q(u^k + L e^k), where in the general case:

Q = (D^T W_e D + W_∆u + W_u)^{−1} (D^T W_e D + W_∆u),   L = (D^T W_e D + W_∆u)^{−1} D^T W_e    (23)

The properties of the SSS structure imply that if the weighting matrices of the above law are selected to satisfy the SSS structure, then the norm-optimal ILC law also satisfies the SSS structure. Next, it is well known that with a proper selection of W_u we can increase the robustness of the norm-optimal ILC Bristow et al. (2006). By exploiting these facts, we can compute robust norm-optimal ILC laws with O(N) complexity.
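The essential point behind these O(N) claims is that products with the lifted matrices never require forming them. For example, the product of the nominal lifted matrix D of (11) with a stacked input vector, which appears repeatedly in (19) and in the norm-optimal law above, is obtained in O(N) by simply propagating the local state-space recursion, as in the sketch below (our own stand-in for the general SSS routines; the names are ours).

```python
import numpy as np

def lifted_D_times_u(A_seq, B_seq, C_seq, D_seq, u):
    """Compute y = D u in O(N) without forming D: simulate
    x(t+1) = A(t) x(t) + B(t) u(t), y(t) = C(t) x(t) + D(t) u(t) with x(0) = 0,
    which is exactly the action of the block lower-triangular matrix in (11)."""
    N = len(A_seq) - 1
    nu = B_seq[0].shape[1]
    ny = C_seq[0].shape[0]
    x = np.zeros(A_seq[0].shape[0])
    y = np.zeros((N + 1) * ny)
    for t in range(N + 1):
        ut = u[t * nu:(t + 1) * nu]
        y[t * ny:(t + 1) * ny] = C_seq[t] @ x + D_seq[t] @ ut
        if t < N:
            x = A_seq[t] @ x + B_seq[t] @ ut
    return y
```

The product D^T v can be computed with the analogous adjoint (backward-in-time) recursion, and the SSS algorithms of Chandrasekaran et al. (2003) extend the same idea to sums, products and inverses of such structured matrices.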

4. ROBUST CONVERGENCE ANALYSIS

We analyze the robust convergence of the tracking error in the trial domain of the closed-loop system obtained by substituting the ILC law determined by Algorithm 1 into the open-loop system description (12). For the convergence analysis we assume that the parameter α̂^k is the exact solution of the secular equation (21) (that is, α̂^k = α^k). In addition, we assume that D has full column rank and that the local time domain state space realization (1)-(2) has the same number of inputs and outputs, n_u = n_y = n. We also assume that process disturbances and measurement noise do not affect the system. Under these conditions, the following theorem is established.

Theorem 3. Let ∆u^k be a non-trivial (∆u^k ≠ 0) learning control update determined by Algorithm 1. If η < σ_min, where σ_min is the minimal singular value of D, then the tracking error e^k monotonically converges to e^∞ = 0.

The proof of the above theorem is given in Haber et al. (2011).

Remark 4. It is important to stress that Theorem 3 states a sufficient condition for robust monotonic convergence.

Remark 5. By using the weighted BDU formulation presented in Sayed et al. (2002) (and also penalizing u^{k+1}), it is possible to further increase the robustness of the BDU ILC law.

5. COMPUTATION OF THE UNCERTAINTY BOUNDS

In order to construct the ILC update using Algorithm 1, the bound η in (13) has to be computed. For the computation of η we assume that the model uncertainty vector δ is a real random vector with a given probability density function. Accordingly, the bound η is computed in a probabilistic sense. This probabilistic approach is referred to as a Randomized Algorithm (RA) Calafiore et al. (2007). In most cases the computational cost of RA methods is much lower than that of other robust analysis techniques Zhou et al. (1996), and it is possible to compute the uncertainty bound for a much broader class of local time domain system realizations (1)-(2). Furthermore, under some assumptions the SSS structure of the lifted system matrices can be exploited to efficiently compute the bound η. From (10) we have:

D̃(δ) = D(δ) − D    (24)

The RA algorithm for the approximation of η is stated as follows.



Algorithm 2. (Calafiore et al. (2007)) For D̃(δ) defined in (24), depending on the model uncertainty vector δ, which is uniformly distributed in B_2(ρ, R^r), for the performance function γ(δ) : B_2(ρ, R^r) → R_+ defined as γ(δ) = ‖D̃(δ)‖_2, and for given probability levels p* and ζ, the following RA returns, with probability 1 − ζ, an approximation η̂ of the uncertainty bound η in (13) such that Pr{γ(δ) ≤ η̂} ≥ p*.

(1) Determine the worst-case sample size N^wc(p*, ζ) = ln(1/ζ) / ln(1/p*).
(2) Draw N_s ≥ N^wc(p*, ζ) independent samples δ^1, ..., δ^{N_s}, uniformly distributed in B_2(ρ, R^r).
(3) For every sample δ^i, determine the value of the performance function γ_i = γ(δ^i).
(4) Return the empirical maximum η̂ = max_{i=1,...,N_s} γ_i.

Remark 6. Algorithm 2 determines the number η̂, which is a lower bound on the uncertainty bound η, that is, η̂ ≤ η. In Algorithm 2 it is assumed that the model uncertainty vector δ is uniformly distributed in the norm ball B_2(ρ, R^r) (for how to draw samples from the uniform distribution, see Calafiore et al. (2007) and the references therein). However, any other distribution function can also be assumed.

In the third step of Algorithm 2, the value of γ_i = ‖D̃(δ^i)‖_2 can be approximated as the solution of the optimization problem:

min γ_i   subject to   W(γ_i, δ^i) = γ_i^2 I − D̃(δ^i)^T D̃(δ^i) ≥ 0    (25)

˜ i ) in the last optimization Although the matrix D(δ problem, might be large dimensional, the linear matrix inequality in (25) has only one decision variable γi and thus it can be solved relatively efficiently. Since D(δ) and D satisfy the SSS structure, and due to fact that SSS matrix algebra is closed under subtraction, from (24) we ˜ conclude that D(δ) satisfies the SSS structure. Similarly ˜ i )T D(δ ˜ i ) satisfy the SSS structure we conclude that D(δ (SSS algebra is closed under matrix multiplication). In the sequel, we exploit this fact to compute γi with O(N ) complexity. The value of γi is equal to the square root of ˜ i )T D(δ ˜ i ). One of the most the maximal eigenvalue of D(δ simplest algorithms to determine the maximal eigenvalue of a matrix is the Power Method (PM) Burden and Faires (2001). The method consists of the following iteration: hj+1 =

˜

T

˜

j

D(δi ) D(δi )hj ˜ i )h ˜ i )T D(δ

D(δ

(26)

2

where j is an iteration index. The iteration (26) starts with a vector h0 , which can be chosen as a random vector or as an approximation of the dominant eigenvector of ˜ i )T D(δ ˜ i ). The sequence D(δ λj =

˜ i )T D(δ ˜ i )hj (hj )T D(δ (hj )T hj

(27)

˜ i ). ˜ i )T D(δ converges to the dominant eigenvalue of D(δ The PM will exhibit the fast convergence (in the number ˜ i )T D(δ ˜ i) of iterations) if the maximal eigenvalue of D(δ is strictly larger than other eigenvalues in the spectrum Burden and Faires (2001). Many other algorithms for

determining the matrix spectral properties rely on the iteration (26). These algorithms can improve the convergence ˜ i) ˜ i )T D(δ of the basic PM method. Due to fact that D(δ satisfies the SSS structure, one iteration of (26) (as well as (27)) can be computed with O(N ) complexity. 6. SIMULATION We consider the following local-time domain system: h i h i h i   0 −0.7 + δ1 −0.5 + δ2 2
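Below is a sketch of Algorithm 2 in which γ_i is computed with the power iteration (26)-(27) rather than the LMI (25). It is a dense NumPy stand-in with our own names: build_D_of_delta stands for any user-supplied routine that assembles the lifted D(δ) for a given uncertainty sample; in the paper the products with D̃(δ^i)^T D̃(δ^i) would be carried out by the O(N) SSS routines.

```python
import numpy as np
from math import ceil, log

def spectral_norm_power(Dtilde, iters=200, seed=0):
    """gamma = ||Dtilde||_2 via the power iteration (26)-(27) on Dtilde^T Dtilde."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(Dtilde.shape[1])
    lam = 0.0
    for _ in range(iters):
        g = Dtilde.T @ (Dtilde @ h)        # one product with Dtilde^T Dtilde, as in (26)
        ng = np.linalg.norm(g)
        if ng == 0.0:                      # iterate annihilated (e.g. Dtilde = 0): stop
            return 0.0
        lam = (h @ g) / (h @ h)            # Rayleigh quotient (27)
        h = g / ng                         # normalized iterate (26)
    return float(np.sqrt(lam))

def randomized_eta(build_D_of_delta, D_nominal, r, rho, p_star=0.99, zeta=0.01, seed=0):
    """Algorithm 2: empirical bound eta_hat with Pr{gamma(delta) <= eta_hat} >= p_star,
    holding with confidence 1 - zeta, for delta uniformly distributed in B2(rho, R^r)."""
    rng = np.random.default_rng(seed)
    n_samples = ceil(log(1.0 / zeta) / log(1.0 / p_star))   # worst-case sample size N^wc
    eta_hat = 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(r)                           # uniform sample in the ball:
        d *= rho * rng.uniform() ** (1.0 / r) / np.linalg.norm(d)
        Dtilde = build_D_of_delta(d) - D_nominal             # perturbed lifted matrix, (24)
        eta_hat = max(eta_hat, spectral_norm_power(Dtilde))
    return eta_hat
```

With the probability levels used in the simulation section, p* = 0.99 and ζ = 0.01, the worst-case sample size evaluates to ceil(ln(100)/ln(1/0.99)) = 459 samples.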

A=

1 + δ3

0.2 + δ4

B=

0.5

F =

0

C= 1 0 D=0 (28)

For simplicity reasons, we consider a local time domain system with an affine uncertainty description (an arbitrary uncertainty description could also be chosen). For N = 21 the system (28) is lifted. For different values of ρ we have computed η̂ using Algorithm 2 (with probability levels ζ = 0.01 and p* = 0.99). The results are presented in the following table.

ρ :   0.07    0.095    0.5
η̂ :   0.66    0.805    16.1

The minimal singular value of D is σ_min = 0.74. In total 100 Monte Carlo simulations were performed, where in each simulation run δ is randomly chosen from B_2(ρ, R^4). The reference trajectory is given in Figure 2. In each trial k, x_0^k = 0. The simulations were performed for two selections of the uncertainty radius ρ.

1) ρ = 0.07. We have η̂ = 0.66 < σ_min, so the robust monotonic convergence condition of Theorem 3 is satisfied. During the simulation experiments it has been observed that the ILC update has the pseudo-inverse form. This is due to the fact that η̂ < τ_1^k for all k (see Algorithm 1). As expected, monotonic convergence of the tracking error has been observed. The learning graphs are omitted due to space constraints.

2) ρ = 0.5. In this case η̂ = 16.1. During the simulation experiments it has been observed that η̂ ≥ τ_2^k for all k. Because of this, the ILC update determined by Algorithm 1 is always zero, and no learning transients have been observed. In this case, η̂ is no longer considered as the uncertainty bound; instead, we treat it as a design parameter that can be chosen by the user. For the next simulation experiment we decrease it and sample δ from the same uncertainty ball B_2(0.5, R^4) (the ball with radius ρ = 0.5). We have chosen η̂ = 0.805. The starting value of the recursion (19) is α_0^k = 20, and the number of iterations of the secular equation was limited to M = 40. The learning transients are given in Figure 1. For comparison purposes, we have computed the pseudo-inverse control update ∆ǔ^k = (D^T D)^{−1} D^T e^k and repeated the simulations (under the same simulation conditions). The results are given in Figure 2. Comparing the results in Figures 1 and 2, it can be concluded that the pseudo-inverse ILC law results in an unacceptably large overshoot, whereas the ILC law determined using Algorithm 1 ensures relatively good convergence of the tracking error in the trial domain.

In order to illustrate the O(N) complexity of Algorithm 1, we have simulated the time necessary to compute one iteration of the recursion (19) for different trial



lengths. In the first case, all matrix operations were performed using the algorithms presented in Chandrasekaran et al. (2003) (implemented in MATLAB), that is, by taking the SSS structure of D into account. In the second case, all matrix operations were performed using the standard MATLAB matrix operations. The computational times are presented in Figure 3 (left). The results clearly illustrate the linear computational complexity of Algorithm 1. We have also computed the value of γ_i = ‖D̃(δ^i)‖_2 using the power method and the SSS structure. For comparison purposes we have computed the value of γ_i using the MATLAB function norm. The computational times are presented in Figure 3 (right). They clearly illustrate the computational efficiency of Algorithm 2.


Fig. 1. Tracking performance in the trial domain (left) and in the time domain of the 8th trial (right) of the ILC law determined by Algorithm 1


Fig. 2. Tracking performance in the trial domain (left) and in the time domain of the 8th trial (right) of the pseudo inverse ILC law


Fig. 3. Computational cost in seconds, with and without taking the SSS structure into account. Left: one iteration of Algorithm 1. Right: the computation of γ_i

7. CONCLUSIONS

In this paper we have formulated the ILC problem for lifted uncertain systems as a BDU least squares problem. On the basis of the solution of the BDU least squares problem, and by exploiting the SSS structure of the lifted system matrices, we have derived a novel ILC algorithm for uncertain systems whose computational complexity is linear in the trial length.

REFERENCES

Ahn, H., Moore, K., and Chen, Y. (2007). Iterative Learning Control: Robustness and Monotonic Convergence for Interval Systems. Springer Verlag.
Barton, K., Bristow, D., and Alleyne, A.G. (2010). A numerical method for determining monotonicity and convergence rate in iterative learning control. International Journal of Control.
Bristow, D., Tharayil, M., and Alleyne, A. (2006). A survey of iterative learning control. IEEE Control Systems Magazine, 26(3), 96–114.
Burden, R. and Faires, J. (2001). Numerical Analysis. Brooks/Cole.
Calafiore, G., Dabbene, F., and Tempo, R. (2007). A survey of randomized algorithms for control synthesis and performance verification. Journal of Complexity, 23(3), 301–316.
Chandrasekaran, S., Dewilde, P., Gu, M., Pals, T., van der Veen, A., and White, D. (2003). Fast Stable Solvers for Sequentially Semi-Separable Linear Systems of Equations. Report, Lawrence Livermore National Laboratory.
Chandrasekaran, S., Golub, G., Gu, M., and Sayed, A. (1998). Parameter estimation in the presence of bounded data uncertainties. SIAM Journal on Matrix Analysis and Applications, 19(1), 235–252.
Chin, I., Qin, S., Lee, K., and Cho, M. (2004). A two-stage iterative learning control technique combined with real-time feedback for independent disturbance rejection. Automatica, 40(11), 1913–1922.
Haber, A., Fraanje, P., and Verhaegen, M. (2011). Fast and Robust ILC for Lifted Systems. Technical report, Delft Center for Systems and Control.
Hakvoort, W., Aarts, R., van Dijk, J., and Jonker, J. (2008). Lifted system iterative learning control applied to an industrial robot. Control Engineering Practice.
Harte, T., Hatonen, J., and Owens, D. (2005). Discrete-time inverse model-based iterative learning control: stability, monotonicity and robustness. International Journal of Control, 78(8), 577–586.
Lee, J., Lee, K., and Kim, W. (2000). Model-based iterative learning control with a quadratic criterion for time-varying linear systems. Automatica, 36, 641–657.
Nesterov, Y. and Nemirovsky, A. (1994). Interior Point Polynomial Methods in Convex Programming. Studies in Applied Mathematics, 13.
Phan, M., Longman, R., and Moore, K. (2000). Unified formulation of linear iterative learning control. Spaceflight Mechanics 2000, 93–111.
Rice, J. and Verhaegen, M. (2010). A structured matrix approach to efficient calculation of LQG repetitive learning controllers in the lifted setting. Accepted for publication in International Journal of Control.
Sayed, A., Garulli, A., and Chandrasekaran, S. (1997). A fast iterative solution for worst-case parameter estimation with bounded model uncertainties. In 1997 American Control Conference, Albuquerque, NM, 1499–1503.
Sayed, A., Nascimento, V., and Chandrasekaran, S. (1998). Estimation and control with bounded data uncertainties. Linear Algebra and Its Applications, 284(1-3), 259–306.
Sayed, A., Nascimento, V., and Cipparrone, F. (2002). A regularized robust design for uncertain data. SIAM Journal on Matrix Analysis and Applications, 23(4), 1120–1142.
van de Wijdeven, J., Donkers, T., and Bosgra, O. (2009). Iterative Learning Control for uncertain systems: Robust monotonic convergence analysis. Automatica, 45(10), 2383–2391.
Yang, S., Qu, Z., Fan, X., and Nian, X. (2008). Novel iterative learning controls for linear discrete-time systems based on a performance index over iterations. Automatica, 44, 1366–1372.
Zhou, K., Doyle, J., and Glover, K. (1996). Robust and Optimal Control. Prentice Hall, Englewood Cliffs, NJ.
