IFAC MCPL 2007 The 4th International Federation of Automatic Control Conference on Management and Control of Production and Logistics September 27-30, Sibiu - Romania
POLYNOMIALS BASED ITERATIVE LEARNING CONTROL ALGORITHM

M. Songjun, D. H. Owens

Department of Automatic Control and Systems Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD, United Kingdom
Abstract: A polynomial iterative learning control (ILC) algorithm is introduced; in particular, the polynomial in the transpose of the system matrix G (the G^T ILC algorithm) is considered. The purpose is to add to the area of optimization-based ILC by investigating the possibility of using polynomial theory to accelerate the convergence rate. If the original system is a discrete-time LTI system, the algorithm achieves monotonic convergence of the error norm to zero, and the convergence rate increases as the number of polynomial terms is increased. Numerical experimentation also shows that convergence can be fast if the system is minimum phase. The algorithm analysis and real-plant simulations are given in detail. Copyright © 2007 IFAC

Keywords: Iterative learning control, polynomial, convergence, optimization
1. INTRODUCTION
Iterative Learning Control (ILC) is now a well-known technique in the toolbox of control algorithms. ILC is especially concerned with improving the transient response and tracking performance of systems that operate in a repetitive mode, i.e. repeat the same trajectory, motion, or operation over and over again. Such systems include robot arm manipulators and chemical batch processes, where the task of following a specified output trajectory r(t) on an interval t ∈ [0, T] with high precision is executed repeatedly. Conventional control algorithms applied to such systems produce the same level of tracking error time and time again. An ILC algorithm, on the other hand, can achieve zero tracking error as the number of iterations increases, e.g. (Amann and Owens, 1995). The basic idea of ILC is to use information from previous executions to gradually improve accuracy by iteratively changing the input from trial to trial so that the tracking error is sequentially reduced (see (Arimoto et al., 1984) and (Uchiyama, 1978)).

Since the introduction of the iterative learning control methodology by (Arimoto et al., 1984), much progress has been made by several researchers in understanding the theoretical and practical aspects of ILC, e.g. (Oh et al., 1988), (Amann et al., 1996), (Barton et al., 2000) and (Moore, 2000). Most work on ILC so far has concentrated on developing new algorithms and analyzing their convergence properties. Different authors have also introduced different design and analysis techniques: for example, (Kurek and Zaremba, 1993) and (Owens et al., 2000) are based on the theory of 2-D systems, whereas (Norrlof, 2000) is based on frequency-domain methods.

In this paper, the polynomial in the transpose of the system matrix G ILC algorithm is introduced as a new paradigm for the ILC research area. It is shown that if the original plant is a discrete-time, linear, time-invariant system, the polynomial in G^T algorithm achieves monotonic convergence. Furthermore, it is shown that increasing the number of polynomial terms in practice increases the convergence rate, as the norm of the error keeps reducing from iteration to iteration.

The rest of the paper is organized as follows. In section 2 the ILC problem is defined in detail and the matrix representation of dynamic systems is reviewed. The polynomial in G^T ILC algorithm is derived in section 3. Properties of the algorithm, including its convergence properties, are analyzed in section 4. Section 5 demonstrates the benefit of the polynomial in G^T ILC algorithm by numerical example. Finally, in section 6 some general conclusions are given.
2. PROBLEM DEFINITION

As a starting point, consider the following standard linear, time-invariant, single-input single-output state-space representation defined over the finite time interval t ∈ [0, T] (to shorten notation it is assumed that the sampling time is unity):

x(t + 1) = Ax(t) + Bu(t)
y(t) = Cx(t)   (1)

where the state x(·) ∈ R^n, the output y(·) ∈ R, the input u(·) ∈ R and x(0) = 0. The operators A, B and C are matrices of appropriate dimensions. From now on it will be assumed that CB > 0 and that the system (1) is controllable and observable. Furthermore, a reference signal r(t) is specified, and the control objective is to find an input function u(t) so that the output function y(t) tracks this reference signal r(t) as accurately as possible. The special feature of the ILC problem is that after the system has reached the final time point t = T, the state of the system (1) is reset to zero, and after the resetting the system is required to track the same reference signal again. This repetitive nature of the problem opens up the possibility of modifying the input function u(t) iteratively so that, as the number of repetitions increases, the system learns the input function that gives perfect tracking. To be more precise, the idea is to find a control law

u_{k+1} = f(u_k, u_{k-1}, . . . , u_{k-r}, e_{k+1}, e_k, . . . , e_{k-s})   (2)

so that

lim_{k→∞} ||e_k|| = 0,   lim_{k→∞} ||u_k − u*|| = 0   (3)

where u_k = [u_k(0) u_k(1) . . . u_k(T)]^T, y_k = [y_k(0) y_k(1) . . . y_k(T)]^T, the error e_k = [r(0) − y_k(0)  r(1) − y_k(1)  . . .  r(T) − y_k(T)]^T, and ||·|| is a suitable norm. Furthermore, u* is the input function that gives r(t) = [Gu*](t), where G is the convolution mapping corresponding to (1). Note that if the mapping f in (2) is not a function of e_{k+1}, the algorithm is typically said to be of feedforward type; otherwise it is of feedback type.

For analysis it is important to notice that, because the system (1) is defined over a finite time interval, it can be represented equivalently by the matrix equation y_k = Gu_k, where

G = [ 0  0  0  . . .  0 ;  CB  0  0  . . .  0 ;  CAB  CB  0  . . .  0 ;  . . . ;  CA^{T-1}B  CA^{T-2}B  . . .  CB  0 ]   (4)

and the elements CA^j B of the matrix G are the Markov parameters of the plant (1). Assume from now on that the reference signal r(t) satisfies r(0) = 0. It can then be shown (see (Hatonen et al., 2003)) that for analysis it is sufficient to consider a 'lifted' plant equation y_{k,l} = G_l u_{k,l}, where u_{k,l} = [u_k(0) u_k(1) . . . u_k(T − 1)]^T, y_{k,l} = [y_k(1) y_k(2) . . . y_k(T)]^T and

G_l = [ CB  0  0  . . .  0 ;  CAB  CB  0  . . .  0 ;  CA^2 B  CAB  CB  . . .  0 ;  . . . ;  CA^{T-1}B  CA^{T-2}B  . . .  CB ]   (5)

Note that because it was assumed that CB ≠ 0, G_l is invertible, and consequently for an arbitrary reference r there exists u* such that r = G_l u*. From now on this lifted plant model will be used as the starting point for analysis and, to shorten notation, the subscript l will be omitted from the model.
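The construction of the lifted model above is easy to make concrete. The following is a minimal sketch of building the matrix G_l of equation (5) from (A, B, C); NumPy is assumed, and the helper name lifted_matrix is ours, not the paper's:

```python
import numpy as np

def lifted_matrix(A, B, C, T):
    """Lifted plant matrix G_l of equation (5): a lower-triangular Toeplitz
    matrix of Markov parameters CB, CAB, CA^2 B, ... mapping the input
    sequence u(0..T-1) to the output sequence y(1..T) for
    x(t+1) = A x(t) + B u(t), y(t) = C x(t), x(0) = 0."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    C = np.atleast_2d(np.asarray(C, dtype=float))
    markov = []
    Ak = np.eye(A.shape[0])
    for _ in range(T):
        markov.append((C @ Ak @ B).item())  # CA^j B, j = 0, 1, ...
        Ak = A @ Ak
    G = np.zeros((T, T))
    for i in range(T):
        for j in range(i + 1):
            G[i, j] = markov[i - j]
    return G
```

For a scalar example x(t + 1) = 0.5 x(t) + u(t), y(t) = x(t), the first column of G_l is (1, 0.5, 0.25, ...), and CB = 1 > 0 makes G_l invertible, as noted above.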
3. THE POLYNOMIAL IN G^T ILC ALGORITHM

The motivation for the new iterative learning control algorithm proposed in this section comes from the principal goal in ILC: the objective of achieving perfect tracking. A further inspiration is that the inclusion of more degrees of freedom can improve algorithm performance. The new polynomial ILC scheme is therefore introduced to add to the optimization-based ILC area by considering the possibility of using polynomial concepts to supply additional degrees of freedom. To underline the potential of the polynomial in G^T procedure, consider the following feedforward ILC algorithm

u_{k+1}(t) = u_k(t) + G^T Σ_{j=1}^{M} β_{k+1}(j) (GG^T)^{j-1} e_k(t + 1),   1 ≤ j ≤ M ≤ k   (6)

which is chosen as a starting point for further investigation. Here e_k(t) = r(t) − y_k(t) is the tracking error, the β_{k+1}(j) are scalar gain parameters, and M is the number of polynomial terms in the ILC updating law. Based on the parameter optimization ILC algorithm (POILC) (Owens and Feng, 2003), to complete the calculation of the algorithm the optimal parameters β_{k+1}(i) are found by satisfying the condition

[β_{k+1}(1), . . . , β_{k+1}(M)] = arg min {J_{k+1}(β_{k+1}(i)) : e_{k+1} = r − Gu_{k+1}}   (7)

where

J(β_{k+1}(1), β_{k+1}(2), . . . , β_{k+1}(M)) ≥ 0,   ∀k ≥ 0, 1 ≤ M ≤ k   (8)

Thus the parameter vector (β_{k+1}(i)) is selected as the solution of the optimization problem defined by the performance index

J(β_{k+1}(1), . . . , β_{k+1}(M)) = ||e_{k+1}||^2 + Σ_{i=1}^{M} w_i β_{k+1}(i)^2   (9)

where the w_i > 0 are weighting parameters of the performance index (9). For further calculation, using e_k = r − Gu_k and the control updating law (6), the error evolution equation becomes

e_{k+1} = e_k − Σ_{j=1}^{M} β_{k+1}(j) (GG^T)^j e_k   (10)

The necessary and sufficient condition dJ/dβ_{k+1}(i) = 0, 1 ≤ i ≤ M, gives, after some calculation,

[ ||Ḡe_k||^2 + w_1      ⟨Ḡe_k, Ḡ^2 e_k⟩       . . .  ⟨Ḡe_k, Ḡ^M e_k⟩   ]   [ β_{k+1}(1) ]   [ ⟨e_k, Ḡe_k⟩    ]
[ ⟨Ḡ^2 e_k, Ḡe_k⟩     ||Ḡ^2 e_k||^2 + w_2    . . .  ⟨Ḡ^2 e_k, Ḡ^M e_k⟩ ] × [ β_{k+1}(2) ] = [ ⟨e_k, Ḡ^2 e_k⟩ ]   (11)
[ . . .               . . .                  . . .  . . .              ]   [ . . .       ]   [ . . .          ]
[ ⟨Ḡ^M e_k, Ḡe_k⟩    ⟨Ḡ^M e_k, Ḡ^2 e_k⟩     . . .  ||Ḡ^M e_k||^2 + w_M ]   [ β_{k+1}(M) ]   [ ⟨e_k, Ḡ^M e_k⟩ ]

where Ḡ denotes GG^T. The above equation can be written in the more compact form Dβ = F, where D is the M × M matrix on the left-hand side of (11),

D_{ij} = ⟨Ḡ^i e_k, Ḡ^j e_k⟩ + w_i δ_{ij}   (12)

and

F = [ ⟨e_k, Ḡe_k⟩  ⟨e_k, Ḡ^2 e_k⟩  . . .  ⟨e_k, Ḡ^M e_k⟩ ]^T   (13)

The matrix D can also be rewritten in the combined form

D = diag(w_1, w_2, . . . , w_M) + [Ḡe_k  Ḡ^2 e_k  . . .  Ḡ^M e_k]^T [Ḡe_k  Ḡ^2 e_k  . . .  Ḡ^M e_k]

It is easily seen that if either (i) w_i > 0, i = 1, . . . , M, or (ii) GG^T e_k, (GG^T)^2 e_k, . . . , (GG^T)^M e_k are linearly independent, then D^{-1} exists and β can be solved as β = D^{-1}F.

Alternatively, there is an easier way to calculate the optimal β_{k+1}. The error evolution equation (10) is rewritten in the form

e_{k+1} = e_k − M(e_k) β_{k+1}   (14)

where M(e_k) = [GG^T e_k  (GG^T)^2 e_k  . . .  (GG^T)^M e_k] is an N × M matrix. Using the same optimization problem (9), the optimal vector β_{k+1} can be solved step by step from

J(β_{k+1}) = ||e_{k+1}||^2 + β_{k+1}^T W β_{k+1}
           = ||e_k − M(e_k) β_{k+1}||^2 + β_{k+1}^T W β_{k+1}
           = ||e_k||^2 − β_{k+1}^T M(e_k)^T e_k − e_k^T M(e_k) β_{k+1} + β_{k+1}^T M(e_k)^T M(e_k) β_{k+1} + β_{k+1}^T W β_{k+1}   (15)

With the same stationary condition, the optimal vector β_{k+1} is finally obtained as

β_{k+1} = [W + M(e_k)^T M(e_k)]^{-1} M(e_k)^T e_k   (16)

where

W = diag(w_1, w_2, . . . , w_M)   (17)

is a diagonal M × M matrix.
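One trial of the update law (6), with β_{k+1} computed from the closed form (16) rather than by inverting D, can be sketched as follows. This is an illustrative sketch under our own naming (the function poly_gt_ilc_step is not from the paper); NumPy is assumed:

```python
import numpy as np

def poly_gt_ilc_step(G, u, r, M=3, w=1e-6):
    """One trial of the polynomial in G^T ILC algorithm, a sketch of
    equations (6), (14) and (16) with W = w * I."""
    e = r - G @ u                      # current tracking error e_k
    GGt = G @ G.T
    cols = [e]                         # (GG^T)^{j-1} e_k, j = 1..M
    for _ in range(M - 1):
        cols.append(GGt @ cols[-1])
    K = np.column_stack(cols)
    Mk = GGt @ K                       # M(e_k) = [GG^T e_k, ..., (GG^T)^M e_k]
    # eq. (16): beta = [W + M(e_k)^T M(e_k)]^{-1} M(e_k)^T e_k
    beta = np.linalg.solve(w * np.eye(M) + Mk.T @ Mk, Mk.T @ e)
    # eq. (6): u_{k+1} = u_k + G^T sum_j beta_j (GG^T)^{j-1} e_k,
    # which gives e_{k+1} = e_k - M(e_k) beta as in (14)
    return u + G.T @ (K @ beta)
```

By the interlacing result of the next section, each such trial can only decrease (or keep) the error norm ||e_k||, and a larger M gives a smaller optimal cost at each trial.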
4. PROPERTIES

The polynomial in G^T algorithm defined in the previous section is conceptually simple. However, it possesses several useful properties, demonstrated in this section.

4.1 Convergence analysis

The important benefits of the polynomial in G^T ILC algorithm, such as its convergence rate, can be seen as follows.

Theorem 1: For the polynomial in G^T ILC algorithm defined by equations (6) and (9), the optimality criterion satisfies the interlacing/monotonicity condition

||e_k||^2 ≥ J(β_{k+1}(1), . . . , β_{k+1}(M)) ≥ ||e_{k+1}||^2   (18)

with equality holding if, and only if, β_{k+1}(1) = β_{k+1}(2) = · · · = β_{k+1}(M) = 0.

Proof: The non-optimal choice β_{k+1}(1) = β_{k+1}(2) = · · · = β_{k+1}(M) = 0, 1 ≤ M ≤ k, gives J_{k+1}(0, 0, · · · , 0) = ||e_k||^2, and optimality consequently results in the interlacing result

J(0, · · · , 0) = ||e_k||^2 ≥ optimal value = ||e_{k+1}||^2 + Σ_{i=1}^{M} w_i (β_{k+1}(i))^2   (19)
≥ ||e_{k+1}||^2   (20)

This result shows that the norm of the error is monotonically non-increasing in k, with equality if and only if β_{k+1}(1) = β_{k+1}(2) = · · · = β_{k+1}(M) = 0, 1 ≤ M ≤ k.

Theorem 2: For the polynomial in G^T ILC algorithm defined by equations (6) and (9):
(a) The parameter sequence satisfies the condition

Σ_{j=0}^{∞} Σ_{i=1}^{M} w_i (β_{j+1}(i))^2 < ∞   (21)

which immediately implies that
(b)

lim_{k→∞} β_{k+1}(1) = lim_{k→∞} β_{k+1}(2) = · · · = lim_{k→∞} β_{k+1}(M) = 0   (22)

Proof: Based on Theorem 1, the following estimate is used,

||e_k||^2 ≥ J(β_{k+1}(1), β_{k+1}(2), . . . , β_{k+1}(M))   (23)
= ||e_{k+1}||^2 + Σ_{i=1}^{M} w_i β_{k+1}(i)^2,   1 ≤ M ≤ k   (24)

in the form

0 ≤ ||e_{k+1}||^2 ≤ ||e_k||^2 − Σ_{i=1}^{M} w_i β_{k+1}(i)^2

Then apply induction to give

0 ≤ ||e_{k+1}||^2 ≤ ||e_0||^2 − Σ_{j=0}^{k} Σ_{i=1}^{M} w_i β_{j+1}(i)^2   (25)

for all k, which proves (a) and (b).

Proposition 1: Assume that β_j* defined through the performance index (9) is the optimal parameter vector for the M-term polynomial in G^T ILC algorithm, and β_j** is the optimal parameter vector for the (M + 1)-term algorithm, both determined at an arbitrary single iteration j. Then the following inequality holds:

J(β_j**(1), · · · , β_j**(M + 1)) ≤ J(β_j*(1), · · · , β_j*(M))   (26)

Proof: For the (M + 1)-term polynomial in G^T ILC algorithm, choose the non-optimal parameters β_j*(i), i = 1, · · · , M, and β_j*(M + 1) = 0; then a simple interlacing result is derived:

J(β_j**(1), · · · , β_j**(M), β_j**(M + 1)) ≤ J(β_j*(1), · · · , β_j*(M), 0) = ||e_j*||^2 + Σ_{i=1}^{M} w_i (β_j*(i))^2 = J(β_j*(1), · · · , β_j*(M))   (27)

where e_j* is the optimal error for M terms, e_j** is the optimal error for M + 1 terms, and

J(β_j**(1), · · · , β_j**(M), β_j**(M + 1)) = ||e_j**||^2 + Σ_{i=1}^{M} w_i (β_j**(i))^2 + w_{M+1} (β_j**(M + 1))^2   (28)

Combining (27) and (28) yields

||e_j**||^2 ≤ ||e_j*||^2 + Σ_{i=1}^{M} w_i [(β_j*(i))^2 − (β_j**(i))^2] − w_{M+1} (β_j**(M + 1))^2   (29)

If 0 < λ < w_i < δ, i = 1, · · · , M, then there exist a small enough value λ and a big enough value δ such that

Σ_{i=1}^{M} w_i [(β_j*(i))^2 − (β_j**(i))^2] − w_{M+1} (β_j**(M + 1))^2 ≤ 0   (30)

which directly leads to the established property. Also note that if β_j**(M + 1) = 0, then β_j*(i) = β_j**(i), and it is easy to show that

Σ_{i=1}^{M} w_i [(β_j*(i))^2 − (β_j**(i))^2] − w_{M+1} (β_j**(M + 1))^2 = 0   (31)

The above equations show that

||e_j**|| ≤ ||e_j*||   (32)

This means that a higher number of polynomial terms in the G^T ILC algorithm achieves a faster convergence rate than a lower number, at an arbitrary single iteration j.

More convergence properties are described in the following. Using the error evolution equation (14) with the solution for β_{k+1} in (16) results in

e_{k+1} = e_k − M(e_k) [W + M(e_k)^T M(e_k)]^{-1} M(e_k)^T e_k   (33)

For further analysis, equation (33) is modified to

W^{1/2} M̄^{-1} M(e_k)^T e_{k+1} = [W^{1/2} (W + M̄)^{-1} W^{1/2}] W^{1/2} M̄^{-1} M(e_k)^T e_k   (34)

where M̄ = M(e_k)^T M(e_k), for which the following estimate holds:

||W^{1/2} M̄^{-1} M(e_k)^T e_{k+1}|| ≤ (1 / (1 + λ_min[W^{1/2} M̄ W^{1/2}])) ||W^{1/2} M̄^{-1} M(e_k)^T e_k||   (35)

This inequality immediately implies (i) the optimality property ||e_{k+1}|| ≤ ||e_k||, and (ii) the following proposition.

Proposition 2: If the error signal e_i = ẑ_i + z_i^⊥, ẑ ∈ R(M(e_k)), z^⊥ ∈ R(M(e_k))^⊥, where i is any iteration number, then for the ILC algorithm (6) and (9):
(a) The inequality

||W^{1/2} ẑ_{k+1}|| ≤ λ ||W^{1/2} ẑ_k||   (36)

holds, where λ = 1 / (1 + λ_min[W^{1/2} M(e_k)^T M(e_k) W^{1/2}]).
(b) The equality

z_{k+1}^⊥ = z_k^⊥,   ∀k ≥ 0   (37)

holds.

Proof: If e_k ∈ R^N, where R^N is decomposed into R(M(e_k)) ⊕ R(M(e_k))^⊥, any error signal can be written in the form

e_k = z_k* + z_k**,   z_k* ∈ R(M(e_k)),   z_k** ∈ R(M(e_k))^⊥   (38)

It is possible to write z_k* = M(e_k) ẑ_k, and M(e_k)^T z_k** = 0 (from orthogonality). Hence, for the first part it is sufficient to consider e_k = M(e_k) ẑ_k, which leads from (34) to

||W^{1/2} M̄^{-1} M(e_k)^T e_{k+1}|| = ||W^{1/2} ẑ_{k+1}|| ≤ λ ||W^{1/2} M̄^{-1} M(e_k)^T e_k|| = λ ||W^{1/2} ẑ_k||   (39)

where M̄ = M(e_k)^T M(e_k), and (36) is then proved. For the second part (part (b)), using the error evolution equation (33) with e_i = M(e_k) ẑ_i + z_i^⊥, it appears that

e_{k+1} = M(e_k) ẑ_{k+1} + z_{k+1}^⊥ = M(e_k) [I − (W + M̄)^{-1} M̄] ẑ_k + z_k^⊥   (40)

Since the term M(e_k) [I − (W + M̄)^{-1} M̄] ẑ_k ∈ R(M(e_k)), the equality (37) is therefore proved.

This proposition shows the important conclusion that, at each iteration,
(i) ||ẑ_{k+1}|| ≤ λ ||ẑ_k|| for the part of the error in Range M(e_k);
(ii) z_{k+1}^⊥ = z_k^⊥.
It is implied that increasing the iteration number reduces the norm of the error. However, three significant facts must be considered: (i) only a part of the error signal is reduced; (ii) as seen in (37), the orthogonal part of the error signal is unchanged; (iii) because the matrix M(e_k) changes from iteration to iteration, the subspace Range M(e_k) can also vary from iteration to iteration.

5. NUMERICAL EXAMPLE

To demonstrate the effectiveness of the polynomial in G^T ILC algorithm (6), consider a system plant having the transfer function

G(s) = 1 / (s + 1)^2   (41)

Using sampling time h = 0.1, the corresponding discrete-time system becomes

G(z) = (4.7 × 10^{-3} z + 4.4 × 10^{-3}) / (z^2 − 1.8097z + 0.8187)   (42)

The reference signal is selected as r(t) = e^{t/20} sin t, where t ∈ [0, 20]. The selection W = 10^{-6}I is chosen to provide numerical solutions for the evaluation of β_{k+1} at very small error values. With these particular choices of sampling time and conditions, the results are shown in figure 1 and figure 2.

The results show that the norm of error ||e_k|| and the parameters β_{k+1} of the second order system converge to zero as k → ∞, and that the convergence of the error norm is monotonic. This is due to Theorems 1 and 2, which state that monotonic convergence can be guaranteed if β_{k+1} decreases to zero. Furthermore, the results confirm the important theoretical finding in Proposition 1: as the number of polynomial terms increases, the norm of error ||e_k|| gradually decreases. It is, however, noticeable that the improvement seems to slow down.
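The behaviour reported in figures 1 and 2 can be reproduced qualitatively with a short script. This is a sketch under our own assumptions: a controllable canonical realization of the discrete plant (42), our own helper names lifted and run_ilc, and 30 trials rather than 100:

```python
import numpy as np

# Controllable canonical realization of G(z) in (42); the realization
# itself is our choice, not taken from the paper. CB = 4.7e-3 > 0.
A = np.array([[1.8097, -0.8187], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[4.7e-3, 4.4e-3]])

def lifted(A, B, C, T):
    """Lifted lower-triangular matrix of Markov parameters, as in (5)."""
    h, Ak = [], np.eye(A.shape[0])
    for _ in range(T):
        h.append((C @ Ak @ B).item())
        Ak = A @ Ak
    G = np.zeros((T, T))
    for i in range(T):
        G[i, : i + 1] = h[: i + 1][::-1]
    return G

def run_ilc(G, r, M=3, w=1e-6, trials=30):
    """Iterate the polynomial in G^T update (6) with beta from (16);
    return the history of error norms ||e_k||."""
    GGt, u, norms = G @ G.T, np.zeros(len(r)), []
    for _ in range(trials):
        e = r - G @ u
        norms.append(np.linalg.norm(e))
        cols = [e]                     # (GG^T)^{j-1} e_k, j = 1..M
        for _ in range(M - 1):
            cols.append(GGt @ cols[-1])
        K = np.column_stack(cols)
        Mk = GGt @ K                   # M(e_k) = [(GG^T)^j e_k], j = 1..M
        beta = np.linalg.solve(w * np.eye(M) + Mk.T @ Mk, Mk.T @ e)  # (16)
        u = u + G.T @ (K @ beta)       # update law (6)
    return norms

# reference r(t) = e^{t/20} sin(t) sampled with h = 0.1 on (0, 20]
t = np.arange(1, 201) * 0.1
r = np.exp(t / 20) * np.sin(t)
G = lifted(A, B, C, len(r))
hist = {M: run_ilc(G, r, M=M) for M in (1, 5)}
```

Comparing hist[1] and hist[5] reproduces the qualitative trend of figure 1: both histories are monotonically non-increasing, and M = 5 reaches a smaller error norm than M = 1 within the same number of trials.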
Fig. 1. Norm of error when the number of polynomial terms is varied.

Fig. 2. β_{k+1}, error signal e(t), output signal y(t) and input signal u(t) at the 100th iteration when M = 10.

Consider now the polynomial in G^T ILC algorithm with a non-minimum phase real plant having the transfer function

G(s) = (−0.3105s + 1.242) / (0.001871s^4 + 0.04023s^3 + 0.3186s^2 + 0.6895s + 1.242)   (43)

The reference signal is chosen as r(t) = 10 sin(2πt/6) over the time interval t ∈ [0, 6], and the sampling interval is h = 0.1. The weighting parameter is selected as W = 10^{-6}I. The results are shown in figure 3.

Fig. 3. Norm of error with different numbers of polynomial terms for the non-minimum phase practical system.

Reconsider the non-minimum phase real plant (43), changed now to the minimum phase system having the transfer function

G(s) = (0.3105s + 1.242) / (0.001871s^4 + 0.04023s^3 + 0.3186s^2 + 0.6895s + 1.242)   (44)

Under the same conditions (reference signal r(t) = 10 sin(2πt/6), t ∈ [0, 6], weighting parameter W = 10^{-6}I, sampling interval h = 0.1), the results are shown in figure 4.

Fig. 4. Norm of error with different numbers of polynomial terms for the minimum phase practical system.

The results in figures 3 and 4 show the same important fact stated in Proposition 1: when the number of polynomial terms in the polynomial in G^T ILC algorithm increases, the norm of error ||e_k|| gradually decreases. It can be inferred that increasing the number of polynomial terms increases the convergence rate. Furthermore, as shown in section 4, the error norm converges monotonically as the iteration number increases. However, the error norm for the non-minimum phase plant, shown in figure 3, converges to a nonzero limit. Comparing it with the error norm in figure 4 for the minimum phase system, the convergence rate is improved by a factor of approximately 10. This means that once the system becomes minimum phase, the convergence rate can be significantly improved.
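The comparison between the plants (43) and (44) can likewise be sketched in code. The zero-order-hold discretization, the controllable canonical realization, and all helper names are our own assumptions (the paper does not specify them); SciPy's expm is used for the discretization:

```python
import numpy as np
from scipy.linalg import expm

den = [0.001871, 0.04023, 0.3186, 0.6895, 1.242]
numerators = {"nonmin": [-0.3105, 1.242],   # plant (43)
              "min": [0.3105, 1.242]}       # plant (44)

def tf_to_ss(num, den):
    """Continuous-time controllable canonical form for num(s)/den(s)."""
    den = np.asarray(den, float)
    num = np.asarray(num, float) / den[0]
    den = den / den[0]
    n = len(den) - 1
    A = np.zeros((n, n))
    A[0, :] = -den[1:]
    A[1:, :-1] = np.eye(n - 1)
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = np.zeros((1, n)); C[0, n - len(num):] = num
    return A, B, C

def zoh(A, B, h):
    """Zero-order-hold discretization via the augmented matrix exponential."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n], M[:n, n:] = A * h, B * h
    E = expm(M)
    return E[:n, :n], E[:n, n:]

def lifted(A, B, C, T):
    """Lifted lower-triangular matrix of Markov parameters, as in (5)."""
    hh, Ak = [], np.eye(A.shape[0])
    for _ in range(T):
        hh.append((C @ Ak @ B).item())
        Ak = A @ Ak
    G = np.zeros((T, T))
    for i in range(T):
        G[i, : i + 1] = hh[: i + 1][::-1]
    return G

def run_ilc(G, r, M=3, w=1e-6, trials=100):
    """Polynomial in G^T update (6) with beta from (16); error norm history."""
    GGt, u, norms = G @ G.T, np.zeros(len(r)), []
    for _ in range(trials):
        e = r - G @ u
        norms.append(np.linalg.norm(e))
        cols = [e]
        for _ in range(M - 1):
            cols.append(GGt @ cols[-1])
        K = np.column_stack(cols)
        Mk = GGt @ K
        beta = np.linalg.solve(w * np.eye(M) + Mk.T @ Mk, Mk.T @ e)
        u = u + G.T @ (K @ beta)
    return norms

h = 0.1
t = np.arange(1, 61) * h                   # t in (0, 6]
r = 10 * np.sin(2 * np.pi * t / 6)
hist = {}
for name, num in numerators.items():
    A, B, C = tf_to_ss(num, den)
    Ad, Bd = zoh(A, B, h)
    hist[name] = run_ilc(lifted(Ad, Bd, C, len(r)), r)
```

Both histories are monotonically non-increasing, as the theory guarantees, while the minimum phase plant ends with the smaller error norm, consistent with the comparison between figures 3 and 4.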
6. CONCLUSION
In this paper, the polynomial in the transpose of the system matrix G (G^T) based iterative learning control algorithm was introduced as a new paradigm for solving the ILC problem when the original plant is a discrete-time LTI system. The algorithm uses polynomial terms in G^T as the basis of the control update and is thereby expected to improve algorithm performance markedly. The algorithm is of feedforward type and guarantees monotonic convergence of the error norm to zero. This paper has demonstrated the satisfactory performance of the proposed algorithm and illustrated that applying polynomial theory within optimization-based ILC is an alternative way to achieve perfect tracking. A convergence theory was given for this algorithm, with the conclusion that increasing the number of polynomial terms in the updating law can increase the convergence rate. It has been shown by example that as the number of polynomial terms increases, the norm of the error gradually decreases. This suggests that the polynomial in G^T ILC algorithm does have benefits in an optimization framework, but additional work is needed to improve on the results of this paper.

REFERENCES

Amann, N. and D. H. Owens (1995). Iterative learning control for discrete time systems using optimal feedback and feedforward actions. Proceedings of the 34th IEEE Conference on Decision and Control, pp. 1696–1701.
Amann, N., D. H. Owens and E. Rogers (1996). Iterative learning control using optimal feedback and feedforward actions. International Journal of Control 65, 277–293.
Arimoto, S., S. Kawamura and F. Miyasaki (1984). Bettering operations of robots by learning. Journal of Robotic Systems 1(2), 123–140.
Barton, A. D., P. L. Lewin and D. J. Brown (2000). Practical implementation of a real-time iterative learning position controller. International Journal of Control 73, 992–998.
Hatonen, J., K. L. Moore and D. H. Owens (2003). An algebraic approach to iterative learning control. International Journal of Control 77, 45–54.
Kurek, J. and M. B. Zaremba (1993). Iterative learning control synthesis based on 2-D systems theory. IEEE Transactions on Automatic Control 38, 121–125.
Moore, K. L. (2000). Iterative learning control: an expository overview. Applied and Computational Control, Signals Processing and Circuits 1, 425–488.
Norrlof, M. (2000). Iterative learning control analysis. PhD Thesis.
Oh, S. R., Z. Bien and I. H. Shu (1988). An iterative learning control method with application for the robot manipulator. IEEE Journal of Robotics and Automation 4, 508–514.
Owens, D. H. and K. Feng (2003). Parameter optimization in iterative learning control. International Journal of Control 76(11), 1059–1069.
Owens, D. H., N. Amann, E. Rogers and M. French (2000). Analysis of linear iterative learning control schemes: a 2D systems/repetitive process approach. Multidimensional Systems and Signal Processing 11, 125–177.
Uchiyama, M. (1978). Formation of high-speed motion pattern of a mechanical arm by trial. Transactions of the Society of Instrumentation and Control Engineers 19, 706–712.