A Learning Control Algorithm with Experiments on a Chopsticks Robot


D. Vassileva*, G. Boiadjiev**

*Harmonic Drive Systems Inc., Hotakamaki 1856-1, Azumino-shi, Japan (Tel: 263-83-6515; e-mail: daniela.vassileva@hds.co.jp)
**Bulgarian Academy of Sciences, Acad. G. Bonchev Str., 1113 Sofia, Bulgaria (e-mail: [email protected])

Abstract: A novel approach to learning control for robot manipulators is proposed. A real-time implementation on a five-degree-of-freedom chopsticks robot has been carried out to evaluate the performance of the repetitive controller. The repetitive controller is recommended when the desired trajectory is periodic; in such applications the proposed repetitive control scheme provides a very simple adaptation algorithm and is effective in removing periodic tracking errors.

Keywords: Learning control, stability, monotonic convergence, chopsticks robot.

1. INTRODUCTION

Repetitive (learning) control is based on the idea that the performance of a system that executes the same task multiple times can be improved by learning from previous executions. The objective of this paper is to provide a simple learning update law that incorporates error information into the control for subsequent iterations in order to improve performance.

There are several approaches to the learning algorithm. A survey of the most popular learning controllers is presented in Bristow et al. (2006). Some researchers consider algorithms with linear time-varying functions (Moore et al. (2005)), nonlinear functions (Xu et al. (2003)), and iteration-varying functions (Moore (1993)). Additionally, the number of previous iterations used to compute the learning term can be increased. When designing a learning controller, proving error convergence with each iteration is crucial. However, some researchers have argued that unstable learning algorithms can be effective if their initial behaviour quickly decreases the error (Huang et al. (1996)). These algorithms can then be said to satisfy a "practical stability" condition, because the learning can be stopped at a low error before the divergent learning transient behaviour begins.
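The basic iteration-domain idea can be sketched with a classic first-order ILC update, $u_{k+1} = u_k + \gamma e_k$ (a textbook form, not the controller proposed in this paper), applied to a simple assumed discrete plant:

```python
import numpy as np

# Illustrative sketch (textbook first-order ILC, not this paper's controller):
# u_{k+1}(i) = u_k(i) + gamma * e_k(i+1), applied to an assumed stable
# first-order discrete plant x(i+1) = a*x(i) + b*u(i), y(i) = x(i).
a, b, gamma = 0.3, 1.0, 0.5     # assumed plant and learning gain

def run_trial(u, yd):
    """Simulate one trial; return the tracking error e = yd - y."""
    x, y = 0.0, []
    for ui in u:
        y.append(x)             # output before the input acts (one-step delay)
        x = a * x + b * ui
    return yd - np.array(y)

n = 50
yd = np.sin(np.linspace(0.0, np.pi, n))   # periodic desired trajectory
u = np.zeros(n)
errs = []
for k in range(30):                       # repeat the same task 30 times
    e = run_trial(u, yd)
    errs.append(np.abs(e).max())
    u = u + gamma * np.roll(e, -1)        # shift compensates the one-step delay
print(errs[0], errs[-1])                  # the error shrinks across iterations
```

Because the same reference is tracked every trial, the error left by one execution directly corrects the input of the next, which is the mechanism the paper builds on.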

A unified approach for the synthesis and stability analysis of repetitive controllers for mechanical manipulators has been proposed in Sadegh et al. (1990). However, most of the works in that field neglect the nonlinear dependence of the manipulator parameters on the joint coordinates and rely on local linearization techniques to prove stability. Arimoto et al. (2008) have presented a learning control update law in task space for a class of redundant robots.

Our approach provides a simple learning update term, added to a feedback and feed-forward controller. Additional corrections for the robot joint positions, velocities and accelerations have also been introduced. Further, the stability of the proposed controller can be considered in a non-traditional way, by assuming the learning term is comprised in the additional joint correction terms, called for simplicity in the text Z-corrections.

2. Z-LEARNING CONTROLLER DESIGN

2.1 Z-Control method

As stated above, the learning control rejects repeating disturbances by learning from the previous trials, and ignores noise and non-repeating disturbances. Therefore the desired robot joint trajectories are periodic functions with period $T$, repeated over $N$ iterations. To reject non-repeating disturbances, a feedback controller used in combination with the learning controller is the best approach. Our proposed controller represents a combination of feed-forward, feedback and learning terms.

Let us first consider the controller without learning. As such, we have used a modified computed-torque control method with additional corrections for the joint positions and velocities, discussed in detail in Vassileva et al. (2007) and considered briefly below. Our plant dynamics are described by (1):

$$A(q,d)\ddot q + h(q,\dot q,d) + g(q,d) = \tau + N(q, T_{ext}), \qquad (1)$$

with $q$ the joint-variable vector, $\tau$ the generalized force/torque vector, $A(q,d)$ the inertia matrix, $h(q,\dot q,d)$ the Coriolis/centripetal vector, $g(q,d)$ the gravity vector, and $N(q,T_{ext})$ the external torque components, where $T_{ext}$ is the external torque measured by a strain-gage sensor mounted on the robot chopstick and $d$ denotes the robot geometric parameters. In more compact form, (1) can be written as:

$$A(q,d)\ddot q + B(q,\dot q,d) = \tau + N(q, T_{ext}) \qquad (2)$$
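To make the notation in (2) concrete, here is a minimal sketch for a single hypothetical joint (a point mass on a link; all parameters assumed, not the paper's robot model), solving (2) for the acceleration:

```python
import numpy as np

# Minimal sketch (assumed 1-DOF example, not the paper's robot model) of the
# compact dynamics (2): A(q,d)*qdd + B(q,qd,d) = tau + N(q, Text).
m, l, g0 = 0.5, 0.2, 9.81       # assumed parameters "d": mass, link length, gravity

def inertia(q):
    return m * l ** 2           # A(q,d): constant for this 1-DOF point-mass link

def bias(q, qd):
    return m * g0 * l * np.sin(q)   # B(q,qd,d): gravity only, Coriolis negligible

def forward_dynamics(q, qd, tau, n_ext=0.0):
    """Solve (2) for the joint acceleration qdd."""
    return (tau + n_ext - bias(q, qd)) / inertia(q)

# A gravity-compensating torque produces zero acceleration, as expected.
q0 = 0.3
print(forward_dynamics(q0, 0.0, bias(q0, 0.0)))
```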

The robot control input in its general form is (without learning update term):

$$\tau = A\ddot q + B - K_2(\Delta\dot q - \dot z) - K_1(\Delta q - z) - (\Delta\ddot q - \ddot z), \qquad (3)$$

where $K_1$ and $K_2$ are diagonal feedback coefficient matrices with positive elements; the conditions for choosing them have been defined in Vassileva et al. (2007). Here $z, \dot z, \ddot z$ are additional corrections for the joint positions, velocities and accelerations, derived by solving the system (4):

$$\ddot z(t) + K_2\dot z(t) + K_1 z(t) = K V(t), \quad t \in [t_{i-1}, t_i], \qquad (4)$$

where $K$ is a constant matrix whose elements play the role of weighting coefficients, and

$$V(t_i) = M_1 V_1(t_i) + M_2 V_2(t_i) + M_3 V_3(t_i),$$
$$V_1(t) = q(t) - q_d(t) = \Delta q, \quad V_2(t) = \dot q(t) - \dot q_d(t) = \Delta\dot q, \quad V_3(t) = \ddot q(t) - \ddot q_d(t) = \Delta\ddot q,$$

with $M_1, M_2, M_3$ dimensional coefficients and $q_d, \dot q_d, \ddot q_d$ the desired joint positions, velocities and accelerations. Moreover, for simplicity, in the real case we have neglected the Coriolis and centrifugal forces, as well as the additional correction term on the accelerations, so the feed-forward and feedback components of the control input become:

$$\tau = A\ddot q + g(q,d) + N(q,T_{ext}) - K_2(\Delta\dot q - \dot z) - K_1(\Delta q - z) \qquad (5)$$

The additional correction terms $z$ and $\dot z$ for the joint positions and velocities are responsible for neutralizing uncertainties due to parameter variations or inexactness in the plant dynamics (Vassileva et al. (2007)). Using the features of both terms $z$ and $\dot z$, we further propose an additional term in the controller, responsible for rejecting repeating disturbances and useful for periodic motions such as pick-and-place tasks.

Additionally, the 4th and 5th robot joints in contact motion (when manipulating an object) are controlled by a PI-type torque controller of the form:

$$\tau = \tau_d - K_p(\tau - \tau_d) - K_i \int \Delta\tau\, dt, \qquad (6)$$

where $\tau - \tau_d = \Delta\tau$, $\tau_d$ is the desired contact torque, $\tau$ is the contact torque measured by the strain gage, and $K_p$ and $K_i$ are proportional and integral positive feedback gains.

2.2 Z-Learning update

We have successfully used the additional z-correction functions in feedback mode to reject uncertain-dynamics disturbances (Vassileva et al. (2007)), and we now reuse them in designing a learning update term for our controller, to be added to the control input (5). The proposed learning update term is simple to derive, because it contains information only on the position and velocity errors from the previous iteration, together with the corrections made at the previous trial for the joint positions and velocities. In its general form it is:

$$\tau_l = -K_{L1}(\Delta q(t-T) - z(t-T)) - K_{L2}(\Delta\dot q(t-T) - \dot z(t-T)), \qquad (7)$$

where $K_{L1}$ and $K_{L2}$ are positive learning gains, $T$ is the period of the desired joint trajectory, and $t$ is the current time. Finally, the total control input provided to the plant is:

$$\tau = A\ddot q + g(q,d) + N(q,T_{ext}) - K_2(\Delta\dot q - \dot z) - K_1(\Delta q - z) + \tau_l \qquad (8)$$

It consists of feed-forward, feedback and learning terms, and in more compact form it can be written as:

$$\tau = \tau_{ff} + \tau_{fb} + \tau_l, \qquad (9)$$

where $\tau_{ff} = A\ddot q + g(q,d) + N(q,T_{ext})$ and $\tau_{fb} = -K_2(\Delta\dot q(t) - \dot z(t)) - K_1(\Delta q(t) - z(t))$.

2.3 Stability discussion

Further, we assume the desired trajectories are bounded, continuous, twice-differentiable functions of time. For the purposes of learning, the z-corrections are modified so that they comprise the learning term, i.e. in the following considerations the learning term is introduced implicitly (this is another form of (8)):

$$V_1(t) = \Delta q(t) - K_{L1}\Delta q(t-T), \quad V_2(t) = \Delta\dot q(t) - K_{L2}\Delta\dot q(t-T), \quad V_3(t) = \Delta\ddot q(t) - K_{L3}\Delta\ddot q(t-T).$$

Then again we have

$$\ddot z(t) + K_2\dot z(t) + K_1 z(t) = K_v V(t), \quad t \in [t_{i-1}, t_i], \qquad V(t_i) = M_1 V_1(t_i) + M_2 V_2(t_i) + M_3 V_3(t_i),$$

where $K_1, K_2, K_{L1}, K_{L2}, K_{L3}$ are the previously defined diagonal feedback coefficient matrices, $K_v$ is a matrix of weighting coefficients, and $M_1, M_2, M_3$ are dimensional coefficients. The coefficients $K_{L1}, K_{L2}, K_{L3}$ have to be chosen so as to assure the monotonic decrease of the number sequences $\{V_1^{(k)}\}_{t=t_k}$, $\{V_2^{(k)}\}_{t=t_k}$, $\{V_3^{(k)}\}_{t=t_k}$, i.e.

$$K_{L1} > \frac{\Delta q^{(k+1)}\big|_{t=t_{k+1}} - \Delta q^{(k)}\big|_{t=t_k}}{\Delta q^{(k+1)}(t-T)\big|_{t=t_{k+1}} - \Delta q^{(k)}(t-T)\big|_{t=t_k}}. \qquad (10)$$

Analogously, the conditions for choosing the other two coefficients are defined. Then the control input, containing the learning term implicitly in $z$, can be written as (this form is analogous to (8)):

$$\tau = \hat A\ddot q + \hat B - K_2(\Delta\dot q - \dot z) - K_1(\Delta q - z) - (\Delta\ddot q - \ddot z), \qquad (11)$$

where $\hat A$ and $\hat B$ are estimates of the dynamics model. Then the equation of the error-system dynamics (the closed-loop dynamics) can be written as:

$$(A - \hat A)\ddot q + (B - \hat B) = -K_2(\Delta\dot q - \dot z) - K_1(\Delta q - z) - (\Delta\ddot q - \ddot z)$$

or, in more compact form:

$$\ddot\varepsilon(t) + K_2\dot\varepsilon(t) + K_1\varepsilon(t) = -\Delta f(t), \qquad (12)$$

where $\varepsilon(t) = \Delta q(t) - z(t)$ and $\Delta f = (A - \hat A)\ddot q(t) + (B - \hat B)$. Then the Lyapunov function candidate is defined as:

$$V = \tfrac{1}{2}\dot\varepsilon^T\dot\varepsilon, \qquad (13)$$

which is obviously positive definite (Barbashin (1970), Demidovich (1967)). Its first derivative, along the error equation, is:

$$\dot V = -(K_2\dot\varepsilon^T\dot\varepsilon + K_1\dot\varepsilon^T\varepsilon + \dot\varepsilon^T\Delta f) \qquad (14)$$
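The closed-loop error equation (12) and the Lyapunov candidate (13) can be illustrated numerically; the scalar gains and the constant mismatch $\Delta f$ below are assumed, not taken from the paper. With stable gains, the velocity error (and hence $V$) decays, and $\varepsilon$ settles at the constant offset $-\Delta f / K_1$:

```python
# Numerical illustration (assumed scalar gains) of the error dynamics (12),
# edd + K2*ed + K1*e = -df, and the Lyapunov candidate (13), V = 0.5*ed^2.
# With stable gains and a constant model mismatch df, the velocity error dies
# out (so V decays to zero) and e settles at the constant offset -df/K1.
K1, K2, df = 100.0, 20.0, 2.0
e, ed, dt = 0.1, 0.0, 0.001
for _ in range(20000):                  # 20 s of simulated time
    edd = -df - K2 * ed - K1 * e        # equation (12)
    ed += dt * edd
    e += dt * ed                        # semi-implicit Euler step
V = 0.5 * ed * ed                       # Lyapunov candidate (13)
print(e, V)
```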

By applying the Cauchy-Schwarz-Bunyakovsky inequality, we obtain the upper bounds $\dot\varepsilon^T\dot\varepsilon \le \|\dot\varepsilon\|^2$ and $\dot\varepsilon^T\varepsilon \le \|\dot\varepsilon\|\,\|\varepsilon\|$, i.e. the first two components in (14) are bounded. To assure that $\dot V$ is negative, it is enough to consider the term in the brackets in (14). After some transformations we have:

$$K_2\dot\varepsilon^T\dot\varepsilon + K_1\dot\varepsilon^T\varepsilon + \dot\varepsilon^T\Delta f \ \ge\ K_2\dot\varepsilon^T\dot\varepsilon - \{K_1\|\dot\varepsilon\|\,\|\varepsilon\| + \|\dot\varepsilon\|\,[\|A - \hat A\|\,\|\ddot q\| + \|B\| + \|\hat B\|]\} \ \ge\ 0,$$

if $K_2$ is chosen to satisfy the condition $K_2 \ge D / (\dot\varepsilon^T\dot\varepsilon)$, where

$$D = K_1\|\dot\varepsilon\|\,\|\varepsilon\| + \|\dot\varepsilon\|\,[\|A - \hat A\|\,\|\ddot q\| + \|B\| + \|\hat B\|].$$

This means the term in the brackets of (14) is non-negative, so the first derivative of $V$ is non-positive, i.e. $\dot V \le 0$, and the motion is stable according to Lyapunov's theorem. A large $K_2$ helps satisfy the stability condition.

Further, let us discuss the existence of bounds on $\varepsilon$ and $\dot\varepsilon$, and analyse $\Delta f$. It can be written as:

$$\dot\varepsilon^T\Delta f = \dot\varepsilon^T[(A - \hat A)\ddot q(t) + (B - \hat B)] \le \|\dot\varepsilon\|\,\|(A - \hat A)\ddot q(t) + (B - \hat B)\|,$$
$$\|(A - \hat A)\ddot q(t) + (B - \hat B)\| \le \|A - \hat A\|\,\|\ddot q(t)\| + \|B\| + \|\hat B\|.$$

It is known that the inertia matrix is bounded above and below, and a bound on the gravity term may be derived for any given robot (Lewis et al. (2004)). Let us discuss the boundedness of the velocities. We apply Theorem 1 (see Appendix A) on the intervals where the functions $q(t)$ have constant convexity. The theorem states that the derivative of a smooth function with constant convexity on an interval $(a,b)$ is bounded; hence the first derivative $\dot q(t)$ is bounded on these intervals. In the isolated cases where the convexity changes, the function has a local extremum or an inflection, but at these points $\ddot q$ is zero and the right-hand side of (12) becomes the constant $(B - \hat B)$. It is known that the solutions of a differential equation with positive constant coefficients (i.e. with characteristic roots having negative real parts) and a constant particular solution (coming from the constant right-hand side) are bounded functions. Choosing the learning feedback coefficients according to (10), we assure monotonic convergence of the number series $V(t_i)$, i.e. the z-corrections are bounded; the same holds for $\varepsilon$ and for the error trajectory. Using the system dynamics, we can express the acceleration as a function of bounded terms, i.e. $\ddot q$ is also bounded:

$$\ddot q = [\hat A - A - I]^{-1}\{-K_1(\Delta q(t) - z(t)) - K_2(\Delta\dot q(t) - \dot z(t)) - (\ddot q_d - \ddot z) - \hat B + B\}$$

The boundedness of the acceleration can alternatively be proved by using Theorem 2 (Appendix B). Using the above evaluation of the velocity bounds on the intervals where the convexity of the function has a constant sign, the whole interval of motion can be considered as a union of subintervals on which the condition of Theorem 2 holds; the only exceptions are the extremum and inflection points, where the second derivative is zero, i.e. the velocity is constant with a fixed value. As a result of the above reasoning, (13) is a Lyapunov function if the conditions on the choice of the feedback coefficients are satisfied, and then the system is stable in the sense of Lyapunov.

Further, if we take into account the external forces acting on the system, we can express the system dynamics as:

$$A\ddot q + g = \tau + J^T F_{ext}, \qquad (15)$$

where $F_{ext}$ are the external forces and $J$ is the Jacobian matrix. The Jacobian also gives the relationship between the generalized and absolute velocities and accelerations:

$$J\ddot q + \dot J\dot q = \ddot x \qquad (16)$$

From (16) we can conclude that $\ddot x$ is bounded, being expressed as a function of bounded terms ($q(t)$ appears in $J$ through sine and cosine terms, whose magnitudes are bounded by 1). From (15) we have:

$$\ddot q = A^{-1}(\tau - g + J^T F_{ext}) \qquad (17)$$

By substituting (17) into (16) we obtain:

$$F_{ext} = (JA^{-1}J^T)^{-1}\{\ddot x - \dot J\dot q - JA^{-1}(\tau - g)\} \qquad (18)$$

The external forces are thus expressed as a function of bounded terms, i.e. $F_{ext}$ is bounded. So we have proved that the system is stable under an appropriate choice of the feedback gain coefficients.

3. THE CONTROLLED ROBOT

The proposed learning control method has been implemented on the 5-d.o.f. chopsticks robot shown in Fig. 1. The robot weighs 1.6 kg. The joint angle ranges and the output joint torques are given in Table 1.

Fig. 1. The controlled chopsticks robot

Table 1. Joint ranges and output torques

Joint No   Range [deg]   Output torque [Nm]
1          -130/110      11.7
2          -80/28        4.9
3          ±130          0.75
4          -85/55        0.75
5          -200/13       0.75

The PC-based real-time control system operates at a 1 ms sampling time. It was developed using MATLAB 2007a, SIMULINK, xPC Target and Real-Time Workshop. To measure the load on the chopsticks when manipulating an object, a strain-gage sensor has been mounted on the lower stick; the strain-gage signal is amplified through an amplifier with a cut-off frequency of 300 Hz. All 5 joints use harmonic-drive motors.
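For illustration, the total control input (9) with the learning term (7), evaluated once per control sample, can be sketched as below. The gains and the error values are assumed for the example; this is not the authors' MATLAB/xPC implementation:

```python
import numpy as np

# Per-sample sketch (illustrative, not the authors' xPC Target code) of the
# total control input (9): tau = tau_ff + tau_fb + tau_l, with the learning
# term (7) built from errors and z-corrections logged one period T earlier.
K1  = np.diag([100.0, 100.0])       # assumed feedback gain matrices
K2  = np.diag([20.0, 20.0])
KL1 = np.diag([0.2, 0.2])           # positive learning gains, as in (7)
KL2 = np.diag([0.2, 0.2])

def learning_term(dq_T, z_T, dqd_T, zd_T):
    # Equation (7), using quantities stored at time t - T (previous iteration).
    return -KL1 @ (dq_T - z_T) - KL2 @ (dqd_T - zd_T)

def total_control(tau_ff, dq, z, dqd, zd, tau_l):
    # Feedback part of (5) plus feed-forward and learning terms, i.e. (8)/(9).
    tau_fb = -K2 @ (dqd - zd) - K1 @ (dq - z)
    return tau_ff + tau_fb + tau_l

tau_l = learning_term(np.array([0.01, -0.02]), np.zeros(2),
                      np.array([0.10, 0.00]), np.zeros(2))
tau = total_control(np.zeros(2), np.array([0.005, -0.010]), np.zeros(2),
                    np.array([0.050, 0.000]), np.zeros(2), tau_l)
print(tau)   # feedback dominates; the learning term adds a small correction
```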

4. EXPERIMENTAL RESULTS

The Chopsticks robot can be used to perform pick-and-place tasks by manipulating objects of different sizes and materials. Fig. 2 shows a repetitive pick-and-place task with soy beans.

Fig. 2. Pick and place of soy beans.

The desired trajectories for the joints are periodic, because the motion is repetitive. The first 3 joints are controlled using (9), and the last 2 joints in contact tasks are controlled using (6). For the soy-bean pick-and-place task, the torque-controller feedback gains for the 4th and 5th joints were selected as $K_p = 2.3$, $K_i = 0.0115$, and the desired contact torque is 0.04 Nm. The desired and measured torque from the strain gage for a ten-repetition pick-and-place task is shown in Fig. 3; a single iteration is shown in Fig. 4.

Fig. 3. Ten iterations of pick and place of soy beans using the Chopsticks robot: measured contact torque.

Fig. 4. Measured contact torque at the 1st iteration.

Further, because of the lack of space, we will discuss only the results for the 2nd and 3rd joints. The desired trajectory for the 3rd joint is presented in Fig. 5.

Fig. 5. Desired trajectory for the 3rd joint, 5 iterations.

For the experiments the learning term has been considered in three different forms: the form (7); the case $K_{L1} = K_{L2}$; and the form filtered through a low-pass filter (19):

$$\tau_l = Q\{-K_{L1}(\Delta q(t-T) - z(t-T)) - K_{L2}(\Delta\dot q(t-T) - \dot z(t-T))\} \qquad (19)$$

According to some works (Bristow et al. (2006)), the low-pass filter can be used to add robustness, filter high-frequency noise and stabilize the system. In equation (19), $Q$ is a low-pass filter (for example Chebyshev, Gaussian or Butterworth). However, when $Q$ is used, the algorithm cannot converge to zero error (Norrlof et al. (2002)).

The robot has been driven for 50 iterations. The 2nd and 3rd joint speeds are the same; their measured and desired velocities are shown in Fig. 6.

Fig. 6. Measured and desired 2nd and 3rd joint velocities.
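One simple realization of the filter $Q$ in (19) is a first-order recursion. The sketch below uses illustrative coefficients (unit DC gain), not the paper's tuned 10 Hz design, and shows how such a filter attenuates noise on a learning-term sample while keeping its constant part:

```python
import numpy as np

# Sketch of one simple realization of the filter Q in (19): a first-order
# recursion y(i) = c0*u(i) + c1*u(i-1) - c2*y(i-1). The coefficients below are
# illustrative (unit DC gain, pole at 0.8), not the paper's 10 Hz design.
c0, c1, c2 = 0.1, 0.1, -0.8

def lowpass(u):
    y, y_prev, u_prev = [], 0.0, 0.0
    for ui in u:
        yi = c0 * ui + c1 * u_prev - c2 * y_prev
        y.append(yi)
        u_prev, y_prev = ui, yi
    return np.array(y)

# A noisy constant learning-term sample: Q keeps the DC part, damps the noise.
rng = np.random.default_rng(0)
u = 1.0 + 0.5 * rng.standard_normal(5000)
y = lowpass(u)
print(y[-1000:].mean(), y[-1000:].std())
```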

The 2nd joint has been driven in slower motion (max speed 0.8 rad/s) and in faster motion (max speed 1.2 rad/s). The tracking error for three different cases (without learning, with learning update $K_{L1} = K_{L2} = 0.2$, and with $K_{L1} = K_{L2} = 0.3$) in the slow-motion case, over 10 iterations, is shown in Fig. 7.

Fig. 7. Tracking error, 2nd joint, with and without learning.

As seen from Fig. 7, after the first iteration the tracking error decreases when the learning update term is used in the control input. Without learning, controlling with (5), the tracking error remains the same over all 10 trials (black). For faster motion, the tracking errors, as well as the maximum error variation with each iteration, are visualized in Fig. 8 and Fig. 9, respectively.

Fig. 8. Tracking error, 2nd joint, faster motion.

The result presented in Fig. 8 corresponds to the case of two different learning gains, $K_{L1} = 2$, $K_{L2} = 0.1$. We have also controlled using a low-pass filter of the form $y(i) = c_0 u(i) + c_1 u(i-1) - c_2 y(i-1)$ with a cut-off frequency of 10 Hz. As seen from Fig. 8, the tracking error has decreased since the 2nd iteration and is smaller compared with the 1st one; the error is also less oscillatory.

Fig. 9. Max error variation, 2nd joint, 50 iterations.

The maximum tracking error variation

$$e^{l}_{max} = \max_{t\in[0,T]}\left|q^{l}(t) - q^{l}_{d}(t)\right|, \quad l = 1,\dots,50,$$

over 50 iterations for the 2nd joint is visualized in Fig. 9. Two cases were considered: first, $K_{L1} = 2$, $K_{L2} = 0.1$ (green), and second, $K_{L1} = K_{L2} = 0.2$ (black). As seen from the result in Fig. 9, choosing both learning gains equal provides good performance of the algorithm. In both cases the max error is not monotonically convergent to zero, but it is smaller than the max error of the 1st iteration and never exceeds it. Similar results were obtained for the 3rd joint and are presented below.

Fig. 10. Tracking error, 3rd joint.

Fig. 10 presents the tracking error for the 3rd joint with learning update (green) and without learning (blue) for 10 iterations. As seen from the figure, the error decreases from the second loop onward and remains within a predetermined zone. The learning gains were selected as $K_{L1} = K_{L2} = 0.35$.

Fig. 11. Tracking error, 3rd joint, 50 iterations.

Fig. 11 presents the tracking error for the 3rd joint over 50 iterations, with the learning gains selected as $K_{L1} = 0.81$, $K_{L2} = 0.16$. The result is similar to the previous one.

Fig. 12. Max error variation, 3rd joint, 50 iterations.

The maximum error variation over 50 iterations, with the learning term (red) and without it (black), is shown in Fig. 12. With learning, the error has been decreased by 17% for the 3rd joint and by 25% for the 2nd joint, and the performance of the system has been improved.

When the robot was controlled without the learning update term, we used (5), with the feedback controller gains selected as shown in Table 2 ($K_1 = K_2^2/4$). The same values have been used in the learning case.

Table 2. Feedback gains

Joint nb.   K     M1    M2   K2
1           300   350   1    1
2           280   280   1    1
3           300   200   1    1
4           300   250   1    1
5           300   200   1    1
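The per-iteration maximum-error metric plotted in Figs. 9 and 12 can be computed from logged joint trajectories as sketched below; `max_errors` is a hypothetical helper and the data are synthetic, not the paper's measurements:

```python
import numpy as np

# Sketch of the per-iteration maximum tracking error used in Figs. 9 and 12,
# e_max^l = max over t in [0,T] of |q^l(t) - q_d(t)|. The helper and the
# synthetic logged data below are illustrative, not the paper's measurements.
def max_errors(q_iters, q_des):
    """q_iters: (n_iterations, n_samples) logged joint positions, one row per
    trial; q_des: (n_samples,) desired trajectory over one period T."""
    return np.abs(q_iters - q_des).max(axis=1)

t = np.linspace(0.0, 1.0, 500)
q_des = 0.5 * np.sin(2.0 * np.pi * t)
# Synthetic trials whose error shrinks each iteration, as learning would give.
q_iters = np.stack([q_des + 0.01 / (l + 1) * np.cos(2.0 * np.pi * t)
                    for l in range(5)])
print(max_errors(q_iters, q_des))
```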

As seen from the experiments, the obtained results are in close agreement with the theoretical ones.

5. CONCLUSIONS

A learning controller has been proposed for repetitive tasks of robot manipulators. The conditions assuring system stability, related to the choice of the feedback gain coefficients, have been discussed, and the efficiency of the controller has been shown through real experiments on a chopsticks robot. As seen from the obtained results, the learning update leads to a fast error decrease from the second iteration onward and increases the system accuracy and performance compared to control without learning. The experimental results confirm the theoretical ones. In future work it would be interesting to analyse the quantitative characteristics of the convergence rate, i.e. the speed of convergence, which would lead to faster achievement of accuracy.

ACKNOWLEDGEMENT

The authors would like to thank Haruhisa Kawasaki, Tetsuya Mouri and Hiroshi Fujii for their helpful comments.

REFERENCES

Anchev A., Lilov L. (1981). Stability of the Motion. Nauka i izkustvo, Sofia.
Arimoto S., Sekimoto M., Kawamura S. (2008). Task-space iterative learning for redundant robotic systems: existence of a task-space control and convergence of learning. SICE Journal of Control, Measurement, and System Integration, vol. 1, no. 4, pp. 312-319.
Barbashin E. A. (1970). Lyapunov Functions. Nauka, Moscow.
Bristow D. A., Tharayil M., Alleyne A. G. (2006). A survey of iterative learning control. IEEE Control Systems Magazine, June 2006, pp. 96-114.
Demidovich B. P. (1967). Lectures on Mathematical Stability Theory. Nauka, Moscow.
Heinzinger G., Fenwick D., Paden B., Miyazaki F. (1989). Robust learning control. Proc. of the 28th Conf. on Decision and Control, Florida, pp. 436-440.
Huang Y.-C., Longman R. W. (1996). The source of the often observed property of initial convergence followed by divergence in learning and repetitive control. Advances in the Astronautical Sciences, vol. 90, no. 1, pp. 555-572.
Lewis F., Dawson D., Abdallah Ch. (2004). Robot Manipulator Control: Theory and Practice. Marcel Dekker, New York.
Moore K. L. (1993). Iterative Learning Control for Deterministic Systems. Springer-Verlag, London.
Moore K. L., Chen Y., Bahl V. (2005). Monotonically convergent iterative learning control for linear discrete-time systems. Automatica, vol. 41, no. 9, pp. 1529-1537.
Norrlof M., Gunnarsson S. (2002). Experimental comparison of some classical iterative learning control algorithms. IEEE Trans. on Robotics and Automation, vol. 18, no. 4, pp. 636-641.
Sadegh N., Horowitz R., Kao W.-W., Tomizuka M. (1990). A unified approach to the design of adaptive and repetitive controllers for robotic manipulators. Transactions of the ASME, vol. 112, pp. 618-627.
Vassileva D., Boiadjiev G., Kawasaki H., Mouri T. (2007). Application of the servo-control method with standard corrections for robot-manipulators control. Proc. of ICMA, China, pp. 3238-3243.
Xu J.-X., Tan Y. (2003). Linear and Nonlinear Iterative Learning Control. Springer, Berlin.

Appendix A. THEOREM 1

Theorem for derivative evaluation. For a function $f(x)$ smooth on an interval $[a,b]$, differentiable in $(a,b)$, and of constant convexity, the derivative is bounded; i.e. for every point $c \in (a,b)$ there exists a constant $L_c$ such that $|f'(c)| \le L_c$.

Proof. Let $c \in (a,b)$. The tangent line to the function at the point $(c, f(c))$ has the equation $y = f'(c)(x - c) + f(c)$. Because of the constant convexity, a number $h$ exists such that one of the lines $y = f'(c)(x - c) + f(c) \pm h$ crosses the graph of the function in at least two points $\alpha$ and $\beta$. The line passing through $f(\alpha)$ and $f(\beta)$ is parallel to $y = f'(c)(x - c) + f(c)$, and its slope is $[f(\beta) - f(\alpha)]/(\beta - \alpha)$. Considering the smoothness of the function on $[a,b]$, the inequality

$$|f'(c)| = \frac{|f(\beta) - f(\alpha)|}{|\beta - \alpha|} \le \frac{[f_{max} - f_{min}]_{x\in[\alpha,\beta]}}{|\beta - \alpha|} = L^* = const \le L_c$$

holds, where $L_c$ can be chosen with $L_c \ge L^*$.

Consequence. Every vector function $y = y(y_1, \dots, y_n)$ whose coordinates, as scalar functions, satisfy the conditions of the theorem above also has a bounded derivative. Indeed, let the vector function have $n$ coordinates, each a scalar function satisfying the conditions of the theorem. Then for each of them a constant $L_c^{(i)}$ exists such that $|\dot y_i| \le L_c^{(i)}$. If we choose $\tilde L = \max\{L_c^{(i)}\}$, $i = 1, \dots, n$, then $|\dot y| \le \tilde L$.

Appendix B. THEOREM 2

Theorem for the second-derivative evaluation. For a function $f(x)$ smooth on $[a,b]$, differentiable in $(a,b)$, of constant convexity, and whose first derivative is smooth in $(a,b)$, the second derivative is bounded; i.e. for every point $c \in (a,b)$ there exists a constant $L_s$ such that $|f''(c)| \le L_s$. The proof of the theorem is omitted for lack of space.

Finally, let us recall that in stability theory the system motions are understood as smooth functions of time, with their partial derivatives also smooth and differentiable functions.