Terminal neural computing: Finite-time convergence and its applications

Ying Kong a,*, Hui-juan Lu b, Yu Xue d, Hai-xia Xia c

a School of Information and Electronic Engineering, Zhejiang University of Science and Technology, China
b School of Information Engineering, China Jiliang University, China
c College of Informatics, Zhejiang Sci-Tech University, China
d Nanjing University of Information Science and Technology, China
Article history: Received 23 October 2015; Received in revised form 11 May 2016; Accepted 15 May 2016

Abstract
This paper discusses how terminal neural networks (TNN) enhance the convergence behavior of asymptotic ones. The terminal attraction of the underlying matrix differential equations is analyzed, and the results show that the method guarantees that the network error converges to zero within a finite time. The terminal neural network is then applied to time-varying matrix inversion and to the trajectory planning of redundant manipulators. A typical example is a planar manipulator whose end-effector traces a closed path while the joint variables return to their initial values, making the motion repeatable. Simulation results confirm the validity and superiority of the terminal neural method.
© 2016 Elsevier B.V. All rights reserved.
Keywords: Finite-time convergence; Neural networks; Time-varying matrices; Redundant manipulators; Trajectory planning
* Corresponding author. E-mail address: [email protected] (Y. Kong).

1. Introduction

The Sylvester equation is commonly encountered in mathematics and control theory [1] and finds applications in linear least-squares regression [2], disturbance decoupling [3], eigenstructure assignment [4,5], and related problems. Considerable effort has been devoted to solution algorithms for such matrix equations because of their fundamental role, and many parallel-processing computational schemes, including various recurrent neural networks (RNN), have been developed and analyzed. The RNN approach is an effective alternative for real-time computation and optimization in redundancy-resolution problems, owing to its parallel-processing nature and its convenience for hardware implementation. Moreover, robots are profitable in many fields of science and engineering, which motivates developers to enhance the functionality and flexibility of robot manipulators. A robot manipulator is redundant when more degrees of freedom (DOF) are available than the minimum required for a given end-effector main task [6,7]. Redundant manipulators therefore have a wider operational space, and the extra DOF can accommodate additional functional constraints. RNN have been studied for the real-time solution of such redundancy-resolution problems [8,9]. In recent years, the Zhang neural network (ZNN) has been proposed and applied to robot kinematic planning [10].

Recurrent neural networks have been studied thoroughly and widely in numerous scientific fields, especially after the discovery of the famous Hopfield neural network [11], which was initially designed for real-time processing; since then recurrent neural networks have become increasingly popular. Various gradient-based recurrent neural networks have been proposed for solving the Sylvester equation. These methods adopt the norm of the error matrix as a performance index, and the gradient-based design drives the error norm to zero over time in the time-invariant case. However, most problems arising in scientific areas are intrinsically time-varying, and in that situation the error norm cannot be zeroed even as time tends to infinity. Fortunately, the Zhang neural network avoids such lagging errors and guarantees exact convergence to the time-varying solution of a time-varying problem in an error-free manner; the ZNN may also be called an asymptotic neural network (ANN). Zak regarded terminal attractors as an activation mechanism and introduced them into neural networks [12], and finite-time neural networks have been applied to time-varying matrix inversion. Since normal activation functions have a wider application background, the terminal neural network (TNN) proposed here exhibits finite-time characteristics: the error vanishes at a finite instant. To the best of our knowledge, it provides an effective neural solution to the time-varying Sylvester equation as well as to the inverse-kinematics problem of redundant robot manipulators [13–15].
To date, few works provide a finite-time neural solution to the time-varying Sylvester equation. The main contributions of this paper are the following: (i) To solve the time-varying quadratic problem, a finite-time convergent neural network is designed based on a vector-valued error function, which yields the exact solution after the convergence instant. (ii) A novel repeatable robot-manipulator scheme at the joint-velocity level is presented and studied to reduce the joint-angle drift phenomenon. To the best of our knowledge, such a terminal repeatable kinematic scheme, which returns the joint angles of the manipulator to their initial positions in finite time, has not been investigated by other researchers. (iii) TNN and ANN are analyzed and compared. Computer simulations on a planar three-link redundant manipulator under the velocity-level scheme show good results. An illustrative comparison example is presented in which TNN models solve the time-varying problems, and the velocity minimization using the terminal attractors of the proposed TNN proves effective.

The remainder of this paper is organized as follows. In Section 2, the terminal neural network is presented. Section 3 analyzes the terminal network theoretically. In Section 4, a time-varying matrix-inversion example is used to compare the performance of the asymptotic neural network (ANN) with that of the TNN. In Section 5, the terminal neural network is adopted to solve the joint-angle drift problem of a three-link planar redundant robot arm. Section 6 concludes the paper.
2. Terminal neural network

First, for a time-varying matrix A(t) = (A_ij(t)) ∈ R^{n×n}, each entry A_ij(·): R+ → R, 1 ≤ i, j ≤ n. Throughout this paper the following notation is used: for a positive constant α, A^α(t) = (A_ij^α(t)); the matrix derivative is taken entry-wise, dA(t)/dt = (Ȧ_ij(t)); and for matrices the integral is likewise defined entry-wise, i.e., ∫_a^b A(t) dt = (∫_a^b A_ij(t) dt).

Consider the following differential equation governed by

dE(t)/dt = −γS(E(t))   (1)

where γ > 0 is a positive constant, E(t) ∈ R^{n×n} is a time-varying matrix, and S(·): R^{n×n} → R^{n×n} is an activation function defined entry-wise by S(E(t)) = (S(E_ij(t))).

We choose the Lyapunov candidate V(t) = (1/2)E^2(t). The derivative of V(t) can be calculated as

dV(t)/dt = E(t) ⊙ dE(t)/dt = −γE(t) ⊙ S(E(t)) = (−γE_ij(t) S(E_ij(t)))

where the symbol ⊙ denotes the Hadamard product of two matrices. E(t) ⊙ S(E(t)) must be positive to guarantee the convergence of E(t); usually S(·) is chosen as a monotonically increasing odd function, satisfying S(−·) = −S(·). Different choices of the activation function S(·) lead to different convergence performance of Formula (1).

Consider the following TNN governed by

dE(t)/dt = −γS(E^α(t))   (2)

where α > 0; when α = 1, Formula (2) is equivalently written as (1). Different from the sign-bi-power approach, which treats (·)^α as a kind of activation function, the neural network (2) is a dynamic system (in fact, a terminal-attractor dynamic system) [14].

Generally speaking, any monotonically increasing odd activation function S(·) can be adopted in model (2) to construct the neural network. However, different choices of the activation function lead to different convergence processes. Three types of functions are usually used:

(1) the linear function S(u) = u;
(2) the power function S(u) = u^p with odd integer p ≥ 3;
(3) the bipolar sigmoid function S(u) = (1 − exp(−ξu))/(1 + exp(−ξu)) with ξ ≥ 1.

The linear activation function S(·) = · is used most often.

The detailed theoretical analysis of TNN model (2) is presented here. We again choose the Lyapunov candidate V(t) = (1/2)E^2(t), whose derivative is

dV(t)/dt = E(t) ⊙ dE(t)/dt = −γE(t) ⊙ S(E^α(t))

where E(t) ⊙ S(E^α(t)) must be positive, and S(·) is chosen as a monotonically increasing odd function. Accordingly, α can be set as:

(1) if E(t) ≥ Λ, then α = q1/p1, where q1 and p1 are positive odd numbers satisfying q1 ≥ p1;
(2) if E(t) < Λ, then α = q2/p2, where q2 and p2 are positive odd numbers satisfying q2 < p2.

Here Λ denotes an appropriately dimensioned identity matrix. The common form of the TNN is then given as

dE(t)/dt = −γS(α1 E^{q1/p1}(t) + α2 E^{q2/p2}(t))   (3)

where γ, α1, α2 > 0 and q1, p1, q2, p2 are positive odd numbers with q1 ≥ p1 and q2 < p2. In this paper we adopt the linear activation function S(E) = E, with q1 = p1, q2 = q, p2 = p.

3. Finite-time convergence analysis

When the linear activation function S(E) = E is used, the dynamic neural network (1) yields

E(t) = E(0) e^{−γt}   (4)

which means that E(t) → 0 globally and exponentially as t → ∞. The design formula (4) can thus be called an asymptotically convergent neural network (ANN), with the constant γ a positive design parameter that scales the convergence rate.
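As a quick numerical check (ours, not part of the original paper), the following Python sketch Euler-integrates the ANN dynamics (1) with the linear activation S(E) = E and compares the result against the closed-form solution (4); γ, the step size, and the horizon are illustrative values.

```python
import numpy as np

# Sketch: forward-Euler integration of the ANN dE/dt = -gamma*E (Eq. (1)
# with linear activation), checked against E(t) = E(0)*exp(-gamma*t) (Eq. (4)).
gamma, dt, T = 1.0, 1e-4, 5.0              # illustrative values
E0 = np.array([[1.0, -0.5], [0.3, 2.0]])   # arbitrary initial error E(0)
E = E0.copy()
for _ in range(int(T / dt)):
    E = E + dt * (-gamma * E)              # Euler step of Eq. (1)

# Deviation from the analytic solution: ~0 up to discretization error.
print(np.max(np.abs(E - E0 * np.exp(-gamma * T))))
```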
In order to improve the convergence rate, we introduce terminal attraction and propose a terminal neural network (TNN), which makes the error E(t) converge to zero in finite time. Consider the following matrix differential equation governed by

Ė(t) = −γE^{q/p}(t)   (5)

where γ > 0, and q and p are positive odd numbers satisfying q < p. Eq. (5) can be rewritten as

dt = −(1/γ) E^{−q/p}(t) ⊙ dE(t)

Integrating both sides yields

∫_0^t dt = −(1/γ) ∫_{E(0)}^{0} E^{−q/p}(t) ⊙ dE(t)

Hence, starting from any initial value E(0) > 0, E(t) converges to the equilibrium E(t) = 0, and the convergence instant can be calculated as

t_f = p/(γ(p − q)) · E^{(p−q)/p}(0)   (6)

Observing (6), it can be found that when E(t) is close to the equilibrium state E(t) = 0, the convergence rate is fast; but when E(t) is far from zero, the convergence rate becomes even slower than that of the ANN. We therefore adopt the following accelerated terminal neural network (ATNN):

Ė(t) = −γ(α1 E(t) + α2 E^{q/p}(t))   (7)

Setting β1 = γα1 and β2 = γα2 with β1 > 0 and β2 > 0, Eq. (7) is a simplified form of Formula (3) and can be rewritten as

E^{−q/p}(t) ⊙ dE(t)/dt + β1 E^{1−q/p}(t) = −β2   (8)

Setting Y(t) = E^{1−q/p}(t), then

dY(t)/dt = ((p − q)/p) E^{−q/p}(t) ⊙ dE(t)/dt   (9)

Substituting (8) into (9):

dY(t)/dt + ((p − q)/p) β1 Y(t) = −((p − q)/p) β2   (10)

The general solution of the differential equation (10) is

Y(t) = (β2/β1 + Y(0)) e^{−((p−q)/p) β1 t} − β2/β1

Setting Y(t_f) = 0, then

(β2/β1 + Y(0)) e^{−((p−q)/p) β1 t_f} = β2/β1

Hence, starting from any initial value E(0) > 0, the converging time can be calculated as

t_f = p/(β1(p − q)) · ln((β1 E^{(p−q)/p}(0) + β2)/β2)   (11)

Observing (7), one more term appears on the right-hand side compared with (5), which yields a fast convergence rate when p, q, β1, β2 are set suitably. When E(t) is far from zero, the convergence is dominated by the first term β1 E(t); when E(t) is close to the equilibrium state E(t) = 0, it is dominated by the second term β2 E^{q/p}(t). The design formula (7) of the accelerated terminal neural network therefore shortens the convergence time and achieves a superior convergence result.
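The convergence instants (6) and (11) can be verified numerically. The sketch below is ours (not the authors' code): it Euler-integrates the scalar versions of TNN (5) and ATNN (7) with illustrative parameters γ = 1, q = 3, p = 5, β1 = β2 = 1, and compares the simulated zero-crossing times with the predictions.

```python
import numpy as np

# Sketch: scalar TNN (5) vs. ATNN (7), checked against Eqs. (6) and (11).
gamma, q, p = 1.0, 3.0, 5.0
beta1 = beta2 = gamma                 # beta1 = gamma*alpha1 with alpha1 = alpha2 = 1
E0, dt = 2.0, 1e-5

tf_tnn = p / (gamma * (p - q)) * E0 ** ((p - q) / p)                 # Eq. (6)
tf_atnn = (p / (beta1 * (p - q))
           * np.log((beta1 * E0 ** ((p - q) / p) + beta2) / beta2))  # Eq. (11)

def first_zero(rhs, E, t_max=10.0):
    """Euler-integrate dE/dt = rhs(E) and return the first time E reaches 0."""
    t = 0.0
    while E > 1e-12 and t < t_max:
        E += dt * rhs(E)
        t += dt
    return t

t_tnn = first_zero(lambda E: -gamma * max(E, 0.0) ** (q / p), E0)
t_atnn = first_zero(lambda E: -gamma * (E + max(E, 0.0) ** (q / p)), E0)
print(f"TNN:  predicted {tf_tnn:.3f} s, simulated {t_tnn:.3f} s")
print(f"ATNN: predicted {tf_atnn:.3f} s, simulated {t_atnn:.3f} s")
```

With E(0) = 2 the ATNN reaches zero noticeably earlier than the TNN, consistent with the discussion of (7) above.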
4. Time-varying matrix inversion

As an illustrative example, we consider the inversion of a time-varying matrix A(t) ∈ R^{n×n}. Let X*(t) ∈ R^{n×n} denote its inverse, X*(t) = A^{−1}(t), t ∈ [0, +∞), so that

A(t) X*(t) = I_n   (12)

where I_n denotes the n × n identity matrix. In this case, the analytical solution of the Sylvester equation is X*(t) = A^{−1}(t), and for every instant t, X*(t) can be obtained from (12). In contrast to the conventional gradient-based neural networks (GNN), Zhang exploited a time-varying dynamical neural network that methodologically solves the time-varying matrix-inversion problem [15]. The Zhang neural network is depicted by Formula (1), which means that as t → ∞, E(t) → 0 and X(t) → X*(t). We propose the terminal neural network to accelerate the Zhang neural network to finite-time convergence to the theoretical solution of the time-varying Sylvester equation, X(t) = X*(t). Consider the following solvable system of time-varying equations and define the error matrix

E(t) = A(t) X(t) − I_n   (13)

According to (13), whenever E(t) = 0, X(t) becomes the theoretical value of the inverse of A(t), i.e., X(t) = A^{−1}(t). On the basis of the definition of E(t), (5) can be rewritten as

−γ(A(t)X(t) − I_n)^{q/p} = Ȧ(t)X(t) + A(t)Ẋ(t)   (14)

where Ȧ(t) and Ẋ(t) are the derivatives of A(t) and X(t). At the same time, the accelerated terminal neural network (7) for solving the matrix inversion can be given as

−γ(β1(A(t)X(t) − I_n) + β2(A(t)X(t) − I_n)^{q/p}) = Ȧ(t)X(t) + A(t)Ẋ(t)   (15)

Remark. Given a time-varying, continuously differentiable, nonsingular matrix A(t) ∈ R^{n×n}, under the error dynamics (14) and (15), X(t) always converges to the inverse of A(t) in finite time, and the dynamical model (15) achieves a superior convergence rate to zero compared with (14).

For illustrative and comparative purposes, let us define the coefficient matrix as

A(t) = [cos(t)  sin(t); −sin(t)  cos(t)]   (16)

Since the above A(t) is a unitary (rotation) matrix, we can readily write out its time-varying inverse as

X*(t) = A^{−1}(t) = [cos(t)  −sin(t); sin(t)  cos(t)]   (17)

The initial value can be set as X(0) = [1 1; 1 1]. Applying the two neural network models to this inverse-matrix problem, we obtain X(t) = A^{−1}(t) in finite time. Compared with the ANN model for solving online systems of time-varying matrix inversion, the terminal neural network models (14) and (15) achieve a superior convergence effect.

4.1. Terminal neural network

The overall system model of TNN (14) is depicted in Fig. 1, where q = 3 and p = 5. As seen from Fig. 2, starting from a randomly generated initial state X(0), X(t) converges to the theoretical inverse A^{−1}(t). To accelerate the neural-network convergence, we increase the value of γ; as shown in Fig. 3, the convergence time is then within 0.03 s. In contrast, as shown in Fig. 4, the state matrix X(t) of the ANN does not fit the theoretical value A^{−1}(t) well.
Fig. 1. The diagram of the neural network model (14) for the matrix inversion.
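As an indication of how model (14) can be simulated, the following Python sketch (our reconstruction, not the authors' implementation) integrates the TNN dynamics (14) for the rotation matrix (16) by forward Euler, solving for Ẋ(t) at each step; q = 3, p = 5, and X(0) follow the text, while γ, the step size, and the horizon are illustrative.

```python
import numpy as np

# Sketch: forward-Euler simulation of TNN model (14) for inverting the
# time-varying rotation matrix (16).
q, p, gamma, dt, T = 3, 5, 10.0, 1e-4, 1.0

A    = lambda t: np.array([[ np.cos(t),  np.sin(t)], [-np.sin(t),  np.cos(t)]])
Adot = lambda t: np.array([[-np.sin(t),  np.cos(t)], [-np.cos(t), -np.sin(t)]])
# Entry-wise E^{q/p}; for odd q and p this equals sign(E)*|E|^{q/p}.
frac_pow = lambda E: np.sign(E) * np.abs(E) ** (q / p)

X, t = np.ones((2, 2)), 0.0                # X(0) = [1 1; 1 1] as in the text
for _ in range(int(T / dt)):
    E = A(t) @ X - np.eye(2)               # error matrix, Eq. (13)
    # Eq. (14): A(t) Xdot = -gamma * E^{q/p} - Adot(t) X
    Xdot = np.linalg.solve(A(t), -gamma * frac_pow(E) - Adot(t) @ X)
    X, t = X + dt * Xdot, t + dt

print(np.linalg.norm(A(t) @ X - np.eye(2), "fro"))   # ~0: X(t) ≈ A^{-1}(t)
```

With γ = 10 the printed Frobenius error is essentially zero well before the horizon, in line with the fast convergence shown in Fig. 3.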
Fig. 2. Terminal neural network (5) with γ = 1 (elements x11, x12, x21, x22 of X(t) over t ∈ [0, 10] s).
Fig. 3. Terminal neural network (5) with γ = 10 (elements x11, x12, x21, x22 of X(t) over t ∈ [0, 10] s).
We can also define the error norm ‖E(t)‖_F = ‖A(t)X(t) − I_n‖_F = (Σ_{i=1}^n Σ_{j=1}^n e_ij^2(t))^{1/2}. As the design parameter γ increases, the convergence becomes much faster; the different convergence behaviors of the error toward zero are shown in Fig. 5.
4.2. Accelerated terminal neural network

The construction procedure is the same as in Fig. 1, with q = 3, p = 5, β1 = 1, β2 = 1. As seen from Fig. 6, for a randomly generated initial state X(0), X(t) converges to the theoretical inverse A^{−1}(t). For comparison, we also show ‖A(t)X(t) − I_n‖_F for different values of γ. From Fig. 7, the computational error converges to zero within 1.8 s when γ = 1, within 0.2 s when γ = 10, and within 0.015 s when γ = 100. Obviously, the ATNN has a superior convergence effect compared with both the TNN and the ANN.

5. Application to robot kinematic control

Kinematically redundant manipulators are those having more degrees of freedom (DOF) than required for the end-effector's position and orientation in the kinematic sense.
Fig. 4. Asymptotic neural network with γ = 1 (elements x11, x12, x21, x22 of X(t) over t ∈ [0, 10] s).
Fig. 5. Comparison of error by using ANN and TNN(5) with different γ.
Redundant DOF enable a manipulator to accomplish additional useful objectives simultaneously, for instance avoiding collisions with obstacles in the operational space or avoiding joint limits while the manipulator moves.
Fig. 6. Terminal neural network (7) with γ = 1 (elements x11, x12, x21, x22 of X(t) over t ∈ [0, 10] s).
Fig. 7. Comparison of the error ‖A(t)X(t) − I_n‖_2 by the three neural networks with different γ.
When the end-effector is required to trace a closed path in its workspace, the joint angles may not return to their initial positions after the task. This is the joint-angle drift (non-repeatability) problem. We present a typical repeatable-motion scheme and use the TNN to solve the joint-angle drift problem. Given a desired Cartesian trajectory r(t) ∈ R^m for the manipulator's end-effector, the task is to compute the corresponding joint trajectory θ(t) ∈ R^n in real time. The forward-kinematics relationship f(·) between the end-effector position vector r(t) ∈ R^m and the joint-variable vector θ(t) ∈ R^n can be expressed as follows:
r(t) = f(θ(t))   (18)
where f(·) is a continuous nonlinear mapping determined by the manipulator structure, particularly for high-DOF manipulators. Eq. (18) is often solved at the joint-velocity level by differentiating (18):
ṙ(t) = J(θ(t)) θ̇(t)   (19)
where J (θ ) = ∂f (θ ) /∂θ ∈ Rm × n is the Jacobian matrix.
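To make (18) and (19) concrete, here is a small Python sketch (ours) of the forward-kinematics map and its Jacobian for a three-link planar arm. Unit link lengths are an assumption on our part; the paper does not state the lengths.

```python
import numpy as np

# Sketch of Eqs. (18) and (19) for a three-link planar arm (unit link lengths).
def f(theta):
    """End-effector position r = f(theta), Eq. (18)."""
    a = np.cumsum(theta)                       # cumulative joint angles
    return np.array([np.cos(a).sum(), np.sin(a).sum()])

def jacobian(theta):
    """Jacobian J(theta) = df/dtheta in R^{2x3}, used in Eq. (19)."""
    a = np.cumsum(theta)
    s, c = np.sin(a), np.cos(a)
    # Column j collects the contributions of links j..3, matching the
    # Jacobian written out later in this section (cumulative sines/cosines).
    return np.array([[-s.sum(), -s[1:].sum(), -s[2]],
                     [ c.sum(),  c[1:].sum(),  c[2]]])

theta = np.array([np.pi / 3, -np.pi / 3, np.pi / 3])   # theta(0) from Section 5
print(f(theta), jacobian(theta), sep="\n")
```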
For redundant manipulators, m < n, so Eq. (19) is underdetermined and has an infinite number of solutions. The conventional pseudoinverse-type solution to (19) is given as

θ̇ = J†ṙ + (I − J†J)z̃   (20)

where J† ∈ R^{n×m} denotes the pseudoinverse of J, and z̃ ∈ R^n is an arbitrary vector selected according to different optimization criteria. However, the pseudoinverse method requires an enormous amount of computation, and the joint angles never return to their original positions after a cycle. To ensure a repeatable inverse-kinematics solution, the joint displacement between the current and initial states is minimized. The performance index is [15]:

(θ̇(t) + μc)^T (θ̇(t) + μc)/2, with c = θ(t) − θ(0)   (21)

where θ(0) is the initial state of the joint-angle vector, and the design parameter μ > 0 is used to speed up the manipulator's response to the joint displacement. Based on the repeatable performance index (21), we obtain the following basic problem formulation for redundant robot manipulators:

Min: θ̇(t)^T θ̇(t)/2 + μc^T θ̇(t)
Subject to: J(θ(t)) θ̇(t) = ṙ(t)   (22)

The avoidance of joint-angle limits θ± and joint-velocity limits θ̇± can also be considered, with θ− ≤ θ ≤ θ+ and θ̇− ≤ θ̇ ≤ θ̇+. Based on the preliminaries on equality-constrained optimization problems, we have the Lagrangian

L(θ̇(t), λ(t), t) = θ̇(t)^T θ̇(t)/2 + μ(θ(t) − θ(0))^T θ̇(t) + λ(t)^T (J(θ(t)) θ̇(t) − ṙ(t))   (23)

where λ(t) denotes the Lagrange-multiplier vector. Setting the derivatives of L with respect to θ̇(t) and λ(t) to zero, ∂L(θ̇(t), λ(t), t)/∂θ̇(t) = 0 and ∂L(θ̇(t), λ(t), t)/∂λ(t) = 0, we obtain the dynamic equation

W(t) y(t) = v(t)

with

W(t) = [I  J^T(θ(t)); J(θ(t))  0]   (24)

where I is the identity matrix,

y(t) = [θ̇(t); λ(t)]   (25)

v(t) = [−μ(θ(t) − θ(0)); ṙ(t)]   (26)

Fig. 8. Trajectories of the three-link planar robot arm when its end-effector tracks a circular path (X/m versus Y/m).

To solve the time-varying quadratic problem (22) via the time-varying equation W(t)y(t) = v(t), we define the error vector

E(t) = W(t) y(t) − v(t)

and apply the terminal design formulas of Section 3 to force E(t) to zero in finite time.
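As an illustration of applying the TNN design formula (5) to the linear system W(t)y(t) = v(t), the sketch below (ours) integrates W ẏ = −γ E^{q/p} with E = W y − v at a frozen instant, so that y reaches the theoretical solution y* = W^{−1}v in finite time. The random symmetric positive-definite W here is only a stand-in for the saddle-point matrix (24); the paper itself applies the dynamics to the time-varying system.

```python
import numpy as np

# Sketch: TNN (5) driving E = W y - v to zero for a fixed linear system.
rng = np.random.default_rng(0)
q, p, gamma, dt = 3, 5, 10.0, 1e-4

M = rng.standard_normal((5, 5))
W = M @ M.T + np.eye(5)                    # nonsingular stand-in for (24)
v = rng.standard_normal(5)
y = np.zeros(5)

for _ in range(5000):                      # 0.5 s of simulated time
    E = W @ y - v                          # error vector E(t)
    # W ydot = -gamma * E^{q/p}  (entry-wise odd power)
    ydot = np.linalg.solve(W, -gamma * np.sign(E) * np.abs(E) ** (q / p))
    y = y + dt * ydot

print(np.linalg.norm(W @ y - v))           # ~0 well before the horizon
```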
In order to solve the redundancy-resolution problem at the joint-velocity level, we obtain the Jacobian matrix of f(·) by differentiating (18); for the three-link planar arm it is

J = [−(s1 + s2 + s3)  −s2 − s3  −s3; c1 + c2 + c3  c2 + c3  c3]

where c1 = cos θ1, c2 = cos(θ1 + θ2), c3 = cos(θ1 + θ2 + θ3), s1 = sin θ1, s2 = sin(θ1 + θ2), s3 = sin(θ1 + θ2 + θ3), and

ṙ(t) = [−0.5 sin(B) Ḃ; 0.5 cos(B) Ḃ]

with B = 2π cos^2(0.5πt/T) and Ḃ = −(2π^2/T) cos(0.5πt/T) sin(0.5πt/T). The Jacobian matrix of a five-link planar robot manipulator could be obtained via a similar derivation.

The three-link manipulator's end-effector is required to trace a circular path of radius 0.5 m. The task duration is T = 10 s, the initial joint vector is θ(0) = [π/3, −π/3, π/3]^T rad, and the design parameters are μ = 5 and γ = 100. Fig. 8 depicts the motion trajectory of the three-link planar robot arm operating in the 2-D plane. Here we give only the trajectory of the TNN model; the motion trajectory in the other case (using the ANN (4)) is very similar to that shown in Fig. 8. As observed from the simulation results, the motion trajectory of the robot arm is close to the desired circle, with a maximum position error of less than 1.0 × 10^{−4}.

Fig. 9 shows the joint-angle and joint-velocity profiles when the three-link robot end-effector tracks the circular path. The profiles generated by the ANN solver are very similar to those in Fig. 9 and are omitted to save space. From Fig. 9, the joint-angle and joint-velocity profiles are smooth and consistent with each other.

Fig. 9. Joint-angle and joint-velocity profiles (θ1, θ2, θ3 in rad; dθ1/dt, dθ2/dt, dθ3/dt) of the three-link planar arm when its end-effector tracks a circle.

We use the norm E(t) = ‖W(t)y(t) − v(t)‖_2 to evaluate the estimation error and compare the performance of the ANN model and the TNN model. As shown in Fig. 10, the estimation error of the TNN model converges to zero within 0.03 s, which validates the finite-time convergence property of the proposed terminal neural network. In contrast, the ANN model exhibits a relatively large estimation error at t = 0.1 s and still has not returned to the theoretical solution by the end of the simulation. The effectiveness of the proposed terminal neural network against the joint-angle drift phenomenon is thus demonstrated.

Fig. 10. E(t) during one task period when using both the ANN and TNN (5) models.
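Finally, a minimal end-to-end sketch (ours) of the repeatable-motion simulation just described: the circular task with B(t), μ = 5, T = 10 s, and θ(0) from the text. The TNN subproblem is idealized here by the exact solution y = W^{−1}v at each time step (the value the TNN converges to in finite time); unit link lengths, the step size, and this idealization are our assumptions.

```python
import numpy as np

# Sketch of the Section 5 experiment: track the circular path and check
# that the joint angles return to theta(0) (repeatability).
T, dt, mu = 10.0, 1e-3, 5.0
theta0 = np.array([np.pi / 3, -np.pi / 3, np.pi / 3])

def jacobian(theta):                        # three-link planar arm, unit links
    a = np.cumsum(theta)
    s, c = np.sin(a), np.cos(a)
    return np.array([[-s.sum(), -s[1:].sum(), -s[2]],
                     [ c.sum(),  c[1:].sum(),  c[2]]])

B    = lambda t: 2 * np.pi * np.cos(0.5 * np.pi * t / T) ** 2
Bdot = lambda t: (-(2 * np.pi ** 2 / T) * np.cos(0.5 * np.pi * t / T)
                  * np.sin(0.5 * np.pi * t / T))
rdot = lambda t: np.array([-0.5 * np.sin(B(t)) * Bdot(t),
                            0.5 * np.cos(B(t)) * Bdot(t)])

theta = theta0.copy()
for k in range(int(T / dt)):
    t = k * dt
    J = jacobian(theta)
    W = np.block([[np.eye(3), J.T], [J, np.zeros((2, 2))]])   # Eq. (24)
    v = np.concatenate([-mu * (theta - theta0), rdot(t)])     # Eq. (26)
    y = np.linalg.solve(W, v)               # idealized y*(t) = W^{-1} v(t)
    theta = theta + dt * y[:3]              # integrate joint velocities

print(np.linalg.norm(theta - theta0))       # joint-angle drift: small
```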
6. Conclusion

In this paper, a novel TNN at the velocity level has been proposed for solving time-varying equations such as (12), and its convergence performance has been analyzed and presented. Furthermore, a repeatable planning scheme for redundant manipulators has been proposed. Compared with the existing ANN, which never converges within a finite time, the terminal neural network demonstrates advantages in both the speed and the precision of convergence. The terminal neural network with a linear activation function has been applied to time-varying matrix inversion and to quadratic-programming problems for redundant robot-arm planning. Theoretical analysis and simulation results confirm the efficiency of the proposed terminal neural network.
Acknowledgment This study was partly supported by National Natural Science Foundation of China (Nos. 61272315 and 60842009), Zhejiang Provincial Natural Science Foundation (No. Y1110342), Zhejiang Provincial Science and Technology Department of International Cooperation Project (No. 2012C24030).
References

[1] R. Bhatia, P. Rosenthal, How and why to solve the operator equation AX − XB = Y, Bull. Lond. Math. Soc. 29 (1) (1997) 1–21.
[2] M. Harker, P. O'Leary, Least squares surface reconstruction from gradients: direct algebraic methods with spectral, Tikhonov, and constrained regularization, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, 2011, pp. 2529–2536.
[3] Z. Wu, X. Fu, Structure identification of uncertain dynamical networks coupled with complex-variable chaotic systems, IET Control Theory Appl. 7 (9) (2013) 2285–2292.
[4] D. Lin, F. Zhang, Symbolic dynamics-based error analysis on chaos synchronization via noisy channels, Phys. Rev. E 90 (1) (2014) 123–134.
[5] H.C. Lee, J.W. Choi, Linear time-varying eigenstructure assignment with flight control application, IEEE Trans. Aerosp. Electron. Syst. 40 (1) (2004) 145–167.
[6] D.E. Whitney, Resolved motion rate control of manipulators and human prostheses, IEEE Trans. Man Mach. Syst. 10 (2) (1969) 467–553.
[7] W. Li, Tracking control of chaotic coronary artery system, Int. J. Syst. Sci. 43 (1) (2012) 21–30.
[8] Y. Xia, J. Wang, A dual neural network for kinematic control of redundant robot manipulators, IEEE Trans. Syst. Man Cybern. B 31 (1) (2001) 147–154.
[9] H.-N. Wu, M.-Z. Bai, Active fault-tolerant fuzzy control design of nonlinear model tracking with application to chaotic systems, IET Control Theory Appl. 3 (6) (2009) 642–653.
[10] Y. Zhang, Repetitive Motion Planning and Control of Redundant Robot Manipulators, Springer, Berlin/Heidelberg, Germany, 2013.
[11] S. Wu, Erratum to: Exponential stability of discrete-time delayed Hopfield neural networks with stochastic perturbations and impulses, Results Math. 62 (5) (2012) 217–227.
[12] M. Zak, Terminal attractors for addressable memory in neural networks, Phys. Lett. A 133 (1) (1988) 18–22.
[13] S. Li, A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application, Neural Netw. 39 (2) (2013) 27–39.
[14] S. Li, Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function, Neural Process. Lett. 37 (2) (2013) 189–205.
[15] Y. Zhang, Design and analysis of a general recurrent neural network model for time-varying matrix inversion, IEEE Trans. Neural Netw. 16 (6) (2005) 1477–1490.
Ying Kong, born in 1980, M.S., lecturer. She is a research fellow at the School of Information Engineering, Zhejiang University of Technology, and is currently pursuing a doctoral degree. She has worked at the Zhejiang University of Science and Technology. Her research interests include neural computing, iterative learning identification, image processing, and pattern recognition.