Accepted Manuscript

Title: Cuckoo search algorithm with membrane communication mechanism for modeling overhead crane systems using RBF Neural Networks
Authors: Xiaohua Zhu, Ning Wang
PII: S1568-4946(17)30143-6
DOI: http://dx.doi.org/10.1016/j.asoc.2017.03.019
Reference: ASOC 4104
To appear in: Applied Soft Computing
Received date: 7-7-2016
Revised date: 19-3-2017
Accepted date: 20-3-2017

Please cite this article as: Xiaohua Zhu, Ning Wang, Cuckoo search algorithm with membrane communication mechanism for modeling overhead crane systems using RBF Neural Networks, Applied Soft Computing, http://dx.doi.org/10.1016/j.asoc.2017.03.019

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Cuckoo search algorithm with membrane communication mechanism for modeling overhead crane systems using RBF Neural Networks
Xiaohua Zhua,b, Ning Wangb,*
a College of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, PR China
b National Laboratory of Industrial Control Technology, Institute of Cyber-systems and Control, Zhejiang University, Hangzhou 310027, PR China
Graphical abstract:
[Schematic: the overhead crane system (trolley of mass mc on a rail, payload of mass mp on a rope of length l, actuating force F, trolley position x, gravity g) is modeled by RBF NNs whose outputs are compared with the measured signals, and the resulting modeling errors drive the mCS optimizer.]
Highlights:
- A novel cuckoo search algorithm (mCS) is proposed.
- A membrane communication mechanism is employed to maintain the population diversity.
- A chaotic local search strategy is used to improve the local search ability.
- The mCS-based RBF-NNs are adopted for modeling the overhead crane systems.
Abstract: Developing a precise dynamic model is a critical step in the design and analysis of an overhead crane system. To achieve this objective, we present a novel radial basis function neural network (RBF-NN) modeling method. One challenge for the RBF-NN modeling method is how to determine the RBF-NN parameters reasonably. Although the gradient method is widely used to optimize the parameters, it may converge slowly and may fail to reach the optimum. Therefore, we propose the cuckoo search algorithm with membrane communication mechanism (mCS) to optimize the RBF-NN parameters. In mCS, the membrane communication mechanism is employed to maintain the population diversity and a chaotic local search strategy is adopted to improve the search accuracy. The performance of mCS is confirmed on several benchmark functions, and the effect of the communication set size is analyzed. Then the mCS is applied to optimize the RBF-NN models of the overhead crane system. The experimental results demonstrate the efficiency and effectiveness of mCS in comparison with the standard cuckoo search algorithm (CS) and the gradient method.
Keywords: Overhead crane systems; Radial basis function neural networks (RBF-NNs); Cuckoo search algorithm (CS); Membrane communication mechanism; Chaotic local search strategy
1. Introduction
Overhead cranes have been widely used on various occasions to accomplish transportation tasks. For this purpose, two objectives are usually expected. First, the trolley of an overhead crane needs to reach the desired position fast and accurately to realize highly efficient transportation. Second, the payload should swing as little as possible to avoid unexpected accidents, especially
when the overhead crane is employed in such a dangerous situation as the transportation of molten steel or iron. However, these two objectives generally contradict each other. Therefore, it is desirable to design an efficient control system for overhead cranes [1-6]. In order to obtain better control performance, modelling an overhead crane system accurately is the key issue. Over the last decades, this problem has received much attention from researchers. Sakawa et al. applied Newton's law of motion to derive state equations for an overhead crane [7]. Meressi obtained a linearized model of the three-dimensional gantry robot using the standard Lagrange formulation [8]. Kaneshige et al. constructed a double pendulum model based on a kinetic equation for the three-dimensional transfer of a liquid tank by an overhead crane [9]. Huang et al. discussed the nonlinear dynamics of bridge cranes with distributed-mass beams [10]. Ismail et al. established a mathematical model for the offshore container crane by the Euler-Lagrange formulation [11]. Taking different wind disturbances into account, Tomczyk et al. set up a mathematical model for an overhead crane [12]. However, the literature mentioned above mainly focuses on mechanism modeling methods under some assumptions, which causes a comparatively large deviation between the overhead crane system model and the real world. Non-parametric modeling is an effective way to improve the modeling accuracy, for example, by using a neural network modeling method. Radial basis function neural networks (RBF-NNs), an important class of neural networks, were introduced by Broomhead and Lowe in 1988 [13]. With a simple topological structure, good approximation ability and fast learning speed, RBF-NNs have been successfully employed in modeling and control for nonlinear systems [14-16]. For a neural network, the determination of the network structure and the parameters is the key
issue [17]. Inappropriate parameters of a neural network will degrade its performance [18]. Some researchers have revealed that intelligent optimization algorithms are a good way to obtain a better-performing neural network [19-22]. The cuckoo search algorithm (CS) was developed by Yang and Deb in 2009 [23]. This global search algorithm is inspired by the interesting breeding behavior of the cuckoo species and the Lévy flights of insects. Due to its merits, such as good global searching ability, simplicity, few control parameters, generality, the unique characteristics of Lévy flights and so on, CS has been widely studied and efficiently applied to complex optimization problems. To build a model for the prediction of OPEC CO2 emissions, CS was hybridized with particle swarm optimization for training an artificial neural network (ANN) [24]. Reference evapotranspiration estimation with a CS-based ANN is investigated in [25]. A hybrid optimization technique combining a general regression neural network and CS is developed for microwave filtering in [26]. Wang et al. proposed a BP neural network soft-sensor model optimized by the shuffled CS for the forecasting target [27]. However, there are some limitations of CS, such as slow convergence speed and low precision. Therefore, various improved methods have been proposed. Rajabioun presented the cuckoo optimization algorithm (COA) in [28], and COA has been applied to some engineering optimization problems [29-32]. Huang et al. put forward a hybrid algorithm combining the Lévy flight with a teaching-learning process for the optimization of machining parameters [33]. Li et al. adopted two new mutation rules and self-adaptive parameter setting for CS [34]. Liu et al. enhanced CS by using chaos theory, inertia weight and a local search mechanism [35]. To solve the problem of combined heat and power economic dispatch, Naik et al.
proposed a novel CS using an adaptive step size according to the knowledge of its fitness function value and the current
location [36]. Huang et al. used chaotic sequences to generate the initial host nest locations, changed the step size and reset locations beyond the boundary [37]. In order to obtain a better modeling performance, it is necessary to adopt a more efficient global optimization technique to estimate the RBF-NN parameters. Conventional CS is inspired by the solitary lifestyle of cuckoos but lacks swarm collaboration. Therefore, combining CS with an individual communication mechanism is a novel attempt to improve the global search ability, and using the membrane communication mechanism of P systems is a good choice. P systems (membrane computing) were first suggested by Gheorghe Păun in 1998. A P system is a nondeterministic computing model derived from the structure and functions of living cells and the interactions of living cells in tissues or higher-order biological structures [38]. The fundamental ingredients of membrane computing are a membrane structure, objects and rules. A cell-like membrane structure is a hierarchically arranged set of membranes in which various "chemicals" (objects or multisets of the problem) operate according to different rules. The rules correspond to the chemical reactions in the compartments of a cell. For example, the communication rule is used to describe the fluidity of a membrane, which enables macromolecular proteins and inorganic salts to pass [39-41]. Chaos is a characteristic of nonlinear systems. Due to its non-repetition and ergodicity, chaos has been introduced into many fields, one of which is optimization algorithms. Gandomi et al. combined chaos with the firefly algorithm and the accelerated particle swarm optimization [42-43]. Alatas et al. adopted a selected chaotic map to adjust the parameters for improving the global convergence of the Big Bang-Big Crunch optimization [44]. In this paper, we present a novel radial basis function neural network (RBF-NN) modeling method
for the overhead crane system and propose the cuckoo search algorithm (mCS) with the membrane communication mechanism and the chaotic local search strategy to optimize the RBF-NNs. The rest of this paper is organized as follows. Section 2 describes the RBF-NN modeling method for overhead crane systems. We propose the cuckoo search algorithm with membrane communication mechanism (mCS) and the chaotic local search strategy in Section 3. Section 4 gives the numerical experiment results on some benchmark functions to show the superiority of mCS in contrast to CS and the adaptive cuckoo search algorithm (ACS). Then, we apply the mCS to optimize the parameters of the RBF-NNs for the overhead crane system in Section 5. Section 6 briefly concludes this paper.
2. RBF neural network modelling method for overhead crane systems
2.1 The overhead crane system
An illustration of the overhead crane system is depicted in Fig.1, where x and θ are the trolley position and the payload swing angle, respectively; F stands for the actuating force exerted on the trolley; mc and mp represent the masses of the trolley and the payload, respectively; g denotes the gravitational acceleration and l is the rope length.
Fig.1 is about here
It is clearly shown that the overhead crane system is an underactuated system with two degrees of freedom and one input. The trolley position can be controlled by the actuating force, while the swing angle suppression needs to take advantage of the coupling relationship with the trolley position. Such a nonlinear dynamical system consists of two subsystems, the trolley position system and the swing angle system, which can be respectively described by the following nonlinear autoregressive models:
$x(t) = f(X_x(t)), \quad X_x(t) = [x(t-1), x(t-2), \ldots, x(t-n_1), F(t), F(t-1), F(t-2), \ldots, F(t-m_1)]$
$\theta(t) = g(X_\theta(t)), \quad X_\theta(t) = [\theta(t-1), \theta(t-2), \ldots, \theta(t-n_2), F(t), F(t-1), F(t-2), \ldots, F(t-m_2)]$    (1)

where $t$ is the time index; $f(\cdot)$ and $g(\cdot)$ are the nonlinear functions of the trolley position system and the swing angle system, respectively; $F(t)$ represents the system input; $X_x \in R^p$ with $p = n_1 + m_1 + 1$ and $X_\theta \in R^q$ with $q = n_2 + m_2 + 1$ are the input vectors; $x(t)$ and $\theta(t)$ stand for the outputs; $n_1$, $m_1$, $n_2$ and $m_2$ are the numbers of the related variables.
2.2 RBF-NN models of the overhead crane system
The RBF-NN, a kind of feed-forward neural network, is composed of three layers with totally different roles: an input layer connects the network to its environment, each hidden unit implements a nonlinear transformation, and an output layer executes a linear transformation, specifically a weighted sum of the hidden unit outputs. In this paper, we adopt two RBF-NNs for modeling the nonlinear overhead crane system, namely the trolley position RBF-NN shown in Fig.2 (a) and the swing angle RBF-NN shown in Fig.2 (b).
Fig.2 is about here
The outputs of the RBF-NN models are given by the inner product between the adjustable weight vector and the vector of responses of the basis functions:

$\hat{x}(t) = \sum_{j=1}^{n_x} w_{xj}\,\varphi_{xj}$
$\hat{\theta}(t) = \sum_{j=1}^{n_\theta} w_{\theta j}\,\varphi_{\theta j}$    (2)
where $\hat{x}(t)$ and $\hat{\theta}(t)$ are the model outputs of the trolley position RBF-NN ($F_{NN}$) with $n_x$ hidden nodes and the swing angle RBF-NN ($G_{NN}$) with $n_\theta$ hidden nodes, respectively; $\varphi_{xj}(\cdot)$ and $\varphi_{\theta j}(\cdot)$ denote the basis functions of the j-th hidden node; $W_x = [w_{x1}, \ldots, w_{xn_x}]^T$ and $W_\theta = [w_{\theta 1}, \ldots, w_{\theta n_\theta}]^T$ are the weights. The Gaussian activation function is used in this paper, which can be written as

$\varphi_{xj} = \exp\left(-\dfrac{\|X_x(t) - C_{xj}\|^2}{\sigma_{xj}^2}\right)$
$\varphi_{\theta j} = \exp\left(-\dfrac{\|X_\theta(t) - C_{\theta j}\|^2}{\sigma_{\theta j}^2}\right)$    (3)

where $\|\cdot\|$ is a norm; $C_{xj} = [c_{xj1}, \ldots, c_{xjp}]$ and $\sigma_{xj}$ are the centers and widths for $F_{NN}$, while $C_{\theta j} = [c_{\theta j1}, \ldots, c_{\theta jq}]$ and $\sigma_{\theta j}$ are those for $G_{NN}$.
2.3 Parameter estimation of RBF-NNs using the mCS
The ongoing challenge of this modeling method is how to determine the RBF-NN parameters. As can be seen from Eqs. (2) and (3), the adjustable parameters are the hidden parameters (centers and widths) and the weights of the RBF-NN. In order to obtain suitable parameters, there are many different algorithms for training RBF-NNs [45]. Basically, they can be divided into two categories. One first determines the parameters of the radial basis functions by an unsupervised learning method (e.g. K-means) and then the output weights by a supervised learning method (e.g. least mean squares) [46]. The other trains all the parameters by a supervised algorithm [47], which is used for comparison in this paper. The latter category is also called the gradient method, where the parameters are tuned along the search directions defined by the gradient of the cost function at the current point. Taking $F_{NN}$ for example, the iterative algorithm for determining the RBF-NN parameters is as follows:

$\Delta w_{xj}(t) = (x(t) - \hat{x}(t))\,\varphi_{xj}(t-1)$    (4)
$w_{xj}(t) = w_{xj}(t-1) + \eta\,\Delta w_{xj}(t) + \alpha\,(w_{xj}(t-1) - w_{xj}(t-2))$    (5)
$\Delta\sigma_{xj}(t) = (x(t) - \hat{x}(t))\,w_{xj}(t-1)\,\varphi_{xj}(t-1)\,\dfrac{\|X_x(t) - C_{xj}(t-1)\|^2}{\sigma_{xj}^3(t-1)}$    (6)
$\sigma_{xj}(t) = \sigma_{xj}(t-1) + \eta\,\Delta\sigma_{xj}(t) + \alpha\,(\sigma_{xj}(t-1) - \sigma_{xj}(t-2))$    (7)
$\Delta c_{xji}(t) = (x(t) - \hat{x}(t))\,w_{xj}(t-1)\,\varphi_{xj}(t-1)\,\dfrac{x_{xi}(t) - c_{xji}(t-1)}{\sigma_{xj}^2(t-1)}$    (8)
$c_{xji}(t) = c_{xji}(t-1) + \eta\,\Delta c_{xji}(t) + \alpha\,(c_{xji}(t-1) - c_{xji}(t-2))$    (9)

where $j = 1, 2, \ldots, n_x$; $i = 1, 2, \ldots, p$; $\eta \in [0,1]$ is the learning rate; $\alpha \in [0,1]$ is the momentum factor. The drawback of the above two methods is that hidden nodes and output weights are treated separately, that is to say, their correlation is ignored; hence the training may converge slowly and get trapped in a local optimum [14]. The training of an RBF-NN can be regarded as a hard continuous optimization problem due to its high-dimensional and multimodal search space. To solve this problem, we propose a novel CS (mCS) to estimate the parameters of the RBF-NNs, the schematic diagram of which is shown in Fig.3.
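As a concrete illustration of the weight update with momentum of Eqs. (4)-(5), the following sketch uses toy values; the center and width updates of Eqs. (6)-(9) follow the same pattern:

```python
import numpy as np

def weight_update(w_prev, w_prev2, err, phi, eta=0.1, alpha=0.5):
    """Gradient step with momentum, Eqs. (4)-(5):
    dw_j = (x - x_hat) * phi_j
    w_j(t) = w_j(t-1) + eta * dw_j + alpha * (w_j(t-1) - w_j(t-2))."""
    dw = err * phi                                   # Eq. (4)
    return w_prev + eta * dw + alpha * (w_prev - w_prev2)

w1 = np.array([0.2, 0.4])      # weights at t-1
w0 = np.array([0.1, 0.3])      # weights at t-2
phi = np.array([1.0, 0.5])     # hidden-node activations
w2 = weight_update(w1, w0, err=0.2, phi=phi)
# -> [0.2 + 0.02 + 0.05, 0.4 + 0.01 + 0.05] = [0.27, 0.46]
```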
Fig.3 Schematic diagram of the mCS-based RBF-NN
3. The cuckoo search algorithm with membrane communication mechanism (mCS)
CS, developed by Yang and Deb, has the advantages described above. Moreover, it has a fine balance between exploration and exploitation [23] and has been demonstrated to outperform some other optimization algorithms [33,36]. In this paper, we propose a novel CS with the membrane communication mechanism, called mCS.
3.1 The cuckoo search algorithm
3.1.1 Cuckoo breeding behavior
The cuckoo species lay their eggs in the nests of other host birds, which have just laid their own eggs. In general, the cuckoo eggs hatch slightly earlier than the host eggs. Once the first cuckoo chick is hatched, it evicts the host eggs. This action results in its gaining more
feeding opportunities [23].
3.1.2 Lévy flights
'Lévy flights' describe a model of random walk characterized by step lengths that follow a power-law distribution. Investigations have shown that the flight behavior of many animals and insects, including cuckoos, exhibits the typical characteristics of Lévy flights. Accordingly, such a model has been applied to a large number of optimization problems [23], improving algorithm performance considerably.
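Lévy-distributed step lengths are usually generated with Mantegna's algorithm, whose formulation is given later in Eqs. (12)-(15). A minimal sketch, using the power coefficient λ directly as the tail index:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(lam, size, rng):
    """Mantegna's algorithm: step = u / |v|^(1/lam),
    with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (gamma(1 + lam) * sin(pi * lam / 2)
               / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

rng = np.random.default_rng(0)
steps = levy_step(1.5, 1000, rng)
# heavy-tailed: most steps are small, with occasional very large jumps
```

The heavy tail is what lets the search occasionally escape a local region while mostly exploring locally.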
3.1.3 Cuckoo search algorithm
To simulate the breeding behavior of some cuckoo species and form the mathematical model of CS, three ideal assumptions are made during the search process [23]. The first assumption is that each cuckoo lays only one egg at a time, which is dumped in a randomly chosen nest. The second is that the best nests with high-quality eggs are carried over to the next generation. The third is that the number of available host nests is fixed, and a host recognizes the egg laid by a cuckoo with probability pa ∈ [0,1]. When this happens, the host bird can either throw the egg away or build a new nest. Based on these three assumptions, the basic steps of CS in pseudo code can be described as follows:

Cuckoo search algorithm
Begin
  Randomly generate an initial population of n host nests;
  Calculate fitness for every individual;
  while (stop criterion is not met)
    Take the Lévy flights for a cuckoo i and calculate its fitness Fi;
    Choose a nest among n (say, j) randomly;
    if (Fi < Fj)
      Replace j by the new solution (for a minimization problem);
    end
    A fraction (pa) of the worse nests are replaced by new ones;
    Keep the best solutions;
    Update the generation number;
  end while
end

In CS, each egg can be considered a solution of the problem, and a cuckoo egg represents a new solution. The initial solutions are generated at random, while new solutions (positions of eggs or locations of nests) are produced by Lévy flights as [23]

$nest_i^{t+1} = nest_i^t + \alpha\,(nest_i^t - nest_{best}^t) \oplus \text{Lévy}(\lambda), \quad i = 1, 2, \ldots, n$    (10)

where $nest_i^t$ and $nest_i^{t+1}$ denote the i-th solution in generations $t$ and $t+1$, respectively; $nest_{best}^t$ is the best-so-far solution; $\alpha$ is a positive constant that varies in different cases; $\oplus$ represents point-to-point multiplication; $\text{Lévy}(\lambda)$ is a Lévy distribution function converted into a probability density function [23]:

$\text{Lévy}(\lambda) \sim t^{-\lambda}, \quad 1 < \lambda \le 3$    (11)
where $\lambda$ represents the power coefficient. In order to describe the Lévy flights in a simple and programmable way, Mantegna proposed the formula [48]

$\text{Lévy}(\lambda) \sim \dfrac{u}{|v|^{1/\lambda}}$    (12)

where $u$ and $v$ are two random numbers that follow normal distributions, and $\sigma_u$ and $\sigma_v$ are the standard deviations of those distributions [49]:

$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2)$    (13)

$\sigma_u = \left\{\dfrac{\Gamma(1+\lambda)\,\sin(\pi\lambda/2)}{\Gamma((1+\lambda)/2)\,\lambda\,2^{(\lambda-1)/2}}\right\}^{1/\lambda}$    (14)

$\sigma_v = 1$    (15)
where $\Gamma$ is the standard gamma function. Another way of cuckoo search is to replace some nests by new ones; the new nests are constructed by the transformation

$nest_i^{t+1} = \begin{cases} nest_i^t + \gamma\,(nest_j^t - nest_k^t) & \text{if } r < p_a \\ nest_i^t & \text{otherwise} \end{cases}$    (16)

where $nest_j^t$ and $nest_k^t$ are two random solutions in generation $t$; $\gamma$ denotes a scaling factor; $r$ is a random number in the interval [0, 1].
3.2 The CS with membrane communication mechanism (mCS)
CS iteratively generates new solutions by two operators, Lévy flights and random walk, which embody the solitary lifestyle of cuckoos. However, the literature has validated the important role of the crossover operator, which has a significant effect on the search space and on the increase of the population diversity [18, 21]. In the mCS, a crossover operation based on the membrane communication mechanism of membrane computing is adopted to implement information exchange between individuals. The membrane communication mechanism is principally characterized by several ingredients: the membrane structure, the nests placed in the regions, the selection rule, the crossover rule and the communication rule. The membrane structure employed in the mCS and the schematic plan of the communication mechanism are depicted in Fig.4, where '*' denotes objects, filled marks represent communication objects, arrows indicate the communication direction of objects between the nearby membranes, and the crossover symbol denotes implementing the crossover rule. The two inner membranes both contain objects and
communication objects, while the skin membrane is the place for the crossover rule only.
Fig. 4 is about here
The rules are the key factors in the membrane communication mechanism. The three types of rules are described as follows.
Selection rule: The objects (nests) with better fitness are selected from the candidate objects as communication object sets.
Crossover rule: The objects in the same region may interact with each other. The crossover rule used in the mCS is described as

$nest_1^t = [nest_{1,1}^t, \ldots, nest_{1,i}^t, nest_{1,i+1}^t, \ldots, nest_{1,D}^t]$
$nest_2^t = [nest_{2,1}^t, \ldots, nest_{2,i}^t, nest_{2,i+1}^t, \ldots, nest_{2,D}^t]$
$nest_1' = [nest_{1,1}^t, \ldots, nest_{1,i}^t, nest_{2,i+1}^t, \ldots, nest_{2,D}^t]$    (17)

$nest_1^t = [nest_{1,1}^t, \ldots, nest_{1,j}^t, nest_{1,j+1}^t, \ldots, nest_{1,D}^t]$
$nest_2^t = [nest_{2,1}^t, \ldots, nest_{2,j}^t, nest_{2,j+1}^t, \ldots, nest_{2,D}^t]$
$nest_2' = [nest_{2,1}^t, \ldots, nest_{2,j}^t, nest_{1,j+1}^t, \ldots, nest_{1,D}^t]$    (18)

$nest_1^{t+1} = \begin{cases} nest_1' & \text{if } f(nest_1') < f(nest_1^t) \\ nest_1^t & \text{else} \end{cases}$    (19)

$nest_2^{t+1} = \begin{cases} nest_2' & \text{if } f(nest_2') < f(nest_2^t) \\ nest_2^t & \text{else} \end{cases}$    (20)

where $nest_1^t$ and $nest_2^t$ in Eqs. (17) and (18) are two individuals selected randomly for the crossover rule in generation $t$; $i$ and $j$ denote two random numbers in the interval [1, D]; $D$ is the dimension of the search space; $nest_1'$ and $nest_2'$ are the two new individuals; Eqs. (19) and (20) represent the communication results under the assumption that we tackle a minimization problem.
Communication rule: Replace the worst solutions in a membrane with the communication object set.
3.3 Chaotic local search strategy
Compared with stochastic local search, chaotic local search reduces blindness and randomness,
which improves the efficiency of the local search. In this paper, the piecewise chaotic map shown in Eq. (21) is adopted to generate the chaotic sequences [42]:

$c(t+1) = \begin{cases} c(t)/P & 0 \le c(t) < P \\ (c(t)-P)/(0.5-P) & P \le c(t) < 0.5 \\ (1-P-c(t))/(0.5-P) & 0.5 \le c(t) < 1-P \\ (1-c(t))/P & 1-P \le c(t) < 1 \end{cases}$    (21)

where $c(t)$ and $c(t+1)$ are consecutive chaotic numbers and $P \in (0, 0.5)$ is the control parameter. The procedure of the chaotic local search used in the mCS is as follows:
Step 1: Map every dimension of the solutions $\{nest_{i,s}^t,\ i = 1, 2, \ldots, n\}$ to the range [0, 1]:

$c(t) = \dfrac{nest_{i,s}^t - L_s}{U_s - L_s}$    (22)

where $nest_{i,s}^t$ denotes the s-th dimension of the i-th solution in generation $t$; $L_s$ and $U_s$ represent the lower and upper bounds of the s-th dimension of the solution, respectively.
Step 2: Use Eq. (21) to carry out the chaotic search.
Step 3: Restore the chaotic solutions to the original space:

$nest_{i,s}^{t+1} = (U_s - L_s)\,c(t+1) + L_s$    (23)

Step 4: Update the solutions according to Eq. (24):

$nest_i^{t+1} = \begin{cases} nest_i^{t+1} & \text{if } f(nest_i^{t+1}) < f(nest_i^t) \\ nest_i^t & \text{else} \end{cases}$    (24)
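Steps 1-4 of the chaotic local search can be sketched as follows. The bounds, the objective and P = 0.4 are arbitrary toy choices within the allowed ranges:

```python
import numpy as np

def piecewise_map(c, P=0.4):
    """Piecewise chaotic map of Eq. (21) on [0, 1)."""
    if c < P:
        return c / P
    if c < 0.5:
        return (c - P) / (0.5 - P)
    if c < 1 - P:
        return (1 - P - c) / (0.5 - P)
    return (1 - c) / P

def chaotic_local_search(nest, f, L, U, steps=20, P=0.4):
    """Map each dimension to [0,1] (Eq. 22), iterate the chaotic map (Eq. 21),
    restore to the original space (Eq. 23), keep the better solution (Eq. 24)."""
    best = nest.copy()
    c = (best - L) / (U - L)
    for _ in range(steps):
        c = np.array([piecewise_map(ci, P) for ci in c])
        cand = L + (U - L) * c
        if f(cand) < f(best):
            best = cand
    return best

sphere = lambda x: float(np.sum(x ** 2))
L, U = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x0 = np.array([4.0, -3.0])
x1 = chaotic_local_search(x0, sphere, L, U)
# x1 is never worse than the starting point and stays inside the bounds
```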
In conclusion, the flow chart of the mCS is shown in Fig.5.
Fig.5 is about here
The procedure of the mCS can be summarized as follows:
Step 1: Set the parameters such as Gmax (the maximum generation), D, α, pa, λ, n, P and Cn (the communication set size).
Step 2: Create an initial population and the two communication sets with size Cn.
Step 3: Divide the population into two parts, P1 and P2, assigned to membrane 1 and membrane 2, respectively.
Step 4: Apply the communication rule to renew the objects in membrane 1 and membrane 2.
Step 5: Perform Lévy flights, random walk and chaotic local search in order on P1 and P2.
Step 6: Carry out the selection rule on both P1 and P2 to form two updated communication sets.
Step 7: Implement the crossover rule on the two communication sets and then obtain the new communication sets.
Step 8: Carry out the chaotic local search.
Step 9: Repeat Steps 4 to 8 until the stopping criterion is met.
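The nine steps above can be sketched as follows. This is a heavily simplified illustration under stated assumptions, not the authors' implementation: a Gaussian step stands in for the Lévy flight, the search range, population sizes and operator details are toy choices, and the chaotic local search is folded into the perturbation step:

```python
import numpy as np

def mcs_sketch(f, D, n=20, Cn=4, G_max=50, pa=0.25, seed=0):
    """Two-membrane cuckoo search skeleton: perturb-and-keep inside each
    membrane, select the Cn best as communication sets, tail-swap crossover
    between the sets, then replace each membrane's worst with its set."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, D))                     # Steps 1-2
    half = n // 2
    membranes = [pop[:half], pop[half:]]                 # Step 3
    fit = lambda P: np.apply_along_axis(f, 1, P)
    for _ in range(G_max):
        comm = []
        for P in membranes:                              # Step 5 (simplified)
            cand = P + 0.1 * rng.standard_normal(P.shape)
            better = fit(cand) < fit(P)
            P[better] = cand[better]
            abandon = rng.random(len(P)) < pa            # random-walk abandonment
            P[abandon] += rng.uniform(-1, 1, (abandon.sum(), D))
            comm.append(P[np.argsort(fit(P))[:Cn]].copy())   # Step 6
        cut = rng.integers(1, D) if D > 1 else 0         # Step 7: crossover rule
        comm[0], comm[1] = (np.concatenate([comm[0][:, :cut], comm[1][:, cut:]], axis=1),
                            np.concatenate([comm[1][:, :cut], comm[0][:, cut:]], axis=1))
        for P, C in zip(membranes, comm):                # Step 4 (next round)
            worst = np.argsort(fit(P))[-Cn:]
            P[worst] = C
    allpop = np.vstack(membranes)
    return allpop[np.argmin(fit(allpop))]

best = mcs_sketch(lambda x: float(np.sum(x ** 2)), D=3)
```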
4. Numerical experiments and results
4.1 Benchmark functions
In this section, six well-known benchmark functions with different characteristics are employed in order to verify the performance of the mCS. The definition of each function is presented in Table 1, including its characteristic (C), dimension (D), mathematical formula, interval of the search space and the global optimum value. Basically, the characteristic of a function can be classified by modality and separability. Among these functions, f1 and f2 are unimodal-separable (US), f3 and f4 are multimodal-separable (MS) and f5 and f6 are multimodal-nonseparable (MN).
Table 1 is about here
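As an illustration of the two extreme classes, a unimodal function and a highly multimodal one might look as follows. These are generic textbook examples, not necessarily the paper's six functions, which are those listed in Table 1:

```python
import numpy as np

def sphere(x):
    """Unimodal: f(x) = sum x_i^2, global optimum 0 at the origin."""
    return float(np.sum(np.asarray(x, float) ** 2))

def rastrigin(x):
    """Highly multimodal: many regularly spaced local minima, optimum 0."""
    x = np.asarray(x, float)
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

On unimodal functions a gradient-like descent suffices; multimodal functions are what motivate global schemes such as mCS.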
4.2 Parameter setting
For mCS, we conduct several experiments with different values of the new parameter, the communication set size Cn (Cn = 5, Cn = 10, Cn = 15). The effect of this parameter is analyzed by means of the solution quality obtained by the mCS. Moreover, the results of the mCS are compared with those of the standard CS and the adaptive cuckoo search algorithm (ACS) [36]. The settings of the parameters other than Cn for the three algorithms remain the same as those in [36] and are shown in Table 2. For each benchmark function, 30 independent runs are carried out with random seeds and the final results are the mean values of these runs.
Table 2 is about here
4.3 Experimental results
4.3.1 Effect of communication set size (Cn)
If the parameter Cn is small, fewer individuals participate in communication, and vice versa. The effect of the parameter Cn is shown in Table 3, which gives the mean values (Mean), the standard deviation values (SD), the best and the worst values calculated for the benchmark functions. The bold values indicate the best results obtained in the experiments. From Table 3, we can explicitly see that mCS (Cn = 10) finds the best mean values for four functions (f2, f3, f5, f6) and ranks second for the other two functions (f1, f4).
Table 3 is about here
Considering the objective function values and the maximum generation, the convergence curves of mCS with different Cn values are shown in Fig.6. When Fig.6 is examined, mCS (Cn = 10)
has a quicker convergence performance than mCS (Cn = 5) and mCS (Cn = 15) for four functions (f2, f4, f5, f6) and ranks second for the other two functions (f1, f3). It can be concluded that Cn = 10 is an appropriate value of the parameter Cn, which promises a superior convergence performance and better solution quality for mCS. Hence, Cn is a key parameter for balancing the exploitation and exploration of mCS.
Fig.6 is about here
4.3.2 mCS vs. CS and ACS
In the comparison of mCS with CS and ACS, the mCS uses Cn = 10. The comparison results are given in Table 4.
Table 4 is about here
Table 4 suggests that the four above-mentioned indices of mCS are much smaller than those of CS on the test functions. Hence, mCS clearly outperforms CS. However, it is not very clear whether there is a significant difference between the two improved CS variants. In order to compare the performances of mCS and ACS, the non-parametric Wilcoxon signed-rank test is used here. In the test, a p-value is calculated based on the results of 30 independent runs. The test results are shown in Table 5. Table 5 reports that mCS is statistically better than ACS on functions f1-f5, as shown by the bold values, and only for f6 is there not enough evidence to reject the null hypothesis (0.8709 > 0.05). Therefore, it can be seen that mCS generally obtains better means within Gmax than ACS.
Table 5 is about here
Except for the mean and SD, the convergence characteristics are further discussed. Fig.7 shows
the convergence curves of CS, ACS and mCS for the functions f1-f6. From the convergence curves, it is found that mCS generally shows the advantage of a fast convergence speed on all functions. In sum, the experimental results indicate that mCS is superior to ACS in search accuracy and convergence speed.
Fig.7 is about here
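The Wilcoxon signed-rank comparison used above can be sketched in pure NumPy. The run results here are synthetic placeholders, not the paper's data, and ties as well as zero differences are simply dropped:

```python
import numpy as np
from math import erfc, sqrt

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test with the normal approximation
    (a minimal sketch; tie correction is omitted)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0   # ranks of |d|, 1..n
    w_pos = ranks[d > 0].sum()                        # sum of positive ranks
    mu = n * (n + 1) / 4.0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_pos - mu) / sigma
    return erfc(abs(z) / sqrt(2))                     # two-sided p-value

rng = np.random.default_rng(42)
mcs_runs = rng.normal(1.0e-4, 2.0e-5, 30)   # hypothetical mCS run errors
acs_runs = rng.normal(3.0e-4, 5.0e-5, 30)   # hypothetical ACS run errors
p = wilcoxon_signed_rank(mcs_runs, acs_runs)
significant = p < 0.05
```

With clearly separated distributions like these, the paired differences are almost all of one sign and the p-value is far below 0.05.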
5. The overhead crane system modeling results and discussion
The 1200 samples for modeling are gathered from the hardware overhead crane test-bed of Nankai University (see Fig.8) [6], where the system parameters are F = 6 N, mc = 6.5 kg, mp = 0.75 kg, l = 1 m and g = 9.8 m/s^2, and the sampling period is 5 ms. Due to the
small difference between adjacent samples, it is necessary to preprocess the data by doubling the sampling period. The preprocessed data therefore include 600 samples. We randomly select 300 samples from the data as the training set and use the rest as the testing set.
Fig.8 is about here
In light of the modeling requirements, we minimize the objective functions to obtain the best parameters for the RBF-NNs using two mCSs (RBF-NN-mCS), respectively. Taking the trolley position RBF-NN for example, the procedure of modeling the trolley position of the overhead crane system can be described as follows:
Step 1: Besides nx = 30, set Gmax, D, α, pa, λ, n, P and Cn to be the same as those in Section 4.
Step 2: Construct the trolley position RBF-NN, where n1 and m1 can be enumerated in such a
complex optimization process.
Step 3: Generate an initial population in which each individual represents the following parameters of the RBF-NN:

$nest_i = \begin{bmatrix} c_{x11} & c_{x12} & \cdots & c_{x1p} & \sigma_{x1} & w_{x1} \\ c_{x21} & c_{x22} & \cdots & c_{x2p} & \sigma_{x2} & w_{x2} \\ \vdots & \vdots & & \vdots & \vdots & \vdots \\ c_{xn_x 1} & c_{xn_x 2} & \cdots & c_{xn_x p} & \sigma_{xn_x} & w_{xn_x} \end{bmatrix}$    (25)

where $nest_i$ denotes the i-th solution, $i = 1, 2, \ldots, n$; the hidden nodes are encoded in a binary string $E_h^i = [1, 0, \ldots, 1]$, $E_h^i \in R^{n_x}$, where 1 denotes a connection between input and output and, on the contrary, 0 expresses disconnection.
Step 4: Determine the objective function, which is formulated as

$Obj = \dfrac{1}{s} \sum_{i=1}^{s} (Y_{xm}(i) - Y_x(i))^2$    (26)

where $Y_x(i)$ is the i-th sample of the trolley position; $Y_{xm}(i)$ is the i-th output of the trolley position RBF-NN; $s$ is the size of the training data.
Step 5: Train the trolley position RBF-NN to obtain the optimal network by applying a mutation operator to $E_h^i$ and the mCS to $nest_i$.
Step 6: Use the testing data to validate the generalization performance of the RBF-NN.
The same steps as above can be adopted to obtain the swing angle RBF-NN model. For further comparison, the standard CS is adopted to optimize the RBF-NNs with the same parameter settings as in Step 1 (RBF-NN-CS). Furthermore, mCS, CS and the gradient method are also used to optimize the RBF-NN parameters based on all the data. To provide a better visual understanding of the accuracy of the established RBF-NN models, comparisons of the model outputs and the real values on the gathered data for the overhead crane are given in Figs.9-12.
Figs.9-12 are about here
Fig.9 and Fig.11 show the training errors, the testing errors and the model outputs for the trolley position RBF-NN-mCS and RBF-NN-CS, respectively. Fig.9 (a) shows the modeling errors of the trolley position RBF-NN-mCS on the training data, while Fig.9 (b) displays the corresponding results on the testing data. Also, the model output and real value curves are shown in chronological order in Fig.9 (c) (solid line for the model output and dashed line for the real value) and Fig.9 (d) shows the errors between them. Fig.10 and Fig.12 give the corresponding results for the swing angle RBF-NN-mCS and RBF-NN-CS, respectively. From Figs.9-12, we can see that the maximum modeling error obtained by RBF-NN-mCS is significantly smaller than that obtained by RBF-NN-CS. In particular, at the initial stage and at the sampling points between 300 and 400, RBF-NN-mCS has apparently smaller modeling errors.
In addition to the graphical validation of the model outputs, the statistical parameters of correlation coefficient (CC), average absolute relative error (AARE) and standard deviation (SD) are employed to evaluate the performance of the RBF-NN models discussed in this paper. They are formulated as follows [44]:

$CC = 1 - \dfrac{\sum_{i=1}^{s} (Y_{xm}(i) - Y_x(i))^2}{\sum_{i=1}^{s} (Y_{xm}(i) - \overline{Y}_x)^2}$    (27)

$AARE = \dfrac{100}{s} \sum_{i=1}^{s} \left| \dfrac{Y_{xm}(i) - Y_x(i)}{Y_x(i)} \right|$    (28)

$SD = \left[ \dfrac{1}{s} \sum_{i=1}^{s} (Y_{xm}(i) - \overline{Y}_x)^2 \right]^{0.5}$    (29)

where $\overline{Y}_x$ is the mean. Table 6 illustrates the fitting accuracy and efficiency of the models (RBF-NN-mCS, RBF-NN-CS and RBF-NN with the gradient method) in terms of various evaluation
indices for training set and testing set, while Table 7 shows those for all gathered samples. Tables 6-7 is about here From the Table 6, we can generally see that the learning ability and generalizing ability of RBF-NN-mCS are better than that of RBF-NN-CS under the same network input. As a consequence, better performance indices in terms of CC, AARE and SD, as shown by the bold values, can be observed in RBF-NN-mCS. Moreover, Table 7 indicates that RBF-NN-mCS achieved best results including all the indices than that of RBF-NN-CS and RBF-NN. For example, CC of the trolley position RBF-NN-mCS is 0.9998, while those of RBF-NN-CS and RBF-NN are 0.9995 and 0.9953, respectively. In sum, RBF-NN-mCS can better capture the mapping relation for overhead crane. 6. Conclusions In this paper, a novel cuckoo search algorithm (mCS), the cuckoo search algorithm with membrane communication mechanism, is proposed for modelling overhead crane systems using RBF-NNs. In mCS, the membrane communication mechanism is adopted to maintain the population diversity. Moreover, in order to improve the search accuracy, chaotic local search strategy is adopted. Then numerical simulation results on six benchmark functions demonstrate that mCS can obtain more accurate solution and have higher convergence speed. When mCS is applied to estimate the parameters of RBF-NNs, the experimental results show that the mCS has better global exploitation ability and the mCS based RBF-NN models have a higher precision. The mCS can also be used for other complex optimization problems and the proposed modeling method can be employed for other complex nonlinear systems.
Acknowledgements
This work is supported partly by the National Natural Science Foundation of China (Grant No. 61573311), the National Science and Technology Pillar Program of China (Grant No. 2013BAF07B03), and the Education and Scientific Research Projects for Young and Middle-aged Teachers in Fujian Province (JAT160285).
References
[1] K. Yoshida, H. Kawabe, A design of saturating control with a guaranteed cost and its application to the crane control system, IEEE Trans. Automat. Contr. 37 (1992) 121-127.
[2] N. Sun, Y.C. Fang, H. Chen, B. Lu, Amplitude-saturated nonlinear output feedback antiswing control for underactuated cranes with double-pendulum cargo dynamics, IEEE Trans. Ind. Electron. 64 (3) (2017) 2135-2146.
[3] N. Sun, Y.M. Wu, Y.C. Fang, H. Chen, B. Lu, Nonlinear continuous global stabilization control for underactuated RTAC systems: design, analysis, and experimentation, IEEE/ASME Trans. Mechatron., in press, DOI: 10.1109/TMECH.2016.2631550.
[4] W. Chen, M. Saif, Output feedback controller design for a class of MIMO nonlinear systems using high-order sliding-mode differentiators with application to a laboratory 3-D crane, IEEE Trans. Ind. Electron. 55 (11) (2008) 3985-3996.
[5] Y. Zhao, H.J. Gao, Fuzzy-model-based control of an overhead crane with input delay and actuator saturation, IEEE Trans. Fuzzy Syst. 20 (1) (2012) 181-186.
[6] B. Ma, Y. Fang, Y. Zhang, Switching-based emergency braking control for an overhead crane system, IET Contr. Theory Appl. 4 (9) (2009) 1739-1747.
[7] Y. Sakawa, H. Sano, Nonlinear model and linear robust control of overhead traveling cranes, Nonlinear Anal. 30 (4) (1997) 2197-2207.
[8] T. Meressi, Modeling and control of a three dimensional gantry robot, in: Proceedings of the 37th IEEE Conference on Decision & Control, Tampa, FL, IEEE, (1998) 1514-1515.
[9] A. Kaneshige, N. Kaneshige, S. Hasegawa, T. Miyoshi, K. Terashima, Model and control system for 3D transfer of liquid tank with overhead crane considering suppression of liquid vibration, Int. J. Cast. Metals Res. 21 (2008) 293-298.
[10] J. Huang, Z. Liang, Q. Zang, Dynamics and swing control of double-pendulum bridge cranes with distributed-mass beams, Mech. Syst. Signal Proc. 54-55 (2015) 357-366.
[11] R.M.T.R. Ismail, N.D. That, Q.P. Ha, Modelling and robust trajectory following for offshore container crane systems, Autom. Constr. 59 (2015) 179-187.
[12] J. Tomczyk, J. Cink, A. Kosucki, Dynamics of an overhead crane under a wind disturbance condition, Autom. Constr. 42 (2014) 100-111.
[13] D.S. Broomhead, D. Lowe, Multivariable functional interpolation and adaptive networks, Complex Syst. 2 (1988) 321-355.
[14] L. Zhang, K. Li, H.B. He, G.W. Irwin, A new discrete-continuous algorithm for radial basis function networks construction, IEEE Trans. Neural Netw. Learn. Syst. 24 (11) (2013) 1785-1798.
[15] M. Gan, H.X. Li, H. Peng, A variable projection approach for efficient estimation of RBF-ARX model, IEEE T. Cybern. 45 (3) (2015) 476-485.
[16] Y.M. Fang, J.T. Fei, K.Q. Ma, Model reference adaptive sliding mode control using RBF neural network for active power filter, Int. J. Electr. Power Energy Syst. 73 (2015) 249-258.
[17] F.H.F. Leung, H.K. Lam, S.H. Ling, P.K.S. Tam, Tuning of the structure and parameters of a neural network using an improved genetic algorithm, IEEE Trans. Neural Netw. 14 (1) (2003) 79-88.
[18] J.L. Tao, N. Wang, Splicing system based genetic algorithms for developing RBF networks models, Chin. J. Chem. Eng. 15 (2) (2007) 240-246.
[19] K. Elsayed, C. Lacor, Robust parameter design optimization using Kriging, RBF and RBFNN with gradient-based and evolutionary optimization techniques, Appl. Math. Comput. 236 (2014) 325-344.
[20] H. Kaydani, A. Mohebbi, A comparison study of using optimization algorithms and artificial neural networks for predicting permeability, J. Pet. Sci. Eng. 112 (2013) 17-23.
[21] X. Chen, N. Wang, Modeling a delayed coking process with GRNN and double-chain based DNA genetic algorithm, Int. J. Chem. React. Eng. 8 (1) (2010) 47-54.
[22] G.E. Tsekouras, A. Manousakis, C. Vasilakos, K. Kalabokidis, Improving the effect of fuzzy clustering on RBF network's performance in terms of particle swarm optimization, Adv. Eng. Softw. 82 (2015) 25-37.
[23] X.S. Yang, S. Deb, Cuckoo search via Lévy flights, in: Proceedings of the World Congress on Nature & Biologically Inspired Computing, Coimbatore, IEEE, (2009) 210-214.
[24] H. Chiroma, S.A. Kareem, A. Khan, N.M. Nawi, A.Y. Gital, L. Shuib, A.I. Abubakar, M.Z. Rahman, T. Herawan, Global warming: predicting OPEC carbon dioxide emissions from petroleum consumption using neural network and hybrid cuckoo search algorithm, PLoS One 10 (8) (2015) e0136140.
[25] S. Shamshirband, M. Amirmojahedi, M. Gocic, S. Akib, D. Petkovic, J. Piri, S. Trajkovic, Estimation of reference evapotranspiration using neural networks and cuckoo search algorithm, J. Irrig. Drain Eng. 142 (2) (2016) 04015044.
[26] M.C. Alcantara Neto, J.P.L. Araujo, F.J.B. Barros, A.N. Silva, G.P.S. Cavalcante, A.G. D'Assuncao, Bioinspired multiobjective synthesis of X-band FSS via general regression neural network and cuckoo search algorithm, Microw. Opt. Technol. Lett. 57 (10) (2015) 2400-2405.
[27] J.S. Wang, S. Han, N.N. Shen, S.X. Li, Features extraction of flotation froth images and BP neural network soft-sensor model of concentrate grade optimized by shuffled cuckoo searching algorithm, Sci. World J. 11 (2014) 208094.
[28] R. Rajabioun, Cuckoo optimization algorithm, Appl. Soft Comput. 11 (8) (2011) 5508-5518.
[29] M. Khajeh, E. Jahanbin, Application of cuckoo optimization algorithm–artificial neural network method of zinc oxide nanoparticles–chitosan for extraction of uranium from water samples, Chemometrics Intell. Lab. Syst. 135 (8) (2014) 70-75.
[30] M.A. Mellal, E.J. Williams, Cuckoo optimization algorithm for unit production cost in multi-pass turning operations, Int. J. Adv. Manuf. Technol. 76 (1-4) (2014) 647-656.
[31] M.A. Mellal, E.J. Williams, Cuckoo optimization algorithm with penalty function for combined heat and power economic dispatch problem, Energy 93 (2015) 1711-1718.
[32] M.A. Mellal, E.J. Williams, Total production time minimization of a multi-pass milling process via cuckoo optimization algorithm, Int. J. Adv. Manuf. Technol. 87 (2016) 1-8.
[33] J.D. Huang, L. Gao, X.Y. Li, An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes, Appl. Soft Comput. 36 (2015) 349-356.
[34] X.T. Li, M.H. Yin, Modified cuckoo search algorithm with self adaptive parameter method, Inf. Sci. 298 (2015) 80-97.
[35] X.Y. Liu, M.L. Fu, Cuckoo search algorithm based on frog leaping local search and chaos theory, Appl. Math. Comput. 266 (2015) 1083-1092.
[36] M.K. Naik, R. Panda, A novel adaptive cuckoo search algorithm for intrinsic discriminant analysis based face recognition, Appl. Soft Comput. 38 (2016) 661-675.
[37] L. Huang, S. Ding, S. Yu, J. Wang, K. Lu, Chaos-enhanced cuckoo search optimization algorithms for global optimization, Appl. Math. Model. 40 (2016) 3860-3875.
[38] G. Păun, G. Rozenberg, A guide to membrane computing, Theor. Comput. Sci. 287 (1) (2002) 73-100.
[39] S.P. Yang, N. Wang, A novel P systems based optimization algorithm for parameter estimation of proton exchange membrane fuel cell model, Int. J. Hydrog. Energy 37 (10) (2012) 8465-8476.
[40] S.P. Yang, N. Wang, A P systems based hybrid optimization algorithm for parameter estimation of FCCU reactor-regenerator model, Chem. Eng. J. 211-212 (47) (2012) 508-518.
[41] J.H. Zhao, N. Wang, A bio-inspired algorithm based on membrane computing and its application to gasoline blending scheduling, Comput. Chem. Eng. 35 (2011) 272-283.
[42] A.H. Gandomi, X.S. Yang, S. Talatahari, A.H. Alavi, Firefly algorithm with chaos, Commun. Nonlinear Sci. Numer. Simul. 18 (2013) 89-98.
[43] A.H. Gandomi, G.J. Yun, X.S. Yang, S. Talatahari, Chaos-enhanced accelerated particle swarm optimization, Commun. Nonlinear Sci. Numer. Simul. 18 (2013) 327-340.
[44] B. Alatas, Uniform big bang–chaotic big crunch optimization, Commun. Nonlinear Sci. Numer. Simul. 16 (2011) 3696-3703.
[45] H.G. Han, J.F. Qiao, Adaptive computation algorithm for RBF neural network, IEEE Trans. Neural Netw. 23 (2) (2012) 342-347.
[46] J. Wu, J. Long, M. Liu, Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm, Neurocomputing 148 (2015) 136-142.
[47] R. Yang, P.V. Er, Z. Wang, K.K. Tan, An RBF neural network approach towards precision motion system with selective sensor fusion, Neurocomputing 199 (2016) 31-39.
[48] R.N. Mantegna, Fast, accurate algorithm for numerical simulation of Lévy stable stochastic processes, Phys. Rev. E 49 (5) (1994) 4677-4683.
[49] X.S. Yang, S. Deb, Multiobjective cuckoo search for design optimization, Comput. Oper. Res. 40 (6) (2013) 1616-1624.
Fig. 1. Illustration for the overhead crane system: a trolley of mass mc is driven along the rail by the force F, with trolley position x; the payload of mass mp is suspended by a rope of length l under gravity g.
Fig. 2. RBF neural networks: (a) the trolley position RBF-NN FNN, with inputs x(t−1), …, x(t−n1), F(t), …, F(t−m1) and output x(t); (b) the swing angle RBF-NN GNN, with inputs θ(t−1), …, θ(t−n2), F(t), …, F(t−m2) and output θ(t).

Fig. 3. Schematic diagram of the mCS algorithm based RBF-NNs for overhead crane systems: the force F is applied to the overhead crane system and to the two RBF-NNs, whose outputs x̂ and θ̂ are compared with the measured x and θ, and the resulting errors drive the mCS optimization of the network parameters.
Fig. 4. Membrane structure and schematic plan of the communication mechanism; (a)-(h) show the objects transferred in one communication cycle: (a) initial objects and communication objects (c1^0 and c2^0) created in membrane 1 and membrane 2; (b) direction of transfer; (c) the two communication object sets c1 and c2 transfer to membrane 0; (d) two updated communication object sets (c1^1 and c2^1) obtained by the crossover rule; (e) direction of transfer; (f) the two communication object sets c1 and c2 transfer to membrane 1 and membrane 2, respectively; (g) objects in membrane 1 and membrane 2 updated by carrying out the communication rule; (h) two updated communication object sets (c1^2 and c2^2) obtained by the selection rule.
Fig. 5. Flow chart of mCS: set the parameters of mCS; initialize the population and the two communication sets; divide the population into two parts P1 and P2; on each part, apply the communication rule, then Lévy flights, random walk and chaotic local search; apply the selection rule to form an updated communication set for each part; apply the crossover rule on the two communication sets to get updated communication sets; repeat until the termination criteria are met, then output the results.
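The chaotic local search step in the flow chart can be sketched with a logistic map, the classic chaos generator used in chaos-enhanced metaheuristics [42,43]. This is a generic illustration under assumed update rules (the shrink factor and mapping are hypothetical), not the exact rule of mCS:

```python
import numpy as np

def chaotic_local_search(f, x_best, lb, ub, n_steps=20, z0=0.7, shrink=0.1):
    """Greedy local refinement of x_best driven by a logistic chaotic sequence."""
    z = z0                                        # chaotic variable in (0,1); avoid 0.25, 0.5, 0.75
    x, fx = np.array(x_best, float), f(x_best)
    for _ in range(n_steps):
        z = 4.0 * z * (1.0 - z)                   # logistic map at the fully chaotic r = 4
        chaotic_point = lb + z * (ub - lb)        # map z into the search box
        cand = x + shrink * (chaotic_point - x)   # small chaotic perturbation
        fc = f(cand)
        if fc < fx:                               # keep only improving moves
            x, fx = cand, fc
    return x, fx

# Example: refine a point on the sphere function
x, fx = chaotic_local_search(lambda v: float(np.sum(v ** 2)),
                             np.array([0.5]), np.array([-1.0]), np.array([1.0]))
```

Because only improving moves are accepted, the refined objective value never exceeds that of the starting point.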
Fig. 6. Convergence curves of mCS with different parameters (mCS1, mCS2, mCS3; error on a log scale vs. generations): (a) f1; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6.
Fig. 7. Convergence curves of mCS, ACS and CS (error on a log scale vs. generations): (a) f1; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6.
Fig. 8. The experimental setup
Fig. 9. Trolley position RBF-NN model output using mCS: (a) modeling errors on the training data; (b) modeling errors on the testing data; (c) model outputs and real values over all samples; (d) errors between model outputs and real values.
Fig. 10. Swing angle RBF-NN model output using mCS: (a) modeling errors on the training data; (b) modeling errors on the testing data; (c) model outputs and real values over all samples; (d) errors between model outputs and real values.
Fig. 11. Trolley position RBF-NN model output using CS: (a) modeling errors on the training data; (b) modeling errors on the testing data; (c) model outputs and real values over all samples; (d) errors between model outputs and real values.
Fig. 12. Swing angle RBF-NN model output using CS: (a) modeling errors on the training data; (b) modeling errors on the testing data; (c) model outputs and real values over all samples; (d) errors between model outputs and real values.
Table 1 Description of the benchmark functions (C: characteristic; D: dimension)

Sphere (C: US, D = 30): f1(x) = Σ_{i=1}^{D} x_i², interval [-100, 100], min 0
Schwefel 2.22 (C: US, D = 30): f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|, interval [-10, 10], min 0
Schwefel 2.26 (C: MS, D = 30): f3(x) = 418.9829·D − Σ_{i=1}^{D} x_i·sin(√|x_i|), interval [-500, 500], min 0
Rastrigin (C: MS, D = 30): f4(x) = Σ_{i=1}^{D} [x_i² − 10·cos(2πx_i) + 10], interval [-5.12, 5.12], min 0
Ackley (C: MN, D = 30): f5(x) = −20·exp(−0.2·√((1/D)·Σ_{i=1}^{D} x_i²)) − exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)) + 20 + e, interval [-32, 32], min 0
Griewank (C: MN, D = 30): f6(x) = (1/4000)·Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1, interval [-600, 600], min 0
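For reference, the six benchmark functions of Table 1 can be implemented directly (a sketch; x is a NumPy vector of dimension D):

```python
import numpy as np

def f1(x):  # Sphere
    return np.sum(x ** 2)

def f2(x):  # Schwefel 2.22
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def f3(x):  # Schwefel 2.26
    return 418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def f4(x):  # Rastrigin
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f5(x):  # Ackley
    d = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def f6(x):  # Griewank
    i = np.arange(1, len(x) + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

All six have minimum 0; for f1, f2, f4, f5, f6 it lies at the origin, while for f3 it lies near x_i ≈ 420.9687.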
Table 2 The parameters used for the benchmark functions

mCS, CS: Gmax = 1000, D = 30, β = 1.5, pa = 0.25, α = 1, n = 30, P = 0.25 (P only for mCS)
ACS: Gmax = 1000, D = 30, pa = 0.25, n = 30
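The Lévy-flight step lengths (exponent β = 1.5 as in Table 2) are commonly generated with Mantegna's algorithm [48]. A sketch follows; the new-solution update shown in the trailing comment is only indicative, since the paper's exact scaling may differ:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Heavy-tailed step lengths via Mantegna's algorithm: s = u / |v|^(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # Gaussian with the tuned scale sigma_u
    v = rng.normal(0.0, 1.0, size)       # standard Gaussian
    return u / np.abs(v) ** (1 / beta)

# A cuckoo then moves as, e.g.: x_new = x + alpha * levy_step(size=x.size) * (x - x_best)
```

The occasional very large steps produced by the heavy tail are what give the Lévy-flight phase its global exploration ability.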
Table 3 Performance comparison of mCS with different parameters

| Function | Algorithm | Best | Worst | Mean | SD |
|---|---|---|---|---|---|
| f1 | mCS (Cn = 5) | 1.5226E-11 | 4.6614E-10 | 9.4363E-11 | 9.4260E-11 |
| f1 | mCS (Cn = 10) | 3.7627E-15 | 2.7260E-13 | 4.6271E-14 | 6.8974E-14 |
| f1 | mCS (Cn = 15) | 2.0557E-17 | 5.5516E-15 | 1.0702E-15 | 1.1166E-15 |
| f2 | mCS (Cn = 5) | 1.6732E-08 | 2.3321E-07 | 9.4126E-08 | 5.6435E-08 |
| f2 | mCS (Cn = 10) | 3.5135E-11 | 3.7789E-10 | 1.5642E-10 | 8.1026E-11 |
| f2 | mCS (Cn = 15) | 6.8803E-13 | 0.2650 | 0.01107 | 0.04950 |
| f3 | mCS (Cn = 5) | 1047.8172 | 3056.4191 | 1912.4654 | 469.0635 |
| f3 | mCS (Cn = 10) | 593.7221 | 2856.1869 | 1576.4882 | 471.5835 |
| f3 | mCS (Cn = 15) | 864.2819 | 2359.1316 | 1587.0721 | 406.8148 |
| f4 | mCS (Cn = 5) | 10.9876 | 40.7998 | 24.8448 | 6.6705 |
| f4 | mCS (Cn = 10) | 10.9452 | 46.7630 | 26.7371 | 9.0378 |
| f4 | mCS (Cn = 15) | 17.9092 | 54.8698 | 33.5462 | 9.2247 |
| f5 | mCS (Cn = 5) | 4.8271E-06 | 0.0537 | 0.0031 | 0.0109 |
| f5 | mCS (Cn = 10) | 1.0305E-07 | 0.0031 | 0.0001 | 0.0005 |
| f5 | mCS (Cn = 15) | -1.0730E-08 | 9.9098 | 0.8924 | 2.1340 |
| f6 | mCS (Cn = 5) | 5.9849E-11 | 0.0190 | 0.0022 | 0.0049 |
| f6 | mCS (Cn = 10) | 1.0058E-13 | 0.0294 | 0.0013 | 0.0056 |
| f6 | mCS (Cn = 15) | 2.8865E-15 | 0.0681 | 0.0038 | 0.0141 |
Table 4 Comparison of the mCS with CS and ACS

| Function | Algorithm | Best | Worst | Mean | SD |
|---|---|---|---|---|---|
| f1 | mCS | 3.7627E-15 | 2.7260E-13 | 4.6271E-14 | 6.8974E-14 |
| f1 | ACS | 9.8949E-10 | 4.0673E-07 | 3.4938E-08 | 7.8038E-08 |
| f1 | CS | 0.0008 | 0.0057 | 0.0019 | 0.0011 |
| f2 | mCS | 3.5135E-11 | 3.7789E-10 | 1.5642E-10 | 8.1026E-11 |
| f2 | ACS | 2.2721E-06 | 0.0004 | 5.7390E-05 | 0.0001 |
| f2 | CS | 0.0172 | 0.0586 | 0.0362 | 0.0114 |
| f3 | mCS | 593.7221 | 2856.1869 | 1576.4882 | 471.5835 |
| f3 | ACS | 1886.3303 | 5850.3675 | 5048.6332 | 768.8830 |
| f3 | CS | 3835.7064 | 6482.7667 | 4939.5392 | 683.9573 |
| f4 | mCS | 10.9452 | 46.7630 | 26.7371 | 9.0378 |
| f4 | ACS | 62.9922 | 140.9701 | 104.3145 | 25.5887 |
| f4 | CS | 75.0712 | 156.2617 | 125.9992 | 21.8192 |
| f5 | mCS | 1.0305E-07 | 0.0031 | 0.0001 | 0.0005 |
| f5 | ACS | 0.0003 | 0.1257 | 0.0176 | 0.0320 |
| f5 | CS | 0.0757 | 12.8691 | 3.6549 | 3.5809 |
| f6 | mCS | 1.0058E-13 | 0.0294 | 0.0013 | 0.0056 |
| f6 | ACS | 1.2326E-06 | 0.0172 | 0.0015 | 0.0037 |
| f6 | CS | 0.0053 | 0.1679 | 0.0334 | 0.0316 |
Table 5 Results of Wilcoxon signed-rank test (p-value is shown)

| Function | CS vs. ACS | ACS vs. mCS |
|---|---|---|
| f1 | 2.3261E-13 | 0.0172 |
| f2 | 1.1983E-24 | 0.0026 |
| f3 | 0.5637 | 7.0672E-29 |
| f4 | 0.0013 | 1.7136E-22 |
| f5 | 7.0649E-07 | 0.0040 |
| f6 | 9.2257E-07 | 0.8709 |
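The signed-rank p-values above come from the Wilcoxon test applied to paired per-run results. A minimal normal-approximation sketch follows (no tie or zero corrections; in practice a library routine such as scipy.stats.wilcoxon would be used, and the input data here are hypothetical):

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0.0]                                  # discard zero differences
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1    # ranks of |d| (ties ignored)
    w_plus = float(ranks[d > 0].sum())               # sum of ranks of positive d
    mu = n * (n + 1) / 4.0                           # mean of W+ under H0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)   # std of W+ under H0
    z = (w_plus - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided p-value
    return w_plus, p
```

A small p-value (e.g. below 0.05) indicates that the paired difference between two algorithms is statistically significant.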
Table 6 Comparison between statistical parameters of models on training set and testing set

| Statistical parameter | RBF-NN-mCS Position (training / testing) | RBF-NN-mCS Angle (training / testing) | RBF-NN-CS Position (training / testing) | RBF-NN-CS Angle (training / testing) |
|---|---|---|---|---|
| CC | 0.9998 / 0.9998 | 0.9994 / 0.9993 | 0.9989 / 0.9986 | 0.9986 / 0.9924 |
| AARE | 0.6794 / 1.8729 | 3.2658 / 3.4312 | 1.4934 / 1.5801 | 6.5498 / 7.1213 |
| SD | 0.0015 / 0.0017 | 0.0006 / 0.0007 | 0.0039 / 0.0041 | 0.0018 / 0.0020 |
Table 7 Comparison between statistical parameters of models on gathered samples

| Statistical parameter | RBF-NN-mCS Position | RBF-NN-mCS Angle | RBF-NN-CS Position | RBF-NN-CS Angle | RBF-NN Position | RBF-NN Angle |
|---|---|---|---|---|---|---|
| CC | 0.9998 | 0.9993 | 0.9995 | 0.9954 | 0.9953 | 0.9951 |
| AARE | 0.4052 | 3.4431 | 1.3982 | 5.6261 | 1.2095 | 5.9915 |
| SD | 0.0009 | 0.0006 | 0.0027 | 0.0016 | 0.0079 | 0.0017 |