Port throughput forecasting by MARS-RSVR with chaotic simulated annealing particle swarm optimization algorithm







Jing Geng a, Ming-Wei Li a,*, Zhi-Hui Dong a, Yu-Sheng Liao b

a College of Shipbuilding Engineering, Harbin Engineering University, Harbin 150001, Heilongjiang, China
b Department of Healthcare Administration, Oriental Institute of Technology, No. 58, Sec. 2, Sichuan Rd., Panchiao, New Taipei, Taiwan, ROC



Article history: Received 30 April 2014; received in revised form 22 June 2014; accepted 22 June 2014. Communicated by Wei Chiang Hong.

Port throughput forecasting is a complex nonlinear dynamic process. Its prediction accuracy is influenced by the uncertainty of socio-economic factors, and especially by the mixed noise (singular points) produced during the collection, transfer and calculation of statistical data; consequently, it is difficult to obtain a satisfactory port throughput forecast, and establishing an effective port throughput forecasting scheme remains a significant research issue. Since the robust v-support vector regression (RSVR) model can handle the nonlinearity and mixed noise in the port throughput history data and its related socio-economic factors, this paper introduces the RSVR model to forecast port throughput. To search for a more appropriate parameter combination for the RSVR model, and considering that the simulated annealing particle swarm optimization (SAPSO) algorithm and the original PSO algorithm still suffer from immature convergence and are time consuming, this study presents a chaotic simulated annealing particle swarm optimization (CSAPSO) algorithm to determine the parameter combination. To identify the final input vectors for the RSVR model, multivariable adaptive regression splines (MARS) is adopted to select the final input vectors from the candidate input variables. This study eventually proposes a port throughput forecasting scheme that hybridizes RSVR, CSAPSO and MARS to obtain a more accurate forecasting result. Subsequently, the port throughput data and the corresponding socio-economic indicator data of Shanghai are compiled as an illustrative example to evaluate the feasibility and performance of the proposed scheme. The experimental results indicate that the proposed port throughput forecasting scheme obtains better results than six competing models in terms of forecasting error.

Keywords: Port throughput; Forecasting; Chaotic mapping; Particle swarm optimization (PSO); Simulated annealing (SA); Robust v-support vector regression (RSVR)

1. Introduction

The prediction of port throughput provides the basis for port strategic decisions, port development scale, port general layout and port district division. If this prediction is not accurate enough, policy bias will occur, which may cause huge financial losses. Therefore, developing an effective port throughput forecasting model has become a crucial and challenging task. Numerous forecasting approaches have been developed for port throughput prediction. Among the conventional quantitative forecasting approaches, the autoregressive integrated moving average (ARIMA) models [1] are the most popular and practical time series forecasting methods. They are often applied to forecast a series when data are inadequate to construct econometric models, or when knowledge of the structure of the forecasting model is limited.



Time series models are simple in calculation and fast in speed, and are likely to outperform other models in some cases, especially in short-term forecasting [2,3]; therefore they are widely used in port throughput prediction [4–7]. However, time-series forecasting models fail to reflect other factors related to the predicted series. The artificial neural network (ANN) is primarily based on a model emulating the processing of the human neurological system to identify related spatial and temporal characteristics from historical data patterns (particularly for nonlinear and dynamic evolutions); therefore, ANNs can approximate any level of complexity and do not need prior knowledge of problem solving. Since port throughput prediction is too complex to be solved by a single linear statistical algorithm, ANN should be considered as an alternative for port throughput forecasting. Owing to their superior ability to approximate any degree of complexity while requiring no prior knowledge of problem solving, ANN models [8–10] have been widely applied in port throughput forecasting [11]. Although ANN-based forecasting models can approximate any function, particularly nonlinear functions, they have difficulties with the non-convex nature of the network training error, struggle to explain their





black-box operations, can easily be trapped in local minima [12,13], have time-consuming training procedures, and involve subjectivity in selecting an ANN model architecture [14]. Additionally, training an ANN model requires a large number of training samples, while port throughput and its related impact indicators offer only limited data. Thus, ANN models cannot achieve satisfactory performance in port throughput forecasting. Support vector regression (SVR) overcomes the inherent defects of the ANN model [15] through the structural risk minimization criterion. It possesses not only strong nonlinear modeling ability but also several advantages, such as a theoretically guaranteed global optimum, a simple modeling structure and process, and good generalization from small samples; therefore, SVR-based models [16] have been successfully applied to forecasting problems in many fields, such as financial time series forecasting [17–21], tourist arrival prediction [22,23], atmospheric science forecasting [24–27], traffic flow prediction [28–30], and electric load forecasting [31–36]. It has also been used successfully for the prediction of port throughput [37,38]. The prediction of port throughput is a complex nonlinear dynamic procedure affected by numerous factors, such as gross domestic product, gross national product, total imports and exports, and industrial output. These factors mostly have random and nonlinear characteristics and may have complex nonlinear connections among them; thus, it is difficult to express them by a definite method in a low-dimensional space. Noise may emerge from stochastic errors when the data are collected, transmitted and analyzed, and can exist both in the port throughput sequence and in its related factors. This noise largely follows a normal distribution but has high-amplitude values at some points. The models mentioned above do not, in theory, account for such noise. The mixed noise in the throughput sequence and in the related impact factor data strongly affects the final prediction results, especially for the sensitive SVR model (the kernel matrix is no longer sparse because outliers approach the decision boundary, and the number of support vectors grows quickly under the influence of outliers). Wu [39] designed a robust loss function that applies different loss functions in different situations, considering the mixed noise of normal distribution, high-amplitude values and singular points in the data of the prediction sequence and its related impact factors, obtained a new support vector regression (RSVR), and applied it to product sales time series prediction. The numerical results indicate that the resulting model effectively suppresses noise and leads to better predictions. To deal with the mixed noise in the port throughput sequence and its related impact factors, this paper adopts the RSVR model to improve the robustness and accuracy of port throughput prediction. Practical results indicate that the forecasting accuracy of SVR-based models is significantly influenced by the determination of the parameters [40]. Although there are some recommendations in the literature on appropriate settings of the SVR parameters [41], those approaches do not simultaneously consider the interaction effects among the parameters.
The cross-validation method commonly used for selecting SVR parameters carries a certain validation error [42]; especially in complex forecasting problems, it cannot guarantee a high level of forecasting accuracy. To identify which approach is suitable for specific data patterns, researchers have employed different hybrid evolutionary algorithms [43–45] (such as particle swarm optimization, simulated annealing, genetic algorithms and immune algorithms) to determine the parameters. In these studies, the SVR models with parameters determined by evolutionary algorithms are superior to other competitive forecasting models (ARIMA, ANNs, etc.); however, these evolutionary algorithms still suffer from being time consuming or inefficient in optimizing the parameters of SVR models.

Therefore, a more effective evolutionary algorithm needs to be developed to improve the parameter optimization approach. PSO is a stochastic optimization algorithm based on swarm behavior, first introduced by Kennedy and Eberhart, and is currently widely used in function optimization, neural network training, pattern classification, fuzzy control systems and other engineering fields. However, PSO suffers from a random oscillation effect in the late stage of evolution, which makes it time consuming near the globally optimal values (i.e., the convergence rate becomes very low) and easily trapped in local minima; this has become a bottleneck for the further development of the PSO algorithm. The SA algorithm is a heuristic random search algorithm based on the Monte-Carlo iterative solution method, originally introduced by N. Metropolis. The algorithm has a strong global search ability and accepts both good and inferior solutions with a certain probability. Thus, when SA falls into a local optimum, it can in theory jump out of the trap after sufficient time and eventually reach the global optimum. Recently, SA has been widely used in engineering, such as production management, control engineering, machine learning, neural networks, image processing and other areas. PSO-SA, which combines SA and PSO, was proposed to enhance the algorithm's ability to jump out of local maxima and to reduce the convergence time [46]. Employing the PSO-SA algorithm to select the parameters of the SVR model can improve the parameter optimization for SVR [47,48]. However, because of the lack of diversity in the late stage of evolution, the PSO-SA algorithm still easily falls into a local optimum (immature convergence). To enhance the diversity of the evolutionary population and improve the global exploration ability of PSO-SA, this paper employs a chaotic sequence to transform the three hyper-parameters of the SVR model from the solution space to the chaotic space; a variable in such a chaotic space can travel ergodically over the whole space of interest and eventually locate an improved solution. Most existing chaotic sequences adopt the Logistic mapping function, whose values cluster at the two ends of the interval [0,1]; it therefore cannot adequately strengthen the chaotic distribution characteristics [49]. By comparing the chaotic distribution characteristics obtained after mapping the hyper-parameters into chaotic space, the authors conclude that the Cat mapping function has good ergodic uniformity in the interval [0,1] and does not easily fall into a minor cycle [50]. Therefore, this study employs the Cat mapping function to implement the chaos disturbance for the evolutionary population of PSO-SA, designs a coupling evolution mechanism of Cat mapping, SA and PSO, and proposes the CSAPSO algorithm to improve the optimization of the RSVR parameters. Meanwhile, port throughput is influenced by various socio-economic factors (the candidate input variables, such as gross domestic product, total investment in fixed assets, total imports and exports, industrial output, etc.), but a major disadvantage of SVR-based forecasting models is that they cannot select the final input vectors from the candidate input variables. Thus, selecting the final input vectors is crucial in constructing an SVR-based forecasting model.
MARS is a multivariate, nonlinear, nonparametric regression approach [51]; it not only has excellent variable selection capabilities, but can also effectively analyze the differences in the degree of significance of different variables. MARS has therefore been widely used in various fields such as sales prediction [52], credit evaluation [53–55], stock price forecasting [56–58], software reliability analysis [59–61] and predicting species distribution [62,63]. However, MARS has rarely been used in port throughput forecasting in the existing literature; therefore, this paper employs MARS to determine the final input vectors for RSVR and to analyze the degrees of significance of the different factors for further research on the port throughput generation mechanism.



To improve forecasting performance in port throughput prediction, this study integrates MARS, RSVR and CSAPSO into a port throughput forecasting scheme (the MRSVR-CSAPSO scheme). The main idea of the established scheme is first to employ MARS to select the final input vectors from the candidate input variables for RSVR, from which an MRSVR model is built. After the MRSVR model is obtained, CSAPSO is adopted to determine the values of the three parameters of the MRSVR model. Finally, the MRSVR-CSAPSO port throughput forecasting scheme is built. In the forecasting process, this study selects numerous factors as the candidate input variables, such as gross domestic product (x1: GDP), total investment in fixed assets (x2: TIFA), total imports and exports (x3: TIE), industrial output (x4: IO), first industry value (x5: FIV), second industry value (x6: SIV), tertiary industry value (x7: TIV), population measurement (x8: PM), total retail sales of consumer goods (x9: TRSCG), freight volume (x10: FV), highway freight volume (x11: HFV), and railway freight volume (x12: RFV). Meanwhile, as the basis of future port throughput, the historical port throughput data strongly influences future port throughput and is easily obtained and quantified; thus, this paper also uses historical port throughput data as candidate input variables, namely the port throughput of three years earlier (x13: PTYPT1), the port throughput of two years earlier (x14: PTYPT), the previous year's port throughput (x15: PYPT), the previous 2-year moving average (x16: P2YMA) and the previous 3-year moving average (x17: P3YMA). To test the forecasting performance of the proposed prediction scheme, the results of the proposed MRSVR-CSAPSO model are compared with those of the ARIMA, MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO and MRSVR-SAPSO models. The rest of this paper is organized as follows. Section 2 introduces MARS, the basic formulation of RSVR and the determination of the RSVR parameters by CSAPSO. The proposed MRSVR-CSAPSO port throughput forecasting scheme is described in Section 3. Section 4 provides a numerical example and compares the forecasting performance of all alternatives. Conclusions are provided in Section 5.

2. Methodology

2.1. Robust v-support vector regression (RSVR)

2.1.1. Fundamentals of SVR

The basic idea of the SVR model is to define a nonlinear mapping $\Phi: R^n \to R^m$ ($m \ge n$), map the spatial data x to a high-dimensional feature space, and then perform linear regression in that space. Given a dataset $\{(x_i, y_i), i = 1, 2, \dots, N\}$, where $x_i \in R^n$ is the input vector (the final input vectors), $y_i \in R$ (the port throughput) is the output variable corresponding to $x_i$, and N is the total number of data points, SVR estimates the regression function as

$$f(x) = \omega \cdot \Phi(x) + b \qquad (1)$$

where $\omega$ and $\Phi(x)$ are m-dimensional vectors, "$\cdot$" is the dot product in the feature space, and $b \in R$ is the threshold. SVR uses structural risk minimization to compute Eq. (1); the core idea of structural risk minimization is

$$R[f] \le R_{emp} + R_{reg} \qquad (2)$$

where $R[f]$ is the actual risk, $R_{emp}$ is the empirical risk, which measures the deviation between f(x) and the samples, and $R_{reg}$ is the confidence range, which measures the complexity of f(x). The empirical risk $R_{emp}$ is determined by the loss function; different loss functions form different $R_{emp}$, and hence different SVR models.

2.1.2. Robust loss function

There is a relationship between the optimal loss function and the intrinsic characteristics of the sample set. For normally distributed noise, using the Gaussian function as the loss function gives the best noise-reduction effect. For larger noise and singular points, the Laplace (linear) loss function performs best. Based on an analysis of the unpredictability, randomness and non-stationarity of the sample data and the characteristics of the various loss functions, a new loss function (the robust loss function) that combines the loss functions above is designed to improve the robustness of the SVR model. The robust loss function is expressed as

$$L(\xi) = \begin{cases} 0, & |\xi| \le \varepsilon \\ \frac{1}{2}(|\xi| - \varepsilon)^2, & \varepsilon < |\xi| \le \varepsilon_\mu \\ \mu(|\xi| - \varepsilon) - \frac{1}{2}\mu^2, & |\xi| > \varepsilon_\mu \end{cases} \qquad (3)$$

where $\varepsilon + \mu = \varepsilon_\mu$, $\varepsilon \ge 0$, $\mu \ge 0$. This paper only considers the case $\mu = 1$. The robust loss function is divided into three parts (Fig. 1): (1) when $|\xi| \le \varepsilon$, the deviation is not penalized, in order to ensure sparse solutions of the learning machine; (2) when $\varepsilon < |\xi| \le \varepsilon_\mu$, the Gaussian loss function $\frac{1}{2}(|\xi| - \varepsilon)^2$ penalizes the deviation, to suppress noise obeying a Gaussian distribution; (3) when $|\xi| > \varepsilon_\mu$, the Laplace loss function $\mu(|\xi| - \varepsilon)$ increases the penalty, to restrain large-amplitude noise and anomalous data. The robust loss function weakens the influence of abnormal data, which gives the SVR model better robustness and generalization ability.

Fig. 1. Robust loss function.
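To make the piecewise structure of Eq. (3) concrete, the following short Python sketch evaluates the robust loss for a residual ξ; the function name, the default ε and the default μ = 1 are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def robust_loss(xi, eps=0.1, mu=1.0):
    """Robust loss of Eq. (3): no penalty inside the eps-tube,
    quadratic (Gaussian) penalty up to eps_mu = eps + mu,
    linear (Laplace) penalty beyond eps_mu."""
    xi = np.abs(np.asarray(xi, dtype=float))
    eps_mu = eps + mu
    return np.where(
        xi <= eps,
        0.0,
        np.where(
            xi <= eps_mu,
            0.5 * (xi - eps) ** 2,              # Gaussian part
            mu * (xi - eps) - 0.5 * mu ** 2,    # Laplace part
        ),
    )

# Example: small residuals are ignored, moderate ones are squared,
# large outliers are only penalized linearly.
print(robust_loss([0.05, 0.5, 5.0], eps=0.1, mu=1.0))
```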




2.1.3. Robust v-support vector regression

The expected risk obtained from $f(x) = w \cdot x$ also satisfies the structural risk minimization principle. The robust v-support vector regression (RSVR) seeks such a hyper-plane f(x). Suppose the dataset is $T = \{(x_i, y_i)\}$, where $x_i \in R^d$, $y_i \in R$, $x_i$ is a d-dimensional column vector and $i = 1, 2, \dots, l$; the mathematical model of the RSVR with the robust loss function is

$$\min_{w, \xi^{(*)}, b, \varepsilon} \; \frac{1}{2}(\|w\|^2 + b^2) + C\Big[ v\varepsilon + \frac{1}{l}\sum_{i \in I_1} \frac{1}{2}(\xi_i^2 + \xi_i^{*2}) + \frac{1}{l}\sum_{i \in I_2} \mu(\xi_i + \xi_i^*) \Big]$$
$$\text{s.t.} \quad y_i - w \cdot x_i - b \le \varepsilon + \xi_i, \quad w \cdot x_i + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0, \quad v \in (0, 1], \quad \varepsilon \ge 0, \quad i = 1, \dots, l \qquad (4)$$

where w is a d-dimensional column vector, $C$ ($C \ge 0$) is a penalty coefficient that balances the confidence risk and the empirical risk, $v \in (0, 1]$ is the upper bound on the proportion of error samples in the total number of training samples and the lower bound on the proportion of support vectors in the total number of training samples, $\varepsilon$ controls the tube size, $\xi_i, \xi_i^*$ ($i = 1, \dots, l$) are slack variables used to ensure constraint satisfaction, $I_1$ is the set of samples whose slack variables fall in the interval $0 < |\xi_i|, |\xi_i^*| \le \varepsilon_\mu$, and $I_2$ is the set of samples whose slack variables satisfy $\varepsilon_\mu < |\xi_i|, |\xi_i^*|$. Using the dual principle, the Karush–Kuhn–Tucker (KKT) conditions and the kernel trick, the dual of the above optimization problem is obtained as Eq. (5):

$$\min_{\alpha, \alpha^*} \; \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)(K(x_i, x_j) + 1) - \sum_{i=1}^{l} y_i(\alpha_i - \alpha_i^*) + \frac{l}{2C}\sum_{i=1}^{l} (\alpha_i^2 - \alpha_i^{*2})$$
$$\text{s.t.} \quad e^T(\alpha + \alpha^*) \le Cv, \quad 0 \le \alpha_i, \alpha_i^* \le \min\Big(\frac{C}{l}, \frac{C\mu}{l}\Big) \qquad (5)$$

Transforming Eq. (5) into matrix form gives

$$\min_{\alpha, \alpha^*} \; \frac{1}{2}[\alpha^T, (\alpha^*)^T] \begin{bmatrix} Q + \frac{l}{C}E & -Q \\ -Q & Q + \frac{l}{C}E \end{bmatrix} \begin{bmatrix} \alpha \\ \alpha^* \end{bmatrix} + [-y^T, y^T] \begin{bmatrix} \alpha \\ \alpha^* \end{bmatrix}$$
$$\text{s.t.} \quad e^T(\alpha + \alpha^*) \le Cv, \quad 0 \le \alpha_i, \alpha_i^* \le \min\Big(\frac{C}{l}, \frac{C\mu}{l}\Big) \qquad (6)$$

where $Q_{ij} = K(x_i, x_j) + 1$, $i = 1, \dots, l$, $j = 1, \dots, l$, $e = [1, \dots, 1]^T$ is an l-dimensional column vector, E is an identity matrix of order l, and $\alpha$ and $\alpha^*$ are the vectors formed by the Lagrange multipliers; they are l-dimensional non-negative column vectors. The output of RSVR is

$$f(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*)(K(x_i, x) + 1) \qquad (7)$$

According to Eq. (7), RSVR has a more concise dual form and a simplified calculation process; RSVR has no threshold parameter b in the output expression. Owing to the good performance of the radial basis function in SVR applications [35], this paper employs the radial basis function as the kernel function of the RSVR model:

$$K(x_i, x_j) = \exp\Big(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\Big) \qquad (8)$$

There are three parameters (C, v and σ) in the RSVR model; determining these parameters is the first step in training the RSVR model. Practical results indicate that the forecasting accuracy of an RSVR model depends on a good parameter combination (C, v, σ). Therefore, parameter determination is a very valuable issue. Recently, a series of intelligent optimization algorithms has been applied to test the potential and suitability of approaches to determining the parameters of an RSVR model. However, the employed intelligent optimization algorithms mostly suffer from being easily trapped in local optima and from slow convergence. To improve the optimization efficiency of the parameter combination, the CSAPSO algorithm is proposed in this paper and used for parameter determination of the RSVR model.
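As a minimal illustration of Eqs. (7) and (8), the sketch below computes the RSVR output for a new sample, assuming the dual coefficients α and α* have already been obtained by solving Eq. (6); the training points and coefficient values are made up for the example.

```python
import numpy as np

def rbf_kernel(x1, x2, sigma):
    """Radial basis kernel of Eq. (8)."""
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

def rsvr_predict(x, X_train, alpha, alpha_star, sigma):
    """RSVR output of Eq. (7): f(x) = sum_i (alpha_i - alpha_i*)(K(x_i, x) + 1).
    alpha and alpha_star are the dual coefficients obtained from Eq. (6)."""
    return sum(
        (a - a_s) * (rbf_kernel(xi, x, sigma) + 1.0)
        for xi, a, a_s in zip(X_train, alpha, alpha_star)
    )

# Toy usage with hypothetical coefficients (for illustration only).
X_train = np.array([[0.2, 0.1], [0.5, 0.4], [0.9, 0.7]])
alpha = np.array([0.3, 0.0, 0.1])
alpha_star = np.array([0.0, 0.2, 0.0])
print(rsvr_predict(np.array([0.4, 0.3]), X_train, alpha, alpha_star, sigma=1.0))
```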

2.2. CSAPSO

PSO is easy to implement, does not require many parameters to be adjusted, and converges quickly in the early stage; however, it needs a long search time in the vicinity of the global optimum and is easily trapped in local extrema, owing to the random oscillation phenomenon in the late stage. SA can converge to the global optimum with 100% probability, provided the initial temperature is high enough and the temperature drops slowly enough. It has the ability to jump out of local optima because it accepts inferior particles with a certain probability. Combining the mechanisms of the SA and PSO algorithms, the PSO-SA hybrid algorithm was established, which both improves the convergence speed and enhances the ability to jump out of local extrema, owing to its jump characteristic during the evolution. The PSO-SA hybrid algorithm is superior to the standard PSO algorithm and to the SA algorithm alone in calculation accuracy and optimization results. However, PSO-SA, like the original PSO algorithm, still suffers from immature convergence (being trapped in local optima) and is time consuming. To overcome the deficiencies of the PSO-SA hybrid algorithm and to search for more appropriate parameter combinations of the RSVR model, this paper introduces Cat mapping into the PSO-SA hybrid optimization algorithm and establishes a more effective hybrid optimization algorithm (CSAPSO), which is employed to select the parameters of the RSVR model. In the proposed CSAPSO algorithm, the basic operations of PSO are used to initialize the particle swarm; the mechanism of SA is then adopted to drive the particles towards the lowest-energy state, producing improved particles. After that, when the PSO-SA evolution stagnates, Cat mapping with excellent chaotic ergodicity is employed to apply a chaos disturbance to the particle swarm, to explore better particles and obtain a more accurate solution; finally, the new modified individuals are sent back to the standard PSO for the next generation until the termination condition of the algorithm is reached. In this way, the optimal parameter combination of the RSVR model is obtained by the CSAPSO algorithm.

2.2.1. Global chaotic disturbance using Cat mapping (GCDCM)

Chaos [49] is a common nonlinear phenomenon; its behavior is complex and random-like, but it has a delicate internal regularity. Employing chaotic mapping for optimization search is superior to blind, disordered random search; it can avoid being trapped in local optima thanks to the ergodicity of chaos. Chaos optimization is a global optimization technique and has been widely used to improve evolutionary algorithms. However, the chaotic sequence generators currently used to improve evolutionary algorithms mostly adopt the Logistic mapping, the Tent mapping or the An mapping. This paper introduces the Cat mapping (which has a better chaotic distribution characteristic) into the PSO-SA algorithm to strengthen the global exploration ability of the CSAPSO hybrid optimization algorithm.

1) Cat mapping. The two-dimensional Cat mapping function [64] is given by Eq. (9):

$$\begin{cases} x_{n+1} = (x_n + y_n) \bmod 1 \\ y_{n+1} = (x_n + 2y_n) \bmod 1 \end{cases} \qquad (9)$$

where $x \bmod 1 = x - [x]$. An analysis [50] of the chaotic characteristics of the four mapping functions (Logistic mapping, Tent mapping, An mapping and Cat mapping) indicated that the distribution of Cat mapping is



relatively uniform and has no cyclic phenomenon during the iteration process. Meanwhile, the chaotic sequence values of the Cat mapping can take 0 and 1; in short, Cat mapping has a better chaotic distribution characteristic. Therefore, this paper applies Cat mapping in the GCDCM, in order to further strengthen the swarm diversity and reduce the convergence time.

2) The procedure of GCDCM is as follows:

Step 1: Set i = 1.
Step 2: Employ the Cat mapping function to generate pop_size chaotic variables with different trails, $x_i = \{x_{i1}, x_{i2}, \dots, x_{iQ}\}$, i = 1, 2, ..., pop_size.
Step 3: Set j = 1.
Step 4: According to Eq. (10), map all the components of $x_i$ to the value interval $[g_{j\text{-}min}, g_{j\text{-}max}]$ to obtain $g_i$:

$$g_i = g_{j\text{-}min} + (g_{j\text{-}max} - g_{j\text{-}min}) x_i \qquad (10)$$

where $g_{j\text{-}min}$ and $g_{j\text{-}max}$ are, respectively, the minimum and maximum values of the jth component of $g_i$, j = 1, 2, ..., Q.
Step 5: If $j \ge Q$, go to step 7; otherwise go to step 6.
Step 6: Set j = j + 1, go to step 4.
Step 7: If $i \ge$ pop_size, go to step 9; otherwise go to step 8.
Step 8: Set i = i + 1, go to step 2.
Step 9: Calculate the fitness values of the particles, mix them with the particle swarm from SA and sort according to fitness value. Then, select the pop_size best-ranked particles as the new modified particle swarm.
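A small sketch of the GCDCM core is given below, under the assumption that the chaotic variables are produced by iterating the Cat map of Eq. (9) and then rescaled into each parameter interval via Eq. (10); the seed values and parameter bounds are illustrative.

```python
import numpy as np

def cat_map_sequence(n, x0=0.3, y0=0.7):
    """Generate n points of the 2-D Cat mapping of Eq. (9)."""
    xs = np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        xs[k] = x
    return xs

def chaotic_disturbance(bounds, pop_size):
    """GCDCM core step (Eq. (10)): map chaotic values in [0, 1]
    into each parameter interval [g_min, g_max]."""
    n_dims = len(bounds)
    chaos = cat_map_sequence(pop_size * n_dims).reshape(pop_size, n_dims)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + (hi - lo) * chaos  # candidate particles in the solution space

# Example: disturb particles over the (C, v, sigma) search ranges used later.
bounds = [(0.01, 1000.0), (0.01, 1.0), (0.1, 1000.0)]
print(chaotic_disturbance(bounds, pop_size=5))
```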

2.2.2. Implementation structure of the CSAPSO algorithm

This study proposes a hybrid CSAPSO algorithm that uses SA to escape from local minima; in addition, it employs Cat mapping for the global chaotic disturbance, to further strengthen the global exploration capability and speed up the convergence rate. The proposed CSAPSO algorithm consists of the standard PSO part, the SA part and the GCDCM part. The standard PSO part evaluates the initial particle swarm and uses the four basic PSO operators (historical extremum updating, global extremum updating, speed updating and location updating) to obtain a new particle swarm (best particles). In each generation of the standard PSO, the swarm is delivered to SA for further processing. After all SA processes are completed, the modified individuals are delivered to GCDCM for the global chaotic disturbance. After the GCDCM processes are completed, the new modified individuals are sent back to the standard PSO for the next generation. These evolution iterations continue until the termination condition of the algorithm is reached. The procedure of CSAPSO is described below, and the flowchart is shown in Fig. 2.

2.2.3. Parameter determination by CSAPSO

The PSO and SA parts employ the mean absolute percentage error (MAPE, Eq. (11)) of the regression values as the fitness function:

$$\text{Fitness} = \text{MAPE}(\%) = \frac{100}{N} \sum_{i=1}^{N} \left| \frac{\hat{f}_i(x) - f_i(x)}{f_i(x)} \right| \qquad (11)$$

where N is the number of forecasting samples, $f_i(x)$ is the actual value of the ith period, and $\hat{f}_i(x)$ is the forecast value of the ith period.

The proposed CSAPSO algorithm is applied to optimize the parameter combination of the RSVR model as follows:

Step 1: Set population parameters. Set the size of the particle swarm, pop_size; the acceleration parameters, c1 and c2; the maximum evolution generation, gmax; the population distribution coefficient, pop_distr; the initial temperature, T0; the thermal equilibrium temperature, T; and the maximum inner iteration, Tmax.
Step 2: Generate the initial population. Each particle $X_k(i)$, k = (C, v, σ), i = 1, 2, ..., pop_size, has three components representing the three parameters (C, v and σ). Adopt rand(0,1) and Eq. (12) to generate pop_size particles in the feasible intervals $(Min_k, Max_k)$. In addition, randomly initialize the speed and update the individual optimal position $P_i^G$ and the global optimal position $P_g^G$. Set g = 1.

$$X_k(i) = Min_k + x_k(i)(Max_k - Min_k), \quad k = (C, v, \sigma) \qquad (12)$$

Step 3: Evaluate fitness. Set each particle $X_k(i)$, k = (C, v, σ), i = 1, 2, ..., pop_size, as the parameter combination of the RSVR model, train the RSVR model, calculate the fitness function value, and then evaluate the fitness (forecasting error) of each particle.
Step 4: If the current swarm satisfies the stopping criterion (the number of generations equals the maximum evolution generation), go to step 12; otherwise go to step 5.
Step 5: Calculate the adaptive inertia weight factor ϖ according to Eq. (13). Update the speed and position of each particle in the swarm. Then calculate the fitness values of all particles, and update the individual optimal position $P_i^G$ and the global optimal position $P_g^G$ of the pop_size particles. Set g = g + 1 and go to step 6.

$$\varpi = \varpi_{max} - g \cdot \frac{\varpi_{max} - \varpi_{min}}{gen} \qquad (13)$$

where ϖ is the updated inertia weight, and $\varpi_{min}$ and $\varpi_{max}$ are the minimum and maximum values of the inertia weight, respectively; in general they are set to 0.4 and 0.9.
Step 6: Provisional state. Receive the values of the three parameters from PSO and make a random move to change the existing system state to a provisional state. Another set of three positive parameters is generated in this stage.
Step 7: Metropolis criterion test. The acceptance or rejection of the provisional state is determined by the following Metropolis criterion [44]; if the provisional state is accepted, it is set as the current state, otherwise return to step 6.

$$\begin{cases} \text{If } E(S_n) \le E(S_o), & \text{the provisional state is accepted;} \\ \text{If } E(S_n) > E(S_o) \text{ and } p < P(\text{accept } S_n), \; 0 \le p \le 1, & \text{the provisional state is accepted;} \\ \text{Otherwise}, & \text{the provisional state is rejected.} \end{cases} \qquad (14)$$

where p is a random number used to determine the acceptance of the provisional state, and $P(\text{accept } S_n)$, the probability of accepting the new state, is given by

$$P(\text{accept } S_n) = \exp\left(\frac{E(S_o) - E(S_n)}{kT}\right) \qquad (15)$$

where T is the thermal equilibrium temperature and k is the Boltzmann constant.
Step 8: Incumbent solutions. If the provisional state is accepted and the current state is superior to the system state, then set the current state as the new system state; otherwise, return to step 6 until the maximum number of loops (Ns) is reached. The investigation in [65] demonstrated that Ns should be 100d to avoid infinitely repeated loops, where d denotes the problem dimension. In this paper, three parameters (C, v, σ) are used to determine the system states; thus, Ns is set to 300.




Fig. 2. CSAPSO algorithm flowchart.

Step 9: Temperature reduction. After the new system state is obtained, reduce the temperature. The temperature reduction is implemented by

$$N_t = C_t \, \rho \qquad (16)$$

where $0 < \rho < 1$, $N_t$ is the updated temperature, $C_t$ is the current temperature, and ρ is set to 0.9 [66].
Step 10: If the pre-determined temperature is reached, then stop the SA algorithm, set the latest state as the approximate optimal solution, and go to step 11; otherwise, go to step 6.



Step 11: Global chaotic disturbance. Start GCDCM with the improved individuals obtained from SA. Adopt the GCDCM module to complete the global chaos disturbance. Send the modified particle swarm from GCDCM back to the standard PSO, and continue the PSO operation. Go to step 4.
Step 12: Output the optimization results. The global optimal position $P_g^G = (C_{best}, v_{best}, \sigma_{best})$ provides the three parameters (C, v, σ) of the RSVR model.
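The following compact Python sketch ties the pieces of Section 2.2 together: a standard PSO update with the adaptive inertia weight of Eq. (13), an SA refinement of the global best using the Metropolis acceptance of Eqs. (14)–(15) with the cooling schedule of Eq. (16) (the Boltzmann constant k is taken as 1), and a Cat-map disturbance in the spirit of Eqs. (9)–(10). It is a simplified illustration of the CSAPSO loop, not the authors' implementation; in the actual scheme the fitness function would train an RSVR model and return the MAPE of Eq. (11), whereas the toy objective below is only for demonstration.

```python
import numpy as np

def csapso(fitness, bounds, pop_size=30, gen=100, c1=2.0, c2=2.0,
           w_max=0.9, w_min=0.4, T0=100.0, T_end=1.0, rho=0.9, rng=None):
    """Simplified CSAPSO sketch: PSO + SA refinement + Cat-map disturbance."""
    rng = np.random.default_rng(rng)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    pos = lo + rng.random((pop_size, dim)) * (hi - lo)          # Eq. (12)
    vel = np.zeros((pop_size, dim))
    fit = np.array([fitness(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    g_idx = np.argmin(fit)
    gbest, gbest_fit = pos[g_idx].copy(), fit[g_idx]
    cx, cy = 0.3, 0.7                                            # Cat-map state

    for g in range(1, gen + 1):
        w = w_max - g * (w_max - w_min) / gen                    # Eq. (13)
        r1, r2 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if pbest_fit.min() < gbest_fit:
            gbest = pbest[np.argmin(pbest_fit)].copy()
            gbest_fit = pbest_fit.min()

        # SA refinement of the global best (Metropolis criterion, Eqs. (14)-(15)).
        T, state, state_fit = T0, gbest.copy(), gbest_fit
        while T > T_end:
            cand = np.clip(state + rng.normal(scale=0.05 * (hi - lo)), lo, hi)
            cand_fit = fitness(cand)
            if cand_fit <= state_fit or rng.random() < np.exp((state_fit - cand_fit) / T):
                state, state_fit = cand, cand_fit
            T *= rho                                             # Eq. (16)
        if state_fit < gbest_fit:
            gbest, gbest_fit = state, state_fit

        # Global chaotic disturbance with the Cat map (Eqs. (9)-(10)).
        for i in range(pop_size):
            chaos = np.empty(dim)
            for d in range(dim):
                cx, cy = (cx + cy) % 1.0, (cx + 2.0 * cy) % 1.0
                chaos[d] = cx
            cand = lo + chaos * (hi - lo)
            cand_fit = fitness(cand)
            if cand_fit < fit[i]:
                pos[i], fit[i] = cand, cand_fit
    return gbest, gbest_fit

# Toy usage: minimize a simple quadratic over the (C, v, sigma) ranges.
best, best_fit = csapso(lambda p: np.sum((p - 1.0) ** 2),
                        bounds=[(0.01, 1000.0), (0.01, 1.0), (0.1, 1000.0)],
                        pop_size=20, gen=30, rng=0)
print(best, best_fit)
```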

2.3. Multivariable adaptive regression splines

MARS is a nonlinear, nonparametric regression method. It was proposed by the statistician Jerry Friedman [67] in 1995, and its application software was developed by Salford Systems. The modeling procedure of MARS is based on a divide-and-conquer strategy: the training datasets are partitioned into separate regions, each of which obtains its own regression equation. The non-linearity of a model is approximated using separate linear regression slopes in distinct intervals of the independent variable space. Therefore, the slope of the regression line is allowed to change from one interval to the other as the two 'knot' points are crossed [52]. The general MARS function can be represented as the following equation [51]:

$$\hat{f}(x) = a_0 + \sum_{m=1}^{M} a_m \prod_{k=1}^{K_m} [s_{km}(x(k, m) - t_{km})] \qquad (17)$$

where $a_0$ and $a_m$ are parameters, M is the number of basis functions, $K_m$ is the number of knots, $s_{km}$ takes values of either 1 or -1 and indicates the right/left sense of the associated step function, x(k, m) is the label of the independent variable, and $t_{km}$ indicates the knot location. MARS constructs a very large number of basis functions to build the optimal MARS model, but some of the selected basis functions initially over-fit the data; therefore, these basis functions are deleted in order of least contribution according to the generalized cross-validation (GCV) criterion. The decrease in GCV when a basis function is removed from the model is used to assess the importance of that basis function. This process continues until the remaining basis functions all satisfy the pre-determined requirements. The GCV can be expressed as follows [51]:

$$GCV(M) = \frac{\frac{1}{N}\sum_{i=1}^{N} [y_i - \hat{f}_M(x_i)]^2}{\left[1 - \frac{C(M)}{N}\right]^2} \qquad (18)$$

where there are N observations and C(M) is the cost-penalty measure of a model containing M basis functions. The numerator measures the lack of fit of the M basis function model $\hat{f}_M(x_i)$, and the denominator denotes the penalty for model complexity C(M). That is, the purpose of C(M) is to penalize model complexity, to avoid over-fitting, and to promote the parsimony of models. To do so, C(M) introduces a cost incurred per basis function included in the model, much like the adjusted R2 in least-squares regression. It is usually defined as C(M) = M in linear least-squares regression, and this definition is used in this paper. After creating a MARS model, the contribution to the fit of the model is used to estimate the relative importance of a variable. Because each variable can be added to the basis functions, the importance of a variable can be estimated by the fit of the model; the GCV criterion is used to evaluate the importance of the variable. To implement the selection process, MARS removes one variable at a time, keeps the other variables, refits the model and then calculates the reduction in fit. The highest-scoring, most important variable is the one whose deletion most reduces the fit of the model, and it is given a weight of 100. Less important variables receive lower scores and corresponding weights, according to the ratio of the reduction in fit of the model. More details regarding the MARS model are available in the literature [51].

Fig. 3. The flowchart of the MRSVR-CSAPSO port throughput forecasting scheme (Stage I: selecting the candidate input variables; Stage II: Min-Max normalization of the data; Stage III: determining the final input vectors (FIV) for the RSVR model by MARS; Stage IV: obtaining Cbest, vbest, σbest of the MRSVR model by the CSAPSO algorithm (PSO, Cat mapping, SA); Stage V: the MRSVR-CSAPSO port throughput forecasting scheme).




MARS is a flexible procedure with the ability to reveal optimal variable transformations and interactions, as well as the complex data structure that often hides in high-dimensional data [51], which makes it particularly suitable for problems with high input dimensions. Considering that port throughput is influenced by numerous factors (the candidate input variables) and that RSVR cannot determine the final input vectors from the candidate input variables, this study adopts MARS to select the final input vectors for RSVR and to analyze the degrees of significance of the different factors for further research on the port throughput generation mechanism.
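The GCV criterion of Eq. (18) with C(M) = M is simple to compute once the model predictions are available; the following sketch (with made-up data) shows how a model with more basis functions is penalized unless it reduces the residual error enough.

```python
import numpy as np

def gcv(y, y_hat, n_basis):
    """Generalized cross-validation criterion of Eq. (18) with C(M) = M."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    mse = np.mean((y - y_hat) ** 2)
    return mse / (1.0 - n_basis / n) ** 2

# Illustrative comparison on toy data: the 4-basis model must fit noticeably
# better than the 2-basis model to offset its larger complexity penalty.
y = np.array([0.8, 1.1, 1.9, 3.2, 4.1])
print(gcv(y, np.array([1.0, 1.0, 2.0, 3.0, 4.0]), n_basis=2))
print(gcv(y, np.array([0.9, 1.1, 1.9, 3.1, 4.0]), n_basis=4))
```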

3. The proposed port throughput forecasting scheme

MARS, RSVR and CSAPSO are integrated to establish a forecasting scheme for port throughput. The flowchart of the proposed MRSVR-CSAPSO port throughput forecasting scheme is presented in Fig. 3. As shown in Fig. 3, the proposed scheme consists of five stages.

The first stage of the proposed MRSVR-CSAPSO port throughput forecasting scheme is selecting the candidate input variables. Because the forecasting year's port throughput is strongly influenced by numerous socio-economic factors, such as GDP (x1), TIFA (x2), TIE (x3), IO (x4), FIV (x5), SIV (x6), TIV (x7), PM (x8), TRSCG (x9), FV (x10), HFV (x11) and RFV (x12) [11,68], this study selects these 12 socio-economic factors as candidate input variables. Meanwhile, considering that the previous years' port throughput also affects the forecasting year's port throughput, this paper also selects PTYPT1 (x13), PTYPT (x14), PYPT (x15), P2YMA (x16) and P3YMA (x17), five historical factors, as candidate input variables. Finally, 17 candidate input variables for port throughput forecasting are obtained.

Table 1. The data of port throughput and the corresponding socio-economic indicators of Shanghai.

Year  GDP (billion yuan)  TIFA (billion yuan)  TIE (billion dollars)  IO (billion yuan)  FIV (billion yuan)  SIV (billion yuan)
1978  272.81  27.91  30.26  514.01  11.00  211.05
1979  286.43  35.58  38.78  556.30  11.39  221.21
1980  311.89  45.43  45.06  598.75  10.10  236.10
1981  324.76  54.60  41.50  620.12  10.58  244.34
1982  337.07  71.34  38.93  634.65  13.31  249.32
1983  351.81  75.94  41.40  663.53  13.52  255.32
1984  390.85  92.30  44.00  728.12  17.26  275.37
1985  466.75  118.56  51.74  862.73  19.53  325.63
1986  490.83  146.93  52.04  952.21  19.69  336.02
1987  545.46  186.30  59.96  1073.84  21.60  364.38
1988  648.30  245.27  72.45  1304.66  27.36  433.05
1989  696.54  214.76  78.48  1524.67  29.63  466.18
1990  781.66  227.08  74.31  1642.75  34.24  505.60
1991  893.77  258.30  80.44  1947.18  34.06  550.64
1992  1114.32  357.38  97.57  2429.96  34.16  677.39
1993  1519.23  653.91  127.32  3327.04  37.82  902.38
1994  1990.86  1123.29  158.67  4255.19  47.61  1148.45
1995  2499.43  1601.79  190.25  4547.47  59.82  1419.41
1996  2957.55  1952.05  222.63  5126.22  68.72  1596.73
1997  3438.79  1977.59  247.64  5649.93  72.03  1744.02
1998  3801.09  1964.83  313.44  5763.67  73.84  1871.89
1999  4188.73  1856.72  386.04  6213.24  74.49  1984.64
2000  4771.17  1869.67  547.10  7022.98  76.68  2207.63
2001  5210.12  1994.73  608.98  7806.18  78.00  2403.18
2002  5741.03  2187.06  726.64  8730.00  79.68  2622.45
2003  6694.23  2452.11  1123.97  11708.49  81.02  3209.02
2004  8072.83  3084.66  1600.26  14595.29  83.45  3892.12
2005  9247.66  3542.55  1863.65  16876.78  90.26  4381.20
2006  10572.24  3925.09  2274.89  19631.23  93.81  4969.95
2007  12494.01  4458.61  2829.73  23108.63  101.84  5571.06
2008  14069.87  4829.45  3221.38  25968.38  111.80  6085.84
2009  15046.45  5273.33  2777.31  24888.08  113.82  6001.78
2010  17165.98  5317.67  3688.69  31038.57  114.15  7218.32
2011  19195.69  5067.09  4374.36  33834.44  124.94  7927.89
2012  20181.72  5254.38  4367.58  33186.41  127.80  7854.77
2013  21602.12  5647.79  4413.98  33899.38  129.28  8027.77

In the second stage, the original datasets are processed by the Min-Max normalization method. The original datasets employed in this paper include a variety of measurements covering a wide range of units, and the magnitude of the absolute values can vary significantly across variables. Data normalization prevents predictor variables with large values from overwhelming those with smaller values, which can improve the forecasting performance [69,70]. Thus, in this paper, the Min-Max normalization method is used to standardize the original datasets. The Min-Max normalization method converts a variable X to x in the range [-1.0, 1.0] using the following equation:

$$x = -1 + \frac{2(X - X_{min})}{X_{max} - X_{min}} \qquad (19)$$

where $X_{max}$ and $X_{min}$ are the maximum and minimum values, respectively, of variable X.

After determining the candidate input variables and normalizing the original datasets, the third stage obtains the final input vectors for RSVR. In this stage, MARS is employed to select the final input vectors for the RSVR model, and the MARS-RSVR model (MRSVR) is then obtained. The MARS model is implemented with the Matlab toolbox ARESLab, which can be downloaded from http://www.cs.rtu.lv/jekabsons/regression.html. The key step in training the MRSVR is to set appropriate parameters.

In the fourth stage, the CSAPSO algorithm is used to search for an appropriate combination of the three parameters ($C_{best}$, $v_{best}$ and $\sigma_{best}$) for the MRSVR model. For details of the parameter determination procedure, please refer to Section 2.2.3.

In the final stage, the trained MRSVR model with the appropriate parameters ($C_{best}$, $v_{best}$, $\sigma_{best}$) is applied to forecast the future value of port throughput.
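A minimal sketch of the Min-Max normalization of Eq. (19) follows; the sample values are the 2007-2013 port throughput figures from Table 2, used here purely for illustration (in the actual scheme each variable is normalized over the full 1978-2013 range).

```python
import numpy as np

def min_max_normalize(X):
    """Eq. (19): map a variable X to the range [-1, 1]."""
    X = np.asarray(X, dtype=float)
    return -1.0 + 2.0 * (X - X.min()) / (X.max() - X.min())

# Example with the 2007-2013 port throughput values (10^4 t) from Table 2.
pt = [56144, 58170, 59205, 65339, 72758, 73559, 77574]
print(min_max_normalize(pt))  # 56144 -> -1.0, 77574 -> 1.0
```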

4. Empirical study

4.1. Datasets and performance criteria

To evaluate the performance of the proposed port throughput forecasting scheme, the port throughput and the corresponding socio-economic indicator data of Shanghai, obtained from the yearbook of Shanghai and the yearbook of China port, are used in this study. The data collection period for the socio-economic indicators and the corresponding port throughput is from 1978 to 2013, giving 36 data points in total. The original data employed in this example are listed in Tables 1 and 2. The data are divided into two sets: the first 29 data points are used as the training sample, while the remaining 7 data points are employed as the testing sample for measuring the forecasting performance of the proposed model. The prediction results of the proposed MRSVR-CSAPSO port throughput forecasting scheme are compared to those of the RSVR-CSAPSO model (for which all 17 candidate input variables are used as the final input vectors), the MARS-SVR-CSAPSO model (hereinafter called MSVR-CSAPSO), which uses the ε-loss function in SVR, the MARS-RSVR-PSO model (hereinafter called MRSVR-PSO) and the MARS-RSVR-SAPSO model (with parameters determined by SAPSO [47], hereinafter called MRSVR-SAPSO). As the BP neural network is a well-known forecasting method, the MARS-BPNN (hereinafter



simply called MBPNN) [13] model is also employed in this study to forecast the port throughput as a comparison model. Besides, to fully assess the performance of the proposed port throughput forecasting scheme, the ARIMA model is also employed as a comparison model. The mean absolute deviation (MAD), root mean square error (RMSE), mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE) are employed to evaluate the prediction performance of the proposed port throughput forecasting model. These four performance criteria measure the deviation between actual and predicted values; the smaller their values, the more accurate the predicted results.

Table 2. The data of port throughput and the corresponding socio-economic indicators of Shanghai.

Year  TIV (billion yuan)  PM (million people)  TRSCG (billion yuan)  FV (million tons)  HFV (million tons)  RFV (million tons)  PT (million tons)
1978  50.76  1098.28  54.10  19645  7184  4329  7955
1979  53.83  1132.14  68.28  19841  7234  4406  8219
1980  65.69  1146.52  80.43  20037  7284  4484  8483
1981  69.84  1162.84  88.73  20878  7670  4599  9044
1982  74.44  1180.51  89.80  21719  8056  4714  9605
1983  82.97  1194.01  100.68  22560  8442  4829  10166
1984  98.22  1204.78  123.72  23401  8828  4944  10727
1985  121.59  1216.69  173.39  24243  9216  5059  11291
1986  135.12  1232.33  196.84  23964  9113  4299  11825
1987  159.48  1249.51  225.25  23685  9017  3539  12359
1988  187.89  1262.42  295.83  23406  8912  2779  12893
1989  200.73  1276.45  331.38  23127  8808  2019  13427
1990  241.82  1283.35  333.86  22848  8714  1257  13959
1991  309.07  1287.20  382.06  22785  8225  1280  14481
1992  402.77  1289.37  464.82  22722  7737  1303  15003
1993  579.03  1294.74  675.92  22659  7249  1326  15525
1994  794.80  1298.81  834.76  22596  6761  1349  16047
1995  1020.20  1301.37  1050.96  22531  6273  1376  16567
1996  1292.11  1304.43  1258.00  40928  25023  1320  16401
1997  1592.74  1305.46  1435.38  41373  25991  1252  16397
1998  1855.36  1306.58  1593.27  42090  26351  1152  16387
1999  2129.60  1313.12  1722.33  44485  27171  997  18641
2000  2486.86  1321.63  1865.28  47954  28369  1055  20440
2001  2728.94  1327.14  2016.37  49545  28869  1080  22099
2002  3038.90  1344.23  2203.89  54196  29759  1131  26384
2003  3404.19  1341.77  2404.45  58669  30678  1208  36621
2004  4097.26  1352.39  2656.91  63180  31554  1284  37897
2005  4776.20  1360.26  2979.50  68741  32684  1278  44317
2006  5508.48  1368.08  3375.20  72617  33799  1233  53748
2007  6821.11  1378.86  3873.30  78108  35634  1143  56144
2008  7872.23  1391.04  4457.23  84347  40328  985  58170
2009  8930.85  1400.70  5173.24  76967  37745  941  59205
2010  9833.51  1412.31  6070.50  81024  40890  959  65339
2011  11142.86  1419.36  6814.80  93318  42685  888  72758
2012  12199.15  1426.93  7412.30  94376  42911  825  73559
2013  13445.07  1425.14  8019.05  91535  43809  694  77574


The definitions of these criteria are as follows:

$$MAD = \frac{1}{N}\sum_{i=1}^{N} |f_i^*(x) - f_i(x)| \qquad (20)$$

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (f_i^*(x) - f_i(x))^2} \qquad (21)$$

$$MAPE = \frac{1}{N}\sum_{i=1}^{N} \left| \frac{f_i^*(x) - f_i(x)}{f_i(x)} \right| \qquad (22)$$

$$RMSPE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left( \frac{f_i^*(x) - f_i(x)}{f_i(x)} \right)^2} \qquad (23)$$

where $f_i^*(x)$ and $f_i(x)$ represent the actual and predicted values of the ith point, respectively, and N is the total number of data points. The numerical experiments were implemented in Matlab 7.1 on a personal computer with a 1.80 GHz Core(TM)2 CPU and 2.0 GB of memory under Microsoft Windows XP Professional.
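The four criteria of Eqs. (20)–(23) can be checked directly against the reported results. The sketch below follows the convention of Eq. (11) and divides the percentage errors by the actual values; feeding it the 2007–2013 actual values and the MRSVR-CSAPSO predictions from Table 4 approximately reproduces the MAD ≈ 681 and MAPE ≈ 1.02% reported in Table 5.

```python
import numpy as np

def criteria(actual, predicted):
    """MAD, RMSE, MAPE (%) and RMSPE (%) of Eqs. (20)-(23);
    percentage errors are taken relative to the actual values, as in Eq. (11)."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    err = a - p
    rel = err / a
    return {
        "MAD": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE(%)": 100.0 * np.mean(np.abs(rel)),
        "RMSPE(%)": 100.0 * np.sqrt(np.mean(rel ** 2)),
    }

# 2007-2013 actual values and MRSVR-CSAPSO predictions from Table 4 (10^4 t).
actual = [56144, 58170, 59205, 65339, 72758, 73559, 77574]
mrsvr_csapso = [55628, 58763, 58782, 66138, 73648, 72883, 78444]
print(criteria(actual, mrsvr_csapso))
```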

4.2. Selecting the final input vectors by MARS

This study employs the MARS variable screening mechanism to analyze the 17 factors of Shanghai and select the final input vectors for RSVR. In constructing the MARS model, the maximum number of basis functions is set to 21, the maximum number of interaction functions is set to 10, and the remaining parameters are determined by the aresparams() function of ARESLab. The calculated result of the MARS model is shown in Table 3. The result shows that, among the 17 candidate related factors, SIV (x6) and PYPT (x15) play crucial roles in building the MARS model; it is observed that SIV (x6) and PYPT (x15) are the predominant influencing factors on port throughput. The reason is that the port is the distribution center of raw materials, energy and products for import and export; as a result, most of the port throughput is in the form of raw materials, energy and products. The consumption of raw materials and energy, as well as production, belong to the category of the secondary industry (SIV); thus, the development level of the secondary industry largely determines the scale of port throughput. As the base of the prediction year's port throughput, PYPT (the previous year's port throughput) largely affects the level of the prediction year's port throughput. Consequently, SIV (x6) and PYPT (x15) are selected as the final input vectors for RSVR. Taking the port throughput forecast for 2008 as an example, the final input vectors are composed of x6 (SIV) and x15 (PYPT) of 2008.

Table 3. Result of the MARS model.

Selected variable | Meaning | Importance score | GCV | Basis function
x6 (SIV) | Second industry value | 100.00 | 0.774 | BF1 = max(0, x6 + 0.47456)
x15 (PYPT) | Previous year's port throughput | 70.58 | 0.223 | BF2 = max(0, x15 + 0.411); BF3 = max(0, -0.411 - x15)

MARS function: y = -0.49692 + 0.67143 * BF1 + 0.3571 * BF2 - 0.82445 * BF3
Note: Type: piecewise-linear. GCV: 0.011. Total number of basis functions: 4. Total effective number of parameters: 8.5. Execution time: 0.22 s.
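For illustration, the fitted basis functions and coefficients reported in Table 3 can be evaluated directly; the inputs are assumed to be the Min-Max normalized x6 (SIV) and x15 (PYPT) values in [-1, 1], and the sample arguments below are hypothetical.

```python
def mars_table3(x6, x15):
    """Evaluate the MARS function reported in Table 3 on Min-Max
    normalized inputs x6 (SIV) and x15 (PYPT)."""
    bf1 = max(0.0, x6 + 0.47456)
    bf2 = max(0.0, x15 + 0.411)
    bf3 = max(0.0, -0.411 - x15)
    return -0.49692 + 0.67143 * bf1 + 0.3571 * bf2 - 0.82445 * bf3

# Illustrative call with hypothetical normalized values.
print(mars_table3(x6=0.25, x15=-0.10))
```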




4.3. Forecasting results and discussions

To ensure the comparability of the competing models, this study uses the same computer for each model and selects the mean absolute percentage error (MAPE) of the regression series as the fitness function. The maximum optimization time should be kept as similar as possible, because more optimization time can improve the optimization performance. Considering the randomness of the evolutionary algorithms in the optimization process, the MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO, MRSVR-SAPSO and MRSVR-CSAPSO models are each run 10 times to forecast the port throughput, and the mean optimal solution is taken as the final result. For the ARIMA model, the original port throughput data are used directly for model building, and the SAS statistical package is adopted to build the ARIMA model. For the BPNN model, the network is trained by the trainlm() function, learngdm() is selected as the learning function of the weights, and the learning rate is set to 0.01. The ranges of the three parameters in the SVR-based models are set as $C \in [0.01, 1000]$, $v \in [0.01, 1]$ and $\sigma \in [0.1, 1000]$. The common parameters in the PSO and CSAPSO algorithms are set as pop_size = 100, gen = 500, and c1 = c2 = 2.0.

Table 4 and Fig. 4 show the actual values and the predicted values of port throughput obtained from the ARIMA, MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO, MRSVR-SAPSO and MRSVR-CSAPSO models. The four measurement criteria (MAD, RMSE, MAPE and RMSPE) of the seven models are computed and summarized in Table 5. It clearly shows that the MAD, RMSE, MAPE and RMSPE of the proposed MRSVR-CSAPSO forecasting model are 681.06, 701.29, 1.02% and 1.03%, respectively, smaller than those of the six competing models (ARIMA, MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO and MRSVR-SAPSO). The significant superiority of the proposed MRSVR-CSAPSO forecasting model can be summarized as follows:

1) The prediction performance of the ARIMA model is the worst among the comparison models. The reason is that the future year's port throughput is influenced not only by historical values but also by numerous socio-economic factors; as a result, the forecasting performance of the ARIMA model, based only on historical values, is inferior to the other competing models. Therefore, selecting both the historical values and numerous socio-economic factors as the input vector, as done in this study, is highly necessary.
2) The forecasting accuracy of the SVR-based models is superior to that of the BPNN model; this indicates that the SVR-based models overcome the deficiencies of the BPNN model, such as poor global search and easy convergence to local minima, and can handle nonlinearity and fluctuation well. Thus, the SVR-based models obtain excellent performance compared with the BPNN model, and employing an SVR-based model to build the port throughput forecasting scheme is appropriate.
3) Owing to its excellent variable selection capabilities, MARS is adopted to reduce the dimension of the candidate input variables. In the forecasting process, selecting the final input vectors (x6: SIV and x15: PYPT) from the 17 candidate input variables by MARS improves the forecasting performance of MRSVR-CSAPSO. For example, in Table 5, using MARS shifts the performance of the RSVR-CSAPSO model, with performance criteria (MAD = 1877.28, RMSE = 1917.84, MAPE = 2.84%, RMSPE = 2.88%), to the better performance of the MRSVR-CSAPSO model, with performance criteria (MAD = 681.06, RMSE = 701.29, MAPE = 1.02%, RMSPE = 1.03%).

Table 4. Comparison of prediction results (unit: 10^4 T).

Year   Actual   ARIMA    MBPNN    RSVR-CSAPSO   MSVR-CSAPSO   MRSVR-PSO   MRSVR-SAPSO   MRSVR-CSAPSO
2007   56144    59781    54183    58057         55191         57720       57088         55628
2008   58170    55257    59693    56683         59649         56944       57279         58763
2009   59205    62866    60927    57523         60334         60174       60020         58782
2010   65339    68032    64008    66637         64369         63657       66438         66138
2011   72758    66758    70217    70278         71061         74630       71644         73648
2012   73559    78324    71204    75856         74962         72009       74571         72883
2013   77574    81229    79605    75591         75271         75577       78760         78444

Table 5. Performance criteria results of seven models.

Model          Parameters (C, v, σ)   MAD       RMSE      MAPE (%)   RMSPE (%)
ARIMA          –                      3903.73   4043.47   5.89       6.03
MBPNN          –                      1923.55   1965.12   2.91       2.95
RSVR-CSAPSO    (784, 0.43, 191)       1877.28   1917.84   2.84       2.88
MSVR-CSAPSO    (513, 0.68, 367)       1419.14   1486.25   2.12       2.17
MRSVR-PSO      (25, 0.38, 912)        1553.13   1587.88   2.34       2.37
MRSVR-SAPSO    (219, 0.23, 257)       1008.97   1016.49   1.53       1.53
MRSVR-CSAPSO   (691, 0.84, 289)       681.06    701.29    1.02       1.03

[Fig. 4. Comparison result of actual and predicted values. Axes: Years (2007–2013) vs. Value (10^4 T); series: Actual, ARIMA, MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO, MRSVR-SAPSO and MRSVR-CSAPSO.]


4) In Table 5, the MAD, RMSE, MAPE and RMSPE values of the proposed MRSVR-CSAPSO model with the robust loss function are clearly smaller than those of the MSVR-CSAPSO model with the ε-loss function. This demonstrates, first, that mixed noise really exists in the historical data and socio-economic factors of port throughput and, second, that the robust loss function of the MRSVR-CSAPSO model successfully handles this mixed noise. Consequently, the proposed MRSVR-CSAPSO model obtains better performance than the MSVR-CSAPSO model in port throughput forecasting.

5) From Table 5, the SAPSO algorithm is able to shift the local solution of the MRSVR-PSO model, (C, v, σ) = (25, 0.38, 912), with performance criteria (MAD = 1553.13, RMSE = 1587.88, MAPE = 2.34%, RMSPE = 2.37%), to the better solution (C, v, σ) = (219, 0.23, 257) of the MRSVR-SAPSO model, with performance criteria (MAD = 1008.97, RMSE = 1016.49, MAPE = 1.53%, RMSPE = 1.53%). This reveals that the SAPSO hybrid algorithm is superior to the standard PSO algorithm in determining the parameters of the MRSVR model, owing to its jump characteristic during the evolution.

6) In the evolution of CSAPSO, mapping the three hyper-parameters (C, v, σ) from the solution space to the chaotic space by the Cat mapping enhances the diversity of the particle swarm, explores better particles and searches for a more appropriate solution when the SAPSO evolution stagnates. In short, introducing the Cat mapping into the SAPSO hybrid algorithm further improves its global exploration capability; a minimal illustration of such a perturbation step follows this list. For instance, in Table 5, CSAPSO shifts the solution of the MRSVR-SAPSO model, (C, v, σ) = (219, 0.23, 257), with performance criteria (MAD = 1008.97, RMSE = 1016.49, MAPE = 1.53%, RMSPE = 1.53%), to the better solution (C, v, σ) = (691, 0.84, 289) of the MRSVR-CSAPSO model, with performance criteria (MAD = 681.06, RMSE = 701.29, MAPE = 1.02%, RMSPE = 1.03%). Therefore, the CSAPSO algorithm developed in this paper is more appropriate than PSO and SAPSO for determining the parameter combination (C, v, σ) of the MRSVR model.
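The sketch below illustrates, under stated assumptions, how a Cat-map perturbation of the normalized hyper-parameter triple (C, v, σ) might be applied when the swarm stagnates. It uses a standard 2-D Arnold cat map applied pairwise over the three normalized coordinates purely as an illustrative stand-in; the exact chaotic map, bounds handling and trigger condition of the authors' CSAPSO are not reproduced here.

```python
import numpy as np

# Parameter bounds from the experimental setup: C in [0.01, 1000], v in [0.01, 1], sigma in [0.1, 1000]
LOWER = np.array([0.01, 0.01, 0.1])
UPPER = np.array([1000.0, 1.0, 1000.0])

def cat_map_perturb(params, iterations=1):
    """Chaotic perturbation of (C, v, sigma): normalize to [0, 1]^3, apply a 2-D
    Arnold cat map to successive coordinate pairs, then map back to the bounds.
    Illustrative stand-in only, not the authors' exact Cat mapping."""
    z = (np.asarray(params, dtype=float) - LOWER) / (UPPER - LOWER)
    for _ in range(iterations):
        for i, j in [(0, 1), (1, 2), (2, 0)]:
            z[i], z[j] = (z[i] + z[j]) % 1.0, (z[i] + 2.0 * z[j]) % 1.0
    return LOWER + z * (UPPER - LOWER)

# Example: perturb a stagnated particle such as the SAPSO solution (219, 0.23, 257)
print(cat_map_perturb([219.0, 0.23, 257.0]))
```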

5. Conclusion

Port throughput forecasting is an important part of port planning and feasibility studies, and plays an indispensable role in determining the direction of port development, the scale of infrastructure investment, berth location and port management strategy. An accurate port throughput forecast is the basis of correct decision-making and planning for port management; it is therefore necessary to improve the forecasting precision of port throughput. This paper proposes an MRSVR-CSAPSO port throughput forecasting scheme that combines MARS, RSVR and CSAPSO. An empirical study based on Shanghai statistical data is used to evaluate the forecasting accuracy of the proposed scheme. The prediction results of the proposed MRSVR-CSAPSO scheme are compared with those of the ARIMA, MBPNN, RSVR-CSAPSO, MSVR-CSAPSO, MRSVR-PSO and MRSVR-SAPSO models. The empirical study indicates that the proposed MRSVR-CSAPSO scheme obtains better prediction results and outperforms the six comparison models. In short, the established scheme is a valid approach for port throughput prediction.

In future studies, more socio-economic factors, such as interactions between ports and economic policy, should be considered when determining the candidate input variables. More appropriate tools for identifying the final input vectors of the SVR should be explored. Other approaches based on advanced optimization algorithms and novel hybrid methods should be further studied to search for more appropriate parameter combinations for the SVR model and to obtain more accurate port throughput forecasts.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (HEUCF140108) and the Science and Technology Project of Western Transportation Construction of the Ministry of Communications (2014364554050).


Jing Geng was born in 1968. She received her Master's degree in engineering from Dalian University of Technology in 2013. She is currently an associate professor in the College of Shipbuilding Engineering of Harbin Engineering University. Her research interests are port planning simulation, geographic information systems and hybrid evolutionary algorithms.

Ming-Wei Li was born in 1984. He received his Doctorate degree in Engineering from Dalian University of Technology in 2013. Since September 2013, he has been with the College of Shipbuilding Engineering of Harbin Engineering University, where he is currently a lecturer. His research interests mainly include port policy and digital applications of forecasting technology, computational intelligence and support vector forecasting.

Zhi-Hui Dong was born in 1985. She received her Master's degree in Naval Architecture and Ocean Engineering Design and Manufacturing from Harbin Engineering University in 2010. She is working toward a Ph.D. degree at Harbin Engineering University. Her research interests are intelligent algorithms, modeling and simulation optimization, and logistics optimization.

Yu-Sheng Liao is a professor with the Department of Healthcare Administration, Oriental Institute of Technology, Taiwan. He received his PhD in Business Administration from National Taiwan University in 1988. His research interests are hybrid evolutionary algorithm, grey theory, and computational intelligence.
