Neurocomputing xxx (xxxx) xxx
Contents lists available at ScienceDirect
Neurocomputing journal homepage: www.elsevier.com/locate/neucom
A freight inspection volume forecasting approach using an aggregation/disaggregation procedure, machine learning and ensemble models

Juan Jesús Ruiz-Aguilar a,∗, Daniel Urda b, José Antonio Moscoso-López a, Javier González-Enrique b, Ignacio J. Turias b

a Department of Industrial and Civil Engineering, Polytechnic School of Engineering, University of Cadiz, Algeciras, Spain
b Department of Computer Science Engineering, Polytechnic School of Engineering, University of Cadiz, Algeciras, Spain

Article info
Article history: Received 2 September 2018; Revised 14 March 2019; Accepted 19 June 2019; Available online xxx
Keywords: Machine learning; Ensembles; Artificial neural networks; Bayesian regularization; Aggregation/disaggregation; Time series inspection

Abstract
Machine learning methods are a powerful tool to detect workload peaks and congestion in the goods inspection facilities of seaports. In this paper, a time series of freight inspection volume at the Border Inspection Posts in the Port of Algeciras Bay was used to construct four datasets based on different sizes of the autoregressive window, and several machine learning and ensemble models were applied to aid decision-making in the inspection process. Moreover, an aggregation/disaggregation procedure to make predictions was proposed and compared to two different prediction horizons: daily (t + 1) and weekly (t + 7) predictions. In general, results showed that neural networks performed better than any other model independently of the size of the autoregressive window. The result obtained by a weighted average ensemble model was better than that of any other model, and the difference was statistically significant. Moreover, the proposed aggregation/disaggregation procedure provided better performance and results that were more robust in terms of variance than daily or weekly predictions. © 2019 Elsevier B.V. All rights reserved.

1. Introduction
Forecasting certain traffic volumes is crucial to allow better planning and operation of seaport entities. For instance, forecasts of container volumes are essential inputs to many decision activities in various functional areas such as building new container terminals, operation planning, marketing strategies, finance or accounting [1]. The key factors of port competitiveness mainly depend on the port's infrastructure, geographic location and costs. Recently, these factors have broadened to include the quality of services needed to achieve a certain level of port service. Border Inspection Posts (BIPs) constitute a critical subsystem where border, veterinary and other controls are undertaken. The inspection process causes important delays in the flow of goods through a transport chain, thus increasing costs. A poorly operated BIP may drive companies to ports with a faster inspection process and



Corresponding author. E-mail addresses: [email protected] (J. Ruiz-Aguilar), [email protected] (D. Urda), [email protected] (J.A. Moscoso-López), [email protected] (J. González-Enrique), [email protected] (I.J. Turias).

with a more reliable supply chain. An optimal and seamless service at BIPs is thereby a key quality indicator for assessing the level of port service. This process is becoming more and more important since global trade and security requirements in the movement of both goods and passengers have increased in recent years. Thus, a short-term prediction of daily inspections to forecast the inspection volume at BIPs may be a powerful tool to avoid congestion or delays. In this sense, machine learning (ML) algorithms arise as a more efficient tool for handling extensive amounts of data, without any constraint on the degree of complexity, compared to conventional statistical techniques. In addition, ML techniques are able to capture the underlying mechanism that governs the data even if it is not evident [2]. To this end, a wide range of methodologies has been proposed in the literature, including k-Nearest Neighbor [3,4], Random Forest [5,6], Gradient Boosting methods [7], Support Vector Machines [8,9] or Artificial Neural Networks [10,11], among many others. Moreover, ensemble methods allow combining different ML models in order to improve prediction performance or classification accuracy. Different methods have been proposed to create the ensembles [12], such as Bagging [13], Random Forest [14] (a variation of Bagging), Boosting [15], AdaBoost [16], majority voting [17] (with weighted voting and averaging as its most usual alternatives) or the Stacking technique [18], among others.

https://doi.org/10.1016/j.neucom.2019.06.109 0925-2312/© 2019 Elsevier B.V. All rights reserved.

Please cite this article as: J. Ruiz-Aguilar, D. Urda and J.A. Moscoso-López et al., A freight inspection volume forecasting approach using an aggregation/disaggregation procedure, machine learning and ensemble models, Neurocomputing, https://doi.org/10.1016/j.neucom.2019. 06.109


Specifically in the maritime transport sector, Artificial Neural Networks (ANNs) have been successfully used in a wide range of tasks. Al-Deek [19] proposed ANNs to predict the level of cargo truck traffic moving at seaports. Lam et al. [20] developed an ANN model to predict the throughput of several types of port cargo in the Port of Hong Kong. Gosasang et al. [10] compared ANNs with conventional methods to predict container volumes in the Port of Bangkok. Additionally, the prediction of several traffics in the Port of Algeciras Bay (Spain) was undertaken in different works: first, Moscoso-López et al. [21] used ANNs to predict Ro-Ro traffic and compared them to other well-known forecasting techniques and, second, Ruiz-Aguilar et al. [22] predicted freight flow congestion using ANNs for classification. Furthermore, Support Vector Regression (SVR) has been applied to many transport forecasting tasks with promising results, showing a significant reduction in prediction errors. Both ANNs and SVR are constantly compared in the research literature, e.g. in forecasting container throughput at ports [23], forecasting the annual average daily traffic [24], forecasting traffic speed [9] or predicting intermodal freight in ports [25]. With respect to time series data analysis, Kourentzes et al. [26] used an ANN ensemble and compared it to different ensemble approaches, showing that median- or mode-based ensembles achieved better results than average-based ones. Recently, Moscoso-López et al. [27] proposed an SVR ensemble forecasting approach for Ro-Ro freight in the Port of Algeciras which showed that a combination of SVR methods together with a smoothing pretreatment applied to the original time series produced a very important improvement in the forecasting results. However, well-known ML methods other than ANNs or SVR, and the application of ensemble techniques, are often overlooked.

Therefore, in this work the authors propose to test a wide variety of machine learning methods and ensemble techniques to analyze a time series of freight inspection volume at the BIPs in the Port of Algeciras Bay, thus aiming to provide an appropriate estimation of future traffic flows which would allow effective planning, organization of logistical tasks and support for decision-making [1]. To this end, the present work extends a previous study on the prediction of the inspection process at BIPs [28] by introducing an aggregation/disaggregation procedure to make predictions, in addition to the two other commonly used prediction horizons (daily or weekly). Moreover, this work presents an extensive analysis of the performance of machine learning methods and ensembles considering different sizes of the autoregressive window. The rest of the paper is organized as follows. Section 2 describes the datasets and machine learning models used in this study. Next, in Section 2.3 the experimental design and validation strategy are described. Section 3 contains the results of the analysis and finally Section 4 presents the conclusions of this study.

2. Materials and methods

This section describes the dataset, methods and experimental design used in this study with the aim of developing a predictive tool to aid decision-making with respect to the inspection process in the Port of Algeciras Bay, based on the current and past number of inspections performed in a time window.

2.1. Dataset

The dataset provided by the authorities in the Port of Algeciras Bay consists of 1096 records collected between January 2010 and December 2012, where each record describes the number of inspections performed at the Algeciras BIP on a given day within this period. The first image in Fig. 1 shows the corresponding time series y(t), t = 1, …, 1096, which is analyzed in this study, with an average of 66.19 daily inspections. There were no missing values in the daily freight inspections time series provided, thus no imputation methods were applied.

Fig. 1. Visualization of the number of daily inspections at the BIPs between January 2010 and December 2012 (first figure), partial autocorrelation plot for different lags (1 to 196 days) which suggests a possible weekly pattern in the time series (second figure), absolute autocorrelation plot sorted by the absolute autocorrelation (third figure), where higher values appear further to the left, and mutual information plot (fourth figure) for the same lags, both confirming the weekly pattern.

Table 1. Description of the datasets used within the analysis.

Autoregressive window (K)   #Samples (N)   t+1 features (Pd)   t+7 features (Pw)
7 days                      1086           7                   1
14 days                     1079           14                  2
21 days                     1072           21                  3
28 days                     1065           28                  4

Usually, partial autocorrelation plots [29] are used in time series data analysis to check randomness in a dataset and, particularly, to identify the order of an autoregressive model. Randomness is assessed by computing autocorrelations for data values at varying time lags. In this sense, the second image in Fig. 1 shows the partial autocorrelation plot for different lags (1–196 days), which suggests the presence of a possible weekly pattern in the time series (partial autocorrelation of 0.726 for lag = 7). Moreover, the third and fourth images in Fig. 1 confirm the existence of a weekly pattern, presenting the highest absolute correlation and mutual information at lag 7, followed by other lags multiple of 7. This exploratory analysis motivates the consideration of K = 7, 14, 21, 28 days autoregressive windows to model the number of inspections on a given day as a function of the current and past values included within these time windows. The input covariates resulting from these autoregressive data arrangements were normalized to zero mean and unit variance to avoid side effects linked to the possibly different scales of these covariates. Furthermore, in this study two different prediction horizons were analyzed, considering that predictions are made at time t: one-day (t + 1) and seven-days (t + 7) ahead. On the one hand, one-day ahead predictions are made based on K previous consecutive values. On the other hand, seven-days ahead predictions are made based on K/7 previous values that must correspond to the same day of the week for which the prediction is being made, i.e. only past Mondays are used to predict the number of inspections on the following Monday. Eqs. 1 and 2 depict how both outcomes are modeled. Therefore, Table 1

Please cite this article as: J. Ruiz-Aguilar, D. Urda and J.A. Moscoso-López et al., A freight inspection volume forecasting approach using an aggregation/disaggregation procedure, machine learning and ensemble models, Neurocomputing, https://doi.org/10.1016/j.neucom.2019. 06.109

JID: NEUCOM

ARTICLE IN PRESS

[m5G;December 17, 2019;20:58]

J. Ruiz-Aguilar, D. Urda and J.A. Moscoso-López et al. / Neurocomputing xxx (xxxx) xxx

summarizes the main data characteristics for the different scenarios considered in terms of number of samples available (N) and number of input features to the models (Pd or Pw ), with respect to the size of the autoregressive window (K).

y(t + 1) = F{ y(t), y(t − 1), …, y(t − (K − 1)) }    (1)

y(t + 7) = F{ y(t) }                                   if K = 7
y(t + 7) = F{ y(t), y(t − 6) }                         if K = 14
y(t + 7) = F{ y(t), y(t − 6), y(t − 13) }              if K = 21
y(t + 7) = F{ y(t), y(t − 6), y(t − 13), y(t − 20) }   if K = 28    (2)

2.2. Methods
Several well-known machine learning models and ensemble techniques were used to perform the time series data analysis in this study. Next, a brief description of each individual learner and ensemble technique is provided.

2.2.1. Individual learners
The R package mlr [30] was used to test the following machine learning models in order to predict the number of inspections at the BIPs on a given day:
• Linear regression (linreg): a parametric model which models the dependent variable as a linear combination of the independent variables [31]. The regression estimates can be used to explain the relationship between one dependent variable and one or more independent variables.
• k-Nearest Neighbours (knn): a distance-based method which calculates predictions for a new sample based on the values of its k nearest neighbours, where k ∈ [1, N], according to a similarity measure (Euclidean distance, Mahalanobis distance, or any other) [32].
• Support vector machines with radial basis function kernel (svmrbf): a kernel-based method that uses a kernel function (radial basis, linear, polynomial, or any other) to map the original input space into a new space where predictions can be made more accurately [33].
• Gaussian process with radial basis function kernel (gprbf): a kernel-based method that uses a kernel function (radial basis, linear, polynomial, or any other) to ensure smoother predictions, i.e. similar inputs should get similar outputs. A Gaussian process finds a distribution over the possible functions that are consistent with the observed data [34].
• Random forests (rf): a tree-based bagging ensemble in which multiple decision trees are fitted to different views of the observed data. Random forest predictions are made by averaging the individual predictions provided by the multiple decision trees [14].
• Bayesian regularized neural networks (brnn): a fully-connected artificial neural network with one single hidden layer composed of a certain number of units (neurons), which adds a regularization term to the objective function in order to deal with overfitting issues [35].
• Gradient boosting with linear components (glmboost): a boosting method which iteratively fits linear base learners to the residuals in order to improve prediction accuracy [36].
• Gradient boosting with smooth components (gamboost): a boosting method similar to the previous one but considering different types of base learners (linear, trees, or any other) [37].
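The study ran these learners through the R package mlr; the following Python sketch shows an analogous setup for a few of them with scikit-learn and 10-fold cross-validation. The model list, hyper-parameters and synthetic data are illustrative assumptions only, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))                       # 7 autoregressive inputs
y = X @ rng.normal(size=7) + rng.normal(scale=0.1, size=200)

learners = {
    "linreg": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "svmrbf": SVR(kernel="rbf"),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
}
# Inputs are standardized inside each fold, as in Section 2.1
scores = {name: cross_val_score(make_pipeline(StandardScaler(), model), X, y,
                                cv=10, scoring="neg_mean_squared_error").mean()
          for name, model in learners.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 4))
```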


2.2.2. Ensembles
Stacked ensembling [38] is a machine learning technique that aims to provide an overall prediction based on a combination of individual models' predictions. The use of this kind of ensemble is recommended when heterogeneous models are used [39], which is the case of the eight individual learners listed in Section 2.2.1 that form the ensemble. Each individual learner will address the same learning task (or training data) from a different perspective given their heterogeneity (see Fig. 2 for an overview of a stacked ensemble configuration of j individual learners). Moreover, ensembles need a combining approach which merges the predictions of these individual learners into a final overall prediction. Next, a brief description is provided of the simplest combining approaches considered in this study, which were introduced in [40]:
• Average (avg): it computes the final prediction P_final as the average of the individual learners' predictions, P_final = (1/j) Σ_{i=1..j} P_i, assuming the use of j individual learners.
• Median (median): in contrast to the previous one, this combination approach is not sensitive to possible outliers, i.e. those individual learners with predictions in the extremes of the distribution will not be taken into account: P_final = median(P_1, P_2, …, P_j).
• Weighted average (wavg): it makes a minor modification to the first combination approach (avg) in such a way that each individual learner i contributes differently to the final prediction according to its average performance, i.e. the higher the average performance of an individual learner is, the bigger its corresponding weight coefficient will be. In this sense, final predictions are calculated as P_final = Σ_{i=1..j} ω_i P_i, where ω_i ∈ [0, 1] for each i ∈ [1, …, j] and Σ_{i=1..j} ω_i = 1.
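The three combination rules can be sketched as follows. The paper does not fully specify the wavg weighting scheme, so inverse-MSE weights normalized to sum to 1 are assumed here, consistent with Section 3's remark that learners with lower mean squared error receive higher weights; all names are ours.

```python
import numpy as np

def combine(preds, rule="avg", mse=None):
    """preds: (j, n) array of j learners' predictions for n samples."""
    preds = np.asarray(preds, dtype=float)
    if rule == "avg":
        return preds.mean(axis=0)
    if rule == "median":
        # Robust to learners whose predictions sit in the extremes
        return np.median(preds, axis=0)
    if rule == "wavg":
        w = 1.0 / np.asarray(mse, dtype=float)   # lower mse -> larger weight
        w /= w.sum()                              # weights in [0, 1], sum to 1
        return w @ preds
    raise ValueError(rule)

preds = [[10.0, 20.0], [12.0, 22.0], [50.0, 60.0]]   # third learner is an outlier
print(combine(preds, "avg"))
print(combine(preds, "median"))
print(combine(preds, "wavg", mse=[1.0, 1.0, 100.0]))
```

Note how the median and the inverse-MSE weighted average both suppress the influence of the outlying third learner, while the plain average does not.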

2.3. Experimental design
In order to develop a predictive tool that aids decision-making in the inspection process in the Port of Algeciras Bay, different sizes of the autoregressive window (K) and prediction horizons (t + 1 and t + 7) were considered. Additionally, an aggregation/disaggregation procedure and the validation strategy used in this study are described next.

2.3.1. Aggregation/disaggregation procedure
This procedure is proposed because a weekly time series is more stable than a daily time series with respect to prediction tasks [41]. It consists of three steps: first, an aggregation step compresses the daily measures of each week into a new value W(t) by summing up the daily inspections in each logical week (Monday to Sunday); second, a machine learning model is trained over the new weekly time series, allowing a future weekly value Ŵ(t + 1) to be predicted; and finally, a disaggregation step decomposes the predicted future weekly value Ŵ(t + 1) into daily measures to obtain daily predictions.
The disaggregation step relies on assigning a certain weight to each day of the week based on the predicted future weekly value Ŵ(t + 1). For this purpose, a weight matrix D = {λij} consisting of i rows (i ∈ [1, NumWeeks]) depicting each week and j columns (j ∈ [1, 7]) representing the days of the week is constructed. In this sense, λij is a number between 0 and 1 representing the proportion of inspections on each day j with respect to the weekly aggregated value of week i. Finally, for a given predicted future weekly value Ŵ(t + 1), each day of the week j will be assigned a weight as a result of calculating the median


Fig. 2. Schema of a stacked ensemble model configuration which combines the predictions of j heterogeneous individual learners: each learner is fitted to training data (lighter green) and the optimized models (darker green) are used to make predictions on unseen test data.

of the λkj over the past weeks, where k ∈ [1, t]. Thus, the disaggregation step provides future daily predictions by computing a vector product of the predicted value Ŵ(t + 1) with these weight coefficients. Fig. 3 shows the proposed aggregation/disaggregation procedure. In the top figure, a schema of the procedure is presented, where y indicates the values of the original time series used for the aggregation step, W are the aggregated weekly values, n is the number of W values in the past (which depends on the size of the autoregressive window), Ŵ is the predicted weekly value, D is the disaggregation matrix and the ŷ values are the daily freight predictions for the following week obtained by the disaggregation step. In the bottom figure, an example of the original and the aggregated time series as well as their corresponding predictions is shown.

2.3.2. Validation strategy
The time series was transformed into different autoregressive datasets as explained in Eqs. 1 and 2. Therefore, each prediction (i.e. ŷ(t + 1)) can be seen as a function of the time series at different lags in the past: y(t), y(t − 1), …, y(t − (K − 1)). Hence, each point ŷ(t + 1) is estimated using the information contained in this autoregressive arrangement as a multiple regression problem using the different methods explained in Section 2.2. The analysis was carried out performing 10-fold cross-validation [42], thus partitioning the entire datasets into 10 folds of equal size in order to esti-

Fig. 3. Schema of the weekly aggregation/disaggregation procedure (top figure) and an example of the observed original and aggregated time series and their corresponding predictions (bottom figure).
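A minimal sketch of the three steps (aggregate, predict, disaggregate): the weekly forecaster is replaced by a naive mean as a stand-in for the trained ML model, and the function names and example pattern are ours.

```python
import numpy as np

def aggregate(daily):
    """Compress a day-level series (length a multiple of 7) into weekly sums W."""
    weeks = np.asarray(daily, dtype=float).reshape(-1, 7)   # Monday-to-Sunday rows
    return weeks.sum(axis=1), weeks

def disaggregate(W_hat, weeks):
    """Split a predicted weekly total into 7 daily values via median weights."""
    D = weeks / weeks.sum(axis=1, keepdims=True)   # weight matrix, rows sum to 1
    w = np.median(D, axis=0)                        # median lambda per weekday
    w /= w.sum()                                    # renormalize the medians
    return W_hat * w

daily = np.tile([80, 90, 85, 70, 60, 20, 15], 8).astype(float)   # 8 weeks of data
W, weeks = aggregate(daily)
W_hat = W.mean()            # stand-in for the ML forecast of the next weekly value
y_hat = disaggregate(W_hat, weeks)
print(np.round(y_hat, 1))
```

Because the example weeks share one pattern, the disaggregated forecast reproduces it exactly; on real data the median weights smooth over week-to-week variation.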


Table 2. Average performance results for the individual learners after 20 repetitions of 10-fold cross-validation, considering an autoregressive window of K = 7 days. Models were tested in scenarios where predictions are made at t+1, at t+7 and using the aggregation/disaggregation procedure. The top-3 settings in terms of Pearson's correlation coefficient are highlighted in bold dark-green colour.

Autoregressive Window (K = 7)

Model      Predictions   σ        d        mse        mae
brnn       t+1           0.772    0.8644   439.6306   15.3639
brnn       t+7           0.7384   0.8363   494.8991   16.2476
brnn       aggregated    0.8061   0.8864   381.4856   13.949
gamboost   t+1           0.754    0.8491   467.2138   15.8589
gamboost   t+7           0.7389   0.8365   493.9152   16.2381
gamboost   aggregated    0.8059   0.886    381.5885   13.9491
glmboost   t+1           0.7329   0.8314   501.6595   16.6103
glmboost   t+7           0.726    0.8264   513.5133   16.9118
glmboost   aggregated    0.8032   0.8849   387.3163   14.1228
gprbf      t+1           0.7866   0.867    414.0375   14.8772
gprbf      t+7           0.7384   0.8347   494.9393   16.2794
gprbf      aggregated    0.8047   0.8846   383.89     14.0087
knn        t+1           0.796    0.8718   399.5406   14.3887
knn        t+7           0.7281   0.8312   512.1935   16.4815
knn        aggregated    0.8006   0.8823   391.7533   14.2118
linreg     t+1           0.7333   0.8328   501.0472   16.5767
linreg     t+7           0.726    0.8265   513.5215   16.9075
linreg     aggregated    0.8034   0.885    386.7124   14.1107
rf         t+1           0.8004   0.8723   391.6501   14.3625
rf         t+7           0.7243   0.8304   518.7614   16.6832
rf         aggregated    0.7974   0.8808   399.1119   14.3291
svmrbf     t+1           0.7829   0.8713   421.2023   14.8848
svmrbf     t+7           0.7367   0.8414   502.4547   16.067
svmrbf     aggregated    0.8064   0.8859   380.8164   13.9266
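For reference, the four performance measures reported in these tables (Pearson's correlation σ, index of agreement d, mse and mae; Eqs. 3–6 below) can be implemented directly. The index of agreement follows the form printed in Eq. (4); the function names are ours.

```python
import numpy as np

def pearson(y, yhat):
    """Eq. (3): Pearson's correlation coefficient between observed and predicted."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    num = np.sum((y - y.mean()) * (yhat - yhat.mean()))
    den = (np.sqrt(np.sum((y - y.mean()) ** 2))
           * np.sqrt(np.sum((yhat - yhat.mean()) ** 2)))
    return num / den

def agreement(y, yhat):
    """Eq. (4): index of agreement d, in (0, 1], 1 meaning perfect agreement."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 1 - (np.sum((yhat - y) ** 2)
                / np.sum((np.abs(yhat - yhat.mean()) + np.abs(y - y.mean())) ** 2))

def mse(y, yhat):
    """Eq. (5): mean squared error."""
    return float(np.mean((np.asarray(yhat, float) - np.asarray(y, float)) ** 2))

def mae(y, yhat):
    """Eq. (6): mean absolute error."""
    return float(np.mean(np.abs(np.asarray(yhat, float) - np.asarray(y, float))))

y = [1.0, 2.0, 3.0, 4.0]
print(pearson(y, y), agreement(y, y), mse(y, y), mae(y, y))
```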

mate the performance of each model. In this sense, models were fitted on 9 folds (train set) and tested on the unseen fold left out (test set) within an iterative procedure that rotates the train and test folds used. Although this work deals with time series analysis, [43] showed that common cross-validation procedures can be used for time series prediction evaluation when purely autoregressive models are used. Furthermore, this procedure was repeated 20 times in order to guarantee the randomness of the partitioning process. Several performance measures were used to test the goodness of each model. Eqs. 3–6 show how the Pearson's correlation coefficient (σ), index of agreement (d), mean squared error (mse) and mean absolute error (mae) are calculated given the observed and predicted outcomes. Higher values of σ and d indicate better performance, while lower values of mse and mae reflect more accurate predictions.

σ(y, ŷ) = Σ_{i=1..N} (y_i − ȳ)(ŷ_i − ŷ̄) / [ sqrt(Σ_{i=1..N} (y_i − ȳ)²) · sqrt(Σ_{i=1..N} (ŷ_i − ŷ̄)²) ]   (3)

d(y, ŷ) = 1 − Σ_{i=1..N} (ŷ_i − y_i)² / Σ_{i=1..N} (|ŷ_i − ŷ̄| + |y_i − ȳ|)²   (4)

mse(y, ŷ) = (1/N) Σ_{i=1..N} (ŷ_i − y_i)²   (5)

mae(y, ŷ) = (1/N) Σ_{i=1..N} |ŷ_i − y_i|   (6)

Table 3. Average performance results for the individual learners after 20 repetitions of 10-fold cross-validation, considering an autoregressive window of K = 14 days. Models were tested in scenarios where predictions are made at t+1, at t+7 and using the aggregation/disaggregation procedure. The top-3 settings in terms of Pearson's correlation coefficient are highlighted in bold dark-green colour.

Autoregressive Window (K = 14)

Model      Predictions   σ        d        mse        mae
brnn       t+1           0.7937   0.8775   402.0257   14.5559
brnn       t+7           0.7901   0.8734   408.9086   14.6185
brnn       aggregated    0.812    0.8899   370.2625   13.791
gamboost   t+1           0.7857   0.8714   414.2814   14.8235
gamboost   t+7           0.7766   0.8638   430.7514   15.1028
gamboost   aggregated    0.8096   0.8885   375.1528   13.9
glmboost   t+1           0.7731   0.8596   435.2659   15.3118
glmboost   t+7           0.7713   0.86     439.061    15.3625
glmboost   aggregated    0.8093   0.8885   375.5709   13.9392
gprbf      t+1           0.804    0.8793   384.3438   14.2548
gprbf      t+7           0.7911   0.8723   407.5245   14.5562
gprbf      aggregated    0.8111   0.8885   371.7847   13.8305
knn        t+1           0.8209   0.8871   360.7984   13.7385
knn        t+7           0.7865   0.8692   416.5203   14.7708
knn        aggregated    0.8031   0.8842   386.6472   14.1526
linreg     t+1           0.771    0.8611   438.4682   15.3125
linreg     t+7           0.7713   0.86     439.0511   15.3604
linreg     aggregated    0.8093   0.8886   375.5629   13.939
rf         t+1           0.8189   0.8857   360.5648   13.6921
rf         t+7           0.787    0.87     414.7612   14.7832
rf         aggregated    0.8079   0.8875   378.2356   13.989
svmrbf     t+1           0.8042   0.8851   383.4633   14.0733
svmrbf     t+7           0.7915   0.8779   408.5709   14.4395
svmrbf     aggregated    0.8116   0.8897   371.395    13.7949

Regarding the tuning of the models' hyper-parameters, the R package mlrMBO [44] was used to perform a Bayesian optimization within the train set. This package implements Bayesian optimization of black-box functions, which allows one to find faster an

optimal hyper-parameters setting in contrast to traditional hyper-parameter search strategies such as grid search (highly time-consuming when more than 3 hyper-parameters are tuned) or random search (not efficient enough, since similar or nonsensical hyper-parameter settings might be tested). In order to contrast the results of the proposed approach and to assure their reproducibility, the authors have made a basic implementation of the code available in a public GitHub repository (https://bit.ly/2u4zuLt).

3. Results and discussion

This section contains the results obtained in this study on the development of a predictive tool to aid decision-making regarding the inspection process in the Port of Algeciras Bay. Tables 2–5 show the average performance of the different individual learners tested in this study for different sizes of the autoregressive window (K = 7, 14, 21, 28, respectively). Moreover, each table presents the results for the two prediction horizons considered (t + 1 and t + 7) and the results obtained by using the aggregation/disaggregation procedure described in Section 2.3.1. In terms of predictions, the aggregation/disaggregation procedure seems to provide much better performance when the size of the autoregressive window is small (K = 7), with σ ≈ 0.806 and mse ≈ 381. However, the picture changes when larger autoregressive windows are used (K = 14, 21, 28), where some individual models may provide similar or slightly better results when daily predictions (t + 1) are made. In this sense, it is possible to achieve results of σ ≈ 0.821 and mse ≈ 355 when K = 28 days in the past are used to predict the number of inspections of the next day. Another important finding of the analysis is linked to the size of the autoregressive window. It turns out that the individual learners' performance seems to improve whenever higher values of K are used, i.e. more days in the past are used to make



J. Ruiz-Aguilar, D. Urda and J.A. Moscoso-López et al. / Neurocomputing xxx (xxxx) xxx Table 4 Average performance results for the individual learners after 20 repetitions of 10-fold cross-validation and considering an autoregressive window of K = 21 days. Models’ were tested in scenarios where predictions are made in t+1, t+7 and using the aggregation/disaggregation procedure. The top-3 settings in terms of Pearson’s correlation coefficient are highlighted in bold darkgreen colour. Autoregressive Window (K = 21) Model brnn

gamboost

glmboost

gprbf

knn

linreg

rf

svmrbf

Table 5 Average performance results for the individual learners after 20 repetitions of 10-fold cross-validation and considering an autoregressive window of K = 28 days. Models’ were tested in scenarios where predictions are made in t+1, t+7 and using the aggregation/disaggregation procedure. The top-3 settings in terms of Pearson’s correlation coefficient are highlighted in bold darkgreen colour. Autoregressive Window (K = 28)

Predictions

σ

d

mse

mae

t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated t+1 t+7 aggregated

0.8019 0.7954 0.816 0.7947 0.7885 0.813 0.7879 0.7869 0.8126 0.8077 0.7973 0.8152 0.8247 0.7947 0.8087 0.7844 0.7869 0.8126 0.8227 0.7994 0.8104 0.8048 0.7988 0.8154

0.8829 0.8778 0.8927 0.8767 0.872 0.8907 0.8696 0.8711 0.8906 0.8825 0.8768 0.8911 0.8901 0.8742 0.887 0.871 0.8711 0.8909 0.8885 0.8779 0.8887 0.8847 0.8825 0.8923

384.8653 396.9969 360.5075 395.7638 407.7962 366.1331 407.8232 410.3561 366.4689 374.8514 393.8418 361.8665 350.4394 400.0238 374.5905 413.2337 410.3642 366.7073 350.951 390.1099 371.6315 380.2808 392.3813 362.1649

14.4464 14.4284 13.6981 14.5423 14.7495 13.8227 14.7989 14.8 13.8388 14.1551 14.3174 13.7411 13.7033 14.4998 14.0059 14.8685 14.7971 13.8456 13.5386 14.4012 14.0097 14.0877 14.189 13.7127

future predictions. In this sense, it can be observed that, in the best case scenario, the performance results in terms of (σ , mse) are (0.806, 380.8), (0.819, 360.6), (0.825, 350.4), (0.821, 355) for K = 7, 14, 21, 28 respectively. These results clearly show that incrementing the size of the autoregressive window is a priori beneficial up to a certain point (K = 21), i.e. the number of inspections on a given day is somehow related to the number of inspections in the past 21 days. Nevertheless, going further in time such as K = 28 saturates the results and does not provide any increase in terms of performance, meaning that the consideration of more days apart from to the closest 21 ones may be adding noise that will make individual learners’ predictions harder. With respect to the individual learners tested in this study, it can be highlighted that a neural network based model (brnn) appears always in the top-3 models with better performance results independently of the size of the autoregressive window. Another two individual learners such as knn and rf seem to be consistent too as they are also in the top-3 models with better performance results except when the size of autoregressive window is small (K = 7), where gamboost and svmrbf provide slightly better results. Furthermore, some ensemble models were also considered in this study in order to try to boost predictions. In this sense, for a given time t the ensemble models will take individual learners’ prediction at time t and will combine them as described in Section 2.2.2. Table 6 shows the average performance of the three ensemble models tested for different sizes of the autoregressive window. In general, it turns out that the ensemble which makes a weighted average of individual learners assigning a higher weight to learners with lower mean squared error performs always better than the other two ensemble models, independently of the size of the autoregressive window. 
Moreover, it is important to highlight that this ensemble model achieves the best performance result of all the settings tested in this study when the size of

Model      Predictions   σ        d        mse        mae
brnn       t+1           0.8018   0.8829   384.9156   14.2657
brnn       t+7           0.7964   0.8787   396.9813   14.3714
brnn       aggregated    0.8152   0.8918   363.1143   13.7486
gamboost   t+1           0.7969   0.8776   392.7942   14.3654
gamboost   t+7           0.7936   0.8751   400.2140   14.5251
gamboost   aggregated    0.8132   0.8906   366.7332   13.8030
glmboost   t+1           0.7932   0.8741   399.4588   14.5059
glmboost   t+7           0.7929   0.8750   401.2977   14.5528
glmboost   aggregated    0.8136   0.8908   366.3419   13.8036
gprbf      t+1           0.8084   0.8829   374.9365   14.0332
gprbf      t+7           0.8000   0.8787   390.3091   14.1909
gprbf      aggregated    0.8152   0.8907   363.3332   13.7647
knn        t+1           0.8213   0.8867   358.7233   13.9101
knn        t+7           0.7947   0.8730   401.9650   14.5839
knn        aggregated    0.8073   0.8850   378.7140   14.1465
linreg     t+1           0.7906   0.8753   403.5055   14.5706
linreg     t+7           0.7929   0.8750   401.2955   14.5502
linreg     aggregated    0.8135   0.8910   366.5673   13.8009
rf         t+1           0.8210   0.8871   354.9930   13.6116
rf         t+7           0.8008   0.8783   388.7101   14.2991
rf         aggregated    0.8098   0.8879   373.9779   14.0365
svmrbf     t+1           0.8063   0.8870   378.0227   13.9199
svmrbf     t+7           0.8000   0.8815   391.6581   14.1991
svmrbf     aggregated    0.8152   0.8921   363.5332   13.7258

Table 6
Average performance results for the ensembles used in the analysis after 20 repetitions of 10-fold cross-validation. These results correspond to the measured performance obtained by combining the predictions made by the different individual learners and settings tested in this study at time t. The best result in terms of Pearson's correlation coefficient is marked with an asterisk.

K         Ensemble   σ         d        mse        mae
7 days    avg        0.8019    0.8728   391.1764   14.3122
7 days    median     0.7968    0.8709   398.9041   14.3339
7 days    wavg       0.8062    0.8862   381.0000   13.9336
14 days   avg        0.8175    0.8869   361.8319   13.6014
14 days   median     0.8138    0.8851   367.9398   13.6841
14 days   wavg       0.8290    0.8930   342.8329   13.2395
21 days   avg        0.8209    0.8900   352.8278   13.4851
21 days   median     0.8170    0.8878   359.6751   13.6301
21 days   wavg       0.8322*   0.8956   334.1920   13.1595
28 days   avg        0.8202    0.8898   354.9295   13.5416
28 days   median     0.8158    0.8872   362.4167   13.6457
28 days   wavg       0.8307    0.8939   338.4464   13.2934
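Both tables are indexed by the size K of the autoregressive window, i.e. how many past daily observations feed each prediction. Building such a dataset from a univariate daily series can be sketched as below; the `make_lagged_dataset` helper is a hypothetical name, not code from the paper.

```python
import numpy as np

def make_lagged_dataset(series, k):
    """Build a supervised dataset from a univariate daily series: each row
    holds the previous k observations and the target is the next value,
    mirroring the autoregressive windows K = 7, 14, 21, 28 used above."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = series[k:]
    return X, y

# Toy series of 10 "daily inspection counts", K = 3 lags.
X, y = make_lagged_dataset(np.arange(10), 3)
# X[0] = [0, 1, 2] predicts y[0] = 3, and so on.
```

Each increment of K adds one more column to X and removes one usable row from the start of the series, which is why larger windows only pay off while the extra lags carry signal rather than noise.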

the autoregressive window is K = 21 days, reaching σ ≈ 0.832 and mse ≈ 334.2. Additionally, the results of this ensemble model are statistically significant compared to the next best model, knn, when K = 21 (p-value = 1.907 × 10^-6) according to a Wilcoxon signed-rank test [45-47], which is commonly used to test the statistical significance of machine learning models. Despite the positive results obtained with this ensemble model, one might expect even better results given the wide variety of individual learners that are part of the ensembles (one of the rules of thumb to keep in mind when using ensembles). However, Fig. 4 shows the high correlation (dark blue colour and "line" shape representation) among all predictions of the different individual learners used and prediction horizons considered, where the prefix 'D' corresponds to daily predictions (t + 1), 'W' to weekly predictions (t + 7), and 'A' to predictions made using the aggregation/disaggregation procedure. This clearly violates the second rule of thumb when using ensembles, which suggests combining a wide variety of individual learners whose predictions are not highly correlated, thus bringing different views to the same problem. Therefore, the small improvement achieved by the ensemble may be explained by the highly correlated predictions of the individual learners. Fig. 5 summarizes a comparative analysis of the effect of the size of the autoregressive window on the performance of both individual learners and ensemble models when predicting the number of inspections at the BIPs. On the one hand, it confirms that results saturate beyond K = 21 days and that both aggregated (blue colour) and daily predictions (red colour) perform better than weekly predictions (green colour). On the other hand, Fig. 5 highlights the low variance obtained when using the aggregation/disaggregation procedure to make predictions, independently of the size of the autoregressive window, i.e. the choice of model is no longer an important factor, as performance results will look very similar.

Fig. 4. Pearson's correlation coefficient between predictions of the individual learners used in this analysis considering a window of K = 21 days (best setting according to Table 6), where highly correlated predictions are represented with a darker blue colour and a sharper line representation. The names of the individual learners include a prefix, where 'D' indicates a (t + 1) prediction, 'W' a (t + 7) prediction, and 'A' a prediction using the aggregation/disaggregation procedure.

Please cite this article as: J. Ruiz-Aguilar, D. Urda and J.A. Moscoso-López et al., A freight inspection volume forecasting approach using an aggregation/disaggregation procedure, machine learning and ensemble models, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.06.109
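The Wilcoxon signed-rank test used above compares two models through their paired per-fold errors. A self-contained sketch using the normal approximation could look like the following; the fold errors below are synthetic, not the paper's actual cross-validation results.

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank test (normal approximation, two-sided) on paired
    samples, e.g. per-fold MSEs of two models across repeated CV."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]                                # drop zero differences
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1    # ranks of |d| (no tie handling)
    w_pos = ranks[d > 0].sum()                   # sum of ranks of positive diffs
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    p = 1 - erf(abs(z) / sqrt(2))                # two-sided p-value
    return w_pos, p

rng = np.random.default_rng(0)
mse_wavg = rng.normal(334, 10, size=200)            # 20 reps x 10 folds (synthetic)
mse_knn = mse_wavg + rng.normal(18, 5, size=200)    # consistently higher error
_, p_value = wilcoxon_signed_rank(mse_wavg, mse_knn)
```

Because the differences are paired fold by fold, the test asks whether one model is systematically better across resamples rather than merely better on average.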

4. Conclusions

This paper has presented an approach to develop a tool to aid decision-making in the inspection process at the BIPs in the Port of Algeciras Bay. Specifically, 4 datasets were generated from the original time series data based on the size of the autoregressive window, K = 7, 14, 21, 28 days in the past. These sizes were motivated by an autocorrelation analysis considering different lags, which revealed the existence of a weekly pattern. The datasets were analyzed considering two prediction horizons, (t + 1) and (t + 7), and using the weekly aggregation/disaggregation procedure proposed in this paper. This study thus aimed to analyze the efficacy of a variety of well-known machine learning models (individual learners), together with ensemble models that combine the predictions of the individual learners, in accurately predicting the number of inspections at the BIPs on a specific day. Model performance was tested using a 10-fold cross-validation strategy repeated 20 times to guarantee the randomness of the fold partitioning process.

In general, the results of the analysis have shown that using the proposed aggregation/disaggregation procedure provides better performance than making daily (t + 1) or weekly (t + 7) predictions. This procedure also produced very consistent results in terms of variance across different sizes of the autoregressive window, independently of the model used. Moreover, a neural network based model (brnn) turned out to be the most robust machine learning model among the individual learners considered in terms of highly accurate predictions, independently of the size of the autoregressive window. Regarding ensemble models, the application of a weighted average ensemble model boosted predictions and increased the performance results up to σ ≈ 0.832 and mse ≈ 334.2, these results being statistically significant according to a Wilcoxon signed-rank test (p-value = 1.907 × 10^-6). In addition, the performance of all models tested seemed to improve as the size of the autoregressive window increased. Nevertheless, the absolute improvement obtained with each increment of the window size became progressively smaller, up to the point of K = 21 days in the past where performance results saturated, making it irrelevant to consider larger values of K. Finally, these results confirmed the utility of machine learning approaches in forecasting freight inspection volume, which may lead to an efficient supporting tool to detect workload peaks and congestion in the goods inspection facilities of seaports or airports. This work may be further extended in the future by considering cutting-edge models such as deep learning, which allow incorporating prior knowledge based on problem-specific information to treat groups of independent variables differently, with the hope of pushing predictions even further.

Fig. 5. Comparative analysis on the effects of the size of the autoregressive window in the performance of machine learning techniques to predict the number of inspections in the BIPs.
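The aggregation/disaggregation procedure itself is defined in the paper's methodology section, which is not reproduced here. As a hedged illustration only, one common way to implement the disaggregation step of such a weekly scheme is to split a predicted weekly total across days using historical day-of-week shares; the `disaggregate_week` helper and the proportional-share rule below are assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np

def disaggregate_week(weekly_total_pred, history):
    """Split a predicted weekly total back into daily values using the
    average share of each weekday in the historical daily series.

    weekly_total_pred : scalar forecast of next week's total volume
    history           : (n_weeks, 7) array of past daily volumes
    """
    history = np.asarray(history, dtype=float)
    shares = history.sum(axis=0) / history.sum()   # day-of-week proportions
    return weekly_total_pred * shares              # daily values, sum to the total

# Toy history: two identical weeks with a mid-week peak and a quiet weekend.
hist = np.array([[10, 12, 15, 14, 11, 5, 3],
                 [10, 12, 15, 14, 11, 5, 3]])
daily = disaggregate_week(70.0, hist)
```

By construction the disaggregated daily values sum exactly to the weekly forecast, so the weekly model's accuracy carries over to the daily level as long as the within-week pattern is stable.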

Declaration of Competing Interest

None.

Acknowledgments

This work is part of the coordinated research projects TIN2014-58516-C2-1-R and TIN2014-58516-C2-2-R supported by MICINN (Ministerio de Economía y Competitividad), Spain. The data has been kindly provided by the Port Authority of Algeciras Bay.

References

[1] C.-C. Chou, C.-W. Chu, G.-S. Liang, A modified regression model for forecasting the volumes of Taiwan's import containers, Math. Comput. Model. 47 (9–10) (2008) 797–807, doi:10.1016/j.mcm.2007.05.005.
[2] E.I. Vlahogianni, M.G. Karlaftis, J.C. Golias, Short-term traffic forecasting: where we are and where we're going, Transp. Res. Part C: Emerging Technol. 43 (2014) 3–19.
[3] L. Zhang, Q. Liu, W. Yang, N. Wei, D. Dong, An improved k-nearest neighbor model for short-term traffic flow prediction, Proc.-Soc. Behav. Sci. 96 (2013) 653–662.
[4] Z. Zheng, D. Su, Short-term traffic volume forecasting: a k-nearest neighbor approach enhanced by constrained linearly sewing principle component algorithm, Transp. Res. Part C: Emerging Technol. 43 (2014) 143–157.
[5] M. Frisk, M. Göthe-Lundgren, K. Jörnsten, M. Rönnqvist, Cost allocation in collaborative forest transportation, Eur. J. Oper. Res. 205 (2) (2010) 448–458.
[6] M. Reichel, M. Botsch, R. Rauschecker, K.-H. Siedersberger, M. Maurer, Situation aspect modelling and classification using the scenario based random forest algorithm for convoy merging situations, in: 2010 13th International IEEE Conference on Intelligent Transportation Systems (ITSC), IEEE, 2010, pp. 360–366.

[7] Y. Zhang, A. Haghani, A gradient boosting method to improve travel time prediction, Transp. Res. Part C: Emerging Technol. 58 (2015) 308–324.
[8] C.-H. Wu, C.-C. Wei, D.-C. Su, M.-H. Chang, J.-M. Ho, Travel time prediction with support vector regression, in: Intelligent Transportation Systems, 2003. Proceedings. 2003 IEEE, volume 2, IEEE, 2003, pp. 1438–1442.
[9] J. Wang, Q. Shi, Short-term traffic speed forecasting hybrid model based on chaos–wavelet analysis-support vector machine theory, Transp. Res. Part C: Emerging Technol. 27 (2013) 219–232.
[10] V. Gosasang, W. Chandraprakaikul, S. Kiattising, A comparison of traditional and neural networks forecasting techniques for container throughput at Bangkok port, Asian J. Shipp. Logist. 27 (3) (2011) 463–482.
[11] Y. Wei, M.-C. Chen, Forecasting the short-term metro passenger flow with empirical mode decomposition and neural networks, Transp. Res. Part C: Emerging Technol. 21 (1) (2012) 148–162.
[12] S.B. Kotsiantis, I.D. Zaharakis, P.E. Pintelas, Machine learning: a review of classification and combining techniques, Artif. Intell. Rev. 26 (3) (2006) 159–190.
[13] L. Breiman, Bagging predictors, Mach. Learn. 24 (2) (1996) 123–140.
[14] L. Breiman, Random forests, Mach. Learn. 45 (1) (2001) 5–32.
[15] H. Drucker, C. Cortes, L.D. Jackel, Y. LeCun, V. Vapnik, Boosting and other ensemble methods, Neural Comput. 6 (6) (1994) 1289–1301.
[16] Y. Freund, R.E. Schapire, et al., Experiments with a new boosting algorithm, in: ICML, volume 96, Citeseer, 1996, pp. 148–156.
[17] F. Roli, G. Giacinto, G. Vernazza, Methods for designing multiple classifier systems, in: International Workshop on Multiple Classifier Systems, Springer, 2001, pp. 78–87.
[18] K.M. Ting, I.H. Witten, Issues in stacked generalization, J. Artif. Intell. Res. 10 (1999) 271–289.
[19] H. Al-Deek, Which method is better for developing freight planning models at seaports: neural networks or multiple regression? Transp. Res. Rec.: J. Transp. Res. Board 1763 (2001) 90–97, doi:10.3141/1763-14.
[20] W.H.K. Lam, P.L.P. Ng, W. Seabrooke, E.C.M. Hui, Forecasts and reliability analysis of port cargo throughput in Hong Kong, J. Urban Plann. Dev. 130 (3) (2004) 133–144, doi:10.1061/(ASCE)0733-9488(2004).
[21] J.A.M. López, J.J. Ruiz-Aguilar, I. Turias, M. Cerbán, M.J. Jiménez-Come, A comparison of forecasting methods for ro-ro traffic: a case study in the Strait of Gibraltar, in: Proceedings of the Ninth International Conference on Dependability and Complex Systems DepCoS-RELCOMEX, June 30–July 4, 2014, Brunów, Poland, Springer, 2014, pp. 345–353.
[22] J.J. Ruiz-Aguilar, I. Turias, J.A. Moscoso-López, M.J. Jiménez-Come, M. Cerbán, Forecasting of short-term flow freight congestion: a study case of Algeciras Bay port (Spain), Dyna 83 (195) (2016) 163–172.
[23] K.-L. Mak, D.H. Yang, Forecasting Hong Kong's container throughput with approximate least squares support vector machines, in: World Congress on Engineering, Citeseer, 2007, pp. 7–12.
[24] Y. Zhang, Y. Xie, Forecasting of short-term freeway volume with v-support vector machines, Transp. Res. Record 2024 (1) (2007) 92–99.
[25] J.-A. Moscoso-López, I.J.T. Turias, M.J. Come, J.J. Ruiz-Aguilar, M. Cerbán, Short-term forecasting of intermodal freight using ANNs and SVR: case of the Port of Algeciras Bay, Transp. Res. Proc. 18 (2016) 108–114.
[26] N. Kourentzes, D.K. Barrow, S.F. Crone, Neural network ensemble operators for time series forecasting, Expert Syst. Appl. 41 (9) (2014) 4235–4244.
[27] J.A. Moscoso-López, I.J. Turias, J.J.R. Aguilar, F.J. Gonzalez-Enrique, SVR-ensemble forecasting approach for ro-ro freight at Port of Algeciras (Spain), in: The 13th International Conference on Soft Computing Models in Industrial and Environmental Applications, Springer, 2018, pp. 357–366.
[28] J.J. Ruiz-Aguilar, I. Turias, J.A. Moscoso-López, J.M. Jesús, M. Cerbán-Jiménez, Efficient goods inspection demand at ports: a comparative forecasting approach, Int. Trans. Oper. Res. (2017), doi:10.1111/itor.12397.
[29] G.E.P. Box, G. Jenkins, Time Series Analysis, Forecasting and Control, Holden-Day, Inc., San Francisco, CA, USA, 1990.
[30] B. Bischl, M. Lang, L. Kotthoff, J. Schiffner, J. Richter, E. Studerus, G. Casalicchio, Z.M. Jones, mlr: machine learning in R, J. Mach. Learn. Res. 17 (170) (2016) 1–5.
[31] J. Neter, M.H. Kutner, C.J. Nachtsheim, W. Wasserman, Applied Linear Statistical Models, Irwin, 1996.
[32] T. Cover, P. Hart, Nearest neighbor pattern classification, IEEE Trans. Inf. Theor. 13 (1) (2006) 21–27.
[33] F. Rossi, N. Villa, Support vector machine for functional data classification, Neurocomputing 69 (7) (2006) 730–742.
[34] C.K.I. Williams, Prediction with Gaussian Processes: From Linear Regression to Linear Prediction and Beyond, Springer Netherlands, 1998, pp. 599–621.
[35] D.J.C. MacKay, A practical Bayesian framework for backpropagation networks, Neural Comput. 4 (3) (1992) 448–472.


[36] Y. Freund, R.E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst. Sci. 55 (1) (1997) 119–139.
[37] J.H. Friedman, Greedy function approximation: a gradient boosting machine, Ann. Stat. 29 (5) (2001) 1189–1232.
[38] R. Polikar, Ensemble based systems in decision making, IEEE Circuits Syst. Mag. 6 (3) (2006) 21–45.
[39] M.P. Sesmero, A.I. Ledezma, A. Sanchis, Generating ensembles of heterogeneous classifiers using stacked generalization, Wiley Interdiscip. Rev.: Data Min. Knowl. Discov. 5 (1) (2015) 21–34, doi:10.1002/widm.1143.
[40] C.M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, Berlin, Heidelberg, 2006.
[41] S. Khan, S. Ritchie, Statistical and neural classifiers to detect traffic operational problems on urban arterials, Transp. Res. Part C: Emerging Technol. 6 (5–6) (1998) 291–314.
[42] R. Kohavi, A study of cross-validation and bootstrap for accuracy estimation and model selection, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI'95, 1995, pp. 1137–1143.
[43] C. Bergmeir, R.J. Hyndman, B. Koo, A note on the validity of cross-validation for evaluating autoregressive time series prediction, Comput. Stat. Data Anal. 120 (2018) 70–83, doi:10.1016/j.csda.2017.11.003.
[44] B. Bischl, J. Richter, J. Bossek, D. Horn, J. Thomas, M. Lang, mlrMBO: a modular framework for model-based optimization of expensive black-box functions, arXiv preprint arXiv:1703.03373 (2017).
[45] T.G. Dietterich, Approximate statistical tests for comparing supervised classification learning algorithms, Neural Comput. 10 (1998) 1895–1923.
[46] J. Demšar, Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res. 7 (2006) 1–30.
[47] A. Lacoste, F. Laviolette, M. Marchand, Bayesian comparison of machine learning algorithms on single and multiple datasets, in: Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22, 2012, pp. 665–675.

Juan J. Ruiz-Aguilar received the BEng degree in Civil Engineering in 2006, the MEng degrees in Civil Engineering (2008) and Computational Modelling in Engineering (2010), the MSc degree in Logistics and Port Management (2012), and the PhD degree in Civil Engineering in 2014. From 2009 to 2010 he worked for consulting companies within the civil sector, and since 2010 for the University of Cádiz. He is currently an Associate Professor in the Department of Industrial and Civil Engineering at the Engineering School of Algeciras and the coordinator of the Port Engineering specialization in the MSc degree in Logistics and Port Management (University of Cádiz). His present interests lie in the field of soft computing, simulation, modeling and forecasting, and their applications to transportation, civil and logistics problems.

Daniel Urda is an ASECTI researcher at the University of Cádiz who holds a Ph.D. in Computer Science. His areas of specialization are data analysis and machine learning. He was hired for 2 years, under a Marie Curie fellowship, by Pharmatics Ltd., a company whose main clients are the National Health Service of the United Kingdom and pharmaceutical companies, to apply machine learning models to biomedical data. He has undertaken research stays at Liverpool John Moores University, INSERM in Paris and ETH Zurich. Currently, he is involved in several research projects applying machine learning techniques in industry.

José Antonio Moscoso-López received the BEng degree in Civil Engineering in 2001 and the MEng degree in Civil Engineering in 2003 from the University Alfonso X el Sabio (Spain), and the Ph.D. in Engineering in 2013 from the University of Cádiz (Spain). From 2003 to 2009, he worked for civil construction and consulting companies within the civil engineering (construction) sector, and since 2009 for the University of Cádiz. He is currently an Associate Professor with the Department of Industrial and Civil Engineering at the Engineering School of Algeciras. His research interests include simulation, modeling and forecasting of nonlinear time series in port and logistics environments.


Javier González-Enrique is pursuing a Ph.D. in Sustainable and Energy Engineering at the University of Cádiz. Having worked for different IT consulting companies, he currently works at the Department of Computer Engineering of the University of Cádiz under a predoctoral contract. His main research fields are Artificial Intelligence and Data Mining, with a strong interest in the application of Soft Computing techniques to problems in different fields. He is a member of the Intelligent Modelling of Systems research group of the aforementioned university and is currently working on several research projects devoted to the prediction of atmospheric pollutants.

Ignacio J. Turias received the B.Sc. and M.Sc. degrees in Computer Science from the University of Málaga (Spain), and the Ph.D. degree in Industrial Engineering from the University of Cádiz in 2003. He is currently a Professor (Reader or Associate Professor) with the Department of Computer Engineering at the University of Cádiz. His present interests lie in the field of soft computing and its applications to industrial, environmental and logistics problems. He has coauthored numerous technical journal and conference papers, which are the result of his participation in and leadership of research projects, and has served as a reviewer for several journals and conference proceedings. He has been contracted by a number of companies. He was also the Head of the Engineering School of Algeciras between 2003 and 2011. He is currently the principal investigator of the Intelligent Modelling of Systems research group.
