A hybridized ELM-Jaya forecasting model for currency exchange prediction
Smruti Rekha Das (corresponding author), Debahuti Mishra, Minakhi Rout
Department of Computer Science and Engineering, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Department of Information Technology, GNITC, Hyderabad, India
Article info
Article history: Received 19 May 2017; Revised 6 September 2017; Accepted 18 September 2017; Available online xxxx
Keywords: Currency exchange prediction; Extreme Learning Machine (ELM); Jaya; Neural Network (NN); Functional Link Artificial Neural Network (FLANN)
Abstract
This paper establishes a hybridized intelligent machine learning based currency exchange forecasting model using the Extreme Learning Machine (ELM) and the Jaya optimization technique. The model forecasts the exchange price of USD (US Dollar) to INR (Indian Rupee) and USD to EURO based on statistical measures, technical indicators and a combination of both, over time frames varying from 1 day to 1 month ahead. The proposed ELM-Jaya model has been compared with existing optimized Neural Network and Functional Link Artificial Neural Network based predictive models. Finally, the model has been validated using various performance measures such as MAPE, Theil's U, ARV and MAE. The comparison of the different feature sets demonstrates that the technical indicators outperform both the statistical measures and the combination of statistical measures and technical indicators in the ELM-Jaya forecasting model.
© 2017 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
1. Introduction
The exchange rate is the price at which the currency of one country can be exchanged for the currency of another. Analysis of the foreign exchange (Forex) market, based on Forex time series data, is one of the most popular channels of investment, much like stock market indices. In the international market, the exchange rate is one of the most important statistical measures used to judge the overall health of a country's economy. The basic ideas and methods used for the development of trading systems (Dymova et al., 2016) in the equity, Forex and futures markets are similar to each other. The exchange price is (Yao and Tan, 2000) affected by political, economic and even psychological factors that interact in a very complex fashion. Therefore, forecasting such highly fluctuating and irregular data is (Rout et al., 2015) usually subject to large errors. Researchers and practitioners have been making great efforts to develop more realistic models for predicting
financial time series data. The traditional statistical systems (Yao and Tan, 2000) used for financial market prediction suffer from various shortcomings; hence, researchers have developed various soft computing techniques, which are extensively used for many categories of financial market prediction. The models used for financial forecasting fall into two categories: the first involves statistical measures such as the Autoregressive Moving Average (ARMA) and the Autoregressive Integrated Moving Average (ARIMA), and the second involves soft computing based methods such as the Artificial Neural Network (ANN), the Support Vector Machine (SVM) and Fuzzy Set Theory (Dash et al., 2014). In the literature, many interesting publications on exchange rate prediction have been introduced based on both parametric and nonparametric approaches, and the two (Ince and Trafails, 2006) have also been combined so as to obtain better performance. In parametric models, economic theory is used to understand the structural relations between the exchange rate and other variables, and statistical methods are used to identify the correlation and non-linearity in the time series. In another communication (Anastasakis and Mort, 2009), the authors argued that due to the fast evolving nature of financial markets, nonparametric modelling approaches are thought to be a better alternative to parametric methods for financial forecasting. As time series data are highly non-linear and the mean and variance of the series can change over time, the Autoregressive Conditional Heteroscedasticity (ARCH) model (Engle, 1982) was introduced by Engle to overcome this difficulty and was later generalized by Bollerslev
(Bollerslev, 1986). These nonlinear models, ARCH and GARCH, are appropriate for capturing volatility changes in financial time series (AmirAskari and Menhaj, 2016). In past decades, linear models were introduced (Box et al., 2015) which are simple to understand and implement, but surveys (Ahmed et al., 2013; Kadilar et al., 2009) show that ARIMA is built on the assumption that the time series being forecasted is linear and stationary, whereas most time series data are nonlinear and non-stationary, which leads to poor prediction performance of ARIMA. To alleviate the limitations of statistics based methods, Rout et al. (2014) introduced an ARMA model based on DE, applied to the Rupee, Pound and Yen exchange rates with respect to the USD. This ARMA-DE model used statistical features such as the mean and variance along with the normalized exchange rate price, and demonstrated its superiority by comparison with other competitive models. Using the same mean and variance features, together with some common features from past conversion rates (Majhi et al., 2009) from Dollar to Pound and Rupee, the authors proposed an efficient low complexity nonlinear adaptive model and showed its superior performance in all cases. Lahmiri (2014) used technical indicators as inputs to forecast the trend of US currency exchange rate future prices using a BPNN trained with different numerical algorithms. In another article (Lahmiri, 2012a), the author used four categories of technical analysis measures, namely oscillators, stochastic measures, indexes and indicators, for the prediction of future prices of international stock markets; a simulation study showed the effectiveness of technical indicators to be better than that of oscillators, stochastic measures and indexes. In Lahmiri (2012b), the author used technical indicators as inputs to learning vector quantization (LVQ) neural networks to predict downward moves of the S&P 500 price index by 0.1%, 0.2%, 0.3% and 0.4%. Again, in Lahmiri (2017), the author extracted eight technical indicators as predictive variables from historical volatility series and fed them to a BPNN for the prediction of exchange rate volatility. From the survey on different types of ANN, it is found that the back propagation algorithm is the commonly used learning technique, but it suffers from a slow rate of convergence. Therefore, optimization techniques such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have been applied to ANNs and used for share price prediction, but this research could not produce (Baba et al., 1992) a satisfactory stock trading expert system. In a traditional feed forward network, the weights and biases of the different layers need to be tuned at each iteration. Hence, many iteration steps are required to obtain good performance (Huang et al., 2004) with learning methods based on gradient descent, which easily converge to local minima. In a Single-hidden Layer Feed forward Neural Network (SLFN) with at most n hidden neurons, the arbitrarily chosen input weights and hidden (Huang, 2003) layer biases need to be adjusted each time. Based on this concept, a simple learning algorithm called (Huang et al., 2004) the Extreme Learning Machine (ELM) has been introduced, whose learning speed is much faster than that of traditional learning algorithms.
The authors compared ELM with traditional gradient based learning and stated that, unlike traditional gradient-based learning algorithms, which intend to reach the minimum training error but do not consider the magnitude of the weights, ELM tends to reach not only the smallest training error but also the smallest norm of weights, thereby achieving better generalization performance for feed forward neural networks at a much faster running speed. The motivation behind this study is to improve the skill of time series models in predicting the exchange rate price with an appropriate machine learning approach. We have observed from the literature that very few researchers have used technical indicators, or a combination of technical indicators and statistical
measures, in exchange rate prediction; hence, it is a novel idea to introduce technical indicators and a combination of technical indicators and statistical measures. The next objective of this paper is to introduce ELM along with Jaya, which is new to this area, as no one has tried it before. Jaya is a newly proposed algorithm (Rao, 2016) with strong potential to solve constrained as well as unconstrained optimization problems. The algorithm needs only the common controlling parameters and does not require any algorithm specific control parameters. Keeping in view the Teaching Learning Based Optimization (TLBO) algorithm, which also does not require (Khalghani and Khooban, 2014) any algorithm specific parameters to be tuned, Jaya was introduced; however, Jaya differs from TLBO in that, unlike the two phases of TLBO (the teacher phase and the learner phase), Jaya has only one phase and is simpler to apply. In this study, exchange rate prediction based on ELM, the Functional Link Artificial Neural Network (FLANN) and the Neural Network (NN), with a set of technical indicators and statistical measures, has been experimented with. As the output weights depend on the randomly assigned input weights and biases, which affect the execution performance, these need to be optimized. Here, the TLBO, Jaya, PSO and DE optimization techniques have been used, which have a great impact on the network's usefulness. The main objective of this paper is an in-depth study of the exchange rate market and the prediction of the opening exchange rate price 1 day, 3 days, 5 days, 7 days, 15 days and 1 month in advance. Two exchange rate datasets, USD to INR and USD to EURO, have been taken for the experimental analysis. Basically, a comparative study has been carried out between ELM, FLANN and NN optimized by TLBO, Jaya, PSO and DE. The rest of the paper is organized as follows: Section 2 describes the datasets and discusses the preliminary concepts and methodologies adopted throughout this paper, Section 3 deals with the experimental setup, Section 4 explores the experimental analysis and finally Section 5 concludes the paper with future scope.
2. Preliminaries
2.1. Dataset description
The foreign exchange rate between two currencies is the rate at which one currency will be exchanged for another. It is basically determined by supply and demand; if there is great demand for American goods, the value of the USD increases. Every currency has a buy price and a sell price for exchanging currencies at the current or a determined price. For simulation purposes, real life data of two different Forex pairs, USD to INR and USD to EURO, have been considered for the periods given in Table 1. The model has been experimented with using the two datasets and for 6 time horizons. The data for exchange rate prediction are available at http://in.investing.com/currencies/usdinr-historical-data and http://in.investing.com/currencies/eurusd-converter. The dataset has been divided into two parts: one for training and the other for testing. The full description of the datasets is given in Table 1. Seven important technical indicators along with the open price, high value, low value and price of the respective exchange rate are taken into account. The empirical results are shown in terms of actual versus predicted values and error convergence rate. In this paper, we also present a comparative analysis of the performance of all the models across the time horizons. For the statistical measures, mean and standard deviation, the window size is 12, containing the present and previous 11 opening values of the Forex data. Also, for the calculation of the technical indicators, the window
Table 1. Description of data samples and data range.
Datasets       Total Samples   Data Range              Training Sample   Test Sample
USD to INR     4161            04/05/2000–03/06/2016   2913              1248
USD to EURO    3805            14/09/2001–03/06/2016   2664              1141
size is considered to be 12, as it provides better performance in the simulation experiments. The statistical measures and technical indicators are computed over each group of 12 data points and placed at the 12th position; the window (Rout et al., 2014) is then shifted by one position to extract the next input pattern.
2.2. Dataset analysis and selection of input
The experimental data have been selected from two different pairs of the Forex market, which is the world's biggest financial market. The rate at which our primary currency can be exchanged in the global Forex market decides the price we have to pay for a great number of products, the rate of return on our investments and even the rate of interest on our deposits and loans. We take a closer view of USD to INR and USD to EURO to predict the future trend. In this section, the theoretical structure of the proposed model is sketched out step-wise. This model is basically a hybridized model applied to exchange rate market analysis.
Step 1: Generation of the time series dataset: Let $E\langle E_p, E_o, E_h, E_l \rangle$ denote the dataset, where the columns $E_p$, $E_o$, $E_h$ and $E_l$ represent the exchange rate price, open price, high price and low price respectively for any day $e^i$, as shown in (1). Here, $e^i_f \in \mathbb{R}^n$ for $f \in \{p, o, h, l\}$. Here,
$$E = \begin{pmatrix} e^1_p & e^1_o & e^1_h & e^1_l \\ \vdots & \vdots & \vdots & \vdots \\ e^n_p & e^n_o & e^n_h & e^n_l \end{pmatrix} \qquad (1)$$
where the columns correspond to $E_p$, $E_o$, $E_h$ and $E_l$ and the rows to the days $e^1, \dots, e^n$.
Observing the above dataset, the exchange rate can be predicted 1 day ahead, 1 week ahead and 1 month ahead. A serious issue in exchange rate prediction is that the data do not have any class label, so for classification a new attribute $t$ is introduced, where $t$ can be the change in opening price or (Nayak et al., 2015) momentum or volatility. It can be expressed as given in (2), where $\Delta C_{it}$ is the difference between the $i$th and the $(i+1)$th opening values:
$$\Delta C_{it} = C_{it} - C_{(i+1)t} \qquad (2)$$
where $C_{it}$ denotes the opening value.
Step 2: Data point smoothing using technical indicators: A technical indicator is a series of data points; since one data point does not provide much information, a sequence of data points over a period of time is required. It is a mathematical calculation based on the security. Many indicators are found in the literature, such as the Simple Moving Average (SMA), Relative Strength Index (RSI), Moving Average Convergence Divergence (MACD), Average Directional Index (ADX), Alligator, Aroon Indicator, Bollinger Bands, Fibonacci Ratio Indicator, Momentum, Parabolic SAR, Stochastic Oscillator, Exponential Moving Average (EMA), Williams %R, Average True Range (ATR), Double Exponential Moving Average (DEMA), Linear Regression Angle (LRA), Linear Regression Slope (LRS) (Nayak et al., 2015) etc. In this paper, a few technical indicators, namely EMA, Stochastic Oscillator, Momentum, RSI, SMA, ATR and Williams %R, are used for smoothing the dataset and for the prediction of the opening price. Along with the technical
indicators, two statistical measures, the mean and the standard deviation (STDEV) (Rout et al., 2014), are included in the dataset. There are many factors which directly influence the prediction of the exchange rate. Descriptions of the above indicators and statistical measures are given below:
(a) EMA is used to counter the lagging weakness of the SMA indicator by weighting recent prices more heavily. Although the calculation of the EMA looks a bit daunting, in practice it is simple. The mathematical derivation of the EMA is given in (3):
$$\mathrm{EMA[today]} = \mathrm{Price[today]} \times SF + \mathrm{EMA[yesterday]} \times (1 - SF) \qquad (3)$$
where SF (smoothing factor) $= \frac{2}{k+1}$ and $k$ is the length of the EMA, i.e. the period setting; here 12 is considered as the period setting or window size.
(b) The Stochastic Oscillator is a momentum indicator in which the security's opening price is compared with the range of its prices over a specific time period. By adjusting the time period, the sensitivity of the oscillator can be reduced. The calculation of this indicator is done using (4):
$$\%K = 100 \times \frac{O - L_{12}}{H_{12} - L_{12}} \qquad (4)$$
where $O$ is the most recent opening price, $L_{12}$ is the lowest price of the previous 12 trading sessions and $H_{12}$ is the highest price of the previous 12 trading sessions.
(c) The Momentum indicator compares the current price with a past price. It is calculated as the ratio of today's price to the price $n$ periods ago:
$$\mathrm{Momentum} = \frac{\mathrm{Openprice}(p)}{\mathrm{Openprice}(p-n)} \times 100 \qquad (5)$$
where $\mathrm{Openprice}(p)$ is the opening price of the current bar and $\mathrm{Openprice}(p-n)$ is the opening price $n$ periods ago.
(d) RSI is a technical momentum indicator in which the magnitude of recent gains is compared with that of recent losses, in an effort to determine the overbought and oversold states of an asset. The RSI can be computed using (6):
$$RSI = 100 - \frac{100}{1 + RS} \qquad (6)$$
where $RS = \dfrac{\text{average of 12 days' up opens}}{\text{average of 12 days' down opens}}$.
$$RSI = \begin{cases} 100, & \text{if average loss} = 0 \\ 100 - \dfrac{100}{1 + RS}, & \text{otherwise} \end{cases} \qquad (7)$$
Basically, the term $\frac{100}{1+RS}$ helps to generate buy and sell signals.
(e) SMA is the simplest type of moving average in Forex analysis. Basically, it is calculated by adding up the last X periods of opening prices and then dividing that sum by X. Here the value of X is 12.
$$\text{12-day SMA} = \frac{\text{sum of 12 days' open prices}}{12} \qquad (8)$$
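As an illustration of how indicators (a)-(e) can be computed over a 12-sample window, a short Python sketch follows. It is a minimal sketch assuming a plain NumPy array of opening prices; the function names, the EMA seeding and the RSI up/down-move convention are our own choices, not taken from the paper.

```python
import numpy as np

def ema(prices, k=12):
    """Exponential moving average, Eq. (3): seeded with the first value (an
    assumption of this sketch), then EMA[t] = price[t]*SF + EMA[t-1]*(1-SF), SF = 2/(k+1)."""
    sf = 2.0 / (k + 1)
    out = np.empty(len(prices), dtype=float)
    out[0] = prices[0]
    for t in range(1, len(prices)):
        out[t] = prices[t] * sf + out[t - 1] * (1 - sf)
    return out

def stochastic_k(opens, window=12):
    """Stochastic oscillator %K, Eq. (4), computed on opening prices."""
    o = opens[-1]
    low, high = opens[-window:].min(), opens[-window:].max()
    return 100.0 * (o - low) / (high - low)

def momentum(opens, n=12):
    """Momentum, Eq. (5): ratio of the current open to the open n periods ago."""
    return opens[-1] / opens[-1 - n] * 100.0

def rsi(opens, window=12):
    """RSI, Eqs. (6)-(7): average up-move vs. average down-move of the opens."""
    diffs = np.diff(opens[-(window + 1):])
    avg_gain = diffs[diffs > 0].sum() / window
    avg_loss = -diffs[diffs < 0].sum() / window
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

def sma(opens, window=12):
    """Simple moving average, Eq. (8)."""
    return opens[-window:].mean()
```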
(f) ATR is a technical indicator designed to give an easy reading of market volatility. If the ATR has a higher value, traders may seek more price interest points on a specific trade. Conversely, if the ATR indicates low volatility, traders may temper their trading expectations with smaller limit orders. The mathematical formulation of the ATR is shown in (10), based on the true range defined in (9):
$$TR = \max\bigl(\text{today's high} - \text{today's low},\ \text{today's high} - \text{today's open},\ \text{today's open} - \text{today's low}\bigr) \qquad (9)$$
The ATR at time $t$ is calculated as
$$ATR_t = \frac{ATR_{t-1} \times (n-1) + TR_t}{n} \qquad (10)$$
where the parameter $n$ is set to 12.
(g) Williams %R, or simply %R, is a momentum indicator that is the inverse of the fast stochastic oscillator. It shows the current opening price in relation to the high and low of the past 12 days. By multiplying the raw value by -100, %R corrects for the inversion; as a consequence, Williams %R and the fast stochastic oscillator produce exactly the same line, only with different scaling. Williams %R oscillates from 0 to -100, where readings from 0 to -20 are taken as overbought and readings from -80 to -100 are taken as oversold. It can be computed using (11):
$$\%R = \frac{\text{highest high} - \text{open}}{\text{highest high} - \text{lowest low}} \times (-100) \qquad (11)$$
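A companion sketch for the volatility-oriented indicators (f) and (g), again illustrative Python under the same assumptions (per-day sequences of open, high and low prices; n = 12 as in the text; the ATR seeding is our own choice):

```python
def true_range(high, low, open_):
    """True range per Eq. (9): the largest of the three daily spreads."""
    return max(high - low, high - open_, open_ - low)

def atr(highs, lows, opens, n=12):
    """Average true range per Eq. (10): ATR_t = (ATR_{t-1}*(n-1) + TR_t)/n,
    seeded here with the first true-range value (an assumption of this sketch)."""
    trs = [true_range(h, l, o) for h, l, o in zip(highs, lows, opens)]
    atr_t = trs[0]
    for tr in trs[1:]:
        atr_t = (atr_t * (n - 1) + tr) / n
    return atr_t

def williams_r(opens, highs, lows, window=12):
    """Williams %R per Eq. (11), scaled by -100 so it runs from 0 to -100."""
    hh = max(highs[-window:])
    ll = min(lows[-window:])
    return (hh - opens[-1]) / (hh - ll) * -100.0
```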
(h) MACD is a trend following momentum indicator which basically demonstrates the relationship between two moving averages of prices; it is calculated by subtracting the 26-day EMA from the 12-day EMA. For the mathematical formulation of the MACD, we first calculate the 12-day EMA of the open price and the 26-day EMA of the open price. The 12-day EMA of the open price is:
$$EMA_{12} = \text{13th day open price} \times \frac{2}{12+1} + \text{average(1st to 12th day open prices)} \times \left(1 - \frac{2}{12+1}\right) \qquad (12)$$
The 26-day EMA of the open price is:
$$EMA_{26} = \text{26th day open price} \times \frac{2}{26+1} + \text{average(1st to 26th day open prices)} \times \left(1 - \frac{2}{26+1}\right) \qquad (13)$$
$$MACD = EMA_{12} - EMA_{26} \qquad (14)$$
The signal line (the 9-day EMA of the MACD) is:
$$\text{Signal} = \text{MACD on 27th day} \times \frac{2}{9+1} + \text{average(9 days of MACD, from the 26th to the 34th day MACD)} \times \left(1 - \frac{2}{9+1}\right) \qquad (15)$$
(i) STDEV: the amount of variation or dispersion among the data values in a set is quantified by the standard deviation. If the data points are close to the mean of the set, the standard deviation is low, and if the data points are spread over a large range, the standard deviation is high. The mathematical derivation of the standard deviation is given in (16):
$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2} \qquad (16)$$
(j) The mean, or average, is the sum of the numbers divided by the count of the numbers. The mathematical formulation for the calculation of the mean is given in (17):
$$A = \frac{1}{N}\sum_{i=1}^{N} a_i \qquad (17)$$
where the dataset contains the values $a_1, a_2, \ldots, a_n$.
Step 3: Dataset construction: The dataset has been reconstructed as shown in (18) and as discussed in the previous section, where the $TI_i$ are the technical indicators and the $SM_i$ are the statistical measures:
$$E = \begin{pmatrix}
e^1_p & e^1_o & e^1_h & e^1_l & \Delta c^1_t & TI^1_1 & TI^1_2 & \cdots & TI^1_8 & SM^1_1 & SM^1_2 & Y^1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & \vdots \\
e^n_p & e^n_o & e^n_h & e^n_l & \Delta c^n_t & TI^n_1 & TI^n_2 & \cdots & TI^n_8 & SM^n_1 & SM^n_2 & Y^n
\end{pmatrix} \qquad (18)$$
with columns $E_p$, $E_o$, $E_h$, $E_l$, $\Delta C_t$, $TI_1, \ldots, TI_8$, $SM_1$, $SM_2$ and the target $Y$.
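To tie Steps 2 and 3 together, a sketch of assembling the reconstructed dataset of Eq. (18) from the raw columns might look as follows. This is illustrative Python that reuses the indicator sketches given earlier (ema, stochastic_k, momentum, rsi, sma, atr, williams_r); the warm-up length, the delta-C definition as a one-day change in open, and the exact column order are our own assumptions.

```python
import numpy as np

def indicator_row(opens, highs, lows, window=12):
    """One row of the eight technical indicators (TI1..TI8) of Eq. (18).
    MACD uses the 12- and 26-day EMAs of the open price, Eqs. (12)-(14)."""
    macd = ema(opens, k=12)[-1] - ema(opens, k=26)[-1]
    return [ema(opens, k=window)[-1],
            stochastic_k(opens, window),
            momentum(opens, n=window),
            rsi(opens, window),
            sma(opens, window),
            atr(highs, lows, opens, n=window),
            williams_r(opens, highs, lows, window),
            macd]

def build_dataset(opens, highs, lows, window=12, ahead=1):
    """Stack [open, high, low, delta-C, TI1..TI8, mean, stdev] per window and pair
    it with the opening price `ahead` days later, mirroring Eq. (18)."""
    opens, highs, lows = (np.asarray(a, float) for a in (opens, highs, lows))
    warmup = 27                 # assumed warm-up so the 26-day EMA and momentum have history
    rows, targets = [], []
    for end in range(warmup, len(opens) - ahead + 1):
        o, h, l = opens[:end], highs[:end], lows[:end]
        win = opens[end - window:end]
        row = [opens[end - 1], highs[end - 1], lows[end - 1],
               opens[end - 1] - opens[end - 2],          # delta C_t: change in open (assumed)
               *indicator_row(o, h, l, window),
               win.mean(), win.std()]                    # SM1, SM2
        rows.append(row)
        targets.append(opens[end + ahead - 1])           # Y: open `ahead` days later
    return np.asarray(rows), np.asarray(targets)
```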
2.3. Methodologies adopted
ELM is a simple learning approach in which the input weights of the model and the biases in the hidden layer are chosen randomly, and the output weights are calculated analytically. Hence, there inevitably exists a set of non-optimal input weights and hidden layer biases, which may create two problems: first, ELM may need more neurons than conventional (Zhu et al., 2005) tuning based learning algorithms, which may result in a slow response of ELM to unknown testing data; second, an ill-conditioned (Zhao et al., 2009) hidden output matrix H may arise when a huge number of hidden layer neurons is used, which may worsen the generalization performance. ELM works for the generalized SLFN, but the hidden layer does not need (Huang et al., 2012) to be tuned. The basic appeal of ELM over the conventionally trained SLFN is that, instead of tuning the hidden layer, it methodically calculates the output weights.
The performance of ELM is compared with two other widely accepted prediction models, FLANN and NN. FLANN solves the complexities of real life problems by generating non-linear (Dehuri et al., 2012) decision boundaries, because a single network model cannot be stated to be the best. In FLANN, to enhance the dimensionality of the input vector, the input vector is expanded using any set of linearly independent functions in the functional expansion block. FLANN offers lower (Patra and Pal, 1995) computational complexity than the MLP because of its single layer structure, whereas ANN is very useful when non-linear complexities exist in the datasets and the primary goal is (Tu, 1996) outcome prediction. ANN is increasingly popular in finance, as financial services organizations have been the (Kaastra and Boyd, 1996) second largest sponsors of research in neural networks.
ELM, FLANN and NN use the TLBO, Jaya, PSO and DE optimization techniques for the exclusion of non-optimal weights. TLBO is a population based optimization technique developed by Rao et al.; it uses a population (Rao et al., 2011) of solutions to proceed towards the global solution. It is inspired by the teaching-learning process and is used to solve non-linear optimization problems. Here, a group of learners is taken as the population, and the different design variables (Khalghani and Khooban, 2014) are considered as the different subjects presented to the learners. The performance of the learners in the (Singh et al., 2013) class strongly depends on the quality of the teacher: a good teacher improves the average performance of the class by motivating the learners. TLBO basically imitates the teaching-learning interaction of teachers and learners in a classroom. The output of the TLBO algorithm is (Baykasoğlu et al., 2014) considered in terms
of the results or grades of the learners, which depend on the quality of the teacher. The teacher plays the role of the best solution obtained so far, and the class of learners represents the population. The TLBO algorithm (Moghadam and Seifi, 2014; Rao and Kalyankar, 2011) is divided into two primary operations: (i) the teacher phase and (ii) the learner phase. Teachers and learners are the two most important components of the algorithm and mark out its two basic modes of learning.
Keeping TLBO in view, Jaya is another optimization (Rao, 2016) technique in which the number of parameters is smaller. The basic concept of the algorithm is that the solution of the problem moves towards success (moving towards the best solution) and tries to (Rao et al., 2016) avoid failure (moving away from the worst solution). Jaya not only (Zhang et al., 2016) adopts the algorithm-specific-parameter-free concept of TLBO but is also simpler than TLBO; it only requires tuning of the population size and the number of iterations, which are the common controlling parameters of any algorithm. The implementation of the Jaya algorithm is simple (Rao et al., 2016), as its solution is updated in only one single phase using a single equation; unlike TLBO, it has only one phase. Although both TLBO and Jaya are (Mishra and Ray, 2016) independent of algorithm-specific parameters, Jaya is dominant over TLBO by virtue of its lower implementation complexity, lower computational time and faster convergence characteristics. The implementation steps of Jaya are summarized below:
Step 1: Initialize the population size, the number of design variables and the termination condition.
Step 2: Identify the best and worst solutions in the population.
Step 3: Modify the solutions based on the best and worst solutions using Eq. (19):
$$x'_{j,k,i} = x_{j,k,i} + r_{1,j,i}\left(x_{j,best,i} - \left|x_{j,k,i}\right|\right) - r_{2,j,i}\left(x_{j,worst,i} - \left|x_{j,k,i}\right|\right) \qquad (19)$$
where $x_{j,k,i}$ is the value of the $j$th variable for the $k$th candidate during the $i$th iteration, and $r_{1,j,i}$ and $r_{2,j,i}$ are random numbers in [0, 1]. Hence, $i$ indexes the iterations, $j$ the design variables and $k$ the population.
Step 4: Compare the modified solution with the existing solution. If the modified one is better, it replaces the previous solution; otherwise the previous solution is kept.
Step 5: Repeat Steps 2 to 4 until the termination condition is satisfied.
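A compact sketch of the Jaya update of Eq. (19) follows (illustrative Python; the sphere function used as the fitness is only a placeholder for the actual training-error objective minimized later in the paper):

```python
import numpy as np

def jaya_step(pop, fitness):
    """One Jaya iteration: every candidate moves towards the best solution and
    away from the worst one, per Eq. (19); a move is kept only if it improves."""
    scores = np.array([fitness(x) for x in pop])
    best, worst = pop[scores.argmin()], pop[scores.argmax()]
    r1, r2 = np.random.rand(*pop.shape), np.random.rand(*pop.shape)
    trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    keep = np.array([fitness(t) for t in trial]) < scores
    pop[keep] = trial[keep]
    return pop

# toy usage: 20 candidates, 5 design variables, placeholder objective
population = np.random.uniform(-1, 1, size=(20, 5))
for _ in range(100):                      # only the common controlling parameters are set
    population = jaya_step(population, lambda x: float(np.sum(x ** 2)))
```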
The outcomes of the TLBO and Jaya based models are compared with those of another two extensively accepted techniques, PSO and DE. PSO maintains a swarm of particles and searches for the global best solution, where each individual moves (Chakravarty and Dash, 2012; Pulido et al., 2014; Abdual-Salam et al., 2010; Briza and Naval, 2011) towards the best particle of the entire population at each iteration. DE is similar to GA, having the same stages but in a different sequence: unlike in GA, mutation is the first operation in every generation, and (Das and Suganthan, 2011; Hegerty et al., 2009; Hachicha et al., 2011) recombination is then performed in every generation, which is similar to the crossover operation in GA. Researchers from numerous domains apply DE in their fields, as it is a simple and robust technique for solving optimization problems.
3. Experimental setup
3.1. Experimental configuration
The model has been developed using MATLAB version R2015b (8.6.0.267246) 64-bit, which can run on both Windows and Unix operating systems.
3.2. Parameter description
In this study, an empirical comparison of the performance of ELM, FLANN and NN using TLBO, Jaya, PSO and DE is considered. All the models are used to predict 1 day, 3 days, 5 days, 7 days, 15 days and 1 month in advance. The simulation parameters used for the DE, PSO, TLBO and Jaya algorithms, along with the network topology of the models, are shown in Table 2. For ELM, the rectified linear unit is considered as the activation function, whereas for NN and FLANN the sigmoidal activation function is used. The number of hidden nodes in the hidden layer of ELM is 15 in all cases, except PSO, where it has been set to 5 for experimentation.
3.3. Performance evaluation
A performance measure is a quantifiable expression of the degree to which objectives and progress are achieved. In this paper, different types of performance measures are used to judge the performance (Majhi et al., 2014) of the proposed prediction model on the test data. Here, exchange rate prediction comparisons are conducted for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead in terms of the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), Theil's U and Average Relative Variance (ARV), whose mathematical formulations are given in (20), (21), (22), (23) and (24) respectively, where $P_i$ is the predicted output and $A_i$ is the actual output. MSE measures the average of the squares of the errors. It is necessarily non-negative, and a value closer to zero is better. It represents the difference between the actual and the predicted observation values. It is used to find out the extent to which the model fits the data and whether the removal of some explanatory variables, or a simplification of the model, is possible without significantly harming the model's predictive ability.
$$MSE = \frac{1}{N}\sum_{i=1}^{N}(A_i - P_i)^2 \qquad (20)$$
Although MSE can drive the predictive model in the (Ferreira et al., 2008) training process, it alone cannot be the decisive parameter for comparing different models. A few other performance criteria should also be taken into account to enable a more robust assessment.
Table 2. Values of different parameters of the algorithms used in simulation.
ELM:    No. of nodes in the hidden layer: 15
NN:     No. of hidden layers: 3; No. of nodes in each hidden layer: 5
FLANN:  Size of expansion: 9
DE:     Population size: 100; Max. iterations: 100; Crossover rate (Cr): 0.8; Mutation scale factor (F): 0.6
PSO:    Population size: 100; Max. iterations: 100; Acceleration coefficient (1): 0.01; Acceleration coefficient (2): 0.02
TLBO:   Population size: 100; Max. iterations: 100
Jaya:   Population size: 100; Max. iterations: 100
MAPE is a measure of the accuracy of a forecasting method in statistics. It usually expresses accuracy as a percentage. When the errors are averaged directly, positive and negative errors can cancel each other out; taking MAPE is an easy way to fix this problem. It is easy to understand and easy to calculate, with the benefit that it gives a better representation of the true forecast error.
$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{A_i - P_i}{A_i}\right| \times 100 \qquad (21)$$
MAE is a quantity used to measure how close forecasts or predictions are to the eventual outcomes. It gives better results for series on the same scale. Although it depends on the scale of the dependent variable, it is less sensitive to outliers than the usual squared loss. In time series analysis, it is considered a common measure of forecast error.
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|A_i - P_i\right| \qquad (22)$$
Theil's U is a relative accuracy measure that compares the forecast results with the results of forecasting using minimal historical data. It also squares the deviations to give more weight to large errors and to exaggerate errors, which can help eliminate methods with large errors. Theil's coefficient of inequality is one of the statistical forecasting evaluators and takes non-negative values. If the value is 1, the predictor has the same performance as the random walk model; if it is greater than 1, the performance is worse than that of the (Ferreira et al., 2008) random walk model; and the predictor is usable if its value is less than 1. If the value tends to zero, the predictor tends to a perfect model.
$$\text{Theil's } U = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(A_i - P_i)^2}}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}A_i^2} + \sqrt{\frac{1}{N}\sum_{i=1}^{N}P_i^2}} \qquad (23)$$
ARV is the average relative variance, a measure of how much the predictions vary relative to the variability of the data. If ARV = 1, the predictor has the same performance as simply predicting the mean of the series. If ARV > 1, the predictor is worse than taking the mean, and vice versa.
$$ARV = \frac{\sum_{i=1}^{N}(P_i - A_i)^2}{\sum_{i=1}^{N}(P_i - \bar{A})^2} \qquad (24)$$
where $\bar{A}$ denotes the mean of the series.
3.4. Algorithm
The algorithm of ELM-Jaya for the analysis of exchange rate prediction is given as follows.
3.5. Schematic layout of the proposed ELM-Jaya prediction model
The proposed exchange rate analysis model is shown in Fig. 1. This model works step-wise. First, the currency exchange rate data used throughout the paper are taken for experimental purposes. Then, to enhance the dimensionality of the dataset, technical indicators and statistical measures are computed using (3)-(17), considering 12 as the window size. After regenerating the dataset, it is divided into two parts, a training part and a testing part, in the ratio of 7:3. The model is trained using ELM, FLANN and NN, each optimized with TLBO, Jaya, PSO and DE; the architecture of the ELM model optimized with TLBO, Jaya, PSO and DE is shown in Fig. 2.
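As a complement to the algorithm outline of Section 3.4 and the layout in Figs. 1 and 2, the sketch below illustrates the general idea of the ELM-Jaya coupling as we read it: the ELM hidden layer (ReLU, 15 nodes, as reported in Section 3.2) is parameterized by a candidate vector that a Jaya-style loop (see the sketch after Eq. (19)) can optimize against the training MSE, while the output weights are still obtained analytically. All function and variable names here are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def elm_fit_predict(theta, X_train, y_train, X_test, hidden=15):
    """Decode a candidate vector into ELM input weights/biases (the part Jaya tunes),
    compute ReLU hidden outputs, and solve the output weights by least squares."""
    d = X_train.shape[1]
    W = theta[:d * hidden].reshape(d, hidden)      # input weights (optimized by Jaya)
    b = theta[d * hidden:d * hidden + hidden]      # hidden-layer biases
    H_train = np.maximum(X_train @ W + b, 0.0)     # ReLU activation
    beta, *_ = np.linalg.lstsq(H_train, y_train, rcond=None)  # analytic output weights
    H_test = np.maximum(X_test @ W + b, 0.0)
    return H_test @ beta

def fitness(theta, X_train, y_train):
    """Training MSE of the ELM decoded from `theta`; this is what Jaya minimizes."""
    pred = elm_fit_predict(theta, X_train, y_train, X_train)
    return float(np.mean((y_train - pred) ** 2))

# toy usage: 12 input features (one window), 15 hidden nodes
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = X @ rng.normal(size=12) + 0.1 * rng.normal(size=300)
dim = 12 * 15 + 15
population = rng.uniform(-1, 1, size=(30, dim))    # candidates = input weights + biases
# a full ELM-Jaya run would iterate the jaya_step sketch over `population` with `fitness`
best = min(population, key=lambda t: fitness(t, X, y))
print(fitness(best, X, y))
```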
Fig. 1. Detailed diagram of currency exchange rate forecasting.
Fig. 2. Hybrid ELM forecasting model.
4. Experimental analysis
Two noteworthy exchange rate datasets, USD to INR and USD to EURO, are used for experimentation. In this way, a total of 4161 feature patterns for the rupee and 3805 feature patterns for the euro are extracted. Out of these patterns, 70% are used for training and 30% for testing. The model is used to predict 1 day, 3 days, 5 days, 7 days, 15 days and 1 month in advance. A few important technical indicators as well as the statistical measures discussed in Section 2 are taken into consideration, along with the open price, maximum price, minimum price and price. The performance of the trained model is measured by MSE, and the performance on the validation data is measured using MSE, MAPE, MAE, Theil's U and ARV.
The performance metrics for the two datasets mentioned above, for the opening price, have been obtained by evaluating the algorithms using the same training and testing samples. The technical indicators EMA, Stochastic Oscillator, Momentum, RSI, SMA, ATR, Williams %R and MACD, and the statistical measures STDEV and mean, are used for smoothing the data. The performance of ELM hybridized with TLBO and Jaya is excellent compared with ELM hybridized with PSO and DE. The experimental results of TLBO and Jaya are compared with those of the widely used PSO and DE. The controlling parameters of evolutionary algorithms are application oriented; no fixed values are assigned to these parameters. ELM has one hidden layer, and here, for ELM-TLBO and ELM-Jaya, 15 hidden units are considered to produce a
Fig. 3. A few technical indicators and statistical measures of (a) USD to INR and (b) USD to EURO data.
good approximation of the designed mapping, with a suitable population size of 100, and execution is carried out for 100 iterations. ELM-PSO is tested with the same values of population size and iterations, but gives better results with a smaller number of nodes in the hidden layer than TLBO and Jaya, with fitting values of the acceleration coefficients, i.e. the first acceleration coefficient = 0.01 and the second = 0.02. ELM-PSO shows better results with 5 nodes in the hidden layer,
Fig. 4. Opening price prediction of USD to INR exchange rate data using ELM based on TLBO, Jaya, PSO and DE.
Fig. 5. Opening price prediction of USD to INR exchange rate data using (a) FLANN-TLBO and (b) NN-TLBO.
but to make it more generalized and for a standard comparison, the population size has been increased, and it has been observed that ELM-TLBO and ELM-Jaya show far better performance considering both complexity and accuracy. Apart from the number of nodes in the hidden layer and the population size, ELM-DE has another two controlling parameters, the crossover rate (Cr) and the mutation scale factor (F). The value of the crossover rate is taken within 0 to 1 and the value of the mutation scale factor within 0.4 to 0.95. The optimization techniques implemented above are also hybridized with NN and FLANN; the expansion size of FLANN performs best at 9. The performance of ELM for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead prediction, for both the INR and
Table 3. Comparison of MSE for USD to INR Forex data based on ELM.
Day(s)    Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day     0.010482               0.0095855              0.012108
3 days    0.024403               0.014331               0.01956
5 days    0.036761               0.029067               0.072243
7 days    0.015095               0.020225               0.023757
15 days   0.14745                0.085946               0.2102
1 Month   0.44344                0.31029                0.37819
Fig. 6. Convergence speed of USD to INR using MSE for (a) ELM-TLBO, ELM-Jaya, ELM-PSO, ELM-DE and; (b) ELM-TLBO, NN-TLBO, FLANN-TLBO.
Fig. 7. Opening price prediction of USD to INR data using ELM for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead using technical indicators.
Fig. 8. Opening price prediction of USD to INR data using ELM based on TLBO for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead using statistical measures, technical indicators and both statistical measures and technical indicators.
Fig. 9. Opening price prediction of USD to INR using ELM based on Jaya for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead using statistical measures, technical indicators and both statistical measures and technical indicators.
EURO exchange rate prediction, in terms of error, is calculated. Fig. 3 shows the opening indices of each dataset after applying a few of the technical indicators and statistical measures.
In this paper, for days-ahead prediction with ELM, the number of nodes in the hidden layer differs for the different optimization techniques. Although ELM shows better prediction results, there
may be a set of non-optimal or unnecessary input weights and hidden node biases included. As the random initialization of the weights may affect the performance of ELM, the weights need to be optimized for better results. The actual versus predicted graphs of the opening price of USD to INR generated using ELM with TLBO, Jaya, PSO and DE are shown in Fig. 4. From these figures it can be observed that ELM performs well irrespective of the optimization technique, but ELM optimized with TLBO and Jaya gives better results. In the simulation study of FLANN, the input pattern has been enhanced using trigonometric functions and an expansion size of 9 has been considered; FLANN with trigonometric expansion is better compared to the other neural networks. NN involves high computational complexity, its structure needs excessive training time, and the number of weights between layers increases when the number of nodes in a layer increases. On the other hand, NN requires less formal statistical training than other classifiers. Here, FLANN and NN, with their weights optimized by TLBO, have been modelled. For the simulation of NN, the number of hidden layers is 3 and the number of nodes in each layer is 5. The actual versus predicted graphs of the opening values of USD to INR generated using FLANN with TLBO and NN with TLBO are shown in Fig. 5. ELM does not require many parameters and iterations to train and test compared to FLANN and NN; thus it reduces the complexity and the computational cost. Fig. 6(a) shows the MSE of ELM hybridized with the different optimization techniques, where ELM optimized with TLBO converges faster than the others, and Fig. 6(b) shows the MSE of TLBO combined with ELM, FLANN and NN, where it can be observed that ELM with TLBO converges faster, indicating its higher forecasting performance. The experimentation is undertaken for three models using four optimization techniques, and the simulation is applied to two Forex datasets. A comparative analysis of all the models along with the time
horizons 1 day, 3 days, 5 days, 7 days, 15 days and 1 month is presented in this paper. For smoothing the dataset and for prediction, a few of the indicators along with some statistical measures have been experimented with. In some runs only indicators or only statistical measures are considered, and in others the indicators are combined with the statistical measures. The experimental results for the USD to INR dataset, including indicators and statistical measures, are shown below, where it is found that in some places indicators with statistical measures give better results, whereas in other places only indicators or only statistical measures give better results.
4.1. Experimentation of ELM based on TLBO, Jaya, PSO and DE
ELM is a simple, tuning-free, three-step algorithm. The learning speed of ELM is very fast. ELM reaches its solutions directly without facing issues like local minima. It is much simpler to work with than NN and SVM. The training time of ELM is much smaller than that of the popular Back Propagation (BP) algorithm and SVM. In numerous applications, the prediction accuracy of ELM is better than that of BP and close to that of SVM. ELM can be implemented easily, as it needs only a few essential parameters to be tuned. In order to improve the performance of ELM, NN and FLANN, the exploration and exploitation capacities of optimization algorithms are combined with them and hybrid algorithms are produced. ELM has been hybridized with TLBO, Jaya, PSO and DE, where TLBO and Jaya require only common controlling parameters such as the population size and the number of iterations. Besides these common controlling parameters, many algorithms require their own algorithm specific parameters; for example, mutation and crossover rates are used in GA, whereas PSO uses an inertia rate and social and cognitive parameters. To run any population (Rao and Patel, 2013) based optimization algorithm, the common controlling parameters are required, but each
Fig. 10. Opening price prediction of USD to INR using ELM based on PSO for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead using technical indicators.
particular algorithm has its own specific parameters based on the logic of that algorithm; thus different algorithms have different specific parameters. In an optimization algorithm, the tuning of the algorithm specific parameters is a vital factor which affects its performance: improper tuning of such parameters either elevates the computational cost or causes the solution to be trapped in local optima. ELM optimized with TLBO and Jaya shows better performance compared to PSO and DE, and TLBO and Jaya give nearly identical results. ELM is computationally free from iterations, which makes it extremely fast by significantly reducing the computational time required to train the SLFN. The actual versus predicted graph of the opening price of the USD to INR data using ELM for the prediction of 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead is shown in Fig. 7, and the MSE on the training data for technical indicators, statistical measures and a combination of both is shown in Table 3. Although ELM has good potential to solve the prediction problem and gives good accuracy, the random selection of non-optimal input weights and biases may decrease the prediction accuracy. In this study, the proposed model follows the basic concept of ELM, in which the output weights (Dash et al., 2014) are generated analytically, but to optimize the weights, TLBO, Jaya, PSO and DE with a regularization parameter are used; the regularization parameter basically improves the performance of the ELM. The actual versus predicted graphs of TLBO-ELM, Jaya-ELM, PSO-ELM and DE-ELM for USD to INR exchange rate prediction are shown in Figs. 8, 9, 10 and 11 respectively. PSO has the ability to converge quickly, and the performance of PSO is not sensitive (Shi et al., 1999) to the population size. In the case of ELM hybridized with TLBO and Jaya, the number of nodes in the hidden layer is taken as 15, giving better performance, whereas PSO gives better results with fewer than 15 nodes.
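Section 4.1 mentions that a regularization parameter is used alongside the analytically computed output weights. A common way to realize this is a ridge-style solution; the sketch below is our own illustration of that idea rather than the authors' exact formulation.

```python
import numpy as np

def regularized_output_weights(H, T, C=1.0):
    """Ridge-regularized ELM output weights: beta = (H^T H + I/C)^{-1} H^T T.
    C is the regularization parameter; a larger C means weaker regularization."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

# example: 200 training rows, 15 hidden-node outputs, one target column
rng = np.random.default_rng(1)
H = np.maximum(rng.normal(size=(200, 15)), 0.0)   # ReLU hidden-layer outputs
T = rng.normal(size=200)
beta = regularized_output_weights(H, T, C=10.0)
print(beta.shape)   # (15,)
```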
Here, to achieve the desired result, we have increased the population size. Forex data with statistical measures (Rout et al., 2014) are known to give good performance, but in this paper, along with the statistical measures, technical indicators and a combination of technical indicators with statistical measures are taken. To assess the efficiency of the ELM based exchange rate predictor, some judgemental observations on the simulations are presented. From Tables 4-7, the following is clearly visible.
Table 4. Comparison of MSE for USD to INR Forex data based on ELM-TLBO.
Day(s)    Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day     0.0086131              0.0074477              0.0077247
3 days    0.012503               0.010579               0.010616
5 days    0.010243               0.010191               0.01711
7 days    0.0084296              0.00082015             0.0084978
15 days   0.069369               0.036807               0.042481
1 Month   0.32059                0.035663               0.038992
Table 5. Comparison of MSE for USD to INR Forex data based on ELM-Jaya.
Day(s)    Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day     0.0085876              0.0077447              0.0061082
3 days    0.012476               0.0095543              0.0096574
5 days    0.010127               0.010118               0.0090557
7 days    0.0084193              0.0083045              0.0088094
15 days   0.069187               0.033791               0.034635
1 Month   0.3108                 0.036099               0.03391
Fig. 11. Opening price prediction of USD to INR using ELM based on DE for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead using technical indicators.
Table 6. Comparison of MSE for USD to INR Forex data based on ELM-PSO.
Day(s)    Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day     0.0086881              0.0084392              0.0088652
3 days    0.019367               0.018554               0.012487
5 days    0.010256               0.010397               0.012879
7 days    0.0085609              0.015676               0.01709
15 days   0.10114                0.070726               0.06126
1 Month   0.32051                0.1649                 0.11719
Table 7. Comparison of MSE for USD to INR Forex data based on ELM-DE.
Day(s)    Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day     0.008669               0.006326               0.008417
3 days    0.010613               0.010118               0.010507
5 days    0.011142               0.010122               0.010031
7 days    0.007451               0.015363               0.013625
15 days   0.051162               0.071966               0.061892
1 Month   0.311322               0.078414               0.287314
In the case of ELM-DE, the technical indicators perform better for 1 day, 3 days, 5 days and 1 month ahead prediction, whereas for 7 days and 15 days ahead prediction the statistical measures perform better, and in a few cases the technical indicators combined with the statistical measures perform better than only indicators or only statistical measures. In ELM-PSO, the technical indicators perform better for 1 day, 3 days, 15 days and 1 month ahead prediction, while the statistical measures perform better for 5 days and 7 days ahead prediction. The exceptions are ELM-TLBO and ELM-Jaya, where the
technical indicators perform better for all days-ahead predictions compared to the statistical measures. One thing is clear in most of the cases: as the number of days ahead increases, the convergence error increases. The results obtained from the different models show that the technical indicators offer more accurate exchange rate predictions than the alternatives. Another interesting observation from Tables 3-7 is that, although the performance of standard ELM is appreciable, ELM optimized with TLBO, Jaya, PSO and DE outperforms it, and the corresponding simulation results for ELM prove the outstanding performance of technical indicators over statistical measures.
4.2. Experimentation of FLANN based on TLBO and PSO
FLANN is basically a (Misra and Dehuri, 2007) flat network with no hidden layer, and it can be trained easily by a simple learning algorithm. The optimization techniques are used in FLANN to improve its performance. As the performance of TLBO and Jaya is more or less equal, only FLANN-TLBO is compared with FLANN-PSO here, and from Figs. 12 and 13 it is clearly visible that FLANN-TLBO performs better than FLANN-PSO; this is also clearly shown in Tables 8 and 9. From the
Table 8. Comparison of MSE for ELM, NN and FLANN optimized by TLBO.
Day(s)    ELM-TLBO    NN-TLBO    FLANN-TLBO
1 day     0.0074477   0.03983    0.17622
3 days    0.010579    0.10217    0.56463
5 days    0.01008     0.030565   0.31752
7 days    0.0082015   0.040069   0.74828
15 days   0.036907    0.49513    1.1919
1 Month   0.035663    0.15307    0.54362
Fig. 12. Opening price prediction of USD to INR using FLANN based on TLBO for 1 day, 7 days, 1 month ahead.
Fig. 13. Opening price prediction of USD to INR using FLANN based on PSO for 1 day, 7 days, 1 month ahead.
Table 9. Comparison of MSE for ELM, NN and FLANN optimized by PSO.
Day(s)    ELM-PSO     FLANN-PSO   NN-PSO
1 day     0.0084392   17.1392     0.31643
3 days    0.018554    63.2394     0.11992
5 days    0.010397    22.5759     0.14664
7 days    0.015676    2.2737      0.25078
15 days   0.070726    53.4011     0.14344
1 Month   0.1649      45.2036     0.28209
convergence speed of FLANN-TLBO and FLANN-PSO shown in Figs. 16 and 17, it can be seen that as the number of prediction days increases, the convergence speed decreases. The right hand side of Figs. 16 and 17 shows a zoomed version of the left side figure.
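The trigonometric functional expansion used for FLANN (expansion size 9 in Table 2) can be sketched as follows; the exact basis ordering is an assumption on our part, since the paper does not list it explicitly.

```python
import numpy as np

def trig_expand(x, order=4):
    """Expand each input feature x_i into [x_i, sin(pi x_i), cos(pi x_i),
    sin(2 pi x_i), cos(2 pi x_i), ...], giving 1 + 2*order = 9 terms per feature
    for order = 4, matching the expansion size of Table 2 (ordering assumed)."""
    terms = [x]
    for k in range(1, order + 1):
        terms.append(np.sin(k * np.pi * x))
        terms.append(np.cos(k * np.pi * x))
    return np.concatenate(terms, axis=-1)

# example: a pattern of 12 normalized features becomes 12 * 9 = 108 expanded inputs
pattern = np.random.uniform(0, 1, size=(1, 12))
print(trig_expand(pattern).shape)   # (1, 108)
```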
4.3. Experimentation of NN based on TLBO and PSO
In the field of economics, which analyzes financial markets where decisions are always made under uncertainty, NN is capable of predicting the future price because of its ability to approximate any continuous function. However, due to its high computational complexity and its tendency to overfit, it cannot produce the required quality of results compared to ELM, as can be clearly seen from Tables 8 and 9. As the performance of both TLBO and Jaya is similar, TLBO is compared with the widely applied PSO, and the simulation results of NN-TLBO and NN-PSO are presented in Figs. 14 and 15. The figures show that the performance of NN-TLBO is better than that of NN-PSO for all the experimental days-ahead predictions.
Fig. 14. Opening price prediction of USD to INR using NN based on TLBO for 1 day, 7 days and 1 month ahead.
Fig. 15. Opening price prediction of USD to INR using NN based on PSO for 1 day, 7 days and 1 month ahead.
Fig. 16. Convergence speed of FLANN-TLBO.
The convergence behaviour of NN-TLBO and NN-PSO shown in Figs. 18 and 19 indicates that as the number of prediction days increases the convergence speed decreases; the right-hand side of each figure is a zoomed view of the left-hand plot. An NN design comprises an input layer, an output layer, a network topology (the structure or arrangement of the nodes) and the weighted connections between the nodes.
In an NN, learning proceeds in a bottom-up direction. A limitation of the NN-based approach is that the learned numerical values and the network structure (Samanta and Al-Balushi, 2003) may not be optimal for any other machine. Although an NN requires less statistical training, it incurs higher computational complexity. Compared with NN, ELM performs better.
Fig. 17. Convergence speed of FLANN-PSO.
Fig. 18. Convergence speed of NN-TLBO.
Fig. 19. Convergence speed of NN-PSO.
This superiority of ELM is reported in Tables 8 and 9, which summarize the simulation results. Compared with NN and FLANN, ELM also requires less training time because it is a single-hidden-layer feed-forward neural network whose hidden layer need not be tuned. The experimental results for the USD to EURO exchange rate dataset, applied to ELM, NN and FLANN optimized by TLBO, Jaya, PSO and DE, are given below. Here too, in most cases the opening price, maximum price, minimum price and percentage change combined with technical indicators give better results, which again suggests that technical indicators improve the forecasting performance. Although simulation graphs were obtained for all the models, not all of them are included in this paper owing to space limitations; the MSE of all the hybrid models on USD to EURO data can be seen in Tables 10–14. The exchange rate is open to many types of buyers and sellers, and the foreign exchange market decides the price at which currencies are exchanged. In this paper two datasets, USD to INR and USD to EURO, are considered, because USD is the standard unit of currency. Both USD and EUR are of great significance as international reserve currencies.
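The reduced training time of ELM comes from solving the output weights analytically instead of tuning the hidden layer. A minimal sketch is given below (Python/NumPy); the sigmoid activation, the number of hidden neurons and the use of the Moore-Penrose pseudo-inverse are standard choices assumed for illustration, not the exact configuration used in the experiments.

```python
import numpy as np

def train_elm(X, y, n_hidden=20, rng=np.random.default_rng(0)):
    """Single-hidden-layer ELM: random input weights and biases are fixed,
    only the output weights are solved analytically (pseudo-inverse)."""
    n_features = X.shape[1]
    W_in = rng.standard_normal((n_features, n_hidden))   # random, never tuned
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))             # hidden-layer output
    beta = np.linalg.pinv(H) @ y                           # output weights
    return W_in, b, beta

def predict_elm(model, X):
    W_in, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ beta

# toy usage with random stand-ins for the Forex features and targets
X = np.random.rand(200, 12)
y = np.random.rand(200, 1)
model = train_elm(X, y)
mse = np.mean((predict_elm(model, X) - y) ** 2)
```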
Table 10. Comparison of MSE for USD to EURO Forex data based on ELM.

Day(s)     Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day      0.0012418              0.00010114             0.00029606
3 days     0.001763               2.8342e-05             0.00015466
5 days     0.00012325             2.7974e-05             0.00013311
7 days     0.00081137             3.0319e-05             0.00011703
15 days    0.00016524             0.0011739              0.002043
1 Month    0.00056412             0.00046519             0.00054493
Table 11. Comparison of MSE for USD to EURO Forex data based on ELM-TLBO.

Day(s)     Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day      6.9916e-09             1.4416e-06             1.7808e-06
3 days     2.7414e-05             2.0679e-05             2.9997e-05
5 days     2.5246e-05             2.2314e-05             2.2423e-05
7 days     1.8923e-05             1.8876e-05             2.877e-05
15 days    0.00015735             6.3576e-05             6.4997e-05
1 Month    0.00055017             0.00013837             0.00010264
Table 12. Comparison of MSE for USD to EURO Forex data based on ELM-Jaya.

Day(s)     Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day      1.0534e-10             2.4093e-07             8.5191e-07
3 days     2.742e-05              1.4548e-05             1.3538e-05
5 days     2.5254e-05             2.0861e-05             2.0794e-05
7 days     1.8883e-05             1.335e-05              1.9191e-05
15 days    0.00015645             5.7642e-05             6.8118e-05
1 Month    0.00055081             7.4e-05                7.7454e-05
Table 14. Comparison of MSE for USD to EURO Forex data based on ELM-DE.

Day(s)     Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day      1.148e-08              6.4335e-06             3.8335e-06
3 days     2.6321e-05             2.4227e-05             2.3210e-05
5 days     2.5263e-05             1.2622e-05             2.6236e-06
7 days     1.7639e-05             1.5625e-05             2.8127e-05
15 days    0.00014717             6.1576e-05             7.7373e-05
1 Month    0.00045995             8.7713e-05             0.00018359
The simulation results of ELM-TLBO, ELM-Jaya, ELM-PSO and ELM-DE for the opening price of the USD to EURO Forex data were observed for all days-ahead predictions using technical indicators, statistical measures and a combination of the two. Fig. 20 shows the actual versus predicted opening values of the USD to EURO data using ELM integrated with TLBO, Jaya, PSO and DE for 1 day ahead prediction with technical indicators. Although TLBO has two phases and Jaya only one, the performance of the two is almost the same; the simulation results of ELM-Jaya therefore establish its exceptionally good performance for opening price prediction of the USD to EURO data, as for USD to INR, for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead. For most prediction horizons technical indicators perform better than statistical measures, except for 1 day ahead prediction, where statistical measures perform better for all models, as is clearly visible in Tables 11–14. Like other stochastic algorithms, PSO performs better in lower dimensions (Van den Bergh and Engelbrecht, 2004), and hybridization of PSO expands its capabilities (AlRashidi and El-Hawary, 2009) in terms of accuracy and time complexity; for choosing the optimum weights in ELM, PSO provides strong and robust learning. DE is easier to implement than GA and PSO, as it needs fewer parameters to tune (Paterlini and Krink, 2006), and in a multidimensional search space DE is characterized by good convergence properties (Aliev et al., 2011). When used to optimize the input weights of ELM, the hybrid ELM-DE approach attains good generalisation performance, as can be seen in the simulated graphs. ELM performs outstandingly regardless of the optimization technique, which can be seen in Table 15, where ELM performs better on all the performance measures. The MSE values of ELM without any optimization technique are recorded in Table 10; from this table it is observed that, although ELM performs fairly well on its own, its performance improves further when the random initialization of the weights, which affects the output of standard ELM, is refined by the optimization techniques, as shown in Tables 11–14. The MSE of ELM with the different optimization techniques on the USD to EURO dataset, using technical indicators, statistical measures and their combination, is reported in Tables 11–14.
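To illustrate how an optimizer can refine the randomly initialized input weights of ELM, the hedged sketch below (Python/NumPy) wraps an ELM solve inside a Jaya-style search over the input-weight/bias vector, with training MSE as the fitness. The population size, iteration count and encoding are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def elm_mse(theta, X, y, n_hidden):
    """Decode a candidate into ELM input weights/biases, solve the output
    weights analytically, and return the training MSE as the fitness."""
    d = X.shape[1]
    W_in = theta[:d * n_hidden].reshape(d, n_hidden)
    b = theta[d * n_hidden:]
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    beta = np.linalg.pinv(H) @ y
    return np.mean((H @ beta - y) ** 2)

def jaya_elm(X, y, n_hidden=20, pop=30, iters=100, rng=np.random.default_rng(1)):
    dim = X.shape[1] * n_hidden + n_hidden
    P = rng.uniform(-1, 1, (pop, dim))                      # candidate weight vectors
    fit = np.array([elm_mse(p, X, y, n_hidden) for p in P])
    for _ in range(iters):
        best, worst = P[fit.argmin()], P[fit.argmax()]
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        # Jaya update: move towards the best and away from the worst solution
        Q = P + r1 * (best - np.abs(P)) - r2 * (worst - np.abs(P))
        qfit = np.array([elm_mse(q, X, y, n_hidden) for q in Q])
        improved = qfit < fit                                # greedy selection
        P[improved], fit[improved] = Q[improved], qfit[improved]
    return P[fit.argmin()], fit.min()

# toy usage with random stand-ins for the normalized Forex features
X, y = np.random.rand(150, 12), np.random.rand(150, 1)
theta_best, best_mse = jaya_elm(X, y)
```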
Table 13. Comparison of MSE for USD to EURO Forex data based on ELM-PSO.

Day(s)     Statistical Measures   Technical Indicators   Statistical Measures with Technical Indicators
1 day      4.8893e-07             6.0586e-06             1.5448e-06
3 days     2.7464e-05             1.5956e-05             1.4956e-05
5 days     2.5466e-05             2.1225e-05             3.228e-05
7 days     1.9021e-05             1.2313e-05             2.6992e-05
15 days    0.00015792             7.7005e-05             8.7601e-05
1 Month    0.00054898             0.00016838             0.00029807

4.4. Result analysis of ELM based on performance evaluation measures
TLBO performs competitively with PSO and DE and demonstrates its robustness and effectiveness over both. It has great potential for solving multi-objective problems, yields very satisfactory results compared with other metaheuristic approaches, and is able to solve complicated optimization problems with highly nonlinear behaviour. Jaya is another algorithm with fewer parameters and, unlike TLBO, only a single phase. Developed after the wide acceptance of TLBO, Jaya is comparatively simpler and essentially parameter-free.
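For comparison with the single-phase Jaya update sketched earlier, the two phases of TLBO can be summarised as below (Python/NumPy). The bounds, population size and fitness function are placeholders, so this is a schematic of the teacher and learner phases rather than the implementation used in the experiments.

```python
import numpy as np

def tlbo_step(P, fitness, rng=np.random.default_rng(2)):
    """One TLBO generation on a population P (rows = candidate solutions).
    fitness(x) is minimised. Returns the updated population."""
    pop, dim = P.shape
    f = np.array([fitness(p) for p in P])

    # Teacher phase: move the class towards the best solution (the teacher)
    teacher = P[f.argmin()]
    TF = rng.integers(1, 3)                       # teaching factor, 1 or 2
    mean = P.mean(axis=0)
    Q = P + rng.random((pop, dim)) * (teacher - TF * mean)
    qf = np.array([fitness(q) for q in Q])
    keep = qf < f
    P, f = np.where(keep[:, None], Q, P), np.where(keep, qf, f)

    # Learner phase: each learner interacts with a randomly chosen peer
    for i in range(pop):
        j = rng.integers(pop)
        while j == i:
            j = rng.integers(pop)
        direction = P[i] - P[j] if f[i] < f[j] else P[j] - P[i]
        cand = P[i] + rng.random(dim) * direction
        cf = fitness(cand)
        if cf < f[i]:
            P[i], f[i] = cand, cf
    return P

# toy usage: minimise the sphere function
P = np.random.uniform(-1, 1, (20, 5))
for _ in range(50):
    P = tlbo_step(P, fitness=lambda x: float(np.sum(x ** 2)))
```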
Fig. 20. Opening price prediction of USD to EURO exchange rate data using ELM based on (a) TLBO, (b) Jaya, (c) PSO and (d) DE using technical indicators.
Table 15. Comparison of different performance evaluation measures.

Methods       MAPE      Theil's U     ARV          MAE
ELM-TLBO      0.1401    2.3804e-05    0.0012113    0.059543
ELM-Jaya      0.13599   0.00025408    0.0011793    0.062859
ELM-PSO       0.17254   0.0011183     0.00016947   0.065489
ELM-DE        0.13514   3.7376e-06    0.0010302    0.062411
NN-TLBO       0.29383   0.0022735     0.065567     0.19802
NN-Jaya       0.23789   0.063899      0.061266     3.4437
NN-PSO        0.30347   0.0033287     0.064137     0.11811
NN-DE         0.30058   0.003453      0.066557     0.11277
FLANN-TLBO    2.7792    0.03764       1.10003      1.9824
FLANN-Jaya    2.0789    0.95047       1.00001      3.3187
FLANN-PSO     23.9118   0.2781        1.00021      10.9337
FLANN-DE      35.4337   0.72218       1.00023      36.0226
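The error measures reported in Table 15 can be computed from the actual and predicted series as sketched below (Python/NumPy). The exact normalizations used by the authors, for example the reference forecast in Theil's U and the variance term in ARV, are not restated in this section, so standard textbook definitions are assumed here.

```python
import numpy as np

def mape(y, yhat):
    """Mean absolute percentage error (in %)."""
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def theils_u(y, yhat):
    """Theil's U statistic (assumed form: RMSE of the forecast normalised
    by the sum of the root mean squares of the two series)."""
    num = np.sqrt(np.mean((y - yhat) ** 2))
    den = np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(yhat ** 2))
    return num / den

def arv(y, yhat):
    """Average relative variance (assumed form: squared error of the forecast
    divided by the squared deviation of the actual series from its mean)."""
    return np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# toy usage
y = np.array([66.1, 66.4, 66.2, 66.8])
yhat = np.array([66.0, 66.5, 66.3, 66.6])
print(mape(y, yhat), theils_u(y, yhat), arv(y, yhat), mae(y, yhat))
```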
No algorithm-specific parameters are required. The effectiveness of TLBO and Jaya has been compared with that of PSO and DE. Algorithms such as PSO and DE have specific parameters that require careful tuning for enhanced performance; in contrast, TLBO and Jaya do not depend on any algorithm-specific parameters, are computationally simple and provide reliable prediction outcomes. The proposed model is trained with MSE and tested with different error measures, namely MAPE, Theil's U, ARV and MAE, to evaluate its performance. The evaluation of ELM, NN and FLANN with MAPE, Theil's U, ARV and MAE shows that ELM performs consistently better irrespective of the optimization technique. The results of this comparison are reported in
Table 15. The graphs plotted in Fig. 21 show that the MSE for 1 day and 3 days ahead prediction converges faster; as the number of days increases, the convergence speed decreases. The time complexity of ELM-TLBO and ELM-Jaya, based on MSE for the USD to INR Forex data, is also compared. Based on this comparison, shown in Table 16, ELM-Jaya demonstrates its outstanding performance.

4.5. Discussion

Currency valuations are decided by the movement of currency into and out of a country. An exchange rate has a base currency and a counter currency; typically the foreign currency serves as the base and the domestic currency as the counter.
Fig. 21. Convergence speed of (a) USD to INR and; (b) USD to EURO using MSE for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month ahead.
Table 16. Time complexity comparison of ELM-TLBO and ELM-Jaya (in seconds).

Day(s)     ELM-TLBO    ELM-Jaya
1 day      646.9644    313.4171
3 days     648.5334    320.7969
5 days     658.5202    321.6281
7 days     675.8187    319.8203
15 days    651.5675    320.6792
1 Month    676.3415    323.6773
Most exchange rates use USD as the base currency and other currencies as the counter currency, since USD is the benchmark unit of currency. Moreover, the proposed hybridized and optimized model is essentially different from traditional prediction models, and the proposed ELM based on TLBO and Jaya can be summarized as follows:

(1) In this paper the USD to INR and USD to EURO currency exchange rates have been considered. These data have been employed in three prediction techniques: ELM, NN and FLANN.

(2) ELM is a very simple training algorithm whose hidden layer requires no training, but it has some issues (Dash et al., 2014), such as requiring more hidden neurons than NN and FLANN. In ELM and FLANN the input weights and the biases of the hidden nodes are taken randomly, whereas in NN the input weights, the weights between hidden layers, the biases of the hidden nodes and the output weights are all taken randomly, which affects the performance of the models; hence global optimization methods such as TLBO, Jaya, PSO and DE have been proposed in this paper. Relative to classical modelling techniques, NN represents a useful alternative for variable datasets exhibiting nonlinear relationships (Bourquin et al., 1998). As exchange rate prediction is a dynamic and complex problem, NN learning is able to implicitly capture complex nonlinear relationships and to address this prediction task in finance. Likewise, FLANN is another prediction technique, an efficient NN variant suited to dynamic and computationally complex problems, whose hidden layer is completely removed by expanding the input layer (Patra and Kot, 2002).

(3) To find the global optimum of the objective function, TLBO and Jaya are employed here. Their attractiveness is that they require fewer parameters (Rao and Patel, 2013) than other population-based optimization techniques, which need algorithm-specific parameters, and they are able to enhance their exploration and exploitation
capabilities. These two techniques are compared with PSO and DE, as both are widely accepted optimization techniques used in many dynamic and complex problems, particularly in financial market prediction.

(4) The datasets considered in this paper use technical indicators and statistical measures with a window size of 12, as most papers on financial market prediction have adopted a window size of 12 with excellent performance for predicting opening values (a sketch of this sliding-window construction follows this list). Comparing the prediction accuracy of technical indicators, statistical measures and their combination, technical indicators show the best currency exchange rate forecasting capability, as demonstrated in the experimental work on the proposed model.

(5) As all the optimization techniques are population based, population size and number of iterations are common to all algorithms, and the remaining parameters are set according to each algorithm's specification. For TLBO and Jaya the only common controlling parameters are the population size and the maximum number of iterations; here, for all the optimization techniques, the population size and the number of iterations are both set to 100 to standardize the comparison. From the experiments it has been observed that TLBO and Jaya give similar results and perform better than PSO and DE. Although TLBO and Jaya show similar results, considering the CPU execution times of ELM-TLBO and ELM-Jaya presented in Table 16, the proposed ELM-Jaya model performs better in all respects.

(6) Thus, through the various simulation results along with the performance metrics, it is found that the proposed ELM-Jaya outperforms all the other hybrid models considered in this study.
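As referenced in point (4), a hedged sketch of how a window of 12 past values can be turned into input features and a days-ahead target is given below (Python/NumPy). The horizon handling and the use of raw prices are illustrative assumptions, not the exact feature pipeline of the paper.

```python
import numpy as np

def sliding_window_dataset(prices, window=12, horizon=1):
    """Build (X, y) pairs: each row of X holds `window` consecutive past
    prices and y holds the price `horizon` steps after the window ends."""
    X, y = [], []
    for t in range(len(prices) - window - horizon + 1):
        X.append(prices[t:t + window])
        y.append(prices[t + window + horizon - 1])
    return np.array(X), np.array(y).reshape(-1, 1)

# toy usage: a synthetic price series standing in for USD/INR opening prices
prices = 66.0 + np.cumsum(np.random.randn(500) * 0.1)
X_1day, y_1day = sliding_window_dataset(prices, window=12, horizon=1)
X_1month, y_1month = sliding_window_dataset(prices, window=12, horizon=22)  # ~1 trading month
```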
5. Conclusion

An integrated model comprising ELM with TLBO, Jaya, PSO and DE has been proposed to predict the exchange rate of USD
to INR and USD to EURO data for 1 day, 3 days, 5 days, 7 days, 15 days and 1 month in advance. The model is compared with FLANN and NN optimized with TLBO, Jaya, PSO and DE, which clearly establishes that the model not only predicts the opening price but can also guide investors in the Forex market. The simulation results and performance metrics clearly show that ELM trained by the above optimization techniques performs better than FLANN and NN trained by the same techniques. Moreover, compared with the other algorithms, some of which are evolutionary and some based on swarm intelligence and which require their own algorithm-specific parameters in addition to the common controlling parameters, TLBO and Jaya provide nearly equivalent results while requiring only the common controlling parameters, namely population size and number of iterations. The simulation results show that the performance of TLBO and Jaya supersedes that of PSO and DE; specifically, TLBO and Jaya need fewer parameters than PSO and DE. Although ELM-TLBO and ELM-Jaya give similar prediction results, the proposed ELM-Jaya model shows lower time complexity than ELM-TLBO. The simulation results also clearly demonstrate the significantly better performance of technical indicators compared with statistical measures and with the combination of technical indicators and statistical measures. ELM does not perform as well when the number of days ahead increases; hence, as future work, we intend to add influencing factors to the data to obtain better prediction results for longer horizons.
References Abdual-Salam, Mustafa E., Abdul-Kader, Hatem M., Abdel-Wahed, Waiel F., 2010. Comparative study between Differential Evolution and Particle Swarm Optimization algorithms in training of feed-forward neural network for stock price prediction. In: Informatics and Systems (INFOS), 2010 The 7th International Conference on IEEE. IEEE, pp. 1–8. Ahmed, Shamsuddin, Khan, M.G.M., Prasad, Biman., 2013. Forecasting Tala/USD and Tala/AUD of Samoa using AR (1) and AR (4): A Comparative Study. Math. Comput. Contempory Sci., 178–186 Aliev, Rafik A., Pedrycz, W., Guirimov, B.G., Aliev, R.R., Ilhan, U., Babagil, M., Mammadli, S., 2011. Type-2 fuzzy neural networks with fuzzy clustering and differential evolution optimization. Inf. Sci. 181 (9), 1591–1608. AlRashidi, Mohammed R., El-Hawary, Mohamed E., 2009. A survey of particle swarm optimization applications in electric power systems. IEEE Trans. Evol. Comput. 13 (4), 913–918. AmirAskari, Mercedeh, Menhaj, Mohammad Bagher, 2016. A modified fuzzy relational model approach to prediction of Foreign Exchange rates‘‘. Control, Instrumentation, and Automation (ICCLA). In: 2016 4th International Conference on. IEEE. Anastasakis, Leonidas, Mort, Neil, 2009. Exchange rate forecasting using a combined parametric and nonparametric self-organising modelling approach. Expert Syst. Appl. 36 (10), 12001–12011. Baba, Norio, Kozaki, Motokazu, 1992. An intelligent forecasting system of stock price using neural networks. In: Neural Networks, 1992. IJCNN., International Joint Conference on, Vol. 1. IEEE, pp. 371–377. Baykasog˘lu, Adil, Hamzadayi, Alper, Köse, Simge Yelkenci, 2014. Testing the performance of teaching–learning based optimization (TLBO) algorithm on combinatorial problems: Flow shop and job shop scheduling cases. Inf. Sci. 276, 204–218. Bollerslev, Tim, 1986. Generalized autoregressive conditional heteroskedasticity. J. Econometrics 31 (3), 307–327. Bourquin, Jacques, Schmidli, H., van Hoogevest, P., Leuenberger, H., 1998. Advantages of Artificial Neural Networks (ANNs) as alternative modelling technique for data sets showing non-linear relationships using data from a galenical study on a solid dosage form. Eur. J. Pharm. Sci. 7.1, 5–16. Box, George E.P., jenkins, Gwilym M., Reinsel, Gregory C., Ljung, Greta M., 2015. Time Series Analysis: Forecasting and Control. John Wiley & Sons. Briza, Antonio C., Naval, Prospero C., 2011. Stock trading system based on the multiobjective particle swarm optimization of technical indicators on end-of-day market data. Appl. Soft Comput. 11 (1), 1191–1201. Chakravarty, Sreejit, Dash, Pradipta K., 2012. A PSO based integrated functional link net and interval type-2 fuzzy logic system for predicting stock market indices. Appl. Soft Comput. 12 (2), 931–941. Das, Swagatam, Suganthan, Ponnuthurai Nagaratnam, 2011. Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15 (1), 4–31. Dash, Rajashree, Dash, Pradipta Kishore, Bisoi, Ranjeeta, 2014. A self adaptive differential harmony search based optimized extreme learning machine for financial time series prediction. Swarm Evol. Comput. 19, 25–42.
Dehuri, Satchidananda, Roy, Rahul, Cho, Sung-Bae, Ghosh, Ashish, 2012. An improved swarm optimized functional link artificial neural network (ISOFLANN) for classification. J. Syst. Softw. 85 (6), 1333–1345. Dymova, Ludmila, Sevastjanov, Pavel, Kaczmarek, Krzysztof, 2016. A Forex trading expert system based on a new approach to the rule-base evidential reasoning. Expert Syst. Appl. 51, 1–13. Engle, Robert F., 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica: J. Econometric Soc., 987–1007 Ferreira, Tiago A.E., Vasconcelos, Germano C., Adeodato, Paulo J.L., 2008. A new intelligent system methodology for time series forecasting with artificial neural networks. Neural Process. Lett. 28.2, 113–129. Hachicha, Nizar, Jarboui, Bassem, Siarry, Patrick, 2011. A fuzzy logic control using a differential evolution algorithm aimed at modelling the financial market dynamics. Inf. Sci. 181 (1), 79–91. Hegerty, Brian., Hung, Chih-Cheng., Kasprak, Kristen., 2009. ‘‘A comparative study on differential evolution and genetic algorithms for some combinatorial problems.” In: Proceedings of 8th Mexican International Conference on Artificial Intelligence. Nov:9–13. Huang, Guang-Bin, 2003. Learning capability and storage capacity of two-hiddenlayer feedforward networks. IEEE Trans. Neural Networks 14 (2), 274–281. Huang, Guang-Bin, Zhu, Qin-Yu, Siew, Chee-Kheong, 2004. Extreme learning machine: a new learning scheme of feedforward neural networks. In: Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on, Vol. 2. IEEE, pp. 985–990. Huang, Guang-Bin, Zhou, Hongming, Ding, Xiaojian, Zhang, Rui, 2012. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 42.2, 513–529. Ince, Huseyin, Trafails, Theodore B., 2006. A hybrid model for exchange rate prediction. Decis. Support Syst. 42 (2), 1054–1062. Kaastra, Iebeling, Boyd, Milton, 1996. Designing a neural network for forecasting financial and economic time series. Neurocomputing 10 (3), 215–236. _ ßEK, Muammer., ALADAG ˘ , Arasß Gör Çag˘dasß Hakan., 2009. Kadilar, Cem., S ß IMS ‘‘Forecasting the exchange rate series with ANN: the case of Turkey”. konometri ve Istatistik e-Dergisi 9:17–29. Khalghani, Mohammad Reza, Khooban, Mohammad Hassan, 2014. A novel selftuning control method based on regulated bi-objective emotional learning controller’s structure with TLBO algorithm to control DVR compensator. Appl. Soft. Comput. 24, 912–922. Lahmiri, Salim, 2012a. Resilient back-propagation algorithm, technical analysis and the predictability of time series in the financial industry. Decision Sci. Lett. 1 (2), 47–52. Lahmiri, Salim, 2012b. An entropy-LVQ system for S & P500 downward shifts forecasting. Manage. Sci. Lett. 2 (1), 21–28. Lahmiri, Salim., 2014. ‘‘An Exploration of Backpropagation Numerical Algorithms in Modeling US Exchange Rates.” Handbook of Research on Organizational Transformations through Big Data Analytics, 380. Lahmiri, Salim, 2017. Modeling and predicting historical volatility in exchange rate markets. Physica A 471, 387–395. Majhi, Ritanjali, Panda, Ganapati, Sahoo, Gadadhar, 2009. Efficient prediction of exchange rates with low complexity artificial neural network models. Expert Syst. Appl. 36 (1), 181–189. Majhi, Babita, Rout, Minakhi, Baghel, Vikash, 2014. On the development and performance evaluation of a multiobjective GA-based RBF adaptive model for the prediction of stock indices. J. 
King Saud University Comput. Inf. Sci. 26 (3), 319–331. Mishra, Soumya, Ray, Pravat Kumar, 2016. Power quality improvement using photovoltaic fed DSTATCOM based on JAYA optimization. IEEE Trans. Sustainable Energy 7 (4), 1672–1680. Misra, B.B., Dehuri, S., 2007. Functional Link Artificial Neural Network for Classification Task in Data Mining 1. Moghadam, Ahmad, Seifi, Ali Reza, 2014. Fuzzy-TLBO optimal reactive power control variables planning for energy loss minimization. Energy Convers. Manage. 77, 208–215. Nayak, Rudra Kalyan, Mishra, Debahuti, Rath, Amiya Kumar, 2015. A Naïve SVMKNN based stock market trend reversal analysis for Indian benchmark indices. Appl. Soft Comput. 35, 670–680. Paterlini, Sandra, Krink, Thiemo, 2006. Differential evolution and particle swarm optimisation in partitional clustering. Comput. Stat. Data Anal. 50 (5), 1220– 1247. Patra, Jagdish C., Kot, Alex C., 2002. Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 32.4, 505–511. Patra, Jagdish C, Pal, Ranendra N, 1995. A functional link artificial neural network for adaptive channel equalization. Sig. Process. 43 (2), 181–195. Pulido, Martha, Melin, Patricia, Castillo, Oscar, 2014. Particle swarm optimization of ensemble neural networks with fuzzy aggregation for time series prediction of the Mexican Stock Exchange. Inf. Sci. 280, 188–204. Rao, R., 2016. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 7 (1), 19–34. Rao, R.V., Kalyankar, V.D., 2011. Parameters optimization of advanced machining processes using TLBO algorithm. EPPM, Singapore 20, 21–31. Rao, R. Venkata, Patel, Vivek, 2013. An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Sci. Iranica 20.3 (0), 710–720.
Rao, Ravipudi V., Savsani, Vimal J., Vakharia, D.P., 2011. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43 (3), 303–315. Rao, R.V., More, K.C., Taler, J., Ocłon´, P., 2016. Dimensional optimization of a microchannel heat sink using Jaya algorithm. Appl. Therm. Eng. 103, 572–582. Rao, R.V., Rai, D.P., Ramkumar, J., Balic, J., 2016. A new multi-objective Jaya algorithm for optimization of modern machining processes. Adv. Prod. Eng. Manage. 11 (4), 271. Rout, Minakhi, Majhi, Babita, Majhi, Ritanjali, Panda, Ganapati, 2014. Forecasting of currency exchange rates using an adaptive ARMA model with differential evolution based training. J. King Saud University Comput. Inf. Sci. 26 (1), 7–18. Rout, Ajit Kumar, Dash, P.K., Dash, Rajashree, Bisoi, Ranjeeta, 2015. Forecasting financial time series using a low complexity recurrent neural network and evolutionary learning approach. J. King Saud University Comput Inf. Sci. (article in press) Samanta, B., Al-Balushi, K.R., 2003. Artificial neural network based fault diagnostics of rolling element bearings using time-domain features. Mech. Syst. Sig. Process. 17 (2), 317–328. Shi, Yuhui, Eberhart, Russell C., 1999. Empirical study of particle swarm optimization. In: Evolutionary Computation, 1999. CEC 99. Proceedings of the 1999 Congress on, Vol. 3.. IEEE, pp. 1945–1950.
Singh, Manohar, Panigrahi, B.K., Abhyankar, A.R., 2013. Optimal coordination of directional over-current relays using Teaching Learning-Based Optimization (TLBO) algorithm. Int. J. Elect. Power Energy Syst. 50, 33–41. Tu, Jack V., 1996. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. J. Clin. Epidemiol. 49 (11), 1225–1231. Van den Bergh, Frans, Engelbrecht, Andries.Petrus, 2004. A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput. 8 (3), 225–239. Yao, Jingtao, Tan, Chew Lim, 2000. A case study on using neural networks to perform technical forecasting of forex. Neurocomputing 34 (1), 79–98. Zhang, Yudong, Yang, Xiaojun, Cattani, Carlo, Rao, Ravipudi Venkata, Wang, Shuihua, Phillips, Preetha, 2016. Tea category identification using a novel fractional Fourier entropy and Jaya algorithm. Entropy 18 (3), 77. Zhao, Guopeng, Shen, Zhiqi, Miao, Chunyan, Man, Zhihong, 2009. On improving the conditioning of extreme learning machine: a linear case. In: Information, Communications and Signal Processing, 2009. ICICS 2009. 7th International Conference on. IEEE, pp. 1–5. Zhu, Qin-Yu, Qin, A.K., Suganthan, P.N., Huang, G.B., 2005. Evolutionary extreme learning machine. Pattern Recogn. 38 (10), 1759–1763.