Applied Soft Computing Journal 80 (2019) 263–284
Novel computing paradigms for parameter estimation in Hammerstein controlled auto regressive auto regressive moving average systems

Ammara Mehmood a, Naveed Ishtiaq Chaudhary b, Aneela Zameer c, Muhammad Asif Zahoor Raja d,∗

a Department of Electrical Engineering, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad, Pakistan
b Department of Electronic Engineering, International Islamic University Islamabad, Pakistan
c Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad, Pakistan
d Department of Electrical and Computer Engineering, COMSATS University Islamabad, Attock Campus, Attock, Pakistan
Highlights

• Application of computational paradigms for the parameter estimation problem of HCARARMA systems.
• Population based and single solution based algorithms are exploited for HCARARMA identification.
• Validation on the basis of statistical performance indices for different noise scenarios and model dynamics.
• Stability, reliability, extendibility, and wider applicability are other worthy perks of the scheme.
Article info

Article history: Received 11 January 2018; Received in revised form 17 February 2019; Accepted 27 March 2019; Available online 17 April 2019.

Keywords: Nonlinear system identification; Hammerstein models; Differential evolution; Genetic algorithms; Simulated annealing; Pattern search method
Abstract

In the present study, the strength of meta-heuristic computing techniques is exploited for the estimation problem of the Hammerstein controlled auto regressive auto regressive moving average (HCARARMA) system using differential evolution (DE), genetic algorithms (GAs), pattern search (PS) and simulated annealing (SA) algorithms. Approximation theory in the mean squared error sense is utilized for construction of the cost function for the HCARARMA model, and the adjustable parameters of the system are optimized through the global search exploration of the DE, GAs, PS and SA algorithms. A comparative study is carried out against the desired known parameters of the HCARARMA model for different degrees of freedom and noise variation scenarios. Performance analysis of the DE, GAs, PS and SA algorithms is conducted through statistics based on a sufficiently large number of independent executions, in terms of measures of central tendency and variation for both precision and complexity indices. The exhaustive simulations establish that the population-based heuristics are more accurate than the single solution-based methodologies for HCARARMA identification. © 2019 Elsevier B.V. All rights reserved.
∗ Corresponding author.
E-mail addresses: [email protected] (A. Mehmood), [email protected] (N.I. Chaudhary), [email protected] (A. Zameer), [email protected] (M.A.Z. Raja).
https://doi.org/10.1016/j.asoc.2019.03.052

1. Introduction

Parameter estimation is of growing interest for both linear and nonlinear systems owing to its strength in accurately describing the behavior of mathematical models in terms of probability distributions, database-driven procedures and parametric dynamic models [1–6]. Parameter estimation of nonlinear systems based on the Hammerstein structure has been studied extensively using local [7–11] and global search techniques [12–14]; however, relatively few studies are available to handle a comparatively stiff class of Hammerstein systems, namely the Hammerstein controlled auto regressive auto regressive moving average
(HCARARMA) structure [15–18]. In this study, the strength of modern heuristic computing paradigms, through differential evolution (DE), genetic algorithms (GAs), simulated annealing (SA) and pattern search (PS) algorithms, is exploited for the novel application of parameter estimation of HCARARMA systems.

1.1. Related work

Nonlinear systems are generally classified into four main categories: input, output, input–output and feedback nonlinear systems [19,20]. Among input nonlinear systems, Hammerstein models are constituted by a static nonlinear block integrated with a linear dynamical subsystem [21], and they have practical importance in modeling electrical drives [22], sticky control valves [23], chemical processes [24], and biomedical engineering systems [25]. For these Hammerstein models, several system identification
Fig. 1. Graphical abstract of proposed methodology.
schemes have been reported, including the kernel regression estimation method [26], the nuclear norm minimization technique [27], stochastic gradient based identification methods and least squares algorithms [28–33], fractional order gradient methods [34–38], adaptive digital control schemes [39] and Kalman filtering based approaches [40,41]. However, to the best of the authors' knowledge, relatively few studies are available to handle the comparatively stiff HCARARMA class of Hammerstein systems: Chen et al. proposed a gradient based iterative identification algorithm [42], Li presented estimation through a Newton iterative method [43], Xiong et al. utilized a modified recursive least-squares algorithm [44] and Li et al. developed a maximum likelihood forgetting gradient estimation algorithm [45]. All of these are local search techniques; they have the advantage of easy implementation but suffer from the local minima problem. To improve accuracy and to avoid the local minima issue of local search methods, various global search approaches through bio/nature inspired heuristics, based on evolutionary and swarm optimization mechanisms, have been proposed for Hammerstein systems, for instance genetic algorithms [13,46], differential evolution [14], cuckoo search [47,48], artificial neural networks [49], brain storm optimization [50], the backtracking search algorithm [51], particle swarm optimization [52], social emotional optimization [53], bacterial foraging optimization [54], the gravitational search algorithm [55] and the biogeography-based optimization algorithm [56]. Despite the significance of these artificial intelligence based heuristic paradigms in nonlinear regimes, these methodologies are yet to be explored and exploited for system identification of HCARARMA models.
These are the motivational factors for the authors to formulate new optimization mechanisms for accurate, robust, reliable and stable solutions of HCARARMA identification problems.

1.2. Contribution

The aim of this study is to utilize the power of the differential evolution (DE) optimization mechanism for estimating the design parameters of variants of HCARARMA systems based on noise variances and model dynamics. Comparative studies of the proposed scheme are carried out with standard computing techniques based on genetic algorithms (GAs), simulated annealing (SA) and pattern search (PS) for each variant of the HCARARMA model. The salient features of the proposed scheme are briefly described as:
• Novel application of modern computational paradigms is presented for the parameter estimation problem of HCARARMA systems.
• The population based (DE and GAs) and single solution based (SA and PS) algorithms are exploited for HCARARMA identification under different noise scenarios and model dynamics.
• The proposed model is validated on the basis of statistical performance indices including root mean squared error (RMSE), mean weight deviation (MWD), coefficient of determination (R2) and Theil's inequality coefficient (TIC), as well as their global versions based on a large number of independent executions.
• Besides quality parameter estimation for the HCARARMA model, effortless implementation, a straightforward conceptual strategy, stability, reliability, extendibility, and wider applicability are other worthy perks.

1.3. Paper outline

The remaining paper is organized as follows: the HCARARMA system model, development of the fitness function, and details of the proposed methodology and optimization process are given in Section 2. Section 3 provides the simulation results, both graphically and in tabular form, for three examples based on the HCARARMA system. Conclusions, along with a few future openings in the domain of Hammerstein parameter estimation, are listed in Section 4.

2. Design methodology

The HCARARMA identification methodology is composed of two stages: in the first step, system modeling of the HCARARMA system is performed and the fitness function is defined in the mean square sense, while the second step is the optimization method used to minimize this cost function. The detailed proposed methodology is shown in Fig. 1.

2.1. HCARARMA system modeling

The generic representation in terms of process blocks for the HCARARMA model is shown in Fig. 1, while mathematically it is given as [42,43]:

P(z)x(t) = Q(z)u(t) + \frac{R(z)}{S(z)} v(t),   (1)
Fig. 2. Workflow diagram of DE for HCARARMA system.
Box I:

\begin{bmatrix} P(z) & Q(z) \\ R(z) & S(z) \end{bmatrix} = \begin{bmatrix} 1 + p_1 z^{-1} + p_2 z^{-2} + \cdots + p_{n_p} z^{-n_p} & q_1 z^{-1} + q_2 z^{-2} + \cdots + q_{n_q} z^{-n_q} \\ 1 + r_1 z^{-1} + r_2 z^{-2} + \cdots + r_{n_r} z^{-n_r} & 1 + s_1 z^{-1} + s_2 z^{-2} + \cdots + s_{n_s} z^{-n_s} \end{bmatrix}.   (2)
where P(z), Q(z), R(z), and S(z) are polynomials provided in terms of the unit backward shift operator z^{-1}, i.e., z^{-1}y(t) = y(t-1), as given in Box I; x(t) and v(t) represent the model output and the white noise of the system, respectively. The output of the nonlinear block, u(t), is a nonlinear function of known basis functions (f_1, f_2, \ldots, f_m) of the model with coefficients (\gamma_1, \gamma_2, \ldots, \gamma_m), and it is given as:

u(t) = f(u(t)) = \gamma_1 f_1(u(t)) + \gamma_2 f_2(u(t)) + \cdots + \gamma_m f_m(u(t)).   (3)

Using (2) and (3) in (1), the output is given as:

x(t) = -\sum_{i=1}^{n_p} p_i x(t-i) - \sum_{i=1}^{n_s} s_i x(t-i) - \sum_{i=1}^{n_p}\sum_{j=1}^{n_s} p_i s_j x(t-(i+j)) + \sum_{i=1}^{n_q} q_i f(u(t-i)) + \sum_{i=1}^{n_q}\sum_{j=1}^{n_s} q_i s_j f(u(t-(i+j))) + \sum_{i=1}^{n_r} r_i v(t-i) + v(t).   (4)
Fig. 3. Schematics of GA, PS and SA algorithm.
The parameter vector of the HCARARMA model, using Eq. (4), is given as:

\Theta = [\theta_x\ \theta_u\ \theta_v]^T \in \Re^{n_0}, \quad n_0 = n_p + n_s + (n_q + n_s)m + n_r,   (5)

\theta_x = [\theta_{x1}, \theta_{x2}, \ldots, \theta_{x(n_p+n_s)}]^T = [p_1 + s_1,\ p_2 + p_1 s_1,\ \ldots,\ p_{n_p} s_{n_s-1} + p_{n_p-1} s_{n_s},\ p_{n_p} s_{n_s}]^T,   (6)

\theta_u = [\theta_{u1}, \theta_{u2}, \ldots, \theta_{u(n_q+n_s)m}]^T = [q_1\gamma_1,\ q_2\gamma_1 + q_1 s_1 \gamma_1,\ \ldots,\ q_1\gamma_2,\ q_2\gamma_2 + q_1 s_1 \gamma_2,\ \ldots,\ q_{n_q}\gamma_m s_{n_s-1} + q_{n_q-1}\gamma_m s_{n_s},\ q_{n_q}\gamma_m s_{n_s}]^T,   (7)

\theta_v = [r_1, r_2, \ldots, r_{n_r}]^T.   (8)

Accordingly, the corresponding information vector of the HCARARMA model is written as:

\varphi(t) = [\psi(t),\ v_{n_r}] \in \Re^{n_0}, \quad v_{n_r} = [v(t-1), v(t-2), \ldots, v(t-n_r)],   (9)

\psi(t) = [\phi(t),\ u_{(n_q+n_s)m}], \quad u_{(n_q+n_s)m} = [f_1(u(t-1)), \ldots, f_1(u(t-(n_q+n_s))), \ldots, f_m(u(t-1)), \ldots, f_m(u(t-(n_q+n_s)))],   (10)

\phi(t) = [x(t-1), x(t-2), \ldots, x(t-(n_p+n_s))].

Using (5) to (10) in (4), we obtain

x(t) = \varphi^T(t)\Theta + v(t).   (11)
Approximation theory in the mean squared error sense is exploited for the HCARARMA system in Eq. (11) in order to construct the cost/fitness function of the following minimization problem:

\varepsilon_\Theta = \frac{1}{N}\sum_{i=1}^{N}\big(x(t_i) - \hat{x}(t_i)\big)^2,   (12)

where x(t_i) is the actual output for the ith observation given in Eq. (11), \hat{x}(t_i) is the estimated response of x(t_i), and N is the total number of instances. The approximated response of x(t_i) is defined for the known information vector \varphi^T(t_i) as:

\hat{x}(t_i) = \varphi^T(t_i)\hat{\Theta},   (13)
Table 1. Pseudocode of the DE heuristic for HCARARMA system identification.
Table 2. Parameter settings of the proposed meta-heuristics for the HCARARMA system.

GAs:
- Population creation: Constraint dependent
- Scaling function: Rank
- Selection function: Stochastic uniform
- Elite count: 2
- Crossover function: Heuristic
- Mutation: Adaptive feasible
- Max generations: 1500

PS:
- Poll method: GPS Positive basis 2N
- Polling order: Consecutive
- Mesh accelerator: Off
- Mesh rotate/scale: On
- Mesh expansion factor: 2.0
- Mesh contraction factor: 0.5
- Max iterations: 1000

SA:
- Temperature update: Exponential function update
- Chromosome size: 30
- Mesh size: (1, 10)
- Re-annealing interval: 200
- Max annealing: 1000
- Function hybridization: IPA
Fig. 4. Iterative adaptation results of fitness function for DE and GAs algorithm for HCARARMA Problem.
where \hat{\Theta} is an estimated parameter vector of the desired vector \Theta. The cost function in Eq. (12) then becomes:

\varepsilon_\Theta = \frac{1}{N}\sum_{i=1}^{N}\Big(\big(\varphi^T(t_i)\Theta + v(t_i)\big) - \varphi^T(t_i)\hat{\Theta}\Big)^2.   (14)

Our desire now is to find an appropriate \hat{\Theta} such that \varepsilon_\Theta \to 0 in the minimization problem (14) for system identification of the HCARARMA model.
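The fitness evaluation of Eqs. (12)–(14) can be sketched as follows, assuming the information vectors \varphi(t_i) are stacked row-wise into a matrix; the function name `hcararma_cost` is a hypothetical helper, not part of the authors' code:

```python
import numpy as np

def hcararma_cost(theta_hat, phi, x):
    """MSE fitness of Eq. (12): mean of (x(t_i) - phi(t_i)^T theta_hat)^2.

    phi : (N, n0) matrix whose rows are information vectors phi(t_i).
    x   : (N,) vector of observed outputs x(t_i).
    """
    residual = x - phi @ theta_hat      # x_hat(t_i) = phi(t_i)^T theta_hat, Eq. (13)
    return float(np.mean(residual ** 2))
```

At the true parameter vector and zero noise the cost vanishes, which is the sense in which \varepsilon_\Theta \to 0 drives the search.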
Fig. 5. Iterative adaptation results of fitness function for PS and SA algorithm for HCARARMA Problem.
2.2. Optimization procedure
Adaptation strength of DE, GAs, PS and SA is used for identification of parameter vector Θ of HCARARMA system through fitness function (14) optimization and brief introductory material for optimization solvers is presented in this section.
Among meta-heuristic optimization algorithms, differential evolution (DE), introduced by Rainer Storn and Kenneth Price [57], has been practiced exhaustively to solve complex problems in a variety of engineering fields [58,59] due to its efficacy, stability and robustness. It seeks the global optimal solution of constrained as well as unconstrained optimization tasks over discrete as well as continuous domains. DE is a population-based
Fig. 6. Comparison of the accuracy of DE and GA for HCARARMA model with degree of freedom and noise variation scenarios.
search technique, which employs mutation as a search tool and utilizes a selection method to locate the potential regions in the feasible search space. A generic flow diagram with the fundamental procedural steps of DE is presented in Fig. 2, while a few recent applications include hyper-spectral image segmentation [60], cognitive radio design [61], software effort estimation problems [62] and lifetime maximization of wireless sensor networks [63].
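The mutation–crossover–selection cycle described above can be sketched as a minimal DE/rand/1/bin loop; `de_minimize` and its control parameters (F, CR, population size) are illustrative defaults, not the authors' exact settings:

```python
import numpy as np

def de_minimize(f, bounds, pop_size=30, F=0.5, CR=0.9, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # pick three distinct donors, none equal to the target i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True      # at least one gene from the mutant
            trial = np.where(mask, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a smooth test function such as the sphere, this loop converges to the global minimum well within the default generation budget.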
GAs are a leading member of the evolutionary algorithms, inspired by natural genetic processes [64]. GAs provide a reliable and efficient global search mechanism for obtaining accurate solutions of both convex and nonconvex optimization scenarios in the applied sciences [65–67]. GAs work through three fundamental operators, namely the crossover, mutation and selection processes, and a generic flow diagram with the coherent design steps of GAs is shown in Fig. 3(a). A few potential applications reported recently include optimization of ultracapacitor models for electric vehicles [68],
Fig. 7. Comparison of the accuracy of PS and SA for HCARARMA model with degree of freedom and noise variation scenarios.
Fig. 8. Plot of fitness values for multiple runs of the algorithms in the form of sorted, unsorted and zoomed illustrations.
mining the Internet of Things [69], dusty plasma models [70] and wind speed forecasting [71].

Hooke and Jeeves first introduced the pattern search method, which belongs to the class of direct search numerical optimization techniques based on a lattice structure [72,73]. The PS method works through a sequence of points, in the form of meshes and patterns, to track feasible directions with flexibility. The generic flow graph of PS with its intermediate steps is shown in Fig. 3(b). PS is a simple method that has been widely employed for optimization problems like modal parameters-based system identification [74], efficient VLSI design [75] and fast block-matching motion estimation [76].
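The mesh-based polling described above can be sketched as a minimal compass (GPS-type) search, using the 2.0/0.5 mesh expansion/contraction factors of Table 2; `pattern_search` is an illustrative helper, not the toolbox implementation used by the authors:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, expand=2.0, contract=0.5,
                   tol=1e-8, max_iter=1000):
    """Minimal compass pattern search: poll the +/- coordinate directions,
    expand the mesh after a successful poll, contract it after a failure."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in range(n):                     # positive basis 2N poll
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[d] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
        step = step * expand if improved else step * contract
    return x, fx
```

For smooth convex objectives the mesh contracts around the minimizer, so the returned point is accurate to roughly the final mesh size.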
Simulated annealing is a meta-heuristic technique developed from the mathematical modeling of material heating and controlled cooling phenomena [77]. SA optimizes the candidate solution through the statistical mechanics of equilibration, i.e., annealing, and the exploitation of thermodynamic characteristics. A generic flow diagram with stepwise procedures is given in Fig. 3(c). SA is widely used by researchers for optimization, and its recent potential applications include wireless sensor network models [78], optimization in unit commitment problems [79] and Bayesian system identification problems [80].

The effectiveness of DE, GAs, PS and SA motivates the authors to use these optimization mechanisms for finding the parameter vector
Fig. 9. Comparative study of DE, GA, PS and SA on the basis of MWD values for all three case examples of HCARARMA model.
of the HCARARMA model. The flow diagram of DE for HCARARMA is given in Fig. 2, while the details of the procedure are given in the pseudocode illustrated in Table 1. The parameters involved in DE are explained in the pseudocode of Table 1, while the parameter settings of GAs, PS and SA are presented in Table 2.
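The SA procedure summarized above, with Metropolis acceptance and the exponential temperature update listed in Table 2, can be sketched as follows; `sa_minimize` and its defaults are illustrative, not the authors' exact configuration:

```python
import numpy as np

def sa_minimize(f, x0, T0=1.0, cooling=0.95, step=0.5, n_iter=2000, seed=0):
    """Minimal SA: Gaussian perturbation, Metropolis acceptance,
    exponential cooling; returns the best point ever visited."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best, fbest = x.copy(), fx
    T = T0
    for _ in range(n_iter):
        trial = x + step * rng.normal(size=x.size)
        ft = f(trial)
        # downhill moves are always accepted; uphill moves with Boltzmann probability
        if ft < fx or rng.random() < np.exp(-(ft - fx) / max(T, 1e-12)):
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling          # exponential temperature update
    return best, fbest
```

As the temperature decays, the acceptance rule becomes effectively greedy, which is why SA (a single solution-based method) trades exploration for very low per-iteration cost, consistent with the small function counts reported for SA in Table 3.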
2.3. Performance indices

The performance of the proposed algorithms for parameter estimation of the HCARARMA model is evaluated with five performance measuring tools in this research work, i.e.,
Table 3. Performance metrics of accuracy (ε, MWD, TIC, RMSE, R2) and complexity (TIME, Gens, FCs) for each algorithm in the three case studies of the HCARARMA model.

| Method | Model | Noise | ε | MWD | TIC | RMSE | R2 | TIME | Gens | FCs |
| DE | I | σ² = 0.001² | 3.53E-15 | 2.26E-03 | 1.05E-02 | 5.97E-03 | 2.17E-04 | 3.92 | 124 | 8803 |
| DE | I | σ² = 0.010² | 2.71E-09 | 9.51E-02 | 3.26E-01 | 2.51E-01 | 3.84E-01 | 4.91 | 154 | 10933 |
| DE | I | σ² = 0.100² | 2.19E-05 | 1.20E-01 | 2.94E-01 | 1.78E-01 | 1.94E-01 | 5.95 | 212 | 15051 |
| DE | II | σ² = 0.001² | 2.08E-15 | 1.51E-04 | 7.99E-04 | 4.51E-04 | 2.47E-03 | 6.14 | 153 | 13922 |
| DE | II | σ² = 0.010² | 3.50E-09 | 7.10E-02 | 2.93E-01 | 2.12E-01 | 2.92E-01 | 7.73 | 193 | 17562 |
| DE | II | σ² = 0.100² | 1.06E-05 | 1.19E-01 | 3.47E-01 | 1.64E-01 | 1.48E-01 | 10.56 | 268 | 24387 |
| DE | III | σ² = 0.001² | 1.45E-15 | 4.41E-05 | 2.85E-04 | 1.46E-04 | 1.60E-03 | 11.05 | 198 | 21977 |
| DE | III | σ² = 0.010² | 8.66E-10 | 7.57E-03 | 4.70E-02 | 2.48E-02 | 3.00E-01 | 13.16 | 232 | 25751 |
| DE | III | σ² = 0.100² | 1.79E-05 | 4.52E-02 | 2.48E-01 | 1.19E-01 | 1.52E-01 | 19.38 | 346 | 38405 |
| GA | I | σ² = 0.001² | 2.45E-11 | 7.58E-04 | 3.23E-03 | 1.83E-03 | 2.03E-05 | 117.81 | 1500 | 480320 |
| GA | I | σ² = 0.010² | 4.12E-09 | 2.69E-04 | 7.40E-04 | 4.17E-04 | 1.06E-06 | 127.51 | 1500 | 480320 |
| GA | I | σ² = 0.100² | 4.27E-05 | 4.76E-03 | 1.34E-02 | 7.51E-03 | 3.44E-04 | 120.09 | 1500 | 480320 |
| GA | II | σ² = 0.001² | 3.10E-12 | 4.31E-04 | 2.28E-03 | 1.29E-03 | 1.41E-01 | 133.31 | 1500 | 480320 |
| GA | II | σ² = 0.010² | 9.53E-09 | 2.60E-03 | 1.26E-02 | 7.06E-03 | 9.54E-02 | 133.39 | 1500 | 480320 |
| GA | II | σ² = 0.100² | 2.79E-05 | 1.14E-02 | 2.65E-02 | 1.49E-02 | 2.87E-02 | 144.15 | 1500 | 480320 |
| GA | III | σ² = 0.001² | 4.84E-04 | 9.39E-02 | 2.22E-01 | 1.37E-01 | 3.85E-03 | 145.25 | 1500 | 480320 |
| GA | III | σ² = 0.010² | 5.13E-04 | 9.28E-02 | 2.21E-01 | 1.34E-01 | 1.76E-02 | 165.09 | 1500 | 480320 |
| GA | III | σ² = 0.100² | 4.65E-04 | 9.90E-02 | 2.23E-01 | 1.38E-01 | 8.75E-03 | 148.66 | 1500 | 480320 |
| PS | I | σ² = 0.001² | 2.62E-09 | 3.93E-02 | 1.60E-01 | 1.03E-01 | 6.45E-02 | 3.48 | 1001 | 9819 |
| PS | I | σ² = 0.010² | 2.19E-07 | 1.18E-02 | 3.93E-02 | 2.29E-02 | 3.20E-03 | 3.01 | 1001 | 10008 |
| PS | I | σ² = 0.100² | 4.93E-05 | 2.25E-02 | 7.79E-02 | 4.68E-02 | 1.33E-02 | 3.02 | 1001 | 10296 |
| PS | II | σ² = 0.001² | 7.19E-06 | 3.81E-02 | 9.35E-02 | 5.71E-02 | 3.16E-01 | 3.51 | 1001 | 11938 |
| PS | II | σ² = 0.010² | 2.62E-07 | 6.20E-02 | 3.21E-01 | 1.67E-01 | 2.70E-01 | 4.08 | 1001 | 12183 |
| PS | II | σ² = 0.100² | 2.77E-05 | 1.43E-02 | 3.50E-02 | 1.94E-02 | 1.25E-02 | 3.80 | 1001 | 13762 |
| PS | III | σ² = 0.001² | 2.41E-03 | 1.14E-01 | 2.78E-01 | 1.84E-01 | 1.32E-01 | 3.83 | 1001 | 12056 |
| PS | III | σ² = 0.010² | 2.54E-03 | 9.97E-02 | 2.30E-01 | 1.41E-01 | 3.13E-01 | 3.80 | 1001 | 12862 |
| PS | III | σ² = 0.100² | 3.40E-03 | 1.04E-01 | 2.42E-01 | 1.45E-01 | 1.50E-01 | 4.04 | 1001 | 14053 |
| SA | I | σ² = 0.001² | 3.75E-03 | 5.43E-02 | 1.39E-01 | 7.08E-02 | 3.05E-02 | 2.43 | 1001 | 1051 |
| SA | I | σ² = 0.010² | 7.94E-04 | 3.23E-02 | 8.74E-02 | 5.22E-02 | 1.66E-02 | 2.47 | 1001 | 1051 |
| SA | I | σ² = 0.100² | 2.55E-03 | 3.12E-02 | 5.96E-02 | 3.42E-02 | 7.11E-03 | 2.45 | 1001 | 1044 |
| SA | II | σ² = 0.001² | 1.66E-02 | 8.85E-02 | 2.19E-01 | 1.27E-01 | 1.36E-01 | 2.43 | 1001 | 1056 |
| SA | II | σ² = 0.010² | 7.65E-03 | 6.53E-02 | 1.71E-01 | 9.47E-02 | 1.39E-01 | 2.41 | 1001 | 1056 |
| SA | II | σ² = 0.100² | 2.50E-03 | 8.25E-02 | 2.28E-01 | 1.58E-01 | 4.53E-01 | 2.72 | 1001 | 1065 |
| SA | III | σ² = 0.001² | 1.38E-02 | 8.55E-02 | 1.85E-01 | 9.96E-02 | 1.77E-01 | 2.81 | 1001 | 1068 |
| SA | III | σ² = 0.010² | 3.91E-02 | 1.24E-01 | 3.50E-01 | 1.53E-01 | 4.03E-01 | 2.71 | 1001 | 1068 |
| SA | III | σ² = 0.100² | 2.88E-02 | 1.12E-01 | 2.44E-01 | 1.43E-01 | 4.10E-01 | 3.01 | 1001 | 1068 |
mean weight deviation (MWD), root mean square error (RMSE), the normalized error function, Theil's inequality coefficient (TIC) and the coefficient of determination R2. The mathematical relations for all these indices are given here. The MWD is mathematically written as:

MWD = \sum_{i=1}^{n} \big|\Theta_i - \hat{\Theta}_i\big|,   (15)

where \hat{\Theta} is an estimated parameter vector of the desired vector \Theta. The RMSE is given mathematically as:

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(\Theta_i - \hat{\Theta}_i\big)^2}.   (16)

The normalized error function \delta is defined as:

\delta = \frac{\big\|\Theta - \hat{\Theta}\big\|}{\|\Theta\|},   (17)

where \|\cdot\| represents the L2 norm. Theil's inequality coefficient (TIC) is formalized as:

TIC = \frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(\Theta_i - \hat{\Theta}_i\big)^2}}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\Theta_i^2} + \sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\Theta}_i^2}}.   (18)

The coefficient of determination R2 is mathematically written as:

R^2 = \left(\frac{\sum_{i=1}^{n}\big(\Theta_i - \bar{\Theta}\big)\big(\hat{\Theta}_i - \bar{\hat{\Theta}}\big)}{\sqrt{\sum_{i=1}^{n}\big(\Theta_i - \bar{\Theta}\big)^2}\,\sqrt{\sum_{i=1}^{n}\big(\hat{\Theta}_i - \bar{\hat{\Theta}}\big)^2}}\right)^{\!2}, \quad \bar{\Theta} = \frac{1}{n}\sum_{i=1}^{n}\Theta_i, \quad \bar{\hat{\Theta}} = \frac{1}{n}\sum_{i=1}^{n}\hat{\Theta}_i.   (19)

The error in R2 (ER2) is given as:

E_{R^2} = 1 - R^2.   (20)

Accordingly, the global MWD (G_{MWD}) is mathematically formulated as:

G_{MWD} = \frac{1}{R}\sum_{r=1}^{R} MWD_r = \frac{1}{R}\sum_{r=1}^{R}\left(\sum_{i=1}^{n}\big|\Theta_i - \hat{\Theta}_i\big|\right),   (21)
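The single-run indices of Eqs. (15)–(20) can be sketched as follows; `estimation_metrics` is a hypothetical helper name, and the R2 column is computed here as the squared Pearson correlation, matching the definition above:

```python
import numpy as np

def estimation_metrics(theta, theta_hat):
    """MWD, RMSE, TIC and ER2 (Eqs. (15), (16), (18), (20)) for one run."""
    theta = np.asarray(theta, dtype=float)
    theta_hat = np.asarray(theta_hat, dtype=float)
    mwd = np.sum(np.abs(theta - theta_hat))                       # Eq. (15)
    rmse = np.sqrt(np.mean((theta - theta_hat) ** 2))             # Eq. (16)
    tic = rmse / (np.sqrt(np.mean(theta ** 2))                    # Eq. (18)
                  + np.sqrt(np.mean(theta_hat ** 2)))
    # R^2 as the squared correlation between true and estimated vectors, Eq. (19)
    num = np.sum((theta - theta.mean()) * (theta_hat - theta_hat.mean()))
    den = np.sqrt(np.sum((theta - theta.mean()) ** 2)
                  * np.sum((theta_hat - theta_hat.mean()) ** 2))
    r2 = (num / den) ** 2
    return {"MWD": mwd, "RMSE": rmse, "TIC": tic, "ER2": 1.0 - r2}
```

For an ideal estimate all four values are zero, which is the reference behavior stated at the end of this section.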
Table 4. Comparison of DE, GA, PS and SA results through statistical operators (best, mean, worst) for the HCARARMA model at all three noise variances of Example 1. Approximate parameter vector \hat{\Theta}_i, i = 1, ..., 7.

| Method | Noise | Stat | i=1 | i=2 | i=3 | i=4 | i=5 | i=6 | i=7 |
| DE | σ² = 0.001² | Best | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.52 |
| DE | σ² = 0.001² | Mean | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.61 |
| DE | σ² = 0.001² | Worst | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.66 |
| DE | σ² = 0.010² | Best | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 1.16 |
| DE | σ² = 0.010² | Mean | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 1.16 |
| DE | σ² = 0.010² | Worst | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 1.16 |
| DE | σ² = 0.100² | Best | 0.16 | 0.03 | 0.12 | 0.17 | 0.01 | -0.02 | 0.82 |
| DE | σ² = 0.100² | Mean | 0.16 | 0.03 | 0.12 | 0.17 | 0.01 | -0.02 | 0.82 |
| DE | σ² = 0.100² | Worst | 0.16 | 0.03 | 0.12 | 0.17 | 0.01 | -0.02 | 0.82 |
| GA | σ² = 0.001² | Best | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.01 |
| GA | σ² = 0.001² | Mean | 0.54 | 0.07 | 0.10 | 0.20 | 0.03 | 0.07 | 0.49 |
| GA | σ² = 0.001² | Worst | 0.98 | 0.16 | 0.10 | 0.20 | 0.08 | 0.16 | 0.06 |
| GA | σ² = 0.010² | Best | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.55 |
| GA | σ² = 0.010² | Mean | 0.54 | 0.07 | 0.10 | 0.20 | 0.03 | 0.07 | 0.55 |
| GA | σ² = 0.010² | Worst | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.51 |
| GA | σ² = 0.100² | Best | 0.52 | 0.07 | 0.10 | 0.20 | 0.03 | 0.07 | 0.50 |
| GA | σ² = 0.100² | Mean | 0.44 | 0.06 | 0.09 | 0.20 | 0.03 | 0.05 | 0.55 |
| GA | σ² = 0.100² | Worst | 0.41 | 0.05 | 0.09 | 0.20 | 0.03 | 0.04 | 0.56 |
| PS | σ² = 0.001² | Best | 0.90 | 0.14 | 0.10 | 0.20 | 0.07 | 0.14 | 0.00 |
| PS | σ² = 0.001² | Mean | 0.85 | 0.13 | 0.10 | 0.20 | 0.06 | 0.13 | 0.68 |
| PS | σ² = 0.001² | Worst | 0.62 | 0.08 | 0.10 | 0.20 | 0.04 | 0.08 | 0.99 |
| PS | σ² = 0.010² | Best | 0.98 | 0.16 | 0.10 | 0.20 | 0.08 | 0.16 | 0.61 |
| PS | σ² = 0.010² | Mean | 0.85 | 0.13 | 0.10 | 0.20 | 0.07 | 0.13 | 0.85 |
| PS | σ² = 0.010² | Worst | 0.74 | 0.11 | 0.10 | 0.20 | 0.05 | 0.11 | 0.52 |
| PS | σ² = 0.100² | Best | 0.95 | 0.13 | 0.10 | 0.19 | 0.08 | 0.15 | 0.68 |
| PS | σ² = 0.100² | Mean | 0.84 | 0.12 | 0.10 | 0.19 | 0.06 | 0.13 | 0.52 |
| PS | σ² = 0.100² | Worst | 0.80 | 0.12 | 0.09 | 0.20 | 0.06 | 0.12 | 0.55 |
| SA | σ² = 0.001² | Best | 0.01 | 0.12 | 0.15 | 0.15 | 0.01 | 0.00 | 0.86 |
| SA | σ² = 0.001² | Mean | 0.56 | 0.29 | 0.10 | 0.20 | 0.05 | 0.10 | 0.47 |
| SA | σ² = 0.001² | Worst | 0.27 | 0.41 | 0.17 | 0.18 | 0.00 | 0.09 | 0.07 |
| SA | σ² = 0.010² | Best | 0.25 | 0.57 | 0.15 | 0.20 | 0.03 | 0.01 | 0.74 |
| SA | σ² = 0.010² | Mean | 0.59 | 0.27 | 0.10 | 0.20 | 0.06 | 0.10 | 0.45 |
| SA | σ² = 0.010² | Worst | 0.70 | 0.19 | 0.07 | 0.19 | 0.01 | 0.14 | 0.45 |
| SA | σ² = 0.100² | Best | 0.73 | 0.15 | 0.11 | 0.25 | 0.04 | 0.15 | 0.01 |
| SA | σ² = 0.100² | Mean | 0.57 | 0.27 | 0.09 | 0.20 | 0.06 | 0.09 | 0.48 |
| SA | σ² = 0.100² | Worst | 0.87 | 0.50 | 0.10 | 0.17 | 0.15 | 0.21 | 0.57 |
| True values θi | | | 0.50 | 0.06 | 0.10 | 0.20 | 0.03 | 0.06 | 0.50 |
where the total number of independent runs is denoted by R, and one independent execution is defined as a run of the algorithm with a distinct random seed. Similarly, the global RMSE (G_{RMSE}) is defined as:

G_{RMSE} = \frac{1}{R}\sum_{r=1}^{R} RMSE_r = \frac{1}{R}\sum_{r=1}^{R}\left(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(\Theta_i - \hat{\Theta}_i\big)^2}\right).   (22)

The global Theil's inequality coefficient (G_{TIC}) is defined as:

G_{TIC} = \frac{1}{R}\sum_{r=1}^{R} TIC_r = \frac{1}{R}\sum_{r=1}^{R}\left(\frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(\Theta_i - \hat{\Theta}_i\big)^2}}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\Theta_i^2} + \sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\Theta}_i^2}}\right).   (23)

The global ER2 (G_{ER2}) can be illustrated as:

G_{E_{R^2}} = \frac{1}{R}\sum_{r=1}^{R} E_{R^2,\,r} = \frac{1}{R}\sum_{r=1}^{R}\left(1 - \left(\frac{\sum_{i=1}^{n}\big(\Theta_i - \bar{\Theta}\big)\big(\hat{\Theta}_i - \bar{\hat{\Theta}}\big)}{\sqrt{\sum_{i=1}^{n}\big(\Theta_i - \bar{\Theta}\big)^2}\,\sqrt{\sum_{i=1}^{n}\big(\hat{\Theta}_i - \bar{\hat{\Theta}}\big)^2}}\right)^{\!2}\right).   (24)

The mean fitness \bar{\varepsilon} over the total number of runs R is formulated as:

\bar{\varepsilon} = \frac{1}{R}\sum_{r=1}^{R}\varepsilon_r = \frac{1}{R}\sum_{r=1}^{R}\left(\frac{1}{N}\sum_{i=1}^{N}\Big(\big(\varphi^T(t_{i,r})\Theta + v(t_{i,r})\big) - \varphi^T(t_{i,r})\hat{\Theta}\Big)^2\right).   (25)
For ideal models, the magnitudes of the metrics MWD, RMSE, TIC and ER2 should be zero.

3. Simulation with discussion

The numerical experimentation is conducted for three case studies of the HCARARMA model identification problem with
Table 5. Comparison of DE, GA, PS and SA results through statistical operators (best, mean, worst) for the HCARARMA model at all three noise variances of Example 2. Approximate parameter vector \hat{\Theta}_i, i = 1, ..., 9.

| Method | Noise | Stat | i=1 | i=2 | i=3 | i=4 | i=5 | i=6 | i=7 | i=8 | i=9 |
| DE | σ² = 0.001² | Best | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.50 |
| DE | σ² = 0.001² | Mean | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.67 |
| DE | σ² = 0.001² | Worst | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 1.30 |
| DE | σ² = 0.010² | Best | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 1.14 |
| DE | σ² = 0.010² | Mean | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 1.14 |
| DE | σ² = 0.010² | Worst | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 1.14 |
| DE | σ² = 0.100² | Best | 0.09 | 0.17 | 0.08 | -0.07 | 0.05 | 0.24 | 0.00 | -0.08 | 0.46 |
| DE | σ² = 0.100² | Mean | 0.09 | 0.17 | 0.08 | -0.07 | 0.05 | 0.24 | 0.00 | -0.08 | 0.46 |
| DE | σ² = 0.100² | Worst | 0.09 | 0.17 | 0.08 | -0.07 | 0.05 | 0.24 | 0.00 | -0.08 | 0.46 |
| GA | σ² = 0.001² | Best | 0.51 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.14 |
| GA | σ² = 0.001² | Mean | 0.54 | 0.37 | 0.20 | 0.04 | 0.10 | 0.20 | 0.03 | 0.02 | 0.52 |
| GA | σ² = 0.001² | Worst | 0.79 | 0.46 | 0.28 | 0.08 | 0.10 | 0.20 | 0.06 | 0.07 | 0.01 |
| GA | σ² = 0.010² | Best | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.09 |
| GA | σ² = 0.010² | Mean | 0.54 | 0.37 | 0.20 | 0.04 | 0.10 | 0.20 | 0.03 | 0.02 | 0.47 |
| GA | σ² = 0.010² | Worst | 0.74 | 0.45 | 0.27 | 0.07 | 0.10 | 0.20 | 0.05 | 0.05 | 0.03 |
| GA | σ² = 0.100² | Best | 0.49 | 0.40 | 0.16 | 0.04 | 0.10 | 0.20 | 0.03 | 0.01 | 0.25 |
| GA | σ² = 0.100² | Mean | 0.48 | 0.39 | 0.16 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.68 |
| GA | σ² = 0.100² | Worst | 0.77 | 0.51 | 0.27 | 0.07 | 0.10 | 0.20 | 0.06 | 0.06 | 0.94 |
| PS | σ² = 0.001² | Best | 0.93 | 0.53 | 0.33 | 0.11 | 0.10 | 0.20 | 0.08 | 0.09 | 0.18 |
| PS | σ² = 0.001² | Mean | 0.88 | 0.51 | 0.31 | 0.10 | 0.10 | 0.20 | 0.07 | 0.08 | 0.24 |
| PS | σ² = 0.001² | Worst | 0.92 | 0.53 | 0.33 | 0.11 | 0.10 | 0.20 | 0.08 | 0.09 | 0.00 |
| PS | σ² = 0.010² | Best | 0.89 | 0.52 | 0.32 | 0.11 | 0.10 | 0.20 | 0.07 | 0.09 | 0.40 |
| PS | σ² = 0.010² | Mean | 0.90 | 0.52 | 0.32 | 0.10 | 0.10 | 0.20 | 0.07 | 0.09 | 0.06 |
| PS | σ² = 0.010² | Worst | 0.91 | 0.53 | 0.32 | 0.11 | 0.10 | 0.20 | 0.07 | 0.09 | 0.38 |
| PS | σ² = 0.100² | Best | 0.98 | 0.56 | 0.33 | 0.10 | 0.11 | 0.20 | 0.08 | 0.10 | 0.72 |
| PS | σ² = 0.100² | Mean | 0.85 | 0.51 | 0.29 | 0.09 | 0.11 | 0.20 | 0.06 | 0.08 | 0.70 |
| PS | σ² = 0.100² | Worst | 0.90 | 0.53 | 0.31 | 0.09 | 0.11 | 0.20 | 0.07 | 0.09 | 0.80 |
| SA | σ² = 0.001² | Best | 0.82 | 0.69 | 0.72 | 0.13 | 0.03 | 0.41 | 0.05 | 0.06 | 0.98 |
| SA | σ² = 0.001² | Mean | 0.62 | 0.45 | 0.41 | 0.27 | 0.10 | 0.20 | 0.05 | 0.07 | 0.55 |
| SA | σ² = 0.001² | Worst | 0.74 | 0.36 | 0.77 | 0.14 | 0.18 | 0.22 | 0.05 | 0.07 | 0.07 |
| SA | σ² = 0.010² | Best | 0.45 | 0.22 | 0.30 | 0.42 | 0.15 | 0.22 | 0.00 | 0.06 | 0.08 |
| SA | σ² = 0.010² | Mean | 0.58 | 0.46 | 0.38 | 0.32 | 0.09 | 0.20 | 0.05 | 0.07 | 0.56 |
| SA | σ² = 0.010² | Worst | 1.00 | 0.27 | 0.01 | 0.63 | 0.10 | 0.04 | 0.06 | 0.26 | 0.49 |
| SA | σ² = 0.100² | Best | 0.50 | 0.83 | 0.62 | 0.38 | 0.11 | 0.27 | 0.04 | 0.04 | 0.38 |
| SA | σ² = 0.100² | Mean | 0.58 | 0.46 | 0.41 | 0.34 | 0.11 | 0.20 | 0.06 | 0.07 | 0.52 |
| SA | σ² = 0.100² | Worst | 0.40 | 0.73 | 0.82 | 0.18 | 0.11 | 0.26 | 0.06 | 0.02 | 0.66 |
| True values Θi | | | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | 0.20 | 0.03 | 0.01 | 0.50 |
different numbers of optimization parameters, along with scenarios based on variation in the signal-to-noise ratio, using the modern computing paradigms of DE, GAs, PS and SA.

Example 1 (HCARARMA model with seven unknowns). This case study considers HCARARMA system identification with seven unknown elements, i.e., optimization parameters, with the following details:

\begin{bmatrix} P(z) & Q(z) \\ R(z) & S(z) \end{bmatrix} = \begin{bmatrix} 1 + p_1 z^{-1} & q_1 z^{-1} \\ 1 + r_1 z^{-1} & 1 + s_1 z^{-1} \end{bmatrix} = \begin{bmatrix} 1 + 0.2z^{-1} & 1z^{-1} \\ 1 + 0.5z^{-1} & 1 + 0.3z^{-1} \end{bmatrix},

u(t) = f(u(t)) = \gamma_1 u(t) + \gamma_2 u^2(t) = 0.1u(t) + 0.2u^2(t),

\Theta = [\Theta_1, \Theta_2, \ldots, \Theta_7]^T = [0.5, 0.06, 0.1, 0.2, 0.03, 0.06, 0.5]^T.

Example 2 (HCARARMA model with nine unknowns). Here, an identification problem of the HCARARMA model based on nine parameters is evaluated, given as:

\begin{bmatrix} P(z) & Q(z) \\ R(z) & S(z) \end{bmatrix} = \begin{bmatrix} 1 + p_1 z^{-1} + p_2 z^{-2} + p_3 z^{-3} & q_1 z^{-1} \\ 1 + r_1 z^{-1} & 1 + s_1 z^{-1} \end{bmatrix} = \begin{bmatrix} 1 + 0.2z^{-1} + 0.3z^{-2} + 0.1z^{-3} & 1z^{-1} \\ 1 + 0.5z^{-1} & 1 + 0.3z^{-1} \end{bmatrix},

u(t) = f(u(t)) = \gamma_1 u(t) + \gamma_2 u^2(t) = 0.1u(t) + 0.2u^2(t),

\Theta = [\Theta_1, \Theta_2, \ldots, \Theta_9]^T = [0.5, 0.36, 0.19, 0.03, 0.10, 0.20, 0.03, 0.01, 0.5]^T.

Example 3 (HCARARMA model with eleven unknowns). In this study, a parameter estimation model of the HCARARMA system having eleven elements in the parameter vector is taken, with the following details:

\begin{bmatrix} P(z) & Q(z) \\ R(z) & S(z) \end{bmatrix} = \begin{bmatrix} 1 + p_1 z^{-1} + p_2 z^{-2} + p_3 z^{-3} & q_1 z^{-1} + q_2 z^{-2} \\ 1 + r_1 z^{-1} & 1 + s_1 z^{-1} \end{bmatrix} = \begin{bmatrix} 1 + 0.2z^{-1} + 0.3z^{-2} + 0.1z^{-3} & 1z^{-1} + 0.8z^{-2} \\ 1 + 0.5z^{-1} & 1 + 0.3z^{-1} \end{bmatrix},

u(t) = f(u(t)) = \gamma_1 u(t) + \gamma_2 u^2(t) = 0.1u(t) + 0.2u^2(t),

\Theta = [\Theta_1, \Theta_2, \ldots, \Theta_{11}]^T = [0.5, 0.36, 0.19, 0.03, 0.1, -0.05, -0.024, 0.2, -0.01, -0.05, 0.5]^T.
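As an illustration, data for the Example-1 system can be generated from the recurrence obtained by multiplying Eq. (1) through by S(z), i.e., S(z)P(z)x(t) = S(z)Q(z)f(u(t)) + R(z)v(t); with p1 = 0.2, q1 = 1, r1 = 0.5, s1 = 0.3 this gives x(t) = -0.5x(t-1) - 0.06x(t-2) + f(u(t-1)) + 0.3f(u(t-2)) + v(t) + 0.5v(t-1). The helper name `simulate_example1` and its defaults are illustrative, not the authors' simulation code:

```python
import numpy as np

def simulate_example1(n=200, sigma=0.01, seed=0):
    """Simulate the Example-1 HCARARMA system with the static nonlinearity
    f(u) = 0.1 u + 0.2 u^2 and the polynomial coefficients stated above."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=n)                # zero-mean, unit-variance input
    v = sigma * rng.normal(size=n)        # white noise with variance sigma^2
    ubar = 0.1 * u + 0.2 * u ** 2         # nonlinear block output f(u(t))
    x = np.zeros(n)
    for t in range(2, n):
        # coefficients of S(z)P(z) = 1 + 0.5 z^-1 + 0.06 z^-2,
        # S(z)Q(z) = z^-1 + 0.3 z^-2 and R(z) = 1 + 0.5 z^-1
        x[t] = (-0.5 * x[t - 1] - 0.06 * x[t - 2]
                + ubar[t - 1] + 0.3 * ubar[t - 2]
                + v[t] + 0.5 * v[t - 1])
    return u, v, x
```

The resulting (u, x) pairs can then be assembled into information vectors and passed to any of the four optimizers to minimize the fitness function (26).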
Table 6. Comparison of DE, GA, PS and SA results through statistical operators (best, mean, worst) for the HCARARMA model at all three noise variances of Example 3. Approximate parameter vector \hat{\Theta}_i, i = 1, ..., 11.

| Method | Noise | Stat | i=1 | i=2 | i=3 | i=4 | i=5 | i=6 | i=7 | i=8 | i=9 | i=10 | i=11 |
| DE | σ² = 0.001² | Best | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.50 |
| DE | σ² = 0.001² | Mean | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.20 |
| DE | σ² = 0.001² | Worst | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.08 |
| DE | σ² = 0.010² | Best | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.58 |
| DE | σ² = 0.010² | Mean | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.58 |
| DE | σ² = 0.010² | Worst | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.58 |
| DE | σ² = 0.100² | Best | 0.54 | 0.37 | 0.22 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | 0.00 | -0.06 | 0.11 |
| DE | σ² = 0.100² | Mean | 0.54 | 0.37 | 0.22 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | 0.00 | -0.06 | 0.11 |
| DE | σ² = 0.100² | Worst | 0.54 | 0.37 | 0.22 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | 0.00 | -0.06 | 0.11 |
| GA | σ² = 0.001² | Best | 0.63 | 0.90 | 0.61 | 0.34 | 0.11 | 0.00 | 0.00 | 0.18 | 0.00 | 0.08 | 0.07 |
| GA | σ² = 0.001² | Mean | 0.72 | 0.79 | 0.52 | 0.28 | 0.10 | 0.00 | 0.00 | 0.18 | 0.02 | 0.03 | 0.45 |
| GA | σ² = 0.001² | Worst | 0.68 | 0.99 | 0.68 | 0.38 | 0.12 | 0.00 | 0.00 | 0.17 | 0.02 | 0.09 | 0.99 |
| GA | σ² = 0.010² | Best | 0.71 | 0.65 | 0.41 | 0.18 | 0.09 | 0.00 | 0.00 | 0.19 | 0.02 | 0.00 | 0.94 |
| GA | σ² = 0.010² | Mean | 0.72 | 0.78 | 0.51 | 0.27 | 0.10 | 0.00 | 0.00 | 0.18 | 0.03 | 0.03 | 0.38 |
| GA | σ² = 0.010² | Worst | 0.97 | 0.87 | 0.61 | 0.32 | 0.09 | 0.00 | 0.00 | 0.18 | 0.09 | 0.01 | 0.01 |
| GA | σ² = 0.100² | Best | 0.84 | 0.84 | 0.61 | 0.35 | 0.10 | 0.00 | 0.00 | 0.17 | 0.06 | 0.03 | 0.36 |
| GA | σ² = 0.100² | Mean | 0.70 | 0.79 | 0.54 | 0.27 | 0.11 | 0.00 | 0.00 | 0.17 | 0.02 | 0.04 | 0.49 |
| GA | σ² = 0.100² | Worst | 0.78 | 0.94 | 0.66 | 0.39 | 0.11 | 0.00 | 0.00 | 0.16 | 0.05 | 0.06 | 0.06 |
| PS | σ² = 0.001² | Best | 0.50 | 0.65 | 0.35 | 0.16 | 0.11 | 0.00 | 0.03 | 0.19 | 0.01 | 0.00 | 1.00 |
| PS | σ² = 0.001² | Mean | 0.75 | 0.74 | 0.46 | 0.20 | 0.10 | 0.00 | 0.01 | 0.19 | 0.05 | 0.00 | 0.80 |
| PS | σ² = 0.001² | Worst | 0.78 | 0.74 | 0.49 | 0.21 | 0.11 | 0.00 | 0.00 | 0.19 | 0.05 | 0.00 | 1.00 |
| PS | σ² = 0.010² | Best | 0.80 | 0.72 | 0.49 | 0.26 | 0.12 | 0.00 | 0.00 | 0.20 | 0.05 | 0.00 | 0.00 |
| PS | σ² = 0.010² | Mean | 0.74 | 0.74 | 0.46 | 0.20 | 0.10 | 0.00 | 0.02 | 0.19 | 0.05 | 0.00 | 0.83 |
| PS | σ² = 0.010² | Worst | 0.93 | 0.87 | 0.53 | 0.16 | 0.07 | 0.00 | 0.01 | 0.16 | 0.10 | 0.00 | 0.96 |
| PS | σ² = 0.100² | Best | 0.80 | 0.75 | 0.50 | 0.19 | 0.09 | 0.00 | 0.00 | 0.19 | 0.05 | 0.00 | 1.00 |
| PS | σ² = 0.100² | Mean | 0.74 | 0.73 | 0.45 | 0.19 | 0.10 | 0.00 | 0.02 | 0.19 | 0.04 | 0.00 | 0.88 |
| PS | σ² = 0.100² | Worst | 0.78 | 0.71 | 0.47 | 0.30 | 0.13 | 0.00 | 0.00 | 0.20 | 0.05 | 0.00 | 0.66 |
| SA | σ² = 0.001² | Best | 0.01 | 0.18 | 0.00 | 0.49 | 0.08 | 0.02 | 0.01 | 0.00 | 0.00 | 0.00 | 0.84 |
| SA | σ² = 0.001² | Mean | 0.56 | 0.58 | 0.40 | 0.41 | 0.13 | 0.10 | 0.10 | 0.10 | 0.06 | 0.05 | 0.50 |
| SA | σ² = 0.001² | Worst | 0.97 | 0.91 | 0.13 | 0.94 | 0.12 | 0.13 | 0.14 | 0.18 | 0.01 | 0.08 | 0.44 |
| SA | σ² = 0.010² | Best | 0.20 | 0.87 | 0.16 | 0.20 | 0.09 | 0.05 | 0.29 | 0.11 | 0.02 | 0.21 | 0.11 |
| SA | σ² = 0.010² | Mean | 0.57 | 0.66 | 0.47 | 0.47 | 0.14 | 0.09 | 0.10 | 0.11 | 0.06 | 0.05 | 0.43 |
| SA | σ² = 0.010² | Worst | 0.78 | 0.83 | 0.80 | 0.11 | 0.09 | 0.03 | 0.04 | 0.16 | 0.11 | 0.00 | 0.06 |
| SA | σ² = 0.100² | Best | 0.52 | 0.37 | 0.74 | 0.70 | 0.34 | 0.18 | 0.05 | 0.00 | 0.17 | 0.02 | 0.12 |
| SA | σ² = 0.100² | Mean | 0.54 | 0.59 | 0.49 | 0.43 | 0.15 | 0.10 | 0.10 | 0.13 | 0.05 | 0.04 | 0.51 |
| SA | σ² = 0.100² | Worst | 0.97 | 0.99 | 0.57 | 0.90 | 0.01 | 0.05 | 0.01 | 0.06 | 0.09 | 0.00 | 0.98 |
| True values Θi | | | 0.50 | 0.36 | 0.19 | 0.03 | 0.10 | -0.05 | -0.02 | 0.20 | -0.01 | -0.05 | 0.50 |
In all three examples of the HCARARMA model, the input u(t) is taken as a randomly generated signal with zero mean and unit variance, while the noise v(t) is a random signal with zero mean and constant variance. The HCARARMA system parameters given in (26) to (28) are identified using the proposed methodologies based on the DE, GAs, PS and SA algorithms, following the procedures and pseudocodes provided in the previous section. The fitness function in (14) is formulated with N = 20 for all scenarios of the three examples and is given as:
$$
\varepsilon_{\Theta} \;=\; \frac{1}{20}\sum_{i=1}^{20}\bigl(x(t_i)-\hat{x}(t_i)\bigr)^{2} \;=\; \frac{1}{20}\sum_{i=1}^{20}\Bigl(\bigl(\boldsymbol{\varphi}^{T}(t_i)\,\boldsymbol{\Theta}+v(t_i)\bigr)-\boldsymbol{\varphi}^{T}(t_i)\,\hat{\boldsymbol{\Theta}}\Bigr)^{2}, \qquad (29)
$$
where Θ and Θ̂ are the desired and estimated parameter vectors, respectively, for the three HCARARMA models given in Eqs. (26) to (28). The fitness function (29) is optimized for all three HCARARMA models and for three noise variances, σ² = 0.001², 0.01², and 0.1², using the four designed algorithms, i.e., DE, GAs, PS and SA.
The convergence of the four algorithms, i.e., DE, GAs, PS and SA, is analyzed in terms of learning curves: results for DE and GAs are shown in Fig. 4, while the iterative plots for PS and SA are given in Fig. 5, for all three examples of the HCARARMA model and all three noise variances σ² = 0.001², 0.01², 0.1². The learning curves show that the proposed meta-heuristics are accurate and convergent for parameter estimation of HCARARMA systems, with a slight decrease in accuracy as the noise level increases. It is also observed that DE attains better fitness in fewer iterations than its counterparts, consuming less time and fewer function evaluations. Each optimization mechanism has been executed for 100 independent runs, and the MWD values using Eq. (15) are determined for all three scenarios of each example. The absolute error (AE) values are calculated for the DE run with the minimum MWD, i.e., MWD values of about 10−7, 10−5, and 10−2 for σ² = 0.001², 0.01², and 0.1² in Example 1; 10−9, 10−5, and 10−2 in Example 2; and 10−9, 10−5, and 10−4 in Example 3. Similarly, the AE values are determined for the best runs of the GAs, SA and PS algorithms. The AE results of DE and GAs are presented in Fig. 6, while the PS and SA results are plotted in Fig. 7 for each case. Additionally, the respective MWD values for each case of HCARARMA are also plotted in Figs. 6 and 7 for GAs and DE, SA and PS, respectively.
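As a minimal illustrative sketch, not the authors' implementation, an Eq.-(29)-style mean-squared-error fitness and a DE search over it can be set up with NumPy and SciPy's `differential_evolution`. The regressor matrix `Phi`, the bounds, the sample size, and the "true" parameter values below are hypothetical stand-ins, and the final relative parameter-error measure is only in the spirit of the paper's MWD metric (Eq. (15) is not reproduced here):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
N = 20                                            # number of samples, as in the fitness of the text
Phi = rng.standard_normal((N, 4))                 # hypothetical regressor matrix, rows phi^T(t_i)
theta_true = np.array([0.5, 0.36, 0.19, 0.03])    # illustrative "true" parameters
x = Phi @ theta_true + 0.001 * rng.standard_normal(N)   # noisy observed output

def fitness(theta_hat):
    """Mean-squared error between observed and model output (Eq.-(29) form)."""
    return float(np.mean((x - Phi @ theta_hat) ** 2))

bounds = [(-1.0, 1.0)] * 4                        # assumed search interval per parameter
result = differential_evolution(fitness, bounds, seed=2, tol=1e-10, maxiter=500)

# a relative parameter-error measure in the spirit of the MWD metric
mwd = np.linalg.norm(result.x - theta_true) / np.linalg.norm(theta_true)
print(result.fun, mwd)
```

Repeating such a run with independent seeds gives the per-run fitness and deviation values from which the best/mean/worst statistics of the tables can be collected.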
A. Mehmood, N.I. Chaudhary, A. Zameer et al. / Applied Soft Computing Journal 80 (2019) 263–284
Table 7 Magnitudes of global performance indices for each variation of all three case studies of HCARARMA system (mean values with STD in parentheses).

| Method | Problem | Noise σ² | Fitness ε | MWD | ER² | RMSE | TIC |
|---|---|---|---|---|---|---|---|
| DE | 1 | 0.001² | 3.60E−15 (5.08E−17) | 1.59E−02 (6.68E−03) | 4.20E−02 (1.77E−02) | 1.26E−02 (9.84E−03) | 7.01E−02 (2.81E−02) |
| DE | 1 | 0.01² | 2.71E−09 (1.29E−14) | 9.51E−02 (5.47E−06) | 2.51E−01 (1.45E−05) | 3.84E−01 (4.43E−05) | 3.26E−01 (1.33E−05) |
| DE | 1 | 0.1² | 2.19E−05 (1.72E−14) | 1.20E−01 (4.28E−10) | 1.78E−01 (5.70E−10) | 1.94E−01 (1.24E−09) | 2.94E−01 (7.50E−10) |
| DE | 2 | 0.001² | 2.44E−15 (9.07E−16) | 2.70E−02 (3.09E−02) | 8.10E−02 (9.28E−02) | 9.62E−03 (7.49E−03) | 1.22E−01 (1.16E−01) |
| DE | 2 | 0.01² | 3.50E−09 (8.13E−14) | 7.11E−02 (1.01E−05) | 2.13E−01 (3.04E−05) | 2.92E−01 (3.37E−05) | 2.93E−01 (3.14E−05) |
| DE | 2 | 0.1² | 1.06E−05 (6.62E−14) | 1.19E−01 (8.23E−09) | 1.64E−01 (1.21E−08) | 1.48E−01 (9.43E−10) | 3.47E−01 (3.30E−08) |
| DE | 3 | 0.001² | 3.15E−15 (1.69E−15) | 4.37E−02 (3.33E−02) | 1.45E−01 (1.11E−01) | 9.88E−03 (7.70E−03) | 2.76E−01 (1.77E−01) |
| DE | 3 | 0.01² | 8.66E−10 (1.11E−13) | 7.60E−03 (2.55E−05) | 2.49E−02 (8.42E−05) | 3.00E−01 (3.46E−05) | 4.72E−02 (1.55E−04) |
| DE | 3 | 0.1² | 1.79E−05 (2.44E−13) | 4.52E−02 (7.04E−08) | 1.19E−01 (2.19E−07) | 1.52E−01 (9.69E−10) | 2.48E−01 (4.72E−07) |
| GA | 1 | 0.001² | 3.20E−06 (9.64E−06) | 5.17E−02 (3.45E−02) | 1.19E−01 (6.61E−02) | 1.13E−01 (9.36E−02) | 2.03E−01 (1.16E−01) |
| GA | 1 | 0.01² | 2.30E−06 (6.50E−06) | 3.14E−02 (3.12E−02) | 6.57E−02 (5.73E−02) | 4.61E−02 (6.34E−02) | 1.07E−01 (8.94E−02) |
| GA | 1 | 0.1² | 4.21E−05 (1.11E−05) | 3.27E−02 (1.57E−02) | 5.37E−02 (2.75E−02) | 2.21E−02 (2.66E−02) | 9.35E−02 (4.20E−02) |
| GA | 2 | 0.001² | 1.35E−05 (3.11E−05) | 3.70E−02 (2.50E−02) | 8.84E−02 (5.29E−02) | 8.57E−02 (7.13E−02) | 1.49E−01 (8.82E−02) |
| GA | 2 | 0.01² | 1.31E−05 (2.75E−05) | 4.23E−02 (2.46E−02) | 1.01E−01 (5.22E−02) | 3.51E−02 (4.82E−02) | 1.72E−01 (9.19E−02) |
| GA | 2 | 0.1² | 3.53E−05 (3.67E−05) | 4.22E−02 (1.78E−02) | 8.70E−02 (3.89E−02) | 1.68E−02 (2.03E−02) | 1.40E−01 (5.62E−02) |
| GA | 3 | 0.001² | 5.91E−04 (8.44E−05) | 1.58E−01 (3.00E−02) | 2.22E−01 (4.21E−02) | 8.80E−02 (7.32E−02) | 3.31E−01 (5.13E−02) |
| GA | 3 | 0.01² | 5.83E−04 (8.20E−05) | 1.62E−01 (3.28E−02) | 2.26E−01 (4.64E−02) | 3.60E−02 (4.96E−02) | 3.42E−01 (6.02E−02) |
| GA | 3 | 0.1² | 4.99E−04 (5.88E−05) | 1.55E−01 (2.80E−02) | 2.16E−01 (4.05E−02) | 1.73E−02 (2.08E−02) | 3.22E−01 (5.45E−02) |
| PS | 1 | 0.001² | 9.76E−05 (6.04E−05) | 1.42E−01 (3.87E−02) | 2.31E−01 (4.42E−02) | 3.38E−01 (1.11E−01) | 3.17E−01 (5.75E−02) |
| PS | 1 | 0.01² | 9.85E−05 (5.62E−05) | 1.28E−01 (4.25E−02) | 2.01E−01 (6.00E−02) | 2.69E−01 (1.29E−01) | 2.62E−01 (6.34E−02) |
| PS | 1 | 0.1² | 2.68E−04 (1.24E−04) | 8.43E−02 (3.05E−02) | 1.43E−01 (5.21E−02) | 1.41E−01 (8.17E−02) | 2.08E−01 (6.74E−02) |
| PS | 2 | 0.001² | 5.38E−05 (2.52E−05) | 1.43E−01 (3.40E−02) | 2.15E−01 (3.79E−02) | 2.57E−01 (8.45E−02) | 3.18E−01 (5.18E−02) |
| PS | 2 | 0.01² | 5.79E−05 (2.21E−05) | 1.50E−01 (2.74E−02) | 2.24E−01 (3.03E−02) | 2.05E−01 (9.84E−02) | 3.37E−01 (3.91E−02) |
| PS | 2 | 0.1² | 1.04E−04 (3.65E−05) | 1.09E−01 (2.95E−02) | 1.63E−01 (3.78E−02) | 1.07E−01 (6.22E−02) | 2.26E−01 (4.57E−02) |
| PS | 3 | 0.001² | 2.41E−03 (2.38E−04) | 1.63E−01 (2.77E−02) | 2.30E−01 (3.17E−02) | 2.64E−01 (8.69E−02) | 3.27E−01 (3.50E−02) |
| PS | 3 | 0.01² | 2.41E−03 (2.41E−04) | 1.59E−01 (2.49E−02) | 2.24E−01 (2.89E−02) | 2.10E−01 (1.01E−01) | 3.18E−01 (3.36E−02) |
| PS | 3 | 0.1² | 2.83E−03 (4.08E−04) | 1.57E−01 (2.74E−02) | 2.22E−01 (3.20E−02) | 1.10E−01 (6.39E−02) | 3.13E−01 (3.37E−02) |
| SA | 1 | 0.001² | 6.15E−03 (3.14E−03) | 1.42E−01 (4.15E−02) | 2.06E−01 (5.72E−02) | 2.78E−01 (1.45E−01) | 3.32E−01 (9.22E−02) |
| SA | 1 | 0.01² | 6.19E−03 (3.90E−03) | 1.36E−01 (4.36E−02) | 1.99E−01 (6.30E−02) | 2.64E−01 (1.56E−01) | 3.21E−01 (1.10E−01) |
| SA | 1 | 0.1² | 6.01E−03 (3.72E−03) | 1.40E−01 (5.00E−02) | 2.03E−01 (7.16E−02) | 2.81E−01 (1.68E−01) | 3.23E−01 (1.25E−01) |
| SA | 2 | 0.001² | 1.12E−02 (5.79E−03) | 1.68E−01 (4.95E−02) | 2.36E−01 (6.71E−02) | 2.12E−01 (1.10E−01) | 3.41E−01 (7.81E−02) |
| SA | 2 | 0.01² | 1.13E−02 (6.87E−03) | 1.76E−01 (5.09E−02) | 2.46E−01 (6.64E−02) | 2.01E−01 (1.19E−01) | 3.53E−01 (7.69E−02) |
| SA | 2 | 0.1² | 1.18E−02 (5.52E−03) | 1.72E−01 (4.42E−02) | 2.40E−01 (6.07E−02) | 2.14E−01 (1.28E−01) | 3.51E−01 (8.08E−02) |
| SA | 3 | 0.001² | 5.08E−02 (2.96E−02) | 2.10E−01 (5.72E−02) | 2.80E−01 (7.49E−02) | 2.18E−01 (1.13E−01) | 4.26E−01 (9.22E−02) |
| SA | 3 | 0.01² | 4.61E−02 (2.22E−02) | 2.21E−01 (5.35E−02) | 2.98E−01 (7.09E−02) | 2.07E−01 (1.22E−01) | 4.39E−01 (7.58E−02) |
| SA | 3 | 0.1² | 4.83E−02 (2.27E−02) | 2.13E−01 (5.08E−02) | 2.87E−01 (7.35E−02) | 2.19E−01 (1.31E−01) | 4.26E−01 (9.07E−02) |
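The accuracy measures used in the study (RMSE, TIC, and the error form of the coefficient of determination, ER²) follow standard definitions. A minimal NumPy sketch with illustrative data, assuming TIC is Theil's inequality coefficient and ER² = 1 − R²:

```python
import numpy as np

def rmse(x, x_hat):
    """Root-mean-squared error between desired and estimated output."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

def tic(x, x_hat):
    """Theil's inequality coefficient: 0 for a perfect fit, bounded by 1."""
    return rmse(x, x_hat) / (np.sqrt(np.mean(x ** 2)) + np.sqrt(np.mean(x_hat ** 2)))

def er2(x, x_hat):
    """Error form of the coefficient of determination, ER^2 = 1 - R^2."""
    ss_res = np.sum((x - x_hat) ** 2)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    return ss_res / ss_tot

# illustrative desired vs. estimated outputs (not data from the paper)
x = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = np.array([1.1, 1.9, 3.2, 3.8])
print(rmse(x, x_hat), tic(x, x_hat), er2(x, x_hat))
```

All three operators approach zero as the estimated output approaches the desired one, which is why small magnitudes in the tables indicate accurate identification.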
The plots in these illustrations are given on a semi-logarithmic scale in order to read the results more clearly. The performance operators based on the accuracy measures of fitness, MWD, TIC, RMSE and ER² are calculated for all methods on the basis of the best run, and the results are given in Table 3 along with complexity metrics in terms of time, generations and function counts. It is clear that the AE values of DE for Example 1 lie roughly in the ranges 10−7 to 10−6, 10−5 to 10−4, and 10−2 to 10−1 for noise σ² = 0.001², 0.01², and 0.1², respectively. The AE values for GAs also lie in the same ranges, while the performance degrades considerably for the PS and SA algorithms, although the results of PS are somewhat better than those of SA. Generally, on the basis of the accuracy indicators, the best results are obtained by DE for Examples 1 and 2 and by GAs for Example 3, while in terms of complexity measures DE is found much more efficient than GAs, which compensates for its slightly degraded accuracy in Example 3. The small values of all five evaluation measures establish the consistent accuracy of the proposed DE and GAs methods for solving HCARARMA systems.
Before proceeding to statistics for reliable inference, a comparison of the proposed standalone techniques with hybrid, integrated, or memetic computing approaches, based on combinations of global and local search methodologies, is presented for the HCARARMA identification model to rationalize their usage. Recently, memetic computing-based methodologies have been broadly used to find candidate solutions of constrained/unconstrained and convex/nonconvex optimization problems, such as GA aided with the active-set method (GA-ASM) [81], GA integrated with sequential quadratic programming (GA-SQP) [82] and GA combined with the interior-point method (GA-IPM) [83]. Since such hybrid approaches generally outperform standalone optimization mechanisms, we applied memetic combinations of DE, GA, PS and SA with the local search techniques SQP, IPM and ASM for finding the parameter vectors of the HCARARMA model, but no noticeable improvement in the results was observed in any scenario. Besides that, we also applied standalone local search procedures based on SQP, IPM and ASM to the identification problem of the HCARARMA model and found that all these methods fail to optimize because of the non-convex nature of the optimization problem. Therefore, in the present study we confined our investigation to the four global search methods based on DE, GA, PS and SA for parameter estimation of the HCARARMA model.
Statistics based Comparative Analysis
Consistency in precision of the DE, GAs, PS and SA approaches is examined over 100 independent runs for each scenario of the HCARARMA model, and the fitness values attained over multiple runs of each technique are shown in Fig. 8 in the form of sorted, unsorted
Table 8 Complexity analysis for each scenario of HCARARMA models (mean values with STD in parentheses).

| Method | Problem | Noise σ² | Time | Generations | Function Counts |
|---|---|---|---|---|---|
| DE | 1 | 0.001² | 3.72 (0.49) | 125.15 (3.65) | 8884.65 (259.13) |
| DE | 1 | 0.01² | 4.38 (0.32) | 160.81 (5.18) | 11416.51 (367.99) |
| DE | 1 | 0.1² | 6.05 (0.36) | 218.74 (6.48) | 15529.54 (460.20) |
| DE | 2 | 0.001² | 6.40 (0.34) | 155.66 (3.93) | 14164.06 (357.79) |
| DE | 2 | 0.01² | 8.14 (0.58) | 197.39 (7.77) | 17961.49 (707.48) |
| DE | 2 | 0.1² | 11.21 (0.75) | 273.48 (8.91) | 24885.68 (810.58) |
| DE | 3 | 0.001² | 10.89 (0.32) | 195.00 (5.52) | 21644.00 (613.27) |
| DE | 3 | 0.01² | 13.96 (1.01) | 239.19 (8.27) | 26549.09 (918.41) |
| DE | 3 | 0.1² | 21.31 (1.21) | 371.25 (19.92) | 41207.75 (2211.03) |
| GA | 1 | 0.001² | 200.05 (8.46) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 1 | 0.01² | 158.78 (3.48) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 1 | 0.1² | 124.49 (7.09) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 2 | 0.001² | 155.48 (196.73) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 2 | 0.01² | 136.14 (5.93) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 2 | 0.1² | 141.11 (0.00) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 3 | 0.001² | 144.26 (1.52) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 3 | 0.01² | 166.32 (216.89) | 1500.00 (0.00) | 480320.00 (0.00) |
| GA | 3 | 0.1² | 150.47 (6.08) | 1500.00 (0.00) | 480320.00 (0.00) |
| PS | 1 | 0.001² | 2.90 (0.11) | 999.64 (9.80) | 9066.76 (252.49) |
| PS | 1 | 0.01² | 3.03 (0.07) | 1001.00 (0.00) | 9932.79 (369.70) |
| PS | 1 | 0.1² | 2.55 (0.69) | 790.07 (232.13) | 8656.03 (2686.57) |
| PS | 2 | 0.001² | 3.49 (0.26) | 980.70 (69.03) | 12151.19 (998.33) |
| PS | 2 | 0.01² | 4.17 (0.93) | 974.57 (71.52) | 12779.22 (1360.87) |
| PS | 2 | 0.1² | 3.90 (0.51) | 939.49 (92.36) | 12829.25 (1286.88) |
| PS | 3 | 0.001² | 4.01 (0.25) | 1001.00 (0.00) | 12498.97 (567.83) |
| PS | 3 | 0.01² | 3.79 (0.11) | 1000.99 (0.10) | 12792.72 (451.31) |
| PS | 3 | 0.1² | 3.96 (0.12) | 1001.00 (0.00) | 13715.61 (469.30) |
| SA | 1 | 0.001² | 2.42 (0.05) | 1001.00 (0.00) | 1035.01 (104.08) |
| SA | 1 | 0.01² | 2.42 (0.08) | 1001.00 (0.00) | 1045.68 (3.00) |
| SA | 1 | 0.1² | 2.40 (0.04) | 1001.00 (0.00) | 1045.05 (2.51) |
| SA | 2 | 0.001² | 2.42 (0.06) | 1001.00 (0.00) | 1057.80 (3.62) |
| SA | 2 | 0.01² | 2.67 (0.32) | 1001.00 (0.00) | 1058.61 (4.10) |
| SA | 2 | 0.1² | 2.73 (0.32) | 1001.00 (0.00) | 1059.87 (4.48) |
| SA | 3 | 0.001² | 2.81 (0.14) | 1001.00 (0.00) | 1069.43 (3.72) |
| SA | 3 | 0.01² | 2.73 (0.09) | 1001.00 (0.00) | 1069.32 (3.59) |
| SA | 3 | 0.1² | 2.82 (0.15) | 1001.00 (0.00) | 1068.99 (3.16) |
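The three complexity operators of Table 8, consumed time, executed generations, and fitness-function evaluations, can be logged with a simple counting wrapper around the fitness; the sketch below is illustrative (the `sphere` stand-in fitness and the fixed loop are not the paper's setup):

```python
import time
import numpy as np

def counting(f):
    """Wrap a fitness function so that every evaluation is tallied."""
    def wrapper(x):
        wrapper.calls += 1
        return f(x)
    wrapper.calls = 0
    return wrapper

sphere = counting(lambda x: float(np.dot(x, x)))   # stand-in fitness function

t0 = time.perf_counter()
for _ in range(1000):                              # stand-in optimization loop
    sphere(np.ones(5))
elapsed = time.perf_counter() - t0

print(sphere.calls, elapsed)                       # function counts and consumed time
```

Averaging these tallies over independent runs yields mean and STD complexity figures of the kind reported in the table.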
and zoomed plots for better understanding. The results of the MWD-based evaluation metric are shown in Fig. 9 for all designed schemes, DE, GAs, PS and SA. The MWD results of DE are plotted on a semi-log scale in order to magnify small variations, as shown in Fig. 9(a). The MWD values for HCARARMA models 1, 2 and 3 lie around 10−3 to 10−2, 10−4 to 10−1 and 10−5 to 10−2 for noise variance σ² = 0.001², which shows the consistency of the results. In order to draw a reliable inference on correctness, histogram studies are conducted, and the results in terms of MWD values are shown in Fig. 9(b) to (e) for DE, Fig. 9(f) to (i) for GAs, and Fig. 9(j) to (m) for the PS approach, while the MWD values of SA for HCARARMA models 1 and 2 are plotted in Fig. 9(n). All these illustrations show that all four algorithms can be used for the identification problem of the HCARARMA model; however, the results of DE and GAs are superior to those of PS and SA. No large difference is observed between the accuracy levels of DE and GAs, but DE performs better in case studies 1 and 2, while GAs performs relatively better in Example 3.
In order to analyze the accuracy level further, the best, mean and worst values of AE for each noise variance σ² = 0.001², 0.01², and 0.1² of the HCARARMA systems are determined from 100 independent executions of the DE, GAs, PS and SA methods. The results of the statistical operators for all four algorithms are
presented in Tables 4–6 for each scenario of the HCARARMA system represented in Examples 1–3, along with the true parameters of the respective models. As the noise variance increases, the performance of all four methods deteriorates for each variant of the HCARARMA system in all three case studies. Additionally, the precision of the methods declines slightly as the number of unknown variables in the HCARARMA system grows, because of the larger degree of freedom in the identification example. The performance analysis is further carried out through the magnitudes of the global evaluation indicators GMWD, GRMSE, GTIC, GER2 and ε, defined in Eqs. (21)–(25), respectively. The values of these global performance indices, based on 100 independent executions of all proposed algorithms, are presented in Table 7. Generally, the global fitness values of DE lie around 10−15 to 10−5, while the respective values for GAs, PS and SA are around 10−6 to 10−5, 10−5 to 10−3 and 10−3 to 10−2, respectively. The magnitudes of the MWD, RMSE, TIC and ER2 evaluation metrics lie around 10−3 to 10−2 for DE and GAs, and 10−2 to 10−1 for the PS and SA methods. These global operator values are generally close to their desired magnitudes, which ascertains the accurate performance of the DE and GAs based schemes for HCARARMA system identification.
Computational complexity for all four algorithms, DE, GAs, PS and SA, is investigated through the average values of consumed
Table 9 Statistical analysis based on ANOVA testing for Example 1 of HCARARMA model.

DE
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.02564 | 0.02251 | (0.01596, 0.03531) |
| Group2 | 30 | 0.02140 | 0.02153 | (0.01172, 0.03107) |
| Group3 | 30 | 0.02986 | 0.03409 | (0.02019, 0.03954) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.001075 | 0.000538 | 0.76 | 0.472 |
| Error | 87 | 0.061837 | 0.000711 | | |
| Total | 89 | 0.062913 | | | |

GAs
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.05799 | 0.02770 | (0.04530, 0.07068) |
| Group2 | 30 | 0.04942 | 0.04208 | (0.03673, 0.06211) |
| Group3 | 30 | 0.04832 | 0.03362 | (0.03563, 0.06101) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.001681 | 0.000841 | 0.69 | 0.505 |
| Error | 87 | 0.106378 | 0.001223 | | |
| Total | 89 | 0.108060 | | | |

PS
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.14841 | 0.03866 | (0.13421, 0.16262) |
| Group2 | 30 | 0.14057 | 0.03596 | (0.12636, 0.15477) |
| Group3 | 30 | 0.14287 | 0.04254 | (0.12866, 0.15707) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000976 | 0.000488 | 0.32 | 0.728 |
| Error | 87 | 0.133311 | 0.001532 | | |
| Total | 89 | 0.134287 | | | |

SA
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.14623 | 0.03993 | (0.13118, 0.16128) |
| Group2 | 30 | 0.13990 | 0.04268 | (0.12485, 0.15495) |
| Group3 | 30 | 0.14267 | 0.04180 | (0.12762, 0.15773) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000604 | 0.000302 | 0.18 | 0.839 |
| Error | 87 | 0.149714 | 0.001721 | | |
| Total | 89 | 0.150318 | | | |
time, executed generations/iterations, and evaluated fitness functions for optimization of the variables of the HCARARMA models. All three complexity measures are determined over 100 executions of each of DE, GAs, PS and SA for all variations in the three identification examples of the HCARARMA system, and the results are tabulated in Table 8. Mean values of the consumed time, executed cycles and evaluated fitness functions are around 10 ± 6, 250 ± 100 and 20000 ± 5000 for DE; 160 ± 40, 1500 and 480320 for GAs; 2.5 ± 1.5, 900 ± 200 and 10000 ± 2000 for PS; and 2.4 ± 0.5, 1001 and 1040 ± 20 for SA. It is seen that, as the degree of freedom of the HCARARMA system increases, the complexity of the DE and PS algorithms increases quite evidently, while no such behavior is seen in the GAs and SA algorithms due to their exhaustive global search. Generally, it is observed that the complexity of GAs is higher than that of the DE, PS and SA methods, while the accuracy of GAs is comparable with DE. The simulations are carried out on a Dell Inspiron 15 notebook with a Core i3 1.9 GHz processor and 6 GB of RAM, in a Windows 8.1 Pro environment.
Generally, it is seen that DE performs better than the other counterparts based on the GAs, SA and PS algorithms. The stability and performance of all four algorithms are further evaluated
through an ANOVA test on the basis of mean weight deviation for all three examples in the case of σ² = 0.001² noise variance. The results of all four algorithms are given in Tables 9–11 for Examples 1–3, respectively, where N is the number of independent runs, DF is the degree of freedom, CI is the confidence interval, Adj SS means adjusted sum of squares and Adj MS denotes adjusted mean square. Under the assumption of homogeneous variances, we fail to reject the null hypothesis, i.e., that all the means are equivalent, at the 0.05 significance level: the p-values show that the lowest significance level achievable through DE is 0.472, 0.263 and 0.746 for Examples 1–3, respectively, while the respective values are 0.505, 0.718 and 0.723 for GAs, 0.728, 0.500 and 0.857 for PS, and 0.839, 0.205 and 0.205 for SA. Therefore, we are unable to reject the null hypothesis at the 0.05 significance level for any of the three examples. This verifies that the three different groups do not introduce any major variation in the response variable for any of the four algorithms. Thus, there is strong evidence that the group means are equal for all four algorithms, while the mean values attained by the DE algorithm are better than those of the GAs, PS and SA methods in all three examples.
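The one-way ANOVA reported in Tables 9–11 can be reproduced in outline with SciPy's `f_oneway`; the three groups below are illustrative synthetic samples (not the paper's MWD data), drawn under the null hypothesis of equal means:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
# three groups of N = 30 runs each; under H0 they share the same mean
groups = [0.02 + 0.01 * rng.standard_normal(30) for _ in range(3)]

f_stat, p_value = f_oneway(*groups)
alpha = 0.05
reject = p_value < alpha      # a large p-value -> fail to reject equal means
print(f_stat, p_value, reject)
```

With three groups of 30 samples, the test has DF = 2 for the factor and DF = 87 for the error term, matching the degrees of freedom in the tables.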
Table 10 Statistical analysis based on ANOVA testing for Example 2 of HCARARMA model.

DE
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.01728 | 0.00709 | (0.01463, 0.01993) |
| Group2 | 30 | 0.01445 | 0.00717 | (0.01177, 0.01712) |
| Group3 | 30 | 0.01646 | 0.00623 | (0.01413, 0.01879) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000127 | 0.000064 | 1.36 | 0.263 |
| Error | 87 | 0.004078 | 0.000047 | | |
| Total | 89 | 0.004205 | | | |

GAs
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.03101 | 0.01101 | (0.02548, 0.03654) |
| Group2 | 30 | 0.03117 | 0.01713 | (0.02564, 0.03670) |
| Group3 | 30 | 0.03386 | 0.01677 | (0.02833, 0.03939) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000154 | 0.000077 | 0.33 | 0.718 |
| Error | 87 | 0.020177 | 0.000232 | | |
| Total | 89 | 0.020331 | | | |

PS
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.13914 | 0.03689 | (0.12677, 0.15150) |
| Group2 | 30 | 0.14413 | 0.03723 | (0.13177, 0.15650) |
| Group3 | 30 | 0.14953 | 0.02714 | (0.13716, 0.16189) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.001620 | 0.000810 | 0.70 | 0.500 |
| Error | 87 | 0.101003 | 0.001161 | | |
| Total | 89 | 0.102623 | | | |

SA
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.16851 | 0.04899 | (0.15006, 0.18696) |
| Group2 | 30 | 0.16176 | 0.05079 | (0.14332, 0.18021) |
| Group3 | 30 | 0.16866 | 0.05265 | (0.15022, 0.18711) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000931 | 0.000466 | 0.81 | 0.205 |
| Error | 87 | 0.224789 | 0.002584 | | |
| Total | 89 | 0.225720 | | | |
4. Conclusion
Concluding inferences are listed as follows.
1. Modern computational paradigms based on DE, GAs, PS and SA are exploited effectively for the identification problem of HCARARMA models under varying degree-of-freedom and noise-variance scenarios.
2. The comparative study on HCARARMA models with different degrees of freedom reveals that all four designed algorithms are equally applicable; a slight deterioration in accuracy is observed as the length of the parameter vector increases, but the results of DE and GA are least affected by these changes as compared with the PS and SA methods.
3. The comparative study with increasing noise variance, σ² = 0.001², 0.01², 0.1², shows that the accuracy of all four algorithms, DE, GAs, PS and SA, is somewhat reduced when identifying the HCARARMA system parameters; however, the results of both DE and GA still maintain reasonable precision.
4. The comparative study through statistical results, in terms of the mean and STD values of the AE, MWD, RMSE, TIC and ER² performance metrics along with their global versions, as well as histogram and stacked-bar illustrations, establishes the consistent correctness of the DE and GA based identification methodologies for each scenario of the HCARARMA model. Generally, relatively better results are obtained by the DE algorithm than by the rest of the schemes.
5. Investigation of the computational complexity of all four algorithms shows that, as the degree of freedom of the HCARARMA system increases, the complexity of the DE and PS methods increases quite evidently, while such alterations are not observed in the GAs and SA algorithms. Generally, the complexity measures based on time, cycles and function evaluations are higher for GAs than for the rest of the optimization mechanisms, but its accuracy, comparable with that of DE, may compensate for this aspect.
Modern global optimization techniques, including the fireworks algorithm, backtracking search algorithm, genetic programming, fractional particle swarm optimization, the moth flame technique and the gravitational search algorithm, can be good alternatives to improve the performance. Moreover, one may explore the application of the proposed schemes to effectively solve complex
Table 11 Statistical analysis based on ANOVA testing for Example 3 of HCARARMA model.

DE
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.04623 | 0.03314 | (0.03374, 0.05872) |
| Group2 | 30 | 0.04704 | 0.03577 | (0.03455, 0.05953) |
| Group3 | 30 | 0.04077 | 0.03429 | (0.02828, 0.05326) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000697 | 0.000349 | 0.29 | 0.746 |
| Error | 87 | 0.103045 | 0.001184 | | |
| Total | 89 | 0.103742 | | | |

GAs
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.16037 | 0.02764 | (0.14964, 0.17109) |
| Group2 | 30 | 0.15576 | 0.02769 | (0.14503, 0.16648) |
| Group3 | 30 | 0.15452 | 0.03303 | (0.14380, 0.16525) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000569 | 0.000285 | 0.33 | 0.723 |
| Error | 87 | 0.076027 | 0.000874 | | |
| Total | 89 | 0.076596 | | | |

PS
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.16537 | 0.02876 | (0.15523, 0.17550) |
| Group2 | 30 | 0.16512 | 0.02759 | (0.15498, 0.17525) |
| Group3 | 30 | 0.16177 | 0.02744 | (0.15163, 0.17191) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.000242 | 0.000121 | 0.15 | 0.857 |
| Error | 87 | 0.067879 | 0.000780 | | |
| Total | 89 | 0.068139 | | | |

SA
| Factor | N | Mean | StDev | 95% CI |
|---|---|---|---|---|
| Group1 | 30 | 0.21986 | 0.05413 | (0.19975, 0.23998) |
| Group2 | 30 | 0.21915 | 0.05835 | (0.19903, 0.23932) |
| Group3 | 30 | 0.19724 | 0.05375 | (0.17713, 0.21736) |

Analysis of Variance
| Source | DF | Adj SS | Adj MS | F-Value | P-Value |
|---|---|---|---|---|---|
| Factor | 2 | 0.009915 | 0.004958 | 1.61 | 0.205 |
| Error | 87 | 0.267240 | 0.003072 | | |
| Total | 89 | 0.277155 | | | |
engineering problems based on supercapacitor modeling [84], state of charge estimation [85,86] and energy management in electric vehicles [87,88].
Conflicts of interest No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to http://dx.doi.org/10.1016/j.asoc.2019.03.052. References [1] J.V. Beck, K.J. Arnold, Parameter Estimation in Engineering and Science, John Wiley & Sons, 1977. [2] H.P.H. Anh, N.N. Son, C. Van Kien, V. Ho-Huu, Parameter identification using adaptive differential evolution algorithm applied to robust control of uncertain nonlinear systems, Appl. Soft Comput. 71 (2018) 672–684. [3] W. Gao, Y. Zou, F. Sun, X. Hu, Y. Yu, S. Feng, Data pieces-based parameter identification for lithium-ion battery, J. Power Sources 328 (2016) 174–184. [4] X. Hu, F. Sun, Y. Zou, Online model identification of lithium-ion battery for electric vehicles, J. Cent. South Univ. Technol. 18 (5) (2011) 1525. [5] S. Zubair, N.I. Chaudhary, Z.A. Khan, W. Wang, Momentum fractional LMS for power signal parameter estimation, Signal Process. 142 (2018) 441–449. [6] L. Gutiérrez, D. Muñoz Carpintero, F. Valencia, D. Sáez, A new method for identification of fuzzy models with controllability constraints, Appl. Soft Comput. 73 (2018) 254–262.
[7] J. Ma, F. Ding, W. Xiong, E. Yang, Combined state and parameter estimation for Hammerstein systems with time delay using the Kalman filtering, Internat. J. Adapt. Control Signal Process. 31 (8) (2017) 1139–1151. [8] N.I. Chaudhary, S. Zubair, M.A.Z. Raja, Design of momentum LMS adaptive strategy for parameter estimation of Hammerstein controlled autoregressive systems, Neural Comput. Appl. 30 (4) (2018) 1133–1143. [9] M.J. Moghaddam, H. Mojallali, M. Teshnehlab, Recursive identification of multiple-input single-output fractional-order Hammerstein model with time delay, Appl. Soft Comput. (2018). [10] N.I. Chaudhary, M.S. Aslam, M.A.Z. Raja, Modified volterra LMS algorithm to fractional order for identification of Hammerstein non-linear system, IET Signal Proc. 11 (8) (2017) 975–985. [11] W. Greblicki, M. Pawlak, Hammerstein system identification with the nearest neighbor algorithm, IEEE Trans. Inform. Theory 63 (8) (2017) 4746–4757. [12] C. Huemmer, C. Hofmann, R. Maas, W. Kellermann, Estimating parameters of nonlinear systems using the elitist particle filter based on evolutionary strategies, IEEE/ACM Trans. Audio, Speech Lang. Proc. (TASLP) 26 (3) (2018) 595–608. [13] M.A.Z. Raja, A.A. Shah, A. Mehmood, N.I. Chaudhary, M.S. Aslam, Bioinspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system, Neural Comput. Appl. 29 (12) (2018) 1455–1474. [14] A. Mehmood, M.S. Aslam, N.I. Chaudhary, A. Zameer, M.A.Z. Raja, Parameter estimation for Hammerstein control autoregressive systems using differential evolution, Signal, Image Video Proc. (2018) 1–8. [15] F. Chen, F. Ding, J. Li, Maximum likelihood gradient-based iterative estimation algorithm for a class of input nonlinear controlled autoregressive ARMA systems, Nonlinear Dynam. 79 (2) (2015) 927–936. [16] J. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Appl. Math. Lett. 26 (1) (2013) 91–96. [17] W. Xiong, W. Fan, R.
Ding, Least-squares parameter estimation algorithm for a class of input nonlinear systems, J. Appl. Math. (2012).
[18] J. Li, J. Gu, W. Ma, R. Ding, Maximum likelihood forgetting stochastic gradient estimation algorithm for Hammerstein CARARMA systems, in: In Control and Decision Conference (CCDC), 2012 24th Chinese, IEEE, 2012, pp. 2533–2538. [19] E.W. Bai (Ed.), Block-Oriented Nonlinear System Identification, Vol. 1, London Springer, 2010. [20] M. Pawlak, W. Greblicki, The weighted nearest neighbor estimate for Hammerstein system identification, IEEE Trans. Automat. Control (2018). [21] R. Castro-Garcia, K. Tiels, O.M. Agudelo, J.A. Suykens, Hammerstein system identification through best linear approximation inversion and regularisation, Internat. J. Control 91 (8) (2018) 1757–1773. [22] A. Balestrino, A. Landi, M. Ould-Zmirli, L. Sani, Automatic nonlinear autotuning method for Hammerstein modeling of electrical drives, IEEE Trans. Ind. Electron. 48 (3) (2001) 645–655. [23] J. Wang, Q. Zhang, Detection of asymmetric control valve stiction from oscillatory data using an extended Hammerstein system identification method, J. Process Control 24 (1) (2014) 1–12. [24] Z. Zou, D. Zhao, X. Liu, Y. Guo, C. Guan, W. Feng, N. Guo, Pole-placement self-tuning control of nonlinear Hammerstein system and its application to pH process control, Chin. J. Chem. Eng. 23 (8) (2015) 1364–1368. [25] S.W. Su, L. Wang, B.G. Celler, A.V. Savkin, Oxygen uptake estimation in humans during exercise using a Hammerstein model, Ann. Biomed. Eng. 35 (11) (2007) 1898–1906. [26] W. Greblicki, M. Pawlak, Identification of discrete Hammerstein systems using kernel regression estimates, IEEE Trans. Automat. Control 31 (1) (1986) 74–77. [27] Y. Han, R.A. De Callafon, Hammerstein system identification using nuclear norm minimization, Automatica 48 (9) (2012) 2189–2193. [28] Y. Mao, F. Ding, A. Alsaedi, T. Hayat, Adaptive filtering parameter estimation algorithms for Hammerstein nonlinear systems, Signal Process.
128 (2016) 417–425. [29] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digit. Signal Process. 21 (2) (2011) 215–238. [30] Y. Mao, F. Ding, A novel parameter separation based identification algorithm for Hammerstein systems, Appl. Math. Lett. 60 (2016) 21–27. [31] D. Wang, F. Ding, Parameter estimation algorithms for multivariable Hammerstein CARMA systems, Inform. Sci. 355 (2016) 237–248. [32] Y. Mao, F. Ding, E. Yang, Adaptive filtering-based multi-innovation gradient algorithm for input nonlinear systems with autoregressive noise, Internat. J. Adapt. Control Signal Process. 31 (10) (2017) 1388–1400. [33] Y. Mao, F. Ding, Parameter estimation for nonlinear systems by using the data filtering and the multi-innovation identification theory, Int. J. Comput. Math. 93 (11) (2016) 1869–1885. [34] S. Cheng, Y. Wei, D. Sheng, Y. Chen, Y. Wang, Identification for Hammerstein nonlinear ARMAX systems based on multi-innovation fractional order stochastic gradient, Signal Process. 142 (2018) 1–10. [35] N.I. Chaudhary, M.A.Z. Raja, Design of fractional adaptive strategy for input nonlinear Box–Jenkins systems, Signal Process. 116 (2015) 141–151. [36] N.I. Chaudhary, M.A. Manzar, M.A.Z. Raja, Fractional volterra LMS algorithm with application to Hammerstein control autoregressive model identification, Neural Comput. Appl. (2018). [37] M.S. Aslam, N.I. Chaudhary, M.A.Z. Raja, A sliding-window approximationbased fractional adaptive strategy for Hammerstein nonlinear ARMAX systems, Nonlinear Dynam. 87 (1) (2017) 519–533. [38] N.I. Chaudhary, M.A.Z. Raja, A.U.R. Khan, Design of modified fractional adaptive strategies for Hammerstein nonlinear control autoregressive systems, Nonlinear Dynam. 82 (4) (2015) 1811–1830. [39] M. Elloumi, S. Kamoun, Adaptive control scheme for large-scale interconnected systems described by Hammerstein models, Asian J. Control 19 (3) (2017) 1075–1088. [40] A. Mazaheri, M. Mansouri, M.A. 
Shooredeli, Parameter estimation of Hammerstein-wiener ARMAX systems using unscented kalman filter, in: In Robotics and Mechatronics (ICRoM), 2014 Second RSI/ISM International Conference on, IEEE, 2014, pp. 298–303. [41] M. Mansouri, H. Tolouei, M.A. Shoorehdeli, Identification of Hammersteinwiener armax systems using extended kalman filter, in: In Control and Decision Conference (CCDC), 2011 Chinese, IEEE, 2011, pp. 1110–1114. [42] F. Chen, F. Ding, J. Li, Maximum likelihood gradient-based iterative estimation algorithm for a class of input nonlinear controlled autoregressive ARMA systems, Nonlinear Dynam. 79 (2) (2015) 927–936. [43] J. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Appl. Math. Lett. 26 (1) (2013) 91–96. [44] W. Xiong, W. Fan, R. Ding, Least-squares parameter estimation algorithm for a class of input nonlinear systems, J. Appl. Math. (2012). [45] J. Li, J. Gu, W. Ma, R. Ding, Maximum likelihood forgetting stochastic gradient estimation algorithm for Hammerstein CARARMA systems, in: In Control and Decision Conference (CCDC), 2012 24th Chinese, IEEE, 2012, pp. 2533–2538. [46] H.X. Li, Identification of Hammerstein models using genetic algorithms, IEE Proc.-Control Theory Appl. 146 (6) (1999) 499–504.
[47] A. Gotmare, R. Patidar, N.V. George, Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model, Expert Syst. Appl. 42 (5) (2015) 2538–2546.
[48] Q. Jin, H. Wang, Q. Su, B. Jiang, Q. Liu, A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise, ISA Trans. 72 (2018) 77–91.
[49] M. Cui, H. Liu, Z. Li, Y. Tang, X. Guan, Identification of Hammerstein model using functional link artificial neural network, Neurocomputing 142 (2014) 419–428.
[50] P.S. Pal, R. Kar, D. Mandal, S.P. Ghoshal, Identification of NARMAX Hammerstein models with performance assessment using brain storm optimization algorithm, Internat. J. Adapt. Control Signal Process. 30 (7) (2016) 1043–1070.
[51] P.S. Pal, R. Kar, D. Mandal, S.P. Ghoshal, A hybrid backtracking search algorithm with wavelet mutation-based nonlinear system identification of Hammerstein models, Signal Image Video Process. 11 (5) (2017) 929–936.
[52] S.J. Nanda, G. Panda, B. Majhi, Improved identification of Hammerstein plants using new CPSO and IPSO algorithms, Expert Syst. Appl. 37 (10) (2010) 6818–6831.
[53] P.S. Pal, S. Choudhury, A. Ghosh, S. Kumar, R. Kar, D. Mandal, S.P. Ghoshal, Social emotional optimization algorithm based identification of nonlinear Hammerstein model, in: Communication and Signal Processing (ICCSP), 2016 International Conference on, IEEE, 2016, pp. 1633–1637.
[54] P.S. Pal, A. Ghosh, S. Choudhury, D. Debapriya, R. Kar, D. Mandal, S.P. Ghoshal, Identification of Hammerstein model using bacteria foraging optimization algorithm, in: Communication and Signal Processing (ICCSP), 2016 International Conference on, IEEE, 2016, pp. 1609–1613.
[55] E. Cuevas, P. Díaz, O. Avalos, D. Zaldívar, M. Pérez-Cisneros, Nonlinear system identification based on ANFIS-Hammerstein model using gravitational search algorithm, Appl. Intell. 48 (1) (2018) 182–203.
[56] N. Yang, Q. Jin, A modified BBO algorithm and its application for identifying Hammerstein system under heavy-tailed noises, in: Automation and Computing (ICAC), 2017 23rd International Conference on, IEEE, 2017, pp. 1–5.
[57] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Opt. 11 (4) (1997) 341–359.
[58] S. Srinu, S.L. Sabat, Multinode sensing with forward error correction and differential evolution algorithms for noisy cognitive radio networks, Comput. Electr. Eng. 40 (4) (2014) 1090–1100.
[59] S. Das, S.S. Mullick, P.N. Suganthan, Recent advances in differential evolution – an updated survey, Swarm Evol. Comput. 27 (2016) 1–30.
[60] S. Sarkar, S. Das, S.S. Chaudhuri, Hyper-spectral image segmentation using Rényi entropy based multi-level thresholding aided with differential evolution, Expert Syst. Appl. 50 (2016) 120–129.
[61] N. Swetha, P.N. Sastry, Y.R. Rao, S.L. Sabat, Parzen window entropy based spectrum sensing in cognitive radio, Comput. Electr. Eng. 52 (2016) 379–389.
[62] T.R. Benala, R. Mall, DABE: Differential evolution in analogy-based software development effort estimation, Swarm Evol. Comput. (2017).
[63] S. Kundu, S. Das, A.V. Vasilakos, S. Biswas, A modified differential evolution-based combined routing and sleep scheduling scheme for lifetime maximization of wireless sensor networks, Soft Comput. 19 (3) (2015) 637–659.
[64] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[65] B. Arasteh, A. Bouyer, S. Pirahesh, An efficient vulnerability-driven method for hardening a program against soft-error using genetic algorithm, Comput. Electr. Eng. 48 (2015) 25–43.
[66] U. Yuzgec, Y. Becerikli, M. Turker, Nonlinear predictive control of a drying process using genetic algorithms, ISA Trans. 45 (4) (2006) 589–602.
[67] S.K. Gupta, P. Kuila, P.K. Jana, Genetic algorithm approach for k-coverage and m-connected node placement in target based wireless sensor networks, Comput. Electr. Eng. (2015).
[68] L. Zhang, Z. Wang, X. Hu, F. Sun, D.G. Dorrell, A comparative study of equivalent circuit models of ultracapacitors for electric vehicles, J. Power Sources 274 (2015) 899–906.
[69] W. Mardini, Y. Khamayseh, M.B. Yassein, M.H. Khatatbeh, Mining internet of things for intelligent objects using genetic algorithm, Comput. Electr. Eng. (2017).
[70] M.A.Z. Raja, M.A. Manzar, F.H. Shah, F.H. Shah, Intelligent computing for Mathieu's systems for parameter excitation, vertically driven pendulum and dusty plasma models, Appl. Soft Comput. 62 (2018) 359–372.
[71] H. Liu, H. Tian, X. Liang, Y. Li, New wind speed forecasting approaches using fast ensemble empirical model decomposition, genetic algorithm, mind evolutionary algorithm and artificial neural networks, Renew. Energy 83 (2015) 1066–1075.
[72] R. Hooke, T.A. Jeeves, Direct search solution of numerical and statistical problems, J. ACM 8 (2) (1961) 212–229.
[73] V. Torczon, On the convergence of pattern search algorithms, SIAM J. Opt. 7 (1) (1997) 1–25.
[74] J.B. Tan, Y. Liu, L. Wang, W.G. Yang, Identification of modal parameters of a system with high damping and closely spaced modes by combining continuous wavelet transform with pattern search, Mech. Syst. Signal Process. 22 (5) (2008) 1055–1060.
[75] R. Mukherjee, B. Biswas, I. Chakrabarti, P.K. Dutta, A.K. Ray, Efficient VLSI design of adaptive rood pattern search algorithm for motion estimation of high definition videos, Microprocess. Microsyst. 45 (2016) 105–114.
[76] Y. Nie, K.K. Ma, Adaptive rood pattern search for fast block-matching motion estimation, IEEE Trans. Image Process. 11 (12) (2002) 1442–1449.
[77] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671–680.
[78] A. Xenakis, F. Foukalas, G. Stamoulis, Cross-layer energy-aware topology control through simulated annealing for WSNs, Comput. Electr. Eng. 56 (2016) 576–590.
[79] R. Nayak, J.D. Sharma, A hybrid neural network and simulated annealing approach to the unit commitment problem, Comput. Electr. Eng. 26 (6) (2000) 461–477.
[80] P.L. Green, Bayesian system identification of a nonlinear dynamical system using a novel variant of simulated annealing, Mech. Syst. Signal Process. 52 (2015) 133–146.
[81] M.A.Z. Raja, S.A. Niazi, S.A. Butt, An intelligent computing technique to analyze the vibrational dynamics of rotating electrical machine, Neurocomputing 219 (2017) 280–299, http://dx.doi.org/10.1016/j.neucom.2016.09.032.
[82] Z. Masood, K. Majeed, R. Samar, M.A.Z. Raja, Design of Mexican hat wavelet neural networks for solving Bratu type nonlinear systems, Neurocomputing 221 (2017) 1–14.
[83] M.A.Z. Raja, U. Farooq, N.I. Chaudhary, A.M. Wazwaz, Stochastic numerical solver for nanofluidic problems containing multi-walled carbon nanotubes, Appl. Soft Comput. 38 (2016) 561–586.
[84] L. Zhang, X. Hu, Z. Wang, F. Sun, D.G. Dorrell, A review of supercapacitor modeling, estimation, and applications: A control/management perspective, Renew. Sustain. Energy Rev. (2017).
[85] F. Sun, X. Hu, Y. Zou, S. Li, Adaptive unscented Kalman filtering for state of charge estimation of a lithium-ion battery for electric vehicles, Energy 36 (5) (2011) 3531–3540.
[86] L. Zhang, X. Hu, Z. Wang, F. Sun, D.G. Dorrell, Fractional-order modeling and state-of-charge estimation for ultracapacitors, J. Power Sources 314 (2016) 28–34.
[87] C. Sun, X. Hu, S.J. Moura, F. Sun, Velocity predictors for predictive energy management in hybrid electric vehicles, IEEE Trans. Control Syst. Technol. 23 (3) (2015) 1197–1204.
[88] L. Zhang, X. Hu, Z. Wang, F. Sun, J. Deng, D.G. Dorrell, Multiobjective optimal sizing of hybrid energy storage system for electric vehicles, IEEE Trans. Veh. Technol. 67 (2) (2018) 1027–1035.