Simulation Modelling Practice and Theory 16 (2008) 560–570
Modeling of the grounding resistance variation using ARMA models

S.Sp. Pappas a, L. Ekonomou b,*, P. Karampelas b, S.K. Katsikas c, P. Liatsis d
a University of the Aegean, Department of Information and Communication Systems Engineering, Karlovassi, 83 200 Samos, Greece
b Hellenic American University, 12 Kaplanon Street, 106 80 Athens, Greece
c University of Piraeus, Department of Technology Education and Digital Systems, 150 Androutsou Street, 18 532 Piraeus, Greece
d City University, School of Engineering and Mathematical Sciences, Division of Electrical Electronic and Information Engineering, Information and Biomedical Engineering Centre, Northampton Square, London EC1V 0HB, United Kingdom

Received 6 September 2007; received in revised form 9 February 2008; accepted 19 February 2008; available online 29 February 2008
Abstract

This study addresses the problem of modeling the variation of the grounding resistance during the year. An AutoRegressive Moving Average (ARMA) model is fitted (off-line) to the provided actual data using the Corrected Akaike Information Criterion (AICC). The developed model is shown to fit the data well. Difficulties arise when the provided data include noise or errors, and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models in the presence of noise is necessary. In this paper, a new method based on multi-model partitioning theory, which is also applicable to on-line/adaptive operation, is used to solve this problem. The simulations show that the proposed method selects the correct ARMA model order and estimates the parameters accurately in very few steps, even with a small sample size. For validation purposes the method is compared with three other established order selection criteria, with very good results. The proposed method can be extremely useful to electrical design engineers, since the variation of the grounding resistance during the year significantly affects power system performance and must be taken into account.
© 2008 Elsevier B.V. All rights reserved.

Keywords: Adaptive multi-model; ARMA; Filtering; Grounding resistance; Kalman; Order selection; Parameter estimation
1. Introduction

The grounding system is intended to provide a safe, zero-volt baseline for the electrical distribution in a building or in outdoor electrical equipment. In industrial applications and among electrical engineers, however, the grounding resistance is a more serious issue. If the grounding resistance is low, the grounding system can be counted on to remain very close to zero volts. If the grounding resistance is too high, anything bonded to the
grounding system can potentially be elevated to hazardous voltages in the event of a ground fault or other accident. The grounding resistance of transmission and distribution lines also plays an important role in their lightning performance. Lightning strikes to transmission and distribution lines have frequently caused serious accidents involving electrical apparatus, such as failures of electrical transmission. Furthermore, the variation of the grounding resistance during the year (high temperatures and low rainfall dry up the ground in the summer months, in contrast to the winter months) is an important issue that complicates the studies of electrical design engineers, since it must be taken into account. Therefore, modeling the variation of the grounding resistance during the year is essential and can be very useful in such studies.

The problem of fitting a multivariate (MV) ARMA model to a given time series arises in a large variety of applications, such as speech analysis [1], biomedical applications [2], hydrology [3], electric power systems [4], simulating earthquake ground motions [5], effective multi-channel identification of structures under unobservable excitation [6] and many more. The aim of this paper is not to add yet another ARMA model selection criterion to the rich literature in this area; rather, it extends the model order selection criterion proposed for MV AR models in [7]. The method is based on the well known adaptive multi-model partitioning theory [8–10]. It is not restricted to the Gaussian case, it is applicable to on-line/adaptive operation, and it is computationally efficient. Furthermore, it identifies the correct model order and estimates the model parameters very fast. In order to demonstrate its effectiveness, the adaptive multi-model partitioning approach [8–10] is compared with three other well established ARMA order selection criteria, namely the Corrected Akaike Information Criterion (AICC) [11], Akaike's Information Criterion (AIC) [12] and Schwarz's Bayesian Information Criterion (BIC) [13]. Actual grounding resistance data measured in the area of Athens, Greece by the National Technical University of Athens [14] is used in this study, and an ARMA model is produced off-line using Matlab® and the ARMASA Toolbox [15,16].

2. Modeling with ARMA processes

An m-variate ARMA model of order (p, q) [ARMA(p, q)] for a stationary time series of vectors y observed at equally spaced instants k = 1, 2, ..., n is defined as

y_k = \sum_{i=1}^{p} A_i y_{k-i} + \sum_{j=1}^{q} B_j v_{k-j} + v_k,   E[v_k v_k^T] = R    (1)
where the m-dimensional vector v_k is uncorrelated random noise, not necessarily Gaussian, with zero mean and a diagonal variance matrix R, θ = (p, q) is the order of the predictor, and A_1, ..., A_p, B_1, ..., B_q are the m × m coefficient matrices of the multivariate (MV) ARMA model. The problem under consideration therefore involves the successful determination of the predictor's order θ = (p, q) and the accurate estimation of the predictor's matrix coefficients {A_i, B_j}.

The actual grounding resistance data used in this work (Fig. 1, upper panel) was measured in the area of Athens, Greece by the National Technical University of Athens [14]. The first necessary step is to fit an appropriate ARMA model to the provided data. Determining the appropriate ARMA process is usually the most important part of the problem. Over the past years a substantial literature has been produced on this problem and various criteria, such as Akaike's, Rissanen's, Schwarz's and Wax's [12,13,17–19], have been proposed. In this study, an ARMA process is fitted to the data off-line using the ARMASA Toolbox. This toolbox applies automatic autocorrelation and spectral analysis [15,16], a method which fulfils the near-optimal-solution criterion. It takes advantage of greater computing power and robust algorithms to produce enough candidate models to ensure that a suitable one is available for the given data. The algorithm performs well both for small samples and for very large numbers of observations. A single model is selected from the provided data, and the method also suggests other good models if they exist [15,16,20,21]. The optimization procedure leads to the following ARMA(4, 2) model with parameters:

y_k = \sum_{i=1}^{4} A_i y_{k-i} + \sum_{j=1}^{2} B_j v_{k-j} + v_k    (2)

A_1 = [1], A_2 = [1.0424], A_3 = [0.0050], A_4 = [0.0659], B_1 = [1], B_2 = [0.0171].
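The fit above was obtained off-line with Matlab® and the ARMASA Toolbox. Purely as an illustration of the same idea, and not of the authors' actual workflow, the short Python sketch below fits candidate ARMA(p, q) models to a resistance series and keeps the AICC-best one; the series `resistance`, the grid bounds and the helper `fit_best_arma` are hypothetical stand-ins.

```python
# Hedged sketch: AICC-based off-line ARMA order selection (not the ARMASA toolbox).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_best_arma(y, max_p=5, max_q=3):
    """Return ((p, q), results) of the AICC-best ARMA(p, q) fit over a small grid."""
    best = None
    for p in range(1, max_p + 1):
        for q in range(0, max_q + 1):
            try:
                res = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue                      # skip orders that fail to converge
            if best is None or res.aicc < best[1].aicc:
                best = ((p, q), res)
    return best

# Synthetic stand-in for the measured grounding resistance (180 observations):
rng = np.random.default_rng(0)
resistance = 30 + 5 * np.sin(np.linspace(0, 4 * np.pi, 180)) + rng.normal(0, 1, 180)
order, res = fit_best_arma(resistance)
print("selected ARMA order:", order)
```

On the real measurements, such a grid search plays the role of the automatic selection stage of [15,16]; the resulting coefficient estimates can then be compared against those listed in Eq. (2).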
Fig. 1. Grounding resistance – real data provided (upper panel: actual resistance in Ohms versus observation number; middle panel: actual data corrupted with noise; lower panel: comparison of the actual and noisy data sets).
3. Problem reformulation using the multi-model partitioning filter (MMPF)

The model used in this study is a univariate model, whereas the proposed algorithm for ARMA model order and parameter estimation concerns the general multivariate case. The method is easily reduced to the univariate case by rearranging Eqs. (5)–(10) appropriately. Assuming that the model order fitting the data is known and equal to θ = (p, q), (1) can be written in standard state-space form as

x(k + 1) = x(k)    (3)
y(k) = H(k) x(k) + v(k)    (4)

where x(k) is an m²(p + q) × 1 vector made up of the coefficients of the matrices {A_1, ..., A_p, B_1, ..., B_q}, and H(k) is an m × m²(p + q) observation history matrix of the process {y(k)} up to time k − (p + q). Assuming that the general forms of the matrices A_p and B_q are

A_p = \begin{bmatrix} a_{11}^p & \cdots & a_{1m}^p \\ \vdots & \ddots & \vdots \\ a_{m1}^p & \cdots & a_{mm}^p \end{bmatrix}    (5)

B_q = \begin{bmatrix} b_{11}^q & \cdots & b_{1m}^q \\ \vdots & \ddots & \vdots \\ b_{m1}^q & \cdots & b_{mm}^q \end{bmatrix}    (6)

the state vector and the observation matrix are defined as

x(k) ≜ [a^1_{11} a^1_{21} ... a^1_{m1} ⋮ a^1_{12} a^1_{22} ... a^1_{m2} ⋮ ... a^1_{mm} ⋮ ... a^p_{mm} ⋮ b^1_{11} b^1_{21} ... b^1_{m1} ⋮ b^1_{12} b^1_{22} ... b^1_{m2} ⋮ ... b^1_{mm} ⋮ ... b^q_{mm}]^T    (7)

H(k) ≜ [y_1(k−1)I ... y_m(k−1)I ⋮ ... ⋮ y_1(k−p)I ... y_m(k−p)I ⋮ v_1(k−1)I ... v_m(k−1)I ⋮ ... ⋮ v_1(k−q)I ... v_m(k−q)I]    (8)
where I is the m × m identity matrix and θ = (p, q) is the model order. If the system model and its statistics were completely known, the Kalman filter (KF), in its various forms, would be the optimal estimator in the minimum variance sense. Moreover, if the prediction coefficients are subject to random perturbations, (3) becomes

x(k + 1) = x(k) + w(k)    (9)

where v(k), w(k) are independent, zero-mean, white processes, not necessarily Gaussian. The form of w(k) is

w(k) ≜ [w^1_{11} w^1_{21} ... w^1_{m1} ⋮ w^1_{12} w^1_{22} ... w^1_{m2} ⋮ ... w^1_{mm} ⋮ ... w^p_{mm} ⋮ w^1_{11} w^1_{21} ... w^1_{m1} ⋮ w^1_{12} w^1_{22} ... w^1_{m2} ⋮ ... w^1_{mm} ⋮ ... w^q_{mm}]^T    (10)
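To make the stacking in Eqs. (3)–(10) concrete, the following sketch builds the state vector x(k) of Eq. (7) and the observation matrix H(k) of Eq. (8) for the univariate case (m = 1); the numerical lag values are hypothetical, and the coefficients are those listed in Eq. (2).

```python
# Sketch for m = 1: x(k) stacks the ARMA coefficients, H(k) stacks the lagged
# outputs and noise terms, so that y(k) = H(k) x(k) + v(k) as in Eq. (4).
import numpy as np

def build_state_and_H(a, b, y_hist, v_hist):
    """a: A1..Ap, b: B1..Bq, y_hist: [y(k-1),...,y(k-p)], v_hist: [v(k-1),...,v(k-q)]."""
    x = np.concatenate([a, b])                       # state vector of Eq. (7)
    H = np.concatenate([y_hist, v_hist])[None, :]    # 1 x (p+q) matrix of Eq. (8)
    return x, H

a = np.array([1.0, 1.0424, 0.0050, 0.0659])   # A1..A4 from Eq. (2)
b = np.array([1.0, 0.0171])                   # B1, B2 from Eq. (2)
y_hist = np.array([31.2, 30.8, 29.9, 30.1])   # hypothetical past outputs
v_hist = np.array([0.3, -0.1])                # hypothetical past noise samples
x, H = build_state_and_H(a, b, y_hist, v_hist)
y_k = (H @ x).item() + 0.2                    # one observation, Eq. (4), with v(k) = 0.2
```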
A complete system description requires the assignment of values to the variances of the random processes w(k) and v(k). We adopt the usual assumption that w(k) and v(k) are at least wide-sense stationary processes, so that their variances, Q and R respectively, are time invariant. Obtaining these values is not always trivial; if Q and R are not known, they can be estimated using a method such as the one described in [22]. In the case of coefficients that are constant in time, or slowly varying, Q is assumed to be zero. It is also necessary to assume an a priori mean and variance for each {A_i, B_j}. The a priori mean of the A_i(0)'s and B_j(0)'s can be set to zero if no knowledge about their values is available before any measurements are taken (the most likely case). On the other hand, the usual choice for the initial variance of the A_i's and B_j's, denoted by P_0, is P_0 = nI, where n is a large integer.

Consider now the case where the system model is not completely known. The adaptive multi-model partitioning filter (MMPF) is one of the most widely used approaches for such problems. This approach was introduced by Lainiotis [8–10]; it summarizes the parametric model uncertainty into an unknown, finite-dimensional parameter vector whose values are assumed to lie within a known set of finite cardinality. A non-exhaustive list of reformulations, extensions and applications of the MMPF to a variety of problems can be found in [23–29]. In the problem studied in this paper it is assumed that the model uncertainty is the lack of knowledge of the model order θ. It is also assumed that the model order θ lies within a known set of finite cardinality: 1 ≤ θ ≤ M, where θ = (p, q) is the model order. The MMPF operates on the following discrete model:

x(k + 1) = F(k + 1, k/θ) x(k) + w(k)    (11)
y(k) = H(k/θ) x(k) + v(k)    (12)
where θ = (p, q) is the unknown parameter, in this case the model order. A block diagram of the MMPF is presented in Fig. 2. In the Gaussian case the optimal MMSE estimate of x(k) is given by

x̂(k/k) = \sum_{j=1}^{M} x̂(k/k; θ_j) p(θ_j/k)    (13)
A finite set of models is designed, each matching one value of the parameter vector. If the prior probabilities p(θ_j/k) for each model are already known, these are assigned to each model. In the absence of any prior knowledge, they are set to p(θ_j/k) = 1/M, where M is the cardinality of the model set.
Fig. 2. MMPF block diagram.
A bank of conventional elemental filters (non-adaptive, e.g. Kalman) is then applied, one for each model; these can be run in parallel. At each iteration, the MMPF selects the model which corresponds to the maximum a posteriori probability as the correct one. This probability tends to 1, while the others tend to 0. The overall optimal estimate can be taken either as the weighted average of the estimates produced by the elemental filters, as in (13), or as the individual estimate of the elemental filter which exhibits the highest posterior probability, i.e. the maximum a posteriori (MAP) estimate [29]; the latter is used in this paper. The weights are determined by the posterior probability that each model in the model set is in fact the true one. The posterior probabilities are calculated on-line in a recursive manner as follows:

p(θ_j/k) = L(k/k; θ_j) p(θ_j/k−1) / \sum_{j=1}^{M} L(k/k; θ_j) p(θ_j/k−1)    (14)

L(k/k; θ_j) = |P_ỹ(k/k−1; θ_j)|^{−1/2} exp[−(1/2) ỹ^T(k/k−1; θ_j) P_ỹ^{−1}(k/k−1; θ_j) ỹ(k/k−1; θ_j)]    (15)

where

ỹ(k/k−1; θ_j) = y(k) − H(k; θ_j) x̂(k/k−1; θ_j)    (16)
P_ỹ(k/k−1; θ_j) = H(k; θ_j) P(k/k; θ_j) H^T(k; θ_j) + R    (17)
Note that in (8)–(17), j = 1, 2, ..., M. An important feature of the MMPF is that the Kalman filters that need to be implemented can be realized independently. This enables their implementation in parallel, saving substantial computational time [29]. Eqs. (13) and (14) refer to the present case, where the sample space is naturally discrete. In real-world applications, however, the probability density function (pdf) of θ is continuous and an infinite number of Kalman filters would have to be applied for the exact realization of the optimal estimator. The usual approach to overcoming this difficulty is to approximate the pdf of θ by a finite sum. Many discretization strategies have been proposed in the literature, some of which are presented in [30,31]. When the true parameter value lies outside the assumed sample space, the adaptive estimator converges to the value in the sample space that is closest (i.e. minimizes the Kullback information measure) to the true one [32]. This means that the value of the unknown parameter cannot be defined exactly. Hybrid techniques that combine the MMPF with genetic algorithms are able to overcome this difficulty [24,33].
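As a rough illustration of the recursion in Eqs. (13)–(17), the sketch below runs a bank of Kalman filters on the constant-state model of Eqs. (3)–(4) and updates the posterior model probabilities; it is restricted, for brevity, to univariate pure-AR candidates (θ = p, q = 0), and the priors, noise variance and function names are assumptions rather than the authors' implementation.

```python
# Hedged MMPF sketch: one Kalman filter per candidate AR order, posterior
# probabilities updated as in Eqs. (14)-(15), MAP order selection at the end.
import numpy as np

def mmpf_ar(y, M=10, R=1.25, P0=1e3):
    filters = [{"x": np.zeros(p), "P": P0 * np.eye(p)} for p in range(1, M + 1)]
    prob = np.ones(M) / M                            # non-informative priors, 1/M each
    for k in range(M, len(y)):                       # first M samples initialize the lags
        L = np.zeros(M)
        for j, f in enumerate(filters):
            p = j + 1
            H = y[k - p:k][::-1][None, :]            # H(k) = [y(k-1) ... y(k-p)]
            innov = y[k] - (H @ f["x"]).item()       # innovation, Eq. (16)
            S = (H @ f["P"] @ H.T).item() + R        # innovation variance, Eq. (17)
            K = f["P"] @ H.T / S                     # Kalman gain
            f["x"] = f["x"] + K.ravel() * innov
            f["P"] = f["P"] - K @ H @ f["P"]
            L[j] = np.exp(-0.5 * innov**2 / S) / np.sqrt(S)   # likelihood, Eq. (15)
        prob = L * prob / np.sum(L * prob)           # posterior update, Eq. (14)
    j_map = int(np.argmax(prob))                     # MAP model, as used in the paper
    return j_map + 1, filters[j_map]["x"], prob

# Quick check on a synthetic AR(2) series:
rng = np.random.default_rng(1)
e, y = rng.normal(0, np.sqrt(1.25), 200), np.zeros(200)
for k in range(2, 200):
    y[k] = 0.6 * y[k - 1] - 0.3 * y[k - 2] + e[k]
print(mmpf_ar(y)[0])                                 # should tend to order 2
```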
4. Application

The problem of ARMA modeling is much more difficult when an adaptive, on-line procedure is required and when noise is present. The criteria mentioned earlier (Akaike's, Rissanen's, Schwarz's and Wax's [12,13,17–19]) are not always optimal and are also known to suffer from deficiencies; for example, Akaike's information criterion suffers from overfitting [34]. Their performance also depends on the assumption that the data are Gaussian and on asymptotic results, and their applicability is justified only for large samples; furthermore, they are two-pass methods, so they cannot be used in an on-line or adaptive fashion.

The actual grounding resistance data provided (Fig. 1, upper panel) is corrupted with a significant amount of noise with variance R = [1.25]. As can be seen from Fig. 1 (middle and lower panels), the new data set is clearly different from the original one (upper panel). A comparison amongst the following criteria is performed:

the Corrected Akaike Information Criterion (AICC),

AICC = log R̂_θ + 2(p + q + 1)n / (n − p − q − 2)

the Akaike Information Criterion (AIC),

AIC = log R̂_θ + 2(p + q)/n    (18)

the Bayesian Information Criterion (BIC),

BIC = log R̂_θ + (p + q) log(n)/n    (19)

and the MMPF.
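A minimal sketch of these criteria, applied to a grid of candidate orders, is given below; `residual_variance` is a hypothetical placeholder for any routine that fits an ARMA(p, q) model and returns the estimated noise variance R̂_θ (for instance the sample variance of the one-step residuals of a fitted model), and the exact scaling conventions of these criteria vary across the literature.

```python
# Hedged sketch of the order-selection criteria of Eqs. (18)-(19).
import numpy as np

def aicc(r_hat, p, q, n):
    return np.log(r_hat) + 2 * (p + q + 1) * n / (n - p - q - 2)

def aic(r_hat, p, q, n):
    return np.log(r_hat) + 2 * (p + q) / n           # Eq. (18)

def bic(r_hat, p, q, n):
    return np.log(r_hat) + (p + q) * np.log(n) / n   # Eq. (19)

def select_order(y, residual_variance, criterion=aic, max_p=5, max_q=3):
    """Return the (p, q) pair minimizing the chosen criterion over a small grid."""
    n = len(y)
    scores = {(p, q): criterion(residual_variance(y, p, q), p, q, n)
              for p in range(1, max_p + 1) for q in range(0, max_q + 1)}
    return min(scores, key=scores.get)
```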
In order to assess further the performance of our method, the simulation experiments were conducted over 100 Monte Carlo runs for three different data sets: one with 50 samples, one with 100, and finally one with almost all of the data provided, that is 150 samples. For details on the application of stochastic Monte Carlo techniques see [35,36]. As far as the MMPF is concerned, the initial cardinality M was set to 10 for all three data sets.

5. Results

This section presents the simulation results. The actual data, provided by the measurements performed in the area of Athens, Greece by the National Technical University of Athens [14] (Fig. 1, upper panel), is corrupted with a significant amount of noise (Fig. 1, middle and lower panels). Each criterion was run for 100 Monte Carlo runs on three different data sub-sets (50 samples, 100 samples and 150 samples) of the produced noisy data set. The aim was to investigate which criterion is able to successfully detect the model order as well as to estimate the model coefficients. The middle panel of Figs. 3–6 shows the logarithmic spectrum, since linear spectra are much less informative.

As a general comment, all of the criteria performed quite satisfactorily and their performance lies inside the 95% confidence interval (Figs. 3–6, lower panel). However, according to the results, the MMPF performs best among the compared methods, since the characteristics of its estimated model are very close to those of the true one (Figs. 3–6). Moreover, it is infallible in recognizing the correct model order (Fig. 7), whether a small or a larger data set is used. Additionally, the MMPF needs the smallest data set for accurate parameter estimation (Table 1); the only criterion that achieves the same performance is the BIC, for the large data set of 150 samples. In addition, the MMPF identifies the correct model order very fast, in just 16 steps (Fig. 8). Once the MMPF has converged to the correct model order, in this case θ = 6, its probability tends to 1 and the probabilities of the rest of the models tend to 0. Observing Fig. 8, it is obvious that P(θ = 6) is constantly 1 after step 16, while all the others are almost 0 after that time instant. Convergence is taken to occur when the posterior probability of the model exceeds 0.9. Note that the abscissa in Fig. 8 starts from the 10th time instant. This is because of the initial assumption that the unknown model order θ lies between [1, M], where M = 10. Consequently, the first M samples of the data set are used for algorithm initialization.
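The Monte Carlo experiment described above can be organized along the lines of the following sketch (illustrative only: `selector` stands for any of the order-selection routines, the true order is that of Eq. (2), and the noise variance 1.25 follows the value quoted in Section 4).

```python
# Hedged sketch of the Monte Carlo comparison: corrupt the data with noise,
# draw sub-sets of 50/100/150 samples, count correct order selections.
import numpy as np

def monte_carlo(data, selector, true_order=(4, 2), runs=100, sizes=(50, 100, 150)):
    rng = np.random.default_rng(2)
    hits = {n: 0 for n in sizes}
    for _ in range(runs):
        noisy = data + rng.normal(0.0, np.sqrt(1.25), size=len(data))
        for n in sizes:
            if selector(noisy[:n]) == true_order:
                hits[n] += 1
    return hits                                  # successes out of `runs` per sub-set size
```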
Fig. 3. Performance of the MMPF estimated model (upper panel: true and estimated autocorrelation function versus time lag; middle panel: true and estimated spectrum, log PSD scale versus normalized frequency; lower panel: prediction of the continuation of the data with 95% confidence interval, starting with the last observation).
Fig. 4. Performance of the AICC estimated model (same panel layout as Fig. 3).
Fig. 5. Performance of the AIC estimated model (same panel layout as Fig. 3).
Fig. 6. Performance of the BIC estimated model (same panel layout as Fig. 3).
Fig. 7. Criteria comparison (Monte Carlo runs per criterion, for sample sizes of 50, 100 and 150).

Table 1
Estimated ARMA parameters, using a 50 sample noisy data set

Parameter   True ARMA parameters   MMPF estimate   AICC estimate   AIC estimate   BIC estimate
A1          1                      0.999           0.9992          0.9542         0.9842
A2          1.0424                 1.0492          0.923           0.9342         1.031
A3          0.005                  0.0047          0.0043          0.0037         0.0041
A4          0.0659                 0.0665          0.0557          0.0565         0.0664
B1          1                      1               0.9989          0.9783         0.9874
B2          0.0171                 0.0176          0.0169          0.0198         0.0152
Fig. 8. MMPF probability sequence (upper panel: p(θ = 1), ..., p(θ = 5); lower panel: p(θ = 6), ..., p(θ = 10); horizontal axis: time).
6. Conclusions

The proposed method (MMPF) successfully selects the correct model order in very few steps (16 iterations are enough) and identifies the ARMA parameters very accurately. Comparison with other established order
selection criteria (AIC, AICC and BIC) shows that the developed method needs the shortest data set for successful order identification and accurate parameter estimation, whereas the other criteria require longer data sets either to achieve the same performance or to attain a performance greater than 90%. The proposed method, which successfully models the grounding resistance variation, can be extremely useful to electrical design engineers, since the variation of the grounding resistance during the year significantly affects power system performance.

Acknowledgment

This paper is dedicated to the memory of Prof. Dimitrios G. Lainiotis, the founder of the multi-model partitioning theory, who suddenly passed away in 2006.

References

[1] C.P. Chen, J.A. Bilmes, MVA processing of speech features, IEEE Transactions on Audio, Speech and Language Processing 15 (1) (2007) 257–270.
[2] Lu Sheng, Ju Ki Hwan, K.H. Chon, A new algorithm for linear and nonlinear ARMA model parameter estimation using affine geometry [and application to blood flow/pressure data], IEEE Transactions on Biomedical Engineering 48 (10) (2001) 1116–1124.
[3] M. Kourosh, H.R. Eslami, R. Kahawita, Parameter estimation of an ARMA model for river flow forecasting using goal programming, Journal of Hydrology 331 (1–2) (2006) 293–299.
[4] J. Derk, S. Weber, C. Weber, Extended ARMA models for estimating price developments on day-ahead electricity markets, Electric Power Systems Research 77 (5–6) (2007) 583–593.
[5] A.A. Mobarakeh, F.R. Rofooei, G. Ahmadi, Simulation of earthquake records using time-varying ARMA (2, 1) model, Probabilistic Engineering Mechanics 17 (2002) 15–34.
[6] V. Papakos, D.S. Fassois, Multichannel identification of aircraft skeleton structures under unobservable excitation: a vector AR/ARMA framework, Mechanical Systems and Signal Processing 17 (6) (2003) 1271–1290.
[7] S.Sp. Pappas, A.K. Leros, S.K. Katsikas, Joint order and parameter estimation of multivariate autoregressive models using multi-model partitioning theory, Digital Signal Processing 16 (6) (2006) 782–795.
[8] D.G. Lainiotis, Optimal adaptive estimation: structure and parameter adaptation, IEEE Transactions on Automatic Control AC-16 (1971) 160–170.
[9] D.G. Lainiotis, Partitioning: a unifying framework for adaptive systems I: Estimation, Proceedings of the IEEE 64 (8) (1976) 1126–1143.
[10] D.G. Lainiotis, Partitioning: a unifying framework for adaptive systems II: Control, Proceedings of the IEEE 64 (8) (1976) 1182–1198.
[11] C.C. Chen, R.A. Davis, P.J. Brockwell, Order determination for multivariate autoregressive processes using resampling methods, Journal of Multivariate Analysis 57 (1996) 175–190.
[12] H. Akaike, Fitting autoregressive models for prediction, Annals of the Institute of Statistical Mathematics 21 (1969) 243–247.
[13] G. Schwarz, Estimation of the dimension of the model, Annals of Statistics 6 (1978) 461–464.
[14] I.F. Gonos, A.X. Moronis, I.A. Stathopulos, Variation of soil resistivity and ground resistance during the year, in: 28th International Conference on Lightning Protection (ICLP 2006), Kanazawa, Japan, 2006, pp. 740–744.
[15] P.M.T. Broersen, Automatic Autocorrelation and Spectral Analysis, first ed., Springer, 2006.
[16] P.M.T. Broersen, ARMAsel for identification of univariate measurement data, in: Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC 2006), 2006, pp. 107–112.
[17] J. Rissanen, Modeling by shortest data description, Automatica 14 (1978) 465–471.
[18] J. Rissanen, A predictive least squares principle, IMA Journal of Mathematical Control and Information 3 (1986) 211–222.
[19] M. Wax, Order selection for AR models by predictive least squares, IEEE Transactions on Acoustics, Speech and Signal Processing 36 (1988) 581–588.
[20] P.M.T. Broersen, S. de Waele, Automatic identification of time series models from long autoregressive models, IEEE Transactions on Instrumentation and Measurement 54 (5) (2005) 1862–1868.
[21] P.M.T. Broersen, S. de Waele, Finite sample properties of ARMA order selection, IEEE Transactions on Instrumentation and Measurement 53 (3) (2004) 645–651.
[22] A.P. Sage, G.W. Husa, Adaptive filtering with unknown prior statistics, in: Proceedings of the Joint Automatic Control Conference, Boulder, Colorado, 1969, pp. 760–769.
[23] K. Watanabe, Adaptive Estimation and Control: Partitioning Approach, Prentice Hall, Englewood Cliffs, NJ, 1992.
[24] S.K. Katsikas, S.D. Likothanassis, G.N. Beligiannis, K.G. Berkeris, D.A. Fotakis, Genetically determined variable structure multiple model estimation, IEEE Transactions on Signal Processing 49 (10) (2001) 2253–2261.
[25] D.G. Lainiotis, P. Papaparaskeva, A partitioned adaptive approach to nonlinear channel equalization, IEEE Transactions on Communications 46 (10) (1998) 1325–1336.
[26] V.C. Moussas, S.D. Likothanassis, S.K. Katsikas, A.K. Leros, Adaptive on-line multiple source detection, in: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), vol. 4, 2005, pp. 1029–1032.
[27] N.V. Nikitakos, A.K. Leros, S.K. Katsikas, Towed array shape estimation using multimodel partitioning filters, IEEE Journal of Oceanic Engineering 23 (4) (1998) 380–384.
[28] V.C. Moussas, S.Sp. Pappas, Adaptive network anomaly detection using bandwidth utilization data, in: 1st International Conference on Experiments/Processes/System Modelling/Simulation/Optimization, Athens, 2005.
[29] D.G. Lainiotis, S.K. Katsikas, S.D. Likothanassis, Adaptive deconvolution of seismic signals: performance, computational analysis, parallelism, IEEE Transactions on Acoustics, Speech, and Signal Processing 36 (11) (1988) 1715–1734.
[30] R.L. Sengbush, D.G. Lainiotis, Simplified parameter quantization procedure for adaptive estimation, IEEE Transactions on Automatic Control AC-14 (1969) 424–425.
[31] B.D.O. Anderson, T.S. Brinsmead, F. De Bruyne, J. Hespanha, D. Liberzon, A.S. Morse, Multiple model adaptive control, Part 1: Finite controller coverings, International Journal of Robust and Nonlinear Control 10 (2000) 909–929.
[32] R.M. Hawks, J.B. Moore, Performance of Bayesian parameter estimators for linear signal models, IEEE Transactions on Automatic Control AC-21 (1976) 523–527.
[33] G. Beligiannis, L. Skarlas, S. Likothanassis, A generic applied evolutionary hybrid technique, IEEE Signal Processing Magazine 21 (3) (2004) 28–38.
[34] H. Lutkepohl, Comparison of criteria for estimating the order of a vector AR process, Journal of Time Series Analysis 6 (1985) 35–52.
[35] N. Christakis, Modelling of the microphysical processes that lead to warm rain formation, Ph.D. Thesis, UMIST, Manchester, 1998.
[36] A.N. Shiryaev, Probability, Springer-Verlag, New York, 1996.