Accepted Manuscript Economic statistical design of ARMA control chart through a Modified Fitnessbased Self-Adaptive Differential Evolution A. Costa, S. Fichera PII: DOI: Reference:
S0360-8352(16)30510-1 http://dx.doi.org/10.1016/j.cie.2016.12.031 CAIE 4583
To appear in:
Computers & Industrial Engineering
Received Date: Revised Date: Accepted Date:
26 August 2016 21 December 2016 22 December 2016
Please cite this article as: Costa, A., Fichera, S., Economic statistical design of ARMA control chart through a Modified Fitness-based Self-Adaptive Differential Evolution, Computers & Industrial Engineering (2016), doi: http://dx.doi.org/10.1016/j.cie.2016.12.031
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Economic statistical design of ARMA control chart through a Modified Fitness-based SelfAdaptive Differential Evolution
Costa A.* and Fichera S. University of Catania Dept. of Industrial Engineering V.le A. Doria 6 – 95125 – Catania – Italy
(*) Email:
[email protected] Tel. +39 (0)95 7382457 Fax: +39 (0)95 330258 URL: www.dii.unict.it/users/costa
Economic statistical design of ARMA control chart through a Modified Fitness-based SelfAdaptive Differential Evolution
1. Introduction The use of control chart has been increasingly adopted in modern industries since it was introduced as a Statistical Process Control (SPC) tool to monitor processes and ensure quality. Control chart design issue consists in the optimal selection of a set of parameters, depending both on the kind of chart to be adopted and on the objective to be achieved. For instance, the design of Shewhart X charts needs three parameters to be selected: sample size, sampling interval and width of control limits. Initially, the decision about the design parameters was based solely on statistical criteria (statistical design), aiming to reduce Type-I or Type-II errors occurrence. From Eighties on, the design of control charts under an economic viewpoint has been gaining considerable attention [Montgomery, 1980] and the seminal work of Duncan (1956), wherein a cost model for the selection of Shewart
X
chart parameters is proposed, may
be considered as a corner stone about the modern economic design of control charts. However, due to the poor statistical performance of the Duncan’s cost model, control chart design based on a purely economic approach has been successively revised. Saniga (1989) proposed an economic-statistical design involving statistical measures such as Type-I and Type-II error probabilities, to bridge the gap between designs based on solely statistical criteria and those based only on economic considerations. Similarly, Lorenzen and Vance (1986) included the average run length when the process is in-control (ARL0) and the average run length when the process goes to an out-of-control state (ARL1) in their unified cost function. Subsequently, McWilliams (1994) introduced an economic statistical design of
X
control chart, while
Montgomery et al. (1995) adopted the economic statistical approach for designing EWMA control charts.
The costs of sampling, testing and investigating out of control signals, correcting assignable causes and costs of allowing non-conforming units that potentially may reach the final consumer are affected by control chart parameters. In order to improve the performance of control charts in shifts monitoring and in cost reduction as well, several search techniques have been applied to select the optimal chart parameters. Since the problem under investigation can be classified as a mixed discrete-continuous nonlinear problem, stochastic optimization methods based on evolutionary algorithms (EAs) have been extensively used by literature. With no doubts, Genetic Algorithms (GAs) are the most popular technique, widely employed by the body of literature coping with the economic design of control charts (Chou et al. 2006; Chen et al. 2007; Torng et al. 2009; Niaki et al. 2010; Franco et al. 2012; Lee et al. 2012; Lin et al. 2012; Franco et al. 2014). However, in the last decades, several alternative metaheuristic (ME) algorithms inspired to natural phenomena have sparkled the interest of both academics and practitioners for their effectiveness in solving nonlinear problems with complex and concave objective functions. Only a few of them have been implemented in the field of control charts design, so far. Chich et al. (2011) demonstrated as the particle swarm optimization (PSO) is a promising method for solving the problem related to the economic and economic statistical design of an X control chart. Niaki et al. (2011) adopted the same evolutionary approach to perform a comparison study between the economic and the economic statistical designs for Multivariate Exponentially Weighted Moving Average (MEWMA) control charts. Recently, the following research contributions focused on the economic statistical design of the MEWMA control chart through novel metaheuristics. Malaki et al. (2011) developed a comparative analysis involving four different metaheuristics, namely GA, PSO, Simulated Annealing (SA) and Differential Evolution (DE). Niaki and Ershadi (2012) developed an Ant Colony Optimization algorithm for determining the best chart parameters. Barzinpour et al. (2015) employed a hybrid PSO involving a proper Nelder-Mead technique for both the economic and the economic-statistical design. Most academic contributions dealing with the economic or economic statistical control chart design issue assume that measures from the process are independents. Actually, many industrial processes are
characterized by the presence of autocorrelation in the measured data, thus affecting the statistical performance of control charts. Indeed, autocorrelation may trigger a series of false out-of-control alarms so that the reliability of any control chart can be nullified. To monitor autocorrelated observations, various control charts have been developed to detect shifts in the mean of the process. Furtheremore, realworld industrial applications of control chart or control schemes dealing with autocorrelated processes are ascribable to Mertens et a. (2009), Kadri et al. (2016), Jamshidieini and Fazaee (2016) and Fu et al. (2107). The autoregressive moving average (ARMA) control chart is a valid approach to monitor autocorrelated data (Jiang et al. 2000). According to Zhang (1998), the ARMA chart applied to stationary processes can be denoted as ARMAST chart, which also includes both the special cause chart (SCC) and the EWMAST chart as special cases. To the best of our knowledge, only two literary contributions may be ascribable to the economic design of ARMA control charts with autocorrelation (Low and Lin, 2010; Lin et al. 2012). Specifically, both of them approached the purely economic design issue and made use of a genetic algorithm to select the optimal set of parameters for ARMA control chart. Finally, it is worthy pointing out that no validation, neither in terms of comparison analysis with alternative metaheuristics, nor in terms of genetic parameters, has been adopted for the genetic procedures they proposed. In this paper, the statistically constrained economic design of ARMA control chart with autocorrelated data has been addressed. Since most literary contributions coping with the control chart design issue makes use of regular metaheuristics, which require a time-consuming control parameters calibration, a Modified Fitness-based Self-Adaptive Differential Evolution, hereinafter denoted as MF-SADE, has been devised. Process data are assumed to be autocorrelated; thus, a proper simulation approach has been used to assess the average run lengths affecting the objective function to be minimized, i.e., the Lorenzen and Vance (1986) cost function. Since most papers approaching the control charts design issue disregards any validation analysis concerning with the evolutionary technique needed to select the chart parameters, a comprehensive comparison campaign involving four alternative MEs from the relevant literature has been carried out to
verify the effectiveness of the proposed metaheuristics in solving the control chart design problem at hand. In particular, two of them have been already used to cope with control chart design issues, while the other two arise from computer science and applied mathematics related fields. Since regular metaheuristics usually need to be calibrated in terms of control parameters, a tuning analysis has been performed on two out of four algorithms used for validation purposes. Then, an extended benchmark of test cases, also involving different scenarios in terms of underlying process, was configured to compare the different metaheuristics through a series of statistical analyses. Finally, once the proposed MF-SADE algorithm demonstrated its outperformance, it has been used to perform a sensitivity analysis considering cost model and underlying process parameters as independent variables on the one hand, and ARMA chart variables, ARL0, ARL1 as well as total cost, as dependent variables on the other hand. The reminder of the paper is as follows. Section 2 deals with the ARMA control chart for autocorrelated processes. In Section 3 the computational procedure developed to simulate the autocorrelated process is illustrated. In Section 4 the adopted economical statistical model is formalized. Section 5 encompasses the structure of the proposed MF-SADE algorithm as well as a brief overview of the alternative metaheuristic procedures. In Section 6 the calibration analysis involving only two of the alternative metaheuristics is presented. Section 7 holds the output of the comparative analysis aiming at identifying the most performing optimization technique among the five competitors. To this end, a series of non-parametric statistical tests have been fulfilled to support the performance evaluation of metaheuristics. Finally, in Section 8 findings from an extensive sensitivity analysis are discussed. Conclusions and references complete the paper.
2. The ARMA control chart for autocorrelated processes The ARMA control chart (Jiang et al. 2000) is an effective method for monitoring autocorrelated processes. Assume that an underlying process model Xt is known and it is characterized by an auto-
correlated structure, i.e. at time t the variable Xt can be expressed as a linear combination of the measurement at time t-1 (Xt-1), the noise factor at time t (at), and the noise factor at time t-1 (at-1), that is: X t at vat 1 uX t 1 , for u 1 and v 1
(1)
where the series a1 , a2 ,, an N (0, a2 ) represents an independent, identically distributed (iid) process, with normality, in-control mean equal to 0 and variance a2 . The constants u and v are the autoregressive parameter and the moving average parameter of the process, respectively, with u 1 and v 1 . According to Jiang et al (2000), the variance of the underlying process is:
X2
1 2uv v 2 2 a 1 u2
(2)
The corresponding ARMA statistics for an autocorrelated process at time t can be represented by: Zt 0 X t X t 1 Zt 1 0 ( X t X t 1 ) Zt 1
(3)
where and are respectively the autoregressive parameter and the moving average parameter of the ARMA control chart, / 0 and 0 1 . In order to ensure both reversibility and stationary of the monitoring process, constraints 1 and 1 must be satisfied. It is worthy to remind that the ARMA chart reduces to the EWMA chart when 0 , with 1 (0<<1). Thus the ARMA chart can be considered as an extension of the EWMA chart (Jiang et al.2000). Then, differently from what erroneously reported in Lin et al. 2012 (Eq.4) and Cheng and Chou 2008 (Eq.2), the steady-state variance of the monitoring statistics is defined by: 2( )(1 ) 2 1 X 1
Z2
(4)
Due to the autocorrelation structure of the ARMA chart on an ARMA(1,1) process, the performance of the chart itself strongly depends on the parameters of the charting process ( and ) as well as on those pertaining to the original process (u and v). Upper Control Limit (UCL) and Lower Control Limit (LCL) as well as Center Line (CL) of the ARMA control chart are denoted by: UCL k z
(5)
CL
(6)
LCL k z
(7)
Where is the process mean and k is the control limit coefficient, i.e. one of the parameters involved in the economic design of the ARMA control chart, along with n, h, , where n is the sample size and h is the sampling interval. As a result, choosing the parameters of the charting process under an economicalstatistical viewpoint is a challenging task.
3. Simulation model In this paper, the measurements monitored by ARMA control chart refer to a generalized first-order autoregressive moving average process ARMA(1,1). Whenever measurements related to the monitoring process are independent, a Markov chains based approach can be used for the design of control charts as both ARL0 and ARL1 may be accurately evaluated. Since in the present study measurements are assumed to be autocorrelated, a simulation approach is needed to approximately assessing both in-control and outcontrol average run length values. Therefore, an ARMA Simulation (AS) procedure, whose pseudo-code is reported in Appendix A, has been developed. For each simulation run, necessary to evaluate the performance of the ARMA chart connected to a set of design parameters (n, h, k, , ), i=1,…,M measurements and j=1,…,S scenarios have been taken into account. In order to support the robustness of
the proposed simulation approach, the number of simulation scenarios was set to 500, similarly being done by Lin et al. [2012]. Preliminarily to lunch the AS procedure, the time series related to both the noise factor and the autocorrelated underlying process depending on the selected parameters u and v are generated. By assuming that the in-control process mean is equal to 100 (i.e. X0=100) and X2 =10, the following pseudo-code shows the generation procedure of the aforementioned time series. Step 1: select u and v, and calculate a2 from Eq.(2) Step 2: generate a series ai , j for each scenario j from a normal distribution N(0, a2 ):
a0, j 0, j 1,..., S
(8)
ai , j N (0, a2 ) i 1,..., M , j 1,..., S
(9)
Step 3: generate a series Xi,j for each scenario j from Eq.(1)
X 0, j X 0 , j 1,..., S
(10)
X i , j ai , j v ai 1, j u X i 1, j i 1,..., M , j 1,..., S
(11)
Step 4: generate the time series X i', j for each scenario j to simulate the mean shift:
X i', j X i , j X
i 1,..., M , j 1,..., S
(12)
where represents the coefficient of the mean shift. Once the time series generation has been completed, the AS procedure can be launched to determine the values of ARL0 and ARL1, that will be used for the economical statistical design of the chart. In particular, both ARL0 and ARL1 consist of the average run lengths over the 500 provided scenarios connected to X i , j and X i', j time series, respectively.
4. Economical statistical design As mentioned before, the design of an ARMA control chart requires the determination of five parameters including those of the charting process ( and ), the sample size (n), the sampling interval (h), and the upper control coefficient (k). Generally, the pure economic design of a control chart tends to disregard the
statistical performance of the chart itself, thus leading to poor statistical properties. Therefore, an economic-statistical design approach is preferable by adding several statistical constraints to the economic model. Indeed, the regular economic-statistical design of a control chart provides both a lower bound on the incontrol ARL (i.e. ARL0) and an upper bound on the out-of-control ARL (i.e. ARL1) along with the cost function minimization. In this paper, the economic-statistical ARMA control chart design problem consists in the minimization of the Lorenzen and Vance (1986) cost function, subject to a minimum ARL0. Basically, the aforementioned cost function consists of a cost per hour (EC), which embraces the quality cost during production, the cost of false alarm, the cost of searching the assignable cause, the cost of process correction and the cost of sampling and inspection, as follows: c0 EC
s Y Q 1 / nE h ARL1 rt 1 1 r2t 2 (a bn) W Q ARL0 h 1 st (1 r1 ) o nE h ARL1 t1 t2 ARL0
c1 nE h ARL1 rt 1 1 r2t 2
where: a = fixed cost per sample b = cost per unit sample c0 = quality cost per hour while producing in control c1 = quality cost per hour while producing out control (c1>c0) E = time to sample and chart one time Y = cost per false alarm Q = productivity loss per process cease W = cost to locate and correct the assignable cause s = expected number of sample taken while in control, and it can be demonstrated that
s e h / (1 e h )
(13)
t = expected time of occurrence of the assignable cause between two sample while in control, and it can be demonstrated that
(1 (1 h)eh ) / ( eh )
t0 = expected search time when signal is a false alarm t1 = expected time to discover the assignable cause t2 = expected time to correct the process r1 = 1, if process continues during searches r2 = 0, if process ceases during searches r2 = 1, if process continues during correction r2 = 0, if process ceases during correction = 0, if r1 + r2 =2 = 1, if r1 + r2 <2 ARL0 = average run length while in control ARL1 = average run length while out of control A lower bound on ARL0 denoted as ARLL=370 has been considered as a statistical constraint. Hence, the mathematical model of the problem under investigation can be expressed as follows:
min EC ( , , n, h, k )
(14)
subject to: ARL0 ARLL
(15)
1 n 20
(16)
0.001 h 5
(17)
0.001 k 5
(18)
0.001 1
(19)
0.001 1
(20)
n Z ; h, k , , R
(21)
5. Metaheuristics for the ARMA control chart design In the last decades, the body of literature developed various metaheuristics for economically and/or statistically determining the most suitable values of the control charts design parameters. Generally, to work effectively, metaheuristic algorithms have to be tailored to the problem under investigation in terms of control parameters, i.e, those parameters driving both the exploration and exploitation phase of any evolutionary technique. In fact, a specific metaheuristic configuration may result effective for a small range of problems and, at the same time, ineffective for other kinds of optimization issues. As a result, most metaheuristics needs a preliminary time-consuming calibration campaign for tuning such parameter values, in respect to a certain optimization problem. Transferring a part of this customization effort to the algorithm itself is a long pursued goal in the field of metaheuristics.
n
this paper a kind of self-adaptive differential evolution algorithm has been developed to cope with the ARMA control charts economic-statistical design problem at hand. In order to validate both efficacy and efficiency of the proposed technique, four different metaheuristics (two of them from the relevant literature on control-charts design) have been involved into a comprehensive comparison analysis. The proposed self-adaptive differential evolution algorithm and the alternative optimization methods will be descripted in the following sections. 5.1
The Modified Fitness-based Self-Adaptive Differential Evolution (MF-SADE)
Differential Evolution (DE), first proposed by Storne & Price (1995), is an efficient and powerful population-based stochastic search technique for solving complex computational problems, which has been widely applied in many scientific and engineering fields, including non-linear constraints, nonconvex, non-differentiable, multi-objective and dynamic components (Price 1999; Price, Storn and Lampinen, 2005; Das & Sugantham, 2011). Nevertheless, to the best of our knowledge, only two literary contributions in the field of control charts economic design adopted a regular DE, so far (Malaki et al. 2011, Kasarapu & Vommi, 2011). Similarly to a standard evolutionary algorithm (EA), DE operates
through similar computational steps and a few control parameters, namely scaling factor F and crossover rate CR, need to be selected with the aim of properly controlling the two phases typical of any metaheuristics, namely exploration and exploitation. In fact, such phases, alternatively called intensification and diversification, are the two major phases of metaheuristic algorithms, (Blum and Roli, 2003). Diversification means to generate different solutions as to explore the search space on a global scale, while intensification means to focus the search in a local region knowing that a current good solution is found in this region. A good combination of these two parameters will usually ensure that global optimality is achievable. A first reasonable attempt of choosing CR value can be 0.1. However, CR = 0.9 may also be a good initial choice whether fast convergence is desired. [Zaharie 2009, Alguliev et al. 2012]. The value of the scaling factor (F) is usually selected in the range from 0.5 to 1.0 [Storn 1996] and values of F (F< 0.4 or F > 1.0) are occasionally effective [Pomareda 2012]. Storn and Price [1995] suggested that a reasonable value for the population size should be between 5D and 10D, where D is the dimension of the problem, whereas a good initial choice of F is 0.5. Qin et al. (2009) confirmed that the effective range of values for F is between 0.4 and 1. Gämperle et al. [2002] examined different parameter settings for DE on different kinds of functions. They recommended that the population size NP be between 3D and 8D, the scaling factor F equal 0.6, and the crossover rate CR be between [0.3; 0.9]. In general, a set of parameters that works well at the early stages of the evolution process may not perform well at the later stages and vice versa. In order to overcome any trial-and-error approach as well as any tedious tuning analysis, in the last decade several DE variants equipped with different mechanisms to select and/or manage the dynamic changes of the control parameters have been introduced. Based on how the control parameters are adapted to the evolutionary path, the mechanisms can be classified into the following three classes [Qin et al. 2009]: 1) deterministic parameter control in which parameters are changed based on some predetermined rationales, regardless of any feedback from the algorithm, e.g., the time-dependent change of the mutation rates; 2) adaptive parameter control in which parameters are dynamically updated, based on learning from
the evolution process; and 3) self-adaptive parameter control in which parameters are directly encoded within individuals and undergo recombination. The adaptive and self-adaptive mechanisms outperformed the classical DE algorithms, i.e., without parameter control, in terms of reliability and rate of convergence for many benchmark problems (Teng, Teo & Hijazi, 2009; Qin et al. 2009, Sarker et al. 2014). This section presents a Modified Fitness-based Self-Adaptive Differential Evolution, hereinafter denoted as MF-SADE
Fi and CRi,
DE
Step 1:
Initialization The proposed differential evolution makes full use of a self-adaptive evolutionary strategy; thus, both CRi and Fi for each individual i=1,…,NP may vary within a range based on two bounds, denoted as [CRmin; CRmax] and [Fmin; Fmax], respectively. Motivated by the aforementioned literary contributions, such bounds have been chosen as follows: CRmin = 0.5, CRmax= 0.9, Fmin=0.5, Fmax = 0.9. The boundary constraints concerning with the variables of the problem under investigation, the stopping criterion on the maximum number of generations (T), as well as the initial generation counter t = 1 are set in this phase. Adopting a self-adaptive configuration for a differential evolution implies each individual in the population is extended with parameter values. As a result, each individual’s encoding consists of a (D+2)-dimensional vector, which includes D values related to the problem variables and two adding values related to parameters F and CR, respectively. As far as the initial population, NP individuals are randomly generated and each individual consists of a (D+2)-dimensional string, with D=5. As a
result, xit [ xit,1 , xit,2 ,..., xit, D ,xit, D 1 , xit, D 2 ] denotes the i-th solution (individual) at generation t; xit, D 1 and xit, D 2 refer to Fi t and CRit , respectively. The following equation is used to initialize
each individual i (i=1,…, NP) in the initial population [Chen et al. 2002]:
xij0 x j min rand (0,1) ( x j max x j min ),
j 1,..., D 2
(22)
where xjmin and xjmax denote the lower and upper bounds, respectively, for each decision variable j (j=1,…,D) and for each parameter, namely Fi and CRi, related to each individual of the population. The size of population (NP=40) is kept constant throughout the execution of the entire algorithm, while T has been fixed to 125 to ensure 5,000 objective function evaluations.
Step 2:
Mutation Both exploration and exploitation phases are actually performed through mutation since various individuals are combined in such a way to generate a new population. There exist different mutation strategies (Das and Suganthan 2011) to generate the mutant vectors, also called donor vectors, from the population. In this paper the conventional DE/rand/1 scheme has been implemented; thus, for each xit in the parent population, the mutant vector vit is generated as follows:
vit xgt 1 F xgt 2 xgt 3
(23)
where, i = 1,2,…,NP is the index of individuals, t is the generation, g1 g2 g3 are mutually exclusive integers randomly chosen from the range [1, NP] at generation t. The “DE/rand/1” mutation strategy usually combines a stronger exploration capability with a slow convergence
speed (Qin et al. 2009). As a result, it is suitable for solving constrained optimization problems usually requiring effective exploration ability. Due to the self-adaptive characteristics of the proposed algorithm, the static scale factor F in Eq. 23 must be replaced by a dynamic parameter denoted as Fi. According to the fitness-based adaptive criterion proposed by Gosh et al. (2011), in this paper we aim to reduce Fi whenever the objective function value of any vector nears the local optimum so far obtained in the population. As a consequence, the current vector should be subject to less perturbation and a finer search in the neighborhood of the suspected optima can be executed. In the next paragraph three schemes for varying the value of F for each individual are illustrated. In particular, Scheme_1 and Scheme_2, which work as fitness-based criteria, were derived by Ghosh et al. (2011); Scheme_3 consists of an evolution-based criterion inspired to the adaptive strategy developed by Brest et al. (2009), the only difference consisting of the range of values Fi may assume. In fact, being aware that higher values of F may favor the exploration phase of the algorithm, in order to enhance the diversity of the population, the scale factor may take a value in the range [Fi, Fmax] in a random manner, whether the probability 1 is fulfilled. Conforming to Brest et al. 2009 in our experiments we set 1=0.1. fi fi
Scheme_1: Fi Fmax
(24)
Where fi /10 104 and fi f ( xi ) f ( xbest ) . The adding very small number 10-4 is necessary to avoid a division by zero when f i =0. Differently from the original work of Ghosh et al. (2016) in which the adding number is equal to 10-14, for our experiments we decided to strongly reduce that to 10 -4 for a twofold reason: i) Fmin is set to 0.5 and fi 104 generates a value of Fi (by means of Scheme 1) equal about to 0.43; ii) in our experiments, the objective function values are always rounded off to four decimal digits. Scheme_2: Fi Fmax 1 ef
i
(25)
Scheme_3: Fi Fi rand1 Fmax Fi
if rand2<1
(26)
Where rand1 and rand2 are two distinct numbers randomly drawn from a uniform distribution in the rage [0,1]. The adaptation of the scale factor for the i-th target vector is: Fi max( F1 , F2 , F3 )
(27)
As stated in Ghosh et al. 2011, it is evident that as long as F 2.4 Scheme_2 results the greater value of F, thus allowing to explore a larger space of solutions. Once F falls below the threshold of 2.4, Scheme_2 drastically reduces its influence while Scheme_1 will work to adapt the scale factor F at lower rates, thus inducing the transition from the explorative phase to the intensification phase. Although it is controlled by a small probability, it could be said that the role of Scheme_3 is to introduce a disturbance in the fitness-based adaptation of F, which aims to maintain positive population diversity as well as an explorative tendency of the algorithm against any premature stagnation.
Step 3:
Crossover (recombination)
vit
xit
exponential
t vij u t xij t ij
binomial
rand 3[0,1) CR or j jrand
j 1, 2,..., D
otherwise
jrand {1,2,
, D}
uit
j-th component of the
vit
xijt
and CR is the crossover
rate, which controls the diversity of the population during the evolutionary path. Similarly to mutation, the adaptive adjustment of the crossover rate CRi depends on the fitness of the donor vector vi. It is known that the higher is CR the higher is the amount of genetic information passing from the donor vector to the trial vector ui. On the other hand, whether the value of CR assumes a low value, then more genetic information will be transferred from the parent vector xi to the trial vector. Therefore, if the fitness of the donor vector gets worse the CR value should be lower as to reduce the genetic information migrating from the donor to the trial vector, and vice versa. Ghosh et al. (2011) proposed the variable f donor _ i f (v i ) f ( xbest ) as a measure of performance connected to the quality of the donor vector, to adaptively adjust the value of CRi for each individual. Whenever f donor _ i takes a lower value, it means that the quality of the donor vector is good enough to be transferred to the trial vector, hence CRi should be higher. On the other hand, if the f donor _ i is higher then passing information from the donor vector to the trial one should be discouraged and CRi value should be decreased. Although a higher value of f donor _ i should provide a significant reduction of the corresponding CRi value, in accordance
with the adaptive scheme of Brest et al. (2009), a randomness criterion has been introduced with the aim of enhancing the exploration ability of the proposed algorithm, thus avoiding to be prematurely trapped in a local optimum. In fact, based on a small probability value denoted as
2, CRi may randomly assume a smaller value in the range [CRi,CRmin], as reported in the following equation: CRi CRmax if ( f (vi ) f ( xbest ) (CRmax CRmin ) if rand 4> 2 CRi CRmin 1 f donor _ i otherwise CR min CR (CRmax CRmin ) ; CR rand 5(CR CR ) min min i min i 1 f donor _ i
(29) if rand 4 2
where rand4 and rand5 are two distinct numbers randomly extracted from a uniform distribution U[0,1]. Conforming to Brest et al. (2009) in our experiments we set 2 0.1 .
Step 4:
Selection
xjmin xjmax f (uit ) are
t t t ui f (ui ) f ( xi ) xit 1 t xi otherwise
Step 5:
update the iteration counter: t t 1 ;
Step 6:
Go to Step 2 if the number of generation is lower than T. Otherwise stop the algorithm.
5.2
(30)
The objective function
Over the last few decades, several methods have been proposed to handle constraints in the evolutionary algorithms. A common approach makes use of penalties (Coello, Coello & Mezura Montes, 2002; Yeniay, 2005). Several penalty methods may be used to transform a constrained problem into an unconstrained one. In this paper we consider a static approach, as the penalty applied to any unfeasible solutions do not depend on the current generation number. In detail, the actual objective function denoted as f ( x ) combines two contributions as follows: the total cost EC(x) and a penalty term viol(x) to be computed whether the average ARL0(x) is lower than the provided lower bound ARLL: f ( x) EC ( x) 1 viol ( x)
where,
(31)
if ARL0 ( x) ARLL 0 viol ( x ) ARLL ARL0 (x ) otherwise
(32)
5.3 Overview of the other metaheuristics In order to validate the effectiveness of the proposed MF-SADE algorithm, four well-known metaheuristics arising from the relevant literature has been implemented to successively perform an extensive comparison campaign. Two of those, namely a genetic algorithm (GA) and a particle swarm optimization, hereinafter denoted as PSO, have been already employed for addressing control charts design issues. Then, two innovative real-encoded metaheuristics, named GPSO and Water Wave Optimization (WWO), usually adopted to address non-linear and even non-differentiable objective functions, have been derived from the literature on applied mathematics and computer science. All the proposed metaheuristics have been implemented on the VBA® development framework, within a MS Excel® worksheet executed on a PC equipped with an Intel® Core duo 2,33GHz processor and 2Gb RAM memory 5.3.1
Genetic algorithm
Genetic Algorithms (GA) are stochastic optimization techniques exploiting a population based heuristic methodology that simulates the principle of survival of the fittest individuals by Charles Darwin. In the last decades, both efficacy and efficiency of GAs has been largely demonstrated in many fields of research involving heuristic optimization. Since the performance of a metaheuristic technique is strictly connected to the kind of problem to be addressed, the genetic algorithm used by Lin et al. (2012) for addressing the economic design of ARMA control charts has been implemented. The initial population is randomly generated by assigning to each gene a value respecting the domain of the corresponding decision variable. The fitness of each individual is evaluated according to the same objective function used in MF-SADE and common to the rest of tested algorithms. The authors set the total amount of generations to 50 and extracted the best solution after 10 different runs, i.e., replicates, at varying random
seeds. In this paper, due to comparison purposes, the number of generations to stop the algorithm was set to 80 and the output from a single run has been taken into account.
5.3.2
GLBest Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a modern evolutionary technique introduced by Kennedy and Eberhart (1995). It consists of a metaheuristic algorithm based on the social metaphor and gained attention as an effective tool for solving various optimization problems. Similarly to genetic algorithms, PSO is a stochastic, population-based optimization technique. Conversely, it is easier to be implemented and does not require a significance computational burden. Due to its algorithm structure, PSO allows to carry out a diversification phase at the beginning of the search and a local search at the end, according to a set of control parameters, namely inertia weight and acceleration coefficients. Unlike the canonical PSO that uses a large inertia weight and static acceleration coefficients, the proposed PSO, hereinafter coded as GPSO, is inspired to the GLBest version developed by Arumugam et al. (2008), where the aforementioned parameters are adaptively managed in terms of global best and local best values. The initial swarm has been randomly generated and its dimension has been fixed to 20 particles. The algorithm stops once a number of iterations able to assure 5,000 evaluations has been achieved.
5.3.3
Particle Swarm Optimization
A binary code PSO, denoted as BPSO, has been recently used by Chih et al. (2011) to tackle the economic and economic statistical design of X control charts. In this paper, a real-encoding version of the same PSO has been implemented due to a twofold motivation: i) Beheshti et al. (2015) demonstrated that both PSO and BPSO performances are comparable and they suffer from the same shortcomings in solving discrete optimization problems; ii) the problem under investigation cannot be classified as a pure discrete (or binary) optimization problem as it is governed by one discrete variable and four continuous variables. Basically, it consists of a nonlinear discontinuous optimization problem. Similarly to the other
algorithms, the population is initialized with random particles whose positions and velocities are within their respective domains. Differently from the GPSO, the PSO evolutionary mechanism depends on the following control parameters: the cognition learning factor (c1), the social learning factor (c2), the inertia weight (w) and the population size (m). Although Chih et al. (2011) performed an accurate calibration campaign, in this paper the most influencing parameters, i.e., m and w, have been tuned again to better fit the ARMA control chart design problem at hand. The outcomes of such calibration analysis are discussed in Section 6. On the other hand, values assigned to c1 and c2 have been derived from the original paper. The stopping criterion depends on the number of iterations able to guarantee 5,000 objective function evaluations.
5.3.4
Water Wave Optimization
The Water Wave Optimization (WWO) is a novel nature-inspired metaheuristics, recently proposed in literature (Zheng 2015). Propagation, refraction and breaking, characterizing the typical phenomena of the water waves, have been used to derive the effective mechanism for searching in the solution spaces. As most evolutionary algorithms, WWO works by means of a population of solutions, where each individual may be assimilated to a single wave with a height (or amplitude) h
and a wavelength , set to 0.5. In
words, propagation operator drives high fitness waves towards smaller search areas, and low fitness waves towards larger spaces of solutions; refraction operator allows waves to avoid stagnation and, as a result, improves the diversity of the population, thus reducing the risk of a premature convergence; the breaking operator enables the intensification phase, i.e. it focuses the search mechanism around a promising area of the search space. The combination of the aforementioned operators ensures a good balance between exploration and exploitation. In light of what stated by the author, WWO performance depends on four control parameters, i.e., hmax (maximum wave height), (wavelength reduction coefficient), (breaking coefficient), kmax (number of breaking directions). Author’s guidelines indicate the following range for setting such control parameters: hmax 5;6 , 1.001, 1.01 , 0.001, 0.01
, kmax min 12, D / 2 . A set of experiments on several functions to be optimized demonstrated the effectiveness of WWO in respect to other meta-heuristics proposed in the recent literature, such as Bat Algorithm (BA), Invasive Weed Optimization (IWO) and so on. To enhance the performance of WWO, a proper calibration analysis involving parameters , hmax, has been carried out. The obtained results are reported in Section 6. The stopping criterion is based on 5,000 objective function evaluations.
6. Calibration of PSO and WWO. A one-way statistical analysis have been arranged in order to select the more suitable parameters configuration for the two following evolutionary techniques: PSO and WWO. No parameter calibration has been performed for GA, as it has been used for the same control chart design problem. Since both GPSO and MF-SADE may be classified as self-adaptive meta-heuristics, no parameter calibration is required. As far as PSO, inertia weight (w) and population size (m) factors at three levels have been considered as independent variables; thus, 3x3=9 parameter configurations have been evaluated (See Table 1). A set of preliminary trial-and-error runs highlighted the low influence of the learning factors (c1, c2) on the PSO performance. Therefore, exploiting the tuning analysis carried out by Chih et al. (2011), c1 and c2 were set to 2.5 and 1.5, respectively. It is worthy to note that the same levels contemplated by Chih et al. (2011) have been used to vary w and m. Table 2 illustrates the provided nine configurations in terms of w and m. INSERT TABLE 1 ABOUT HERE INSERT TABLE 2 ABOUT HERE Three parameters have been involved in the WWO calibration campaign, namely , , hmax, while the population size (P) and the number of breaking directions (kmax) have been derived by the WWO author’s guidelines; thus, P and kmax have been set to 10 and 2.5, respectively. Whereas, and have been varied
at three levels and hmax at two levels (See Table 1), conforming to the ranges indicated in the original paper. As a result, 3x3x2=18 different parameter configurations have been tested as shown in Table 3. INSERT TABLE 3 ABOUT HERE The benchmark of problems used for the calibration analysis is based on 6 scenarios in terms of model parameters and 9 settings in terms of (u, v) parameters related to the underlying ARMA process. Each scenario is randomly generated by extracting each model parameter in the range defined by upper (UB) and lower (LB) bounds, as reported in Table 4. Such bounds refer to the boundary values engaged in the sensitivity analysis performed by Lin et al. (2012). As for the underlying process parameters (u, v), three values for u (0.100, 0.475, 0.950) and three values for v (0.050, 0.500, 0.900) have been taken into account; the resulting nine settings are reported in Table 5. INSERT TABLE 4 ABOUT HERE INSERT TABLE 5 ABOUT HERE As a result, PSO calibration needed 6x9x9=486 runs, while 6x9x18=972 runs have been executed to calibrate the WWO algorithm. The constrained objective function (f(x) in Eq. 31) was considered as the response variable, while the following Relative Percentage Deviation (RPD) indicator was considered to compare the obtained results:
RPD
RVp RVbest RVbest
(33)
where, RVp is the value of the response variable related to a given scenario, which adopts a given parameter configuration p, while RVbest is the best value achieved by one of the alternative configurations, i.e., RVbest min RVp p 1,2..., P , where P is equal to 9 or 18 whether it refers to SPSO or WWO, respectively.
In order to find the best parameter configuration for each metaheuristics, two statistical analyses, using the RPD measure as response variable, have been arranged. First, the three main assumptions of ANOVA were checked: normality, homoscedasticity and independence [Montgomery 2012]. As expected, no group of RPDs satisfied the normality test. Therefore, a Kruskal-Wallis H-test test at 95% confidence interval has been employed as it may be considered the non-parametric counterpart of a one-way ANOVA. To calibrate each metaheuristics two distinct H-tests have been implemented by MINITAB®17. Table 6, which refers to the PSO calibration, shows that there exists a statistically significant difference between the medians related to the nine parameters configurations. Looking at the median values as well as the box plots in Figure 1, it can be argued that the most suitable configuration should be selected among the first three. To statistically infer about the difference among those three medians, three coupled post-hoc Mann-Whitney tests have been properly arranged. Since no significant difference emerged by such tests (p-values>0.05), the third parameter configuration (w=0.8, m=50) ensuring the smallest median (0.0215) has been chosen. INSERT TABLE 6 ABOUT HERE INSERT FIGURE 1 ABOUT HERE
The same approach was used for calibrating the WWO algorithm with respect to control parameters , , hmax. Table 7 shows the results of the H-test. The difference among the medians related to each parameter configuration is statistically significant as the p-value is equal to 0.001; thus, medians associated to one or more configurations are significantly lower than others. Observing medians and rank values in Table 7, several configurations could be selected. However, box plots in Figure 2 confirm as configurations number 1, 4, 7 and 10 represent the most promising candidates. Since a set of Mann-Whitney post-hoc tests among couples of those promising configurations resulted not significant (p-value>0.05), configuration number ten (=1.010, =0.055, hmax=5) ensuring the smallest median and the smallest rank value has been chosen for the proposed WWO algorithm.
INSERT TABLE 7 ABOUT HERE INSERT FIGURE 2 ABOUT HERE
7. Comparative analysis of metaheuristics An extensive comparison analysis has been carried out to validate the effectiveness of the proposed MFSADE algorithm in solving the control chart design problem at hand. The whole benchmark of scenario problems has been arranged as follows. Twelve scenarios randomly generated through the same bounds reported in Table 4 have been taken into account. Furthermore, sixteen settings in terms of parameters u and v have been derived by combining four levels for u (0.05, 0.36, 0.68, 0.99) and four levels for v (0.15, 0.40, 0.65, 0.90). In addition, five different runs per each algorithm, basically consisting in five replicates at varying random seeds, have been included in the experimental campaign. As a result, 12x16x5=960 runs per each metaheuristics, i.e. 4800 numerical total runs, have been executed. Although, due to the stochastic behavior of metaheuristics, multiple replicates generate a range of performance, just the minimum value, i.e., the best result among the five replicates, has been considered; thus, the best 192 numerical results for each metaheuristic have been involved in the comparison analysis. According to this modus operandi, the variability connected to the random seed numbers is relaxed and the different techniques can be compared on the basis of their best performance. Table 8 gathers the median of RPD values (MR_) as well as the number of violated solutions in terms of ARL0 (V_) achieved by each metaheuristics for each scenario problem. In detail, each value refers to the median of eighty RPD values, contemplating the provided sixteen (u, v) couples and five replicates. Similarly being done in the earlier section, each RPD assesses the deviation of the current solution from the best one obtained by one of the tested techniques. The last two rows summarizes both minimum and maximum value of the median RPD achieved by each metaheuristics. As the reader can notice, for each
example of the benchmark MF-SADE ensures the best result as all the median RPDs are equal to zero. The second performing algorithm is PSO, while the other metaheuristics reveal a strong performance variability depending on the specific scenario problem to be addressed. Under the violation performance indicator viewpoint, both PSO and SADE achieve the same minimum and maximum values, even though PSO slightly outperforms MF-SADE on the total amount of violated solutions. The worst performance both in terms of RPD and ARL0 constraint violation is for GA, though it was recently employed for solving a similar ARMA control chart design issue. INSERT TABLE 8 ABOUT HERE The outperformance of MF-SADE is confirmed by the non-parametric H-test, whose results are exhibited in Figure 9. Medians as well as ranking values strengthen the effectiveness of PSO and SADE against the alternative methods. The difference of performance between PSO and MF-SADE is significantly affirmed by a Mann-Whitney post-hoc test on the medians, as displayed in Table 10. INSERT TABLE 9 ABOUT HERE INSERT TABLE 10 ABOUT HERE The statistical performance of a control chart is traditionally evaluated using the average run length (ARL). The ARL is the expected number of samples taken until the chart generates a signal. When the process is in-control, the ARL (ARL0) should be large so that the rate of false alarms is low. When the process changes to an out-of-control state, the ARL (ARL1) should be as small as possible so that this outof-control state is quickly detected. Though the proposed objective function minimizes the expected cost (EC) subject to a minimum ARL 0 value, it is worthy to evaluate the performance of the proposed metaheuristics also in terms of ARL1 derived by minimizing f(x). Table 11 shows the average RPDs in terms of ARL1 for each metaheuristics, considering the deviation from the minimum value over 16 (u, v) settings and 5 replicates per each scenario. ARL1 values arising from the unfeasible solutions, i.e. solutions for which ARL0 is greater than ARLL, have been ignored from this analysis. Looking at the bold
results as well as at the grand average (g_ave) values, it can be argued that PSO and MF-SADE are again the most promising techniques in reacting to the out-of-control state while minimizing the EC objective function under the ARL0 constraint. MF-SADE achieves the minimum ARL1 average result in eight scenarios out of twelve, while PSO in the rest of instances. The other metaheuristics, namely WWO, GA and GPSO never obtain similar results, thus demonstrating again their weakness is addressing the proposed control chart design issue. Despite the charting problem was the same, the weakness of GA performed by Lin et al. (2012) may be likely motivated by the combination of the arithmetic crossover operator with a too demanding selection strategy that, generation by generation, reduce the diversity of the population, thus leading to stagnation and premature convergence. INSERT TABLE 11 ABOUT HERE Beyond the comparative analysis, the computational efficiency of each metaheuristics has been tested in terms of time to convergence. Table 12 exhibits the average time required by each metaheuristics for each scenario, which entails 16 (u,v) settings and 5 replicates. The average computational time required by MF-SADE and PSO is lower than WWO and GA, thus revealing that both of them are the most efficient methods to cope with the control chart problem at hand. On the other hand, GPSO is the quickest algorithm on the average but, as demonstrated in the previous section, it is not able to effectively solve the economic and statistical control chart problem under investigation. The weakness of both WWO and GA emerges also under the computational efficiency viewpoint. INSERT TABLE 12 ABOUT HERE 8. Sensitivity analysis This section deals with the outputs from a sensitivity analysis on the model parameters of the control chart design problem under investigation. An experimental design based on a proper Taguchi orthogonalarray as well as a multiple regression analysis involving thirteen independent variables including eleven cost model parameters and two underlying process parameters (u, v) have been employed. The whole set
of design parameters (i.e., n, h, k, , ) as well as the expected total cost EC, ARL0 and ARL1 have been considered as dependent variables. Table 13 shows the levels plan of the independent variables varied at three levels, with exception of the shift magnitude that is varied at two levels. INSERT TABLE 13 ABOUT HERE Table 14 reports the L54 orthogonal-array experiment where each trial entails a different combination of the 13 model parameters, denoted as A, B,…, N; parameters t0 and Q have been fixed to 1 and 100, respectively. For each trial the proposed MF-SADE algorithm has been applied to yield the near-optimal solution for the design problem at hand. A first evident finding from Table 14 consists of the role that parameters and play in the economic statistical design of ARMA control charts. In fact, numerical results in Table 14 highlight that higher values of are quite useless as in most trials is very close or equal to the provided lower bound, i.e., 0.001. The same phenomenon was observed by Hua (2004), who performed several ARMAST control chart simulations at varying u and v values and demonstrated as in most cases ARMAST chart tends to the EWMAST one. In fact, it is worthy to remind that the EWMAST chart is a special case of the ARMAST chart by setting =0 and =1-, with 0<<1. In order to thoroughly assess the influence of the model parameters on the economic-statistical design of ARMA control charts, a proper regression analysis based on the numerical results in Table 14 has been carried out through Minitab17® statistical software. Such analysis was conducted on each dependent variable and comprises both the analysis of variance for regression and the coefficients of regression. In Table 15 the influence of the sample size (n) may be investigated. Considering a significance level equal to 0.05, there are two model parameters (E, b) and one process parameter (u) that significantly affect the sample size. As expected, the sample size significantly depends on the magnitude of the mean shift (). Since the sign of the corresponding coefficient (-1.951) is negative, a large shift is associated to a small sample size. Similarly, the lager are b (cost per unit sampled) and E (time to sample), the smaller is the
sample size. On the other hand, a higher autoregressive parameter of the underlying process (u) generally requires a larger sample size. INSERT TABLE 14 ABOUT HERE INSERT TABLE 15 ABOUT HERE Table 16 shows the sensitivity analysis outputs for the sampling interval (h). As the reader can notice, parameters , a, u, v significantly condition the value of h. As expected, a larger shift magnitude () generally involves a larger sampling interval. At the same time, a higher fixed cost per sample (a) leads to a higher sampling interval to reduce the charting cost. The process parameters of the underlying process, namely u and v interact with h in a different way, due to the sign of the corresponding coefficients of regression. The smaller is u the larger is the required h value. The larger is the moving average parameter v the larger should be the sampling interval too. INSERT TABLE 16 ABOUT HERE Table 17 shows the parameters that significantly affect the control limit coefficient (k). Likely due to the ARL0 related constraint, a larger mean shift requires wider control limits. As far as the parameter, the larger is the frequency of the assignable cause, the tighter should be the control limits to detect the out-ofcontrol state. The parameters controlling the underlying process significantly affect the control limit coefficient, also according to a different sign of their respective coefficients. The stronger is the autoregressive characteristic of the underlying process, the larger should be k. On the other hand, the higher is the moving average parameter, the tighter should be the control limits. INSERT TABLE 17 ABOUT HERE Table 18 illustrates the outputs of the sensitivity analysis involving the autoregressive parameter of the monitoring statistics, namely . As the reader can notice, the only parameter that significantly may influence is the corresponding parameter of the underlying process u. The positive sign of the regression
coefficient denotes a proportional relationship between and u; thus, the higher is u the higher will be value to be assigned to . Anyway, in regression with a single independent variable, the coefficient tells you how much the dependent variable is expected to increase (if the coefficient is positive) or decrease (if the coefficient is negative) on average when that independent variable increases by one. In this case, since the coefficient of the independent variable M(u) is positive, it represents how much is expected to increase on average when the dependent variable u increases by one. As the allowed maximum value of u is one, then the u-related coefficient represents an average upper bound for the dependent variable . Since the u related coefficient is quite small (i.e. 0.0214), it may be argued that the difference between ARMAST and EWMAST control charts is almost negligible as u tends to lower values.
INSERT TABLE 18 ABOUT HERE Table 19 shows the outputs of the sensitivity analysis for the moving average parameter of the monitoring statistics (). Whenever a large mean shift () occurs a smaller value of should be adopted to enhance the economic-statistical performance of the control chart. Also, both u and v significantly affect the selection of according to a different sign of their respective regression coefficients. INSERT TABLE 19 ABOUT HERE Table 20 highlights the multitude of parameters significantly affecting the total charting cost. In fact, with exception of E and Y, all the provided parameters are responsible of a significant variation in the EC value. In particular, the smaller is the magnitude of the mean shift the higher is the total cost EC, because of the higher sample size required to detect the out-of-control state. INSERT TABLE 20 ABOUT HERE The effect of the model parameters on the in-control average run length (ARL0) may be observed by Table 21. The analysis of variance reveals that both the magnitude of the mean shift and v significantly affects
ARL0. Then, since the sign of those coefficients of regression is positive, it could be argued that ARL0 increases as much as either or v rise too. INSERT TABLE 21 ABOUT HERE The same model parameters, along with the regression coefficient of the original process u, significantly affect the out-control average-run-length (ARL1), as demonstrated by the outputs of the analysis of variance in Table 22. As expected, an increase of the mean shift parameter entails a decrease of ARL1. As far as u and v, a higher u results in a higher ARL1, while a higher v produces a lower ARL1, due to the sign of the corresponding regression coefficients. Since the economic objective function (EC) is positively affected by smaller ARL1 values, it is useful to investigate in depth how ARL1 is influenced by the parameters of the original process as changes. To this end, two contour plots depicting the relationship between ARL1 and the parameters of the original auto-correlated process are reported in Figure 3 (=0.5) and Figure 4 (=2), respectively. As the reader can notice, the main findings from those contour plots are: a) ARL1 increases as u tends to increase too, while the influence of v is comparatively lower than u; b) a specific ARL1 target value is associable to wider regions of u-v as increases. INSERT TABLE 22 ABOUT HERE INSERT FIGURE 3 ABOUT HERE INSERT FIGURE 4 ABOUT HERE According to the main findings arising from Tables 15-19, it could be stated that the selection of the control chart parameters, namely n, h, k, and , depends on solely the following independent variables:
, E, b, a, , u, v. In particular, regardless of the cost function related parameters, the autocorrelation component of the underlying process u affects the whole set of chart parameters, while the sample size n and are insensitive to the moving average component of the original process v. Notably, with the aim of reducing the charting cost EC, a stronger (weaker) autocorrelation of the original process requires higher
(lower) values of n, k, , and lower (higher) values of sampling interval h. On the other hand, a stronger (weaker) moving average component of the underlying process requires a higher (lower) sample interval h and lower (higher) k and parameters. In conclusion, the regression equation associated to each table helps the reader to select the most suitable values of every chart parameter once the characteristic of the underlying process and the parameters of cost function are known. In addition, the proposed analysis as well as the related regression equations represent a valid support to the analyst for detecting the main factors affecting the charting cost and the statistical performance of the chart as well.
9. Conclusions This paper studies the economic-statistical design of ARMA control charts with autocorrelated data. Indeed. differently from the ARMA control chart design issues tackled by literature so far. the proposed economic model provides a statistical constraint on the ARL0 parameter that must be greater than 370. In order to enhance the performance of the ARMA control chart in detecting a mean shift on the monitoring autocorrelated process. five distinct parameters have to be selected. To this end. a modified fitness-based self-adaptive differential evolution algorithm. denoted as MF-SADE. has been developed. Due to the selfadaptive mechanism. the proposed MF-SADE does not require any preliminary calibration analysis as the related parameters are integrated within each individual encoding and dynamically change due to a modified fitness-based strategy. The effectiveness of MF-SADE in solving the control chart design problem at hand has been confirmed through an extended comparison analysis involving four additional metaheuristics from the relevant literature. Among them. a Genetic Algorithm (GA) used by Lin et al. (2012) to tackle the ARMA control chart design under a purely economic viewpoint. and a particle swarm optimization. recently applied to the economic statistical design of X control charts (Chih et al. 2011) have been implemented.
Despite the affinity of research topic. an unexpected lack of performance characterizes the tested GA. Indeed. after having analyzed the algorithm structure in depth. it seems that the arithmetical crossover method used by authors. combined with a too elitist selection method. do not properly support the diversification through the solution space. thus leading to premature convergence and stagnation. However. regardless of the specific kind of problem. the widespread belief in literature is that before proposing any solution method authors should strengthen their research by comparing that method with other effective and recognized techniques. Once the ability of MF-SADE in solving the ARMA control chart design issue has been proved. the effect of economic model parameters as well as underlying process parameters on the chart variables have been observed by means of a sensitivity analysis based on a L54 orthogonal-array experimental design and a multiple regression. Beyond the numerous findings. such sensitivity analysis revealed that. for the proposed economic statistical ARMA control chart design issue. higher values of are almost useless as in most trials tends to zero. i.e.. ARMAST control chart tends to the EWMAST. Anyway. such tendency reduces as the autoregressive parameter u tends to higher values. Furthermore. ARL1 is significantly influenced by the mean shift magnitude along with parameters of the underlying process. namely u and v. In addition. two contour plots based on u and v as changes displayed the following findings: a) ARL1 increases as u tends to increase too. while the influence of v is lower than u; b) a specific ARL1 target value is associable to wider regions of u-v as increases. As future research, simulation and metaheuristic optimization proposed in this paper could be applied to the economic design of control charts in finite horizon processes with autocorrelated data. In fact, small production runs are becoming increasingly important in the manufacturing environment because of an increasing request of customized products at competitive costs. Moreover, the economic design of an ARMA control chart also including the use of western electric rules could represent a future research topic to be investigated.
References
1.
Alguliev. R.M.. Aliguliyev. R.M.. and Isazade. N.R.. 2012. DESAMC+ DocSum: differential evolution with self-adaptive mutation and crossover parameters for multi-document summarization. Knowledge Based Systems. 36: 21–38.
2.
Arumugam MS. Rao MVC. Chandramohan A. A new and improved version of particle swarm optimization algorithm with global-local best parameters. J Knowl Inform Syst (KAIS) 2008; 16(3): 324–50.
3.
Barzinpour. F.. Noorossana. R.. Niaki. S.T.A.. Ershadi. M.J.. 2013. A hybrid Nelder–Mead simplex and PSO approach on economic and economic-statistical designs of MEWMA control charts. International Journal of Advanced Manufacturing Technology. 65(9): 1339-1348.
4.
Beheshti. Z.. Shamsuddin. S.M.. & Hasan. S.. 2015. Memetic binary particle swarm optimization for discrete optimization problems. Information Sciences. 299: 58-84.
5.
Blum C & Roli A. 2003. Metaheuristics optimization: overview and conceptual comparison. ACM Comput Surv. 35(3): 268–309.
6.
Brest. J. Greiner. S. Mernik. M and Zumer. V.. 2009. Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans. On Evolutionary Computation. 10(6):646-657.
7.
Chen C.W.. Chen. D.Z. and Cao. G.Z.. 2002. An improved differential evolution algorithm in training and encoding prior knowledge into feedforward networks with application in chemistry. Chemometrics and Intelligent Laboratory Systems. 64 (1): 27–43.
8.
Chen. Y-K. Hsieh. K-L & Chang. C-C. 2007. Economic design of the VSSI X control charts for correlated data. International Journal of Production Economics. 107: 528-539.
9.
Cheng. J-C. Chou. C-Y. 2008. A real-time inventory decision system using Western Electric run rules and ARMA control chart. Expert Systems with Applications. 35(3): 755–761
10. Chih. M.. Yeh. L.L. and Li. F-C. 2011. Particle Swarm optimization for the economic and economical statistical designs of the X control chart. Applied Soft Computing. 11(8): 5053-5067. 11. Chou. C-Y. Chen. C-H & Chen. C-H. 2006. Economic design of variable sampling intervals T 2 control charts using genetic algorithms. Expert Systems with Applications. 30: 233–242. 12. Coello Coello. C. A.. & Mezura Montes. E. (2002). Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics. 16 (3): 193–203. 13. Das. S. and Suganthan. P. N.. 2011. “Differential Evolution: A Survey of the State-of-the-art”. IEEE Transactions on Evolutionary Computation. 15(1): 4-31. 2011. 14. Duncan. A.J.. 1956. The economic design of X charts used to maintain control of a process. Journal of the American Statistical association. 51: 228-242. 15. Franco. B.C.. Costa. A.F.B. & Machadi. M.A.G.. 2012. Economic-statistical design of the X chart used to control a wandering process mean using genetic algorithm. Expert Systems with Applications. 39: 12961– 12967. 16. Franco. B.C.. Celano. G.. Castagliola. P.. Costa. A.F.B.. 2014. Economic design of Shewhart control charts for monitoring autocorrelated data with skip sampling strategies. Int J Production Economics. 151: 121– 130. 17. Fu X., Wang R-F, Dong Z-Y, 2017. Application of a Shewhart control chart to monitor clean ash during coal preparation. International Journal of Mineral Processing. 158: 45–54. 18. Gämperle. R.. Müller. S. D.. and Koumoutsakos. P. “A parameter study for differential evolution.” in Advances in Intelligent Systems. Fuzzy Systems. Evolutionary Computation. A. Grmela and N. E. Mastorakis. Eds. Interlaken. Switzerland: WSEAS Press. 2002. pp. 293–298. 19. Ghosh. A.. Das. S. Choudhury. A and Giri R. 2011. An improved differential evolutionalgorithm with fitness based adaption of the control parameters. Information Sciences. 181: 3749-3765.
20. Hua. C.C. Statistical Process Control of stationary processes. A thesis submitted for the degree of master of science. Dept. of Statistics and Applied Probability. National University of Singapore. 2005. URL: http://scholarbank.nus.sg/handle/10635/14722. 21. Jamshidieini, B., Fazaee, R. 2016. Detecting defective electrical components in heterogeneous infra-red images by spatial control charts, Infrared Physics & Technology. 76: 510–520. 22. Jiang. W.. Tsui. K-L. and Woodall. W.H.. 2000. A new SPC monitoring method: the ARMA chart. Technometrics. 42(4): 399-410. 23. Kadri F, Harrou F, Chaabane S, Sun Y, Tahon C. 2016. Seasonal ARMA-based SPC charts for anomaly detection: Application to emergency department systems. Neurocomputing. 173(2): 102–2114. 24. Kasarapu. R.V.. Vommi. V.B.. 2011. Economic design of joint X and R control charts using differential evolution. Jordan Journal of Mechanical and Industrial Engineering. 5 (2): 149-160. 25. Kenendy. J. and Eberhart. R.C. Particle swarm optimization. in Proceedings of the 1995 IEEE International Conference on Neural Networks. vol. 4. IEEE Press. 1995. p. 1942. 26. Lee. P-H. Trong. C-C. Liao. L-F. 2012. An Economic design of combined double sampling and variable sampling interval X control chart. Int. J. Production Economics. 138: 102–106. 27. Lin. S-N. Chou. C-Y. Wang. S-L. and Liu. H-R. 2012. Economic design of autoregressive moving average control chart using genetic algorithms. Expert systems with Applications. 39: 1793-1798. 28. Lorenzen. T.J. and Vance. L.C.. 1986. The economic design of control charts: A unified approach. Technometrics. 28: 3-10. 29. Low. C. & Lin. W.-Y.. 2010. Consideration of weibull distribution under the assignable causes for economic design of the ARMA control chart. Journal of Quality. 17 (5): 365-387. 30. Malaki M.. Niaky. S.T.A. and Ershadi. M.J.. 2011. M.J. A comparative study of four evolutionary algorithms for Economic and Economic-statistical Designs of MEWMA Control Charts. Journal of Optimization Engineering. 9: 1-13.
31. Mertens K., Vaesen I., Löffel J., Kemps B., Kamers B., Zoons J., Darius P., Decuypere E. 2009. De Baerdemaeker J., De Ketelaere B. An intelligent control chart for monitoring of autocorrelated egg production process data based on a synergistic control strategy. Computers and Electronics in Agriculture 69: 100–111. 32. Montgomery. D.C.. 1980. The Economic design of control charts: a review and literature survey. Journal of Quality Technology. 12: 75-87. 33. Montgomery. D.C.. Torng. J.C.. Cochran. J.K. and Lawrence. F.P.. 1995. Statistically-Constrained of the EWMA control charts. Journal of Quality Technology. 27: 250-256. 34. Montgomery. D. C. (2012) Design and Analysis of Experiments. Wiley. New York. 35. Niaki. S.T.A. & Ershadi. M.J.. 2012. A hybrid ant colony. Markov chain. and experimental design approach for statistically constrained economic design of MEWMA control charts. Expert Systems with Applications. 39: 3265–3275. 36. Niaki. S.T.A.. Ershadi. M.J.. Malaki. M.. 2010. Economic and Economic-Statistical Designs of MEWMA Control Charts-A Hybrid Taguchi Loss. Markov Chain and Genetic Algorithm Approach. International Journal of Advanced Manufacturing Technology. 48: 283-296. 37. Niaki. S.T.A.. Malaki. M.. Ershadi. M.J.. 2011. A particle swarm optimization approach on economic and economic-statistical designs of MEWMA control charts. Scientia Iranica. 18 (6): 1529–1536. 38. Pomareda. V.. Guamán. A.V. and Mohammadnejadb. M.. 2012. Multivariate curve resolution of nonlinear ion mobility spectra followed by multivariate nonlinear calibration for quantitative prediction. Chemometrics and Intelligent Laboratory Systems. 118: 219–229. 39. Price. K. V. “An introduction to differential evolution.” in New Ideas in Optimization. D. Corne. M. Dorigo. and V. Glover. Eds. London. U.K.: McGraw-Hill. 1999. pp. 79–108. 40. Price. K.. Storn. R.. and Lampinen J. Differential Evolution - A Practical Approach to Global Optimization. Berlin. Germany: Springer. 2005. 41. Qin. K.. Huang. V.L.. 2009. Sughantam P.L. Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization. IEEE Transactions on Evolutionary Computation. 13(2): 398 – 417.
42. Saniga. E.M.. 1989. Economic Statistical Control Chart Designs with an application to X and R charts. Technometrics. 31: 313-320. 43. Sarker. R.. Elsayed. S.. & Ray. T.. 2014. Differential evolution with dynamic parameters selection for optimization problems. IEEE transactions on Evolutionary Computations. 18: 689-707. 44. Storn. R. and Price. K.. 1995. Differential evolution-a simple and efficient adaptive scheme for global optimization over continuous spaces. ICSI Berkeley. URL: http://http.icsi.berkeley.edu/ ~storn/litera.html. 45. Storn. R. On the usage of differential evolution for function optimization. Proceedings of the 1996 Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS). IEEE. Berkeley. 1996. pp. 519–523. 46. Teng. N.S.. Teo. J.. & Hijazi. M.H.. 2009. Self-adaptive population sizing for a tune-free differential evolution. Soft Computing. 13. 709-724. 47. Torng. C-C. Lee. P-H. Liao. H-S. Liao. N-Y. 2009. An economic design of double sampling X charts for correlated data using genetic algorithms. Expert Systems with Applications. 36: 12621–12626. 48. Yeniay. Ö.. 2005. Penalty function methods for constrained optimization with genetic algorithms. Mathematical and Computational Applications. 10 (1): 45-56. 49. Zaharie. D. 2009. Influence of crossover on the behavior of differential evolution algorithms. Applied Soft Computing. 9 (3): 1126–1138. 50. Zhang. N. F.. 1998. A statistical control chart for stationary process data. Journal of Statistical Computation and Simulation. 61: 361-378. 51. Zheng. Y-J. 2015. Water wave Optimization: A new nature-inspired metaheuristics. Computers and Operations Research. 55: 1-11.
APPENDIX A Procedure: ARMA chart simulation Step 0: read data from UPC procedure: ai ,j , X i,j , X i',j i 0,..., M, j 1,..., S Step 1: set independent variables: n. h. k. . Step 2: compute mean X and standard deviation 2X according to n and h: Step 3: initialize dependent variables:
0 1
/ 0 2( )(1 ) 2 1 X 1
Z2
Step 3: calculate upper and lower control limits. i.e. UCL and LCL. from Eq.(5. 7) Step 4: compute ARL0 and ARL1 for each scenario j=1.….S For j = 1 To S Step 4.1: Initialize Z0j. ARL0j. ARL1j. flag_0. flag_1: Z0j = flag_0 = false flag_1 = false ARL0j = 1 ARL1j = 1 Step 4.2: compute ARL0j until LCL Zi.j UCL: For i = 1 To M
Zi , j 0 ( X i , j X i 1, j) Zi 1, j If ( Z i , s < UCL And Z i , j > LCL) And flag_0 = false Then ARL0j = ARL0j + 1 Else flag_0 = true GoTo Step 4.3 End If
Next i Step 4.3: compute ARL1j until LCL Zi , j UCL: For i = 1 To M
Zi , j 0 ( X i', j X i', j ) Zi 1, j If ( Z i , j < UCL And Z i , j > LCL) And flag_1 = false Then ARL1j = ARL1j + 1 Else flag_1 = true GoTo Step 5: End If Next i Next s Step 5: average ARL0 and ARL1 computation ARL0 =
ARL0 j / S
ARL1 j / S
j 1,..., S
ARL1 =
j 1,..., S
60 50
PSO
40 30 20 10 0 1
2
3
4
5
6
7
8
9
parameter configuration (par)
Figure 1. PSO parameter configuration: box plot of the RPDs
80 70 60
RPD
50 40 30 20 10 0 1
2
3
4
5
6
7 8 9 10 11 12 13 14 15 16 17 18 parameter config. (par)
Figure 2. WWO parameter configuration: box plot of the RPDs
ARL1 < 0 0 – 10 10 – 20 20 – 30 30 – 40 40 – 50 50 – 60 > 60
0,8 0,7 0,6
v
0,5 0,4 0,3 0,2 0,1 0,2
0,3
0,4
0,5
0,6
u
0,7
0,8
0,9
Hold Values A 0,5 B 0,02 C 10 D 10 E 0,25 F 10 G 100 H 150 J 25 K 2,7 L 0,5
Figure 3. Contour plot of ARL1 Vs u, v for =0.5.
ARL1 < -20 -20 – -10 -10 – 0 0 – 10 10 – 20 20 – 30 > 30
0,8 0,7 0,6
v
0,5 0,4 0,3 0,2 0,1 0,2
0,3
0,4
0,5
0,6
u
0,7
0,8
0,9
Hold Values A 2 B 0,02 C 10 D 10 E 0,25 F 10 G 100 H 150 J 25 K 2,7 L 0,5
Figure 4. Contour plot of ARL1 Vs u, v for =2.
Algorithm factor 1
level 2
3
PSO
w m
0.8 20
1.0 30
1.2 50
WWO
1.001 0.001 5
1.010 0.055 6
1.050 0.100
hmax
Table 1. Calibration of SPSO and WWO: plan of factors and levels.
Configuration (p) 1 2 3 4 5 6 7 8 9 w-level 1 1 1 2 2 2 3 3 3 m-level 1 2 3 1 2 3 1 2 3 Table 2. SPSO parameter configurations
Configuration (p)
-level hmax-level
-level
1 1
2 1
3 1
4 1
5 1
6 1
7 2
8 2
9 2
10 2
11 2
12 2
13 3
14 3
15 3
16 3
17 3
18 3
1 1
1 2
1 3
2 1
2 2
2 3
1 1
1 2
1 3
2 1
2 2
2 3
1 1
1 2
1 3
2 1
2 2
2 3
Table 3. WWO parameter configurations
Model parameter UB LB
d
l
t1
t2
E
c0
c1
Y
W
a
b
3 0.05 20 20 0.5 15 120 200 30 5 1 0.5 0.01 2 2 0.05 5 80 100 20 0.5 0.1
Table 4. Range of variation for each cost model parameter.
setting 1 2 3 4 5 6 7 8 9 u 0.950 0.950 0.950 0.475 0.475 0.475 0.100 0.100 0.100 v 0.900 0.500 0.050 0.900 0.500 0.050 0.900 0.500 0.050
Table 5. Settings of parameters u, v.
par
N Median
Ave Rank
Z
1 2 3 4 5
54 54 54 54 54
0.0075 0.0310 0.0215 0.6240 1.3275
151.8 150.5 136.4 266.5 268.1
-5.09 -5.16 -5.94 1.28 1.37
6 7
54 54
0.8930 5.3945
253.1 328.0
0.53 4.69
8 9
54 54
8.2655 2.4525
328.8 308.3
4.73 3.60
Overall 486 H = 132.44 DF = 8 p = 0.000
243.5
H = 132.78 DF = 8 p = 0.000 (adjusted for ties)
Table 6. PSO calibration, H-test output.
par N Median Ave Rank Z 1 54 3.117 400.9 -2.31 2 54 3.459 467.7 -0.51 3 54 5.911 506.0 0.53 4 54 3.518 411.6 -2.02 5 54 3.899 458.2 -0.76 6 54 5.718 530.0 1.17 7 54 3.382 412.1 -2.00 8 54 6.334 499.9 0.36 9 54 7.619 536.7 1.35 10 54 2.689 376.7 -2.96 11 54 7.176 503.4 0.46 12 54 7.671 541.4 1.48 13 54 4.021 438.7 -1.29 14 54 9.671 583.8 2.62 15 54 4.804 511.0 0.66 16 54 5.059 480.9 -0.15 17 54 6.438 525.9 1.06 18 54 7.829 572.0 2.30 Overall 972 486.5 H = 41.94 DF = 17 P = 0.001 H = 41.95 DF = 17 P = 0.001 (adjusted for ties)
Table 7. WWO calibration, H-test output.
scenario
MRPSO
MRWWO
MRMFSADE
MRGA
MRGPSO
VPSO
VWWO
VMFSADE
VGA
VGPSO
1 2 3 4 5 6 7 8 9 10
0.410 0.903 0.169 1.400 0.248 0.257 0.008 0.002 0.007 0.001
15.212 19.599 8.073 18.137 7.690 8.882 1.125 0.688 0.452 0.194
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
15.793 38.361 7.489 17.966 3.577 8.748 1.410 0.413 0.543 0.413
22.224 44.402 11.991 37.105 46.879 37.453 5.990 1.478 4.683 0.485
0 2 0 0 0 0 0 0 0 0
3 5 2 5 4 2 6 2 2 2
0 1 2 1 0 0 0 0 0 0
9 17 13 17 16 15 12 13 11 11
6 8 7 8 4 4 4 4 5 5
11 12 min max
0.001 0.000 0.000 1.400
0.399 0.069 0.069 19.599
0.000 0.000 0.000 0.000
0.219 0.053 0.053 38.361
0.956 0.496 0.485 46.879
0 0 0 2
1 2 1 6
0 0 0 2
13 9 9 17
6 8 4 8
Table 8. RPD medians on the “u-v” factor and total violations.
Algorithm
N
GA MF-SADE
192 192
Median Ave Rank 4.23795 0.00000
Z
622.6 118.1
7.94 -20.25
PSO 192 0.02275 338.9 WWO 192 3.89970 609.8 GPSO 192 16.03780 713.1 Overall 960 480.5 H = 605.29 DF = 4 P = 0.000 H = 612.27 DF = 4 P = 0.000 (adjusted for ties)
-7.91 7.22 13
Table 9. Metaheuristics statistical comparison. H-test result.
Algorithm PSO MF-SADE
N 192 192
Median 0.0228 0.0000
Point estimate for ETA1-ETA2 is 0.0219 95.0 Percent CI for ETA1-ETA2 is (0.0117;0.0779) W = 51624.5 Test of ETA1 = ETA2 vs ETA1 not = ETA2 is significant at 0.0000 The test is significant at 0.0000 (adjusted for ties)
Table 10. Mann-Whitney post-hoc test comparing PSO and SADE medians.
scenario 1 2 3 4 5 6 7 8 9 10 11 12 g_ave
PSO WWO MF-SADE GA GPSO 65,3 95,5 59,8 177,2 464,7 75,4 136,0 559,6 63,8 117,4 20,5 35,0 19,8 50,0 224,0 20,9 39,4 19,6 37,8 389,1 10,2 11,7 10,0 17,5 197,1 10,2 40,3 8,1 16,3 120,4 4,2 10,2 9,3 69,8 2,5 15,4 3,8 8,0 49,5 2,7 6,2 1,4 3,2 34,2 1,2 6,9 1,4 2,2 27,0 1,3 1,1 1,6 1,7 37,3 1,1 1,3 2,1 36,4 1,1 1,1 16,9 31,7 17,0 38,4 184,1
Table 11. Median values of RPD concerning with ARL1.
scenario 1 2 3 4 5 6 7 8 9 10 11 12 g_ave min max
PSO WWO MF-SADE GA GPSO 204.1 604.0 182.6 252.9 146.4 185.8 506.8 180.3 230.2 134.0 167.5 413.3 161.5 212.0 146.8 164.8 398.5 156.4 207.1 127.3 168.8 356.2 161.5 216.2 163.3 164.7 358.9 159.3 221.8 173.6 171.5 389.2 168.5 224.6 201.6 168.4 375.3 159.3 220.8 171.2 175.5 400.0 187.5 244.1 187.8 171.7 401.7 177.9 229.3 177.7 193.6 417.2 237.9 242.5 184.9 193.4 436.1 233.5 260.3 169.0 177.5 421.4 180.5 230.1 165.3 164.7 356.2 156.4 207.1 127.3 204.1 604.0 237.9 260.3 201.6
Table 12. Average computational times (s).
factor notation level 1 2 3
A
B
C
D
E
F
G
H
J
K
L
M
N
t1
t2
E
c0
c1
Y
W
a
b
u
v
0.5 2
0.01 0.02 0.03
2 10 15
2 10 15
0.05 0.25 0.5
5 10 15
80 100 120
100 150 200
20 25 30
0.5 2.5 5
0.1 0.5 1
0.15 0.475 0.95
0.01 0.45 0.85
Table 13. levels plan for the sensitivity analysis
ARL1
n
h
k
5.8
10
0.76
3.28
0.001
0.751
370.2
5.5
9
1.66
3.00
0.001
0.734
376.8
30.7
3
0.81
4.58
0.009
0.955
31.167
375.0
7.3
7
0.63
3.19
0.002
0.828
0.45
35.626
370.2
5.5
9
1.67
3.00
0.001
0.734
0.95
0.85
48.246
370.6
30.1
3
0.82
4.52
0.002
0.956
0.1
0.15
0.01
46.190
412.3
11.2
4
0.46
3.15
0.001
0.847
2.5
0.5
0.48
0.45
50.542
370.7
7.1
6
1.31
2.92
0.001
0.801
30
5.0
1.0
0.95
0.85
62.965
371.2
30.1
3
0.81
4.52
0.002
0.956
120 200
20
0.5
0.5
0.48
0.85
38.220
384.3
1.6
10
2.05
1.48
0.002
0.218
10
120 200
25
2.5
1.0
0.95
0.01
75.405
370.1
129.3
1
0.20
2.56
0.001
0.999
0.25
10
120 200
30
5.0
0.1
0.15
0.45
41.027
371.1
2.6
10
1.54
2.00
0.002
0.490
10
0.50
15
80 100
20
0.5
0.5
0.48
0.85
40.433
371.4
3.1
6
1.60
1.73
0.001
0.595
0.02 10
10
0.50
15
80 100
25
2.5
1.0
0.95
0.01
62.428
371.1
129.4
1
0.34
2.56
0.001
0.999
0.5
0.02 10
10
0.50
15
80 100
30
5.0
0.1
0.15
0.45
42.382
370.6
2.9
9
2.22
2.01
0.001
0.572
16
0.5
0.02 15
15
0.05
5
100 150
20
0.5
0.5
0.48
0.85
45.411
374.4
1.6
10
2.81
1.47
0.001
0.260
17
0.5
0.02 15
15
0.05
5
100 150
25
2.5
1.0
0.95
0.01
73.840
370.5
141.7
1
0.24
4.42
0.172
0.974
18
0.5
0.02 15
15
0.05
5
100 150
30
5.0
0.1
0.15
0.45
47.431
370.0
2.6
10
2.11
2.01
0.001
0.490
19
0.5
0.03
5
10
0.05
15
100 200
20
2.5
0.1
0.95
0.45
71.100
371.3
64.2
10
0.32
4.19
0.001
0.999
20
0.5
0.03
5
10
0.05
15
100 200
25
5.0
0.5
0.15
0.85
47.767
375.5
1.1
10
3.98
1.09
0.001
0.068
21
0.5
0.03
5
10
0.05
15
100 200
30
0.5
1.0
0.48
0.01
58.324
370.1
23.4
2
0.38
3.41
0.001
0.914
22
0.5
0.03 10
15
0.25
5
120 100
20
2.5
0.1
0.95
0.45
84.208
370.8
66.2
9
0.30
4.11
0.001
0.999
23
0.5
0.03 10
15
0.25
5
120 100
25
5.0
0.5
0.15
0.85
62.303
372.8
1.2
9
3.89
1.14
0.001
0.055
24
0.5
0.03 10
15
0.25
5
120 100
30
0.5
1.0
0.48
0.01
69.095
372.1
31.0
1
0.25
2.79
0.001
0.941
25
0.5
0.03 15
5
0.50
10
80 150
20
2.5
0.1
0.95
0.45
61.385
370.8
66.2
9
0.42
4.11
0.001
0.999
26
0.5
0.03 15
5
0.50
10
80 150
25
5.0
0.5
0.15
0.85
44.290
373.4
1.3
8
4.44
1.19
0.001
0.184
27
0.5
0.03 15
5
0.50
10
80 150
30
0.5
1.0
0.48
0.01
50.982
370.3
18.2
3
0.63
3.59
0.002
0.917
28
2
0.01
5
15
0.50
10
100 100
20
5.0
1.0
0.48
0.45
30.305
377.0
1.2
4
4.44
3.02
0.001
0.019
29
2
0.01
5
15
0.50
10
100 100
25
0.5
0.1
0.95
0.85
27.953
370.6
1.9
3
1.13
3.50
0.001
0.058
30
2
0.01
5
15
0.50
10
100 100
30
2.5
0.5
0.15
0.01
29.513
373.9
1.3
4
3.33
3.27
0.001
0.117
31
2
0.01 10
5
0.05
15
120 150
20
5.0
1.0
0.48
0.45
33.434
389.8
1.1
5
4.78
3.07
0.001
0.047
32
2
0.01 10
5
0.05
15
120 150
25
0.5
0.1
0.95
0.85
31.060
407.5
1.3
6
1.49
4.08
0.001
0.001
33
2
0.01 10
5
0.05
15
120 150
30
2.5
0.5
0.15
0.01
32.465
373.5
1.1
6
3.59
3.35
0.001
0.024
34
2
0.01 15
10
0.25
5
80 200
20
5.0
1.0
0.48
0.45
24.157
371.6
1.2
4
5.00
3.04
0.001
0.137
35
2
0.01 15
10
0.25
5
80 200
25
0.5
0.1
0.95
0.85
22.304
416.5
1.6
4
1.59
3.79
0.001
0.083
36
2
0.01 15
10
0.25
5
80 200
30
2.5
0.5
0.15
0.01
23.428
416.2
1.2
5
3.89
3.36
0.001
0.095
37
2
0.02
5
10
0.50
5
120 150
20
2.5
1.0
0.15
0.85
37.516
384.7
1.3
2
2.23
2.25
0.001
0.121
38
2
0.02
5
10
0.50
5
120 150
25
5.0
0.1
0.475
0.01
40.901
370.3
1.9
4
1.67
4.19
0.001
0.356
39
2
0.02
5
10
0.50
5
120 150
30
0.5
0.5
0.95
0.45
41.845
371.4
13.4
1
0.29
4.84
0.005
0.691
40
2
0.02 10
15
0.05
10
80 200
20
2.5
1.0
0.15
0.85
36.334
792.9
1.0
3
4.32
2.10
0.001
0.001
41
2
0.02 10
15
0.05
10
80 200
25
5.0
0.1
0.475
0.01
37.102
372.6
1.2
10
4.11
4.67
0.001
0.065
42
2
0.02 10
15
0.05
10
80 200
30
0.5
0.5
0.95
0.45
39.748
370.7
11.8
1
0.56
4.62
0.016
0.675
43
2
0.02 15
5
0.25
15
100 100
20
2.5
1.0
0.15
0.85
43.283
589.6
1.0
3
3.71
1.99
0.001
0.001
44
2
0.02 15
5
0.25
15
100 100
25
5.0
0.1
0.475
0.01
45.240
373.1
1.4
7
2.78
4.48
0.001
0.200
45
2
0.02 15
5
0.25
15
100 100
30
0.5
0.5
0.95
0.45
47.007
374.1
13.7
1
0.34
4.82
0.001
0.680
46
2
0.03
5
15
0.25
15
80 150
20
5.0
0.5
0.95
0.01
53.082
373.8
12.5
1
1.14
4.61
0.058
0.689
47
2
0.03
5
15
0.25
15
80 150
25
0.5
1.0
0.15
0.45
42.910
416.6
1.1
3
2.99
2.35
0.001
0.032
48
2
0.03
5
15
0.25
15
80 150
30
2.5
0.1
0.475
0.85
42.739
413.2
1.1
3
2.71
2.33
0.001
0.109
49
2
0.03 10
5
0.50
5
100 200
20
5.0
0.5
0.95
0.01
53.907
370.1
13.4
1
0.70
4.97
0.005
0.706
50
2
0.03 10
5
0.50
5
100 200
25
0.5
1.0
0.15
0.45
40.082
376.2
1.5
2
1.54
2.51
0.001
0.258
51
2
0.03 10
5
0.50
5
100 200
30
2.5
0.1
0.475
0.85
40.057
476.0
1.1
3
2.07
2.36
0.001
0.071
52
2
0.03 15
10
0.05
10
120 100
20
5.0
0.5
0.95
0.01
74.190
372.0
13.9
1
0.74
4.00
0.034
0.485
53
2
0.03 15
10
0.05
10
120 100
25
0.5
1.0
0.15
0.45
60.731
403.9
1.1
3
2.44
2.34
0.001
0.102
54
2
0.03 15
10
0.05
10
120 100
30
2.5
0.1
0.475
0.85
60.390
777.9
1.0
4
2.39
2.32
0.001
0.070
trial
A
B
C
D
E
F
G
H
J
K
L
M
N
1
0.5
0.01
5
5
0.05
5
80 100
20
0.5
0.1
0.15
0.01
16.982
EC ARL0 370.9
2
0.5
0.01
5
5
0.05
5
80 100
25
2.5
0.5
0.48
0.45
21.375
3
0.5
0.01
5
5
0.05
5
80 100
30
5.0
1.0
0.95
0.85
34.635
4
0.5
0.01 10
10
0.25
10
100 150
20
0.5
0.1
0.15
0.01
5
0.5
0.01 10
10
0.25
10
100 150
25
2.5
0.5
0.48
6
0.5
0.01 10
10
0.25
10
100 150
30
5.0
1.0
7
0.5
0.01 15
15
0.50
15
120 200
20
0.5
8
0.5
0.01 15
15
0.50
15
120 200
25
9
0.5
0.01 15
15
0.50
15
120 200
10
0.5
0.02
5
5
0.25
10
11
0.5
0.02
5
5
0.25
12
0.5
0.02
5
5
13
0.5
0.02 10
14
0.5
15
Table 14. L54 orthogonal-array for the sensitivity analysis.
Analysis of Variance Coefficients Source SS df Adj MS F p-Value Coeff. SE Regression 373.73 4 93.431 25.28 0.000 Const 12.735 0.849 115.57 1 115.574 31.27 0.000 -1.951 0.349 A() E(E) 26.78 1 26.776 7.24 0.010 -3.83 1.42 L(b) 159.79 1 159.787 43.23 0.000 -4.672 0.711 M(u) 71.59 1 71.588 19.37 0.000 3.505 0.796 Error 181.11 49 3.696 Total 554.83 53 Regr. Eq.: n = 12.51 – 3.83 E – 0.1500 J + 0.309 K – 4.672 L – 3.505 M + 1.636 N
Table 15. Effect of model parameters on the n design variable.
t-Value 15.00 -5.59 -2.69 -6.58 -4.40
Analysis of Variance Coefficients Source SS df Adj MS F p-Value Coeff. Regression 78.193 4 19.5483 31.33 0.000 Const 1.066 15.916 1 15.9156 25.51 0.000 0.724 A() K(a) 15.891 1 15.8910 25.47 0.000 0.2694 M(u) 38.364 1 38.3639 61.49 0.000 -2.566 N(v) 8.023 1 8.0227 12.86 0.001 1.124 Error 30.574 49 0.6240 Total 108.767 53 Regr. Eq.: h = 1.066 + 0.724 A + 0.2947 K – 2.566 M + 1.124 N
SE 0.341 0.143 0.0584 0.327 0.313
Table 16. Effect of model parameters on the h design variable.
t-Value 3.13 5.05 5.05 -7.84 3.59
Analysis of Variance Coefficients Source SS df Adj MS F p-Value Coeff. Regression 46.308 4 11.5769 34.32 0.000 Const 2.596 3.758 1 3.7584 11.14 0.002 0.352 A() 2.358 1 2.3583 6.9988.34 0.011 -25.59 B() M(u) 29.797 1 29.7966 30.82 0.000 2.261 N(v) 10.394 1 10.3941 0.000 -1.279 Error 16.528 49 0.3373 Total 62.835 53 Regr. Eq.: k = 2.596 + 0.352 A – 25.59 B + 2.261 M – 1.279 N
SE 0.295 0.105 9.68 0.241 0.230
Table 17. Effect of model parameters on the k design variable.
t-Value 8.79 3.34 -2.64 9.40 -5.55
Analysis of Variance Source SS df Adj MS Regression 0.00266 1 0.00266 M(u) 0.00266 1 0.00266 Error 0.02960 52 0.00057 Total 0.03227 53 Regr. Eq.:= -0.00460 + 0.02137 M
F 4.67 4.67
p-Value 0.035 0.035
Coefficients Coeff. Const -0.0046 0.0214
SE 0.00612 0.00988
Table 18. Effect of model parameters on the design variable.
t-Value -0.75 2.16
Analysis of Variance Source SS df Adj MS F p-Value Regression 6.1249 3 2.04164 74.42 0.000 3.2964 1 3.2964 120.16 0.000 A() M(u) 1.7980 1 1.7980 65.54 0.000 N(v) 1.0305 1 1.0305 37.56 0.000 Error 1.3717 50 0.02743 Total 7.4966 53 Regr. Eq.: = 0.7613 – 0.3294 A + 0.5555 M – 0.4027 N
Coefficients Coeff. Const 0.7613 -0.3294 0.5555 -0.4027
SE 0.0636 0.0301 0.0686 0.0657
Table 19. Effect of model parameters on the design variable.
t-Value 11.98 -10.96 8.10 -6.13
Analysis of Variance Source SS Regression 11380.4 1370.8 A() 4788.6 B() C(t1) 485.2 D(t2) 475.5 F(c0) 151.6 G(c1) 1678.2 K(q) 120.2 L(b) 272.2 M(u) 1711.4 N(v) 326.7 Error 887.4 Total 12267.8
df 10 1 1 1 1 1 1 1 1 1 1 43 53
Adj MS 1138.04 1370.82 4788.60 485.24 475.52 151.59 1678.16 120.16 272.23 1711.38 326.70 20.64
F 55.15 66.43 232.05 23.51 23.04 7.35 81.32 5.82 13.19 82.93 25.83
p-Value 0.000 0.000 0.000 0.000 0.000 0.010 0.000 0.020 0.001 0.000 0.000
Coefficients Coeff. Const -33.33 -6.718 1153.3 0.734 0.727 0.410 0.3414 0.810 6.10 17.14 -7.17
SE 5.31 0.824 75.7 0.151 0.151 0.151 0.0379 0.336 1.68 1.88 1.80
t-Value -6.28 -8.15 15.23 4.85 4.80 2.71 9.02 2.41 3.63 9.11 -3.98
Regr. Eq.: EC = -33.33 – 6.718 A + 1153.3 B + 0.734 C + 0.727 D + 0.410 F + 0.3414 G + 0.810 K + 6.10 L + 17.14 M – 7.17 N
Table 20. Effect of model parameters on the EC design variable.
Analysis of Variance Source SS Regression 75753 35623 A( ) N(v) 40129 Error 296246 Total 371999
df 2 1 1 51 53
Adj MS 37876 35623 40129 5806
F 6.52 6.13 6.91
p-Value 0.000 0.017 0.011
Coefficients Coeff. Const 321.8 34.2 79.5
SE 24.1 13.8 30.2
t-Value 13.36 2.48 2.63
Regr. Eq.: ARL0 = 321.8 + 34.2 A + 79.5 N
Table 21. Effect of model parameters on the ARL0 dependent variable.
Analysis of Variance Source SS df Adj MS F p-Value Regression 30690.9 3 30690.9 19.77 0.000 9484.2 1 9484.2 18.33 0.000 A() M(u) 15955.1 1 15955.1 30.84 0.000 N(v) 5251.7 1 5251.7 10.15 0.002 Error 25868.9 50 25868.9 Total 56559.9 53 25658.8 Regr. Eq.: ARL1 = 24.32 – 17.67 A + 52.32 M –28.75N
Coefficients Coeff. Const 24.32 -17.67 52.32 -28.75
SE 8.73 4.13 9.42 9.02
t-Value 2.79 -4.28 5.55 -3.19
Table 22. Effect of model parameters on the ARL1 design variable.
Highlights ▶ We study the ARMA control chart with autocorrelation. ▶ We address the statistical economic design of ARMA control chart. ▶ We compare the proposed approach with other metaheuristic techniques. ▶ A non-parametric test emphasizes the effectiveness of the proposed approach. ▶ A sensitivity analysis involving the cost model parameters has been executed.
STATISTICAL PROCESS CONTROL
ECONOMIC DESIGN 5 5
2 5
1 7
0
1
5
0
1 5
7
5
5
5 1 01 0
0
EVOLUTIONARY COMPUTATION
7