Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles

Expert Systems with Applications 41 (2014) 3460–3476


Kirupa Ganapathy *, V. Vaidehi, Bhairavi Kannan, Harini Murugan
Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, India

Article info

Keywords: Particle Swarm Optimization, Cloud computing, Remote health monitoring, Dynamic environment, Orthogonality, Dynamic Round Robin Scheduling

Abstract

Cloud computing is an emerging technology which deals with real-world problems that change dynamically. The users of dynamically changing applications in the cloud demand rapid and efficient service at any instance of time. To deal with this, the paper proposes a new modified Particle Swarm Optimization (PSO) algorithm that works efficiently in dynamic environments. The proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles (HPSO-OCC) receives requests in the cloud from various resources, employs multiple swarm interaction and implements cyclic and orthogonal properties in a hierarchical manner to provide a near-optimal solution. HPSO-OCC is tested and analysed in both static and dynamic environments using seven benchmark optimization functions. The proposed algorithm gives the best solution and outperforms existing PSO algorithms in terms of accuracy and convergence speed in dynamic scenarios. As a case study, HPSO-OCC is implemented in a remote health monitoring application for optimal service scheduling in the cloud. The near-optimal solution from HPSO-OCC together with a Dynamic Round Robin Scheduling algorithm is used to schedule the services in healthcare. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

An optimization problem refers to the process of minimizing or maximizing the value of an optimization function subject to constraints. It is the process of evaluating the optimization function with selected values of input from a confined set. Evolutionary Computation (EC) is an optimization method with stochastic behavior. ECs work on a set of inputs called a population with an iterative approach. Swarm Intelligence (SI) (Poli, Kennedy, & Blackwell, 2007) is the process of achieving desirable results by a swarm as a whole. This is facilitated by local interactions within the swarm and communication with the environment. Members of the swarm learn from others by synergy and move towards the goal, thereby exhibiting a social behavior. Particle Swarm Optimization (PSO) is an Evolutionary Algorithm (EA) which has become an active branch of SI, simulating swarm behaviors like fish schooling, wasp swarming and bird flocking. PSO differs from other evolutionary optimization algorithms in not employing conventional operators like crossover and mutation. PSO solves the optimization problem by augmenting candidate solutions iteration by iteration. The solutions are represented as particles, the collection of which constitutes the swarm. The particles have distinct properties like

* Corresponding author. E-mail addresses: [email protected] (K. Ganapathy), vaidehi@annauniv.edu (V. Vaidehi), [email protected] (B. Kannan), [email protected] (H. Murugan). 0957-4174/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.10.050

velocity and position that define their state in the search space. These particles move in steps as defined by their velocity, which is determined by their local best known position and the global best position of the swarm. This way, the swarm is expected to converge at an optimum point. In order to avoid premature convergence, the particles may use the best position of a sub-swarm that is formed within the neighborhood of the particle. The particle neighborhood depends on the scheme of the swarm's population topology. PSO is found to be useful in copious applications deployed in dynamic backdrops. Applications are said to be dynamic when their environment undergoes continuous changes such as the advent of a new process, machine failure, unanticipated downtime, network failure, unstable network connectivity and others. Using classic PSO in a dynamic environment is a critical issue: the particles converge to local or global optima over some successive iterations, and the arrival of a new particle makes the converged particles start from scratch to track the new optima. Classic PSO exhibits a major drawback due to loss of diversity. The probability of premature convergence is high when the dimensionality of the search space is large. Another reason for loss of diversity is that the particles move to a single point which is determined by the gbest and pbest. But this point is not guaranteed to be a local optimum. The drawbacks of classic PSO are present in static environments and become even more severe in dynamic problems. Therefore, classic PSO needs to be modified to deal with loss of diversity in real-world problems. Several mechanisms have been adopted to improve the performance of classic PSO in dynamic environments. These mechanisms are dynamic parameter tuning, dynamic network topology, hybridizing PSO


with genetic algorithms, multi-swarm approaches, multi-dimensionality, dynamically changing neighborhood structures, etc. However, there are a number of problems in such Dynamic Optimization Problems (DOP) which remain unsolved. This paper suggests a novel two-level PSO optimization technique as a solution to DOPs, where a dynamic change in one level is overcome by the other. The contributions of this paper for improving the adaptation of PSO in dynamic environments are as follows.

1. Multiple swarms are constructed based on the particles' similarities, in the form of circles/swarms.
2. The particles share information within the circles and undergo conventional PSO for convergence.
3. Selection of a similarly converging circle for information sharing between the swarms employs a special orthogonal array analysis.
4. Hierarchical PSO (second-level PSO) is employed, where the velocity of the gbest of the selected ortho-cyclic circle is used to update the velocity of the competing circle's particles and refine their positions.

A brief discussion of the proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles (HPSO-OCC) is as follows. The algorithm aims to improve the performance of dynamic PSO by offering an accurate best solution and faster convergence speed. In the first level, the swarms are grouped based on similar properties and allowed to undergo conventional PSO. In a topological structure, every swarm discovers the similarly converging neighbor swarms using the cyclic property and selects the best neighbor swarm using orthogonal analysis. The information from the personal best fitness (pbest) of the discovered ortho-cyclic circle is used along with the pbest of the circle and the gbest of all circles to define the velocity and refine the position. Second-level PSO is performed in the current swarm with the updated velocity equation.
The HPSO-OCC algorithm is found to be suitable for numerous applications such as surveillance, military, habitat monitoring, sensor selection, etc. As a case study, the proposed HPSO-OCC with a Dynamic Round Robin scheduler is implemented in a remote health monitoring application for optimal service scheduling. Physiological sensors worn on the patient's body send the vital sign measurements to the remote cloud server. Web-enabled sensor data streams are categorised into normal and abnormal classes. Based on the abnormality percentage, the particles enter the swarms, and HPSO-OCC identifies the optimal patients (particles) to be served with minimum waiting time and schedules them using the dynamic round robin scheduler.

2. Particle Swarm Optimization preliminaries

PSO employs population-based stochastic search (Engelbrecht, 2006; Kennedy & Mendes, 2002) with an iterative learning approach, exploring optima in a multidimensional search space. The algorithm is initialized with a population of particles, where each particle betokens a plausible solution in a d-dimensional space, which is found by utilizing personal memories of the particles and shared information within a specific neighborhood. Each particle Pi has a position vector xi = [xi1, xi2, ..., xid] and a velocity vector vi = [vi1, vi2, ..., vid], where xid ∈ [−100, 100]; i = 1, 2, ..., N (total number of particles); and d = 1, 2, ..., d (Kennedy & Eberhart, 1997). The movements of the particles are guided by their own best known position in the search space, called pbest (pi), as well as the entire swarm's best known position, called gbest (g). This is called Global PSO (GPSO). A second version of PSO exists, called Local PSO (LPSO), that considers the best known position in a particle's


neighborhood (lbest) instead of gbest. The neighborhood is defined based on the topology used, as shown in Fig. 1. The vectors vi and xi are randomly initialized and are revised based on Eqs. (1) and (2):

vid = ω·vid + φp·rp·(pid − xid) + φg·rg·(gd − xid)    (1)

xid = xid + vid    (2)

The parameters ω, φp, and φg, which are selected by the practitioner, regulate the behavior and efficacy of the PSO algorithm. The coefficients φp and φg are the cognitive and social acceleration factors (Zhan, Zhang, Li, & Chung, 2009) that can take values in the range [0, 4] (1.49 and 2.00 are the most commonly used values), where φp is the personal accelerator and φg is the global accelerator. A high value of the inertia weight ω (Shi & Eberhart, 1998) advocates exploration, and a small value patronizes exploitation. ω is often linearly decreased from a high value (0.90) to a low value (0.40) along the generations of PSO. The search space of the flying particles is limited to the range [xmin, xmax]. Their velocities are regulated within a reasonable limit, which is taken care of by the parameter vmax. vmax is a positive value that determines the maximum step one particle can take during one iteration, and is generally set to the value xmax − xmin. The fitness of each particle is calculated in every generation using a fitness function f. The functioning of classical PSO is shown in Algorithm 1.

Algorithm 1. PSO
BEGIN
  (a) UNTIL a termination criterion is met (i.e., number of iterations performed), REPEAT
    (b) FOR EACH particle i = 1, ..., N DO
      (i) FOR EACH dimension d = 1, ..., d DO
        (1) Pick random numbers: rp, rg ~ U(0, 1)
        (2) Update the particle's velocity: vid ← ω·vid + φp·rp·(pid − xid) + φg·rg·(gd − xid)
      (ii) Update the particle's position: xid ← xid + vid
      (iii) IF f(xi) < f(pi) THEN
        (1) Update the particle's best known position: pi ← xi
        (2) IF f(pi) < f(g) THEN update the swarm's best known position: g ← pi
  RETURN g
END
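Algorithm 1 can be sketched in Python as follows. This is a minimal global-best PSO minimizing a sample sphere benchmark; the specific parameter defaults (ω = 0.729, φp = φg = 1.49) are common choices in the literature rather than values prescribed by this paper, and the velocity clamp is simplified to a position clamp for brevity.

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-100.0, hi=100.0,
        w=0.729, phi_p=1.49, phi_g=1.49):
    """Minimal global-best PSO (Algorithm 1). Returns the best position found."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best known positions
    gbest = min(pbest, key=f)[:]         # swarm's best known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                rp, rg = random.random(), random.random()
                # Eq. (1): inertia + cognitive + social components
                vel[i][d] = (w * vel[i][d]
                             + phi_p * rp * (pbest[i][d] - pos[i][d])
                             + phi_g * rg * (gbest[d] - pos[i][d]))
                # Eq. (2): position update, clamped to the search range
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)  # benchmark function f(x) = sum of x_d^2
best = pso(sphere, dim=2)
```

On unimodal benchmarks such as the sphere function, this loop typically drives the swarm close to the origin within a few hundred iterations.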

The remainder of this paper is organised as follows. Section 3 describes the related works on PSO algorithms. Section 4 elaborates the proposed HPSO-OCC algorithm. Section 5 describes the application of HPSO-OCC in a remote healthcare application. Section 6 presents the results and discussion, with the experimental set-up, performance analysis of HPSO-OCC against existing PSO algorithms, and performance of HPSO-OCC on optimization functions, respectively. Section 7 discusses the conclusions.

3. Related works

The PSO algorithm is one of the evolutionary algorithms that solve optimization problems. Recently, many real-time problems have been solved using PSO due to its simplicity. Many researchers have modified classic PSO to address problems such as loss of diversity, outdated memory, reliability, convergence speed, etc. Several techniques have been applied to improve the traditional PSO search mechanism in static and dynamic environments. In recent years, investigation of PSO in changing-environment problems has become one of the most important issues for real-time applications. To solve the issues,


Fig. 1. Different types of particle topologies: (a) star topology used in gbest, (b) ring topology used in lbest, (c) von Neumann topology, (d) four clusters topology.

various dynamic Particle Swarm Optimization methods have been experimented with and tested (Blackwell & Branke, 2006; Greef & Engelbrecht, 2008; Hu & Eberhart, 2002). Major techniques used to enhance the search performance of dynamic optimization are parameter tuning, multi-swarm schemes (Zhao, Nagaratnam & Das, 2010), multi-dimensionality (Kiranyaz, Pulkkinen, & Gabbouj, 2011), update of particle velocity & position (Chen & Li, 2007; Liang & Qin, 2006), hybridized PSO (Shen, Shi, & Kong, 2008), speciation (Li, Branke, & Blackwell, 2006; Rezazadeh, Meybodi, & Naebi, 2011), exclusion & anti-convergence (Blackwell & Branke, 2006) and topology investigation (Li & Dam, 2003). A few techniques recently developed to improve the global convergence performance of dynamic PSO, and their issues, are briefly discussed below. Liang and Suganthan (2005) proposed a modified dynamic Particle Swarm Optimization using the quasi-Newton method to achieve faster convergence velocity. This algorithm constrains the particle within the range and calculates the fitness to update the pbest only if the particle is within the range. But the results are not satisfactory for large populations; the grouping & exchange of information between the swarm particles is randomized and fails to achieve the global optimum in a dynamic environment. Li and Yang (2008) developed a fast multi-swarm dynamic PSO where the swarm of particles identifies the gbest and forms a sphere of radius r which acts as a sub-swarm with the gbest as its centre. Child swarms are repeatedly formed for every gbest. If the distance between two child swarms is less than r, the worse one is eliminated to avoid overlap. But the work is completely based on the selection of the radius r. This radius is randomly selected from a range of values; no proper concept is given for the selection of the radius. Liu, Yang, and Wang (2010) presented a mechanism to track promising peaks in dynamic environments.
Composite particles are constructed using a worst-first principle and spatial information for efficient information sharing. A velocity-anisotropic reflection point is generated to replace the worst particle for better direction in the search space. However, this algorithm does not focus on multiple-swarm interaction, and it involves high time complexity if the number of particles in the swarm is large. Connolly, Granger, and Sabourin (2012) proposed an incremental-learning-strategy-based Dynamic Particle Swarm Optimization (DPSO). The weights and the architecture of the pool of learning networks are optimized using DPSO. User-defined hyper-parameters are given to each classifier in the swarm, and the optimal hyper-parameters are stored and managed in long-term memory. The classifier with accuracy and diversity is selected and combined from the swarm for the given testing data. But this work is closer to static PSO, as it does not focus on the maintenance of diversity around each local optimum in a dynamic environment. Omranpur, Ebadzadeh, Shiry, and Barzegar (2012) focused on dynamic Particle Swarm Optimization to find the best local optima by searching the backspace from the local minimum obtained. If any one particle is in a local minimum, the other particles nearby also converge to it. To overcome this, every particle in the search space is given a predefined pbest and pworst. As the particle enters the space, the pbest and pworst of

every particle are compared. So the particle searches the backspace to get the best extremum. However, this technique fails to discuss the values assumed for pbest and pworst. Hernandez and Corona (2011) proposed multi-swarm PSO in dynamic environments to manage outdated memory and diversity loss. The swarm is divided into two groups, an active swarm and a bad swarm, based on the quality (fitness) of the particles. A hypersphere is drawn around the gbest of the swarm with a certain radius. A control rule is given to identify and select the swarm that is moving towards the optimum solution. The remaining sub-optimal swarms are not considered, as they take significant computational resources. However, the information from the neighbor particles outside the hypersphere is not considered in achieving promising optima. This may worsen the performance in a high-dimensional search space; that is, a particle not in the hypersphere may become an active swarm particle after a few iterations. Cai and Yang (2013) developed dynamic parameter tuning in PSO for multi-target detection using multiple robots. Sensors in the robots (particles) sense the direction and the sensing distance. The objective of this multi-robot target detection is that when a robot detects some targets, the percentage of area left uncovered by the targets is estimated. This facilitates the other robots in changing their direction and distance. Parameters such as the closeness of a robot to the others, their relative directions and the area are estimated to change the velocity and the position of the particles reaching the target. But the number of targets is fixed, which definitely fails to handle a dynamic environment. This algorithm is application-specific and will give poor performance for most other applications. The existing techniques just try to improve the search mechanism, convergence speed and diversity by introducing a new parameter.
This new parameter is assigned a random value or threshold and thus yields limited improvement in dynamic environments. Randomizing a parameter in the swarm may eliminate some important information learnt previously when it is tuned to another random value in a totally new environment. Parameter tuning of this kind fails to retain a reliable solution when there is an increase in population size. The problem becomes even more challenging in higher dimensions and multimodal problems. Weak particles that are not within the specified radius are eliminated, and knowledge from the weaker ones is not used by the fitter particles, thus creating a high possibility of an inaccurate solution (local optimum). An algorithm with proved mathematical concepts can be applied to any application in any scenario, rather than relying on randomization of the parameters. Another major drawback of traditional PSO and the above-mentioned PSOs is that the algorithm is applied to a search space of fixed/single dimensionality. A few PSO algorithms conclude that the number of dimensions is equal to the number of swarms. But in dynamic scenarios, the number of particles entering the swarm cannot be predicted, and therefore swarm generation is also dynamic; that is, the number of swarms changes over time. Kiranyaz et al. (2011) developed a multi-swarm multidimensional PSO algorithm which initially creates an artificial particle in every swarm, and this selects the best particle


location from the entire swarm iteration by iteration. If the artificial particle is better than the gbest of the swarm, the gbest is replaced by the location of the artificial particle. Traditional PSO is used for positional optima with velocity updates, and navigation of the particle through dimensions is done for dimensional optima. When a particle re-enters, it remembers its position, velocity and pbest in its dimension. The personal best fitness score achieved is the optimum solution and dimension of the particle. However, this set-up enables convergence speed and diversity gain only for a small number of particles in a swarm. Also, the gbests of the swarms get disturbed if a new particle enters with no previous personal best fitness score. A known particle stores its previous best information and attains its position when it re-enters. Even in a dynamic scenario, the known particle identifies its best position and reaches the optimum soon, but an unknown particle takes more time to adapt to the system and adjust its parameters with the known particles. Real-time problems will definitely have varying populations and cannot be restricted to a certain population size; then the algorithm itself proves not to be global. Researchers have developed hybridized PSO techniques with other optimization algorithms, such as PSO with GA, PSO with Neural Networks, etc. Most of the hybridization techniques are presented to optimize the tuning parameters and network structure of the other optimization algorithm using PSO. Recent hybrid PSO models use operators and collaborative populations to maintain diversity of the swarm in dynamic environments. Korurek and Dogan (2010) performed ECG beat classification using a radial basis function neural network and PSO. R-peaks of the ECG beats are extracted for classification. The centres and the bandwidths of the hidden neurons are optimized using Particle Swarm Optimization.
The particles are the centres and bandwidths specifying the solution. Classical PSO is applied in a search space where the classification performance is improved with 10 or fewer neurons in the hidden layer. This work is a hybrid technique which uses a neural network and PSO for accurate classification. Recent research in PSO algorithms incorporates Orthogonal Experimental Design (OED) in evolutionary algorithms to improve the search performance significantly. Leung and Wang (2001) proposed an orthogonal genetic algorithm using OED to improve the crossover operator and population inversion. Hu, Zhang and Zhong (2006) introduced OED on chromosomes to identify the better solutions in the environment. Ho, Lin, Liauh, and Ho (2008) proposed a particle move behavior using orthogonal experimental design to adjust the velocity in high-dimensional vector spaces. The intelligent move mechanism generates two temporary moves and produces a good combination of partial vectors from these moves. The good combination selected using the orthogonal effect is the next move of the particle. This orthogonal PSO is applied to the task assignment problem and compared with traditional PSO in a static environment. However, the algorithm's performance depends on the random initialization of the two temporary moves. Also, it is not experimented with for multi-swarm interaction in a dynamic environment. In this paper, we propose an entirely different approach to improving PSO performance in dynamic environments by applying the cyclic property, an orthogonal technique and two-level PSO. To overcome the issues of the existing work, the proposed algorithm emphasizes multi-swarm neighbor interaction, weak-particle encouragement, higher dimensionality, unimodal and multimodal problems, population size, dynamic swarm generation and stable optima tracking, and supplies strong mathematical concepts rather than randomization of parameters.
The proposed algorithm employs a multi-swarm strategy, where swarms are formed based on likeness among the particles. This multi-swarm strategy adopts interaction of a swarm with the neighbor swarms of similar property. It is less time-consuming to interact with a limited number of swarms


rather than with the swarms in the entire search space. Neighbor swarms are identified using the cyclic property for information sharing. Orthogonal test design is a robust design approach used to balance the convergence speed and the diversity of the particles. Instead of searching for better solutions among all the swarms evenly distributed in the search space, the orthogonal strategy is a less time-consuming approach of searching for the global best solution among the limited neighbor swarms (cyclic swarms). Selection of the swarm is the process of identifying the best combinations of the factors that affect the chosen solution variables. The chosen better swarm's gbest particle guides the weaker particles newly entering another swarm to move in the correct direction in later iterations. In a changing environment, fewer or more particles newly enter their respective swarms and move in the direction of their respective orthogonal swarm's gbest. The advantages of the proposed method are summarized as follows. The particles in the swarm exploit information from another particle, in addition to the best-fit particle. Weak particles interact with fitter particles in order to progress towards promising optima. On the advent of a new particle, the swarm interacts with neighboring promising swarms, in the interest of attaining quick stability. The removal of a particle from the swarm has a disastrous effect on swarm convergence, especially when the removed particle is a gbest or an lbest; hence, the swarm communicates with similarly converging swarms to regain momentum. The experience of the weaker particles is used to educate the swarm about the discouraging positions in the search space. The changes in one swarm are ensured not to affect the diversity in other swarms. The optima in the multi-swarms are discovered and tracked simultaneously to improve the performance of the algorithm. The particles' memory becomes unreliable after a change in the environment, causing outdated memory.
In this approach, the memory is recovered instead of being re-evaluated or forgotten. Thus a robust and statistically sound PSO algorithm is developed to handle large populations, high dimensions, and unimodal and multimodal problems in dynamic environments.

4. The proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles in dynamic environments

4.1. Circle formation

HPSO-OCC employs a new construct, termed circles, which are treated as analogous to particles. Circles are formed based on similarity in property among the particles in the search space; these swarms are denominated ''Circles''. Each circle has a Circle Representative (CR) that symbolizes the properties of that circle. A circle is represented by its best-fit particle, known as the CR. Thus, the particles in the PSO landscape are partitioned into multiple swarms called circles, and the global best particle in each circle is considered to depict the behavior of the respective circle. The CRs are treated as high-level particles and the particles within circles are treated as low-level particles. The working of this two-level PSO is depicted in Fig. 2. The low-level particles are allowed to participate in dedicated classic PSO threads and the best particle is continuously tracked. Thus, there are as many gbests (or CRs) as the number of circles. When a circle is said to participate in the novel PSO with the ortho-cyclic property (at the higher level of the hierarchy), it is the respective CR that participates in the algorithm in actuality. The hops made by the CRs influence the other particles conforming to that circle, and thereby guide the particles to fly towards the global optimum even after an unanticipated change in the environment. This assures that particles in each circle exploit information from another particle, i.e., the best particle in other circles, in addition to the best-fit particle within itself. Also, the parallel discovery and tracking of optima in circles
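The circle bookkeeping described above can be sketched as follows. The grouping key (banding 1-D particles by value) and the fitness function are illustrative assumptions for demonstration; the paper itself leaves the similarity property application-dependent.

```python
from collections import defaultdict

def form_circles(particles, similarity_key):
    """Partition particles into circles of like particles (Section 4.1).
    `similarity_key` maps a particle to a hashable group label."""
    groups = defaultdict(list)
    for p in particles:
        groups[similarity_key(p)].append(p)
    # Assign Circle Identity Numbers 1..N so adjacent CINs hold similar groups
    return {cin: members
            for cin, (_, members) in enumerate(sorted(groups.items()), start=1)}

def circle_representative(members, fitness):
    """The CR is the best-fit (here: minimum-fitness) particle of the circle."""
    return min(members, key=fitness)

# Illustrative use: 1-D particles grouped into bands of width 10, with a
# hypothetical fitness of distance from the point 20.
particles = [3.2, 7.1, 14.5, 18.0, 25.9, 41.3]
circles = form_circles(particles, similarity_key=lambda p: int(p // 10))
crs = {cin: circle_representative(m, fitness=lambda x: abs(x - 20))
       for cin, m in circles.items()}
```

Each circle then runs its own classic PSO thread, while the CRs (`crs` here) take part in the higher-level algorithm.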


Table 1. Factors and levels representation.

Fig. 2. Circles in Hierarchical Particle Swarm Optimization.

guarantees commendable improvement in the performance of the algorithm. The memory of the particles in a circle under environmental changes is now updated by the interaction of its CR with other CRs, and in turn the interaction of this CR with the circle's particles, thereby solving the outdated memory problem. The interaction of CRs at the higher level is a substantial process, as this phase of the algorithm decides the ability of the proposed PSO to operate in dynamic environments. Apart from the information from the pbest of the CR and the gbest of all gbests (the gbest of CRs), a third parameter is used to determine the velocity and the new position. This parameter is decided employing the orthogonal and cyclic properties of the CRs. The orthogonality property, inspired from OLPSO, helps to reveal useful information from the surrounding CRs, resulting in efficient search in complex problem spaces. The cyclic property helps to ensure that the CRs, after an environmental disturbance, are guided by similarly converging CRs. This depends on the property used to construct the circles, and circles with similar properties are expected to converge in a similar fashion. Each CR has two cyclic neighbors, and of these two, an orthogonal CR is discovered. The information from the personal best fitness (pbest) of the discovered ortho-cyclic CR is used along with the pbest of the CR and the gbest of all CRs to define the velocity and refine the position.

4.2. Formation of Ortho-Cyclic Circles

Discovery of an OCC involves four steps, which are described below. The first step in OCC discovery is the selection of cyclic circles. The circles are allotted a CIN (Circle Identity Number) during their construction. If there are N circles, the CIN ranges from 1 to N. Circles with CIN (a − 1) mod N and (a + 1) mod N are said to be cyclic to the circle with CIN a (1 ≤ a ≤ N). This is unlike a ring neighborhood, where position is used as the base criterion.
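The cyclic-neighbor rule above can be sketched as a small helper; since CINs run from 1 to N, the modular arithmetic is shifted to wrap around within 1..N (an implementation detail assumed here, not spelled out in the paper).

```python
def cyclic_neighbors(cin, n_circles):
    """Return the two cyclic-neighbor CINs of a circle (1-based CINs).
    Implements the (a - 1) mod N / (a + 1) mod N rule with 1..N wrap-around."""
    prev_cin = (cin - 2) % n_circles + 1  # (a - 1), wrapping 1 -> N
    next_cin = cin % n_circles + 1        # (a + 1), wrapping N -> 1
    return prev_cin, next_cin
```

For example, with N = 5 circles, circle 1 has cyclic neighbors 5 and 2, and circle 5 has neighbors 4 and 1.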
The second step is the construction of an orthogonal array, which follows the selection of the cyclic circles. An example is given in Table 1 to introduce the basic concept of experimental design methods. The fitness function depends on three variables (factor 1, factor 2 and factor 3), which are called the factors of the experiment. Each factor has three possible values; each discriminative value of a factor is called a level of the factor. P1, P2, P3 are the values of factor 1 in 3 levels. Similarly, Q1, Q2, Q3 are the values of factor 2 in 3 levels and R1, R2, R3 are the values of factor 3 in 3 levels. The factors are the parameters which affect the fitness function. The number of factors and the number of levels vary depending on the application variables. For example, the objective function of minimum service waiting time is affected by variables (factors) such as signal strength and bandwidth. The orthogonal strategy involves the

Level   Factor 1   Factor 2   Factor 3
1       P1         Q1         R1
2       P2         Q2         R2
3       P3         Q3         R3

construction of an orthogonal array to discover the potentially best combination of levels through factor analysis. From Table 1, there are 3^3 = 3 × 3 × 3 = 27 combinations. The orthogonal statistical design is used here to reduce the complexity of every particle's interaction with the neighbor particles in a large population. If the number of factors and levels is large, the number of combinations will be larger still; it is therefore desirable to select a certain representative set of combinations for experimentation. Orthogonal array and factor analysis is an efficient technique proposed for the systematic selection of the neighbor swarm particle with the best factors in some combinations. The conventional generate-and-go method interacts with all the swarm particles to find the better solution. Thereby, the proposed orthogonality reduces the diversity of the particles and increases the convergence speed. The orthogonal strategy used in this work selects the best factors from the possible level combinations, and the cyclic circle whose gbest is at or near this best factor combination is considered the similarly converging circle for interaction/information sharing. If there are two factors and three levels, then 3^2 combinations of experimental designs are possible. The algorithm identifies the orthogonal circle using an Orthogonal Array (OA) (Yang, Bouzerdoum, & Phung, 2010), denoted LM(Q^N), where L denotes the OA (i.e., a Latin square), M is the number of combinations, N represents the number of factors and Q is the number of levels per factor. For example, the L9(3^4) OA obtained by the algorithm, as given in Appendix I for a 3-level 4-factor OA, is

$$
L_9(3^4) =
\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 2 & 2 & 2 \\
1 & 3 & 3 & 3 \\
2 & 1 & 2 & 3 \\
2 & 2 & 3 & 1 \\
2 & 3 & 1 & 2 \\
3 & 1 & 3 & 2 \\
3 & 2 & 1 & 3 \\
3 & 3 & 2 & 1
\end{pmatrix}
\quad (3)
$$

L9(3^4) can be utilized for applications with at most four factors. For the three-level, three-factor example of Table 1, 27 combinations are possible, and by applying the orthogonal array nine representative combinations are selected as the best combinations. These nine combinations are shown in Table 2.

Table 2
Best combinations of factors and levels.

Combination   Factor 1   Factor 2   Factor 3
1             P1         Q1         R1
2             P1         Q2         R2
3             P1         Q3         R3
4             P2         Q1         R2
5             P2         Q2         R3
6             P2         Q3         R1
7             P3         Q1         R3
8             P3         Q2         R1
9             P3         Q3         R2
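As an illustration, the defining property of the L9(3^4) array in Eq. (3) can be checked programmatically. The following Python sketch (illustrative only; the paper's system is implemented in Java) verifies that for every pair of columns, each ordered pair of levels occurs exactly once:

```python
from itertools import combinations

# L9(3^4) orthogonal array from Eq. (3): 9 combinations, 4 factors, 3 levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array, levels=3):
    """Defining OA property: in every pair of columns, each ordered
    pair of levels occurs equally often (here, exactly once)."""
    n_cols = len(array[0])
    all_pairs = sorted((a, b)
                       for a in range(1, levels + 1)
                       for b in range(1, levels + 1))
    for c1, c2 in combinations(range(n_cols), 2):
        pairs = sorted((row[c1], row[c2]) for row in array)
        if pairs != all_pairs:
            return False
    return True

print(is_orthogonal(L9))  # True: any two columns are mutually independent
```

Because the columns are pairwise independent, a two-factor application can use the same array by simply omitting two columns, as noted in the text.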

K. Ganapathy et al. / Expert Systems with Applications 41 (2014) 3460–3476

An orthogonal array is an array of rows and columns in which each row represents a combination of levels and each column represents a specific factor; e.g., (1, 1, 1) = (P1, Q1, R1), and similarly for the other combinations. The array is orthogonal in the sense that each column is independent of the others. This implies that even a two-factor application can use this OA by omitting any two columns. Thus we get M combinations to be tested for maximum benefit from the OA. The next step is factor analysis, which explores the best among the M combinations (Mendes, Kennedy, & Neves, 2004). For each combination m (1 ≤ m ≤ M), a factor value f_m is calculated. Furthermore, a binary value B_mnq is calculated: B_mnq is set to 1 if the level of the nth factor (1 ≤ n ≤ N) is q (1 ≤ q ≤ Q) in the mth combination, and 0 otherwise. The effect of the qth level on the nth factor is denoted by E_nq, calculated as the sum of all factor values f_m in which the level of the nth factor is q, divided by the sum of the corresponding binary values B_mnq, as given in Eq. (4).

$$
E_{nq} = \frac{\sum_{m=1}^{M} f_m \cdot B_{mnq}}{\sum_{m=1}^{M} B_{mnq}} \quad (4)
$$
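As a minimal illustration of the factor analysis in Eq. (4), the following Python sketch computes the effect values E_nq for a toy two-factor, two-level design; the array and the fitness values f_m are hypothetical, not taken from the paper's experiments:

```python
# Factor analysis per Eq. (4): the effect E[(n, q)] of level q on factor n is
# the mean factor value f_m over all combinations m whose nth factor is at
# level q (the binary B_mnq simply selects those rows).
def effect_values(oa, fitness):
    n_factors = len(oa[0])
    levels = sorted(set(level for row in oa for level in row))
    E = {}
    for n in range(n_factors):
        for q in levels:
            fs = [fitness[m] for m, row in enumerate(oa) if row[n] == q]
            E[(n, q)] = sum(fs) / len(fs)
    return E

# Toy 2-factor, 2-level full factorial with hypothetical fitness values
# (a minimization problem: smaller is better).
oa = [(1, 1), (1, 2), (2, 1), (2, 2)]
fitness = [4.0, 2.0, 3.0, 1.0]
E = effect_values(oa, fitness)
best_levels = {n: min((1, 2), key=lambda q: E[(n, q)]) for n in (0, 1)}
print(best_levels)  # {0: 2, 1: 2}: level 2 has the best (lowest) effect value
```

For a minimization objective, the level with the best effect value per factor gives the predicted best combination, here (level 2, level 2).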

This helps to find the effect of each level on each factor, and the level with the maximum effect value is considered the best combination. Identification of the OCC follows: the circle whose CR produces the maximum effect value is considered the best particle for the memory update. Since this circle has a cyclic relationship with the current circle and is found to be the best using OA factor analysis, it is chosen as the Ortho-Cyclic Circle (OCC). The gbest of this OCC's CR is considered while calculating the new velocity, position and fitness of the particles in the current circle.

4.3. Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circle

The proposed HPSO with OCC embeds the properties of circles, orthogonality, cycles and hierarchy to handle dynamic complex problems using PSO. The steps of the algorithm are described below. Construction of circles is done based on similarity in the properties of the particles; each circle is thus a typical multi-swarm of like particles. The circles are identified by a Circle Identity Number (CIN), allotted such that circles with adjacent CINs have similar properties. Therefore, dynamic changes in one circle can be handled by considering information from circles with adjacent CINs. Each circle is represented by a CR, which is the gbest particle within the circle. The classic PSO algorithm described in Section 2 is actuated for the particles within each circle; these particles update their velocity and position as per Eqs. (1) and (2) respectively. This is done concurrently and continuously for all circles by allocating a dedicated thread to each circle. The next step is the selection of the CR. The concurrent execution of PSO within the circles leads to continuous change of their gbest; this change is tracked iteration by iteration, and the CR is selected and updated accordingly.
In parallel with the classic PSO within the circles, an Ortho-Cyclic Circle is found for each circle, and Particle Swarm Optimization is executed for the CRs using the velocity of the CR of the OCC. The velocity update with which this second-level PSO is actuated is given in Eq. (5); the position of the CR is updated using Eq. (6).

$$
\hat{v}_{bid} = \omega \hat{v}_{oid} + \varphi_p r_p (\gamma_{id} - \hat{x}_{bid}) + \varphi_g r_g (\rho_d - \hat{x}_{bid}) \quad (5)
$$

$$
\hat{x}_{bid} = \hat{x}_{bid} + \hat{v}_{bid} \quad (6)
$$
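The second-level update of Eqs. (5) and (6) can be sketched as follows. This is an illustrative Python rendering (the paper's system is implemented in Java); the default ω = 0.95 matches the inertia reported in Section 6.1, while φ_p = φ_g = 2.0 are common PSO defaults assumed here:

```python
import random

# One second-level PSO step for a circle representative (CR).
# The key difference from classic PSO: the inertia term uses the velocity of
# the OCC's CR (vel_occ), not the CR's own previous velocity (Eq. (5));
# the position then moves by the new velocity (Eq. (6)).
def update_cr(pos, vel_occ, gbest_circle, gbest_swarm,
              omega=0.95, phi_p=2.0, phi_g=2.0, rand=None):
    rand = rand or random.random
    vel = [omega * vo
           + phi_p * rand() * (g - x)
           + phi_g * rand() * (s - x)
           for x, vo, g, s in zip(pos, vel_occ, gbest_circle, gbest_swarm)]
    pos = [x + v for x, v in zip(pos, vel)]
    return pos, vel
```

With deterministic random draws the arithmetic is easy to check by hand, e.g. ω = φ_p = φ_g = 1 and r = 0.5 from the origin gives a velocity of 1 + 0.5·2 + 0.5·4 = 4 per dimension for the inputs below.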


Here, x̂_bi is the position of CR_i, v̂_bi its velocity, and v̂_oi the velocity of the CR of the OCC of circle i. The velocity update applied to the CR is reflected in all the particles within that circle in the subsequent generation of classic PSO. This helps the particles in all circles to achieve promising global optima, even after an environmental change. Classic PSO, selection of the CR and PSO-OCC are repeated continuously and simultaneously until all the particles in the search space attain promising optima.

Let N be the number of circles in the swarm. The position and velocity of the CR of circle i (CR_i) are those of the gbest particle within that circle, represented as x̂_bi and v̂_bi respectively. Let j_i be the number of particles within circle i, where particle P_ij has position x_ij and velocity v_ij. The fitness of each particle is calculated generation by generation using a fitness function f. Let p_ij be the pbest of particle P_ij and γ_i the gbest of circle i; that is, γ_i is the best known position of CR_i. Let ρ be the best known position of the entire swarm (the gbest of the CRs) and d the number of dimensions. The velocity of the CR of the OCC is denoted v̂_oid. The algorithm of HPSO-OCC is presented in Algorithm 2.

Algorithm 2. HPSO-OCC
Begin
(1) Construct circles.
(2) FOR EACH circle C_i, i = 1, ..., N DO
    (a) FOR EACH particle P_ij, j = 1, ..., j_i DO
        (i) Initialize the particle's position with a uniformly distributed random vector: x_ij ~ U(b_low, b_up), where b_low and b_up are respectively the lower and upper boundaries of the search space.
        (ii) Initialize P_ij's velocity: v_ij ~ U(−|b_up − b_low|, |b_up − b_low|).
        (iii) Initialize the particle's best known position to its initial position: p_ij ← x_ij.
    (b) IF f(γ_i) < f(ρ) THEN
        (i) Update the swarm's best known position: ρ ← γ_i.
(3) UNTIL the number of iterations is performed, REPEAT
    (a) FOR EACH circle C_i, i = 1, ..., N DO
        (i) Perform classic PSO.
        (ii) Update γ_i of C_i with the return value of classic PSO.
    (b) FOR EACH circle C_i, i = 1, ..., N DO
        (i) Compute the OCC for circle C_i.
        (ii) Calculate v̂_bid ← ω v̂_oid + φ_p r_p (γ_id − x̂_bid) + φ_g r_g (ρ_d − x̂_bid).
        (iii) Calculate x̂_bid ← x̂_bid + v̂_bid.
        (iv) IF f(x̂_bi) < f(γ_i) THEN
            A. Update CR_i's best known position: γ_i ← x̂_bi.
        (v) IF f(γ_i) < f(ρ) THEN
            A. Update the swarm's best known position: ρ ← γ_i.
END

The behavior of the algorithm is expected to be outstanding in static as well as dynamic environments. When there are no changes in the environment, a circle chooses itself as its OCC and carries out an algorithm like classic PSO. However, when there is an unexpected change in the environment, the circle chooses one of its two cyclic circles as the OCC and updates itself with the information from the CR of that OCC. The algorithm therefore avoids the necessity of re-diversification, thereby reducing convergence time.

4.4. Behavior of HPSO-OCC in dynamic environment

The HPSO-OCC algorithm is found to handle dynamic conditions with good performance, taking advantage of the two levels of optimization. The behavior of the proposed algorithm and classic



Table 3
Behaviour of HPSO-OCC and PSO in dynamic environments.

Arrival of a new particle
- HPSO-OCC: The new particle enters one of the circles based on its abnormality value. It undergoes optimization, influenced by the other particles within the circle as well as by the other CRs, and therefore attains optimization quicker than in classic PSO.
- Classic PSO: The new particle enters a swarm that has already converged. Once the swarm has converged, particles lose their ability to track new optima, since the new velocity is affected by the far-lagging newly arrived particle. This heavily deteriorates performance.

Deletion of an existing particle (worst case: deletion of the best particle)
- HPSO-OCC: The deletion of the best particle may initially affect the state of the particles in the circle, but in the second level of the algorithm, where the CRs undergo Ortho-Cyclic PSO, the CR of the affected circle is rectified and stabilized by its Ortho-Cyclic Circle.
- Classic PSO: If the best particle of the swarm is deleted, the state of the entire swarm is affected, as all particles rely on the position of the best particle to attain optimization.

PSO in dynamic environments is illustrated in Table 3, showing that HPSO-OCC performs better than classic PSO.

5. A case study: remote health monitoring of chronic heart disease patients with optimal service scheduling in cloud

Healthcare is the examination, medication and prevention of physical and mental ailments by medical practitioners. The storage, processing and retrieval of healthcare information, with the application of computer hardware and software for communication and decision making, is referred to as Health Information Technology (HIT). However, existing HIT applications suffer from serious drawbacks: unavailability of servers for huge data storage for all patients, difficulty in serving a large number of patients, non-categorisation of data requests, absence of prioritisation of users, and no way for physicians to get the current physical status and image of abnormal patients. These issues stem from access delay, unsupported hardware, infinitely large access and huge data storage. The developed system overcomes these issues by porting the application to the cloud and supporting a high level of customization. Integration of healthcare with cloud computing services is a recent area of interest among researchers, identified to solve the above issues by offering valuable advantages such as scalability, efficient data sharing and cost. Cloud computing is the on-demand provisioning of computing resources, with an assurance of good Quality of Service (QoS) at an adaptable service rate (Vaquero, Rodero-Merino, & Caceres, 2008). It is a service-oriented approach with three types of services (Furht, 2010): Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). IaaS clouds provide virtual resources to store data, reducing maintenance cost compared with the number of servers hospitals would otherwise require. PaaS provides tools to the user for supporting the applications deployed in the cloud. SaaS helps to improve the performance of the healthcare software in terms of throughput and service time. Thus cloud computing ensures that healthcare services are on-demand, elastic and pay-per-use. However, existing cloud healthcare applications suffer from data redundancy: the normal as well as abnormal data of all patients are stored in the cloud server, while it is sufficient to store the data of the abnormal patients alone. Also, the healthcare software must be ensured to work in dynamically changing environments with uncompromised performance. With an alarming rate of requests to the server, the scheduling of services by the cloud server is a yet-to-be-solved issue. This paper proposes a scheme that optimizes the scheduling application of the cloud server using a novel Particle Swarm Optimization (PSO) algorithm, thus improving performance, accuracy and scalability. Existing scheduling schemes do not consider the degree of severity of the patients. When the percentage of abnormality is used as the factor for scheduling, a problem arises when a

number of patients have the same level of abnormality. Another factor, the service waiting time, is therefore introduced: the maximum time a patient can wait before being served. This time is optimized using the proposed algorithm, thereby ensuring minimum waiting, so that the most critical patients are served first. The described healthcare application facilitates sharing of medical records among physicians, patients, caregivers and professionals, coupled with optimized scheduling of services to the abnormal patients. The application is integrated with bio-sensors, the patient's Personal Computer acting as the gateway server, and a cloud database, and uses HPSO-OCC for optimized scheduling of services. The developed system is found to guarantee optimal performance and prioritized access to cloud data. The evolution of wireless sensor networks led to the development of body area networks. Wireless wearable sensors of small size and light weight are placed on the body of the patient and monitored continuously to detect abnormality in vital signs. If a vital sign exceeds its threshold value, an alert is generated to the caregiver/physician for further treatment. A BioHarness 3 (Zephyr BioHarness) chest-strap body sensor node is placed on the body of the subject. The BioHarness 3 consists of ECG, Heart Rate and Respiration Rate sensors and communicates with the gateway server via Bluetooth. A wrist-worn wireless sensor node is used for sensing blood pressure, and a fingertip-worn sensor (Nonin pulse oximeter) is used to sense the oxygen saturation level (SPO2). These physiological parameters are of major importance for chronic heart disease patients. Remote monitoring does not restrict the mobility of the patient, whereas Bluetooth can send data only within a limited range of 100 m; therefore, a PDA with Wi-Fi technology is placed at the waist of the patient for communication.
The gateway server runs a screener application, which computes the percentage of anomaly of the patient from the received sensor data and identifies whether the patient is abnormal. If the patient is found abnormal, an image of the patient is captured by a camera controlled by the gateway server and is sent to the cloud server through the Internet, along with the data of the patient. The percentage of anomaly, a, is calculated from various healthcare parameters such as ECG, Heart Rate, Respiration Rate, Blood Pressure, Skin Temperature and SPO2, and from history details such as age, gender, alcohol consumption and smoking habit. The cloud server processes the received data and forwards it to the registered physicians, who can interpret the patient's condition for further analysis.

5.1. System architecture of remote health monitoring system in cloud

The overall working principle of the remote health monitoring system is shown in Fig. 3, and the various modules of the monitoring system are explained in detail in this section. The gateway server receives the sensor data and computes the percentage of abnormality in the physiological parameters. The optimal scheduling algorithm is the application deployed on the cloud platform to support high scalability, good Quality of Service (QoS) and on-demand access to a huge



Fig. 3. Remote healthcare monitoring with optimized service scheduling in cloud.

Fig. 4. Five phases of gateway server in Remote health monitoring system.

Table 4
Normal ranges of various health parameters.

              ECG PR interval (ms)   Heart rate (beats/min)   Blood pressure (mmHg)
Age (years)   Male     Female        Male     Female          Systolic   Diastolic
18–24         153      153           62–65    66–69           120        79
25–29         153      153           62–65    65–68           121        80
30–34         153      153           62–65    65–68           122        81
35–39         153      153           63–66    65–69           123        82
40–44         153      153           63–66    65–69           125        83
45–49         153      153           64–67    66–69           127        84
50–54         153      153           64–67    66–69           129        85
55–59         153      153           62–67    65–68           131        86
60–64         163      156           62–67    65–68           134        87
65–69         163      156           62–67    65–68           140        90
70–79         168      160           62–67    65–68           140        90
80–99         177      163           62–67    65–68           140        90
database. This approach ensures that there is no data redundancy. Only the data of patients with abnormal sensor readings are ported to cloud. The normal data is placed in local database in Gateway server, which deploys five main phases namely: Sensor Data Enumerator, Anomaly Appraiser, Anomaly Screener, Camera Trigger and Patient Data Handler as shown in Fig. 4. Sensor Data Enumerator is the first phase where the physiological data of the patients are measured by the wearable sensors enabled with the capacity to transmit the sensed data to the server

Table 5
Weights assigned to physiological parameters for abnormality calculation.

ID (i)   Parameter          Weight Wi
0        Respiration rate   0.21645373503453060
1        Blood pressure     0.18326235612262132
2        ECG                0.17219856315198487
3        Heart rate         0.16113477018134845
4        SPO2               0.15007097721071200
5        Skin temperature   0.11687959829880270



Fig. 5. Cloud server module of remote health monitoring system.

using Bluetooth. The second prime task of the gateway server is to estimate the percentage of abnormality of the patient using the Anomaly Appraiser. The entire application depends on the accuracy of this phase, and immense care is taken to ensure its correctness. For each healthcare parameter, a range of normal and abnormal values has been identified, as tabulated in Table 4. The abnormality percentage of the sensor data is determined by inferring the threshold values from Table 4 based on the gender and age of the subject. The rules used for finding the anomaly value are as follows. Normal skin temperature is 98.6 °F and normal respiration rate is 12–20 breaths per minute. The SPO2 range for a healthy person is 96–99%, and for patients with respiratory disease 90–99%. Smoking and alcohol habits each increase abnormality by 20%. A low heart rate in a patient having hypertension increases abnormality by 30%. Weights are assigned to each physiological parameter as given in Table 5. The weights are calculated using Eq. (7), where T is the sum of the weights (= 1), n is the number of parameters (= 6), k is the intensity (= n/2), and f(x) is the homogeneous Poisson process function given in Eq. (8).

$$
w_i = \frac{T + \left((-1)^i \cdot f(\mathrm{mod}(i, k))\right)}{n} \quad (7)
$$

$$
f(x) = \frac{e^{-kT} \cdot (kT)^x}{x!} \quad (8)
$$

The formula to calculate the abnormality a is given in Eq. (9), where w_i is the weight assigned to parameter i and d_i is the deflection of parameter i from its normal value, given in Eq. (10). Here r_i is the sensor reading of parameter i and n_i is the normal (expected) value of parameter i. The percentage of abnormality is given as a × 100.

$$
a = \sum_{i=0}^{5} w_i \cdot d_i \quad (9)
$$

$$
d_i = \frac{r_i}{n_i} - 1.0 \quad (10)
$$
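Eqs. (9) and (10) can be sketched as follows, using the weights of Table 5. The sample "normal" values below are illustrative placeholders consistent with the ranges quoted in the text, not clinical recommendations:

```python
# Weighted abnormality per Eqs. (9)-(10): a = sum_i w_i * d_i with
# d_i = r_i / n_i - 1.0; the percentage of abnormality is a * 100.
WEIGHTS = [
    0.21645373503453060,  # 0: Respiration rate
    0.18326235612262132,  # 1: Blood pressure
    0.17219856315198487,  # 2: ECG
    0.16113477018134845,  # 3: Heart rate
    0.15007097721071200,  # 4: SPO2
    0.11687959829880270,  # 5: Skin temperature
]

def abnormality(readings, normals):
    """readings[i] and normals[i]: sensed and normal value of parameter i."""
    return sum(w * (r / n - 1.0)
               for w, r, n in zip(WEIGHTS, readings, normals))

# Illustrative patient: respiration rate 20% above normal, all else normal.
normals = [16.0, 120.0, 153.0, 64.0, 97.0, 98.6]
readings = [19.2, 120.0, 153.0, 64.0, 97.0, 98.6]
a = abnormality(readings, normals)
print(round(a * 100, 2))  # 4.33 -> 4.33% abnormal
```

A reading equal to its normal value contributes nothing, so a patient with all-normal readings has a = 0.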

The server module is deployed in the centralized cloud, aiming to receive data of abnormal patients from the gateway server and serve them in an optimal order. It includes five main phases, namely Patient Data Receiver, Circle Assembler, HPSO-OCC, Database Update and Dynamic Round Robin Scheduler (DRRS), as shown in Fig. 5. The data of the abnormal patients are received, and the HPSO-OCC algorithm is used to compute the optimal service waiting time for each patient. The patients are served in decreasing order of their abnormality, with the service waiting time used as the scheduling parameter for patients with the same abnormality. In other words, the patient with the highest abnormality and the lowest service waiting time has the highest priority. The Patient Data Receiver accepts the data from the Patient Data Handler and passes it to the cloud server. Using the Patient Identity (PID), history details of the patient such as age, gender, alcohol consumption and smoking habit are obtained from the cloud database. The received dynamic data and the fetched patient history are then processed. The Circle Assembler constructs the circles based on the percentage of anomaly in the vital sign measurements. The application is designed to construct 10 circles, with CINs C0 to C9, and hence 10 CRs; the construction of circles follows the mapping in Table 6. The HPSO-OCC algorithm is adopted to optimize the service scheduling, and Table 7 shows the mapping of HPSO-OCC to the healthcare application under study. The property used here for circle construction and CIN allocation is the percentage of abnormality. Fig. 6(a)

Table 6
Construction of circles.

Abnormality range (%)   CIN
[0.00, 9.99]            C0
[10.00, 19.99]          C1
[20.00, 29.99]          C2
[30.00, 39.99]          C3
[40.00, 49.99]          C4
[50.00, 59.99]          C5
[60.00, 69.99]          C6
[70.00, 79.99]          C7
[80.00, 89.99]          C8
[90.00, 100]            C9
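The Table 6 mapping can be sketched as a simple banding function. This is an illustrative Python rendering (the deployed system is implemented in Java): the percentage of abnormality selects one of ten circles in 10-point bands, with 100% folded into C9.

```python
# Circle Assembler mapping of Table 6: abnormality percentage -> CIN.
def circle_id(abnormality_percent):
    if not 0.0 <= abnormality_percent <= 100.0:
        raise ValueError("abnormality percentage must lie in [0, 100]")
    # Integer division by 10 picks the band; min(..., 9) folds 100% into C9.
    return "C%d" % min(int(abnormality_percent // 10), 9)

print(circle_id(8.44), circle_id(45.2), circle_id(100.0))  # C0 C4 C9
```

Because adjacent CINs correspond to adjacent abnormality bands, the cyclic-circle neighbours of any circle are obtained by simply incrementing and decrementing its band index.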

The next phase is the Anomaly Screener, where the computed percentage of abnormality is used as a parameter for data filtering to identify the abnormal patients. This ensures that the data of a patient in normal condition is neither processed nor stored until there is a demand. The abnormal data of the patients is sent to the cloud server using the Patient Data Handler: the percentage of anomaly, the sensor data and the Patient ID are given to the cloud server for processing and service provision.

Table 7
Mapping of HPSO-OCC to the healthcare application.

HPSO-OCC notation       Healthcare application
Particle                Sensor data of a patient
Swarm                   Group of patients who arrive at an instance of time
Fitness                 Service time
Circle                  Group of patients whose abnormality falls in the same anomaly range
Circle representative   The patient with the best fitness within a circle



Fig. 6. (a) Particles are grouped into circles based on the percentage of abnormality, (b) the classic PSO algorithm is applied within the circles, (c) Ortho-Cyclic PSO is applied among the CRs, (d) the circles, along with their particles, achieve optimization.

shows the grouping of abnormal patient data into circles; particles with similar abnormality ranges are grouped together. Fig. 6(b) depicts the behavior of the particles within the circles when classic PSO is applied, where the particles in grey-scale symbolize the CR of the corresponding circle. Fig. 6(c) shows the behavior of the CRs, which implement PSO-OCC for optimization: the CRs communicate with their two cyclic circles and identify the OCC, whose CR helps in moving towards promising optima. This information is conveyed to the other particles in the circle during the lower level of PSO. Thus all the circles, and all the particles within the circles, achieve optimization, and in this way the service waiting time is minimized (see Fig. 6(d)). The Dynamic Round-Robin Scheduler is used in parallel with the database update after optimization. The patients are served based on the solution of HPSO-OCC and the percentage of abnormality: the patient with the highest percentage of abnormality and the lowest service waiting time (the fitness value of HPSO-OCC) is given the highest priority. Here service provision refers to the delivery of a Short Message Service (SMS) alert about the patient's health condition to the registered physician. Whenever a patient is found to be abnormal, an SMS alert is generated to the physician, including details such as Patient ID, Name, Age, Gender, Abnormality and the abnormal parameters' values. The message code for each healthcare parameter is depicted in Table 8. The message format used by the application is , where Parami is the three-lettered

Table 8
Message codes for the Short Message Service.

Parameter                  Message code
ECG                        ECG
Heart rate                 _HR
Respiration rate           _RR
Systolic blood pressure    SBP
Diastolic blood pressure   DBP
SPO2                       SPO
Skin temperature           STP

message code for the healthcare parameter i and Valuei is a number between −100 and +100, with + symbolizing an increase, − symbolizing a decrease, and the number giving the percentage of deflection from normalcy. For example, if a patient named XYZ with ID P2144 and age 56 years is found to be abnormal with a 40% increase in Systolic Blood Pressure, a 25% increase in Diastolic Blood Pressure and a 20% increase in Heart Rate, the abnormality value a is computed as 0.0844 (i.e., 8.44% abnormal). The message packet is represented as .
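The serving order described above can be sketched as a sort key. This is an illustrative Python rendering with hypothetical patient records: highest percentage of abnormality first, ties broken by the lowest optimized service waiting time (the HPSO-OCC fitness value).

```python
# Priority ordering used by the Dynamic Round-Robin Scheduler:
# descending abnormality; for equal abnormality, ascending waiting time.
def serving_order(patients):
    return sorted(patients, key=lambda p: (-p["abnormality"], p["wait"]))

queue = serving_order([
    {"pid": "P2144", "abnormality": 8.44, "wait": 2.5},
    {"pid": "P3001", "abnormality": 72.5, "wait": 5.0},
    {"pid": "P3002", "abnormality": 72.5, "wait": 1.1},
])
print([p["pid"] for p in queue])  # ['P3002', 'P3001', 'P2144']
```

The two patients at 72.5% abnormality are disambiguated by their optimized waiting times, so the more urgent one (P3002) is served first.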

6. Results and discussions

6.1. Experimental set-up

The proposed HPSO-OCC algorithm is experimented on the cloud healthcare application that optimally schedules services to abnormal patients in dynamic environments. The application is deployed and tested in a real-time environment, whose specifications are described below. A Zephyr BioHarness 3 is employed to measure and transmit the Heart Rate, the R-R interval of the ECG and the Breathing Rate of the patients. The sensor sampling intervals and the times to measure a stable output, as suggested by physicians, are listed in Table 9. The back-end of the application is coded in the Java programming language, with the user interface in JavaServer Pages (JSP). JDK 1.6 has been used as the Java platform and JUnit 4.5 for testing. The web interface with Java Enterprise Edition 6 uses

Table 9
Sampling frequencies.

Parameter          Sampling interval (ms)   Time to stable output (s)
ECG                4                        15
Respiration rate   40                       45
Heart rate         4                        15



Table 10
Test functions for optimization.

f    Name              Formula
f1   Rastrigin         20 + x1² + x2² − 10(cos(2πx1) + cos(2πx2))
f2   McCormick         sin(x1 + x2) + (x1 − x2)² − 1.5x1 + 2.5x2 + 1
f3   Lévi              sin²(3πx1) + (x1 − 1)²(1 + sin²(3πx2)) + (x2 − 1)²(1 + sin²(2πx2))
f4   Matyas            0.26(x1² + x2²) − 0.48·x1·x2
f5   Bukin             100·√|x2 − 0.01x1²| + 0.01·|x1 + 10|
f6   Booth             (x1 + 2x2 − 7)² + (2x1 + x2 − 5)²
f7   Goldstein–Price   [1 + (x1 + x2 + 1)²(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)] · [30 + (2x1 − 3x2)²(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)]
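Three of the Table 10 functions can be written out directly, with their well-known global minima used as a sanity check (an illustrative Python rendering; Booth has minimum 0 at (1, 3), Matyas has minimum 0 at (0, 0), and Goldstein–Price has minimum 3 at (0, −1)):

```python
# Benchmark test functions f6, f4 and f7 from Table 10.
def booth(x1, x2):
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

def matyas(x1, x2):
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2

def goldstein_price(x1, x2):
    a = 1 + (x1 + x2 + 1) ** 2 * (19 - 14 * x1 + 3 * x1 ** 2
                                  - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2)
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (18 - 32 * x1 + 12 * x1 ** 2
                                       + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2)
    return a * b

# Evaluate each function at its known global optimum.
print(booth(1, 3), matyas(0, 0), goldstein_price(0, -1))  # 0 0 3
```

An optimization run is judged successful when its solution lands at (or very near) these optima, which is exactly the success criterion stated in the text.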

Table 11
Performance comparison of HPSO-OCC with benchmark fitness functions (μ: mean; σ: standard deviation; m: median; MAD: average absolute deviation from the median).

I. Swarms = 1; Swarm size = 50; Total particles = 50
f    μ ± σ                  Range                m         MAD
f1   4.474564 ± 19.9        [4.40E-6, 62.22]     1.863     14.7
f2   5.141371 ± 14.3        [0.1084, 64.5]       0.5645    5.95
f3   4.376585 ± 7.63        [1.48E-5, 38.42]     2.303     4.82
f4   0.512558 ± 2.46        [5.46E-6, 13.6]      0.6895    1.27
f5   0.856607 ± 2.03        [2.20E-03, 14.3]     0.6430    0.717
f6   1.578561 ± 2.55        [3.37E-4, 14.2]      2.57E-2   1.06
f7   2.153872 ± 1.03        [1.26E-7, 3.729]     4.81E-2   0.641

II. Swarms = 2; Swarm size = 25; Total particles = 50
f    μ ± σ                  Range                m         MAD
f1   14.88919154 ± 19.9     [4.4E-06, 62.22]     1.863     14.7
f2   6.231780778 ± 14.3     [0.1084, 64.46]      0.5645    5.95
f3   5.61452948 ± 7.63      [1.48E-5, 38.42]     2.303     4.82
f4   1.417710916 ± 2.46     [5.46E-6, 13.61]     0.6895    1.27
f5   0.907202938 ± 2.03     [2.20E-03, 14.27]    0.6430    0.717
f6   1.064839133 ± 2.55     [3.37E-4, 14.23]     2.57E-2   1.06
f7   0.647837205 ± 1.03     [1.26E-7, 3.729]     4.81E-2   0.641

III. Swarms = 10; Swarm size = 5; Total particles = 50
f    μ ± σ                  Range                m         MAD
f1   4.879391 ± 9.13        [3.925E-4, 42.45]    0.1609    4.80
f2   4.019455 ± 8.43        [3.6795E-3, 39.63]   0.7479    3.56
f3   5.888844 ± 12.8        [2.5066E-3, 55.9]    3.205     5.45
f4   0.609521 ± 0.933       [1.489E-6, 2.790]    6.91E-3   0.607
f5   1.014863 ± 0.849       [2.20E-3, 2.53]      0.9960    0.754
f6   0.523254 ± 2.25        [6.557E-4, 15.61]    1.93E-2   0.518
f7   1.05363 ± 2.12         [4.628E-9, 8.109]    2.62E-2   1.05

IV. Swarms = 1; Swarm size = 100; Total particles = 100
f    μ ± σ                  Range                m         MAD
f1   17.70795327 ± 18.7     [1.67E-2, 70.12]     9.251     15.6
f2   1.981629544 ± 3.06     [2.493E-2, 16.43]    0.6675    1.6
f3   6.011749082 ± 14.5     [1.10E-3, 66.50]     0.3258    5.92
f4   1.284776956 ± 2.13     [8.71E-07, 6.633]    0.205     1.24
f5   0.967083899 ± 1.01     [2.20E-03, 2.83]     0.3961    0.847
f6   0.667318031 ± 0.859    [1.551E-4, 2.593]    5.66E-2   0.656
f7   0.549017765 ± 0.981    [5.492E-9, 5.968]    0.1419    0.515

V. Swarms = 4; Swarm size = 25; Total particles = 100
f    μ ± σ                  Range                m         MAD
f1   5.945523 ± 10.6        [8.5369E-4, 72.2]    0.7508    5.75
f2   4.045756 ± 6.82        [2.7393E-2, 28.5]    0.6675    3.67
f3   10.90637 ± 18.7        [4.4903E-5, 83.8]    2.548     10.4
f4   0.888171 ± 1.9         [3.01E-08, 15.5]     0.2157    0.833
f5   0.811841 ± 0.781       [2.20E-03, 2.55]     0.7243    0.665
f6   0.85213 ± 2.06         [2.4115E-4, 13.9]    0.1372    0.825
f7   0.936012 ± 1.4         [3.17E-16, 5.96]     0.2392    0.887

VI. Swarms = 5; Swarm size = 20; Total particles = 100
f    μ ± σ                  Range                m         MAD
f1   9.974601208 ± 15.9     [1.31E-5, 70.9]      1.623     9.64
f2   2.366714973 ± 4.43     [1.051E-2, 29.49]    0.7527    1.95
f3   6.810037228 ± 10.9     [8.96E-04, 55.24]    0.7790    6.62
f4   1.020379153 ± 1.60     [8.987E-5, 10.32]    0.3466    0.978
f5   0.8633002 ± 0.822      [2.20E-03, 2.504]    0.8031    0.736
f6   1.325750966 ± 2.08     [4.42E-4, 9.504]     0.4496    1.22
f7   0.775245686 ± 1.06     [3.377E-8, 4.674]    0.2246    0.717

VII. Swarms = 6; Swarm size = 50; Total particles = 300
f    μ ± σ                  Range                m         MAD
f1   9.125147 ± 14.3        [7.220E-5, 61.57]    1.455     8.91
f2   5.290191 ± 10.5        [3.81E-03, 61.34]    0.6675    4.92
f3   4.573632 ± 12.4        [4.61E-04, 79.8]     0.2780    4.53
f4   0.950455 ± 0.63        [3.900E-7, 12.47]    0.3210    0.88
f5   0.940586 ± 1.00        [2.200E-03, 11.9]    0.8167    0.676
f6   2.340497 ± 3.63        [7.303E-5, 15.61]    0.3666    2.27
f7   1.09257 ± 1.83         [6.409E-16, 8.375]   0.1781    1.07

VIII. Swarms = 5; Swarm size = 100; Total particles = 500
f    μ ± σ                  Range                m         MAD
f1   10.65370616 ± 17.2     [1.103E-4, 80]       0.9696    10.4
f2   4.055450897 ± 10.9     [5.160E-3, 88.72]    0.5083    3.89
f3   8.313125953 ± 16.3     [3.20E-6, 82.85]     0.7402    8.12
f4   0.819697197 ± 1.32     [1.40E-06, 10.8]     0.3496    0.728
f5   1.087635775 ± 0.858    [2.20E-03, 2.83]     0.9363    0.713
f6   1.629673505 ± 2.87     [1.73E-04, 15.61]    0.2482    1.57
f7   0.570570715 ± 1.22     [9.1E-15, 7.678]     3.76E-2   0.564

IX. Swarms = 10; Swarm size = 50; Total particles = 500
f    μ ± σ                  Range                m         MAD
f1   9.566704 ± 15.9        [1.9991E-5, 80.00]   1.363     9.31
f2   3.34618 ± 7.81         [3.827E-3, 79]       0.6542    3.01
f3   8.728926 ± 18          [4.842E-5, 82]       0.3523    8.66
f4   0.789475 ± 1.31        [3.3071E-7, 18.3]    0.4396    0.681
f5   1.022065 ± 0.732       [2.200E-03, 2.668]   1.040     0.618
f6   1.857842 ± 3.08        [7.64E-05, 15.61]    0.7487    1.73
f7   0.825703 ± 1.57        [2.207E-16, 8.565]   7.50E-2   0.818

X. Swarms = 10; Swarm size = 100; Total particles = 1000
f    μ ± σ                  Range                m         MAD
f1   8.0669 ± 16.4          [1.34E-04, 183.4]    0.9318    7.82
f2   3.1606 ± 9.18          [3.20E-06, 191.6]    0.6225    2.87
f3   7.5782 ± 23.2          [3.01E-08, 596.5]    0.7949    7.46
f4   1.462725 ± 6.25        [5.02E-07, 183.9]    0.3570    1.39
f5   1.046840427 ± 2.20     [1.73E-04, 63.20]    0.9384    0.732
f6   2.1274 ± 10.6          [3.18E-16, 328.1]    0.5168    2.04
f7   0.65869 ± 1.29         [4.85E-15, 8.153]    9.03E-2   0.648

the Personal GlassFish v3 as the server. Cloudbees, which functions as a Platform-as-a-Service (PaaS) with agile development and deployment services, is chosen as the cloud environment. Cloudbees is found to ensure ingenious services, with maximum resource utilization, global access and zero downtime, and its selection ensures support for JVM-based languages and frameworks. The database of Cloudbees is used by the application for storing the records of patients, physicians and hospitals. It also stores the abnormal sensor data and the history of the patients. The database server is ec2-50-19-213-178.compute-1.amazonaws.com, its type being MySQL/5.0.51. The supported Git is used as the repository for the application. The parameters of the novel PSO algorithm are fixed as follows. The number of swarms and the swarm size are dynamic. A cluster topology with a 5-particle neighborhood is adopted. The search space is defined as [−100, 100] and an inertia of 0.95 is applied. The Goldstein–Price fitness function is used, where the fitness function


Table 12
Best performance (mean) among various fitness functions.

f    I        II      III     IV       V        VI      VII       VIII    IX       X
f1   4.4746   14.89   4.879   17.708   5.9455   9.975   9.12515   10.65   9.5667   8.067
f2   5.1414   6.232   4.019   1.9816   4.0458   2.367   5.29019   4.055   3.3462   3.161
f3   4.3766   5.615   5.889   6.0117   10.906   6.81    4.57363   8.313   8.7289   7.578
f4   0.5126   1.418   0.61    1.2848   0.8882   1.02    0.95045   0.82    0.7895   1.463
f5   0.8566   0.907   1.015   0.9671   0.8118   0.863   0.94059   1.088   1.0221   1.047
f6   1.5786   1.065   0.523   0.6673   0.8521   1.326   2.3405    1.63    1.8578   2.127
f7   2.1539   0.648   1.054   0.549    0.936    0.775   1.09257   0.571   0.8257   0.659

Table 13
Percentage of deviation from best mean value.

Case   Best mean   f1       f2       f3       f4        f5        f6       f7
I      0.5126      7.7299   9.0308   7.5387   0         0.6712    2.0798   3.2022
II     0.648       21.98    8.619    7.667    1.188     0.4       0.644    0
III    0.523       8.325    6.682    10.25    0.165     0.94      0        1.014
IV     0.549       31.254   2.6094   9.95     1.3401    0.7615    0.2155   0
V      0.686       7.669    4.899    14.9     0.295     0.184     0.242    0.365
VI     0.775       11.87    2.053    7.784    0.316     0.114     0.71     0
VII    0.939       8.716    4.633    3.87     0.012     0.002     1.492    0.163
VIII   0.571       17.67    6.108    13.57    0.437     0.906     1.856    0
IX     0.789       11.12    3.238    10.06    0         0.295     1.353    0.046
X      0.659       11.25    3.798    10.5     1.221     0.589     2.23     0
Mean   NIL         13.758   5.16705  9.60976  0.497391  0.486154  1.08228  0.478982
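The "percentage of deviation" in Table 13 appears to be the relative deviation of each function's mean from the best (lowest) mean of the test case. This reading is an assumption, but it matches the tabulated values up to rounding, as the following sketch shows:

```python
def deviation_from_best(means):
    """Relative deviation of each function's mean from the lowest mean
    in a test case; this reading of 'percentage of deviation' is an
    assumption, but it matches Table 13 up to rounding."""
    best = min(means)
    return [(m - best) / best for m in means]

# Means of f1..f7 for test case I, taken from Table 12:
case_i = [4.4746, 5.1414, 4.3766, 0.5126, 0.8566, 1.5786, 2.1539]
deviations = deviation_from_best(case_i)
```

For case I this gives 0 for f4 (the best mean, 0.5126), about 7.73 for f1 and about 0.671 for f5, in line with the first row of Table 13.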

is evaluated for 300 iterations for each particle in the search space. The Dynamic Round Robin Scheduler uses a quantum time of 200 ms.

6.2. Comparison for various fitness functions

The efficiency and validity of optimization algorithms are tested using a chosen set of common standard benchmark functions, or test functions. For any new optimization algorithm, it is essential to validate its performance and compare it with existing algorithms over a good set of test functions. To evaluate an algorithm, it is necessary to characterize the type of problem for which the algorithm is suitable, and this is possible only if the test functions cover a wide variety of problems. In this paper, the seven benchmark functions listed in Table 10 are used in the experimental results. These benchmark functions are widely adopted in various global optimization problems. The functions are divided into two groups: unimodal and multimodal. Functions f3, f4 and f6 are non-separable unimodal functions; f1, f2, f5 and f7 are non-separable multimodal functions. If the solution of the algorithm falls near or exactly on the global optimum of the test function, the run is judged successful. The optima determined by HPSO-OCC with the various fitness functions are compared for performance analysis. The formulae for the fitness functions are presented in Table 10. The performance analyses of HPSO-OCC with various fitness functions are done by varying the number of swarms, number of

particles in the swarm (swarm size), and the total number of particles. Statistical parameters such as mean, standard deviation, range, median and average absolute deviation from median are calculated for various landscapes. Each particle runs for 300 iterations and the statistical values are compared with the near-optimum value of the functions. The results are shown in Table 11, where μ denotes the mean, σ the standard deviation, m the median and MAD the average absolute deviation from median of the solutions obtained from HPSO-OCC. Experimentation is done using ten test cases. For example, test case V denotes 4 swarms each with 25 particles, accounting for 100 particles in total. This test case gives a solution with mean 0.936012, standard deviation 1.4, lowest value 3.17E-16, highest value 5.96, median 0.2392 and MAD 0.887. Based on the abnormality category, the total particles mentioned for cases I to X are distributed among the swarms, and after some iterations the same number of new particles enters the respective swarms. Every swarm that receives new particles immediately selects its orthogonal neighbor swarm's best particle and continues exploration in the further iterations.
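The statistics reported here are all standard. As a minimal sketch, the Goldstein-Price function named in the experimental setup (global minimum of 3 at (0, −1)) and the five summary statistics can be written with Python's standard library; the helper name `summary` is illustrative, not from the paper:

```python
import statistics

def goldstein_price(x, y):
    """Goldstein-Price test function; global minimum of 3 at (0, -1)."""
    a = 1 + (x + y + 1) ** 2 * (19 - 14 * x + 3 * x ** 2
                                - 14 * y + 6 * x * y + 3 * y ** 2)
    b = 30 + (2 * x - 3 * y) ** 2 * (18 - 32 * x + 12 * x ** 2
                                     + 48 * y - 36 * x * y + 27 * y ** 2)
    return a * b

def summary(values):
    """Mean, standard deviation, range, median and MAD (average absolute
    deviation from the median), the statistics used in Tables 11 and 16."""
    m = statistics.median(values)
    return {
        "mean": statistics.mean(values),
        "std": statistics.pstdev(values),
        "range": (min(values), max(values)),
        "median": m,
        "mad": statistics.mean(abs(v - m) for v in values),
    }
```

Running `summary` over the final fitness values of all particles in a test case yields the μ, σ, range, m and MAD entries of Table 11.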

Table 14
Functions near to optimum.

Test case   Functions
Case 1      f4, f5
Case 2      f5, f7
Case 3      f4, f6
Case 4      f5, f6, f7
Case 5      f4, f5, f6, f7
Case 6      f5, f7
Case 7      f4, f5
Case 8      f4, f7
Case 9      f4, f7
Case 10     f7

Fig. 7. Comparison of fitness functions for various datasets.


Fig. 8. Comparison of fitness functions for small datasets.

Fig. 9. Comparison of fitness functions for large datasets.

The behavior of the algorithm is expected to be outstanding in static as well as dynamic environments. When there are no changes in the backdrop, the circle chooses itself as its OCC and carries out an algorithm like classic PSO. In Table 11, for Swarms = 1 with Total particles = 50 (case I) and Swarms = 1 with Total particles = 100 (case IV), there is no neighbor swarm and the swarm runs OCC for itself. That is, when a new particle enters the swarm, the best combination of factors obtained from orthogonality is given to the solution space to locate the good particles. The new particles move to the best points for further exploration in subsequent iterations. This optimization technique avoids the swarm having to track the optima from scratch in a changing environment. In case II (Swarms = 2, Total particles = 50), there are no cyclic circles; new particles entering either swarm interact with the other neighbor swarm for best-factor selection and run the next-level PSO. In cases III, V, VI, VII, VIII, IX and X, when there is an unexpected change in the environment, the circle chooses one of its two cyclic circles as OCC and updates itself with the information from the CR of the OCC. The results in Table 11 show that the HPSO-OCC algorithm performs well for any population size. Table 12 shows the mean values of all functions f1 to f7 for the ten test cases I to X. In every test case, the mean values of the solutions are compared with the optimum, and the functions near to the optimum are given in Table 14. It is seen that f4, f5 and f7 yield comparably good performance for most test cases with respect to the final solution; that is, they bring solutions of much higher accuracy to the problem. It is apparent that HPSO-OCC can avoid local optima and robustly obtains the global optimum in multimodal functions of higher-dimensional search spaces. The best mean value for each test case is the lowest entry in its column of Table 12. The results show that the Goldstein-Price fitness function (f7) generally outperforms the other fitness functions f1 to f6; for instance, f7 does better than all other functions for test cases II, IV, VI, VIII and X. The Goldstein-Price function on average performs steadily for both large and small numbers of particles and swarms. Table 13 shows the best solution among the solutions obtained by functions f1 to f7, and the

Table 15
Comparison of MAD of the fitness functions.

f     I      II     III    IV     V      VI     VII    VIII   IX     X
f1    14.7   14.7   4.8    15.6   5.75   9.64   8.91   10.4   9.31   7.82
f2    5.95   5.95   3.56   1.6    3.67   1.95   4.92   3.89   3.01   2.87
f3    4.82   4.82   5.45   5.92   10.4   6.62   4.53   8.12   8.66   7.46
f4    1.27   1.27   0.607  1.24   0.833  0.978  0.88   0.728  0.681  1.39
f5    0.717  0.717  0.754  0.847  0.665  0.736  0.676  0.713  0.618  0.732
f6    1.06   1.06   0.518  0.656  0.825  1.22   2.27   1.57   1.73   2.04
f7    0.641  0.641  1.05   0.515  0.887  0.717  1.07   0.564  0.818  0.648


Fig. 10. Comparison of fitness functions with respect to MAD.

percentage of deviation for each test case. The mean deviation for each function is also shown in the table. It is evident that f7 performs best, with the least mean percentage of deviation. Bukin and Matyas are found to perform equally well, with low percentages of deviation, while Booth's function delivers a mediocre solution. Also, f1 (Rastrigin) is found to have the maximum deviation from the best solution in the HPSO-OCC algorithm, followed by the Levi function. The Rastrigin function, with its cosine modulation, produces frequent local minima whose locations are widely scattered.

Table 16
Comparison of HPSO-OCC with other existing PSOs. Columns in each sub-table: μ ± σ, range, median m and MAD of the final solutions.

I. Swarms = 1; Particles per swarm = 50; Total particles = 50
HPSO-OCC    1.258599 ± 0.502        [0.1666, 1.982]    0.7135   0.375
HPSO        1.722566 ± 2.19         [0.188, 11.51]     0.9103   1.26
OLPSO       1.549234 ± 7.456E-02    [1.035, 1.569]     1.562    1.55E-2
PSO         2.217419 ± 0.470        [0.8094, 3.579]    2.252    0.408

II. Swarms = 2; Particles per swarm = 25; Total particles = 50
HPSO-OCC    0.866331 ± 0.502        [0.1666, 1.982]    0.7135   0.375
HPSO        3.018164 ± 4.50         [0.1870, 16.83]    1.033    2.41
OLPSO       0.917306 ± 0.126        [0.4335, 1.053]    0.8568   0.101
PSO         1.018746 ± 1.05         [0.1874, 2.718]    0.2343   0.804

III. Swarms = 10; Particles per swarm = 5; Total particles = 50
HPSO-OCC    0.683709 ± 0.454        [0.1663, 1.867]    0.5821   0.361
HPSO        1.511085 ± 2.11         [0.1870, 10.5]     0.5063   1.22
OLPSO       0.788224 ± 0.384        [0.2066, 1.64]     0.8221   0.315
PSO         1.824889 ± 2.37         [0.1895, 10.5]     0.7392   0.7392

IV. Swarms = 1; Particles per swarm = 100; Total particles = 100
HPSO-OCC    0.898513 ± 0.477        [0.1677, 1.663]    0.7729   0.373
HPSO        1.83033 ± 3.67          [0.1870, 13.43]    0.5817   1.47
OLPSO       1.299588 ± 8.979E-02    [0.6486, 1.325]    1.312    ####
PSO         1.659646 ± 1.31         [0.1889, 5.780]    1.502    0.850

V. Swarms = 4; Particles per swarm = 25; Total particles = 100
HPSO-OCC    0.6858336 ± 0.419       [0.1665, 1.674]    0.6427   0.347
HPSO        3.23320088 ± 4.64       [0.1871, 16.83]    0.7249   2.91
OLPSO       0.97726532 ± 0.502      [0.2010, 1.785]    1.009    0.386
PSO         2.11131837 ± 1.75       [0.1870, 5.703]    1.545    1.50

VI. Swarms = 5; Particles per swarm = 20; Total particles = 100
HPSO-OCC    0.911628 ± 0.562        [0.1663, 1.982]    0.8224   0.464
HPSO        2.750725 ± 3.40         [0.1870, 16.50]    1.476    2.28
OLPSO       0.957889 ± 0.332        [0.2262, 1.564]    0.9407   0.257
PSO         4.123436 ± 3.06         [0.1870, 9.725]    3.609    2.71

VII. Swarms = 6; Particles per swarm = 50; Total particles = 300
HPSO-OCC    0.93916302 ± 0.588      [0.1663, 1.982]    0.8956   0.508
HPSO        3.53451165 ± 4.07       [0.1870, 16.83]    1.702    2.99
OLPSO       1.11337511 ± 0.512      [0.2064, 2.000]    0.8636   0.411
PSO         3.03410006 ± 4.68       [0.1870, 13.21]    0.358    2.81

VIII. Swarms = 5; Particles per swarm = 100; Total particles = 500
HPSO-OCC    0.662357 ± 0.444        [0.1663, 1.982]    0.5416   0.35
HPSO        2.925982 ± 3.56         [0.1870, 16.50]    1.111    2.50
OLPSO       0.917207 ± 0.606        [0.1996, 1.815]    1.178    0.519
PSO         2.753394 ± 2.32         [0.1870, 6.524]    2.563    2.21

IX. Swarms = 10; Particles per swarm = 50; Total particles = 500
HPSO-OCC    0.740163 ± 0.525        [0.1663, 1.982]    0.538    0.434
HPSO        3.169782 ± 3.70         [0.1870, 16.83]    1.854    2.57
OLPSO       0.811775 ± 0.485        [0.1998, 1.966]    0.6939   0.4
PSO         2.338437 ± 2.62         [0.1870, 9.788]    1.168    1.94

X. Swarms = 10; Particles per swarm = 100; Total particles = 1000
HPSO-OCC    0.89173 ± 2.77          [0.1663, 86.60]    0.6730   0.553
HPSO        2.965196 ± 3.71         [0.1870, 16.83]    1.193    2.53
OLPSO       0.997134 ± 0.467        [0.2017, 1.94]     1.035    0.383
PSO         2.984747 ± 4.35         [0.1870, 16.83]    0.3359   2.76
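The Rastrigin behaviour described above follows directly from its definition: the cosine term superimposes a regular grid of local minima on an otherwise smooth bowl. A sketch of the standard form (global minimum 0 at the origin) for illustration:

```python
import math

def rastrigin(xs):
    """Standard Rastrigin function: 10*n + sum(x_i**2 - 10*cos(2*pi*x_i)).
    The cosine modulation creates a local minimum near every integer
    lattice point, which is why swarms are easily trapped away from 0."""
    return 10 * len(xs) + sum(x * x - 10 * math.cos(2 * math.pi * x)
                              for x in xs)
```

For example, (1, 1) sits in a local basin with value 2, far above the global optimum of 0 at the origin, and there is one such basin near every lattice point of the search space.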

Fig. 7 shows the plot of the mean of the solutions of the fitness functions f1 to f7 for the test cases I to X, comparing the fitness functions over various datasets. The aim of this set of experiments is to test the effect of the number of swarms and the swarm size on the performance of HPSO-OCC in dynamic environments. Experiments were carried out with the number of swarms set from 1 to 10 and the total number of particles from 50 to 1000. From Fig. 7, it can be seen that totals of 300–1000 particles give better results (less deviation from the optimum) for most of the fitness functions. Rastrigin, Levi and McCormick show severe deviation from the current value and


6.3. Performance comparison of HPSO-OCC with other existing PSOs

Fig. 11. Comparison of PSOs with mean of final solution.

Various PSO algorithms are used for comparison. Tables 11–15 show that test function f7 outperforms the other functions, so it is chosen for final-solution accuracy. In this section HPSO-OCC using the f7 function is compared with classic PSO, Orthogonal Learning PSO (OLPSO) and Hierarchical PSO (HPSO). The first is the traditional PSO (Engelbrecht, 2006); the second is OLPSO (Zhan, Zhang, Li, & Shi, 2011), which uses an orthogonal strategy to construct a particle that acts as a guiding exemplar for adjusting flying velocity and direction; the third is the HPSO designed in this paper, which is analogous to the proposed HPSO-OCC, with classic PSO applied at both levels of the hierarchy and without the orthogonal strategy. In HPSO, gbests are constructed in the first level for every swarm, and all the neighboring swarms' gbests are used in the second level to find the global optima. The mean (μ), standard deviation (σ), range, median (m) and average absolute deviation from median (MAD) of the final solutions of HPSO-OCC, HPSO, OLPSO and PSO are given in Table 16. The algorithms can be ranked by these solution-accuracy statistics: HPSO-OCC offers the best performance, showing the least deviation from the actual value, followed by OLPSO, HPSO and classic PSO. The results also show that, irrespective of the number of swarms and the swarm size, the mean values of HPSO-OCC range from 0.6 to 0.9. Fig. 11 plots the mean of the final solution given by HPSO-OCC, HPSO, OLPSO and PSO for the test cases I to X; HPSO-OCC gives the best solution for all the test cases. Results show that HPSO-OCC performs 64.7% better than HPSO with respect to the final solution. Also, HPSO-OCC outperforms OLPSO by 16.6% and classic PSO by 59.55%. The relative performance of the other three algorithms with respect to HPSO-OCC is shown in Fig. 12.
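The paper does not state how these relative-performance percentages are derived. One plausible reading, offered purely as a sketch, is the per-case relative reduction in mean final-solution value averaged over the ten test cases; with the Table 16 means, this reading reproduces the quoted 64.7% over HPSO to within rounding:

```python
def mean_improvement(occ_means, other_means):
    """Average per-case relative reduction in mean final solution, in
    percent. The averaging scheme is an assumption, not stated in the
    paper, but it reproduces the quoted 64.7% figure over HPSO."""
    ratios = [(o - h) / o for h, o in zip(occ_means, other_means)]
    return 100.0 * sum(ratios) / len(ratios)

# Mean final solutions for test cases I-X, rounded from Table 16:
hpso_occ = [1.2586, 0.8663, 0.6837, 0.8985, 0.6858,
            0.9116, 0.9392, 0.6624, 0.7402, 0.8917]
hpso = [1.7226, 3.0182, 1.5111, 1.8303, 3.2332,
        2.7507, 3.5345, 2.9260, 3.1698, 2.9652]
improvement_over_hpso = mean_improvement(hpso_occ, hpso)  # about 64.7
```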

7. Conclusions

Fig. 12. Relative Performance of PSO variants with HPSO-OCC.

the optimum value (error) for smaller totals of 50–100 particles. Fig. 8 compares the behavior of the fitness functions for the small datasets, namely I (Swarms = 1; Swarm size = 50; Total particles = 50), II (Swarms = 2; Swarm size = 25; Total particles = 50), III (Swarms = 10; Swarm size = 5; Total particles = 50), IV (Swarms = 1; Swarm size = 100; Total particles = 100), V (Swarms = 4; Swarm size = 25; Total particles = 100) and VI (Swarms = 5; Swarm size = 20; Total particles = 100). Similarly, Fig. 9 compares the performance of the fitness functions with respect to the final solution for the large datasets, namely VII (Swarms = 6; Swarm size = 50; Total particles = 300), VIII (Swarms = 5; Swarm size = 100; Total particles = 500), IX (Swarms = 10; Swarm size = 50; Total particles = 500) and X (Swarms = 10; Swarm size = 100; Total particles = 1000). Table 15 displays the average absolute deviation from median (MAD) of the solutions obtained by the fitness functions f1 to f7 for the ten test cases I to X. The f7 fitness function gives the least MAD in seven out of ten test cases. This shows that f7 performs best with respect to accuracy of the final solution, percentage of deviation from the best solution, and MAD. The same is represented as a graph plot in Fig. 10.

In this paper, we presented a multi-swarm PSO technique for efficient and robust optimization in static and dynamic environments. The technique uses the cyclic property, an orthogonal strategy and hierarchical PSO. Briefly, the working principle of HPSO-OCC is as follows. In the first level, each circle's particles learn from their own historical experience using the traditional PSO algorithm. Orthogonal array analysis discovers a similarly converging neighbor circle. The global best particle's position and velocity of the ortho-cyclic circle are learned by its orthogonal circle. The second-level PSO in the hierarchy is then executed with the updated velocity equation. In some real-time applications, particles are expected to collect useful information from particles of similar properties rather than exchanging information with swarms of dissimilar properties. That is, particles should exploit information from near neighbors which hold valuable knowledge. This is an effective interaction mode for applications in both static and dynamic environments. This paper employs a multi-swarm strategy in which a swarm interacts only with the neighbor swarm of similar property. It is less time consuming to interact with a limited number of swarms than with all the swarms in the entire search space; interaction with all neighboring swarms' particles may waste computation, take longer to converge and provide no steady search direction. If the number of swarms and the number of swarm particles are high, the dynamism is severe. Selection of the neighbor swarm with the best-factor particle using the orthogonal strategy reduces multiple swarm


interaction and improves convergence speed. The orthogonal strategy can be applied to any kind of topology structure. Employing the cyclic property for selecting a few neighbor swarms, the orthogonal test design for selecting the best factor combination, and the update of the weaker particles' velocity and position with the gbest velocity of the ortho-cyclic circle make HPSO-OCC a statistically robust design approach that balances convergence speed and particle diversity. Comprehensive experimental tests have been conducted on seven benchmark test functions, including both unimodal and multimodal functions, for large and small numbers of particles in the swarms. The performance analyses of HPSO-OCC with various fitness functions are done by varying the number of swarms, the number of particles in the swarm (swarm size), and the total number of particles. Statistical parameters such as mean, standard deviation, range, median and average absolute deviation from median are calculated for various landscapes. Each particle runs for 300 iterations and the statistical values are compared with the near-optimum values of the functions. Experiments were carried out with the number of swarms from 1 to 10 and the total particles from 50 to 1000. It is observed that swarms with 300–1000 total particles give better results (less deviation from the optimum) for most of the fitness functions. The Goldstein-Price multimodal fitness function generally outperforms the other fitness functions for any population size. It is apparent that HPSO-OCC can avoid local optima and robustly obtains the global optimum in multimodal functions of higher-dimensional search spaces. The behavior of the algorithm is outstanding in static as well as dynamic environments. When there are no changes in the backdrop, the circle chooses itself as its OCC and carries out an algorithm like classic PSO; if there is no neighbor swarm, the swarm runs OCC for itself. That is, when new particles enter the swarm, the best combination of factors obtained from orthogonality is given to the solution space to locate the good particles. The new particles move to the best points for further exploration in subsequent iterations. This optimization technique avoids the swarm having to track the optima from scratch in a changing environment. In the case of only two swarms there are no cyclic circles; new particles entering either swarm interact with the other neighbor swarm for best-factor selection and run the next-level PSO. HPSO-OCC is compared with classic PSO, Orthogonal Learning PSO (OLPSO) and Hierarchical PSO (HPSO). HPSO-OCC gives the best solution for all the test cases, irrespective of the number of swarms and the swarm size, showing the least deviation from the actual value, followed by OLPSO, HPSO and classic PSO. Irrespective of swarm and swarm size, the mean values of HPSO-OCC range from 0.6 to 0.9. Overall, the proposed approach shows improvement in terms of speed and accuracy. It efficiently handles multiple swarms, large populations, weak-particle encouragement, higher dimensionality, unimodal and multimodal problems, dynamic swarm generation and stable optima tracking. The technique rests on a stronger mathematical foundation than random parameter selection. For future work, it would be valuable to apply HPSO-OCC to a real-time application and to change the constant coefficients in the velocity and position equations dynamically with respect to the population. We also plan to evaluate HPSO-OCC with shifted and rotated benchmark functions and to standardise HPSO-OCC by comparing it with other evolutionary learning algorithms.

Acknowledgement

This research project is supported by NRDMS, Department of Science and Technology, Government of India, New Delhi and Anna University, Centre for Research, Chennai.
The authors would like to

extend their sincere thanks to NRDMS and Anna University for their support.

Appendix I. Construction of the Orthogonal Array L_M(Q^N)

N is the number of factors.
Q is the number of levels per factor.
M is the number of level combinations.
L is the orthogonal array with dimensions M x P.

M := Q * Q; P := Q + 1;
for each i := 1 to M do
    L[i, 1] := floor((i - 1) / Q) mod Q
    L[i, 2] := (i - 1) mod Q
    for each j := 1 to P - 2 do
        L[i, 2 + j] := (L[i, 1] * j + L[i, 2]) mod Q
return L
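The pseudocode above translates into a short routine. A sketch, following the construction literally (it yields a valid orthogonal array when Q is prime):

```python
def orthogonal_array(q):
    """Build the orthogonal array L_M(q^P) with M = q*q rows and
    P = q + 1 columns, following the appendix pseudocode (q prime)."""
    m, p = q * q, q + 1
    rows = []
    for i in range(1, m + 1):
        row = [0] * p
        row[0] = ((i - 1) // q) % q        # L[i, 1]
        row[1] = (i - 1) % q               # L[i, 2]
        for j in range(1, p - 1):          # j = 1 .. P - 2
            row[1 + j] = (row[0] * j + row[1]) % q
        rows.append(row)
    return rows
```

For q = 2 this yields the familiar L4(2^3) array [[0,0,0], [0,1,1], [1,0,1], [1,1,0]]; in every column each level appears exactly q times, which is the balance property the orthogonal test design relies on.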

References

Blackwell, T. M., & Branke, J. (2006). Multiswarms, exclusion, and anti-convergence in dynamic environments. IEEE Transactions on Evolutionary Computation, 10(4), 459–472.
Cai, Y., & Yang, S. X. (2013). An improved PSO-based approach with dynamic parameter tuning for cooperative multi-robot target searching in complex unknown environments. International Journal of Control, 86(13), 1–13.
Chen, X., & Li, Y. (2007). A modified PSO structure resulting in high exploration ability with convergence guaranteed. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 37(5), 1271–1289.
Connolly, J. F., Granger, E., & Sabourin, R. (2012). Evolution of heterogeneous ensembles through dynamic particle swarm optimization for video-based face recognition. Pattern Recognition, 45(7), 2460–2477.
Engelbrecht, A. P. (2006). Particle swarm optimization: Where does it belong? In Proceedings of the IEEE Swarm Intelligence Symposium (pp. 48–54).
Furht, B. (2010). Cloud computing fundamentals. In Handbook of cloud computing (pp. 3–19). Springer.
Greef, M., & Engelbrecht, A. P. (2008). Solving dynamic multi-objective problems with vector evaluated particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (pp. 2922–2929).
Hernandez, P. N., & Corona, C. C. (2011). Efficient multi-swarm PSO algorithms for dynamic environments. Memetic Computing, 3(3), 163–174.
Hu, X., & Eberhart, R. C. (2002). Adaptive particle swarm optimization: Detection and response to dynamic systems. In Proceedings of the IEEE Congress on Evolutionary Computation (Vol. 2, pp. 1666–1670).
Hu, X. M., Zhang, J., & Zhong, J. H. (2006). An enhanced genetic algorithm with orthogonal design. In Proceedings of the IEEE Congress on Evolutionary Computation (pp. 3174–318).
Ho, S. H., Lin, H.-S., Liauh, W. H., & Ho, S. J. (2008). OPSO: Orthogonal particle swarm optimization and its application to task assignment problems. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 38(2), 288–298.
Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarm algorithm. In IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 4104–4108). Orlando, FL.
Kennedy, J., & Mendes, R. (2002). Population structure and particle swarm performance. In Proceedings of the IEEE Congress on Evolutionary Computation (Vol. 2, pp. 1671–1676).
Kiranyaz, S., Pulkkinen, J., & Gabbouj, M. (2011). Multi-dimensional particle swarm optimization in dynamic environments. Expert Systems with Applications, 38(3), 2212–2223.
Korurek, M., & Dogan, B. (2010). ECG beat classification using particle swarm optimization and radial basis function neural network. Expert Systems with Applications, 37(12), 7563–7569.
Leung, Y. W., & Wang, Y. (2001). An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Transactions on Evolutionary Computation, 5(1), 41–53.
Li, C., & Yang, S. (2008). Fast multi-swarm optimization for dynamic optimization problems. In Fourth International Conference on Natural Computation (Vol. 7, pp. 624–628). Jinan.
Li, X., Branke, J., & Blackwell, T. M. (2006). Particle swarm with speciation and adaptation in a dynamic environment. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (pp. 51–58). ACM, USA.
Li, X., & Dam, K. H. (2003). Comparing particle swarms for tracking extrema in dynamic environments. In Proceedings of the IEEE Congress on Evolutionary Computation (Vol. 3, pp. 1772–1779).


Liang, J. J., & Qin, A. K. (2006). Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 10(3), 281–295.
Liang, J. J., & Suganthan, P. N. (2005). Dynamic multi-swarm particle swarm optimizer with local search. In Proceedings of the IEEE Congress on Evolutionary Computation (Vol. 1, pp. 522–528).
Liu, L., Yang, S., & Wang, D. (2010). Particle swarm optimization with composite particles in dynamic environment. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(6), 1634–1648.
Mendes, R., Kennedy, J., & Neves, J. (2004). The fully informed particle swarm: Simpler, maybe better. IEEE Transactions on Evolutionary Computation, 8(3), 204–210.
Nonin pulse oximeter. http://www.nonin.com/pulseoximetry/fingertip/onyx9550.
Omranpur, H., Ebadzadeh, M., Shiry, S., & Barzegar, S. (2012). Dynamic particle swarm optimization for multimodal function. International Journal of Artificial Intelligence, 1(1), 1–10.
Poli, R., Kennedy, J., & Blackwell, T. M. (2007). Particle swarm optimization. Swarm Intelligence, 1(1), 33–57. Springer.
Rezazadeh, I., Meybodi, M. R., & Naebi, A. (2011). Adaptive particle swarm optimization algorithm for dynamic environments. In International Conference on Swarm Intelligence, LNCS (pp. 120–129). Springer-Verlag, China.
Shen, Q., Shi, W. M., & Kong, W. (2008). Hybrid particle swarm optimization and tabu search approach for selecting genes for tumor classification using gene expression data. Computational Biology and Chemistry, 32(1), 53–60.
Shi, Y. H., & Eberhart, R. C. (1998). A modified particle swarm optimizer. In Proceedings of the IEEE World Congress on Computational Intelligence (pp. 69–73).
Vaquero, L. M., Rodero-Merino, L., Caceres, J., & Lindner, M. (2008). A break in the clouds: Towards a cloud definition. ACM SIGCOMM Computer Communication Review, 39, 50–55.
Yang, J., Bouzerdoum, A., & Phung, S. L. (2010). A particle swarm optimization algorithm based on orthogonal design. In IEEE World Congress on Evolutionary Computation (pp. 593–599).
Zephyr BioHarness. http://www.zephyr-technology.com/, http://www.zephyranywhere.com/healthcare/zephyrlife/.
Zhan, Z. H., Zhang, J., Li, Y., & Chung, H. S. H. (2009). Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 39(6), 1362–1381.
Zhan, Z.-H., Zhang, J., Li, Y., & Shi, Y.-H. (2011). Orthogonal learning particle swarm optimization. IEEE Transactions on Evolutionary Computation, 15(6), 832–846.
Zhao, S.-Z., Suganthan, P. N., & Das, S. (2010). Dynamic multiswarm particle swarm optimizer with sub-regional harmony search. In IEEE World Congress on Computational Intelligence (pp. 1983–1990).