Accepted Manuscript

Cost optimized hybrid genetic-gravitational search algorithm for load scheduling in cloud computing
Divya Chaudhary, Bijendra Kumar

PII: S1568-4946(19)30407-7
DOI: https://doi.org/10.1016/j.asoc.2019.105627
Article number: 105627
Reference: ASOC 105627
To appear in: Applied Soft Computing Journal
Received date: 21 June 2017
Revised date: 13 June 2019
Accepted date: 12 July 2019

Please cite this article as: D. Chaudhary and B. Kumar, Cost optimized hybrid genetic-gravitational search algorithm for load scheduling in cloud computing, Applied Soft Computing Journal (2019), https://doi.org/10.1016/j.asoc.2019.105627
Cost Optimized Hybrid Genetic-Gravitational Search Algorithm for Load Scheduling in Cloud Computing

Divya Chaudhary*, Bijendra Kumar
Department of Computer Engineering, Netaji Subhas Institute of Technology, Dwarka, New Delhi, India
[email protected]

Abstract – In cloud computing, cost optimization is a prime concern for load scheduling. Swarm-based meta-heuristics are prominently used for load scheduling in distributed computing environments. Conventional load scheduling approaches require many resources and strategies that are non-adaptive and static in computation, thereby increasing the response time, waiting time and total cost of computation. Swarm intelligence-based load scheduling, by contrast, is adaptive, intelligent, collective, decentralized and stochastic, and is based on biologically inspired mechanisms rather than conventional ones. The genetic algorithm schedules the particles using mutation and crossover techniques. The force and acceleration acting on a particle help in finding the velocity and position of the next particle. The best positions of the particles are assigned to cloudlets to be executed on the virtual machines in the cloud. The paper proposes a new load scheduling technique, the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA), for reducing the total cost of computation, which includes the costs of execution and transfer. It works on a hybrid crossover technique based gravitational search algorithm for locating the best position of a particle in the search space; the best position is then used for calculating the force. HG-GSA is compared to the existing approaches in the CloudSim simulator. By the convergence and statistical analysis of the results, the proposed HG-GSA approach reduces the total cost of computation considerably as compared to the existing PSO, Cloudy GSA and LIGSA-C approaches.

Keywords— Cloud Computing, Load Scheduling, Gravitational Search Algorithm, Swarm Intelligence, Genetic Algorithm

1. Introduction

Virtual and distributed computing is one of the most promising technologies for data storage and retrieval over a large geographical domain.
Cloud computing provides the benefit of accessing information from remote locations across the globe. It is a form of virtual distributed computing that utilizes the benefits of virtualization and resilient computing [1], and is used to increase the computational power of resources using a client-server architecture. Multiple requests for processing cloudlets are handled by the virtual machines at the same instant of time, and the requests are processed efficiently under the pay-as-you-go model. This form of computing helps users schedule cloudlets optimally so that they achieve lower computational time (the total time taken in assigning the cloudlets to virtual machines), lower computational cost (which comprises computational time and the different costs of bandwidth, RAM and memory) and efficient load scheduling. Features of the cloud such as scalability, manageability, fault tolerance and virtualization increase the ease of processing resources in the computing environment. The non-discriminatory and appropriate allocation of resources or cloudlets on the virtual machines is termed load scheduling [3]. Scheduling is performed to achieve maximum utilization of resources and to eliminate disproportion of the load on the virtual machines [5]. Conventional methods such as First Come First Serve (FCFS), Min-Min, Max-Min, Round Robin, Weighted Round Robin, RASA and the Segmented Min-Min algorithm are static algorithms that follow a predefined approach to scheduling the cloudlets in the computing environment, as stated by Chaudhary et al. [4]. These are non-adaptive in nature and follow a specialized, pre-decided mechanism for scheduling the load [6]. Other conventional methods that are dynamic in nature, such as the genetic algorithm, simulated annealing and genetic simulated annealing, depend on genetics and temperature as the basic criteria for scheduling the load [7]. On the other hand, swarm-based algorithms are adaptive, intelligent, stochastic, decentralized and non-conventional in nature. The load is scheduled much more efficiently, resulting in lower execution time, response time and total cost of computation. Swarm intelligence based meta-heuristic algorithms are used to perform the scheduling to achieve faster computation. A swarm is a group of objects such as ants, particles or bees [14-15]; the method a swarm follows to find a food source is applied to scheduling the workload. Prominent load scheduling algorithms such as genetic algorithms, particle swarm optimization and gravitational search algorithms work on the principles of genes in chromosomes, velocity, and Newton's law of gravitation, respectively [17]. These principles help in finding the next fit particle to be scheduled in the cloud.
The gravitational search algorithm is better than PSO in terms of computational time and cost for scheduling the load, but it does not store the best position values for further use.

1.1 Scope and Objectives

This paper proposes a cost optimized hybrid improved swarm intelligence algorithm which is meta-heuristic in nature and based on the gravitational search algorithm for scheduling the load in the cloud computing environment. The Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) uses the capabilities of both the genetic and the gravitational search algorithm to find the next best fit position of the particle in the search space. This is achieved by adding a hybrid crossover (a combination of two-point and uniform crossover), mutation, storage of best positions, a new cost evaluation function and a memory-based concept to the gravitational search algorithm. The positions of the particles are assigned to cloudlets for execution on the virtual machines. HG-GSA supports balanced exploitation of the search space by the particles. The objective is to reduce the total cost of computation (execution and transfer) for executing cloudlets on virtual machines, as illustrated in the results and analysis section in comparison to existing techniques. Section 2 provides an overview of load scheduling based on meta-heuristic swarm-based methods. Section 3 introduces the cost optimized Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) approach for scheduling in the cloud. The results and analysis of HG-GSA along with PSO, Cloudy GSA and LIGSA-C are provided in Section 4, and Section 5 summarizes the paper with directions for future work.
2. Scheduling of Load Based on Meta-heuristic Swarm Intelligence Methods

Cloud computing is referred to as virtual distributed computing providing wider availability of resources to users over a large geographical area using the internet. The resources can be requested and accessed by multiple cloudlets at the same instant of time, and users can access all resources present on the centralized server at any instant of time and from any place. The cloud provides services to the users using the pay-as-you-go model. In [1] Buyya et al. propose the CloudSim simulator for simulating cloudlets on virtual machines. These cloudlets run on datacentres where the entire workload is scheduled by load schedulers and balancers; the classes involved in scheduling are elaborated. In [2], the strategy for scheduling workflows from one particle to another in the grid is discussed. This scheduling is based on the power of a node to handle the amount of workload. The load passed to the virtual machines can cause imbalance, resulting in underloaded and overloaded nodes. Khiyaita et al. [3] discuss the significance, methods and algorithms of load balancing to overcome the problems caused by node imbalance. Load scheduling, as stated, is the process of allocating and processing the cloudlets on the VMs in the cloud in an optimal manner to reduce the total cost of computation. The primary aim of load scheduling is to achieve efficiency by reducing the transfer time, waiting time, response time, execution time and total cost of computation. The scheduling algorithms for load allocation, which can be static or dynamic in nature, are elaborated in a tabular analysis by Chaudhary et al. [4]. The dynamic algorithms use bio-inspired swarm-based or gravity-based methods to schedule the workloads, and have higher computational power to process heterogeneous tasks.
The swarm-based algorithms built on particles are used for scheduling the workload in the cloud, as discussed by Pacini et al. [5], to locate the next better particle based on the velocity and position of the current particle. The characteristics and drawbacks of the various load scheduling algorithms are elaborated in [6], with an in-depth analysis of scheduling algorithms in the cloud and an improved round robin scheduling method. The main objective of load scheduling is to reduce the total cost of computation. Tsai et al. [7] specify scheduling of tasks based on meta-heuristic methods; roulette selection based random scheduling of the tasks is performed based on the total number of iterations and the fitness value. A static scheduling algorithm which is not meta-heuristic is proposed in [8] to balance the load in the cloud. The approach involves less computation cost than primitive algorithms like FCFS and Round Robin scheduling, but suffers from inefficient search space utilization and is inappropriate for large dynamic processes. Kennedy et al. [9] proposed a swarm based meta-heuristic approach known as particle swarm optimization, built on a particle set that mimics the approach followed by a flock of birds moving towards a food source. The next best position in the search space depends on the velocity and position of the existing particle. In [10], Braun et al. discussed eleven static and dynamic scheduling methods for optimal load scheduling, specifying that the PSO algorithm outperforms GA, SA, Min-Min, Max-Min, Tabu Search and other algorithms on performance metrics such as cost. An in-depth survey of distributed swarm-based scheduling strategies is given in [11], highlighting the mechanism of swarm optimization leading to further better solutions. In [12] a constraint-based particle swarm optimization approach is specified for scheduling tasks on a grid of nodes.
The scheduling of the load in the cloud computing environment using particle swarm optimization is discussed by Pandey et al. [13]. The next better fit particle is calculated using a fitness function together with the velocity and position of the particle. The velocity depends on the particle's best position in the iteration (pbest) and the global best position in the swarm (gbest). It reduces the total cost of computation compared to the existing algorithms. Badr et al. [14] focus on the convergence of ant colony optimization algorithms, where pheromones play an important role in guiding the ants to the food source. Kumar et al. [15] use particle swarm optimization for sharing resources among the virtual machines in the cloud. A network simulator for load scheduling is specified by Garg et al. [16]; it uses PSO to schedule cloudlets on the VMs in CloudSim based on a set of particles in the search space. A novel search technique based on a physical law, namely Newton's law of gravitation, known as the Gravitational Search Algorithm, was proposed by Rashedi et al. [17]. The GSA does not include the concept of storing the best position for further use. The next particle is calculated based on the force and acceleration applied to the particle. The force depends on the (active and passive) masses of the particles and the distance between them, and the fitness value determines the masses. The position of the next particle is the sum of the velocity and position of the older particle. In [18] the GSA is applied to filter modelling functions, where the gravitational constant acts as an important factor for calculating the force. Gupta et al. [19] discussed the use of a fuzzy partitioning based gravitational search method on sets of satellite images for proper identification of the data. Rashedi et al. [20] discussed noise filtering in ultrasound images using GSA.
A new unit commitment approach via the Binary Gravitational Search Algorithm for optimizing the operation schedule of generating units at every hour interval under different environments is discussed by Yuan et al. [21]. In [22] and [23] Khatibinia et al. and Sahu et al. proposed hybrid gravitational search algorithms, improving GSA with orthogonal crossover and pattern searching for scheduling the load; these methods have been applied in distributed computing environments. Sun et al. in [24] and [25] propose two new gravitational search optimization approaches that increase the diversity of the particles and use the concept of memory on mathematical functions. Task scheduling based on an improved genetic algorithm is proposed in [26]. Trust evaluation based on behavioural graphs in the cloud computing environment is proposed by Hajizadeh et al. [27]. Xu et al. [28] and Ahmad et al. [29] focus on load balancing and service allocation mechanisms in the cloud computing environment. A gravitational search algorithm based on both attractive and repulsive forces for solving the optimization problem is discussed by Zandevakili et al. [30]. In [31, 32] Chaudhary et al. discussed the cloud based gravitational search algorithm and the linear improved gravitational search algorithm for load scheduling in the cloud computing environment. These are based on particles in the search space; the next particle is chosen based on the position and velocity of the previous particles, which are computed from the fitness function in the cloud. Finally, a detailed survey of the gravitational search algorithm is given by Rashedi et al. [33]. It defines the scope of the gravitational search algorithm and its advantages and disadvantages in various areas of computer science for further research and innovation.
The analysis of the various load scheduling algorithms as discussed by several authors is tabulated in Table 1 [4].

Table 1: Analysis of load scheduling algorithms (methodology, advantages and disadvantages) [4]

Min-Min Scheduling Algorithm
  Methodology: Uses minimum completion and execution time to schedule.
  Analysis: 1. Higher utilization rate of resources. 2. An efficient static algorithm providing minimum cost of computation.

Segmented Min-Min Scheduling Algorithm
  Methodology: Segmentation, based on MCT and MET.
  Analysis: 1. Measures both resource cost and computation performance. 2. The tasks assigned to schedule the particles are divided into segments.

Tabu Search Algorithm
  Methodology: Genetic Algorithm approach is used.
  Analysis: 1. Faster execution rates. 2. Provides much better distribution of load onto resources in specific time frames using the GA algorithm.

A Double Min-Min Algorithm for Task Scheduling
  Methodology: Speed, utilization, Min-Min strategy for request allocation.
  Analysis: 1. Calculates the resource cost and computational performance. 2. Static in nature. 3. Less transfer time in computation.

Resource-Aware-Scheduling Algorithm (RASA)
  Methodology: Uses both Min-Min and Max-Min scheduling strategies.
  Analysis: 1. Measures both resource cost and computation performance. 2. A dynamic scheduling algorithm. 3. Blends the advantages of both methods to reduce the total cost of computation.

Genetic Algorithm
  Methodology: Resource demand, population distribution, crossover, mutation.
  Analysis: 1. Speed of execution of the GA is high. 2. High resource utilization rate. 3. Less transfer time than static algorithms.

Simulated Annealing Algorithm
  Methodology: Temperature based, iterative, based on solution spaces.
  Analysis: 1. Measures both resource cost and computations. 2. Temperature is the main driving force. 3. Reduced transfer time compared to GA.

Genetic Simulated Annealing
  Methodology: Execution cost and time; combination of GA and SA.
  Analysis: 1. Minimizes the cost for better resource utilization. 2. Combines the advantages of both genetic algorithm and simulated annealing. 3. Most optimal approach.

Minimum Completion Time
  Methodology: Based on execution time and request provisioning.
  Analysis: 1. A prime and basic algorithm. 2. Optimizes both transfer time and cost of computation.

A PSO Based VM Resource Scheduling Model for Cloud Computing
  Methodology: Virtual machine handling, resource management.
  Analysis: 1. Resource provisioning based on PSO. 2. VM allocation policy-based method. 3. Swarm intelligence-based method. 4. Follows the flocking behaviour of birds to schedule the particles.

Ant Colony Optimization Algorithm
  Methodology: Uses the strategy of ants to locate the food source.
  Analysis: 1. Ants locate food based on pheromones. 2. Dynamic in nature. 3. The path to the food source is located. 4. Faster execution and computation time than other static algorithms.

Cloudy Gravitational Search Algorithm
  Methodology: Based on Newton's laws of gravitation to schedule the particles.
  Analysis: 1. Law of gravity is used. 2. Mass and acceleration play an important role in scheduling. 3. Not able to store the positions of the particles. 4. Less computational cost than PSO and other conventional algorithms.

Linear Improved Gravitational Search Algorithm
  Methodology: Based on a linearly improved gravitational constant.
  Analysis: 1. Uses a linear gravitational constant G instead of the exponential function. 2. The cost of computation is reduced as compared to the cloud-based GSA.

Bee Swarm Optimization Algorithm
  Methodology: Nature of bees locating their food source.
  Analysis: 1. The waggle dance is performed to locate the food sources and the amount of food. 2. Three different types of bees are used for extracting the food source.
The proposed method includes the mathematical modelling of functions by genetic and gravitational search algorithms. The existing algorithms above incur higher computational cost and poorer search space optimization than the proposed HG-GSA.

3. Proposed Hybrid Genetic-Gravitational Search Algorithm (HG-GSA)

The paper presents an improved Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) method based on the gravitational search algorithm approach in cloud computing. The HG-GSA approach aims to reduce the total cost of computation and to schedule the cloudlets (load) on the virtual machines in an effective manner. The cost of computation includes both the transfer and execution cost (in dollars) of the cloudlets. This approach achieves larger search space utilization and higher user satisfaction than the existing algorithms. It works on bio-inspired swarm based meta-heuristics and the law of gravitation in the cloud. HG-GSA is a memory-based algorithm that stores the best position values of the particles, overcoming the drawback of GSA. The next particle is calculated based on the genetic algorithm and the gravitational search algorithm. It includes a fitness function evaluation, a hybrid crossover scheme, a mutation function, a new gravitational constant function, memory-based velocity and position calculation functions and a new cost evaluation function. The cloudlet and virtual machine parameters like MIPS, bandwidth, execution cost and transfer cost are used to evaluate the value of the fitness function. In the cloud computing environment, the datacentre must consider all possibilities of executing the cloudlets on the respective virtual machines (VMs). If 3 cloudlets are to be executed on 2 virtual machines, the number of possibilities is 2^3, equal to 8. The M particles are
defined in the d-dimensional search space, which contains a large set of possible solutions. Thus, a meta-heuristic approach based on Newton's law of gravitation is needed to find the positions. To find the most optimal solution for the cloudlets, we use particles and schedule them efficiently based on the HG-GSA scheduling strategy. The algorithm provides the best position to the cloudlets for execution on the VMs. The pseudo code of the HG-GSA heuristic is defined below. The M particles are initialized using the JSwarm package in the CloudSim simulator as:

X_i = (x_i^1, ..., x_i^d),   i = 1, 2, ..., M                                       (1)

The fitness function evaluates the fitness value of the particles in the search space. The first particle is initialized randomly in the search space, followed by selection of the next particle based on a better fitness value. It depends on the bandwidth, MIPS, execution cost and transfer cost of the cloudlets and the VMs. Let C_exe(M, j) be the total cost of execution (in dollars) of all the cloudlets allocated to virtual machine resource PC_j. It is calculated by summing all the weights w_kj assigned on the nodes (the cost of executing cloudlet k on virtual machine resource j) in the mapping M of all cloudlets allocated to each resource. Let C_tx(M, j) be the total transfer cost between the cloudlets allocated to virtual machine resource PC_j and those not allocated to that resource in the mapping M. It is the product of the size of the output file (the weight of edge e_{k1,k2}) passed from one cloudlet k1 ∈ K to cloudlet k2 ∈ K and the cost of communication from the resource where k1 is mapped, M(k1), to the resource where k2 is mapped, M(k2). The average cost of communication per unit of data between two resources is given by d_{M(k1),M(k2)}, and the particles are dependent on each other. For two or more cloudlets executing on the identical resource, the communication cost is taken as zero. The total cost of each resource is obtained by adding its execution and transfer costs, and this cost is then minimized to evaluate the fitness value. The fitness function is:

C_total(M) = max_{j ∈ P} C_total(M, j)                                              (2)

C_total(M, j) = C_exe(M, j) + C_tx(M, j)                                            (3)

C_exe(M, j) = Σ_{k ∈ K : M(k) = j} w_kj                                             (4)

C_tx(M, j) = Σ_{k1 ∈ K : M(k1) = j} Σ_{k2 ∈ K : M(k2) ≠ j} d_{M(k1),M(k2)} · e_{k1,k2}   (5)

Minimize C_total(M) over all mappings M                                             (6)
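The cost model above can be sketched in a few lines of Python. This is an illustrative sketch only: the paper's implementation runs in the Java-based CloudSim, the "maximum per-resource cost" fitness follows the Pandey et al. [13] formulation that this section builds on, and all function and argument names here are hypothetical. The brute-force helper also makes the search-space size concrete (2^3 = 8 mappings for 3 cloudlets on 2 VMs), which is why a meta-heuristic is needed at realistic scale.

```python
from itertools import product

def total_cost(mapping, exec_cost, out_size, tx_cost):
    """Fitness of one mapping: the maximum per-VM cost (execution + transfer),
    in the spirit of equations (2)-(5).
    mapping[k]       -> VM index assigned to cloudlet k
    exec_cost[k][j]  -> cost of running cloudlet k on VM j (w_kj)
    out_size[k1][k2] -> size of output passed from cloudlet k1 to k2 (e_k1k2)
    tx_cost[j1][j2]  -> per-unit communication cost between VMs (d, 0 on diagonal)
    """
    cost_per_vm = []
    for j in set(mapping):
        # execution cost of every cloudlet mapped to VM j
        c_exe = sum(exec_cost[k][j] for k, vj in enumerate(mapping) if vj == j)
        # transfer cost to cloudlets mapped elsewhere (zero within the same VM)
        c_tx = sum(out_size[k1][k2] * tx_cost[mapping[k1]][mapping[k2]]
                   for k1, v1 in enumerate(mapping) if v1 == j
                   for k2, v2 in enumerate(mapping) if v2 != j)
        cost_per_vm.append(c_exe + c_tx)
    return max(cost_per_vm)

def brute_force_best(num_cloudlets, num_vms, exec_cost, out_size, tx_cost):
    """Exhaustive search over all num_vms ** num_cloudlets mappings --
    feasible only for toy sizes, hence the meta-heuristic in the paper."""
    return min(product(range(num_vms), repeat=num_cloudlets),
               key=lambda m: total_cost(m, exec_cost, out_size, tx_cost))
```

For instance, with two cloudlets and two VMs, the brute-force search evaluates 2^2 = 4 mappings and returns the one with the smallest maximum per-VM cost.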
Hybrid Genetic Algorithm

The fitness value fit_i(t) of each particle i at iteration t (with N the total number of particles in the search space) is generated using the equations above. The functionality of the genetic algorithm is used for better search space utilization. The crossover and mutation probabilities, pc and pm, are initialized. A randomly initialized variable rand is compared with the standard deviation of the fitness values of the particles and with the ratio of the current iteration t to the maximum number of iterations Itermax. If rand is greater than the standard deviation of the fitness values, or smaller than t/Itermax, the proposed HG-GSA heuristic is applied to the particles; otherwise the memory-based GSA is applied. The genetic mutation on the particles is performed using the selection probability of the current particle s in the search space:

prob_s(t) = fit_s(t) / Σ_{i=1}^{N} fit_i(t)                                         (7)
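The roulette-style selection probability of equation (7) can be sketched as follows (an illustrative Python sketch; the function name is hypothetical):

```python
def selection_prob(fitness):
    """Selection probability of each particle: its fitness value divided
    by the sum of all fitness values (roulette-wheel style, eq. 7)."""
    total = sum(fitness)
    return [f / total for f in fitness]

# Three particles with fitness 2, 3 and 5 get probabilities 0.2, 0.3 and 0.5.
probs = selection_prob([2.0, 3.0, 5.0])
```

The probabilities always sum to 1, so they can be compared directly against the thresholds pc and pm in the heuristic below.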
Hybrid Genetic-Gravitational Search Algorithm Heuristic
01  Initialize randomly the position and velocity vectors of the particles, and set pm, pc and Itermax.
02  for each particle compute the fitness value fiti.
03      Generate a random number rand.
04      if ((rand > std(fit)) or (rand < t/Itermax))
05          Call HG-GSA()
06      else
07          Call GSA()
08      end if
09  end for
10  if (t < Itermax)
11      Go to step 2.
12  else if the termination condition is satisfied
13      return best solution.
14      Calculate the cost with the new cost evaluation function based on the parallel processing model.
15      return Total_Cost.
16  end if
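The branching rule above (comparing a random draw against the standard deviation of the fitness values and against t/Itermax) can be sketched as follows. This is an illustrative Python sketch under the stated rule; the names are hypothetical and the real dispatch runs inside the CloudSim simulation loop:

```python
import random
import statistics

def dispatch(fitness, t, iter_max):
    """For each particle, decide whether to apply the genetic operators
    (the HG-GSA branch) or fall through to the memory-based GSA branch,
    using the rule: rand > std(fit) or rand < t / iter_max."""
    choices = []
    for _ in fitness:
        r = random.random()
        if r > statistics.pstdev(fitness) or r < t / iter_max:
            choices.append("HG-GSA")
        else:
            choices.append("GSA")
    return choices
```

Note that as t approaches Itermax the ratio t/Itermax approaches 1, so the genetic branch is taken more and more often in late iterations.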
Next, the crossover is performed by comparing the selection probability with the crossover probability pc, as illustrated in figure 1. Parent 1 (par1) stores the particles having a selection probability higher than pc, while parent 2 (par2) stores the particles with values lower than pc. A hybrid crossover scheme involving two-point crossover and uniform crossover is used for finding the best position of the particle.
Fig. 1 (a) Genetic two-point crossover on parent 1 (prob1 > pc) and parent 2 (prob1 < pc): two 10-dimensional offspring are produced by exchanging the segment between the two crossover points.

Fig. 1 (b) Genetic uniform crossover on parent 1 and parent 2: two 10-dimensional offspring are produced by swapping dimensions according to a random mask.

Offspring 1 and 2 are generated by the two-point crossover mechanism. The results of both offspring are compared and the best values between them are assigned to child 1. Offspring 3 and 4 are generated using the uniform crossover mechanism; the results of both are compared and the best values between them are assigned to child 2. The parent 1 values are then compared with parent 2, child 1 and child 2. The best position is selected and assigned to the particle, and the particles are further mutated to fit the new search space as illustrated in figure 2. The selection probability of every particle is compared with the mutation probability pm; the particles with higher selection probability are mutated by updating the maximum and minimum values of the dimensions and are randomly reallocated in the search space. The pseudo code of HG-GSA is given below, and the selected particles are then passed to the gravitational search algorithm.
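The hybrid crossover of Fig. 1 can be sketched as follows. This is an illustrative Python sketch under the description above: two-point and uniform crossover each produce a pair of offspring, the fitter member of each pair becomes a child, and the best of parents and children survives. The function names are hypothetical, and `fitness` is any cost function where lower is better:

```python
import random

def two_point(p1, p2):
    """Two-point crossover: swap the segment between two random cut points."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

def uniform(p1, p2):
    """Uniform crossover: swap each dimension independently with probability 0.5."""
    mask = [random.random() < 0.5 for _ in p1]
    o3 = [x if m else y for m, x, y in zip(mask, p1, p2)]
    o4 = [y if m else x for m, x, y in zip(mask, p1, p2)]
    return o3, o4

def hybrid_crossover(p1, p2, fitness):
    """Generate four offspring (two-point + uniform), keep the fitter of each
    pair as child 1 and child 2, then return the best position overall."""
    o1, o2 = two_point(p1, p2)
    o3, o4 = uniform(p1, p2)
    child1 = min((o1, o2), key=fitness)
    child2 = min((o3, o4), key=fitness)
    return min((p1, p2, child1, child2), key=fitness)
```

Because the final selection compares the children against both parents, the surviving position can never be worse than the better parent, which matches the elitist behaviour described above.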
Fig. 2 Diversity enhancing mechanism used in mutation by the HG-GSA method in the cloud: a particle moves from its current position to a new mutated position in the search space.

Call HG-GSA()
01  Initialize the probability of the particles probi and the children C1 and C2.
02  Calculate the selection probability probi of each particle.
03  Apply crossover to particles with probability probP1 > pc, with parent 1 as that particle and parent 2 as a randomly selected particle with probP2 < pc.
04  Generate four offspring using the hybrid genetic crossover approach:
05  do
06      Offspring 1 (O1) and offspring 2 (O2) with two-point crossover.
07      Generate the child C1 using the best position from O1 and O2.
08      Offspring 3 (O3) and offspring 4 (O4) with uniform crossover.
09      Generate the child C2 using the best position from O3 and O4.
10  end do
11  Compare the fitness of parent 1 with the children.
12  Update the position of the particle par1 with the best position among par2, C1 and C2.
13  Apply mutation to particles with probpi > pm:
14  do
15      Form a cluster of the better fit particles with probpi < pm.
16      Update the values of the max and min positions on every dimension.
17      Randomly distribute the rest of the particles in the cluster formed.
18      A new search space is created.
19  end do
20  Call GSA().
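The mutation steps (forming a cluster of fitter particles, updating the per-dimension max/min, and randomly redistributing particles inside the resulting box) can be sketched as follows. This is an illustrative Python sketch of the diversity mechanism of Fig. 2; the function name is hypothetical:

```python
import random

def mutate_cluster(cluster):
    """Re-seed each mutated particle uniformly inside the bounding box
    (per-dimension min/max) of the fitter cluster, creating the reduced
    search space described in steps 14-19."""
    dims = len(cluster[0])
    lo = [min(p[d] for p in cluster) for d in range(dims)]   # min per dimension
    hi = [max(p[d] for p in cluster) for d in range(dims)]   # max per dimension
    return [[random.uniform(lo[d], hi[d]) for d in range(dims)]
            for _ in cluster]
```

The redistributed particles are guaranteed to lie inside the region spanned by the fitter particles, which concentrates the search without collapsing it to a single point.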
Gravitational Search Algorithm

The fitness values of the particles, together with the best(t) and worst(t) values, generate the masses. fit_i(t) is the fitness value of particle i at the specific iteration t, i.e. its total cost from the cost evaluation function:

fit_i(t) = C_total(M_i(t))                                                          (8)

The load scheduling is a minimization problem, so the fitness is calculated based on the best(t) and worst(t) values:

best(t) = min_{j ∈ {1,...,N}} fit_j(t)                                              (9)

worst(t) = max_{j ∈ {1,...,N}} fit_j(t)                                             (10)

The masses are then:

m_i(t) = (fit_i(t) − worst(t)) / (best(t) − worst(t))                               (11)

M_i(t) = m_i(t) / Σ_{j ∈ {1,...,N}} m_j(t)                                          (12)

where j ∈ {1, ..., N} refers to particle j among the N particles. The gravitational constant G(t) acting with the force at a specific instant is defined by the potential of the particles and is used to increase the movement between them. It is computed from the initial value G0, the constant α and the time t, with T the total number of iterations:

G(t) = G(G0, t)                                                                     (13)

G(t) = G0 · e^(−α t / T)                                                            (14)

The gravitational (active and passive) and inertial masses of a particle are taken to be equal:

M_ai = M_pi = M_ii = M_i,   i = 1, 2, ..., N                                        (15)

The force acting on a particle depends on the masses M_i and M_j and the Euclidean distance R_ij(t) between the particles in the d-dimensional search space:

F_ij^d(t) = G(t) · (M_pi(t) · M_aj(t)) / (R_ij(t) + ε) · (x_j^d(t) − x_i^d(t))      (16)

R_ij(t) = ‖X_i(t), X_j(t)‖_2                                                        (17)

A random value rand_j, which lies in the interval [0, 1), weights each pairwise contribution. The cumulative force acting on particle i at the specific instant t, used to find the next feasible position, and the resulting acceleration, based on the force and the inertial mass, are:

F_i^d(t) = Σ_{j=1, j≠i}^{N} rand_j · F_ij^d(t)                                      (18)

a_i^d(t) = F_i^d(t) / M_ii(t)                                                       (19)

The next particle to be executed in the search space is evaluated using the velocity of the particle. The velocity at the next iteration is calculated using a uniformly distributed random value rand_i, which lies in [0, 1], together with the current velocity and the acceleration; the random values rand_i and rand_j also help in search optimization among the particles. The position of the next particle is obtained from the current position and the updated velocity. The pseudo code of the gravitational search method is provided below.

v_i^d(t+1) = rand_i · v_i^d(t) + a_i^d(t)                                           (20)

x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)                                                  (21)
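One GSA iteration over the equations above can be sketched as follows. This is an illustrative Python sketch of the standard GSA update (masses, gravitational constant, force, acceleration, velocity and position); the function name and parameter defaults (G0 = 100, α = 20) are assumptions, and the paper's memory-based variant additionally stores pbest/gbest:

```python
import math
import random

def gsa_step(pos, vel, fit, t, T, g0=100.0, alpha=20.0, eps=1e-9):
    """One GSA iteration for N particles in d dimensions (minimization).
    pos, vel: N x d lists; fit: N fitness (cost) values; t: iteration; T: max iterations."""
    n, d = len(pos), len(pos[0])
    best, worst = min(fit), max(fit)                         # minimization problem
    m = [(f - worst) / (best - worst + eps) for f in fit]    # raw masses
    s = sum(m)
    M = [mi / (s + eps) for mi in m]                         # normalized masses
    G = g0 * math.exp(-alpha * t / T)                        # gravitational constant
    for i in range(n):
        acc = [0.0] * d
        for j in range(n):
            if j == i:
                continue
            R = math.dist(pos[i], pos[j])                    # Euclidean distance
            for k in range(d):
                # Random-weighted pairwise force divided by the inertial mass;
                # since all masses of particle i are equal, M_i cancels and only
                # M[j] remains, which also avoids dividing by a zero mass.
                acc[k] += random.random() * G * M[j] * (pos[j][k] - pos[i][k]) / (R + eps)
        for k in range(d):
            vel[i][k] = random.random() * vel[i][k] + acc[k]  # velocity update
            pos[i][k] += vel[i][k]                            # position update
    return pos, vel
```

Because G decays exponentially with t, the forces (and thus the step sizes) shrink over the iterations, shifting the search from exploration to exploitation.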
Fig. 3 Flowchart depicting Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) in cloud computing
Call GSA()
01  Initialize the mass, acceleration and force of the particles: M, a, F; set the gravitational constant G, pbest and gbest.
02  Calculate the mass M using equation (12) for each particle.
03  Compute the gravitational constant G using the pbest and gbest by equations (13) and (14).
04  Generate the force F and the acceleration a based on the mass and the Euclidean distance by equations (18) and (19).
05  Update the velocity vi() and the position xi() using equations (20) and (21).
The process continues till the maximum-iteration condition is met. The global best position results are returned to the cloudlets for scheduling the load on the virtual machines in the CloudSim simulator. The cloudlets are then allocated to their respective VMs to run on the datacentres. The flowchart summarizing the proposed Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) is illustrated in figure 3. The total cost of computation is computed using a parallel computing approach by the cost evaluation functions. HG-GSA reduces the total cost of computation significantly in the cloud by providing effective load scheduling of the cloudlets. It exploits a larger search space area, provides maximum satisfaction to the users and yields maximum utilization of the VMs at reduced cost. The results are discussed in the next section.

4. Experimental Results & Analysis

The proposed HG-GSA load scheduling algorithm for cost optimization and the existing algorithms (PSO,
Cloudy GSA and LIGSA-C) are implemented on the CloudSim Simulator. The Network CloudSim Simulator is simulated based on CloudSim for scheduling the load scheduling on the datacentres by the cloudlets and VMs. The proposed and existing algorithms use the Swarm package JSwarm based on particles for simulation. The total cost of computation for proposed HG-GSA and existing algorithms are provided in a tabulated and graphical manner. The simulator works on 25 particles initialized by swarm package which are divided into 10, 15 and 20 cloudlets on 8 VMs in CloudSim. Each cloudlet has parameters of fitness value, MIPS, bandwidth, execution and transfer cost to be managed by the cloudlets and execution cost to be managed by VMs. The algorithmic set is executed on many iterations ranging from 10 to 1000 as given in Table 2. Table 2: Simulation Parameters Iterations
10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000
Virtual Machines
8
MIPS of VMs
1.011, 1.004, 1.013, 1.0, 0.99, 1.043, 1.023, 0.998
Cloudlets
10, 15, 20
Datacentre Count
2
VM in each Datacentre
4
Particle Count
25
Dimensions in position of each particle
Cloudlets
P c, P m
0.04
100 ⍺
20
Simulator
CloudSim
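The cost model underlying the tables can be made concrete with a short sketch. This is a hypothetical reading of the cost evaluation described in the paper (per-cloudlet execution plus transfer cost, with the parallel treatment dominated by the most heavily loaded VM); the function and parameter names are assumptions, not the simulator's API.

```python
def total_cost(mapping, exec_cost, transfer_cost):
    """Hypothetical cost evaluation for a cloudlet-to-VM mapping.
    mapping[c] = VM assigned to cloudlet c.
    Execution cost is accumulated per VM; since the VMs run in parallel,
    the most heavily loaded VM dominates. Transfer cost is summed per cloudlet."""
    per_vm = {}
    for c, vm in mapping.items():
        per_vm[vm] = per_vm.get(vm, 0.0) + exec_cost[(c, vm)]
    return max(per_vm.values()) + sum(transfer_cost[c] for c in mapping)
```

Under this reading, the scheduler searches for the mapping that minimizes this value; HG-GSA uses it as the fitness of a particle position.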
The total computational cost incurred by the proposed and existing algorithms for 10, 15 and 20 cloudlets on 8 VMs is reported against the number of iterations in Tables 3, 4 and 5. As the results show, HG-GSA reduces the total cost of computation for 10 cloudlets by approximately 12% in comparison to LIGSA-C, 10% in comparison to Cloudy GSA and 85% in comparison to PSO.

Table 3: Total Cost for 10 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA algorithms

Iteration | PSO | Cloudy GSA | LIGSA-C | HG-GSA
10 | 144670.354 | 22033.564 | 27842.761 | 19307.616
20 | 154718.388 | 22231.076 | 22364.729 | 22127.076
30 | 146023.003 | 22646.835 | 22364.729 | 20917.076
40 | 151117.032 | 25236.798 | 29819.639 | 16894.164
50 | 153934.604 | 21804.132 | 28550.078 | 19307.616
60 | 146671.563 | 28238.387 | 22977.632 | 19237.537
70 | 150630.778 | 24134.520 | 25685.220 | 18361.303
80 | 144320.104 | 27754.698 | 26092.184 | 19360.000
90 | 150043.444 | 28324.327 | 27754.698 | 19754.698
100 | 155538.952 | 29378.085 | 27309.156 | 19307.616
200 | 143757.944 | 22364.729 | 25289.422 | 18637.275
300 | 155469.762 | 26501.040 | 23337.076 | 20290.968
400 | 154534.972 | 27754.698 | 29023.443 | 19754.698
500 | 148303.411 | 22239.871 | 24134.520 | 18035.191
600 | 143482.749 | 22364.729 | 25410.000 | 18745.785
700 | 148181.188 | 33288.008 | 25229.587 | 19879.760
800 | 145980.430 | 26069.204 | 25705.824 | 19760.956
900 | 145713.846 | 19760.956 | 22927.794 | 20514.342
1000 | 150183.341 | 19879.760 | 18100.890 | 19585.390
In Table 4, for 15 cloudlets, the total cost of computation in HG-GSA is reduced by approximately 17% from LIGSA-C, 21% from Cloudy GSA and 88% from the Particle Swarm Optimization algorithm. Thus, the hybrid method produces better load schedules than the latest gravitational search algorithms.

Table 4: Total Cost for 15 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA algorithms

Iteration | PSO | Cloudy GSA | LIGSA-C | HG-GSA
10 | 251281.517 | 37857.076 | 32257.321 | 27754.698
20 | 248417.269 | 35963.567 | 28937.135 | 32304.609
30 | 248960.089 | 29040.000 | 28961.424 | 25341.246
40 | 247411.824 | 38166.082 | 38475.073 | 27754.698
50 | 249097.514 | 32648.053 | 33050.346 | 32052.996
60 | 252863.341 | 38681.683 | 36268.231 | 25341.246
70 | 233301.076 | 41898.990 | 36731.930 | 27567.715
80 | 240275.467 | 31897.645 | 35784.151 | 30058.651
90 | 238414.151 | 36863.288 | 30352.662 | 26451.613
100 | 246693.713 | 38786.142 | 31826.259 | 32027.904
200 | 257046.548 | 31383.812 | 37946.693 | 30390.736
300 | 241913.925 | 30846.270 | 36969.745 | 30421.034
400 | 250686.566 | 43246.331 | 38615.232 | 26682.646
500 | 255247.494 | 30421.034 | 34466.448 | 32676.339
600 | 255640.518 | 41051.117 | 38202.020 | 28961.424
700 | 240621.622 | 42062.432 | 32052.996 | 26547.972
800 | 234732.638 | 35731.239 | 31027.555 | 25705.824
900 | 245081.605 | 37408.506 | 38177.916 | 28961.424
1000 | 245697.967 | 34995.054 | 40039.803 | 27124.430
In Table 5, the total cost of computation required for scheduling 20 cloudlets with the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) is reduced by approximately 13% from the Linear Improved Gravitational Search Algorithm in Cloud (LIGSA-C), 16% from the Cloud-based Gravitational Search Algorithm (Cloudy GSA) and 89% from the Particle Swarm Optimization (PSO) algorithm. On the basis of these results, the hybrid genetic-gravitational search algorithm (HG-GSA) outperforms all the existing classical swarm intelligence-based algorithms. The total computational cost reduction is further depicted in the figures that follow.

Table 5: Total Cost for 20 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA algorithms

Iteration | PSO | Cloudy GSA | LIGSA-C | HG-GSA
10 | 368649.602 | 44876.475 | 44087.031 | 36863.288
20 | 354478.049 | 41712.666 | 45876.723 | 37233.368
30 | 345540.580 | 46517.096 | 45524.083 | 40483.466
40 | 356553.370 | 44648.863 | 43818.193 | 38517.034
50 | 353046.854 | 50765.556 | 49868.660 | 34390.456
60 | 355353.891 | 47062.315 | 46108.799 | 41002.004
70 | 351187.753 | 44648.863 | 45980.000 | 41028.684
80 | 357996.221 | 36958.024 | 42285.248 | 36604.117
90 | 371324.589 | 47707.209 | 39759.519 | 37341.227
100 | 368448.353 | 43551.325 | 42368.312 | 38615.232
200 | 359849.885 | 48400.000 | 35498.519 | 34390.456
300 | 347655.150 | 44876.475 | 42082.111 | 39821.958
400 | 350306.997 | 44648.863 | 46404.602 | 32027.904
500 | 365131.362 | 39521.912 | 45492.859 | 40855.589
600 | 366225.261 | 51872.510 | 39434.343 | 44085.780
700 | 369217.780 | 47914.295 | 39521.912 | 41690.192
800 | 366704.318 | 42284.396 | 46171.691 | 34390.456
900 | 362957.682 | 47195.216 | 50498.534 | 41028.684
1000 | 363249.969 | 52452.208 | 51730.622 | 36201.780
Figures 4, 5 and 6 depict the graphical analysis of the total cost of computation of the particles versus the total number of iterations. The graphical comparison of Cloudy GSA, LIGSA-C and the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) for 10 cloudlets, based on Table 3, is shown in Figure 4.
Fig4. Total Cost for 10 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA.

For 15 cloudlets, the total cost of computation of the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) relative to Cloudy GSA and the Linear Improved Gravitational Search Algorithm is presented in Figure 5. As seen from the graphs, these algorithms are stochastic in nature and every run produces a different result; the results are therefore computed a number of times for each iteration setting. On average, HG-GSA outperforms the existing algorithms, as the graphical representation shows.
Fig5. Total Cost for 15 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA.

In the case of 20 cloudlets, HG-GSA outperforms Cloudy GSA and LIGSA-C by reducing the total cost of computation by around 16% and 13%, respectively. This is depicted in Figure 6 over the total number of iterations.
Fig6. Total Cost for 20 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA.

The comparison of the proposed Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) with the GSA variants (Cloudy GSA and LIGSA-C) and Particle Swarm Optimization (PSO) is depicted in Figure 7 for 10 cloudlets. The PSO algorithm follows the serial computation approach described by Buyya et al., whereas the HG-GSA algorithm follows parallel computation.
Fig.7. Total Cost for 10 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

Figure 8 compares the total cost of computation of the proposed HG-GSA with the existing Cloudy GSA, LIGSA-C and PSO algorithms. HG-GSA reduces the total cost of computation by around 88% relative to the primitive particle swarm optimization algorithm in the cloud computing environment.
Fig.8. Total Cost for 15 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

Figure 9 gives the graphical illustration of the proposed HG-GSA (Hybrid Genetic-Gravitational Search Algorithm), Cloudy GSA and LIGSA-C with respect to PSO (Particle Swarm Optimization) in terms of the total cost of computation over the number of iterations. Based on the new cost function, HG-GSA provides much better results than the other algorithms.
Fig.9. Total Cost for 20 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

The experimental study above covers the total cost values for 10, 15 and 20 cloudlets. The consistency of the algorithms is further established by the convergence and statistical analysis of the result set.

4.1. Convergence Analysis

The convergence analysis examines the behaviour of the algorithms as it would appear in real-time networks: it indicates whether an algorithm will continue to behave and execute in a similar manner as the iterations grow. Analysing the convergence of heuristics towards a near-optimal solution is a complex task, and the behaviour depends on the number of iterations of each algorithm. The mean of the normalized displacement over the iterations is therefore measured on the total cost of computation results. Let C_t(k) denote the total cost of the k-th data vector at the t-th iteration, C_0(k) the corresponding value at the 0th iteration, and n the number of data vectors (here k ranges from 1 to 19). The mean of the normalized displacement at the t-th iteration is then:

D(t) = (1/n) Σ_{k=1}^{n} |C_t(k) − C_0(k)| / C_0(k)        (22)
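A minimal sketch of this displacement measure, under the assumption that each cost is normalized against its 0th-iteration value (the function name and data layout are illustrative):

```python
def mean_normalized_displacement(cost_series, t):
    """Mean over the n data vectors of |C_t - C_0| / C_0.
    cost_series[t][k] = total cost of data vector k at iteration t."""
    c0, ct = cost_series[0], cost_series[t]
    n = len(c0)
    return sum(abs(ct[k] - c0[k]) / c0[k] for k in range(n)) / n
```

For example, two data vectors moving from (100, 200) to (110, 180) both shift by 10% of their initial value, giving a mean normalized displacement of 0.1.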
The results are depicted in Figures 10, 11 and 12. The HG-GSA approach converges within a number of iterations comparable to the other heuristic algorithms in the cloud. The analysis covers the results of all the algorithms based on the total cost of computation of 10, 15 and 20 cloudlets from Tables 3, 4 and 5.
Fig.10. Normalized Distance Mean for 10 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

In Figure 10, the normalized convergence analysis is performed on the existing algorithms and the proposed HG-GSA. PSO converges at the 11th iteration, Cloudy GSA at the 10th and LIGSA-C at the 12th, whereas the proposed HG-GSA converges at the 8th iteration. This indicates that the algorithm produces a consistent set of results and that the cost follows a similar pattern in further execution.
Fig.11. Normalized Distance Mean for 15 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

The mean of the normalized convergence for the total cost in Figure 11 shows that PSO, Cloudy GSA, LIGSA-C and HG-GSA converge at the 13th, 10th, 13th and 9th set of iterations, respectively. From the figure it is clear that HG-GSA provides a consistent set of results with smaller variations in the total cost of computation.
Fig.12. Normalized Distance Mean for 20 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

The mean of the normalized convergence for the total cost in Figure 12 shows that the PSO, Cloudy GSA, LIGSA-C and HG-GSA algorithms converge at the 9th, 9th, 10th and 10th set of iterations, respectively. From the figures it is clear that HG-GSA provides a much better total cost of computation in the convergence analysis, with fewer fluctuations.

Secondly, the mean of the pairwise distances at different iterations is computed on the result set of the total cost. It is based on the total cost data C_i and C_j at the i-th and j-th iterations respectively, where n is the number of data vectors. With C̄ denoting the mean total cost, the mean of the normalized pairwise distance is:

P = (2 / (n(n−1))) Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} |C_i − C_j| / C̄        (23)
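One plausible reading of this pairwise measure, normalizing the mean absolute pairwise difference by the mean cost (the exact normalization and function name are assumptions):

```python
def mean_normalized_pairwise_distance(costs):
    """Mean of |C_i - C_j| over all iteration pairs, divided by the mean cost."""
    n = len(costs)
    mean_c = sum(costs) / n
    total = sum(abs(costs[i] - costs[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2) / mean_c
```

A constant cost series yields zero, so a value that shrinks towards zero over the iterations signals that the algorithm has settled, which is how the displacement plots below are read.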
The results of the normalized pairwise displacement for the total cost of 10, 15 and 20 cloudlets are illustrated in Figures 13 to 18. It is observed that the proposed HG-GSA and the existing algorithms, taken together, converge within thirteen iterations. The algorithms minimize the mean of the pairwise distance between iterations, also known as the displacement, as the results converge; on this basis the behaviour remains predictable for large iteration counts with larger numbers of cloudlets and virtual machines.
Fig.13. Mean of normalized Displacement for 10 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA algorithms

In Figure 13, the mean of the normalized displacement of the total cost for 10 cloudlets is plotted against the number of iterations; the proposed and existing algorithms converge after the 11th set of iterations. HG-GSA shows smoother results and smaller pairwise displacement between iterations.
Fig.14. Mean of normalized Displacement for 15 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA algorithms

For the total cost of computation of 15 cloudlets over the virtual machines, the mean of the normalized displacement shown in Figure 14 also converges after the 11th iteration for the Hybrid Genetic-Gravitational Search Algorithm, the Linear Improved Gravitational Search Algorithm and the Cloudy Gravitational Search Algorithm.
Fig.15. Normalized Displacement Mean for 20 cloudlets in Cloudy GSA, LIGSA-C and HG-GSA
The mean of the normalized displacement for 20 cloudlets for Cloudy GSA, LIGSA-C and HG-GSA converges by the 11th iteration. From this point we can assume that the algorithms behave in a similar manner, and that they will follow the same strategy in larger frameworks as well.
Fig.16. Normalized Displacement Mean for 10 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

In Figure 16, the normalized mean of displacement is compared with the Particle Swarm Optimization algorithm, where PSO converges at the 11th iteration. The remaining algorithms are plotted in Figure 13 for 10 cloudlets.
Fig.17. Normalized Displacement Mean for 15 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA
Fig.18. Normalized Displacement Mean for 20 cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA

The normalized mean of displacement of the total cost for 15 and 20 cloudlets is plotted in Figures 17 and 18, respectively. These algorithms converge around the 13th iteration. Thus, the convergence analysis establishes the credibility of the algorithms.

4.2. Statistical Analysis
The statistical analysis of the proposed HG-GSA approach and the existing scheduling algorithms provides evidence of the efficiency of the proposed algorithm over the others. The mean, standard deviation, minimum and maximum of the total cost of computation are computed for each algorithm for 10, 15 and 20 cloudlets.

Table 6: Statistical analysis of the total cost of the cloudlets for the different algorithms

Parameter | No. of Cloudlets | PSO | Cloudy GSA | LIGSA-C | HG-GSA
Mean | 10 | 149119.782 | 24842.390 | 25258.914 | 19462.060
Mean | 15 | 246493.939 | 36260.437 | 34744.365 | 28638.270
Mean | 20 | 359677.771 | 45663.908 | 44342.724 | 38240.610
Standard Deviation | 10 | 4187.166 | 3618.570 | 2863.552 | 1121.296
Standard Deviation | 15 | 6849.873 | 4286.245 | 3562.211 | 2468.559
Standard Deviation | 20 | 7944.934 | 3956.376 | 4093.485 | 3139.778
Minimum | 10 | 143482.749 | 19760.956 | 18100.890 | 16894.164
Minimum | 15 | 233301.076 | 29040.000 | 28937.135 | 25341.246
Minimum | 20 | 345540.580 | 36958.024 | 35498.519 | 32027.904
Maximum | 10 | 155538.952 | 33288.008 | 29819.639 | 22127.076
Maximum | 15 | 257046.548 | 43246.331 | 40039.803 | 32676.339
Maximum | 20 | 371324.589 | 52452.208 | 51730.622 | 44085.780
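The Table 6 entries can be reproduced from the per-iteration costs. For example, for the HG-GSA column at 10 cloudlets (values from Table 3), the minimum and maximum match Table 6 exactly and the mean agrees to within rounding:

```python
from statistics import mean, stdev

# HG-GSA total cost for 10 cloudlets, one value per iteration setting (Table 3).
hg_gsa_10 = [19307.616, 22127.076, 20917.076, 16894.164, 19307.616, 19237.537,
             18361.303, 19360.000, 19754.698, 19307.616, 18637.275, 20290.968,
             19754.698, 18035.191, 18745.785, 19879.760, 19760.956, 20514.342,
             19585.390]

print(round(mean(hg_gsa_10), 3))      # cf. Table 6 mean for HG-GSA, 10 cloudlets
print(stdev(hg_gsa_10))               # sample standard deviation
print(min(hg_gsa_10), max(hg_gsa_10)) # cf. Table 6 minimum and maximum
```

Whether Table 6 uses the sample or the population standard deviation is not stated, so the `stdev` line above is one possible convention.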
The graphical analysis of the mean and standard deviation for the PSO, Cloudy GSA, LIGSA-C and proposed Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) algorithms is given in Figures 19, 20 and 21, respectively.
Fig19. Mean of Total Cost for cloudlets in Cloudy GSA, LIGSA-C and HG-GSA
Fig 20. Mean of Total Cost for cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA. The mean of the total cost for the different cloudlet counts is depicted in Figure 20, which adds the PSO algorithm to the comparison of Figure 19. Figure 21 shows the standard deviation of the total cost for the different cloudlet counts and confirms HG-GSA as the most cost-effective of the algorithms.
Fig 21. Standard Deviation of Total Cost for cloudlets in PSO, Cloudy GSA, LIGSA-C and HG-GSA.

The result set is represented graphically as the mean and standard deviation versus the number of cloudlets. Thus, by the convergence and statistical analysis, the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA), based on a hybrid genetic algorithm (a combination of uniform and two-point crossover) and the gravitational search algorithm, achieves a minimized cost for the load scheduling problem in the cloud computing environment. The cost function is based on parallel processing of the cloudlets over the virtual machines, taking the maximum total time required by the cloudlets for processing on the virtual machines. HG-GSA reduces the total cost of execution compared to PSO, Cloudy GSA and LIGSA-C in the cloud computing environment and is the more cost-effective algorithm.

5. Conclusion and Future Work

The paper presents a cost optimized load scheduling approach for the cloud computing environment aimed at achieving higher utilization of the resources. A detailed analysis of the meta-heuristic swarm-based scheduling techniques, PSO and GSA, is provided in the literature review. The proposed Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) reduces the total cost of computation in the system while providing maximum utilization of the VMs. This is achieved by calculating the value of the fitness function and the force acting on the particles in the search space. It also combines genetic operators, namely two-point crossover, uniform crossover and mutation, with the gravitational search algorithm to schedule the particles. The proposed HG-GSA supports parallel computation through a new cost evaluation function, a new gravitational constant based on the particle best (pbest) and global best (gbest) values, and best-position storage capabilities for selecting the next particle. HG-GSA achieves greater cost optimization on large sets of cloudlets by storing the best positions and overcoming the limitations of GSA. The tabulated results and the comparison of the proposed HG-GSA with PSO, Cloudy GSA and LIGSA-C over several iteration settings are supported by convergence and statistical analysis. Thus, the proposed cost optimized Hybrid Genetic-Gravitational Search Algorithm for load scheduling in the cloud reduces the total cost of computation drastically and provides higher user satisfaction. Future work aims to further minimize the total cost using new scheduling strategies in the cloud computing environment based on bio-inspired heuristics. It also includes incorporating centre-of-mass-based and diversity-based crossover techniques into the HG-GSA algorithm to further reduce the total cost of computation.

References

[1] Buyya R., Pandey S., Vecchiola C.: “Cloudbus toolkit for market-oriented cloud computing”, CloudCom 09: Proceedings of the 1st International Conference on Cloud Computing, vol. 5931, pp.: 24-44, LNCS, Springer (2009).
[2] Yu J., Buyya R., Ramamohanarao K.: “Workflow Scheduling Algorithms for Grid Computing”, Metaheuristics for Scheduling in Distributed Computing Environments, vol. 146, Studies in Computational Intelligence, pp.: 173-214, Springer (2008).
[3] Khiyaita A., Bakkali El., Zbakh M., Kettani D. E.: “Load Balancing Cloud Computing: State of Art”, IEEE National Days of Network Security and Systems (JNS2), pp.: 106-109, IEEE (2012).
[4] Chaudhary D., Kumar B.: “An analysis of the load scheduling algorithms in the cloud computing environment: A survey”, IEEE 9th International Conference on Industrial and Information Systems (ICIIS), pp.: 1-6, IEEE (2014).
[5] Pacini E., Mateos C., Garino C. G.: “Distributed job scheduling based on Swarm Intelligence: A survey”, Computers and Electrical Engineering, vol. 40, pp.: 252-269, Elsevier (2014).
[6] Chaudhary D., Kumar B.: “Analytical study of load scheduling algorithms in cloud computing”, IEEE International Conference on Parallel, Distributed and Grid Computing (PDGC), pp.: 7-12, IEEE (2014).
[7] Tsai C.W., Rodrigues J.J.P.C.: “Metaheuristic Scheduling for Cloud: A Survey”, IEEE Systems Journal, vol. 8, no. 1, pp.: 279-291, IEEE (2014).
[8] Chaudhary D., Chhillar R.S.: “A New Load Balancing Technique for Virtual Machine Cloud Computing Environment”, International Journal of Computer Applications, vol. 69 (23), pp.: 37-40 (2013).
[9] Kennedy J., Eberhart R.: “Particle swarm optimization”, IEEE International Conference on Neural Networks, vol. 4, pp.: 1942-1948, IEEE (1995).
[10] Braun T.D., Siegel H.J., Beck N.: “A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems”, Journal of Parallel and Distributed Computing, vol. 61, pp.: 810-837, Elsevier (2001).
[11] Pacini E., Mateos C., Garino C. G.: “Distributed job scheduling based on Swarm Intelligence: A survey”, Computers and Electrical Engineering, vol. 40, pp.: 252-269, Elsevier (2013).
[12] Zavala A. E. M., Aguirre A. H., Diharce E. R. V., Rionda S. B.: “Constrained optimisation with an improved particle swarm optimisation algorithm”, International Journal of Intelligent Computing and Cybernetics, vol. 1 (3), pp.: 425-453, Springer (2008).
[13] Pandey S., Buyya R., et al.: “A Particle Swarm Optimization based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments”, 24th IEEE International Conference on Advanced Information Networking and Applications, pp.: 400-407, IEEE (2010).
[14] Badr A., Fahmy: “A proof of convergence for ant algorithms”, Information Sciences, vol. 160, pp.: 267-279, Elsevier (2004).
[15] Kumar D., Raza Z.: “A PSO Based VM Resource Scheduling Model for Cloud Computing”, IEEE International Conference on Computational Intelligence & Communication Technology (CICT), pp.: 213-219, IEEE (2015).
[16] Garg S.
K., Buyya R.: “NetworkCloudSim: Modelling Parallel Applications in Cloud Simulations”, 4th IEEE/ACM International Conference on Utility and Cloud Computing, pp.: 105-113, IEEE CS Press (2011).
[17] Rashedi E., Nezamabadi-pour H., Saryazdi S.: “GSA: A Gravitational Search Algorithm”, Information Sciences, vol. 179, pp.: 2232-2248, Elsevier (2009).
[18] Rashedi E., Nezamabadi-pour H., Saryazdi S.: “Filter modeling using gravitational search algorithm”, Engineering Applications of Artificial Intelligence, vol. 24, pp.: 117-122, Elsevier (2011).
[19] Gupta C., Jain S.: “Multilevel fuzzy partition segmentation of satellite images using GSA”, International Conference on Signal Propagation and Computer Technology (ICSPCT), (2014).
[20] Rashedi E., Zarezadeh A.: “Noise filtering in ultrasound images using Gravitational Search Algorithm”, Iranian Conference on Intelligent Systems (ICIS), (2014).
[21] Yuan X., et al.: “A new approach for unit commitment problem via binary gravitational search algorithm”, Applied Soft Computing, vol. 22, pp.: 249-260, Elsevier (2014).
[22] Khatibinia M., Khosravi S.: “A hybrid approach based on an improved gravitational search algorithm and orthogonal crossover for optimal shape design of concrete gravity dams”, Applied Soft Computing, vol. 16, pp.: 223-233, Elsevier (2014).
[23] Sahu R.K., Panda S., Padhan S.: “A novel hybrid gravitational search and pattern search algorithm for load frequency control of nonlinear power system”, Applied Soft Computing, vol. 29, pp.: 310-327, Elsevier (2015).
[24] Sun G., Zhang A., Jia X., Li X., Ji S., Wang Z.: “DMMOGSA: Diversity-enhanced and memory-based multiobjective gravitational search algorithm”, Information Sciences, vol. 363, pp.: 52-71, Elsevier (2016).
[25] Sun G., Zhang A., Yao Y., Wang Z.: “A novel hybrid algorithm of gravitational search algorithm with genetic algorithm for multi-level thresholding”, Applied Soft Computing, vol. 46, pp.: 703-730, Elsevier (2016).
[26] Keshanchi B., Souri A., Navimipour N. J.: “An improved genetic algorithm for task scheduling in the cloud environments using the priority queues: formal verification, simulation, and statistical testing”, Journal of Systems and Software, vol. 124, pp.: 1-21 (2017).
[27] Hajizadeh R., Navimipour N. J.: “A method for trust evaluation in the cloud environments using a behavior graph and services grouping”, Kybernetes, vol. 46, pp.: 1245-1261 (2017).
[28] Xu M., Tian W., Buyya R.: “A survey on load balancing algorithms for virtual machines placement in cloud computing”, Concurrency and Computation: Practice and Experience, vol. 29, e4123 (2017).
[29] Ahmad S., Layla A.: “A Heuristic Approach for Service Allocation in Cloud Computing”, International Journal of Cloud Applications and Computing (IJCAC), vol. 7, pp.: 60-74 (2017).
[30] Zandevakili H., Rashedi E., Mahani A.: “Gravitational search algorithm with both attractive and repulsive forces”, Soft Computing, https://doi.org/10.1007/s00500-017-2785-2 (2017).
[31] Chaudhary D., Kumar B.: “Cloudy GSA for load scheduling in cloud computing”, Applied Soft Computing, vol. 71, pp.: 861-871, Elsevier (2018), https://doi.org/10.1016/j.asoc.2018.07.046.
[32] Chaudhary D., Kumar B.: “Linear Gravitational Search Optimization Algorithm for Load Scheduling in Cloud Computing (LIGSA-C)”, International Journal of Computer Network and Information Security, vol. 10, no. 4, pp.: 38-47 (2018).
[33] Rashedi E., Rashedi E., Nezamabadi-pour H.: “A comprehensive survey on gravitational search algorithm”, Swarm and Evolutionary Computation, Vol. 41, pp. 141-158, ISSN 2210-6502, Elsevier, (2018), https://doi.org/10.1016/j.swevo.2018.02.018.
Highlights

1. The paper proposes a cost optimized Hybrid Genetic-Gravitational Search Algorithm (HG-GSA) for load scheduling in cloud computing.
2. HG-GSA uses a hybrid genetic crossover approach (based on two-point and uniform crossover) with an updated force calculation whose gravitational constant is based on the pbest (particle best) and gbest (global best) values.
3. HG-GSA reduces the total cost of computation considerably compared to the existing algorithms.
4. The results are computed on a large set of values and compared with the existing algorithms using the CloudSim simulator.
5. The crossover and mutation diagrams, the flowchart and the pseudocode of HG-GSA are elaborated.
6. HG-GSA exploits a larger search space than the existing algorithms.
Declaration of Interest Statement

1. Conflict of Interest: No conflict of interest exists. We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

2. Funding: No funding was received for this work.

3. Intellectual Property: We confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing we confirm that we have followed the regulations of our institutions concerning intellectual property.

4. Authorship: All listed authors meet the ICMJE criteria for authorship. We attest that all authors contributed significantly to the creation of this manuscript. We confirm that the manuscript has been read and approved by all named authors and that the order of authors listed in the manuscript has been approved by all named authors.

5. Contact with the Editorial Office: The Corresponding Author declared on the title page of the manuscript is Divya Chaudhary ([email protected]). This author submitted the manuscript using her account in EVISE and is the sole contact for the editorial process, responsible for communicating with the other authors about progress, submission of revisions and final approval of proofs.

We the undersigned agree with all of the above.

1. Divya Chaudhary (signed: Divya, 12/06/2019)
2. Bijendra Kumar (signed: Bijendra Kumar, 12/06/2019)