Heuristic-based load-balancing algorithm for IaaS cloud
Mainak Adhikari, Tarachand Amgoth
To appear in: Future Generation Computer Systems
DOI: https://doi.org/10.1016/j.future.2017.10.035
Received 22 April 2017; revised 7 October 2017; accepted 22 October 2017.
Please cite this article as: M. Adhikari, T. Amgoth, Heuristic-based load-balancing algorithm for IaaS cloud, Future Generation Computer Systems (2017), https://doi.org/10.1016/j.future.2017.10.035
Heuristic-based load-balancing algorithm for IaaS cloud

Mainak Adhikari and Tarachand Amgoth
Department of Computer Science & Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad
E-mail: [email protected], [email protected]
Abstract: The tremendous growth of virtualization technology in the cloud environment is reflected in the increasing number of tasks that require the services of virtual machines (VMs). Balancing the load among the VMs while minimizing the makespan of the tasks is a challenging research issue. Many algorithms have been proposed to solve this problem; however, they fail to exploit the available information about the resources and tasks, which may lead to improper assignment of tasks to VMs. In this paper, we propose a new load-balancing algorithm for the Infrastructure-as-a-Service (IaaS) cloud. We devise an efficient strategy to configure the servers based on the number of incoming tasks and their sizes, so as to find suitable VMs for assignment and maximize the utilization of computing resources. We test the proposed algorithm through simulation runs and compare the results with existing algorithms using various performance metrics. The comparisons demonstrate that the proposed algorithm outperforms the existing ones.
Keywords: IaaS cloud; virtual machines; server configuration; load balancing; makespan; resource utilization.
1. Introduction
In recent years, cloud computing has emerged as a platform for modern distributed computing that delivers computing resources to users over the Internet. Cloud service providers allocate computing resources (hardware and software) on a leased, pay-per-use basis [1-4]. The cloud offers three types of service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The objective of an IaaS provider is to maximize revenue while providing QoS to the users. Usually, user requests come in the form of virtual machines (VMs). IaaS provides a virtualization platform by creating VMs that help users accomplish their tasks within a reasonable time [5-6]. Therefore, to achieve the desired objectives, the computing resources of the system need to be managed efficiently and the tasks scheduled intelligently to minimize the makespan. Each user application is roughly divided into a set of tasks, which must be scheduled on suitable VMs. Resource management, task scheduling, and load balancing are NP-hard problems; heuristic or approximation algorithms are therefore needed to address them efficiently. Here, our focus is on the task-scheduling and load-balancing problems, whose objective is to minimize the waiting time and completion time of the tasks. An improper task-assignment strategy, however, may unbalance the load among the VMs, i.e., a few VMs become heavily loaded while others stay idle, which increases the completion and waiting times of the tasks. As a result, the overall makespan of the system also increases. Hence, proper load balancing among the VMs is critical to improving the performance of the system resources and maintaining the stability of the environment.
Thus, it becomes imperative to develop an algorithm that improves system performance by balancing the workload among VMs and minimizing the makespan [7-8]. Many algorithms have been proposed [9-12] to solve this problem, such as Round Robin (RR) [9], Weighted Round Robin (WRR) [10], Dynamic Load Balancing (DLB) [11], and Max-Min scheduling [12]. The RR algorithm considers neither resource availability nor the sizes of the tasks; as a result, higher-priority and lengthy tasks end up with higher completion times, and it has no strategy to balance the workload among the VMs. The WRR algorithm considers resource availability but ignores the sizes of the tasks while scheduling them on the VMs. The DLB algorithm considers the workload of the VMs but ignores the lengths of the tasks, which may increase the completion time and makespan of the system. The major difficulty with the Max-Min algorithm is that it assigns the longer tasks to the VMs with relatively high computational power; the early execution of the larger tasks may increase the total response time of the system, and the smaller tasks must wait a long time to execute, which may increase the makespan. In short, all these algorithms fail to exploit the available information about the tasks and VMs. In addition, not only the number of tasks but also their sizes play a key role in task scheduling. On the other hand, server configuration also plays a major role in utilizing the resources efficiently: it determines how many VMs, and of which types, can be hosted on a server at a given instance of time to execute the tasks. A detailed illustration of server configuration is given in Definition 7 of Section 3. In this paper, we propose a new heuristic-based load-balancing algorithm for the IaaS cloud, referred to as HBLBA. The proposed algorithm is divided into two phases, namely server configuration and task-VM mapping. We devise an efficient strategy to configure the servers based on the number of tasks and their sizes to find suitable VMs for assignment; this strategy helps in utilizing the resources efficiently. In the task-VM mapping phase, we adopt a queuing model through which tasks are assigned to the VMs in order to minimize their waiting time and completion time.
We simulate the proposed algorithm and the existing algorithms RR, WRR, DLB, and Max-Min scheduling. To evaluate its performance, we compare the proposed algorithm against the existing ones using various metrics: waiting time (WT), makespan, scheduled length ratio (SLR), and CPU and VM utilization. The main contributions of this work are summarized as follows:
1) We introduce a new system model of the data center and its components that is convenient and suitable for running the proposed algorithm.
2) We review various existing algorithms, especially their merits and demerits.
3) We devise an efficient mechanism to configure the servers based on the number of tasks and their sizes; this mechanism helps in utilizing computing resources efficiently.
4) We develop a new task-VM mapping approach to minimize the makespan by scheduling the waiting tasks efficiently.
5) Finally, we comprehensively validate the proposed algorithm via simulation runs using various performance metrics.
The rest of the paper is organized as follows. Section 2 presents the related work. The notations, definitions, and problem formulation are given in Section 3. The proposed work, along with the system model and pseudo code, is described in Section 4. The performance evaluation of the proposed algorithm is presented in Section 5, and concluding remarks are given in Section 6.
2. Related works
Here, we highlight some recent works related to load balancing and scheduling in the cloud environment. Zhao et al. proposed a heuristic load-balancing algorithm based on Bayes' theorem and clustering [13]; the combination of the two obtains an optimal clustering set of physical hosts, and the algorithm imposes constraints on all physical hosts to achieve a task-deployment approach with a global search capability over the performance function. Xu et al. introduced a load-balancing model for the public cloud based on cloud partitioning [14]; the algorithm uses a switching mechanism to choose different strategies for different situations and applies game theory to load balancing to improve the efficiency of the public cloud environment.
Maguluri et al. developed a load-balancing and scheduling algorithm based on a stochastic model [15]. The algorithm handles non-preemptive jobs: jobs are first routed to a server and queued, and each server then chooses a set of jobs such that it has enough resources to serve all of them simultaneously. The algorithm balances the load of the servers and improves the throughput without assuming the sizes of the jobs. Pham et al. proposed an algorithm to balance the workload among heterogeneous servers and reduce the power consumption of data centers [16]; it addresses the joint consolidation and service-aware load-balancing problem to minimize the operation cost of data centers using a Gibbs sampling method. Yong et al. developed a dynamic load-balancing method for cloud data centers based on SDN [17]. The SDN technology improves flexibility, accomplishing real-time monitoring of the service nodes and their load condition via the OpenFlow protocol; when the load of the system is imbalanced, the controller can allocate resources globally using dynamic scheduling. Babu et al. developed a load-balancing and scheduling algorithm for non-preemptive independent tasks on VMs based on the honeybee algorithm [18]. Its main aim is to balance the load among the VMs to maximize throughput, and to balance the priorities of the tasks such that their waiting times are minimized. Hu et al. developed an optimal data-migration algorithm for diffusive dynamic load balancing using a Lagrange multiplier on the Euclidean norm of the transferred weight [19]; this algorithm effectively minimizes data movement in homogeneous environments, but it does not work in heterogeneous environments. Cao et al. proposed a performance-optimization and power-reduction approach for cloud data centers [20].
Their queuing model groups heterogeneous multi-core servers of different speeds and sizes, and the algorithm aims at optimal power allocation while distributing the load across multiple heterogeneous multi-core processors in the cloud. Basker et al. proposed an enhanced weighted round-robin scheduling algorithm that considers job lengths and resource capabilities in the cloud [21]; it aims to reduce the response time of the jobs by optimally utilizing the VMs through static and dynamic scheduling, given the lengths of the jobs and the resource capabilities. Aldawsari et al. reviewed cloud service brokerage systems, along with the weaknesses and vulnerabilities associated with each of these systems, in the multi-cloud environment [22]; the main purpose of that work is to find the most appropriate data center in terms of energy efficiency, QoS, and SLA with proper security. Baker et al. developed an energy-aware IoT service-composition algorithm for the multi-cloud environment [23], which provides a plan for searching and integrating the least possible number of IoT services to fulfill user requirements. Baker et al. also proposed an autonomic meta-director framework to find the most energy-efficient route to a green data center using a linear-programming approach [24]; the problem is formalized in the situation calculus, and a shortest-path algorithm finds the minimum number of nodes a task traverses to reach the data center. Katsaros et al. developed a cloud framework to monitor the energy consumption of a cloud infrastructure; it calculates the energy efficiency of the system and evaluates the gathered data in order to place virtual machines effectively [25]. Jobs arrive during the run time of the server at varying random intervals and under various load conditions.
However, none of the above works taps the potential information about the VMs and tasks that is useful for assigning tasks to suitable VMs. The distinguishing features of the proposed algorithm are an efficient mechanism to find the required number and capacities of VM instances to serve the incoming tasks, and a load-balancing mechanism among the VMs based on the number and sizes of the tasks.
3. Notations, definitions and problem formulation Here, we first present notations followed by definitions which are used to describe and illustrate the proposed algorithm. Next, we present the problem formulation to be addressed by the proposed algorithm. Notation and their meaning are given in Table 1.
Table 1: Notations and their meaning

Notation     Meaning
m            Number of VMs
n            Number of host servers
y            Number of tasks arrived at a given instance of time
NUM(PEi)     Number of cores (processing elements) assigned to the ith VM
PEMIPS       Size of each core (in MIPS)
Gi           Number of tasks waiting in the queue of the ith VM, including the task under execution
CTik         Completion time of the kth task assigned to the ith VM
SZk          Size of the kth task (in MI)
Sj           Physical queue length of the jth HS
Si           Logical queue length of the ith VM
Definition 1. (Capacity of a VM): Let V = {V1, V2, V3, …, Vm} be the set of VMs, where Vi and "the ith VM" are used interchangeably. The capacity of the ith VM, denoted CPi, is the product of the number of cores assigned to it and the size of each core (in MIPS):

    CPi = NUM(PEi) × PEMIPS                                                  (1)

For example, if 20 cores are assigned to the ith VM and the size of each core is 1000 MIPS, the capacity of the VM is 20,000 MIPS.

Definition 2. (Capacity of the VMs): The capacity CPsum of the VMs is the sum of the capacities of all the VMs present in an HS:

    CPsum = Σ_{i=1..m} CPi                                                   (2)

The main constraint is that the total capacity of the VMs must not exceed the capacity of the HS, i.e., CPsum ≤ Bj, 1 ≤ j ≤ n.

Definition 3. (Workload of a VM): The workload of the ith VM at time t is

    WLi(t) = Pi / CPi                                                        (3)

where Pi = Σ_{k=1..Gi} SZk.
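Definitions 1-3 can be sketched in a few lines of Python. This is an illustrative sketch (ours, not the authors' implementation); the function names are ours, and the numbers come from the example in Definition 1.

```python
# Illustrative sketch of Definitions 1-3:
# Eq. (1) CP_i = NUM(PE_i) * PE_MIPS, Eq. (2) CP_sum = sum_i CP_i, and
# Eq. (3) WL_i(t) = P_i / CP_i with P_i the total size (MI) of queued tasks.

def vm_capacity(num_cores, core_mips):
    # Eq. (1): capacity of a VM in MIPS
    return num_cores * core_mips

def total_capacity(vms):
    # Eq. (2): combined capacity of the VMs hosted on one HS;
    # vms is a list of (num_cores, core_mips) pairs
    return sum(vm_capacity(c, m) for c, m in vms)

def workload(queued_task_sizes_mi, capacity_mips):
    # Eq. (3): workload of a VM at time t
    return sum(queued_task_sizes_mi) / capacity_mips

# Definition 1's example: 20 cores of 1000 MIPS each
print(vm_capacity(20, 1000))                     # 20000
# An E (20-core) and an L (15-core) VM together: 35,000 MIPS <= B_j
print(total_capacity([(20, 1000), (15, 1000)]))  # 35000
```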
Here, Pi is the sum of the sizes of the tasks waiting in the queue of the ith VM, including the task under execution.

Definition 4. (Execution time of a task): The execution time of the kth task on the ith VM is the ratio of the size of the task (expressed in million instructions, MI) to the capacity of the ith VM:

    ETik = SZk / CPi                                                         (4)

Definition 5. (Waiting time of a task): The waiting time of the kth task is the completion time of the (k−1)th task waiting in the queue of the ith VM:

    WTik = CTi(k−1)                                                          (5)
Definition 6. (Makespan of tasks): The makespan is the maximum completion time of the tasks. The makespan of the tasks at the jth HS is defined as

    MSj = Max{ETik + WTik}, ∀k ≤ y, ∀i ≤ m                                   (6)

Definition 7. (Feasible server configuration): Consider a data center consisting of n host servers (HSs), each of which may host multiple virtual machines (VMs) according to its capacity. Given an HS, an M-dimensional vector N = (N1, N2, …, NM) is said to be a feasible configuration only if the HS can simultaneously host N1 type-1 VMs, N2 type-2 VMs, …, and NM type-M VMs. For example, consider an HS with 50 cores, each of capacity 1000 MIPS, that can host only four distinct types of VMs: Extra-large (20 cores), Large (15 cores), Medium (10 cores), and Low (5 cores). First, N = (1, 2, 0, 0) and N = (0, 2, 2, 0) are two feasible VM configurations on the HS, where N1 is the number of extra-large VMs, N2 the number of large VMs, N3 the number of medium VMs, and N4 the number of small VMs. Second, N = (2, 1, 0, 0) is not a feasible configuration because the server cannot host two extra-large (40 cores) and one large (15 cores) VM at a time, i.e., (2×20 + 1×15) > 50 (the capacity of the server). Third, N = (0, 2, 2, 1) is also not feasible, i.e., (2×15 + 2×10 + 1×5) > 50. Let Bj denote the maximum capacity (in MIPS) of the jth host server. Equivalently, the configuration of the jth server is feasible iff

    Σ_{i=1..M} Ni × CPi ≤ Bj

Finally, the problem can be mathematically formulated as the optimization problem

    Minimize MSj and maximize Ui(t)   (CPU utilization; see Eq. (7))

subject to

    i)  CPsum ≤ Bj, 1 ≤ j ≤ n
    ii) Σ_{i=1..M} Ni × CPi ≤ Bj
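The feasibility test of Definition 7 reduces to a single dot-product check. The sketch below is our illustration (not the paper's code); it verifies the example configurations above against the 50,000-MIPS server, with the four VM-type capacities taken from Definition 7.

```python
# Sketch of Definition 7's feasibility test: a configuration vector
# N = (N1, ..., NM) is feasible iff sum_i N_i * CP_i <= B_j.

VM_TYPE_MIPS = [20000, 15000, 10000, 5000]   # Extra-large, Large, Medium, Low

def is_feasible(config, server_mips, type_mips=VM_TYPE_MIPS):
    # Feasible iff the combined capacity of the hosted VMs fits in the server
    return sum(n * cp for n, cp in zip(config, type_mips)) <= server_mips

print(is_feasible((1, 2, 0, 0), 50000))   # True  (20k + 2*15k = 50k)
print(is_feasible((2, 1, 0, 0), 50000))   # False (2*20k + 15k = 55k)
```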
4. Proposed work
In this section, we first present the system model within which the proposed algorithm runs. We then describe the proposed algorithm in detail, followed by an illustration.
4.1 System model
We consider a data center with a set of identical host servers (HSs), each of which can accommodate multiple heterogeneous VMs simultaneously. The data center model is shown in Fig. 1. The job of the admission controller is to decide whether a set of tasks can be assigned to a server; this decision is based on the availability of computing resources in the host server. The main activity of the VM scheduler is to find the best feasible VM configuration for the set of tasks waiting in the queue of the host server. Each server has a bounded queue to store the tasks coming from the customers via the data center broker. The queue lengths of the servers are fixed, whereas the queue lengths of the VMs are dynamic. The task scheduler manages the queues and the tasks waiting in them. The VMM is responsible for holding detailed information about the running VMs and servers, such as makespan, completion time, waiting time, and VM and CPU utilization.
[Figure: the admission controller, task scheduler, VM scheduler, and virtual machine manager sit above host servers 1 to n; each host server runs several VMs (VM1, VM2, …, VMm), and each VM runs an application (APP) on its own OS.]

Fig. 1. Data Centre Model
4.2 Heuristic-based load-balancing algorithm The proposed algorithm HBLBA is divided into two stages, namely, server configuration and task-VM mapping. They are discussed as follows-
4.2.1 Server configuration
The objective of the server configuration is to determine how many VM instances, and of which types, should be hosted to serve the incoming tasks, in order to minimize the makespan and utilize the resources. Tasks arrive in batch mode; the VMM first arranges the tasks in non-increasing order of their sizes and then dispatches and distributes them to the HSs based on their current workload. The workload of an HS is calculated from the number of tasks already waiting in its queue. Each HS has a bounded physical queue to store incoming tasks, based on the M/M/1/L/∞ queuing model. Let us assume that a set of tasks T = (T1, T2, T3, …, Tk) with sizes (SZ1, SZ2, SZ3, …, SZk), where SZ1 ≥ SZ2 ≥ SZ3 ≥ … ≥ SZk, is waiting in the queue of the server HS. On the other hand, assume that the server HS can host only M distinct types of VM instances, I1, I2, I3, …, IM, ordered from highest to lowest capacity. The server is configured with the best feasible VM configuration by using the following rules.
Let us consider task T1 for assignment.
Rule 1: The server is idle: create a VM of type I1 and assign task T1 to it.
Rule 2: The server is not idle but resources are available to create a VM of type I1: create a VM of type I1 and assign task T1 to it.
Rule 3: The server is not idle and resources are not available to create VM instances of types I1, I2, I3, …, Ih (h ≤ M), but resources are available for VM instances of types Ih+1, Ih+2, Ih+3, …, IM. Then calculate the expected execution time (ET) of task T1 on the VM instances of types I1, I2, I3, …, Ih using Eq. (4), and the waiting time (WT) of task T1 in the queues of the VM instances of types I1, I2, I3, …, Ih using Eq. (5). Also calculate the expected completion time of task T1 on the VM instances of types Ih+1, Ih+2, Ih+3, …, IM. Let V1, V2, V3, …, Vh be the VMs of types I1, I2, I3, …, Ih, respectively, and Vh+1, Vh+2, Vh+3, …, VM be the VMs of types Ih+1, Ih+2, Ih+3, …, IM, respectively. Let ET11, ET21, ET31, …, ETh1 be the expected execution times of task T1 on the virtual machines V1, V2, V3, …, Vh, respectively, and WT11, WT21, WT31, …, WTh1 be the waiting times of task T1 in the queues of V1, V2, V3, …, Vh, respectively. Similarly, let ET(h+1)1, ET(h+2)1, ET(h+3)1, …, ETM1 be the expected execution times of task T1 on the virtual machines Vh+1, Vh+2, Vh+3, …, VM, respectively. Compute
    X = Min{ET11 + WT11, ET21 + WT21, ET31 + WT31, …, ETh1 + WTh1}
    Y = Min{ET(h+1)1, ET(h+2)1, ET(h+3)1, …, ETM1}
If X < Y, then there exists a virtual machine Vi, 1 ≤ i ≤ h, with minimum completion time (CT), and task T1 waits in the queue of VM Vi. Otherwise, create a VM instance of type Ih+1 and assign task T1 to it.
Rule 4: The server is not idle and there is no provision to create any VM instance. Then there exists a VM Vi with minimum CT for task T1, and the task waits in the queue of VM Vi:
    Vi ∈ Min{ET11 + WT11, ET21 + WT21, ET31 + WT31, …, ETM1 + WTM1}
Rule 5: If no tasks are available, the VM instances are deleted.

4.2.2 Task-VM mapping
After the server configuration, the VMM fixes the maximum queue length of each VM instance based on its performance. The VM instances use logical queues to store the tasks assigned to them, based on the M/M/N/P/∞ queuing model. The queue length of each HS is fixed, whereas the queue lengths of the VM instances can change dynamically.
Note that the total length of the logical queues of the m VM instances must not exceed the length of the physical queue of the server, i.e.,

    Σ_{i=1..m} Si ≤ Sj, 1 ≤ j ≤ n

The tasks are distributed among the VM instances of an HS in such a way that the overall utilization of the resources is improved and the makespan is minimized. The CPU utilization of a VM instance at time t is defined as the ratio of its busy time Hi(t) (the amount of time the VM instance was active during the observation period) to its observation time Oi(t) (the amount of time the VM instance is monitored for its activity):

    Ui(t) = Hi(t) / Oi(t)                                                    (7)

where Ui(t) is the CPU utilization of the ith VM instance at time t. The queue size of the ith VM is derived from

    Si = Ui(t) / (1 − Ui(t)), 1 ≤ i ≤ m                                      (8)
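As a sketch of Eqs. (7) and (8) (our illustration; the function names are ours), the logical queue length grows sharply as utilization approaches 1:

```python
# Sketch of Eqs. (7)-(8), assuming busy and observation times are measured
# in the same units.

def cpu_utilization(busy_time, observation_time):
    # Eq. (7): U_i(t) = H_i(t) / O_i(t)
    return busy_time / observation_time

def logical_queue_length(utilization):
    # Eq. (8): S_i = U_i / (1 - U_i); diverges as utilization approaches 1,
    # so a heavily loaded VM is given a longer logical queue
    return utilization / (1.0 - utilization)

u = cpu_utilization(8.0, 10.0)      # 0.8
print(logical_queue_length(u))      # ~4.0 (0.8 / 0.2)
```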
Finally, all the tasks waiting in the queue are assigned to the VMs under the FCFS policy. The VMM of the HS estimates the required capacity of each task, calculates the execution time and waiting time of the tasks using Eqs. (4) and (5), respectively, and computes the makespan of the tasks arrived at time t using Eq. (6). The pseudo code of the proposed HBLBA is shown in Fig. 2.

Algorithm: HBLBA
Input: Tasks with their sizes
Output: The best feasible server configuration
1:  Begin
2:  Assign the tasks to the HSs based on their workload, i.e., tasks wait in the queues of their corresponding servers
3:  Sort the tasks in descending order of their sizes
4:  For each task Tk waiting in the queue of server j do
5:    If (CPsum ≤ Bj and there is provision to create the highest-capacity VM instance) then
6:      Create the highest-capacity VM instance and assign task Tk to it
7:    Else if (CTik ≤ CTlk, where i ≠ l; i, l ≤ m; CPi ≥ CPl) then
8:      Task Tk waits for the type-i VM instance in the VM queue
9:    Else
10:     A new VM of type l is configured and task Tk is assigned to that VM instance
11:   End if
12: End for
13: Perform task-VM mapping
14: Compute the execution time, waiting time, completion time and makespan of the tasks
15: Delete the VM instances
16: Repeat Steps 1 to 13 for a new batch of tasks
17: End

Fig. 2. Pseudo code of HBLBA
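The decision at the heart of Rule 3 — wait for a busy high-capacity VM or create a smaller one — can be sketched as follows. This is a hypothetical helper of ours, not the paper's code; completion times follow Eqs. (4)-(5).

```python
# Sketch of Rule 3: when only the smaller VM types I_{h+1}..I_M can still be
# created, compare the best completion time on an existing large VM
# (execution + waiting) against immediate execution on a new small VM,
# and take whichever is smaller.

def rule3_choice(task_mi, running_vms, creatable_mips):
    # running_vms: list of (capacity_mips, time_at_which_its_queue_empties)
    # creatable_mips: capacities of the VM types that can still be created
    X = min(task_mi / cp + wt for cp, wt in running_vms)   # wait for a big VM
    Y = min(task_mi / cp for cp in creatable_mips)         # new small VM
    return ("wait", X) if X < Y else ("create", Y)

# T3 from the illustration in Section 4.3: 80,000 MI; two busy type-E VMs
# (20,000 MIPS, free at t=10 and t=5); type-M (10k) and type-O (5k) creatable.
print(rule3_choice(80000, [(20000, 10), (20000, 5)], [10000, 5000]))
# -> ('create', 8.0): a new type-M VM finishes at 8 < 9 (waiting behind V2)
```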
4.3 Algorithm illustration
Let us consider an example to illustrate the proposed algorithm. For the sake of simplicity, a small number of tasks is considered. The host servers in the cloud data center can host any of four distinct VM instance types (E, L, M, and O), whose sizes in terms of cores are 20, 15, 10, and 5, respectively. The capacity of each core is 1000 MIPS. Six tasks T1, T2, T3, T4, T5, and T6 with sizes (in MI) 200,000, 100,000, 80,000, 50,000, 30,000, and 12,000, respectively, arrive in a batch at server HS, i.e., they wait in the queue of the server. The capacity of the server is 50 cores, or 50,000 MIPS. Feasible VM configurations of the HS include (2, 0, 1, 0), (0, 0, 5, 0), (0, 0, 0, 10), (2, 0, 0, 2), (1, 1, 1, 1), (1, 0, 2, 2), and so on. The best feasible VM configuration for the given set of tasks is obtained as follows.
1) Assignment of task T1: Initially, the server is idle, so Rule 1 applies. A new VM instance V1 of type E is created and T1 is assigned to it. The expected completion time of task T1 is 10, i.e., 200,000/20,000 = 10, as shown in Fig. 3(a).
2) Assignment of task T2: The server is not idle, but resources are available to create another VM instance of type E, so Rule 2 applies. A new VM instance V2 of type E is created and T2 is assigned to it. The expected completion time of T2 is 5, as shown in Fig. 3(b).
3) Assignment of task T3: The server is not idle and there are no resources to create a VM instance of type E or type L, but there are enough resources for a type-M or type-O instance, so Rule 3 applies. The minimum estimated waiting time plus execution time on the type-E VM instances is Min(V1, V2) + 4 = 9. The minimum estimated execution time on a new type-M or type-O instance is Min(8, 16) = 8. A new VM instance V3 of type M is therefore created and T3 is assigned to it. The expected completion time of T3 is 8, as shown in Fig. 3(c).
4) Assignment of task T4: The server is not idle and there are no resources to create a VM instance of any type, so Rule 4 applies. The minimum estimated waiting time plus execution time on the type-E VM instances is Min(V1: 10, V2: 5) + 2.5 = 7.5; on the type-M instance it is 8 + 5 = 13. Task T4 therefore waits for VM V2. Its expected completion time on V2 is 7.5, smaller than on any instance currently running, as shown in Fig. 3(d).
5) Assignment of task T5: Rule 4 applies. Task T5 waits for VM V2; its expected completion time is 9, as shown in Fig. 3(d).
6) Assignment of task T6: Rule 4 applies. Task T6 waits for VM V2; its expected completion time is 9.6, as shown in Fig. 3(d).
The best feasible VM configuration for the above example is (2, 0, 1, 0). The waiting times, completion times, and makespan for the example are shown in Table 2.
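The completion times of the illustration can be checked with a few lines of arithmetic. The sketch below is ours, assuming the assignment decided by Rules 1-4: T1 and T2 on two type-E VMs, T3 on a type-M VM, and T4-T6 queued FCFS behind T2 on V2.

```python
# Reproduces the worked example of Section 4.3.
# Completion time = waiting time + execution time (Eqs. 4-6).

E, M = 20000, 10000                                    # capacities in MIPS
sizes = [200000, 100000, 80000, 50000, 30000, 12000]   # task sizes in MI

ct1 = sizes[0] / E         # T1 on V1: 10.0
ct2 = sizes[1] / E         # T2 on V2: 5.0
ct3 = sizes[2] / M         # T3 on V3: 8.0
ct4 = ct2 + sizes[3] / E   # T4 waits for T2: 5 + 2.5 = 7.5
ct5 = ct4 + sizes[4] / E   # T5 waits for T4: 7.5 + 1.5 = 9.0
ct6 = ct5 + sizes[5] / E   # T6 waits for T5: 9.0 + 0.6 = 9.6

makespan = max(ct1, ct2, ct3, ct4, ct5, ct6)
print(makespan)            # 10.0, as in Table 2
```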
[Figure showing the VM queues for steps (a)-(d) of the illustration.]

Fig. 3. (a) Task T1 is assigned to the type-E VM instance V1; its CT is 10. (b) Task T2 is assigned to the type-E VM instance V2; its CT is 5. (c) Task T3 is assigned to the type-M VM instance V3; its CT is 8. (d) Waiting status of tasks T4, T5, and T6 at V2; their expected CTs are 7.5, 9, and 9.6, respectively.
Table 2. Completion time and makespan of the tasks

Task id   Arrival time   Waiting time   Execution time   Completion time
1         0              0              10               10
2         0              0              5                5
3         0              0              8                8
4         0              5              2.5              7.5
5         0              7.5            1.5              9.0
6         0              9.0            0.6              9.6

Makespan = 10
5. Simulation results
The proposed algorithm HBLBA is a heuristic algorithm; therefore, simulation is the best way to measure its performance. In this section, we present the simulation results of the proposed algorithm and its comparison with the existing algorithms RR [9], WRR [10], DLB [11], and Max-Min scheduling [12]. In our experiments, we considered four sets of tasks, i.e., 50, 100, 150, and 200 tasks, and four distinct VM instance types: E (20 cores), L (15 cores), M (10 cores), and O (5 cores). The number of servers considered for simulation is four, and their capacities range from 50,000 MIPS to 100,000 MIPS. The simulation programs are written in Dev C++ and Matlab. To evaluate the performance of the proposed algorithm, we use various performance metrics: makespan (Eq. 6), waiting time (Eq. 5), scheduled length ratio [26], and VM and CPU utilization (Eq. 7).
5.1. Performance comparison
Here, we compare the proposed algorithm with the four existing algorithms in Figs. 4 to 8 in terms of makespan, waiting time, scheduled length ratio (SLR), VM utilization, and CPU utilization, respectively, over different synthetic datasets.
Makespan: The makespan is defined as the time difference between the start and end of a sequence of jobs or tasks (refer to Eq. 6). In our experiments, we calculated the makespan for the different sets of tasks after executing them on the VM instances created by the proposed server configuration. Figs. 4(a)-4(d) show the performance of the algorithms for 5, 10, 15, and 20 distinct VM types. From Figs. 4(a)-4(d), we observe that the proposed algorithm performs better than the existing algorithms (RR, WRR, DLB, and Max-Min scheduling) in terms of makespan. The justification for the superior performance of the proposed algorithm is given at the end of this section.
Fig. 4. Makespan for (a) 5 VMs, (b) 10 VMs, (c) 15 VMs, (d) 20 VMs
Waiting time: The waiting time of a task is defined as the time the task waits in the queue before getting a response from the VM (refer to Eq. 5); in other words, the difference between the arrival time of the task and its execution start time. In our simulation, we computed the waiting time for the different sets of tasks after executing them on the VM instances created by the proposed server configuration and task-VM mapping. Figs. 5(a)-5(d) show the performance of the algorithms for 5, 10, 15, and 20 distinct VM types. From Figs. 5(a)-5(d), we see that the proposed algorithm HBLBA obtains a lower waiting time than the existing algorithms.
Scheduled length ratio: The performance of tasks and VMs on a host server is measured by the scheduling length (makespan) of its output schedule [26]. A large set of tasks with different properties is used to normalize the scheduling length to a lower bound, which is called the schedule length ratio. In our simulation runs, we computed the SLR for the different sets of tasks after executing them on the VM instances created by the proposed server configuration and task-VM mapping. Figs. 6(a)-6(d) show the performance of the algorithms for 5, 10, 15, and 20 distinct VM types. From Figs. 6(a)-6(d), we see that the proposed algorithm HBLBA performs better than the existing algorithms in terms of SLR.
Fig. 5. Average Waiting Time: (a) 5 VMs, (b) 10 VMs, (c) 15 VMs, (d) 20 VMs
Fig. 6. Average SLR: (a) 5 VMs, (b) 10 VMs, (c) 15 VMs, (d) 20 VMs
VM utilization: This parameter indicates how many times VMs are rescheduled to execute different tasks after their creation. In other words, rescheduling VMs may lead to minimizing the number of VMs. Higher VM utilization also results in saving computing resources, which can be utilized for executing a few more tasks. In our simulation runs, we also derived the VM utilization for different sets of tasks after executing them on the VM instances created by the proposed server configuration and task-VM mapping. Fig. 7(a)-7(d) show the performance of the algorithms for 5, 10, 15 and 20 distinct VM types. From Fig. 7(a)-7(d), we see that the proposed algorithm HBLBA achieves higher VM utilization as compared to the existing algorithms.
CPU utilization: This parameter indicates the maximum CPU utilization irrespective of any VM instance while executing the tasks. It also depends on the maximum number of active VMs. In our simulation runs, we also derived the CPU utilization for different sets of tasks after executing them on the VM instances created by the proposed server configuration and task-VM mapping. Fig. 8(a)-8(d) show the performance of the algorithms for 5, 10, 15 and 20 distinct VM types. From Fig. 8(a)-8(d), we see that the proposed algorithm HBLBA achieves higher CPU utilization as compared to the existing algorithms.
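The two utilization measures can be sketched with simple proxies. This is an illustrative assumption on our part, not the paper's exact formulas: we take VM utilization as the number of tasks served per created VM (higher means VMs are reused rather than newly created), and CPU utilization as busy time over elapsed wall-clock time across the active VMs.

```python
# Hypothetical proxies for the utilization metrics discussed above.
def vm_utilization(tasks_executed, vms_created):
    """Tasks served per created VM; rescheduling existing VMs raises this."""
    return tasks_executed / vms_created

def cpu_utilization(busy_times, elapsed_time, num_active_vms):
    """Fraction of available CPU time spent busy.
    busy_times: per-VM total busy duration within the observation window."""
    return sum(busy_times) / (elapsed_time * num_active_vms)

print(vm_utilization(tasks_executed=40, vms_created=10))  # 4.0
print(cpu_utilization([8.0, 6.0, 9.0], elapsed_time=10.0, num_active_vms=3))  # ≈ 0.767
```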
Fig. 7. Average VM Utilization for (a) 5 VMs, (b) 10 VMs, (c) 15 VMs, (d) 20 VMs
Fig. 8. Average CPU Utilization for (a) 5 VMs, (b) 10 VMs, (c) 15 VMs, (d) 20 VMs
The reasons for the superior performance of the proposed algorithm HBLBA are summarized as follows:
a) We have developed a new mechanism to find the best feasible server configuration to host multiple VMs. The algorithm utilizes the system resources efficiently, which maximizes the VM utilization as well as the CPU utilization of the host server.
b) An intelligent decision strategy decides whether a task needs to be assigned to a low-capacity VM by creating it or needs to wait for a currently running high-capacity VM.
c) We have also developed a new task-VM mapping approach to schedule the waiting tasks efficiently.
d) The existing RR algorithm assigns the tasks to the VMs one by one in FCFS manner without considering the current workload and capacity of the VMs. The RR algorithm also does not consider resource availability to create VMs and the size of the tasks. As a result, the tasks end up with higher completion time, and it provides maximum makespan.
e) The existing WRR algorithm considers the resource capabilities of the VMs and assigns the maximum number of tasks to the higher-capacity VMs based on the weights assigned to the VMs. However, it fails to consider the size of the tasks when selecting the appropriate VM. This may cause higher completion time of the tasks and provide maximum makespan.
f) The existing DLB algorithm considers the workload of the VMs without considering the capacity of the VMs and the length of the tasks. This may increase the completion time and makespan of the tasks.
g) In the Max-Min scheduling algorithm, the task with the overall maximum expected execution time (largest task) is assigned to the VM that has the minimum overall completion time (low-capacity VM instance). As a result, the early execution of the larger tasks might increase the total response time of the system, and the smaller tasks need to wait for a long time to execute, which may increase the makespan of the system.
6. Conclusion
In this work, we have proposed a new heuristic-based scheduling and load-balancing algorithm for IaaS cloud, referred to as HBLBA. The proposed algorithm is divided into two phases, namely server configuration and task-VM mapping. We have developed a new and efficient strategy to find the best feasible VM configurations. The outcome of the server configuration phase is to host the required number of VMs and their types on the server. It also incorporates an intelligent decision strategy to decide whether a task needs to be assigned to a low-capacity VM by creating it or needs to wait for a currently running high-capacity VM. This helps in minimizing the completion time of the tasks, which finally leads to minimization of the makespan. Moreover, this strategy also helps in utilizing the resources efficiently. In the task-VM mapping, we have adopted a queuing model through which tasks are scheduled on the VMs to minimize the waiting time and completion time of the tasks. We have also tested the proposed and existing algorithms using various performance metrics.
Through the performance results, we have shown that the proposed algorithm performs better than the existing algorithms in terms of makespan, waiting time, SLR, and resource utilization. In our future work, we will try to incorporate deadlines for the tasks and develop a better scheduling and load-balancing algorithm to minimize the number of service level agreement (SLA) violations and enhance the other performance parameters, and also extend the work to workflow scheduling.
References
[1] A. Singh, D. Juneja, M. Malhotra, "A novel agent-based autonomous and service composition framework for cost optimization of resource provisioning in cloud computing", Journal of King Saud University - Computer and Information Sciences, vol. 29, pp. 19-28, 2017.
[2] S. Suresh, S. Sakthivel, "A novel performance constrained power management framework for cloud computing using an adaptive node scaling approach", Computers & Electrical Engineering, vol. 50, pp. 30-44, 2017.
[3] T. Chatterjee, V. K. Ojha, M. Adhikari, S. Banerjee, U. Biswas, V. Snasel, "Design and Implementation of a New Datacenter Broker Policy to Improve the QoS of a Cloud", Proceedings of ICBIA 2014, Advances in Intelligent Systems and Computing, vol. 303, pp. 281-290, 2014.
[4] S. Banerjee, M. Adhikari, S. Kar, U. Biswas, "Development and Analysis of a New Cloudlet Allocation Strategy for QoS Improvement in Cloud", Arabian Journal for Science and Engineering, vol. 40, pp. 1409-1425, 2014.
[5] S. K. Garg, A. N. Toosi, S. K. Gopalaiyengar, R. Buyya, "SLA-based virtual machine management for heterogeneous workloads in a cloud data center", Journal of Network and Computer Applications, vol. 45, pp. 108-120, 2014.
[6] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility", Future Generation Computer Systems, vol. 25, pp. 599-616, 2009.
[7] N. A. Singh, M. Hemalatha, "An Approach on Semi Distributed Load Balancing Algorithm for Cloud Computing System", International Journal of Computer Applications, vol. 56, pp. 1-4, 2012.
[8] A. M. Alakeel, "A Guide to Dynamic Load Balancing in Distributed Computer Systems", International Journal of Computer Science and Network Security, vol. 10, pp. 153-160, 2010.
[9] R. N. Calheiros, R. Ranjan, C. A. F. De Rose, R. Buyya, "CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services", Technical Report GRIDS-TR-2009-1, Grid Computing and Distributed Systems Laboratory, The University of Melbourne, Australia, 2009.
[10] R. Basker, V. R. Uthariaraj, D. C. Devi, "An enhanced scheduling in weighted round robin for the cloud infrastructure services", International Journal of Recent Advance in Engineering & Technology, vol. 2, pp. 81-86, 2014.
[11] O. M. Elzeki, M. Z. Reshad, M. A. Cloud, "Improved Max-Min Algorithm in Cloud Computing", International Journal of Computer Applications, vol. 50, pp. 22-27, 2012.
[12] B. Yagoubi, Y. Slimani, "Dynamic load balancing strategy for grid computing", Transactions on Engineering, Computing and Technology, vol. 13, pp. 260-265, 2006.
[13] J. Zhao, K. Yang, X. Wei, Y. Ding, L. Hu, G. Xu, "A Heuristic Clustering-Based Task Deployment Approach for Load Balancing Using Bayes Theorem in Cloud Environment", IEEE Transactions on Parallel and Distributed Systems, vol. 27, pp. 305-316, 2015.
[14] G. Xu, J. Pang, X. Fu, "A load balancing model based on cloud partitioning for the public cloud", Tsinghua Science and Technology, vol. 18, pp. 34-39, 2013.
[15] S. T. Maguluri, R. Srikant, "Scheduling Jobs With Unknown Duration in Clouds", IEEE/ACM Transactions on Networking, vol. 22, pp. 1938-1951, 2013.
[16] C. Pham, N. H. Tran, C. T. Do, E.-N. Huh, C. S. Hong, "Joint Consolidation and Service-Aware Load Balancing for Datacenters", IEEE Communications Letters, vol. 20, pp. 292-295, 2016.
[17] W. Yong, T. Xiaoling, H. Qian, K. Yuwen, "A dynamic load balancing method of cloud-center based on SDN", China Communications, vol. 13, pp. 130-137, 2016.
[18] L. D. Dhinesh Babu, P. Venkata Krishna, "Honey bee behavior inspired load balancing of tasks in cloud computing environments", Applied Soft Computing, vol. 13, pp. 2292-2303, 2013.
[19] Y. Hu, R. Blake, D. Emerson, "An optimal migration algorithm for dynamic load balancing", Concurrency and Computation: Practice and Experience, vol. 10, pp. 467-483, 1998.
[20] J. Cao, K. Li, I. Stojmenovic, "Optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers", IEEE Transactions on Computers, vol. 63, pp. 45-58, 2014.
[21] D. C. Devi, V. R. Uthariaraj, "Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks", The Scientific World Journal, vol. 2016, pp. 1-14, 2016.
[22] B. Aldawsari, T. Baker, D. England, "Trusted Energy-Efficient Cloud-Based Services Brokerage Platform", International Journal of Intelligent Computing Research, vol. 6, pp. 630-639, 2015.
[23] T. Baker, Y. Ngoko, R. Tolosana-Calasanz, O. F. Rana, M. Randles, "Energy-Efficient Cloud Computing Environment via Autonomic Meta-director Framework", Proceedings of the 2013 Sixth International Conference on Developments in eSystems Engineering (DeSE), pp. 198-203, 2015.
[24] T. Baker, M. Asim, H. Tawfik, B. Aldawsari, R. Buyya, "An energy-aware service composition algorithm for multiple cloud-based IoT applications", Journal of Network and Computer Applications, vol. 89, pp. 96-108, 2017.
[25] G. Katsaros, J. Subirats, J. O. Fitó, J. Guitart, P. Gilet, D. Espling, "A service framework for energy-aware monitoring and VM management in Clouds", Future Generation Computer Systems, vol. 29, pp. 2077-2091, 2013.
[26] H. Topcuoglu, S. Hariri, M. Wu, "Performance-effective and low-complexity task scheduling for heterogeneous computing", IEEE Transactions on Parallel and Distributed Systems, vol. 13, pp. 260-274, 2002.
Mr. Mainak Adhikari has been pursuing his Ph.D. in Cloud Computing at IIT(ISM) Dhanbad since 2015. He obtained his M.Tech. from Kalyani University in 2013 and earned his B.E. degree from West Bengal University of Technology in 2011. He is presently working as an Assistant Professor in the Department of Computer Science and Engineering, Budge Budge Institute of Technology, Kolkata, West Bengal. His areas of research include Cloud Computing, Distributed Computing and Machine Learning. He has contributed numerous research articles to various national and international journals and conferences.
Dr. Tarachand Amgoth received his B.Tech. in Computer Science and Engineering from JNTU, Hyderabad and M.Tech. in Computer Science and Engineering from NIT, Rourkela in 2002 and 2006, respectively, and his Ph.D. from Indian Institute of Technology (Indian School of Mines), Dhanbad in 2015. Presently, he is working as an Assistant Professor in the Department of Computer Science and Engineering, Indian School of Mines, Dhanbad. His current research interests include wireless sensor networks and cloud computing.
Highlights:
1. The objective of the proposed algorithm HBLBA is to minimize the makespan and utilize the resources efficiently.
2. We devise an efficient mechanism to configure the servers based on the number of tasks and their sizes. This mechanism helps in utilizing computing resources efficiently.
3. An intelligent decision strategy decides whether a task needs to be assigned to a low-capacity VM by creating it or needs to wait for a currently running high-capacity VM.
4. We develop a new task-VM mapping approach to minimize the makespan by scheduling the waiting tasks efficiently.
5. Finally, a comprehensive validation of the proposed algorithm via simulation runs using various performance metrics such as makespan, waiting time, SLR, and VM and CPU utilization.