MULTI OBJECTIVE TASK SCHEDULING ALGORITHM BASED ON SLA AND PROCESSING TIME SUITABLE FOR CLOUD ENVIRONMENT

M. Lavanya, B. Shanthi, S. Saravanan
School of Computing, SASTRA Deemed University, Tamilnadu
[email protected]
Abstract
Cloud computing is a paradigm that provides subscription-oriented services. Scheduling tasks in a cloud environment is a multi-objective optimization problem, which is NP-hard. To improve task scheduling performance and reduce the overall makespan of task allocation in clouds, this paper proposes two scheduling algorithms, TBTS (Threshold-Based Task Scheduling) and SLA-LB (Service Level Agreement-based Load Balancing). TBTS is a two-phase algorithm that schedules tasks in a batch and supports virtual machines with distinct configurations. In TBTS, threshold data are generated from the ETC (Expected Time to Compute) matrix, and a task is allocated to a virtual machine whose estimated execution time for that task is less than the threshold value. SLA-LB is an online model that schedules tasks dynamically according to client requirements, with deadline and budget as the two criteria. Prediction-based scheduling is used in TBTS to increase system utilization and improve load balancing among machines by allocating the minimum-configuration machine that satisfies the predicted robust threshold value. SLA-LB uses the level of agreement to find the required system, reducing the makespan and increasing cloud utilization. The proposed algorithms are simulated with benchmark datasets [15] and with synthetic datasets generated using random functions, and their results are analyzed against assorted scheduling models, namely SLA-MCT, FCFS, EXIST, LBMM, LB-MaxMin, Min-Min and Max-Min. Performance metrics such as makespan, penalty cost, gain cost and the VM utilization factor of the proposed algorithms are compared with those of the existing algorithms. The comparison shows that the proposed methods outperform the existing algorithms, even as the dataset and the number of virtual machines scale up.
Keywords: Cloud Computing, Task Scheduling, Makespan, Virtual Machine, Threshold, Utilization Factor.
1. Introduction
Remarkable developments and upgrades happen in cloud technology in the last few years and also the need of cloud services are increasing now a day. Cloud Service Providers (CSP) is provoked to update their resource capabilities, to satisfy their customers [1–4]. Still, CSP’s cannot provide customers with boundless resources to manage spike or fluttered demands [2]. Hence, CSP’s needed to increase their Quality of Service (QoS) like cost, service time etc and also should have collaboration with their neighbor CSP’s For transferring workloads. Scheduling client tasks in such a podium is a challenging thing which is NP hard [2]. CSPs and clients have the service-level agreement (SLA) before the service provision. The SLA establishes assorted metrics like makespan, response time, utilization factor, price, security etc… [5–8]. The CSP provides subscription oriented services to the customers after successful completion of customer job and CSP earns the amount [9, 10] or else, CSP should pay penalty For SLA violation [10, 11]. Every CSP has unified services to their clients [6, 12]. That’s why, scheduling of jobs in heterogeneous systems is more challenging [2, 3, 13].
Remarkable developments and upgrades have taken place in cloud technology over the last few years, and the need for cloud services is increasing every day. Cloud Service Providers (CSPs) are pushed to upgrade their resource capabilities to satisfy their customers [1–4]. Still, CSPs cannot provide customers with boundless resources to manage spiky or fluctuating demands [2]. Hence, CSPs need to improve their Quality of Service (QoS) attributes, such as cost and service time, and should also collaborate with neighbouring CSPs for transferring workloads. Scheduling client tasks on such a platform is a challenging, NP-hard problem [2]. CSPs and clients agree on a service-level agreement (SLA) before service provision. The SLA establishes assorted metrics such as makespan, response time, utilization factor, price and security [5–8]. The CSP provides subscription-oriented services to its customers; after successful completion of a customer job the CSP earns the agreed amount [9, 10], otherwise the CSP must pay a penalty for the SLA violation [10, 11]. Every CSP offers unified services to its clients [6, 12]. For this reason, scheduling jobs in heterogeneous systems is more challenging [2, 3, 13].

In our model, we address scheduling for heterogeneous systems and propose two task scheduling algorithms, called TBTS (Threshold-Based Task Scheduling) and VBTS (Variance-Based Task Scheduling). TBTS schedules the given N tasks to M VMs with different CPU processing speeds, whereas VBTS schedules tasks to VMs with homogeneous configurations. We calculate the execution time and finish time and generate the Execution Time (ETC), Threshold (TH) and Variance (VR) matrices, which are defined later. The rows of ETC denote tasks, the columns denote VMs, and each element indicates the expected runtime of the particular task on that machine. The uniqueness and the broad influence of the recommended methods over the existing methods are as follows. Both proposed methods help customers obtain their results with a lower makespan, so the customer pays the CSP a lower service cost than under existing algorithms such as Min-Min, Max-Min, SJF and LJF, with QoS parameters, namely execution time, finish time and response time. We run voluminous simulations of both algorithms using three benchmark datasets [15] and one synthetic dataset, and perform a comparison analysis on the metrics makespan and VM utilization factor. The results show that the proposed algorithms outperform existing algorithms [2]. The paper is structured as follows. Section 2 gives the related work. The cloud model and the task scheduling problem are given in Section 3. Section 4 presents the proposed algorithms. Section 5 describes the performance metrics, followed by Section 6, which demonstrates the simulation results and their comparison with existing algorithms. Section 7 concludes the paper along with future work.
2. Related works
Resource allocation is an NP-complete problem, so numerous meta-heuristic, sub-optimal and heuristic approaches have been proposed over the years; with small changes the same algorithms are used in cluster, grid and cloud environments. Min-Min is a well-known heuristic scheduling method that allocates the fastest available VM to the smallest task. Etminani et al. [14] proposed Max-Min and Min-Min based scheduling algorithms for tasks in grid environments. Dynamic mapping of independent jobs was proposed by Maheswaran et al. [18]. To reduce the overall makespan, Kokilavani et al. [16] proposed the LBMM algorithm, which follows the standard Min-Min algorithm with a proper load balancing strategy. Xiaoshan et al. [17] proposed a QoS-guided task scheduling algorithm. Various metaheuristics such as simulated annealing (SA), genetic algorithms (GA), particle swarm optimization (PSO) and ant colony optimization (ACO) have also been used for scheduling. An ACO algorithm was proposed by Tawfeek et al. [12], and a Monte Carlo based ACO algorithm by Fidanova et al. [19]. Garg [20] proposed a weighted Directed Acyclic Graph (DAG) scheduler based on ACO. Algorithms focusing on multiple objectives such as makespan, cost and utilization factor were proposed by Ali, Siegel et al. [21]. Multi-objective algorithms are essential nowadays to reduce the burden on both clients and CSPs. To find the Pareto front of solutions, Deb et al. [22] proposed an elitist, computationally fast non-dominated sorting algorithm for multiple objectives. In Ref. [23], Ye, Rao and Li proposed the MORSA algorithm, which combines features of the NPGA and NSGA methods; an algorithm for deadline-oriented tasks was also introduced in [23]. Dynamic resource allocation was proposed by Gao et al. [1]. Aazam et al. [24] proposed a reservation-based service level model for customers. Kokilavani and George Amalarethinam proposed the LBMM (Load Balanced Min-Min) algorithm [44], an improved version of Min-Min that enhances cloud utilization. Ivanovic et al. [25] proposed SLAs defined in terms of execution time. Cloud list scheduling [2], round-robin [2], opportunistic load balancing [26], minimum completion cloud [3], MCT [27], minimum execution time [27], cloud Min-Min scheduling [2], cloud normalized Min-Min Max-Min [28], median max [3], cloud Min-Max normalization [3] and allocation-aware scheduling [29] are further scheduling algorithms based on one-phase and two-phase techniques. The LB-MaxMin algorithm, an enhanced version of Max-Min that balances the load among the systems, was proposed by Mao, Chen and Li [39]. Freund et al. [27] proposed a GREEDY model that schedules each task to the system with the minimum completion time (MCT); however, the algorithm does not consider the execution cost of the VM. The two-phase Min-Min algorithm was proposed by Ibarra et al. [30]. Panda and Jana [38] proposed the SLA-MCT algorithm, which schedules tasks according to the level of agreement with clients. SLA-MCT is a single-phase algorithm that allocates tasks to systems online and reduces the gain-penalty cost ratio, but it does not address other metrics such as makespan and cloud utilization. Alharbi and Rabigh proposed a simple scheduling algorithm in [43].
In that algorithm, the task that enters the queue first is assigned to VM1, the task that enters second is assigned to VM2, and once all machines have been allotted a task, the next task starts again from the first VM. The algorithm is very simple, but it does not consider the quality of performance or cloud utilization. In our paper, we deal with a threshold-based task scheduling algorithm and a variance-based algorithm for different VM environments to reduce the execution cost. The proposed algorithms are extensively evaluated with the three benchmark datasets generated by Braun et al. [15] and also with randomly generated synthetic datasets.
3. Model and Problem Statement

3.1 Cloud model
Cloud service providers deliver quality service to clients based on the service level agreement. A CSP contains a broker server, which knows, allocates and keeps track of the computational resources. Customers receive the service that is allotted by the data center. However, no CSP can provide infinite resources to its customers; at times resources may be overloaded, which decreases system performance and increases the execution cost. Hence the CSP must manage customer satisfaction as well as its own profit during resource allocation. When the CSP receives a request from a customer, the server manager establishes the SLA with the customer for cost, execution time and so on. The algorithms are then used in the server system to find the resource that satisfies the SLA for a task, and that task joins the ready queue of the chosen machine. We assume that the CPU is the major resource in IaaS for the algorithms, and that disk image, memory and bandwidth are negligible parameters that are not highlighted in this paper.
3.2 Application model and problem statement
Consider a set of VMs VM = {VM1, VM2, VM3, . . . , VMm}, where each VMi, 1 ≤ i ≤ m, belongs to the CSP. Also consider a task set T = {T1, T2, T3, . . . , Tn}, where the tasks Ti, 1 ≤ i ≤ n, are independent.
Definition 1 (Expected Time to Compute (ETC) matrix). The ETC matrix holds the expected execution time of every task on every VM; its structure is given in Eq. (1).
ETC = \begin{bmatrix}
ETC_{11} & ETC_{12} & \cdots & ETC_{1m} \\
ETC_{21} & ETC_{22} & \cdots & ETC_{2m} \\
\vdots   & \vdots   & \ddots & \vdots   \\
ETC_{n1} & ETC_{n2} & \cdots & ETC_{nm}
\end{bmatrix}    (1)
ETC_{ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m, denotes the expected execution time of task Ti on VMj. ETC_{ij} is calculated by dividing the task length, given in million instructions (MI), by the processing speed of the CPU, measured in million instructions per second (MIPS) [31-33]. For example, if the size of task Ti is 4000 MI and the processing speed of VMj is 100 MIPS, then ETC_{ij} = 40 s.
A sample ETC matrix with 4 tasks and 4 VMs is shown in Eq. (2).

ETC = \begin{bmatrix}
10 & 30  & 20 & 5  \\
30 & 50  & 40 & 20 \\
40 & 70  & 50 & 30 \\
90 & 100 & 95 & 80
\end{bmatrix}    (2)
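As an illustration of Definition 1, the following is a minimal Python sketch (not the authors' code) of the ETC computation; the task lengths and VM speeds in the example call are hypothetical.

```python
# Illustrative sketch: building an ETC matrix from task lengths (MI)
# and VM speeds (MIPS), as in Eq. (1). Names are not from the paper.
def build_etc(task_lengths_mi, vm_speeds_mips):
    """Return ETC[i][j] = length of task i / speed of VM j (seconds)."""
    return [[length / speed for speed in vm_speeds_mips]
            for length in task_lengths_mi]

if __name__ == "__main__":
    tasks = [4000, 2000]      # hypothetical task sizes in MI
    vms = [100, 200]          # hypothetical VM speeds in MIPS
    print(build_etc(tasks, vms))   # [[40.0, 20.0], [20.0, 10.0]]
```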
Definition 2 (Gain (GA) matrix). The amount agreed between the client and the CSP for fulfilling the requirement is called the gain cost [38]. In the GA matrix the rows correspond to the tasks and the columns to the cloud VMs, and the cost is derived from the ETC matrix: the VM that performs the task best is allotted the maximum gain, and the gain is reduced in multiples of 2 for the other VMs. The structure of the GA matrix is illustrated in Eq. (3), and an example GA matrix for the sample ETC matrix is shown in Eq. (4).

GA = \begin{bmatrix}
EA_{11} & EA_{12} & \cdots & EA_{1n} \\
EA_{21} & EA_{22} & \cdots & EA_{2n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
EA_{m1} & EA_{m2} & \cdots & EA_{mn}
\end{bmatrix}    (3)
The values EA_{ij} calculated from the sample ETC matrix are given in Eq. (4): the machine with the smaller burst time is assigned the higher cost and the machine with the larger burst time the lower cost.
GA = \begin{bmatrix}
4 & 8 & 6 & 2 \\
4 & 8 & 6 & 2 \\
4 & 8 & 6 & 2 \\
4 & 8 & 6 & 2
\end{bmatrix}    (4)
The gain cost depends entirely on the service level agreement, and the cost is based on the execution time. Under the consistent model every task sees the same ranking of machines, so the gain values are identical for machines with the same execution-time rank and all rows of the gain matrix are the same.
Definition 3 (Penalty (PC) matrix). If a task cannot be completed by the CSP, i.e. the customer is not satisfied with the performance, the CSP must pay a violation cost to the customer. The structure of the penalty matrix PC is shown in Eq. (5).

PC = \begin{bmatrix}
EP_{11} & EP_{12} & \cdots & EP_{1n} \\
EP_{21} & EP_{22} & \cdots & EP_{2n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
EP_{m1} & EP_{m2} & \cdots & EP_{mn}
\end{bmatrix}    (5)
For example, the penalty cost for the sample ETC matrix is calculated and given in Eq. (6), under the assumption that the violation penalty is 50% of the gain.
PC = \begin{bmatrix}
2 & 4 & 3 & 1 \\
2 & 4 & 3 & 1 \\
2 & 4 & 3 & 1 \\
2 & 4 & 3 & 1
\end{bmatrix}    (6)
Definition 4 (SLA agreement levels). In the proposed SLA-LB online algorithm, tasks are allocated to VMs according to the agreement. An agreement may concern performance, budget, or both. If the customer needs the task to be completed as early as possible, the task is allotted to the VM that provides the shortest execution time. If the customer is driven by cost, the task is allotted to the VM with the lowest lease amount. If the customer requires some ratio of performance and budget, a suitable intermediate VM is provided.
The service agreement made between clients and the CSP is expressed as levels, and the customer enters the agreement value as a percentage. The levels are 1, 2 and 3. To find the level of agreement, the values Ø1 and Ø2 are taken as input. If the customer enters Ø1 = 100 for performance and Ø2 = 0 for budget, the request falls under level 1; if the entered values are Ø1 = 0 for performance and Ø2 = 100 for budget, it is level 3. SLA level 2 covers mixtures of Ø1 and Ø2: for instance, if Ø1 is 30 and Ø2 is 70, the proposed algorithm chooses the proper machine by the method explained in Section 4.2.

Note 1: Ø1 + Ø2 must equal 100.

Note 2: For normalization, the Ø1 and Ø2 values are divided by one hundred, so that Ø1 + Ø2 equals 1.

3.3 An illustration
To clearly understand the proposed model, consider the example scenario given in Table 1. There are ten client tasks T = {T1, T2, . . . , T10}; for simplicity of explanation their arrival times are assumed to be zero, as in [2, 26]. The task lengths are shown in Table 1 and the VM processing speeds in Table 2, and the tasks are scheduled onto the VMs listed in Table 2. The TBTS and SLA-LB algorithms are explained with a synthetic dataset generated with random functions. The assumptions of the algorithms are:

1. For simplification, the start time of every task is taken as zero [2, 26].
2. CPU processing time is the significant factor; I/O communication and memory capacity are negligible.
3. The VMs have distinct configurations.
4. The tasks are independent of each other.
5. Task size is measured in MI (million instructions) and processor speed in MIPS (million instructions per second).
4. Proposed algorithms

4.1 Threshold-based task scheduling algorithm (TBTS)
In this module, we present the proposed TBTS algorithm. As stated earlier, TBTS is a two-phase (offline) scheduling algorithm: arriving tasks are not allocated to the cloud immediately [18, 34]; instead, some pre-processing phases are performed before each task is allocated to a VM, which improves cloud performance and reduces the overall makespan. Consider an example with 10 tasks T = {T1, T2, . . . , T10} and 5 VMs VM = {VM1, VM2, . . . , VM5}, whose task lengths and processing capacities were generated with random functions and are shown in Table 1 and Table 2. The phases of the TBTS algorithm are described below.

Table 1. Task set
Task name   Length in MI
T1          383
T2          886
T3          777
T4          915
T5          793
T6          335
T7          386
T8          492
T9          649
T10         421

Table 2. VM capacity
VM    CPU capacity in MIPS
VM1   62
VM2   27
VM3   90
VM4   59
VM5   63

Phase 1: ETC calculation

The ETC matrix is calculated from the task lengths in Table 1 and the VM capacities in Table 2, for the task set T = {T1, T2, . . . , T10} and the VM set VM = {VM1, VM2, . . . , VM5}. Algorithm 1 shows the ETC matrix calculation methodology. The resulting matrix for this scenario is shown in Eq. (7).

ETC = \begin{bmatrix}
 6.18 & 14.19 &  4.26 &  6.49 &  6.08 \\
14.29 & 32.81 &  9.84 & 15.02 & 14.06 \\
12.53 & 28.78 &  8.63 & 13.17 & 12.33 \\
14.76 & 33.89 & 10.17 & 15.51 & 14.52 \\
12.79 & 29.37 &  8.81 & 13.44 & 12.59 \\
 5.40 & 12.41 &  3.72 &  5.68 &  5.32 \\
 6.23 & 14.30 &  4.29 &  6.54 &  6.13 \\
 7.94 & 18.22 &  5.47 &  8.34 &  7.81 \\
10.47 & 24.04 &  7.21 & 11.00 & 10.30 \\
 6.79 & 15.59 &  4.68 &  7.14 &  6.68
\end{bmatrix}    (7)

Phase 2: Threshold value calculation
The threshold value of a task Ti is calculated by adding ETC_{i1}, ETC_{i2}, . . . , ETC_{im} and dividing the sum by the number of VMs, i.e. TH[i] = (1/m) \sum_{j=1}^{m} ETC[i][j]. Algorithm 2 shows the threshold calculation. The threshold values for the example scenario are given in Eq. (8).

TH = (7.437, 17.206, 15.089, 17.769, 15.399, 6.505, 7.496, 9.554, 12.603, 8.175)    (8)
Phase 3: Allocation of the maximum task

The task with the maximum threshold (TH) value in the dataset is allocated first. The VM chosen for this maximum task is the one that, according to the ETC matrix, finishes it with the earliest execution time. In the given dataset, task T4 is the maximum task and it is allocated to VM3. Algorithm 3 shows the allocation procedure.

Phase 4: Allocation of the minimum task
The task with the minimum threshold (TH) value is identified first, and the VM with the least configuration in the VM list is chosen. The ETC value of task Ti on VMj is then compared with the threshold value TH of Ti: if the ETC value is less than the TH value, VMj is allocated to Ti; otherwise Ti is placed in the not-allocated task set. In the given dataset, task T6 is the minimum task and the least-configured VM is VM2; the ETC of T6 on VM2 is 12.41, which is greater than the TH value of T6 (6.505), so T6 is not allocated to VM2 and is deferred to the not-allocated set. Algorithm 3 illustrates the allocation of the maximum and minimum tasks to the appropriate VMs.

Phase 5: Allocation of tasks in the NOT-ALLOCATED list
After Phase 4 of the TBTS algorithm, a new task set is formed from the not-allocated tasks. From this set, the task with the maximum threshold value is taken and allotted to a VM whose ETC value is less than the TH value. Phase 5 is repeated until the not-allocated task set becomes empty. Once Not-allocated tasks = {Ø}, the overall makespan is calculated with the formula given in Eq. (10). Algorithm 4 shows the task allocation of Phase 5. For the given example, the VM allocated to each task, the finish times and the makespan are shown in Table 3.
Table 3. Makespan calculation
Task   VM   FT(VM)
T1     1     6.177
T2     3    20.011
T3     1    18.790
T4     3    10.170
T5     5    18.710
T6     4     5.670
T7     3     6.120
T8     5    21.870
T9     4    16.670
T10    5    25.390
Makespan (Ms) = 25.477
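To make Phases 2-5 concrete, the following is a simplified Python sketch (not the authors' implementation): the threshold of each task is taken as the mean of its ETC row, and tasks are then assigned greedily in decreasing threshold order to the lowest-capacity VM whose estimated execution time stays within the threshold, falling back to the VM with the smallest ETC for that task otherwise. The separate treatment of the single maximum and minimum tasks in Phases 3-4 and the exact tie-breaking rules are abstracted away.

```python
# Simplified sketch of the TBTS idea (threshold = mean ETC per task,
# followed by threshold-guided greedy allocation). Illustrative only.
def tbts_schedule(etc, vm_speeds_mips):
    n, m = len(etc), len(etc[0])
    th = [sum(row) / m for row in etc]                    # Phase 2: per-task thresholds
    slow_first = sorted(range(m), key=lambda j: vm_speeds_mips[j])
    finish = [0.0] * m                                    # accumulated finish time per VM
    assign = [None] * n
    # Phases 3-5 (simplified): consider tasks in decreasing threshold order
    for i in sorted(range(n), key=lambda i: th[i], reverse=True):
        j = next((j for j in slow_first if etc[i][j] <= th[i]),
                 min(range(m), key=lambda j: etc[i][j]))  # fallback: smallest ETC for task i
        assign[i] = j
        finish[j] += etc[i][j]
    return assign, finish, max(finish)                    # allocation, per-VM finish times, makespan
```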
4.2 SLA based Load balancing algorithm
The SLA-LB algorithm is a single-phase (online) scheduling algorithm in which tasks are assigned as soon as they enter the queue. It is important to note that, in a cloud environment, offline algorithms are better only when the mapped task set can be revised at every step of the scheduling process [2].
The key aspect of the SLA-LB algorithm is the agreement between the client and the cloud service provider. During the service level agreement, the customer requirement parameters, deadline oriented (α) and budget oriented (β), are taken as percentage values and then normalized so that 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1. The cloud systems are arranged in order of their CPU execution capacity (MIPS). According to the agreement values, the allotment of a cloud VM to a task is classified into three levels, named level 1, level 2 and level 3. For level 1 tasks, α = 1 and β = 0; for level 3 tasks, α = 0 and β = 1; for level 2 tasks, α = x and β = y, where 0 ≤ x + y ≤ 1. Level 1 tasks are allotted to the system that is last in the VM list, and level 3 tasks to the system that is first in the VM list. For level 2 tasks, the values µ1 = α * VMC and µ2 = β * VMC are calculated and floating values are rounded. If µ1 = µ2, the system at position µ in the list is allotted to the task; otherwise, the task is allotted to the system at position MAX(µ1, µ2). Algorithm 5 shows the procedure of the SLA-LB algorithm. Consider an example scenario with 5 tasks in the task set and 3 available resources, whose levels of agreement are shown in Table 4 and Table 5.

Table 4. Task size and SLA value
Task id   Size in MI   Deadline based (%)   Budget based (%)
T1        89383        30                   70
T2        30886        40                   60
T3        92777        50                   50
T4        36915        100                  0
T5        47793        0                    100

Table 5. VM capacity
VM ID   VM capacity in MIPS
0       335
1       386
2       492
Phase 1: Calculation of α and β values. For the given scenario, the α and β values are calculated by dividing each input by 100; for example, if the entered level value is 32, then α is 0.32, and α + β = 1. Table 6 shows the calculated values for the given levels.
Table 6. α and β value calculation
Deadline based   Budget based   α     β
30               70             0.3   0.7
40               60             0.4   0.6
50               50             0.5   0.5
100              0              1.0   0.0
0                100            0.0   1.0
Phase 2: A VM pool list is prepared for both the deadline-based and the budget-based orderings. If VM2 is the fastest machine, it is placed last in the deadline-oriented pool list and first in the budget-oriented pool list. Table 7 shows the VM lists for the given scenario.

Table 7. VM selection list
VM pool list (deadline oriented): 0, 1, 2
VM pool list (budget oriented):   2, 1, 0
Phase 3: The values µ1 and µ2 are calculated with the formulas µ1 = α * VMC and µ2 = β * VMC. After finding the µ values, the VM at position MAX(µ1, µ2) in the corresponding pool list is allotted to the task. For example, if α = 0.3, β = 0.7 and VMC = 3, then µ1 = 0.3 * 3 = 0.9 and µ2 = 0.7 * 3 = 2.1, which are rounded to 1 and 2; MAX(1, 2) = 2 and β > α, so the VM at position 2 of the budget pool list is allotted to the task. For the given scenario, VM1 is therefore allotted to task T1. Table 8 shows the final allotment.

Table 8. Final VM allocation
Task id   VM id
1         1
2         1
3         1
4         2
5         0
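A small Python sketch of the SLA-LB VM selection rule for a single task is given below (illustrative only; the function and variable names are not from the paper). Under the round-half-up rule used in the worked example, it reproduces the allotments of Table 8 for the scenario of Tables 4 and 5.

```python
# Illustrative sketch of SLA-LB VM selection for one task.
# vm_speeds_mips: speeds of the available VMs; deadline_pct + budget_pct = 100.
def sla_lb_select(vm_speeds_mips, deadline_pct, budget_pct):
    alpha, beta = deadline_pct / 100.0, budget_pct / 100.0
    vmc = len(vm_speeds_mips)
    deadline_pool = sorted(range(vmc), key=lambda j: vm_speeds_mips[j])  # slowest .. fastest
    budget_pool = list(reversed(deadline_pool))                          # fastest .. slowest
    if alpha == 1.0:                     # level 1: pure deadline -> fastest VM
        return deadline_pool[-1]
    if beta == 1.0:                      # level 3: pure budget -> slowest (cheapest) VM
        return deadline_pool[0]
    mu1 = int(alpha * vmc + 0.5)         # level 2: round half up, as in the worked example
    mu2 = int(beta * vmc + 0.5)
    if alpha == beta:
        return deadline_pool[vmc // 2]   # equal weights -> middle of the list
    pool = budget_pool if beta > alpha else deadline_pool
    return pool[max(mu1, mu2) - 1]       # 1-indexed position in the chosen pool

# Example from Tables 4 and 5 (VM speeds 335, 386, 492 MIPS):
for dl, bl in [(30, 70), (40, 60), (50, 50), (100, 0), (0, 100)]:
    print(sla_lb_select([335, 386, 492], dl, bl))   # -> 1, 1, 1, 2, 0 (Table 8)
```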
Pseudo Code:
Notations and their definitions

Notation   Definition
ETC        Execution time matrix
TS         Task size
VMCP       Virtual machine capacity
TC         Task count
VMC        Virtual machine count
TH         Threshold value
FT         Finish time
POS        Position of task in task set
AC         VM allocation number
MS         Makespan of task set
Cu         Cloud utilization

Algorithm 1: ETC calculation
Input: the 1D arrays TS, VMCP; the values TC, VMC
Output: the 2D matrix ETC
For i = 1, 2, ..., TC do
    For j = 1, 2, ..., VMC do
        ETC[i][j] = TS[i] / VMCP[j]
    End For
End For

Algorithm 2: Threshold value calculation
Input: the 1D arrays TS, VMCP; the values TC, VMC; the 2D matrix ETC
Output: the 1D array TH
For i = 1, 2, ..., TC do
    For j = 1, 2, ..., VMC do
        TH[i] = TH[i] + ETC[i][j]
    End For
End For
For i = 1, 2, ..., TC do
    TH[i] = TH[i] / VMC
End For

Algorithm 3: Maximum and minimum task selection and allocation
Input: the values TC, VMC, Max = 0, Min = ∞, minimum = ∞; the 2D matrix ETC; the 1D arrays POS = {0}, AC = {0}, TH{TC}, FT = {0}, VMCP{TC}
Output: allocation and finish times of the VMs and tasks
For i = 1, 2, ..., TC do
    If Max < TH[i] then                       // task with the maximum threshold
        Max = TH[i]; POS[0] = i
    End If
    If Min > TH[i] then                       // task with the minimum threshold
        Min = TH[i]; POS[1] = i
    End If
End For
Min = ∞
For j = 1, 2, ..., VMC do
    If Min > ETC[POS[0]][j] then              // VM that finishes the maximum task earliest
        Min = ETC[POS[0]][j]; AC[0] = j
    End If
    If minimum > VMCP[j] then                 // least-configured VM
        minimum = VMCP[j]; AC[1] = j
    End If
End For
FT[AC[0]] = ETC[POS[0]][AC[0]]
If ETC[POS[1]][AC[1]] <= TH[POS[1]] then      // minimum task fits within its threshold (cf. Phase 4)
    FT[AC[1]] = ETC[POS[1]][AC[1]]
Else
    AC[1] = POS[1] = ∞                        // minimum task remains unallocated
End If

Algorithm 4: Task allocation
Input: the values TC, VMC, Cu = 0, Max = 0, Min = ∞, x = allocated task count; the 2D matrix ETC; the 1D arrays POS, AC, TH{TC}, FT
Output: allocation, finish times of the VMs, makespan MS and cloud utilization Cu
While (not-allocated task set <> NULL) do
    Max = 0
    For i = 1, 2, ..., TC do                  // unallocated task with the largest threshold
        If Ti is not allocated and Max < TH[i] then
            Max = TH[i]; next = i
        End If
    End For
    x = x + 1; POS[x] = next
    Min = ∞
    For j = 1, 2, ..., VMC do                 // VM with the smallest ETC that stays within the threshold
        If ETC[POS[x]][j] <= TH[POS[x]] and Min > ETC[POS[x]][j] then
            Min = ETC[POS[x]][j]; AC[x] = j
        End If
    End For
    FT[AC[x]] = FT[AC[x]] + ETC[POS[x]][AC[x]]
End While
MS = MAX(FT[j]), 1 <= j <= VMC                // makespan of the task set
Cu = 0
For j = 1, 2, ..., VMC do
    Cu = Cu + FT[j]
End For
Cu = Cu / (VMC * MS)                          // cloud utilization

Algorithm 5: SLA-based task allocation
Input: the 1D arrays DL, BL (deadline and budget percentages); the value VMC
For each task i do
    alpha[i] = DL[i] / 100
    beta[i] = BL[i] / 100
    µ1 = alpha[i] * VMC
    µ2 = beta[i] * VMC
    If (µ1 - INT(µ1)) >= 0.5 then µ1 = INT(µ1) + 1 Else µ1 = INT(µ1) End If   // round to the nearest integer
    If (µ2 - INT(µ2)) >= 0.5 then µ2 = INT(µ2) + 1 Else µ2 = INT(µ2) End If
    If alpha[i] < beta[i] then
        // task i is allotted the VM at position µ2 of the budget-oriented pool list
    Else If alpha[i] > beta[i] then
        // task i is allotted the VM at position µ1 of the deadline-oriented pool list
    Else
        // alpha[i] = beta[i]: task i is allotted the system at position VMC/2
    End If
End For

Gantt chart: TBTS Algorithm
VM1: 0 ~ 6.177 | 6.177 ~ 18.79
VM2: idle
VM3: 0 ~ 6.12 | 6.12 ~ 10.17 | 10.17 ~ 20.011
VM4: 0 ~ 5.67 | 5.67 ~ 16.67
VM5: 0 ~ 18.71 | 18.71 ~ 21.87 | 21.87 ~ 25.39

Gantt chart: SLA-LB Algorithm

VM0: 0 ~ 142.66
VM1: 0 ~ 231.56 | 231.56 ~ 311.577 | 311.577 ~ 551.93
VM2: 0 ~ 75.030
Table 9 shows a comparison of the proposed TBTS algorithm with various existing two-phase scheduling algorithms, namely opportunistic task scheduling [26], Min-Min [30], Min-Max [28], LBMM [44], LB-MaxMin [39] and the simple scheduling algorithm [43], and of the proposed SLA-LB algorithm with the existing single-phase scheduling algorithm SLA-MCT [38], for the synthetic dataset shown in Table 1 and Table 2. Makespan and average cloud utilization are the major quality parameters taken into account for the comparison; since gain cost and penalty cost are the parameters proposed in [38], these two cost parameters are also calculated. Table 9 shows that the proposed TBTS algorithm outperforms all the other two-phase algorithms, and that the SLA-LB algorithm outperforms the single-phase SLA-MCT algorithm in makespan and cloud utilization, with only harmless variations in gain and penalty cost compared with SLA-MCT.
Metrics             TBTS     SLA-LB   SLA-MinMin [38]  Min-Min [30]  Opportunity scheduling [26]  Max-Min [28]  LB-MM [44]  LB-MaxMin [39]  Simple scheduling [43]
Makespan            25.477   61.661   102.322          77.677        30.9                         67.077        52.1111     36.444          47.111
Cloud utilization   0.677    0.3388   0.2              0.23          0.57                         0.2           0.47        0.61            0.5
Gain cost           64       56       40               68            52                           100           60          60              60
Penalty cost        32       28       20               34            26                           50            30          30              30

Table 9. Comparison table
5. Performance metrics
In this section, we introduce two metrics used to examine the performance of the proposed model. These are standard metrics used in grid scheduling algorithms [33-36, 38]. Two further metrics, used in [38], are also examined.

5.1 Makespan
The makespan (Ms) is the overall completion time needed to process the user tasks with the available VMs [3, 34-37]. Let J be the number of tasks, k the number of VMs, Ms[v] the makespan of VM v, 1 ≤ v ≤ k, Ť a task, and B[i, v], 1 ≤ i ≤ J, the binary indicator defined as

B[i, v] = 1 if task Ťi is assigned to VMv, and 0 otherwise.

The makespan of VM v is mathematically expressed in Eq. (9) as:

Ms[v] = \sum_{i=1}^{J} ETC[i, v] \cdot B[i, v]    (9)

Hence, the overall makespan is expressed in Eq. (10) as:

Ms = \max_{1 \le v \le k} Ms[v]    (10)

5.2 Average VM utilization
The utilization of VM v, Ŭ[v], 1 ≤ v ≤ k, is the proportion of the makespan of VM v to the overall makespan [3]; its value lies between 0 (lower bound) and 1 (upper bound), as given in Eq. (11).

Ŭ[v] = \frac{Ms[v]}{Ms}    (11)

The average VM utilization (Ŭ) is the ratio of the sum of all per-VM makespans to the product of the total number of VMs k and Ms. The utilization factor formula is shown in Eq. (12).

Ŭ = \frac{1}{Ms \cdot k} \sum_{v=1}^{k} Ms[v]    (12)
Note 1: If the makespan of a VM v equals the maximum completion time over all VMs, then its utilization Ŭ[v] = 1.

Note 2: The average utilization factor Ŭ = 1 only if all the VMs are fully loaded.
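The following compact Python sketch (illustrative only) computes the per-VM makespan of Eq. (9), the overall makespan of Eq. (10) and the average utilization of Eq. (12); assign[i] is assumed to hold the index of the VM serving task i.

```python
# Illustrative sketch of Eqs. (9), (10) and (12). Names are not from the paper.
def makespan_and_utilization(etc, assign):
    m = len(etc[0])
    ms_vm = [0.0] * m
    for i, v in enumerate(assign):     # B[i, v] = 1 exactly when v == assign[i]
        ms_vm[v] += etc[i][v]          # Eq. (9): makespan of each VM
    ms = max(ms_vm)                    # Eq. (10): overall makespan
    avg_util = sum(ms_vm) / (m * ms) if ms > 0 else 0.0   # Eq. (12)
    return ms_vm, ms, avg_util
```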
5.3 Gain

The cost fixed for each VM during the agreement period is called the gain cost; after successful completion of the task set, this profit is credited to the CSP [38]. The gain of a particular cloud VM v, 1 ≤ v ≤ k, is mathematically expressed in Eq. (13), and the total gain cost in Eq. (14):

Ga[v] = \sum_{i=1}^{h} EA[i] \cdot B[i, v]    (13)

GC = \sum_{v=1}^{k} \sum_{i=1}^{h} EA[i] \cdot B[i, v]    (14)
Note 3: The gain cost GC becomes zero if the makespan is zero.

5.4 Penalty

The penalty is the contravention amount paid by the CSP to the customer for not fulfilling the requirement. The penalty cost of a specific cloud system VMv, 1 ≤ v ≤ k, is mathematically expressed in Eq. (15) as:

Pe[v] = \sum_{i=1}^{h} EB[i] \cdot B[i, v]    (15)
The total penalty cost is expressed in Eq. (16) as:

PC = \sum_{v=1}^{k} \sum_{i=1}^{h} EB[i] \cdot B[i, v]    (16)

Note 4: If the makespan of the cloud is 0, the penalty cost also becomes 0.
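The sketch below (Python, illustrative only) computes the total gain and penalty costs under one possible reading of Eqs. (13)-(16), assuming a per-task flag that records whether the SLA of that task was honoured; EA[i] and EB[i] are the agreed gain and penalty amounts for task i.

```python
# Illustrative sketch of the total gain (Eq. 14) and penalty (Eq. 16) costs,
# under the assumption that a completed task credits its gain to the CSP and
# an SLA-violating task incurs its penalty. Names are not from the paper.
def gain_and_penalty(ea, eb, sla_met):
    """ea[i], eb[i]: agreed gain/penalty of task i; sla_met[i]: bool."""
    gc = sum(ea[i] for i, ok in enumerate(sla_met) if ok)       # Eq. (14)
    pc = sum(eb[i] for i, ok in enumerate(sla_met) if not ok)   # Eq. (16)
    return gc, pc
```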
6. Simulation results
We simulate the proposed algorithms using GCC compiler version 4.4.4 20100503 (Red Hat 4.4.4-2) on an Intel(R) Core(TM) i3-2330M CPU @ 2.20 GHz with 4 GB RAM, running Fedora (32-bit). The algorithms are evaluated with the benchmark dataset of Braun et al. [15] and with synthetic datasets generated by a pseudo-random generation function.

6.1 Simulation runs on benchmark datasets
Benchmark datasets with different task and VM instances are considered for this simulation. Each entry of a dataset gives the execution time of a task on the corresponding VM. A uniform distribution (u) is followed to generate the various dataset instances. The attributes of the files are 1) consistency type (x), 2) heterogeneity among the task set (yy), and 3) heterogeneity of the cloud VMs (zz); u_x_yyzz is the naming structure of a benchmark dataset file. The consistency types are fully consistent, semi-consistent and inconsistent; for the proposed model the fully consistent type is considered, and the ETC matrix is evaluated from the dataset. Next, yy and zz take high or low heterogeneity combinations of tasks and cloud VMs. Instances of sizes 512 × 16, 1024 × 32 and 2048 × 64 are taken for evaluation: the first denotes 512 tasks scheduled to 16 cloud VMs, the second 1024 tasks scheduled to 32 cloud VMs, and the third 2048 tasks allocated to 64 VMs. The 512 × 16 size consists of 4 instances, the 1024 × 32 size of 8 instances and the 2048 × 64 size of 8 instances of Braun et al. [15]; these instances are widely used in task scheduling [3, 28, 29, 40-42]. The simulation steps are: 1) one of the benchmark instances (say, u_c_hilo) is taken as input and the ETC matrix is generated; 2) task allocation is performed with the proposed TBTS and SLA-LB algorithms, the overall makespan, cloud utilization, gain cost and penalty cost are calculated, and the corresponding EG and EP matrices are generated; 3) the results are compared with the SLA-MCT algorithm [38], opportunistic load balancing [26], round-robin [2], cloud Min-Max normalization [3], the LB-MaxMin algorithm [39] and the Min-Min algorithm [30]; 4) the comparison shows that the proposed offline scheduling algorithm TBTS outperforms all the static algorithms [2, 3, 26, 30, 39] in makespan and cloud utilization, and the online scheduling algorithm provides a better allotment of jobs to the cloud than the existing online algorithm [38].
Comparison Tables 10-14 and comparison Charts 1-5 show the comparison of the proposed algorithms with the existing algorithms [26, 30, 39, 38, 44, 43] for the instances 512 × 16, 1024 × 32 and 2048 × 64 of the benchmark dataset [15] and for synthetic datasets with the instances 128 × 16, 256 × 16, 384 × 16, 640 × 32, 768 × 32 and 896 × 32. The parameters makespan, utilization, gain cost and penalty cost are considered for the comparison, and the values in the comparison tables show that the proposed algorithms perform better than all the existing models; in some cases their utilization factors are equal to those of the load balancing algorithms.
Comparison Table 10. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 512 × 16 with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
SLAOpportunityLBLBSimple MinMin[38] Scheduling[26] MinMin[30] MaxMin[28] MM[44] MAXMIN[39] Scheduling[43] 1.5750E+05 1.3408E+04 3.9582E+04 3.9582E+04 1.5297E+04 1.4880E+04 1.8131E+04 7.2020E+03 1.2844E+04 1.6384E+04 1.6384E+04 8.7040E+03 8.7040E+03 8.7040E+03
Proposed SLA-LB 2.1033E+04 8.7660E+03
6.4220E+03 8.1920E+03
8.1920E+03 4.3520E+03
4.3520E+03
4.3520E+03
0.4199 0.065177 5.0323E+07 4.3083E+08 8.7660E+03 7.2320E+03
0.39 0.062 3.9240E+07 8.8899E+07 1.2968E+04 1.4812E+04
0.062 0.5 5.4163E+07 4.6425E+07 9.4120E+03 8.7040E+03
0.6 4.3149E+07 8.7040E+03
0.5 4.6589E+07 8.7040E+03
4.3830E+03 3.6160E+03
6.4840E+03 7.4060E+03
4.7060E+03 4.3520E+03
4.3520E+03
4.3520E+03
0.49 0.0643 5.2447E+05 2.3760E+06 8.7660E+03 8.7240E+03
0.312 0.0777 4.2061E+05 9.9995E+05 1.2976E+04 1.6302E+04
0.325 0.51 1.1851E+06 4.6903E+05 1.6384E+04 8.7040E+03
0.5 4.4530E+05 8.7040E+03
0.511 4.8937E+05 8.7040E+03
4.3830E+03 4.3620E+03
6.4880E+03 8.1510E+03
8.1920E+03 4.3520E+03
4.3520E+03
4.3520E+03
0.5866 0.1119 1.8734E+06 7.8599E+06 8.7660E+03 8.7240E+03
0.37 0.07 1.1684E+06 9.9985E+05 1.3020E+04 1.6246E+04
0.062 0.56 1.4531E+06 1.6313E+06 1.6384E+04 8.7040E+03
0.6 1.4303E+06 8.7040E+03
0.54 1.5174E+06 8.7040E+03
4.3830E+03 4.3620E+03
6.5100E+03 8.1230E+03
8.1920E+03 4.3520E+03
4.3520E+03
4.3520E+03
0.56
0.53
4.3830E+03 3.6010E+03
0.423
Proposed Instances Parameters TBTS u_c_lolo Makes pan 1.3597E+04 Gain cost 1.2300E+04 Penealty cost 6.1500E+03 Cloud utilization 0.402 u_c_hihi Makes pan 3.4637E+07 Gain cost 1.2004E+04 Penealty cost 6.0020E+03 Cloud utilization 0.4 u_c_hilo Makes pan 4.1967E+05 Gain cost 1.2336E+04 Penealty cost 6.1680E+03 Cloud utilization 0.3777 u_c_lohi Makes pan 1.1309E+06 Gain cost 1.2452E+04 Penealty cost 6.2260E+03 Cloud utilization 0.36
0.1
0.33
0.117
0.06
0.5
Comparison Table 11. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 1024 × 32 (A_U) with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
u_c_hi ho
u_c_lo
LBSimple MaxMin[2 LBMAXMIN[ Scheduling[ 8] MM[44] 39] 43] 1.4150E+0 1.5702E+ 4 04 1.4721E+04 1.9752E+04 6.5536E+0 3.3792E+ 4 04 3.3792E+04 3.3792E+04 3.2768E+0 1.6896E+ 4 04 1.6896E+04 1.6896E+04
u_c_lol o
SLAOpportunityProposed Proposed MinMin[3 Scheduling[ MinMin[3 Parameters TBTS SLA-LB 8] 26] 0] 1.2628E+ 1.8187E+ 1.6318E+ 1.4150E+ Makes pan 04 04 05 1.2691E+04 04 5.0166E+ 3.4052E+ 3.5352E+ 6.5536E+ Gain cost 04 04 04 5.0826E+04 04 Penalty 2.5083E+ 1.7026E+ 1.7676E+ 3.2768E+ cost 04 04 04 2.5413E+04 04 Cloud utilization 0.3000 0.4200 0.0450 0.3000 0.0312 1.3190E+ 1.6740E+ 1.5514E+ 1.5670E+ Makes pan 03 03 04 1.3560E+03 03 5.0710E+ 3.4052E+ 3.2990E+ 6.5536E+ Gain cost 04 04 04 5.0688E+04 04 Penalty 2.5355E+ 1.7026E+ 1.6495E+ 3.2768E+ cost 04 04 04 2.5344E+04 04 Cloud utilization 0.2900 0.4680 0.0510 0.2870 0.0310 1.2657E+ 1.8500E+ 1.5744E+ 1.4618E+ Makes pan 08 08 09 1.3020E+08 08 4.8350E+ 3.4052E+ 3.3404E+ 6.0592E+ Gain cost 04 04 04 5.0846E+04 04 Penalty 2.4175E+ 1.7026E+ 1.6702E+ 3.0296E+ cost 04 04 04 2.5423E+04 04 Cloud utilization 0.3110 0.4310 0.0502 0.2900 0.0630 Makes pan 1.2771E+ 1.5961E+ 1.6246E+ 1.2667E+07 4.0984E+
Instanc es u_c_lo hi
0.0312 1.5670E+0 3 6.5536E+0 4 3.2768E+0 4
0.5000 1.5780E+ 03 3.3792E+ 04 1.6896E+ 04
0.0312 1.6572E+0 8 3.9886E+0 4 1.9943E+0 4
0.5110 1.5713E+ 08 3.3792E+ 04 1.6896E+ 04
0.3234 1.0000E+0
0.5100 1.5565E+
0.5200
0.4000
1.4750E+03
1.4580E+03
3.3792E+04
3.3792E+04
1.6896E+04
1.6896E+04
0.5300
0.5400
1.5093E+08
1.6070E+08
3.3792E+04
3.3792E+04
1.6896E+04
1.6896E+04
0.5200 1.4974E+07
0.5000 1.6570E+07
hi
07 3.4052E+ 04 1.7026E+ 04
08 3.4750E+ 04 1.7375E+ 04
5.0754E+04
0.3000
0.4920
0.0480
0.3050
2.5377E+04
07 7 5.8026E+ 6.3932E+0 04 4 2.9013E+ 3.1932E+0 04 4 0.0600
0.0655
07 3.3792E+ 04 1.6896E+ 04
3.3792E+04
3.3792E+04
1.6896E+04
1.6896E+04
0.5100
0.5240
0.4800
Gain cost Penalty cost Cloud utilization
07 4.0965E+ 04 2.4481E+ 04
Comparison Table 12. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 1024 × 32 (B_U) with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Parameters Makes pan Gain cost Penalty cost Utilization factor Makes pan Gain cost Penalty
0.2916 1.3578E+ 07 1.9789E+ 05 9.8945E+
0.0600 1.2885E+ 08 1.2642E+ 05 6.3209E+
u_c_hil o
SLAOpportunityProposed Proposed MinMin[3 Scheduling[ MinMin[3 TBTS SLA-LB 8] 26] 0] 1.3319E+ 1.4000E+ 3.4589E+ 6.1920E+ 08 09 09 1.3944E+08 09 2.0011E+ 1.2642E+ 1.3092E+ 1.2670E+ 05 05 05 2.0096E+05 05 1.0006E+ 6.3209E+ 6.5458E+ 6.3351E+ 05 04 04 1.0048E+05 04
Instanc es u_c_hi hi
0.0240 3.4059E+ 08 1.3377E+ 05 6.6886E+
0.2818
1.3703E+07 2.0129E+05 1.0064E+05
LBSimple MaxMin[2 LBMAXMIN[ Scheduling[ 8] MM[44] 39] 43] 4.7594E+0 1.6622E+ 8 08 1.5429E+08 1.6089E+08 1.4753E+0 1.3312E+ 5 05 1.3312E+05 1.3312E+05 7.3765E+0 6.6560E+ 4 04 6.6560E+04 6.6560E+04
0.0157 0.1362 1.3060E+ 1.0000E+0 08 7 2.1498E+ 2.5893E+0 05 5 1.0749E+ 1.2947E+0
0.4900 1.6016E+ 07 1.3312E+ 05 6.6560E+
0.5100
0.5000
1.5151E+07
1.5469E+07
1.3312E+05 6.6560E+04
1.3312E+05 6.6560E+04
Makes pan Gain cost Penalty cost Utilization factor Makes pan Gain cost Penalty cost Utilization factor
04
0.2800 1.3860E+ 03 1.9986E+ 05 9.9931E+ 04
0.0640 1.2381E+ 04 1.2642E+ 05 6.3209E+ 04
0.0230 3.1733E+ 04 1.3305E+ 05 6.6525E+ 04
0.2720 1.3819E+ 04 1.9591E+ 05 9.7953E+ 04
0.0660 1.3731E+ 05 1.2642E+ 05 6.3209E+ 04
0.0240 3.3705E+ 05 1.2768E+ 05 6.3842E+ 04
0.2700
0.0621
0.0250
u_c_lo hi
04
05
0.2700 1.4230E+03 2.0120E+05 1.0060E+05
5
04
0.0260 0.0330 1.5740E+ 1.5750E+0 03 3 2.6214E+ 2.6214E+0 05 5 1.3107E+ 1.3107E+0 05 5
0.4900 1.6230E+ 03 1.3312E+ 05 6.6560E+ 04
0.0150 0.0150 1.6020E+ 1.6020E+0 04 4 2.6214E+ 2.6214E+0 05 5 1.3107E+ 1.3107E+0 05 5
0.4920 1.6282E+ 04 1.3312E+ 05 6.6560E+ 04
u_c_lol o
04
0.2700
1.3985E+04 2.0137E+05 1.0069E+05
cost Utilization factor
0.2806
0.0150
0.0150
0.5000
0.5100
0.5000
1.5180E+03
1.7290E+03
1.3312E+05
1.3312E+05
6.6560E+04
6.6560E+04
0.5100
0.4600
1.5326E+04
1.6927E+04
1.3312E+05
1.3312E+05
6.6560E+04
6.6560E+04
0.5220
0.4800
Comparison Table 13. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 2048 × 64 with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Penalty cost Utilization factor u_c_loh i
Makes pan Gain cost Penalty cost Utilization factor Makes pan Gain cost Penalty cost
0.2740 0.0630 0.0240 1.3587E+0 1.3413E+0 3.2092E+0 6 7 7 1.9918E+0 1.2642E+0 1.2663E+0 5 5 5 9.9590E+0 6.3209E+0 6.3314E+0 4 4 4 0.2843 0.0626 0.0250 4.1860E+0 4.1404E+0 1.0071E+0 5 6 7 1.9600E+0 1.2642E+0 1.3234E+0 5 5 5 9.8002E+0 6.3209E+0 6.6170E+0 4 4 4
u_c_hil o
Proposed SLA-LB 1.3434E+0 5 1.2642E+0 5 6.3209E+0 4
MinMin [30] 1.5856E+0 4 2.6214E+0 5 1.3107E+0 5
Gain cost
Proposed TBTS 1.4073E+0 4 1.9628E+0 5 9.8141E+0 4
MaxMin [28] 1.5856E+0 4 2.6214E+0 5 1.3107E+0 5
LBMM[44] 1.6727E+0 4 1.3312E+0 5 6.6560E+0 4
LBMAXMIN [39] 1.5436E+0 4 1.3312E+0 5 6.6560E+0 4
Simple Scheduling [43] 1.6319E+0 4 1.3312E+0 5 6.6560E+0 4
0.2810 0.0150 0.0150 0.4800 0.5100 0.4900 1.3811E+0 1.0000E+0 1.5291E+0 1.6326E+0 1.5093E+0 1.8180E+0 6 6 6 6 6 6 2.0085E+0 2.6141E+0 2.6214E+0 1.3312E+0 1.3312E+0 1.3312E+0 5 5 5 5 5 5 1.0043E+0 1.3071E+0 1.3107E+0 6.6560E+0 6.6560E+0 6.6560E+0 5 5 5 4 4 4
Instance s Parameters u_c_lol o Makes pan
Opportunit SLAyMinMin Scheduling [38] [26] 3.2322E+0 1.3771E+0 5 4 1.3235E+0 2.0133E+0 5 5 6.6174E+0 1.0067E+0 4 5
0.2780 0.0326 0.0150 0.4900 0.5100 0.4440 4.1361E+0 4.9515E+0 4.9515E+0 4.8874E+0 4.6991E+0 5.3016E+0 5 5 5 5 5 5 2.0040E+0 2.6214E+0 2.2621E+0 1.3312E+0 1.3312E+0 1.3312E+0 5 5 6 5 5 5 1.0020E+0 1.3107E+0 1.3107E+0 6.6560E+0 6.6560E+0 6.6560E+0 5 5 5 4 4 4
Makes pan Gain cost Penalty cost Utilization factor
0.2823 0.0616 0.0244 1.3587E+0 1.3413E+0 3.2092E+0 6 7 7 1.9918E+0 1.2642E+0 1.2663E+0 5 5 5 9.9590E+0 6.3209E+0 6.3314E+0 4 4 4 0.2843
0.0626
0.0250
0.2800 0.0156 0.0156 0.5040 0.5100 0.4500 1.3811E+0 1.0000E+0 1.5291E+0 1.6326E+0 1.5093E+0 1.8180E+0 6 6 6 6 6 6 2.0085E+0 2.6141E+0 2.6214E+0 1.3312E+0 1.3312E+0 1.3312E+0 5 5 5 5 5 5 1.0043E+0 1.3071E+0 1.3107E+0 6.6560E+0 6.6560E+0 6.6560E+0 5 5 5 4 4 4
u_c_loh i
0.2780
Utilization factor
0.0326
0.0150
0.4900
0.5100
0.4440
Comparison Table 14. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - synthetic dataset.
Parame ters Makes pan Gain cost Penalty cost
Proposed TBTS 9.2753E+ 02 2.2000E+ 03 1.1000E+ 03
SLAMinMin[38 OpportunityMinMin MaxMi ] Scheduling[26] [30] n[28] 1.1945E+0 8.7840 6.8118E 1.7382E+03 4 1.0190E+03 E+03 +03 1.2880E+0 2.6780 4.0960E 2.1780E+03 3 2.6040E+03 E+03 +03 6.4400E+0 1.3390 2.0480E 1.0890E+03 2 1.3020E+03 E+03 +03
Proposed SLA-LB
Instan ces 128 × 16
LBLBMM[44 MAXMIN[ Simple ] 39] Scheduling[43] 2.0742E +03 1.6513E+03 1.7512E+03 2.1760E +03 2.1760E+03 2.1760E+03 1.0880E +03 1.0880E+03 1.0880E+03
4.8434E+0 4 1.5440E+0 3 7.7200E+0 2
384 × Makes 16 pan Gain cost Penalty cost
5.9230E+ 03 2.5850E+04 7.2820E+ 03 6.5240E+03 3.6410E+ 03 3.2620E+03
9.0895E+0 4 1.6520E+0 3 8.2600E+0 2
640 × Makes 32 pan Gain cost Penalty cost
4.8246E+ 03 2.3450E+04 1.4952E+ 04 2.0860E+04 7.4760E+ 03 1.0430E+04
1.3647E+0 5 8.9960E+0 3 4.4830E+0 3
768 × Makes 32 pan Gain cost Penalty cost
7.1928E+ 03 3.2502E+04 2.2570E+ 04 2.4704E+04 1.1285E+ 04 1.2352E+04
2.2487E+0 5 1.2320E+0 4 6.1600E+0 3
896 × Makes 32 pan
6.0458E+ 03 4.5106E+04
1.6118E+0 5
3.6473 E+04 3.4620 E+03 1.7310 E+03
1.4181E 2.4079E +04 +04 8.1920E 4.3520E +03 +03 4.0960E 2.1760E +03 +03
2.9951 E+04 7.8580 E+03 3.9290 E+03
1.9839E 5.8887E +04 +04 1.2288E 6.5280E +04 +03 6.1440E 3.2640E +03 +03
8.0972 E+04 1.7132 E+04 8.5660 E+03
3.1336E 4.7863E +04 +04 2.0480E 2.1120E +04 +04 1.0240E 1.0560E +04 +04
1.5296E+04
1.2031 E+05 2.5012 E+04 1.2506 E+04
3.9090E 4.0551E +04 +04 4.9152E 2.5344E +04 +04 2.4576E 1.2672E +04 +04
6.3771E+03
2.2547 E+05
4.4523E 6.7829E +04 +04
3.6934E+03 4.6920E+03 2.3460E+03
3.3938E+ 03 1.3312E+04 2.9780E+ 03 4.4360E+03 1.4890E+ 03 2.2180E+03
6.2268E+03 6.4460E+03 3.2230E+03
5.1391E+03
Makes pan Gain cost Penalty cost
256 × 16
2.5186E+04 1.2593E+04
7.8551E+03 3.0592E+04
2.1571E+04
2.0366E+04
4.3520E+03
4.3520E+03
2.1760E+03
2.1760E+03
5.4433E+04
1.1063E+06
6.5280E+03
6.5280E+03
3.2640E+03
3.2640E+03
8.8834E+04
4.8376E+04
1.0880E+04
2.1120E+04
5.4400E+03
1.0560E+04
3.7283E+04
4.1852E+04
2.5334E+04
2.5344E+04
1.2672E+04
1.2672E+04
6.3274E+04
6.7157E+04
1.2398E+0 4 6.1990E+0 3
3.2872E+04
1.6436E+04
2.1336 E+04 1.0668 E+04
1.3304E+ 04 2.8908E+04 6.6520E+ 03 1.4454E+04
Gain cost Penalty cost
5.7344E 2.9568E +04 +04 2.8672E 1.4784E +04 +04
2.9568E+04
2.9568E+04
1.4784E+04
1.4784E+04
Comparison Chart 1. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 512 × 16 with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Comparison Chart 2a. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 1024 × 32 (A_U) with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Comparison Chart 2b. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 1024 × 32 (B_U) with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Comparison Chart 3. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - benchmark dataset 2048 × 64 with u_c_lolo, u_c_lohi, u_c_hilo, u_c_hihi.
Comparison Chart 4. Makespan and utilization factor comparison of the proposed TBTS and SLA-LB algorithms with SLA-MinMin [38], Opportunity Scheduling [26], MinMin [30], MaxMin [28], LB-MM [44], LB-MaxMin [39] and Simple Scheduling [43] - synthetic dataset.
6.2 Simulation results on synthetic datasets
Synthetic datasets are generated using a pseudo-random function with the instances 128 × 16, 256 × 16, 384 × 16, 640 × 32, 768 × 32 and 896 × 32.
The rand() function of the GCC compiler is used for random generation. The task set and VM capacities are generated randomly, and the ETC matrix is calculated from them.
The overall makespan, cloud utilization, gain cost and penalty cost are calculated, and the equivalent EG and EP matrices are generated. The results are compared with the SLA-MCT algorithm [38], opportunistic load balancing [26], round-robin [2], cloud Min-Max normalization [3], the LB-MaxMin algorithm [39] and the Min-Min algorithm [30]. The comparison shows that the proposed offline scheduling algorithm TBTS outperforms all the static algorithms [2, 3, 26, 30, 39] in makespan and cloud utilization, and the online scheduling algorithm provides a better allotment of jobs to the cloud than the existing online algorithm [38]. Table 14 shows the comparison of the proposed algorithms with the existing algorithms on the synthetic datasets; the comparison shows that the proposed model outperforms the existing methods even under heterogeneity and scalability.

7. Conclusion

We have proposed two task scheduling algorithms, namely the TBTS offline algorithm and the SLA-LB online scheduling algorithm, for heterogeneous systems. The algorithms were simulated and tested with 20 instances of three different benchmark datasets and with several instances of synthetic datasets. The proposed algorithms were compared with the SLA-MCT, opportunistic, Min-Min, load-balanced Max-Min, load-balanced Min-Min, round-robin and Max-Min algorithms. The proposed algorithms outperform the existing and standard algorithms in terms of makespan and cloud utilization. Even though the cloud utilization of the load-balanced Max-Min and Min-Min algorithms is equal or better in some cases, the overall quality of performance is improved by the proposed TBTS algorithm. The SLA-LB algorithm provides better cloud utilization than the existing online scheduling algorithm SLA-MCT; the gain and penalty costs were also calculated for the proposed algorithm and compared with the existing model, showing that the proposed algorithms allocate tasks to the cloud dynamically with better performance and only slight changes in gain and penalty cost. The proposed algorithms address QoS in terms of makespan, cloud utilization and cost effectiveness; in future work we plan to also consider energy consumption as a parameter and to achieve better results for it.
8. References
[1] Gao Y, Guan H, Qi Z, Song T, Huan F, Liu L (2014) Service level agreement based energy-efficient resource management in cloud data centers. Comput Electr Eng 40:1621–1633 [2] Li J, Qiu M, Ming Z, Quan G, Qin X, Gu Z (2012) Online optimization for scheduling preemptable tasks on IaaS cloud system. J Parallel Distrib Comput 72:666–677 [3]Panda SK, Jana PK (2015) Efficient task scheduling algorithms for heterogeneous multi-cloud environment. J Supercomput 71(4):1505–1533 [4] Durao F, Carvalho JFS, Fonseka A, Garcia VC (2014) A systematic review on cloud computing. J. Supercomput 68(3):1321–1346 [5] Son S, Jung G, Jun SC (2013) An SLA-based cloud computing that facilitates resource allocation in the distributed data centers of a cloud provider. J Supercomput 64(2):606–637 [6] Cloud Service Level Agreement Standardisation Guidelines. http://ec.europa.eu/information_society/newsroom/cf/dae/document.cfm?action=display&doc_id=6138. Accessed on 4 June 2015 [7] Liu L,Mei H, XieB(2016) Towards a multi-QoS human-centric cloud computing load balance resource allocation method. J Supercomput 72(7):2488–2501 [8] Son S, Kang D, Huh SP, Kim W, Choi W (2016) Adaptive trade-off strategy for bargaining-based multi-objective SLA establishment under varying cloud workload. J Supercomput 72(4):1597–1622 [9]Ranaldo N, Zimeo E (2016) Capacity-driven utility model for service level agreement negotiation of cloud services. Future Gen Comput Syst 55:186–199 [10]Baset SA (2012)Cloud SLAs:present and future. ACM SIGOPS Oper Syst Rev 46:57–66 [11] Emeakaroha VC, Netto MAS, Calheiros RN, Brandic I, Buyya R, Rose CAFD (2012) Towards autonomic detection of SLA violations in cloud infrastructures. Future Gen Comput Syst 28:1017–1029 [12] Maurer M, Emeakaroha VC, Brandic I, Altmann J (2012) Cost-benefit analysis of an SLA mapping approach for defining standardized cloud computing goods. Future Gen Comput Syst 28:39–47 [13] Wu F, Wu Q, Tan Y (2015) Workflow scheduling in cloud: a survey. J Supercomput 71(9):3373–3418 [14] Etminani, Kobra, and M. Naghibzadeh. "A min-min max-min selective algorihtm for grid task scheduling." In Internet, 2007. ICI 2007. 3rd IEEE/IFIP International Conference in Central Asia on, pp. 1-7. IEEE, 2007.
[15]BraunFN(2015)https://code.google.com/p/hcsp-chc/source/browse /trunk /AE /Problem Instances/HCSP/Braun_et_al/u_c_hihi.0?r=93. Accessed on 3 June 2015 [16] Kokilavani, T., and DI George Amalarethinam. "Load balanced min-min algorithm for static meta-task scheduling in grid computing." International Journal of Computer Applications 20, no. 2. (2011):pp 43-49. [17] XiaoShan H, XianHe S, Laszewski GV (2003) QoS guided min-min heuristic for grid task scheduling.J Comput Sci Technol 18(4):442–451 [18] Braun TD, Siegel HJ, Beck N, Boloni LL, Maheswaran M, Reuther AI, Robertson JP, Theys MD, Yao B, Hensgen D, Freund RF (2001) A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. J Parallel Distrib Comput 61(6):810–837. [19] Fidanova, Stefka, and Mariya Durchova. "Ant algorithm for grid scheduling problem." Large-Scale Scientific Computing (2006) : pp 405412. [20] Garg, Ritu. "Multi-Objective Ant Colony Optimization for Task Scheduling in Grid Computing." In Proceedings of the Third International Conference on Soft Computing for Problem Solving, pp.133-141. Springer, New Delhi, 2014. [21]Ali S, Siegel HJ, Maheswaran M, Hensgen D, Ali S (2000) Task execution time modeling for heterogeneous computing systems. In: 9th Heterogeneous Computing Workshop. IEEE Computer Society,pp 185–200 [22] Deb, Kalyanmoy, Samir Agrawal, Amrit Pratap, and Tanaka Meyarivan."A fast elitist non-dominated sorting genetic algorithm for multiobjective optimization: NSGA-II." In International Conference on Parallel Problem Solving From Nature, pp. 849-858. Springer, Berlin, Heidelberg, 2000. [23]Ye, Guangchang, Ruonan Rao, and Minglu Li. "A multiobjective resources scheduling approach based on genetic algorithms in grid environment." In Grid and Cooperative Computing Workshops, 2006. GCCW'06. Fifth International Conference on, pp. 504-509. IEEE, 2006. [24]Dai, Yuan-Shun, Gregory Levitin, and Kishor S. Trivedi. "Performance and reliability of tree-structured grid services considering data dependence and failure correlation." IEEE Transactions on Computers 56, no. 7 (2007). [25] Ivanovic D, Carro M, Hermenegildo M (2011) Constraint-based runtime prediction of SLA violation in service orchestrations. In: 9th International Conference on Service-oriented Computing. Springer, Berlin, pp 62–76. [26] Wang S, Yan K, Liao W, Wang S (2010) Towards a load balancing in a three-level cloud computing network. In: 3rd IEEE International Conference on Computer Science and Information Technology, vol 1, pp 108–113 [27]Freund RF, Gherrity M, Ambrosius S, Campbell M, Halderman M, Hensgen D, Keith E, Kidd T,Kussow M, Lima JD, Mirabile F, Moore L, Rust B, Siegel HJ (1998) Scheduling resources in multiuser, heterogeneous, computing environments with SmartNet. In: 7th IEEE Heterogeneous Computing Workshop, pp 184–199 [28]Panda SK, Jana PK (2014) An efficient task scheduling algorithm for heterogeneous multi-cloud environment. In: 3rd International Conference on Advances in Computing, Communications and Informatics, IEEE, pp 1204–1209 [29]Panda SK, Gupta I, Jana PK (2015) Allocation-aware task scheduling for heterogeneous multi-cloud systems. In: 2nd International Symposium on Big Data and Cloud Computing Challenges, vol 50. Procedia Computer Science, Elsevier, pp 176–184
[30]Ibarra OH, Kim CE (1977) Heuristic algorithms for scheduling independent tasks on nonidentical processors. J Assoc Comput Mach 24(2):280–289 [31]AbdullahiM, Ngadi MA, Abdulhamid SM(2016) Symbiotic organism search optimization based task scheduling in cloud computing environment. Future Gen Comput Syst 56:640–650 [32]Loo SM, Wells BE (2006) Task scheduling in a finite-resource, reconfigurable hardware/software codesign environment. INFORMS J Comput 18(2):151–172 [33]Demiroz B, Topcuoglu HR (2006) Static task scheduling with a unified objective on time and resource domains. Comput J 49(6):731–743 [34] Maheswaran M, Ali S, Siegel HJ, Hensgen D, Freund RF (1999) Dynamic mapping of a class of independent tasks onto heterogeneous computing systems. J Parallel Distrib Comput 59:107–131 [35]XiaoShan H, XianHe S, Laszewski GV (2003) QoS guided min-min heuristic for grid task scheduling.J Comput Sci Technol 18(4):442–451 [36]Decai H, Yuan Y, Li-jun Z, Ke-qin Z (2009) Research on tasks scheduling algorithms for dynamic and uncertain computing grid based on a+bi connection number of SPA. J Softw 4(10):1102–1109 [37]Miriam DDH, Easwarakumar KS (2010) A double min-min algorithm for task metascheduler on hypercubic P2P grid systems. Int J Comput Sci Issues 7(5):8–18. [38]SLA-based task scheduling algorithms for heterogeneous multi-cloud environment, Sanjaya K. Panda1 , Prasanta K. Jana2, J Supercomput 73:2730–2762, © Springer Science+Business Media New York 2017. [39] Max–Min Task Scheduling Algorithm for Load Balance in Cloud Computing, Proceedings of International Conference on Computer Science and Information Technology, Advances in Intelligent Systems and Computing 255, DOI: 10.1007/978-81-322-1759-6_53,Springer India 2014. [40] Panda SK, Jana PK (2016) Normalization-based task scheduling algorithms for heterogeneous multicloud environment, information systems frontiers. Springer, Berlin. [41] Xhafa F, Carretero J, Barolli L, Durresi A (2007) Immediate mode scheduling in grid systems. Int J Web Grid Serv 3(2):219–236 [42] Xhafa F, Barolli L, Durresi A (2007) Batch mode scheduling in grid systems. Int J Web Grid Serv 3(1):19–37 [43] F. Alharbi, K.S.A. Rabigh, Simple scheduling algorithm with load balancing for grid computing, Asian Trans. Comput. 2 (2) (2012) 8–15. [44]T.Kokilavani, Dr. D.I. George Amalarethinam, Load Balanced Min-Min Algorithm for Static Meta-Task Scheduling In Grid Computing,International Journal of Computer Applications (0975 –8887)Volume 20–No.2, April 2011.
Conflict of Interest