Software-defined network assisted packet scheduling method for load balancing in mobile user concentrated cloud


Computer Communications 150 (2020) 144–149


V. Deeban Chakravarthy ∗, B. Amutha
Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India

ARTICLE INFO

Keywords: Congestion control; Estimated transmission; Load balancing; Packet scheduling; SDN

ABSTRACT The cloud network provides a variety of applications and services to meet increasing user demands. With its dedicated components, the cloud enables anywhere, anytime access to resources for its associated users. A significant factor that limits the performance of the cloud is congestion, caused by unpredictable traffic and asynchronous user demands. Though load balancers serve the purpose of optimizing network traffic, congestion is unavoidable when the level of user demand increases. In this manuscript, a Resource-aware Packet-level Scheduling with Load Balancing (RPS-LB) algorithm is proposed to minimize congestion. In RPS, the incoming network traffic is disseminated through instantaneous neighbors by analyzing their Store and Forward (SF) factor, which is pre-estimated through the Estimated Transmission Count (ETX) metric. With this analysis, the reliability of an instantaneous neighbor for handling the current packet-level transmission is verified. The algorithm addresses user-level and flow-level issues to mitigate stagnancy and overflow of network traffic. Besides, the proposed load-balancing algorithm is defined as a linear optimization problem within definite constraints to prevent degradation proportional to user density. The performance of the proposed system is assessed using the evaluation parameters queue utilization, request success rate, request failure rate, response time and link utilization. The simulation results show that the proposed system performs better than the existing systems.

1. Introduction Software Defined Networking (SDN) is a network paradigm that decouples the data plane from the control plane, can be easily programmed, and has a global view of the entire network. Cloud computing is a prominent technology that enables sharing of resources in a distributed manner to serve applications. Beyond local storage and computational components, resources are shared on demand in a ubiquitous manner, depending on the application demands of the user. Resources like servers, storage and services are shared across end-users, industries and third parties, backboned by high-performance parallel computing [1,2]. The cloud platform is a large-scale, dynamic, heterogeneous infrastructure that spans a wide physical region through different communication technologies. Applications and services provisioned/requested from the cloud differ with respect to resources and user demands. Therefore, managing the characteristics, resource allocation and workload of the requirements becomes a tedious task, for which dependability, stability and reliability are necessary [3,4]. With the increase in end-user demands and service requirements, the cloud platform balances the requirements by deploying multiple servers, distributed storage and hosting services. Shared users across

the globe access the servers from their location on demand, concurrently, by means of rents. The challenging task of a cloud Service Provider (SP) is to ensure seamless, uninterrupted service responses with maximum resource utilization. The cloud achieves seamlessness by scheduling and distributing the incoming traffic requests across the available resources and computational servers [5–7]. Managing incoming network traffic and proper resource allocation in Data Centers (DCs) helps to retain the Quality of Service (QoS) in the cloud. Data centers categorize the type of incoming traffic before assigning resources and responding to users. As mentioned earlier, cloud services are distinct from each other and require a like-class load balancing technique. Load balancers are a common solution for addressing different traffic types; their delay and expensive nature, however, are drawbacks. Besides, constructing a load balancer requires dedicated hardware and controlling software, for which integration and virtualization techniques have been introduced [8–10]. The objectives of the manuscript are as follows: (1) to minimize congestion by employing the Resource-aware Packet-level Scheduling with Load Balancing (RPS-LB) method; (2) to apply traffic scheduling and neighbor selection methodologies that address user-level and flow-level issues to mitigate stagnancy and overflow of network traffic.

∗ Corresponding author. E-mail addresses: [email protected] (V. Deeban Chakravarthy), [email protected] (B. Amutha).

https://doi.org/10.1016/j.comcom.2019.11.012 Received 27 May 2019; Received in revised form 6 October 2019; Accepted 11 November 2019 Available online 19 November 2019 0140-3664/© 2019 Elsevier B.V. All rights reserved.


Kang and Choo proposed an SDN-enhanced Inter Cloud Manager (S-ICM) [21] for effective distribution of unplanned load. It employs a decision-making process and control messages for network flow allocation and delay minimization. An SDN-supported predictive load balancing technique [22] is developed using a neural network and a k-means learning process to encourage delay-less traffic propagation in cloud services. With a wildcard-mask configuration, the load is scheduled directly on the switches and routers as predicted from the users. The contrary part is that the prediction does not hold for distinct user requirements. A time-dependent load-balancing service degradation framework [23] is designed to retain the performance of optical data center networks. Bandwidth allocation in the network is facilitated by assigning link weights to prevent congestion; the framework achieves a lower blocking rate with improved profit. SDCon, a network topology-aware VM placement algorithm, reduces the overall network traffic [24]. An intelligent approach of real-time processing and dynamic scheduling can make full use of network resources: the traffic among data centers is classified into various types, and strategies are proposed for these types of real-time traffic [25,26]. From the above discussion, the load balancing techniques proposed so far improve request service rates despite incoming network traffic. Their drawback is that the minimum optimal number of requests to be transmitted to utilize links and resources to the maximum is not estimated. Pre-estimation of requests and resource allocation minimizes resource exploitation and also increases the rate of responses.

The contributions of the manuscript are as follows:
• Designing a packet scheduling algorithm that considers the time and capacity of the resource to improve the request serving rate
• An optimal neighbor selection method for minimizing request serving time and improving link utilization
• Analysis of the proposed load balancing technique through extensive simulation and a comparative study to verify its consistency
The rest of the manuscript is organized as follows: Section 2 describes the related works; Section 3 explains Resource-aware Packet-level Scheduling with Load Balancing; Section 4 describes the proposed methodology; Section 5 presents the system settings and performance analysis; Section 6 concludes the paper. 2. Related works A load balancing approach based on resource scheduling for mobile users is proposed in [11] to improve the request processing rate in the cloud. This approach assimilates local and public cloud services to serve user requirements; the synchronization time observed in mapping two distinct services consumes additional time. The SDN-based load balancing proposed by Ahmed Abdelltif [12] intends to improve resource utilization and minimize response delays. This model requires a dedicated controller with OpenFlow switches that facilitates dynamic load balancing; its architecture requires an external monitoring application running on a distributed system. The authors in [13] proposed an SDN-based load balancing scheme for machine-to-machine networks. Unlike [12], the monitoring and controller operations are confined to a single distributed system to disseminate traffic so as to achieve QoS. The traffic is disseminated with the guidance of flow tables that require periodic updates as per resource availability. The authors in [14] proposed a virtualization framework that addresses the issues of resource isolation and load balancing.
A committed link discovery algorithm facilitates load balancing, and a traffic-matrix-supported virtual machine restricts the resource isolation problem. Though the performance of the virtualization framework is good, it fails in handling the bandwidth constraints of the server link. A load balancing scheme for assigning different-sized data to heterogeneous cloud networks is proposed in [15]. The scheme reduces overloading of data by identifying the required and non-required data at the time of access; the imbalance in resource utilization is confined to improve the quality of service. Energy-aware traffic-efficient load balancing [16] is a joint optimization scheme designed for data center cloud networks. The fast iterative shrinkage-thresholding algorithm designed for the conjoint optimization achieves lower battery expenditure and network cost. A scalable two-tier resource management algorithm is proposed in [17] for facilitating admission control and load balancing in the cloud. The algorithm employs a fuzzy controller and Nash equilibrium for admission control and load balancing, respectively, and improves the request handling rate of the cloud with lower loss. Cloud Load Balancing (CLB) [18] addresses the issue of uneven server loading caused by static load balancing. CLB considers the power and data handling range of the servers when assigning loads, preventing uneven load distribution. The authors in [19] introduced a service-oriented load balancing mechanism for resolving disproportionate load distribution and scalability issues in SDN. The load is evenly disseminated across the controllers through fore-hand flow-request and service-type updates. A dynamic QoS optimization routing assigns the path that satisfies the limitations in data dissemination; this load balancing model achieves 79% link utilization.
The Distributed Flow-by-Flow Fair Routing (DFFR) algorithm [20] is designed to handle and disseminate load in data center networks. The algorithm is scalable and adaptive and aims to improve the resource utilization of the network.

3. Resource-aware packet-level scheduling with load balancing Resource-aware packet scheduling is introduced to organize incoming traffic requests and maintain congestion-less communication in the cloud. The proposed algorithm is deployed in the controller plane of the network. Fig. 1 illustrates the architecture of the considered cloud model, which is categorized into cloud, controller and user planes. The planes are described as follows: 3.1. Cloud plane The components of the cloud plane include resources like servers, storage, applications and services. The cloud plane shares resources in one of three categories: software as a service, infrastructure as a service, or platform as a service. The cloud gateway acts as an interface between the control plane and the cloud plane, in which the cloud network consists of servers, storage, services, network infrastructure and applications; it also forwards requests to the cloud network. In this architecture, a conventional cloud model is used. 3.2. Controller plane The controller plane is slotted between the user and cloud planes. The controller is responsible for segregating and admitting requests to the cloud. SDN switches (SDNS) interfaced with the SDN Controller (SDNC) are responsible for accepting traffic requests from the user plane. Packet-level scheduling, resource information maintenance and other functions are executed by the controller. 3.3. User plane The user plane is a collection of mobile users and gateways that establishes a communication link with its upper layers. Mobile users are physical hardware devices with smart capabilities and a software-based user interface. User requests are initiated from the software and reach the cloud server through the controller plane.
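The three-plane request path described above can be sketched in a few lines. This is a minimal illustrative model, not the authors' implementation; the class and resource names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CloudPlane:
    # Resources exposed by the cloud plane (names are illustrative).
    resources: dict = field(default_factory=lambda: {"server-1": "idle"})

    def serve(self, request: str) -> str:
        # A free/unassigned resource responds; otherwise the request is rejected.
        for name, state in self.resources.items():
            if state == "idle":
                return f"{request} served by {name}"
        return f"{request} rejected"

@dataclass
class ControllerPlane:
    cloud: CloudPlane

    def admit(self, request: str) -> str:
        # SDNC segregates and admits requests toward the cloud gateway.
        return self.cloud.serve(request)

@dataclass
class UserPlane:
    controller: ControllerPlane

    def send(self, request: str) -> str:
        # Mobile-user requests reach the cloud only via the controller plane.
        return self.controller.admit(request)

user = UserPlane(ControllerPlane(CloudPlane()))
print(user.send("req-1"))  # req-1 served by server-1
```

The point of the sketch is the strict layering: the user plane never touches the cloud plane directly, matching the architecture of Fig. 1.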

V. Deeban Chakravarthy and B. Amutha

Computer Communications 150 (2020) 144–149

Fig. 2. State interval representations.

On the other hand, if the resource is assigned previously or is inaccessible, the request is rejected. If t_a and t_i represent the active and idle times of a cloud resource R, then the availability of the resource Δa_t is estimated using Eq. (1):

Δa_t = t_a / (t_a + t_i)   (1)

An illustration of the response and reject intervals with respect to a request time interval is given in Fig. 2. Let X and Y denote the active and idle interval states S of a resource R with respect to time. Depending on the user application and service demand, resource availability and idle time are not constant, i.e., x ≠ y. A resource is mapped to a request based on the service level agreement, valid for a time t_sla. The interval slot of X is estimated as in Eq. (2):

X = t_sla / t   (2)

where t = t_a + t_i. Therefore, the total number of instances at which R is available, ΔA_t, is finalized using Eq. (3):

ΔA_t = 1 − xy = 1 − (y ∗ t_sla) / t   (3)

Within the computed ΔA_t, availability is further influenced by the capacity of the resource. Let c_p represent the processing capacity of the resource; the availability of the resource with respect to capacity Δa_c is expressed as Eq. (4):

Δa_c = Σ_s (c_p ∗ Δa_t) / max c_p   (4)

where max c_p is the maximum processing capacity of the resource. Since resource availability is estimated over the active time, Eq. (4) is modified as:

Δa_c = Σ_x (c_p ∗ ΔA_t) / max c_p   (5)
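The availability expressions of Eqs. (1)–(5) are simple ratios and can be transcribed directly. The sketch below assumes the capacity sum runs over a given list of per-state capacity shares; function names are illustrative.

```python
def availability_time(t_a: float, t_i: float) -> float:
    """Eq. (1): fraction of one active+idle cycle in which the resource is active."""
    return t_a / (t_a + t_i)

def interval_slot(t_sla: float, t_a: float, t_i: float) -> float:
    """Eq. (2): SLA-valid slot X relative to the cycle length t = t_a + t_i."""
    return t_sla / (t_a + t_i)

def total_available_instances(x: float, y: float) -> float:
    """Eq. (3): total instances at which the resource is available."""
    return 1 - x * y

def availability_capacity(c_p_shares, max_c_p: float, delta_a_t: float) -> float:
    """Eqs. (4)/(5): time availability weighted by processing capacity."""
    return sum(c * delta_a_t for c in c_p_shares) / max_c_p

# Example: resource active 6 s, idle 2 s, SLA window of 4 s.
da_t = availability_time(6, 2)   # 0.75
x = interval_slot(4, 6, 2)       # 0.5
```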

Fig. 1. Cloud model with SDN.

4. Proposed methodology The gateway (GW) in the user plane propagates volumes of traffic requests to the SDNS, which forwards them to the SDNC. SDNS is non-configurable and hence acts as a multicasting device, accepting one-to-many or many-to-many requests from the user plane. The SDNC process is vital and is discussed in two operation modes: traffic scheduling and neighbor selection.

4.1. Traffic scheduling SDNC schedules the incoming traffic and admits requests based on the availability of the resources. Resource availability depends on two factors: the time and capacity attributes associated with the resource. A cloud resource is said to be available if it is active at the requested time, which refers to the current access of the resource. Cloud resources exhibit reject and response states during their available interval. If a cloud resource is free/unassigned, it responds to the request at any time instance t.

The maximum availability at a time interval t_i with the sharable resource capacity is thus estimated by the SDNC. After estimating the availability, SDNC performs traffic scheduling that meets the requirements of the resource availability. Scheduling is designed as a dynamic First Come First Serve (FCFS) process that considers the estimated transmission count from the user plane. 2D scheduling is incorporated to handle congested and non-congested flows. Let M denote the number of request messages transmitted for the i-th flow, allocated at a time slot t_si. The estimated rate of scheduled bytes for transmission is given by Eq. (6):

μ_i = t_si ∗ M   (6)
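Eq. (6) and its inverse (the paper's Eq. (13), M = μ_i / t_si) are a direct rate/count conversion; a one-to-one transcription, with illustrative function names:

```python
def scheduled_rate(t_si: float, m: int) -> float:
    """Eq. (6): estimated rate of scheduled bytes for flow i,
    from the slot length t_si and the number of request messages M."""
    return t_si * m

def messages_from_rate(mu_i: float, t_si: float) -> float:
    """Eq. (13): number of request messages recovered from the rate."""
    return mu_i / t_si

# Example: 8 request messages allocated in a 0.5 s slot.
mu_i = scheduled_rate(0.5, 8)  # 4.0
```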

Requests from mobile users are of variable sizes ΔL, depending upon the application they are coupled with. For uninterrupted traffic flow, the following optimization condition must be achieved to ensure a


minimum number of requests is admitted to the cloud plane, expressed as Eq. (7):

min Σ_{j=1}^{M} (t_sj ∗ μ_j) / (2 t_sj − 1)   (7)

the SDNC communicates directly. The scheduling process streamlines volumes of requests to be mapped with appropriate neighbors in a defined time slot. The next level of transmission, however, depends on the request processing time at the cloud plane, until which the received requests have to be preserved by the neighbors. Retaining the messages improves the success rate of request processing by controlling drops. The storage capacity and the message size determine how long a message remains in the neighbor or the cloud plane. The queue utilization of the neighbor is estimated using Eq. (12):

q_len = (1 − w_t) ∗ q_len + q_u* ∗ w_t   (12)
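Eq. (12) is an exponentially weighted moving average of the neighbor's queue occupancy: the weight w_t decides how fast the estimate tracks the current observation q_u*. A minimal sketch (function name illustrative):

```python
def queue_utilization(q_len: float, q_u: float, w_t: float) -> float:
    """Eq. (12): exponentially weighted queue-utilization estimate.
    q_len: previous queue-length estimate, q_u: current observation,
    w_t: transmission weight factor, 0 < w_t < 1."""
    if not 0 < w_t < 1:
        raise ValueError("w_t must lie in (0, 1)")
    return (1 - w_t) * q_len + q_u * w_t

# Example: previous estimate 10 packets, current observation 20, weight 0.25.
print(queue_utilization(10, 20, 0.25))  # 12.5
```

A small w_t makes the estimate smooth and resistant to bursts; a large w_t makes it react quickly, which matters when neighbors are selected per time slot.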

Eq. (7) must be confined to Σ_{j=1}^{M} t_sj μ_j ≤ c_p ∗ Δa_t for it to achieve congestion-free request processing. SDNC stores the requests in a temporary dispatching queue and transmits them in their order of arrival. The request messages are mapped with the contemporary time slot for transmission. Request mapping RM(t) with time is represented as Eq. (8):

         ⎡ m_1·t_s1  ⋯  m_1·t_si ⎤
RM(t) =  ⎢    ⋮      ⋱     ⋮     ⎥   (8)
         ⎣ m_j·t_s1  ⋯  m_j·t_si ⎦

This is simplified to estimate the time of mapping as in Eq. (9):

RM(t) = μ_i(t) − M(t) + RM(t)   (9)

In Eq. (12), q_len is the queue length, q_u* is the current queue estimation with respect to the number of packets stored, and w_t is the transmission weight factor, 0 < w_t < 1. The number of request messages transmitted in the i-th flow to the neighbors from the control plane is estimated from Eq. (13) as

M = μ_i / t_si   (13)
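The FCFS dispatching queue together with the capacity constraint Σ t_sj μ_j ≤ c_p ∗ Δa_t can be sketched as follows. This is an interpretation of the admission rule, not the authors' code; signatures and values are illustrative.

```python
from collections import deque

def dispatch_fcfs(rates, slots, c_p, delta_a_t):
    """Admit queued requests in arrival order while the Eq. (7)
    constraint sum(t_sj * mu_j) <= c_p * delta_a_t holds.
    rates: per-flow scheduled rates mu_j; slots: matching slot lengths t_sj.
    Returns (admitted, deferred)."""
    queue = deque(zip(slots, rates))   # temporary dispatching queue
    budget = c_p * delta_a_t           # capacity available this interval
    admitted, load = [], 0.0
    while queue:
        t_sj, mu_j = queue[0]
        if load + t_sj * mu_j > budget:
            break                      # remaining requests wait (t_w) for the next slot
        load += t_sj * mu_j
        admitted.append(queue.popleft())
    return admitted, list(queue)

# Example: three flows of load 2.0 each against a budget of 5.0 * 0.9 = 4.5.
adm, deferred = dispatch_fcfs([4.0, 4.0, 4.0], [0.5, 0.5, 0.5],
                              c_p=5.0, delta_a_t=0.9)
print(len(adm), len(deferred))  # 2 1
```

Breaking at the first over-budget flow (rather than skipping it) preserves the order-of-arrival property the scheduler requires.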

At t_sla, if q_len < M, the neighbor that satisfies this condition is opted for transmitting request messages. The time variation affects the above condition when t_a ≤ t_sla or RM(t) + t_t + t_w ≤ t is not satisfied. In this case, neighbor selection varies with a difference in time slot t_si. There are two cases of analysis when the above conditions are not satisfied.

Case I: The queue underflows, i.e., q_u*(RM(t)) < q_len. Analysis: The queue is ready to accept messages at RM(t), and the t_t required is less than t_w. As the transmission time is very small, it can be regarded that t_t ≈ 0. Therefore, t_a + t_i = 2t_i is the wait time when the resource is idle, and X = t_sla/t. From this, the number of messages that can be transmitted to the neighbor from the control plane is x, 0 < x < M. The queue utilized is now (q_u* + x), which requires a time of 2t_i when the resource is idle or a time t_sla when the resource is active. From Eq. (7), the minimum number of messages that can be serviced is x besides q_u*, satisfying Σ_{j=1}^{x} μ_j ≤ c_p ∗ Δa_t, 0 < x < M.

Case II: The queue overflows if new messages are transmitted, i.e., q_u*(RM(t)) > q_len. Analysis: In this case, the neighbor storage is full and no more request messages can be transmitted to the cloud plane. Therefore, the new messages have to wait in the scheduler for t_w time. The wait time of the messages depends on the first service response in the cloud. If the cloud resource is available instantly for the accepted request message from the neighbor, then t_w = t_sla − t_a and the next transmission from SDNC resumes after the first message is responded to. Eq. (7) is satisfied by transmitting one message at any instance of RM(t), and Σ_{j=1}^{1} μ_j ≤ c_p ∗ Δa_t is the optimality achieved. This repeats while q_u*(RM(t)) > q_len until x = M.

On the other hand, if the resource is not available instantly because it has entered a reject state, then t_w = t_i − t_a is the wait time of the message in the scheduler. From Eq. (7), the optimal condition Σ_{j=1}^{1} μ_j ≤ c_p ∗ Δa_t is achieved; it describes the remaining messages to be dispatched to the cloud plane. The minimum number of messages transmitted here is (M − x), observed at ΔA_t. The remaining messages to be transmitted form the queue utilization of the neighbor in the next set of sequences. The process of neighbor selection is illustrated in Fig. 3.
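The underflow/overflow case analysis reduces to a capacity check per neighbor: forward all M messages to a neighbor whose queue can absorb them (Case I), otherwise forward only the slack and let the rest wait (Case II). A minimal sketch of that decision, with hypothetical gateway names:

```python
def select_neighbor(neighbors, m):
    """Pick the first neighbor whose queue can absorb M messages
    (Case I, underflow: q_u + M <= q_len); if every queue would
    overflow (Case II), pick the one with the most slack and forward
    only as many messages as fit, leaving the rest in the scheduler.
    neighbors: list of (name, q_u, q_len) tuples."""
    for name, q_u, q_len in neighbors:
        if q_u + m <= q_len:
            return name, m          # Case I: all M messages forwarded
    # Case II: overflow everywhere; forward whatever the largest slack allows.
    name, q_u, q_len = max(neighbors, key=lambda n: n[2] - n[1])
    return name, max(q_len - q_u, 0)

print(select_neighbor([("gw-1", 9, 10), ("gw-2", 2, 10)], m=5))  # ('gw-2', 5)
```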

The transmission time t_t between the controller plane and the cloud plane varies with the availability of the resource and the communication link. Therefore, RM(t) + t_t ≤ t is the optimal time at which M reaches the cloud plane and ensures that the request M will be processed. More specifically, a request that reaches the cloud plane when t_a ≤ t_sla and μ_i ≤ Δa_c will be responded to within the same time limit. The dynamicity of the scheduling process varies with variable μ_i and t_si > t_a. In this case, the wait time of the scheduler is determined to prevent the request from being dropped. Let t_w denote the wait time of the message in the scheduled queue, estimated as in Eq. (10):

t_w = t_i − t_a,  t_i > t_a   (10)

Now, RM(t) + t_t + t_w ≤ t ensures that M will be processed; congruently, M will be processed in the t_{i+1} interval when t_{a+1} > t_i. Therefore, the optimal time at which the resource is available after a reject phase is given in Eq. (11):

t = t_a + t_i + t_w   (11)

Substituting the value of t_w from Eq. (10) into Eq. (11), the total time t = 2t_i is the estimated idle time of the resource, i.e., t = C ∗ t_i, where C = occurrence(t_i) − 1 is a variable integer. The variable integer is subject to change with M and μ_i. SDNC accepts multiple flows from SDNS and organizes them in the order of arrival in the transmission queue. The transmission queue of the available neighbor gateway that interfaces with the cloud plane is selected. The neighbor gateway that satisfies Eq. (7) under RM(t) + t_t ≤ t or RM(t) + t_t + t_w ≤ t is selected if the transmission occurs in the first t_si or in the following slots, respectively. Prior to the scheduling process, SDNC confirms the availability of the resource and the t_sla of the incoming traffic request. If the active time of the resource is less than t_sla, the neighbor with the ability to forward the request is selected. If the resource is unavailable/idle, the message is made to wait for a time t_a + t_i in the scheduler. If the amount of incoming traffic increases, the messages are moved to the next available neighbor. In this case, the transmission is probed to the cloud plane after t_w time, ensuring the availability of the resource. The adversary to be addressed here is the storage capacity of the neighbors and their queue utilization rate. Improper selection of neighbors without considering their storage and message handling capacity will result in congestion and request drops. This is addressed in the neighbor selection phase discussed in the following section.
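Eqs. (10) and (11) can be checked numerically: with t_w = t_i − t_a, the post-reject availability time t = t_a + t_i + t_w collapses to 2t_i, as the text states. A small sketch (function names illustrative):

```python
def wait_time(t_i: float, t_a: float) -> float:
    """Eq. (10): scheduler wait time, defined for t_i > t_a."""
    if t_i <= t_a:
        return 0.0                      # resource reachable without waiting
    return t_i - t_a

def time_after_reject(t_a: float, t_i: float) -> float:
    """Eq. (11): earliest time the resource is available after a reject
    phase; substituting Eq. (10) gives t = 2 * t_i."""
    return t_a + t_i + wait_time(t_i, t_a)

# Example: active 3 s, idle 5 s -> t = 3 + 5 + 2 = 10 = 2 * t_i.
print(time_after_reject(3, 5))  # 10
```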

5. System settings and performance analysis The proposed RPS technique is evaluated using Mininet 2.0 on Ubuntu 14.10. The physical configuration of the system includes a 32-bit processor with a 2.1 GHz clock speed and 8 GB of memory. The proposed RPS is compared with the existing Load-balancing-aware Cloud Resource Scheduling (LBCRS) [11] and Distributed Geographical Load Balancing (DGLB) [16] on the metrics queue utilization, success rate, failure rate, response time and link utilization. Table 1 lists the remaining simulation parameters used for performance evaluation.

4.2. Neighbor selection Communication between the controller and cloud planes is established using wireless infrastructures like gateways/access points. These infrastructures are termed neighbors of the control plane, through which


Fig. 3. Neighbor selection process.

Table 1
Simulation parameters and values.

Parameter                   Value
Number of mobile users      25
Request message size        512–4096 Bytes
Maximum load                0.2–0.8
Flush time of a message     1800 ms
Bandwidth                   5–200 Mbps
Request rate                300 requests/s

Fig. 5. Request success rate comparison.

Fig. 6. Comparison of request failure rate.

Fig. 4. Comparison of queue utilization.

5.1. Queue utilization Fig. 4 illustrates the comparison of queue utilization observed in the existing and proposed load balancing techniques. The number of requests increases with time, which requires underflow queue storage to ensure fewer drops. The number of messages M transmitted is less than q_len, the utilization (q_u* + x) of the cloud gateway storage. This ensures that a variable number of packets is transmitted without exceeding the range of the current queue. In the other methods, the queue overflows because the request packets are not scheduled with storage consideration.

Fig. 7. Response time comparison.

t_w + t_t). The occurrence of both cases is confined by optimal neighbor selection and scheduling based on resource availability. When the minimum number of requests handled satisfies Eq. (7) such that Σ_{j=1}^{1} μ_j ≤ c_p ∗ Δa_t holds, the rate of failure is low.

5.2. Request success rate An increase in network bandwidth increases the number of requests processed per unit time. This can lead to storage overflow, resulting in request failures. In the proposed RPS, the t_w of a request message is estimated beforehand, depending on resource availability. If the resource is unavailable, the request experiences t_w in the scheduler; else, the message is transmitted in t_t. Therefore, the overflow experienced in a queue is limited by selecting appropriate neighbors that achieve t_w/t_t with knowledge of resource availability. Hence, the number of packets passing the scheduler and GW increases, achieving better success rates (Fig. 5).

5.4. Response time The comparison of response time with respect to varying request density is illustrated in Fig. 7. The number of requests being served lies within t_sla when the resource is active and requires (t_a + t_i) when the resource is idle. Besides, C and t_t are the other time factors to be considered. The response time in RPS is lower than in the other methods, as the modeled ΔA_t defines the acceptance time of the request. X slots are allocated for the incoming M messages, which are not made to experience additional t_w with respect to RM(t). The response time is evaluated using the equation given below:

5.3. Request failure rate

(t_a + t_i) or (t_a + t_i + t_w + t_t)

The comparison of request failure rate between the existing and proposed techniques is portrayed in Fig. 6. In the proposed RPS, a service request is dropped in two cases, i.e., (q_u* + x) > q_len and t_sla > (t_a + t_i +

where (t_a + t_i) holds for the active resource state and (t_a + t_i + t_w + t_t) for the idle resource state. The average response time is evaluated using the


Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References
[1] M. Carvalho, W. Cirne, F. Brasileiro, J. Wilkes, Long-term SLOs for reclaimed cloud computing resources, in: Proceedings of the ACM Symposium on Cloud Computing - SOCC 14, 2014.
[2] B. Kang, H. Choo, A cluster-based decentralized job dispatching for the large-scale cloud, EURASIP J. Wireless Commun. Networking 2016 (1) (2016).
[3] S. Bhardwaj, L. Jain, S. Jain, Cloud computing: a study of infrastructure as a service (IaaS), Int. Eng. Inf. Technol. 2 (1) (2010) 60–63.
[4] M. Carvalho, W. Cirne, F. Brasileiro, J. Wilkes, Long-term SLOs for reclaimed cloud computing resources, in: Proceedings of the ACM Symposium on Cloud Computing, ACM, 2014, pp. 1–13.
[5] A. Paya, D.C. Marinescu, Energy-aware load balancing and application scaling for the cloud ecosystem, Int. J. Mod. Trends Eng. Res. 4 (3) (2017) 155–161.
[6] B.P. Rimal, M. Maier, Workflow scheduling in multitenant cloud computing environments, IEEE Trans. Parallel Distrib. Syst. 28 (1) (2017) 290–304.
[7] F. Jiang, Y. Liu, B. Wang, X. Wang, A relay-aided device-to-device-based load balancing scheme for multitier heterogeneous networks, IEEE Internet Things J. 4 (5) (2017) 1537–1551.
[8] H. Anandakumar, K. Umamaheswari, Supervised machine learning techniques in cognitive radio networks during cooperative spectrum handovers, Cluster Comput. 20 (2) (2017) 1505–1515.
[9] X. Gao, L. Kong, W. Li, W. Liang, Y. Chen, G. Chen, Traffic load balancing schemes for devolved controllers in mega data centers, IEEE Trans. Parallel Distrib. Syst. (2016).
[10] S. Sengan, S. Chenthur Pandian, Hybrid cluster-based geographical routing protocol to mitigate malicious nodes in mobile ad hoc network, Int. J. Ad Hoc Ubiquitous Comput. 21 (4) (2016) 224–236.
[11] L. Chunlin, Z. Min, L. Youlong, Efficient load-balancing aware cloud resource scheduling for mobile user, Comput. J. 60 (6) (2017) 925–939.
[12] A.A. Abdelltif, E. Ahmed, A.T. Fong, A. Gani, M. Imran, SDN-based load balancing service for cloud servers, IEEE Commun. Mag. 56 (8) (2018) 106–111.
[13] Y.-J. Chen, L.-C. Wang, M.-C. Chen, P.-M. Huang, P.-J. Chung, SDN-enabled traffic-aware load balancing for M2M networks, IEEE Internet Things J. 5 (3) (2018) 1797–1806.
[14] J. Duan, Y. Yang, A data center virtualization framework towards load balancing and multi-tenancy, in: 2016 IEEE 17th International Conference on High-Performance Switching and Routing (HPSR), 2016.
[15] Y. Gao, K. Li, Y. Jin, Compact, popularity-aware and adaptive hybrid data placement schemes for heterogeneous cloud storage, IEEE Access 5 (2017) 1306–1318.
[16] T. Chen, A.G. Marques, G.B. Giannakis, DGLB: Distributed stochastic geographical load balancing over cloud networks, IEEE Trans. Parallel Distrib. Syst. 28 (7) (2017) 1866–1880.
[17] E. Iranpour, S. Sharifian, A distributed load balancing and admission control algorithm based on fuzzy type-2 and game theory for large-scale SaaS cloud architectures, Future Gener. Comput. Syst. 86 (2018) 81–98.
[18] S.-L. Chen, Y.-Y. Chen, S.-H. Kuo, CLB: A novel load balancing architecture and algorithm for cloud services, Comput. Electr. Eng. 58 (2017) 154–160.
[19] F. Shang, L. Mao, W. Gong, Service-aware adaptive link load balancing mechanism for software-defined networking, Future Gener. Comput. Syst. 81 (2018) 452–464.
[20] C.-M. Cheung, K.-C. Leung, DFFR: A flow-based approach for distributed load balancing in data center networks, Comput. Commun. 116 (2018) 1–8.
[21] B. Kang, H. Choo, An SDN-enhanced load-balancing technique in the cloud system, J. Supercomput. 74 (11) (2016) 5706–5729.
[22] C.-T. Yang, S.-T. Chen, J.-C. Liu, Y.-W. Su, D. Puthal, R. Ranjan, A predictive load balancing technique for software-defined networked cloud services, Computing (2018).
[23] Y. Zong, C. Yu, Y. Liu, Q. Zhang, Y. Sun, L. Guo, Time-dependent load-balancing service degradation in optical data center networks, Photonic Netw. Commun. 34 (3) (2017) 411–421.
[24] J. Son, R. Buyya, SDCon: Integrated control platform for software-defined clouds, IEEE Trans. Parallel Distrib. Syst. 30 (1) (2019) 230–244, http://dx.doi.org/10.1109/TPDS.2018.2855119.
[25] D. Sun, K. Zhao, Y. Fang, J. Cui, Dynamic traffic scheduling and congestion control across data centers based on SDN, Future Internet 10 (64) (2018) 1–12.
[26] S. Sengan, S. Chenthur Pandian, Analysis of attribute aided data aggregation through dynamic routing in wireless sensor networks, J. Eng. Sci. Technol. (2016).

Fig. 8. Link utilization comparison.

Table 2
Performance values of RPS, DGLB and LBCRS.

Metric                    DGLB     LBCRS    RPS
Queue utilization         215      240      275
Request success rate      0.72     0.77     0.842
Request failure rate      0.41     0.37     0.26
Avg. response time (s)    9        6.1      5.5
Link utilization (%)      0.548    0.429    0.717

equation given below:

(t_a + t_i)/Number of resources — for the active resource state
(t_a + t_i + t_w + t_t)/Number of resources — for the idle resource state

The backlog of requests waiting for a response due to resource unavailability, which affects response time in the other methods, does not affect RPS. 5.5. Link utilization Fig. 8 compares the link utilization of the proposed RPS with the existing LBCRS and DGLB. Storage and resource awareness improve the rate of request transmission, which demands optimal link utilization. Link utilization is high at t_a and drops at t_i; this variation ensures lower failure rates. Unlike the other methods, link utilization is not constant over all time intervals. The performance values of the proposed RPS and the existing DGLB and LBCRS are charted in Table 2. 6. Conclusion This manuscript proposes Resource-aware Packet Scheduling (RPS) for load balancing in the mobile-user cloud environment. The method is designed to improve the request processing rate in the cloud despite the density of mobile users and application demands. A dedicated control plane is responsible for scheduling the incoming traffic requests based on resource availability and capacity to minimize the failure rate and response time. This plane also performs cloud gateway neighbor selection by assessing storage capacity to prevent unnecessary message overflows and prolonged wait times. The overall methodology is intended to improve request servicing rates in the cloud for increasing mobile user demands in a streamlined manner. The interoperability of the proposed RPS will be improved in the future by integrating appropriate scalable network management algorithms. The limitation of the proposed system is the Single Point Of Failure (SPOF) of the control plane: if the control plane fails, the incoming network traffic cannot be handled. A distributed control plane is planned as future work.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.