Adaptive User-Oriented Fuzzy-Based Service Broker for Cloud Services

Mutaz Al-Tarawneh, Amjed Al-mousa
Abstract

This paper presents an adaptive fuzzy-based cloud service broker algorithm (AFBSB). The proposed algorithm employs an adaptive fuzzy-based engine to select the most appropriate data center for user cloud service requests, considering user preferences in terms of cost and performance. The algorithm is implemented using an open-source cloud computing simulation tool, and its results are tested against those of other existing techniques within two types of cloud environments: first, a performance-constrained environment in which performance improvement is the main objective for cloud users; second, a preference-aware environment in which cloud users have different cost and performance preferences. Simulation results show that the proposed algorithm can achieve significant performance improvement in performance-constrained environments. As for the preference-aware environment, they show that AFBSB can operate in a user-oriented manner that guarantees performance/cost improvement, as compared to other algorithms.

Keywords: Fuzzy Logic, Cloud Services, Broker, Performance, Cost, User Preference
1. Introduction

Cloud computing is a ubiquitous computing platform in which computing services are provided on a pay-as-you-go basis. It has rapidly become an essential part of today's computing infrastructure. The cloud consists of a large number of geographically distributed data centers (DC), in which virtual machines (VM) are instantiated on top of the available hardware resources to execute user requests on demand [1]. In order to maintain the reliability and availability of cloud-based services, cloud service providers typically deploy their services on different data centers with different pricing
policies and performance characteristics. On the other hand, cloud users tend to have different quality of service (QoS) requirements. Whereas some users prefer a low-cost DC, others may prefer a high-performance one. Hence, an intermediary agent has to act on behalf of cloud users in order to direct their requests to the most appropriate DC that best fits their performance and cost requirements. This agent is referred to as a cloud service broker (CSB), and it represents an important component in the cloud computing service stack [2]. The CSB employs a specific brokering algorithm that dynamically selects the most suitable DC for each incoming user request. The DC selection process has a direct consequence on cloud resource utilization, performance, service cost and user satisfaction levels. Therefore, service brokering algorithms have to robustly and optimally choose the most suitable DC for each incoming request, based on the requester's predefined QoS requirements and the current status of the cloud environment. In order to evaluate the efficacy of CSB algorithms, researchers can either test their algorithms on a commercially available cloud infrastructure or on specialized simulation tools. The former option provides higher accuracy as compared to the latter. However, the cloud environment is very complex, and its performance depends on many uncontrollable factors, such as network congestion levels and the DCs' time-varying workloads, which adversely affect the repeatability of testing results. Simulation tools therefore provide a viable option that facilitates the parameterization of DC and network configurations and the repeatability of simulation results. Hence, this paper presents and evaluates a simulation-based cloud service brokering algorithm. The baseline cloud service brokering algorithms can be summarized as follows:

• Service Proximity-Based (SPB): This algorithm maintains a statically sorted list of DCs based on network latency [3, 4]. When a user base (UB) generates a service request, the CSB routes the incoming request to the closest DC, i.e., the DC with the shortest network latency, regardless of user preference, DC pricing policy or performance characteristics. Therefore, this policy can potentially direct user requests to an unsuitable DC that does not comply with the user's cost and performance preferences. In addition, it can overload the closest DC and its communication pathway, since it takes into account neither the complexity of the incoming request (i.e., processing requirements and data size) nor the available network bandwidth upon the arrival of the current request.
• Optimized Response Time (ORT): This policy keeps track of the response times of the most recently serviced requests [5]. For each DC, it records the response time of the request that was most recently serviced by that DC. When an incoming request arrives, ORT directs it to the DC that is expected to yield the shortest response time. ORT thus overcomes the shortcomings of SPB, since it takes into account both the DC's last known workload and the currently available network bandwidth. However, ORT excludes DCs that have not previously received any requests, even though these DCs might be the most suitable targets for incoming requests. In addition, it assumes that all requests have the same processing requirements and execution times. Also, it assumes that all requests have the same data size, which is fixed to 1. Moreover, it does not take into account the user's cost and performance preferences.

• Dynamically Reconfigurable Broker (DRB): This policy has the same working principle as SPB [3]. However, it increases the number of VMs on the closest DC in response to an increase in the volume of incoming requests. Increasing the number of VMs on the same hardware infrastructure could, however, deteriorate the performance of the closest DC. In addition, it overlooks the user's cost and performance preferences.

Therefore, this paper proposes an Adaptive Fuzzy-Based Cloud Service Broker (AFBSB) that selects, for each incoming request, the most appropriate DC, taking into account the user's cost and performance preferences, the request's processing requirements, its data size, the DC workload and the currently available network bandwidth. The main contributions of this paper are threefold. First, it presents a review of the existing cloud service brokering algorithms. Second, it proposes an adaptive brokering algorithm that takes into account dynamic changes of the cloud environment, including both DC loading conditions and network congestion levels. Third, this paper is based on an on-demand cloud cost model, which provides the ability to assess the capability of the proposed algorithm in meeting user cost requirements.

2. Literature Review

Several service brokering algorithms have been proposed in order to select an appropriate data center to serve user requests in a cloud computing
environment. These algorithms can be classified as either static or dynamic. Static algorithms rely on salient characteristics of the DC, such as its million instructions per second (MIPS) rating, number of physical machines, number of processors, available memory, virtual machine (VM) cost, data transfer (DT) cost per gigabyte, etc., to make their DC selection decisions. On the other hand, dynamic algorithms take into account dynamically derived features, such as DC response time and network status. The work in [3, 4], i.e., the SPB algorithm, represents the baseline for the majority of the static algorithms. However, the SPB algorithm has several disadvantages, as outlined earlier. Hence, Sharma et al. [6] have proposed a round-robin service brokering algorithm in which incoming user requests are distributed among DCs within the same region in a round-robin manner. However, this algorithm assumes that all DCs within the same region have the same configuration. In addition, it overlooks DC service costs. Therefore, [7] has proposed a priority-based brokering algorithm in which DCs with higher processing power are given higher priority and receive more user requests. However, this algorithm ignores the fact that sending more requests to the high-priority DC can deteriorate its performance due to high processing demands and network congestion. In addition, the DC priority is assigned based on its processing power, without considering its service costs. Also, Tordsson et al. [8] have proposed a brokering mechanism that statically deploys virtual machines among multiple cloud service providers, without considering dynamic changes of the cloud environment such as DC loading and network congestion levels. Similarly, [9] has proposed a weight-based brokering algorithm in which each DC is assigned a weight based on its number of available VMs as compared to other DCs within the same region; the number of requests directed to each DC is proportional to its weight. In addition, [10] has proposed a fuzzy rating-based (FRB) proximity-aware brokering algorithm in which a fuzzy engine is employed to assign a weight to each DC within a particular region, taking into account each DC's performance and cost characteristics. The number of requests directed to each DC is proportional to its weight. However, the algorithm in [10] is based on static performance characteristics, such as MIPS rating and number of processors, and does not take into account dynamic changes of the cloud environment. In addition, it considers only regional DCs, without considering other distant ones that could potentially provide better performance than the regional ones. Moreover, it does not take into account user cost and performance preferences.
In short, none of these weight-based strategies has considered the dynamic changes in the cloud environment, such as DC performance degradation due to overload conditions, and network congestion. On the other hand, [11] has proposed a cost-based refined version of the SPB policy, in which user requests within a particular region are routed to the DC with the lowest DT cost. This policy increases the response time and overlooks other aspects such as network congestion and DC overload. Rekha et al. [12] have proposed a cost-aware policy in which user requests within a particular region are directed to the next neighbouring region during peak hours. However, they have overlooked dynamic network behavior and assumed that user requests have the same processing requirements and data size. In summary, cost-aware static brokering algorithms increase the DC response time and neglect dynamic aspects such as DC overloading and network congestion levels. Likewise, performance-aware static brokering algorithms can lead to increased service costs; they rely solely on static performance characteristics and do not respond to run-time changes in the cloud environment. In addition, both types assume that all user requests have the same processing requirements and data size, and neither takes into account user cost and performance preferences.

Among the dynamic brokering policies, Manasrah et al. [13] have proposed a variable service broker policy. This policy makes the brokering decision based on the incoming request's data size: requests whose data size is less than a particular threshold value are directed to the closest DC, while other requests are sent to the DC that is expected to yield the lowest response time. However, this policy has no consideration for DC cost. In addition, it assumes that all user requests have the same processing requirements. Moreover, it does not consider user preferences when making the brokering decision. The work in [14] has proposed three variants of cost-aware brokering algorithms: first, a cost-aware brokering algorithm that directs user requests to the lowest-cost regional DC; second, a load-aware brokering algorithm in which incoming service requests are distributed equally among available data centers; third, a cost/load-aware algorithm that maintains a list of least-loaded low-cost data centers and divides incoming user requests equally among the members of that list. However, these variants do not take into account user preferences and assume that all requests have the same processing requirements and data size. On the other hand, [15] introduced a two-level scheduling policy that gives higher priority to requests with small processing requirements when
scheduling incoming requests within a particular DC. While this policy can improve the average processing time within a particular DC, it has no consideration for cost or network status. Li et al. [16] have tackled task scheduling at the VM level. Their work is based on an Ant Colony Optimization (ACO) meta-heuristic [17], where user requests are scheduled on the VM that is expected to yield the shortest processing time. While this policy has shown an improvement in DC response time and VM utilization, it overlooks DC cost and run-time network congestion. Similarly, [18] has implemented an ACO-based service brokering policy. Also, [19] has proposed a Cuckoo-optimization-based service brokering algorithm [20]. Both policies focus on improving DC response time without considering DC cost and user cost/performance preferences. In addition, [21] has modeled task allocation as an optimization process whose objective is to minimize the overall response time, without considering DC cost.

3. Design of the Adaptive Fuzzy-Based Service Broker (AFBSB)

3.1. System Requirements

The proposed service broker needs to satisfy multiple requirements:

• Account for user preference: The service broker needs to sway the selection of the data center in a way that satisfies the user preference. Users are asked to assign numbers from 1 to 5 for the cost and performance preferences, with a value of 1 being the lowest priority and 5 the highest. For example, if the user performance preference is set at 5 and the cost preference is at 2, then the service broker will focus the selection criteria on the performance expected from data centers and give very little consideration to the cost.

• Distribute load across data centers: When the user preference is not a decisive factor, the broker needs to distribute the requests fairly across different data centers.

3.2. System Architecture

The system architecture is depicted in Figure 1. The system relies on two type-1 Mamdani fuzzy engines: a Cost Fuzzy Engine (CFE) and a Performance Fuzzy Engine (PFE). Both engines are detailed in Section 3.3. Each of these fuzzy engines is evaluated for each available data center (i.e., creating the instances CFE_DC1, CFE_DC2, ..., CFE_DCn,
PFE_DC1, PFE_DC2, ..., PFE_DCn). The CFE takes two inputs, the current user's cost preference and the actual cost of the respective data center, and produces a fit value expressing how well this data center fits this user based on that preference. A max block selects the data center with the highest fit to produce the Best Cost Selection. Similarly, the PFE is fed with the performance preference of the current user along with the latest performance statistics for each data center. Once the performance fit values are calculated for all data centers, the Best Performance Selection is determined. Finally, a smart selection algorithm decides which data center should be chosen to service the current user. The details of the smart selection algorithm are explained in Section 3.4. This process is repeated for each incoming request. However, if performance is a concern, the proper selection for certain user preference values can be cached for reuse and re-evaluated on a periodic basis.

3.3. Fuzzy Engines

As mentioned in Section 3.2, the proposed architecture relies on two type-1 Mamdani fuzzy engines: one determines the fit of data centers based on cost, and the other determines the fit based on performance. A typical fuzzy engine consists of three stages: fuzzification, the fuzzy inference engine, and finally defuzzification. The details of both engines are explained next.

3.3.1. The Cost Fuzzy Engine (CFE)

• Fuzzification stage: The fuzzification stage receives the numerical values of the inputs, the user cost preference (1-5) and the cost at different data centers, and maps them to qualitative membership functions. In the case of the user preference, the levels 1-5 are mapped into three membership functions (L: Low, M: Medium, and H: High), as shown in Figure 2a. The cost at different data centers is normalized before fuzzification: first, the average cost is calculated, as in Equation 1, and then the cost of any data center is normalized, as in Equation 2:
Figure 1: System Level Architecture
$$Cost_{Avg} = \frac{1}{n} \sum_{i=1}^{n} Cost(i) \tag{1}$$

$$Cost(i)_{Norm} = \frac{Cost(i)}{Cost_{Avg}} \tag{2}$$
The normalized cost values are then fuzzified into the membership functions (VL: Very Low, L: Low, M: Medium, H: High, VH: Very High), as shown in Figure 2b.

• Defuzzification stage: The defuzzification stage is responsible for reverting the qualitative output of the fuzzy engine rules back to a numerical output. The rules rate data centers using a rating system (A-E). These ratings are then mapped to a numerical fit value, as shown in Figure 2c.
Figure 2: Fuzzy Membership Functions of CFE. (a) Fuzzification of the Users' Preferences; (b) Fuzzification of the Data Centers' Cost; (c) Defuzzification of the Data Centers' Rating.
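As a concrete illustration of this input stage, the following minimal sketch computes the cost normalization of Equations 1 and 2 and a triangular fuzzification of the normalized cost. It is a sketch only: the membership breakpoints (the assumed centers and half-width below) are illustrative stand-ins for the shapes plotted in Figure 2b, and the class and method names are hypothetical, not part of the paper's implementation. The sample costs are the per-DC prices later used in Section 4.3.

```java
// Minimal sketch of the CFE input stage: cost normalization (Equations 1-2)
// followed by a triangular fuzzification of the normalized cost.
// Membership centers/half-width below are assumed, not the paper's exact shapes.
public final class CostFuzzifier {

    /** Equation 1: average cost over all n data centers. */
    static double averageCost(double[] costs) {
        double sum = 0.0;
        for (double c : costs) sum += c;
        return sum / costs.length;
    }

    /** Equation 2: cost of a DC divided by the average cost. */
    static double normalizedCost(double cost, double avg) {
        return cost / avg;
    }

    /** Symmetric triangular membership function centered at c with half-width w. */
    static double tri(double x, double c, double w) {
        return Math.max(0.0, 1.0 - Math.abs(x - c) / w);
    }

    /** Degrees of membership in {VL, L, M, H, VH} for a normalized cost. */
    static double[] fuzzify(double normCost) {
        // Assumed centers: VL=0.5, L=0.75, M=1.0, H=1.25, VH=1.5 (half-width 0.25).
        double[] centers = {0.5, 0.75, 1.0, 1.25, 1.5};
        double[] mu = new double[centers.length];
        for (int k = 0; k < centers.length; k++) mu[k] = tri(normCost, centers[k], 0.25);
        return mu;
    }

    public static void main(String[] args) {
        double[] costs = {0.100, 0.098, 0.096, 0.094, 0.092, 0.090}; // $/hr per DC
        double avg = averageCost(costs);
        double norm = normalizedCost(costs[0], avg);
        System.out.printf("norm=%.3f memberships=%s%n", norm,
                java.util.Arrays.toString(fuzzify(norm)));
    }
}
```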
Table 1: Rules for the Cost Fuzzy Engine

                                Normalized Cost
                          VH     H      M      L      VL
User Cost          H      E      D      C      B      A
Preference         M      D      C      B      A      A
                   L      E      E      E      E      E
• Fuzzy inference engine: The Fuzzy Inference Engine (FIE) constitutes the core logic of any fuzzy system. It comprises a set of rules that map the different memberships of the inputs to output memberships. Table 1 shows the rules for the cost FIE. For example, when the normalized cost is 1.05, it maps to 50% M and 50% H, and if the user cost preference is 5, it maps to 100% H. According to the table, the following two rules will then fire:

(Norm. Cost = M) & (Cost Pref. = H) → DC Rating = C
(Norm. Cost = H) & (Cost Pref. = H) → DC Rating = D

3.3.2. The Performance Fuzzy Engine (PFE)

• Fuzzification stage: In the performance engine, the user preference is again an input; it is fuzzified exactly like the user preference for cost, as shown in Figure 2a. The second input for the Performance Fuzzy Engine is the response time associated with each data center. First, the average response time over all data centers is calculated, as in Equation 3. Then the response time is normalized, as shown by Equation 4. The normalized response time is then fed into the fuzzy engine, where it is fuzzified into five membership functions (VL: Very Low, L: Low, M: Medium, H: High, VH: Very High), shown in Figure 3.

$$Response_{Avg} = \frac{1}{n} \sum_{i=1}^{n} Response(i) \tag{3}$$

$$Response(i)_{Norm} = \frac{Response(i)}{Response_{Avg}} \tag{4}$$
The response time Response(i), which is the total time required to handle an incoming service request by a particular DC, is given by Equation 5:

$$Response(i) = T_{Proc}(i) + T_{Tran}(i) \tag{5}$$
where $T_{Proc}$ is the processing time and $T_{Tran}$ is the network transfer delay for request (i). $T_{Proc}$ depends on the processing requirements of the user request, characterized in terms of the number of instructions per request, and on the processing power of the designated DC. Since the execution time of a particular service request is not known a priori, the proposed algorithm relies on estimating the execution time of each incoming request for every potential DC. The processing power of each DC is estimated as shown in Equation 6:

$$Processing\,Power(DC_j) = \frac{No.\,of\,Instr.}{Time_{Exec.}} \ \ (\text{instructions/sec}) \tag{6}$$
where $No.\,of\,Instr.$ is the number of instructions of the last request handled by $DC_j$ and $Time_{Exec.}$ is the execution time of that request, as recorded by the service broker upon receiving the corresponding response from the selected DC. The processing power of each DC is thus updated based on its most recent execution history. For each incoming service request, the proposed algorithm estimates the execution time of that request on each potential DC, as shown in Equation 7:

$$Estimated\ T_{Proc}(Request(i), DC_j) = \frac{No.\,of\,Instr.(i)}{Processing\,Power(DC_j)} \ \ (\text{sec}) \tag{7}$$
where $No.\,of\,Instr.(i)$ is the processing requirement of the current user request and $Processing\,Power(DC_j)$ is the last recorded processing power of the DC. In order to calculate the total estimated response time for the current service request, the proposed algorithm also needs to estimate the network transfer delay $T_{Tran}$ required to handle the request, as shown in Equation 8.
Figure 3: Fuzzification of the Data Centers Normalized Response Time
Evidently, the network transfer delay depends on both the data size of the incoming service request and the currently available network bandwidth between each pair of geographic regions:

$$Estimated\ T_{Tran}(Request(i), DC_j) = T_{Net.Lat.} + \frac{DataSize(i)}{Bandwidth_{DC_{i,j}}} \ \ (\text{sec}) \tag{8}$$
where $T_{Net.Lat.}$ is the fixed network latency between the geographic regions of the service requester and the serving DC, $DataSize(i)$ is the data size of the current service request, and $Bandwidth_{DC_{i,j}}$ is the currently available network bandwidth between the service requester (i) and $DC_j$. By taking into account both the last recorded processing power of each DC and the currently available network bandwidth, the proposed algorithm can adapt to different DC and network loading conditions and forward user requests to the most appropriate DC so as to optimize their service requirements.

• Defuzzification stage: The defuzzification stage in the PFE is exactly the same as in the CFE. The ratings of the data centers are defuzzified using the membership functions A-E shown in Figure 2c.

• Fuzzy inference engine: Similarly, the rules for the Performance Fuzzy Engine are shown in Table 2. The table relies on two input parameters: the user performance preference and the normalized response time for each data center.
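To tie Equations 5-8 together before turning to the rule base in Table 2, the sketch below shows one way the per-DC response time estimate could be computed from the recorded execution history and the network state. It is a minimal illustration under assumed units (data size in bytes, bandwidth in Mbps); all class and method names are hypothetical and are not CloudAnalyst APIs.

```java
// Minimal sketch of the response-time estimation of Equations 5-8.
// Names and units (bytes, Mbps, seconds) are assumptions for illustration.
public final class ResponseEstimator {

    /** Equation 6: processing power (instructions/sec) derived from the
     *  last request serviced by this DC, as recorded by the broker. */
    static double processingPower(double lastInstrCount, double lastExecTimeSec) {
        return lastInstrCount / lastExecTimeSec;
    }

    /** Equation 7: estimated processing time of the current request on DCj. */
    static double estProcTime(double instrCount, double procPowerIps) {
        return instrCount / procPowerIps;
    }

    /** Equation 8: fixed inter-region latency plus data size divided by the
     *  currently available bandwidth between requester region and DC region. */
    static double estTranTime(double netLatencySec, double dataSizeBytes,
                              double bandwidthMbps) {
        return netLatencySec + (dataSizeBytes * 8.0) / (bandwidthMbps * 1e6);
    }

    /** Equation 5: total estimated response time for request i on DCj. */
    static double estResponse(double procTimeSec, double tranTimeSec) {
        return procTimeSec + tranTimeSec;
    }
}
```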
Table 2: Rules for the Performance Fuzzy Engine

                                   Normalized Response Time
                             VH     H      M      L      VL
User Performance      H      E      D      C      B      A
Preference            M      D      C      B      A      A
                      L      E      E      E      E      E
3.4. Smart Selection Module

The smart selection module is responsible for making the final decision about the selection of a data center. It makes the decision based on the user cost and performance preferences, as well as the Best Cost Selection DC and the Best Performance Selection DC. The algorithm governing the smart selection module is shown in Figure 4, and a compact sketch of its decision logic is given below. First, the smart selection algorithm reads the best-cost DC and the best-performance DC, as well as the user cost and performance preferences. Initially, when there are no existing metrics to use, the smart selection module uses a round-robin algorithm, in which the first n service requests are sequentially sent to the n available data centers, so that each DC has an entry in the request-handling history and initial performance metrics are collected. Next, once a request is received and the fuzzy engines have computed the cost and performance fits, if both engines recommend the same DC, then it gets selected. Otherwise, the module selects the DC with the higher fit. When both fits are equal, it selects the one with the higher user preference. For example, if the cost preference is 5 while the performance preference is 3 and both fits are equal, say 3.5, then it selects the best-cost DC, since the user has a stronger preference towards cost. Finally, if all parameters are equal, it resorts back to round-robin selection.
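The decision logic just described can be summarized by the following sketch. It is an illustration of the flow in Figure 4 under stated assumptions, not the paper's actual implementation; costFit and perfFit stand for the defuzzified outputs of the CFE and PFE, and all identifiers are hypothetical.

```java
// Illustrative sketch of the smart selection decision of Section 3.4 (Figure 4).
public final class SmartSelector {
    /** costFit/perfFit are the defuzzified fits of the best-cost and
     *  best-performance DCs; roundRobinNext is the next DC in the
     *  bootstrap round-robin order. */
    static int selectDc(int bestCostDc, int bestPerfDc,
                        double costFit, double perfFit,
                        int costPref, int perfPref, int roundRobinNext) {
        if (bestCostDc == bestPerfDc) return bestCostDc; // both engines agree
        if (costFit > perfFit) return bestCostDc;        // otherwise higher fit wins
        if (perfFit > costFit) return bestPerfDc;
        if (costPref > perfPref) return bestCostDc;      // equal fits: stronger preference
        if (perfPref > costPref) return bestPerfDc;
        return roundRobinNext;                           // everything equal: round robin
    }
}
```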
Figure 4: Smart Selection Flowchart
4. Results & Discussion

4.1. Simulation Setup & Configuration

In order to implement and test the proposed service brokering algorithm (i.e., AFBSB), the CloudAnalyst simulator [22] has been used. The optimization library has been modified to implement AFBSB. First, the performance of AFBSB has been evaluated under different scenarios and compared to that of the SPB, ORT and FRB [10] algorithms, with both cost and performance preferences set to be equal, as detailed in Subsection 4.2. Next, in Subsection 4.3, AFBSB is tested under conditions with varying cost and performance preferences, in order to show the value of incorporating customers' preferences. A preliminary step towards a CloudAnalyst simulation is to define the network delay and bandwidth matrices. These matrices, given in Table 3, define the network delay and bandwidth between each pair of geographic regions defined within the simulation environment. The values in these matrices are the default values typically used in the CloudAnalyst environment.

Table 3: Network Configuration
Network Delay (ms)

Region    0     1     2     3     4     5
0         25    100   150   250   250   100
1         100   25    250   500   350   200
2         150   250   25    150   150   200
3         250   500   150   25    500   500
4         250   350   150   500   25    500
5         100   200   200   500   500   25

Network Bandwidth (Mbps)

Region    0     1     2     3     4     5
0         2000  1000  1000  1000  1000  1000
1         1000  800   1000  1000  1000  1000
2         1000  1000  2500  1000  1000  1000
3         1000  1000  1000  1500  1000  1000
4         1000  1000  1000  1000  500   1000
5         1000  1000  1000  1000  1000  2000
In addition, the data center (DC) hardware configuration and service costs are shown in Table 4. All DCs were configured to have an x86 architecture, a Linux operating system, a Xen virtual machine monitor (VMM) and a single physical machine. The physical machine configuration is also shown in Table 4. Other important configuration parameters include the user grouping factor of a particular UB, which is the number of simultaneous users from that UB; the request grouping factor, which defines the number of concurrent requests a single application server instance can sustain; and the instruction length per request, which is equivalent to the number of executable instructions
Table 4: DC Configuration

DC ID   Region   VM Cost   Memory   Storage   DT     No. of
                 ($/hr)    Cost     Cost      Cost   Physical Machines
DC 1    0        0.1       0.05     0.1       0.1    1
DC 2    1        0.1       0.05     0.1       0.1    1
DC 3    2        0.1       0.05     0.1       0.1    1
DC 4    3        0.1       0.05     0.1       0.1    1
DC 5    4        0.1       0.05     0.1       0.1    1
DC 6    5        0.1       0.05     0.1       0.1    1

Physical Machine Configuration

Machine   Memory   Storage   Available    No. of       Processor
ID        (GB)     (GB)      Bandwidth    Processors   MIPS
1         20       100       1,000,000    4            10000
within a particular service request. Finally, the VM load balancing algorithm is responsible for distributing the incoming requests among the available VMs within a particular DC. The values of these factors were set to 10, 10, 4096 and Round Robin, respectively.

Table 5: User Base Configuration
UB ID   Region   Peak-time users   Off-peak-time users
UB0     0        20000             2000
UB1     1        5000              500
UB2     2        15000             1500
UB3     3        7500              750
UB4     4        2500              250
UB5     5        4000              400
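For reproducibility, the scalar configuration values quoted above, together with the user-base sizes of Table 5, can be captured in a small data holder such as the sketch below. This is plain illustrative data; the field names are hypothetical and do not correspond to the CloudAnalyst configuration API.

```java
// Illustrative holder for the Section 4.1 simulation parameters.
final class SimConfig {
    static final int USER_GROUPING_FACTOR = 10;    // simultaneous users per UB
    static final int REQUEST_GROUPING_FACTOR = 10; // concurrent requests per app server
    static final int INSTRUCTION_LENGTH = 4096;    // executable instructions per request
    static final String VM_LOAD_BALANCER = "Round Robin";

    // Peak / off-peak users for UB0..UB5 (Table 5).
    static final int[] PEAK_USERS    = {20000, 5000, 15000, 7500, 2500, 4000};
    static final int[] OFFPEAK_USERS = { 2000,  500,  1500,  750,  250,  400};
}
```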
4.2. Performance-constrained Cloud Environment Simulation

Here, AFBSB has been compared to SPB, ORT and FRB in a performance-constrained environment under different simulation scenarios. For each simulation scenario, the following parameters needed to be defined: the number of UBs, the geographic region of each UB, the average number of hourly service requests per user, the average data size per request, and the peak cloud usage hours, besides the average number of users during peak and
off-peak hours. These configurations are summarized in Tables 5 and 6.

Table 6: Performance-constrained Simulation Scenarios

Scenario   Number    UB ID                          Requests   Request     Peak hours
ID         of UBs                                   per hour   Size (KB)   (GMT)
1          3         UB2, UB4, UB5                  60         65536       7-9
2          6         UB0, UB1, UB2, UB3, UB4, UB5   60         65536       7-9
4.2.1. Performance-Constrained - Scenario 1:

In this scenario, the simulation environment was configured to have 3 UBs located in 3 distinct regions, namely regions 2, 4 and 5 (Table 6). In addition, each region was configured to have a single DC. The goal of this scenario is to test the proposed algorithm in the presence of multiple UBs with different numbers of users and geographic locations. Figure 5 summarizes the simulation results for this scenario. Figure 5a depicts the overall DC response and processing times for the four brokering policies. Clearly, AFBSB has resulted in better performance as compared to SPB, ORT and FRB. It has achieved 47%, 22% and 47% improvement in response time as compared to SPB, ORT and FRB, respectively. In addition, it has achieved 70%, 35% and 70% improvement in DC processing time as compared to SPB, ORT and FRB, respectively. Figure 5b illustrates the average response time perceived by each UB under each brokering policy. AFBSB has outperformed the other three policies for all of the 3 UBs. In order to test the proposed algorithm under different DC and network loading conditions, the user bases were configured to have different geographic regions and different numbers of users during peak and off-peak hours (as shown in Table 5). UB2 was configured to have a higher number of users as compared to the other two user bases in Scenario 1. Therefore, it will generate more service requests than the other two UBs, which, in turn, will lead to more demand placed on the processing power and network bandwidth of its regional DC. Once the performance of the regional DC (i.e., DC3) starts deteriorating, AFBSB will route incoming service requests generated from UB2 to other distant, better-performing DCs. On the other hand, the number of service requests generated from UB4 and UB5, and in turn the demand for processing power and network
Figure 5: Results of Performance-constrained Scenario 1. (a) Performance comparison; (b) Response Time by UB; (c) DC Processing Time; (d) Cost comparison.
bandwidth from the corresponding regional DCs, will be lower than that of UB2, since they have fewer users. Hence, the number of service requests that AFBSB will route to DCs other than the associated regional DCs will be lower than that of UB2. Therefore, UB2 has witnessed a higher performance improvement as compared to the other two UBs. These results can be explained by Figure 5c, which illustrates the average DC processing time per request. In SPB and FRB, a UB within a particular region is serviced only by the DC that resides within that region. On the other hand, ORT and AFBSB route user requests not only to the regional DC but also to other distant DCs, based on the current status of the cloud environment. However, AFBSB has significantly outperformed ORT and achieved a lower average DC processing time on all DCs, since it takes into account both the processing requirement and the data size of user requests. Finally, Figure 5d shows the total cost incurred by each UB under each brokering policy. In this scenario, AFBSB did not cause any noticeable increase in the total cost when compared to the other policies. Intuitively, since all data centers are configured to have the same cost
ratings in this scenario, and the number of service requests generated from each UB is the same under all brokering policies, all brokering policies result in the same service cost from the user's perspective.

4.2.2. Performance-Constrained - Scenario 2:

In order to further exercise the performance side of AFBSB, Scenario 2 has been simulated. In this scenario, each geographic region has a single UB and a single DC. This scenario creates a situation in which the DCs have higher utilization levels of their processing resources. In addition, the network infrastructure encounters a higher traffic volume as compared to the previous scenario. Figure 6 illustrates the simulation results of Scenario 2. As shown in Figure 6a, AFBSB has achieved lower overall DC response and processing times as compared to the other three policies. As compared to SPB and FRB, AFBSB has resulted in 50% and 70% reductions in response and processing times, respectively. On the other hand, it has reduced response and processing times by 19% and 34%, respectively, as compared to ORT. Figure 6b depicts the average response time of user requests for each individual UB under the four brokering policies. It can be observed that AFBSB has improved the response time when compared to the other three policies. This improvement is driven by AFBSB's ability to distribute user requests among the available DCs in a manner that guarantees performance improvement, taking into account both the processing and network bandwidth requirements of individual requests. As shown in Figure 6c, AFBSB has the ability to utilize all available DCs and achieve lower DC processing times. Although ORT also has the ability to distribute user requests to DCs other than the regional ones, it overlooks the processing requirements of different user requests. In addition, it does not take into account the network bandwidth requirements of those requests. Hence, it has caused higher response times in comparison with AFBSB. As for the cost, Figure 6d shows the total service cost each UB incurs under each brokering policy. It is obvious that AFBSB did not cause any noticeable increase in service costs in this scenario as well. Apparently, the behavior of the FRB algorithm is identical to that of the SPB algorithm. Hence, the remainder of this work will compare the proposed algorithm with the SPB and ORT algorithms only.
Figure 6: Results of Performance-constrained Scenario 2. (a) Performance comparison; (b) Response Time by UB; (c) DC Processing Time; (d) Cost comparison.
4.3. Preference-Aware Cloud Environment Simulation

In Section 4.2, a performance-constrained cloud environment was presented. In that environment, the main objective of the service brokering algorithm was to improve the round-trip time of user service requests. In other words, cloud users were assumed to focus on performance improvement whilst overlooking total service cost. In addition, all DCs were assumed to have the same service cost. Typically, cloud users may prefer a low-cost service, a high-performance service, or a reasonable trade-off between both. In addition, cloud service providers such as Microsoft Azure [23] have pricing policies in which the cost of cloud services may vary based on the geographic region. Hence, a cloud service brokering algorithm should maintain a linkage between user preferences and DC performance and cost details. Unlike the existing service brokering algorithms outlined in Section 2, AFBSB can incorporate the user's performance and cost preferences in its decision-making process and select the most appropriate DC that best fits these preferences. Hence, this section studies the performance of AFBSB in a preference-aware cloud environment. For this purpose, the DC configuration shown in Table 4 has been modified such that each DC has different VM and DT costs. The VM
and DT costs for DC1, DC2, DC3, DC4, DC5 and DC6 were set to 0.1, 0.098, 0.096, 0.094, 0.092 and 0.090 $, respectively. The maximum price difference is 10%, in order to allow realistic simulation scenarios that mimic real-life pricing schemes.

4.3.1. Preference-Aware - Scenario 1:

This scenario assumes a single UB (i.e., UB0) that is located in geographic region 0 (Table 5). In order to examine the ability of AFBSB to incorporate user cost and performance preferences into its DC selection process, the simulation has been performed under three different preference tuples. In the AFBSB neutral case, cost and performance are weighted equally (performance preference = 5, cost preference = 5). On the other hand, AFBSB performance has a higher performance preference (performance preference = 5, cost preference = 1), while AFBSB cost has a higher cost preference (performance preference = 1, cost preference = 5). Figure 7 shows the results of the first preference-aware scenario (PA-scenario-1). The bar chart shows the cost comparison, while the line shows the performance comparison. It can be seen that AFBSB neutral improves on both factors in comparison with SPB and ORT. With AFBSB performance, the algorithm improves even on the neutral configuration, dropping the response time from 522 ms to 447 ms. While this improvement comes at the expense of an increased cost in comparison with the neutral case, the cost of AFBSB performance is still less than that of SPB and ORT. On the other hand, AFBSB cost has attained noticeable cost savings, bringing the cost from around 32,087 in the neutral case down to 30,705. Hence, it can be concluded that AFBSB is able to embed user cost and performance preferences in its decision-making process and send user requests to the most appropriate DCs in a manner that guarantees cost and/or performance improvement.

4.3.2. Preference-Aware - Scenario 2:

In order to further attest the ability of AFBSB to include user preferences in brokering decisions, a second preference-aware simulation scenario has been proposed. In this scenario, the cloud environment was configured to have three UBs, namely UB0, UB2 and UB3. Their preference tuples were set as AFBSB cost, AFBSB performance and AFBSB neutral, respectively.
Figure 7: Results for User Preference Scenario 1
Figure 8: Results for User Preference Scenario 2. (a) Response Time Comparison; (b) Cost Comparison.
Figures 8a and 8b summarize and compare the simulation results for this scenario. As shown in Figure 8a, AFBSB has resulted in a pronounced performance improvement in the cases where cloud users are seeking performance improvement, i.e., the AFBSB performance and neutral cases. On the other hand, Figure 8b illustrates that AFBSB has resulted in an appreciable reduction in total service cost for the UB with a high cost preference, i.e., UB0. For the neutral case, AFBSB did not cause any significant change as compared to the other two policies. In summary, AFBSB provides a guaranteed improvement for users with different cost/performance preferences. In addition, it works in a best-effort manner in neutral cases, i.e., when users have equal cost and performance preferences; in other words, it improves either cost or performance based on the current status of the cloud environment.

5. Conclusion

In this paper, an adaptive fuzzy-based cloud service broker algorithm has been proposed and evaluated. Unlike existing algorithms that overlook important factors such as the user request's processing requirements, network bandwidth utilization levels and user cost/performance preferences, the proposed algorithm presents a comprehensive selection process in which these performance-influencing factors are taken into account. The proposed algorithm has been extensively tested under different simulation scenarios with different user base and data center configurations. The results show that the proposed algorithm can achieve superior performance improvement as compared to other policies. In addition, it can operate in a user-oriented manner in which data centers are selected taking into account user preferences, something that other brokers lack. Future work in this area should consider additional factors in the brokering process, such as the security capabilities of the service provider and the security requirements of user applications, as well as the use of a multi-objective optimization approach to balance user cost and provider revenue.

References

[1] R. Buyya, J. Broberg, A. M. Goscinski, Cloud Computing Principles and Paradigms, Wiley Publishing, 1st edition, 2011.

[2] B. Varghese, P. Leitner, S. Ray, K. Chard, A. Barker, Y. Elkhatib, H. Herry, C.-H. Hong, J. Singer, F. P. Tso, Cloud Futurology, arXiv e-prints (2019) arXiv:1902.03656.
[3] D. Limbani, B. Oza, A proposed service broker policy for data center selection in cloud environment with implementation, Int. J. Computer Technology and Applications 3 (2012) 1082–1087.

[4] V. Sharma, Efficient data center selection policy for service proximity service broker in CloudAnalyst, Int. J. Innovative Comp. Sci. Eng. (IJICSE) 1 (2014) 21–28.

[5] Q. Zhang, L. Cheng, R. Boutaba, Cloud computing: state-of-the-art and research challenges, Journal of Internet Services and Applications 1 (2010) 7–18.

[6] V. Sharma, R. Rathi, S. Bola, Round-robin data centre selection in single region for service proximity service broker in Cloud Analyst, International Journal of Computers and Technology 4 (2013) 254–260.

[7] R. K. Mishra, S. Kumar, B. S. Naik, Priority based round-robin service broker algorithm for Cloud-Analyst, in: 2014 IEEE International Advance Computing Conference (IACC), pp. 878–881.

[8] J. Tordsson, R. S. Montero, R. Moreno-Vozmediano, I. M. Llorente, Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers, Future Generation Computer Systems 28 (2012) 358–367.

[9] S. Nandwani, M. Achhra, R. Shah, A. Tamrakar, K. Joshi, S. Raksha, Weight-based data center selection algorithm in cloud computing environment, in: S. S. Dash, M. A. Bhaskar, B. K. Panigrahi, S. Das (Eds.), Artificial Intelligence and Evolutionary Computations in Engineering Systems, Springer India, New Delhi, 2016, pp. 515–525.

[10] M. Al-Tarawneh, A fuzzy logic based proximity-aware cloud service broker algorithm, International Review on Computers and Software 10 (2015) 1027–1036.

[11] D. Chudasama, N. Trivedi, R. Sinha, Cost effective selection of data center by proximity-based routing policy for service brokering in cloud environment, International Journal of Computer Technology and Applications 3 (2012) 2057–2059.

[12] P. M. Rekha, M. Dakshayini, Cost based data center selection policy for large scale networks, in: 2014 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), pp. 18–23.

[13] A. M. Manasrah, T. Smadi, A. ALmomani, A variable service broker routing policy for data center selection in cloud analyst, Journal of King Saud University - Computer and Information Sciences 29 (2017) 365–377.

[14] R. K. Naha, M. Othman, Cost-aware service brokering and performance sentient load balancing algorithms in the cloud, Journal of Network and Computer Applications 75 (2016) 47–57.

[15] R. Jeyarani, R. V. Ram, N. Nagaveni, Design and implementation of an efficient two-level scheduler for cloud computing environment, in: 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pp. 585–586.

[16] K. Li, G. Xu, G. Zhao, Y. Dong, D. Wang, Cloud task scheduling based on load balancing ant colony optimization, in: 2011 Sixth Annual ChinaGrid Conference, pp. 3–9.

[17] M. Dorigo, C. Blum, Ant colony optimization theory: A survey, Theoretical Computer Science 344 (2005) 243–278.

[18] S. Raghuwanshi, S. Kapoor, The new service brokering policy for cloud computing based on optimization techniques, International Journal of Engineering and Techniques 4 (2018) 481–488.

[19] S. Tiwari, P. Parwar, A modified service broker policy based on cuckoo optimization, International Journal for Science and Advance Research in Technology 4 (2018) 202–206.

[20] X.-S. Yang, S. Deb, Cuckoo search: recent advances and applications, Neural Computing and Applications 24 (2014) 169–174.

[21] K. Li, Y. Wang, M. Liu, A Task Allocation Schema Based on Response Time Optimization in Cloud Computing, arXiv e-prints (2014) arXiv:1404.1124.

[22] B. Wickremasinghe, R. N. Calheiros, R. Buyya, CloudAnalyst: A CloudSim-based visual modeller for analysing cloud computing environments and applications, in: 2010 24th IEEE International Conference on Advanced Information Networking and Applications, pp. 446–452.

[23] Microsoft Azure, https://azure.microsoft.com/en-us/, 2019.