A dynamic tradeoff data processing framework for delay-sensitive applications in Cloud of Things systems
Yucen Nan, Wei Li, Wei Bao, Flavia C. Delicato, Paulo F. Pires, Albert Y. Zomaya
J. Parallel Distrib. Comput. (2017). DOI: https://doi.org/10.1016/j.jpdc.2017.09.009
Received 17 November 2016; revised 18 July 2017; accepted 19 September 2017.
A Dynamic Tradeoff Data Processing Framework for Delay-sensitive Applications in Cloud of Things Systems

Yucen Nan^a, Wei Li^a, Wei Bao^a, Flavia C. Delicato^b, Paulo F. Pires^b, Albert Y. Zomaya^a

^a Centre for Distributed and High Performance Computing, School of Information Technologies, The University of Sydney, Australia
^b PPGI, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Abstract

The steep rise of Internet of Things (IoT) applications, along with the limitations of Cloud Computing in addressing all IoT requirements, gave rise to a new distributed computing paradigm called Fog Computing, which aims to process data at the edge of the network. With the help of Fog Computing, the transmission latency, monetary spending and application loss incurred by Cloud Computing can be effectively reduced. However, as the processing capacity of fog nodes is more limited than that of cloud platforms, running all applications indiscriminately on these nodes can cause some QoS requirements to be violated. Therefore, there is an important decision to make as to where to execute each application in order to produce a cost-effective solution and fully meet application requirements. In particular, we are interested in the tradeoff among average response time, average cost and average number of application loss. In this paper, we present an online algorithm, called unit-slot optimization, based on the technique of Lyapunov optimization. The unit-slot optimization is a quantified near-optimal online solution to balance the three-way tradeoff among average response time, average cost and average number of application loss. We evaluate the performance of the unit-slot optimization algorithm by a number of experiments. The experimental results not only match the theoretical analyses well, but

✩ An earlier version of this paper appeared in the Proceedings of the 15th IEEE International Symposium on Network Computing and Applications (NCA 2016).
also demonstrate that our proposed algorithm can provide cost-effective processing, while guaranteeing the average response time and the average number of application loss, in a three-tier Cloud of Things system.

Keywords: Internet of Things, Fog Computing, Lyapunov Optimization, Average Response Time

1. Introduction
Nowadays, an increasing number of objects and physical devices are connecting to the Internet, ushering in a world of interconnected smart devices and giving rise to the new paradigm of the Internet of Things (IoT) [1]. The ever-increasing number of IoT devices will inevitably produce a huge amount of data, which has to be processed, stored and properly accessed by end users and/or client applications. One distinct feature of most IoT devices is their limited computing and energy capability, which restrains such devices from performing sophisticated processing and storing large amounts of data. In this context, Cloud Computing appears as a natural infrastructure to host the processing and storage requirements of IoT systems. The theoretically unlimited resources of clouds allow efficient processing and storage of the massive amounts of data generated by IoT devices. The combination of IoT and Cloud Computing brought about a new paradigm of pervasive and ubiquitous computing, called Cloud of Things (CoT) [2, 3]. CoT denotes the synergetic integration of Cloud Computing and IoT to provide sophisticated services in a ubiquitous way, for a multitude of application domains. A CoT, as a two-tier IoT system, is composed of virtual things built on top of the networked physical objects and provides on-demand provision of sensing, computational and actuation services, allowing users to pull data from the physical things using a pay-per-use, flexible and service-oriented approach. However, despite all the benefits provided by the integration of cloud and IoT, recent works [4], [5] raised several concerns about the two-tier CoT model. In particular, the unpredictable communication delay to the cloud data centres can become a high risk factor for time-sensitive IoT applications. Besides, the network bandwidth may be over-utilized to transfer raw data to the cloud, while most of such data could be locally processed and discarded right away.
To address such issues, a novel and promising computing paradigm called Edge Computing [6] has recently been exploited in the IoT
context. Fog Computing [7], as a representative paradigm of Edge Computing, still retains the core advantages of using clouds as a support infrastructure, but it keeps the data and computation close to end users, thus reducing the communication delay over the Internet and minimizing the bandwidth burden by wisely deciding which data needs to be sent to the cloud. The integration of fog devices in the CoT architecture gives rise to a three-tier system, which contains the Things tier, the Fog tier and the Cloud tier. In many cases, the nodes in the Fog tier can perform the processing of data collected by the sensors without resorting to the cloud. On the other hand, many applications require complex processing and/or rely on historical data stored for a long period of time, which typically will not be available in fog nodes. Therefore, there is an important decision-making process to be managed, which refers to when to send data from the Fog tier to the Cloud tier, taking into consideration the established service level agreements (SLAs). An SLA represents the service usage and the obligations for both users and service providers, including various indicators collectively referred to as Quality of Service (QoS). QoS parameters typically include availability, reliability, throughput, data accuracy and many other metrics, as well as system performance indicators, e.g. response time, task blocking probability, the mean number of applications in the system and so on [8, 9]. Therefore, multiple factors need to be taken into account in this decision-making process, such as the availability of fog nodes, the average response time of the applications and the monetary cost, since processing the data at the fog nodes generally has zero cost while outsourcing the processing to the cloud requires payment for the services.
In addition, another important issue to be addressed regarding the processing in any CoT system is related to the blocking probability of an application request. According to [10], two types of blocking may happen to a given application request in a cloud system: (i) blocking due to a full global queue, and (ii) blocking due to insufficient resources at cloud resource pools. Both types may potentially cause what we call an application loss, meaning an application that will not be served by the cloud considering the previously agreed quality parameters. In this paper we focus on the blocking due to a full global queue in the cloud, and we try to guarantee that the average number of application loss in the system is kept under a predefined value. In other words, we need to determine the buffer size used at the Cloud tier once a data centre is chosen to execute the selected applications. By doing this, we can make sure that the average number of application loss remains below a specific value at all times, so that the SLA
agreement would not be violated.
In this paper, we present a comprehensive analytic framework for the three-tier CoT system to prevent the fog devices from being overloaded, so that the users within the area served by such devices would not experience a poor quality of service. In general, such a problem can be converted into a constrained stochastic optimization problem. Using the technique of Lyapunov optimization [11], we designed an adaptive online decision-making algorithm called unit-slot optimization for distributing the incoming data to the corresponding tiers without prior knowledge of the status of users and the system. The main goal of this algorithm is to ensure that the data is processed within a certain amount of time, while the availability of the fog servers is still guaranteed and both the cost of renting the cloud services and the number of application losses in the cloud are minimized. In addition, our proposed algorithm achieves an average response time arbitrarily close to the theoretical optimum. The performed discrete-event simulation demonstrates that, under the three-tier CoT scenario, our proposed mechanism outperforms the selected benchmarks and provides the best overall performance for the users.
The rest of the paper is organized as follows: Section 2 reviews related work. Section 3 introduces the system model. Section 4 presents the system analysis. Section 5 shows the theoretical solution and the proposed algorithm. Section 6 evaluates the performance of the proposed algorithm. A conclusion is drawn in Section 7.

2. Related Work
2.1. IoT with Fog Computing
The integration of Cloud Computing and IoT has drawn some attention recently, with studies addressing different aspects, such as cloud support for the IoT [12], device virtualization and service provisioning [13], and computation offloading in mobile clouds [14]. Recently, several researchers have turned their attention to Fog Computing, which acts as a supplement to Cloud Computing in several scenarios. Likewise, Fog Computing is a strong candidate to provide support for the IoT paradigm, meeting several requirements of such systems. In this context, the authors in [15] utilized a Fog Computing supported Medical Cyber-Physical System (MCPS) to solve the problem of unstable and long-delay links between the data centres in the Cloud tier and medical devices located in the Things tier. Along with
the introduction of Fog Computing, three-tier CoT systems started to become a top research topic. A theoretical model of three-tier CoT systems was proposed in [16]. The authors studied the joint optimization problem of service provisioning, data offloading and task scheduling with the aim of minimizing the system resource usage while meeting the defined QoS requirements. In [17], the authors focused on studying the interactions between the fog layer and the cloud layer in such systems. They mathematically characterized a Fog Computing network in terms of power consumption, service latency, CO2 emission, and cost. The authors in [18] modeled the Fog Computing system as a Markov decision process, and provided a method to minimize operational costs while providing the required QoS. In [19], the authors aimed to minimize the task completion time while providing a better user experience by utilizing the knowledge of task scheduling and resource management in the Fog Computing system. Differently from our work, the aforementioned works did not consider the propagation delay incurred by the transmission in communication channels, nor did they provide a cost-effective processing decision in a distributed way.

2.2. Lyapunov Optimization Approach in Fog and Cloud Computing
In dynamic systems, Lyapunov optimization is a promising approach to solve resource allocation, traffic routing, and scheduling problems. Such an optimization approach has been applied to traffic routing and scheduling in wireless ad hoc networks [20, 21], but these works did not consider processing applications in the processing network environment. In processing networks, [22] proposed a method for offloading one application from a mobile device to a cloud data centre for its processing. [23, 24, 25] studied processing networks with limited numbers of network nodes, while we allow arbitrarily large numbers of things, nodes and data centres in the system.
In [26], the authors focused on studying task processing and offloading in a wired network, but they overlooked the potential conflict of performance requirements between them. A preliminary version of this work was presented in [27], where only the two-way tradeoff between average response time and average cost was considered. The current version contains substantial further analytic details, simulation results, and discussion. In particular: (1) We consider an M/G/1/K*PS queue instead of an M/M/1 queue in the Cloud tier, which is a more realistic approach due to the heterogeneity of servers used in data centres to provide services to the users. In other words, the service time
needs to be represented by a more general probability distribution rather than the convenient exponential distribution. As a consequence, the analysis of system performance is substantially more complex, since the length of time for completing a service does not follow the Markovian property. (2) We develop a more complex optimization problem. Our new optimization goal is to address a three-way tradeoff among average response time, average cost and average number of application loss, while the preliminary version only handles the two-way tradeoff without considering the average number of application loss. (3) We perform different sets of experiments to fully evaluate the performance of our proposed algorithm and make a fair comparison with the selected benchmarks by using different parameters. We also include data traces from real sensors to analyze the performance of the proposal under more realistic conditions.

3. System Model
3.1. The Overview of the Three-tier Cloud of Things
As shown in Figure 1, our system model encompasses three tiers, namely the Things tier (leftmost tier), the Cloud tier (rightmost tier) and the Fog tier (intermediate tier). The Things tier includes various physical devices such as nodes composing a wireless sensor and actuator network (WSAN), RFID tags, mobile devices and several other real-world objects instrumented by sensors and/or actuators. In the Things tier, IoT devices send out applications to the backend system for further processing. These applications can be processed in either the Fog tier or the Cloud tier. The Fog tier contains the access points and the fog nodes, which are both key components of this tier. In our system, the access points are located between the Things tier and the Fog tier to transmit data between them, thus acting as bridges connecting the things to the other tiers. In this study, we assume that all the access points are located at the edge of the network, only one wireless hop away from the Things tier. The fog nodes (such as routers, switches, etc.) connect to access points by wired links and are capable of performing complex computation. If the Fog tier is selected to process applications, then the applications will be allocated to a selected fog node. Since fog nodes have limited resources (in comparison to a cloud node), the application arrival rate may be greater than the node capacity to serve the application requests (i.e., the service rate). This will cause the queue length at the fog nodes to grow, as well as cause the applications to wait
[Figure 1 depicts three device regions in the Things tier, each connected through an access point to a fog node in the Fog tier; each fog node in turn connects to a cloud data centre in the Cloud tier.]

Figure 1: The system architecture of the three-tier CoT
too long in the queue to get service at the fog nodes. To avoid overflow and long waiting times at fog nodes, the applications can also be processed at the Cloud tier, which comprises typical (macro) data centres. In these cases, the application is sent to the cloud via an Internet connection. In this study, we also assume that the service rates of the Internet connection and the cloud nodes are both greater than those of the fog nodes, but their usage incurs a certain monetary cost for renting the required resources from external stakeholders. To keep the overall response time acceptable, some applications may be discarded directly, before being put into the processing queue at the selected data centre, if their deadlines are too close to be met. To provide QoS-aware services in our system, we implement a decision maker as a standalone component located at the access points to make online decisions regarding whether the applications will be sent to the Fog tier or to the Cloud tier.
3.2. Motivational Examples: Applications in Three-tier CoT Systems
There are many application domains that could benefit from using our proposed architecture, such as smart cities [28, 29], smart street lighting [30] and smart buildings [31], just to name a few. In a smart building application, the Things tier, as referenced in our system, may contain thousands of heterogeneous sensors measuring various building operating parameters, such as temperature, light and humidity, along with keycard readers and parking space occupancy sensors. Data from these sensors needs to be analysed to determine whether actions are needed, such as turning off the lights when no activity is detected in a room. Compared to some widely used wireless communication technologies, e.g. WiFi and Bluetooth, NB-IoT (Narrowband Internet of Things) has promising features, e.g. low cost, low power consumption, wide coverage, and high capacity for IoT devices; we thus employ it in our system. Please note that some real-world deployments have upgraded the access points of NB-IoT to F-APs (fog access points) [32], which are generally capable of running our proposed algorithm and making the decisions on time. With the enhanced devices and the wired network connections, the status of the backend system components can be collected in a periodic or real-time manner. The edge/fog nodes are typically installed on each floor, wing or even individual room, and they are responsible for performing emergency monitoring and response functions, controlling lighting and environments, and providing a building-resident compute and storage infrastructure to supplement the limited capabilities of local smartphones, tablets and computers. From time to time, the cloud can be used to analyse the data over a relatively long term to predict trends. For example, the long-term history of building operational telemetry and control actions can be uploaded to the cloud for large-scale analytics to determine operational aspects of buildings.
The stored operational history can then train machine learning models, which can be used to further optimize building operations by executing these cloud-trained machine learning models in the fog nodes.

3.3. Application Processing Flow and Notations
With the previously introduced three-tier Cloud of Things system model, we now focus on the theoretical analysis of the data offloading strategy within the proposed model. In order to combine the theoretical analysis with practical application [33], we first need to understand how this system runs. Figure 2 shows the application processing flow in our system. To provide QoS-aware services with a shorter response time in a cost-effective way, we
implement a decision maker located at the facade of the fog nodes to make online decisions based on our proposed algorithm. The key decision to be made regards whether the applications, which are generated by IoT devices, will be sent to the Fog tier or to the Cloud tier. We run the algorithm at the beginning of each time slot, using the parameters of the first arrived application to decide which communication channel and corresponding cloud node will be used if the remaining applications are sent to the cloud for processing. Then the Lyapunov optimization objective is used to decide whether the applications will be processed in the fog node or the pre-selected cloud node within this time slot. We can choose one specific cloud node and corresponding communication channel per time slot. In addition, if the waiting queues in the fog node and the communication channel both reach the pre-set backlog limit, then arriving applications will be discarded until the backlog length in either the fog node or the cloud node falls below the threshold. That is to say, the decision maker is the component that runs the proposed algorithm to make application offloading decisions in the three-tier Cloud of Things system, in order to provide the stakeholders with a cost-effective processing service. For the sake of simplicity of analysis, we use only one fog node to demonstrate how our proposed algorithm can optimize the system performance, but the approach also applies in the general case of multiple fog nodes. Please note that the cooperation between fog nodes is out of the scope of this paper. Table 1 provides definitions of the important parameters used in our system modeling and theoretical analysis.

4. System Analysis

4.1. Fog Node
During the tth time slot (the time slot is defined as a relatively long period in this paper, e.g., one hour), the arrival rate of applications at the fog node is λ(t) and the service rate is µ^(F)(t).
Since the fog node has limited computation capacity, λ(t) may be greater than µ^(F)(t). We allow a portion of the applications to be offloaded to the Cloud tier. Let η^(F)(t) denote the portion of the applications processed locally, and η_j^(C)(t) denote the portion offloaded to the jth cloud node. Let λ^(F)(t) = η^(F)(t)λ(t) denote the equivalent local arrival rate of the applications at the processor of the fog node, and λ_j^(C)(t) = η_j^(C)(t)λ(t) denote the equivalent arrival rate of the
Table 1: Key Parameters

t: The tth time slot for a long-term period (for short, we use t in the rest of the paper).
λ(t): Arrival rate of applications at the fog node in consideration at t.
λ^(F)(t): Arrival rate of applications processed by the processor of the fog node at t (decision variable).
λ_j^(C)(t): Arrival rate of applications at the jth cloud node at t (decision variable).
µ^(F)(t): Service rate of the fog node at t.
µ_j^(X)(t): Service rate of the communication channel to the jth cloud node at t.
µ_j^(C)(t): Service rate of the jth cloud node at t.
τ^(F)(t): Average response time of processing applications in the fog node at t.
τ_j^(X)(t): Average response time of transmitting applications to the jth cloud node via the communication channel at t.
τ_j^(C)(t): Average response time of processing applications in the jth cloud node at t.
τ_j(t): Propagation delay for sending applications to the jth cloud node via the communication channel at t.
τ(t): Overall average response time of processing applications at t.
P_{b,j}(t): Blocking probability of applications in the jth cloud node.
H_j^(C)(t): Application throughput of the jth cloud node at t.
ε(t): Overall application loss number of the entire system at t.
x_j(t): Indicator of using the jth channel and cloud node at t (decision variable).
B_j^(X)(t): Cost of using the communication channel to the jth cloud node at t.
B_j^(C)(t): Cost of using the jth cloud node at t.
U: Required mean maximum application loss number of the entire system.
D: Required mean response time.
ρ_j^(C)(t): Traffic intensity of the jth cloud node at t.
K_j(t): Capacity of the jth cloud node at t (decision variable).
[Figure 2 depicts the application processing flow: applications arriving at rate λ(t) pass through the access point to the decision maker, which routes them either to the fog node's queue and processor (service rate µ^(F)(t)) or through communication channel j (service rate µ_j^(X)(t)) to data centre j's queue and processor (service rate µ_j^(C)(t)), where a fraction P_{b,j} of arrivals is blocked.]

Figure 2: Application processing flow
communication channel to the jth cloud node. We have

λ(t) = λ^(F)(t) + Σ_{j=1}^{J} x_j(t) λ_j^(C)(t).  (1)
In this work, we suppose there is no application loss in the fog node. For the sake of practicality, we assume that the fog node can only offload its applications to one selected cloud node within each time slot, so that at most one λ_j^(C)(t), ∀j ∈ {1, ..., J}, can be positive during the tth time slot. Let x_j(t) indicate whether cloud node j is in use. There is one and only one x_{j*}(t) = 1 with λ_{j*}^(C)(t) > 0, and all other x_j(t) equal zero. Please note that, as mentioned before, we do not need prior knowledge of the statuses of the user and the system when using Lyapunov optimization. However, we need to make a decision at the beginning of each time slot, which means we cannot know the exact arrival and service rates of that time slot. To avoid misunderstanding, the prior knowledge regarding the statuses of user and system mentioned above refers to the probability distribution of the application arrival rates and service rates. The only thing that we need to know is the arrival and service rates of the current time slot. Furthermore, these arrival and service rates can be well estimated in a real application. For example, we may estimate or predict the arrival and service rates using historical trends (e.g. the rates at 11am will likely be similar to historical rates at 11am). Another approach is to sample a small portion of the data from the current time slot to derive the estimated rates of the entire time slot.

In our system, we define ρ^(F)(t) = λ^(F)(t)/µ^(F)(t) as the traffic intensity of the fog node. Then the average queue length at the processor of the fog node is

L^(F)(t) = ρ^(F)(t) / (1 − ρ^(F)(t)).

Since the average arrival rate of jobs actually entering the queue in the fog node is λ^(F)(t), we model the processor at the fog node as an M/M/1 queue [34], and the average local response time is

τ^(F)(t) = L^(F)(t)/λ^(F)(t) = [ρ^(F)(t)/(1 − ρ^(F)(t))]/λ^(F)(t) = 1/(µ^(F)(t) − λ^(F)(t)),  (2)

note that we must satisfy

µ^(F)(t) > λ^(F)(t).  (3)
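Eqs. (2)-(3) can be checked numerically; the sketch below (our own illustration, not the paper's code) derives the M/M/1 response time via Little's law and enforces the stability condition (3):

```python
# Hedged sketch of Eqs. (2)-(3): the fog processor is modelled as an M/M/1
# queue, so the average local response time is 1 / (mu_f - lam_f), valid only
# under the stability condition mu^(F)(t) > lam^(F)(t). Names are illustrative.

def fog_response_time(lam_f, mu_f):
    if mu_f <= lam_f:                    # Eq. (3) violated: queue is unstable
        raise ValueError("stability requires mu_f > lam_f")
    rho = lam_f / mu_f                   # traffic intensity rho^(F)(t)
    queue_len = rho / (1 - rho)          # average queue length L^(F)(t)
    return queue_len / lam_f             # Little's law -> 1/(mu_f - lam_f)

print(fog_response_time(lam_f=60.0, mu_f=80.0))  # prints 0.05, i.e. 1/(80-60)
```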
4.2. Communication Channel from Fog to Cloud
We suppose no application loss occurs in the data transmission between the fog node and the cloud node. During the tth time slot, the arrival rate to the jth communication channel equals the arrival rate to the jth cloud node, which is denoted as λ_j^(C)(t). The transmission rate in the communication channel from the fog node to the cloud node is µ_j^(X)(t). We model the communication channel as an M/M/1 queue if the jth channel is used, i.e., x_j(t) = 1, and the average transmission time in the channel is (the result is obtained in the same way as shown in Section 4.1)

τ_j^(X)(t) = 1 / (µ_j^(X)(t) − λ_j^(C)(t)).  (4)

Furthermore, if the jth communication channel is used, there will be a propagation delay τ_j(t). In summary, if the jth cloud node is used, the average response time for sending applications to the jth cloud node is

τ_j^(X)(t) = 1 / (µ_j^(X)(t) − λ_j^(C)(t)) + τ_j(t),  (5)

and we must have

µ_j^(X)(t) > λ_j^(C)(t).  (6)
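A hedged sketch of Eq. (5), with the stability check of Eq. (6); the function name and example numbers are ours:

```python
# Sketch of Eq. (5): transmission to cloud node j is an M/M/1 queueing delay
# plus a fixed propagation delay tau_j(t); Eq. (6) requires the channel
# service rate to exceed the channel arrival rate. Names are illustrative.

def channel_time(lam_c_j, mu_x_j, prop_delay_j):
    if mu_x_j <= lam_c_j:
        raise ValueError("Eq. (6): channel service rate must exceed arrivals")
    return 1.0 / (mu_x_j - lam_c_j) + prop_delay_j

print(round(channel_time(lam_c_j=40.0, mu_x_j=50.0, prop_delay_j=0.02), 6))  # 0.12
```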
Using the communication channel from the fog node to the jth cloud node introduces an overall cost B_j^(X)(t), and the cost is related to the bandwidth of the communication channel.

4.3. Processing at Cloud
The arrival rate at the cloud node is the same as that in the communication channel, as mentioned before, so the arrival rate to the jth cloud node is λ_j^(C)(t). The service rate is µ_j^(C)(t). Again, we define the traffic intensity of the jth cloud node as ρ_j^(C)(t) = λ_j^(C)(t)/µ_j^(C)(t), and we model the jth cloud node as an M/G/1/K*PS queue, where PS means processor sharing, which serves all jobs currently in the system at equal rates. The jth cloud node can handle at most K_j(t) applications at t; arriving applications will be blocked once this number is reached. The blocking probability P_{b,j}(t) is equal to the probability that there are K_j(t) jobs in the system:

P_{b,j}(t) = P[N_j^(C) = K_j(t)] = (1 − ρ_j^(C)(t))(ρ_j^(C)(t))^{K_j(t)} / (1 − (ρ_j^(C)(t))^{K_j(t)+1}).  (7)

Also, in order to finish application processing in time, we need to drop some applications in the cloud (because the cloud is the final destination of applications in this system) while still satisfying the constraint on the maximum number of application loss. In the current time slot, we assume that the throughput H_j^(C)(t) is the number of completed applications in the jth cloud node. When the jth cloud node reaches equilibrium, H_j^(C)(t) is equal to the rate of accepted requests:

H_j^(C)(t) = λ_j^(C)(t)[1 − P_{b,j}(t)].  (8)

Therefore, if the jth cloud node is used, the average processing time at the cloud node is

τ_j^(C)(t) = [ρ_j^(C)(t) + (K_j(t)ρ_j^(C)(t) − K_j(t) − 1)(ρ_j^(C)(t))^{K_j(t)+1}] / [(1 − ρ_j^(C)(t)) λ_j^(C)(t) (1 − (ρ_j^(C)(t))^{K_j(t)})],  (9)

note that we must have

µ_j^(C)(t) > λ_j^(C)(t).  (10)

We also assume that using the cloud node introduces a cost B_j^(C)(t), which is affected by the capacity of the jth cloud node K_j(t) and the application processing rate µ_j^(C)(t).
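Eqs. (7)-(9) can be sketched as below. Since the number-in-system distribution of an M/G/1/K*PS queue coincides with that of M/M/1/K, the K_j(t) = 1 case must reduce to a mean response time of one service time, which the sketch uses as a sanity check (a minimal sketch assuming ρ ≠ 1; all names are ours):

```python
# Sketch of Eqs. (7)-(9) for one cloud node: blocking probability P_{b,j}(t),
# throughput H_j^(C)(t), and mean processing time tau_j^(C)(t).
# Assumes rho != 1; names are illustrative, not from the paper.

def cloud_metrics(lam_c, mu_c, K):
    rho = lam_c / mu_c                                     # rho_j^(C)(t)
    p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))      # Eq. (7)
    throughput = lam_c * (1 - p_block)                     # Eq. (8)
    tau_c = (rho + (K * rho - K - 1) * rho**(K + 1)) / (   # Eq. (9)
        (1 - rho) * lam_c * (1 - rho**K))
    return p_block, throughput, tau_c

# Sanity check: with K = 1 there is no waiting, so tau equals 1/mu.
p, h, tau = cloud_metrics(lam_c=40.0, mu_c=50.0, K=1)
assert abs(tau - 1.0 / 50.0) < 1e-12
```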
4.4. Data Offloading for Cost-effective Quality Processing
The decision making involved in the three-tier CoT system regards which applications are to be offloaded to the cloud, and which cloud node should be used, in order to balance the response time, throughput and cost. As we introduced previously, we presume that there is no cost if the applications are processed in the fog node. However, as discussed in Sections 4.2 and 4.3, if the applications are processed in the Cloud tier, this will introduce communication cost and processing cost. The overall cost at t is

B(t) = Σ_{j=1}^{J} x_j(t)[B_j^(X)(t) + B_j^(C)(t)].  (11)
On the other hand, as discussed in Sections 4.1, 4.2 and 4.3, the average response time at t is

τ(t) = (λ^(F)(t)/λ(t)) τ^(F)(t) + Σ_{j=1}^{J} (x_j(t)λ_j^(C)(t)/λ(t)) [τ_j^(X)(t) + τ_j^(C)(t)].  (12)

Furthermore, since there is no application loss in either the fog node or the communication channels, by Equation (8) the total application loss number of the entire system equals the application arrival rate at the cloud node minus the cloud node's actual throughput, which can be expressed as

ε(t) = Σ_{j=1}^{J} x_j(t)[λ_j^(C)(t) − H_j^(C)(t)] = Σ_{j=1}^{J} x_j(t) λ_j^(C)(t) P_{b,j}(t).  (13)

Our aim is to minimize the long-term average cost

lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E[B(t)],  (14)

and satisfy the following requirements on the response time and the number of application loss:

lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E[τ(t)] ≤ D,  (15)

lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E[ε(t)] ≤ U.  (16)
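The per-slot quantities in Eqs. (11)-(13) combine as in the following sketch; since only one cloud node j* is active per slot, the sums reduce to a single term (all names and numbers are illustrative):

```python
# Sketch of Eqs. (11)-(13) for a slot with a single selected cloud node j.
# tau_f, tau_x, tau_c and p_block would come from Eqs. (2), (5), (9) and (7);
# here they are illustrative inputs, not values from the paper.

def slot_metrics(lam, lam_f, lam_c, j, tau_f, tau_x, tau_c, p_block,
                 cost_x, cost_c):
    B = cost_x[j] + cost_c[j]                                  # Eq. (11)
    tau = (lam_f / lam) * tau_f \
        + (lam_c / lam) * (tau_x[j] + tau_c[j])                # Eq. (12)
    eps = lam_c * p_block[j]                                   # Eq. (13)
    return B, tau, eps

B, tau, eps = slot_metrics(lam=100.0, lam_f=60.0, lam_c=40.0, j=0,
                           tau_f=0.05, tau_x=[0.12], tau_c=[0.03],
                           p_block=[0.1], cost_x=[2.0], cost_c=[5.0])
```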
5. Problem Solution Based on Lyapunov Optimization

In order to satisfy the constraints on the response time (15) and the number of application loss (16), we introduce virtual queues Q_D(t) and Q_U(t), which are used to accumulate the values of the response time and the number
of application loss accordingly. More precisely, these two queues are used to record the applications that exceed the expected finish time and the expected maximum application loss to ensure a high quality service is provided within the system. Therefore, we define QD (0) = 0, QU (0) = 0. QD (t) and QU (t) evolves as follows QD (t + 1) = max[QD (t) − D, 0] + τ (t),
(17)
QU (t + 1) = max[QU (t) − U, 0] + ε(t).
(18)
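A single step of the recursions (17) and (18) can be sketched as follows (our own illustration; the names are ours):

```python
def queue_update(Q, budget, arrival):
    """One step of the virtual-queue recursions (17)-(18):
    Q(t+1) = max[Q(t) - budget, 0] + arrival, where budget is D (or U)
    and arrival is tau(t) (or eps(t))."""
    return max(Q - budget, 0.0) + arrival
```

The queue drains by at most its budget per slot and absorbs the slot's realized response time (or loss), so a backlog accumulates exactly when the constraint is being exceeded on average.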
Note that τ(t) and ε(t) relate to the two constraints in the optimization aim separately.

Lemma 1. If Q_D(t) and Q_U(t) are mean rate stable, then (15) and (16) are satisfied.

Proof. By the definition of Q_D(t), we have

\[
Q_D(t+1) \ge Q_D(t) - D + \tau(t). \qquad (19)
\]

Taking expectations of the above inequality, we have

\[
\mathbb{E}[Q_D(t+1)] - \mathbb{E}[Q_D(t)] \ge -D + \mathbb{E}[\tau(t)].
\]

Summing both sides over t ∈ [0, T−1] for some positive integer T and using the law of iterated expectations, we have

\[
\mathbb{E}[Q_D(T)] - \mathbb{E}[Q_D(0)] \ge -TD + \sum_{t=0}^{T-1}\mathbb{E}[\tau(t)].
\]

Dividing by T, we have

\[
\frac{\mathbb{E}[Q_D(T)] - \mathbb{E}[Q_D(0)]}{T} \ge -D + \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\tau(t)].
\]

Applying Q_D(0) = 0 to the above inequality, we have

\[
\frac{\mathbb{E}[Q_D(T)]}{T} \ge -D + \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\tau(t)].
\]

Finally, letting T → ∞, we have

\[
\limsup_{T\to\infty}\frac{\mathbb{E}[Q_D(T)]}{T} \ge -D + \limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\tau(t)].
\]

If Q_D(t) is mean rate stable, then \limsup_{T\to\infty} \mathbb{E}[Q_D(T)]/T = 0, so

\[
0 \ge -D + \limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\tau(t)].
\]

Rearranging terms, we have

\[
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\tau(t)] \le D.
\]
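The mechanics of Lemma 1 can be illustrated numerically: the sketch below (our own, with illustrative parameters and exponential per-slot response times) simulates the recursion in (17) and estimates Q(T)/T, whose limit is zero exactly in the mean-rate-stable case.

```python
import random

def run(D, mean_tau, T=20000, seed=1):
    """Simulate Q(t+1) = max[Q(t) - D, 0] + tau(t) as in (17) and return
    Q(T)/T, the quantity whose limit defines mean rate stability."""
    rng = random.Random(seed)
    Q = 0.0
    for _ in range(T):
        tau = rng.expovariate(1.0 / mean_tau)  # per-slot response time sample
        Q = max(Q - D, 0.0) + tau
    return Q / T

# E[tau] < D  -> Q is mean rate stable and Q(T)/T tends to 0
# E[tau] > D  -> Q grows linearly and Q(T)/T tends to E[tau] - D
```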
By introducing the virtual queue Q_D(t), we converted constraint (15) into a queue stability problem: if we can guarantee that Q_D(t) is mean rate stable, (15) is satisfied. By the definition of Q_U(t), we have

\[
Q_U(t+1) \ge Q_U(t) - U + \varepsilon(t). \qquad (20)
\]

Repeating the previous proof with the virtual queue Q_D(t) replaced by Q_U(t) and the response-time constraint D replaced by the application-loss constraint U shows that if we can guarantee that Q_U(t) is mean rate stable, (16) is satisfied.
5.1. Lyapunov Optimization Formulation

Next, we define a Lyapunov function as a scalar measure of the response time and application loss in the system:

\[
L(Q_D(t)) \triangleq \frac{1}{2}\,Q_D^2(t), \qquad (21)
\]
\[
L(Q_U(t)) \triangleq \frac{1}{2}\,Q_U^2(t). \qquad (22)
\]

We combine the two terms with a weight parameter V_2 expressing the priority between them:

\[
L(Q(t)) \triangleq L(Q_D(t)) + V_2\,L(Q_U(t)) = \frac{1}{2}\,Q_D^2(t) + \frac{V_2}{2}\,Q_U^2(t). \qquad (23)
\]

We then define the conditional unit-slot Lyapunov drift as

\[
\Delta(Q(t)) = \mathbb{E}\left[L(Q(t+1)) - L(Q(t)) \mid Q(t)\right]. \qquad (24)
\]
5.2. Bounding the Unit-slot Lyapunov Drift

Our primary aim is to bound Δ(Q(t)), i.e., the overall response time combined with the weighted application loss; our secondary aim is to bound the response time alone.

Lemma 2. For any t ∈ {0, ..., T−1} and any possible control decision, the Lyapunov drift Δ(Q(t)) is deterministically bounded in terms of the average response time and application loss as follows:

\[
\Delta(Q(t)) \le M + Q_D(t)\,\mathbb{E}[-D + \tau(t) \mid Q(t)] + V_2\,Q_U(t)\,\mathbb{E}[-U + \varepsilon(t) \mid Q(t)], \qquad (25)
\]

where M = M_1 + V_2 M_2 = \frac{1}{2}\left[\max \mathbb{E}\left(\tau(t)^2\right) + D^2\right] + \frac{V_2}{2}\left[\max \mathbb{E}\left(\varepsilon(t)^2\right) + U^2\right].

Proof.

\[
\Delta(Q(t)) = \mathbb{E}[L(Q(t+1)) - L(Q(t)) \mid Q(t)]
= \frac{1}{2}\,\mathbb{E}\left[Q_D^2(t+1) - Q_D^2(t) \mid Q(t)\right] + \frac{V_2}{2}\,\mathbb{E}\left[Q_U^2(t+1) - Q_U^2(t) \mid Q(t)\right].
\]

Applying equations (17) and (18), we have

\[
\Delta(Q(t)) = \frac{1}{2}\,\mathbb{E}\left[\left(\max[Q_D(t)-D,\,0] + \tau(t)\right)^2 - Q_D(t)^2 \mid Q(t)\right] + \frac{V_2}{2}\,\mathbb{E}\left[\left(\max[Q_U(t)-U,\,0] + \varepsilon(t)\right)^2 - Q_U(t)^2 \mid Q(t)\right]. \qquad (26)
\]

For any Q_D(t) ≥ 0, Q_U(t) ≥ 0, D ≥ 0, U ≥ 0, τ(t) ≥ 0, ε(t) ≥ 0, we have

\[
\left(\max[Q_D(t)-D,\,0] + \tau(t)\right)^2 \le Q_D(t)^2 + \tau(t)^2 + D^2 + 2Q_D(t)(\tau(t) - D),
\]
\[
\left(\max[Q_U(t)-U,\,0] + \varepsilon(t)\right)^2 \le Q_U(t)^2 + \varepsilon(t)^2 + U^2 + 2Q_U(t)(\varepsilon(t) - U),
\]

and therefore

\[
\begin{aligned}
\Delta(Q(t)) &\le \frac{1}{2}\,\mathbb{E}\left[\tau(t)^2 + D^2 + 2Q_D(t)(\tau(t)-D) \mid Q(t)\right] + \frac{V_2}{2}\,\mathbb{E}\left[\varepsilon(t)^2 + U^2 + 2Q_U(t)(\varepsilon(t)-U) \mid Q(t)\right] \\
&\le \mathbb{E}\left[\frac{\tau(t)^2 + D^2}{2} \,\Big|\, Q(t)\right] + V_2\,\mathbb{E}\left[\frac{\varepsilon(t)^2 + U^2}{2} \,\Big|\, Q(t)\right] \\
&\quad + \mathbb{E}\left[Q_D(t)\tau(t) - Q_D(t)D \mid Q(t)\right] + V_2\,\mathbb{E}\left[Q_U(t)\varepsilon(t) - Q_U(t)U \mid Q(t)\right]. \qquad (27)
\end{aligned}
\]

Defining M_1 and M_2 as finite constants that bound the first two terms on the right-hand side of the above drift inequality, for all t and all possible Q(t),

\[
M_1 = \frac{1}{2}\left[\max \mathbb{E}\left(\tau(t)^2\right) + D^2\right], \qquad M_2 = \frac{1}{2}\left[\max \mathbb{E}\left(\varepsilon(t)^2\right) + U^2\right],
\]

we then have M = M_1 + V_2 M_2.
5.3. Minimizing the Drift-Plus-Cost Performance

Define the same Lyapunov function L(Q(t)) as in (23), and let Δ(Q(t)) denote the conditional Lyapunov drift at t. While taking actions to minimize a bound on Δ(Q(t)) in every time slot would stabilize the system, the resulting cost might be unnecessarily large. To avoid this, we instead minimize a bound on the following drift-plus-penalty expression:

\[
\Delta(Q(t)) + V_1\,\mathbb{E}[B(t) \mid Q(t)], \qquad (28)
\]

where V_1 ≥ 0 is a parameter that represents an "importance weight" on how much we emphasize cost minimization. Adding the penalty term to both sides of (25) yields a bound on the drift-plus-penalty:

\[
\Delta(Q(t)) + V_1\,\mathbb{E}[B(t) \mid Q(t)] \le M + \mathbb{E}[Q_D(t)\tau(t) \mid Q(t)] + V_2\,\mathbb{E}[Q_U(t)\varepsilon(t) \mid Q(t)] - D\,Q_D(t) - V_2\,U\,Q_U(t) + V_1\,\mathbb{E}[B(t) \mid Q(t)]. \qquad (29)
\]

At each time slot, we are therefore motivated to solve

\[
\min_{\lambda^{(F)}(t),\ \lambda_j^{(C)}(t),\ x_j(t),\ \forall j \in \mathcal{J}} \ \mathbb{E}[Q_D(t)\tau(t) \mid Q(t)] + V_2\,\mathbb{E}[Q_U(t)\varepsilon(t) \mid Q(t)] + V_1\,\mathbb{E}[B(t) \mid Q(t)]. \qquad (30)
\]
Substituting the expressions (12), (13) and (11) for τ(t), ε(t) and B(t) into (30), we obtain the following one-slot optimization problem P1; (30) is minimized if we opportunistically minimize (31) at each step [11].
\[
\begin{aligned}
\min_{\lambda^{(F)}(t),\ \lambda_j^{(C)}(t),\ x_j(t),\ K_j(t),\ \forall j \in \mathcal{J}} \Bigg\{ & Q_D(t)\left[\frac{\lambda^{(F)}(t)}{\lambda(t)}\,\tau^{(F)}(t) + \sum_{j=1}^{J}\frac{x_j(t)\left(\lambda(t) - \lambda^{(F)}(t)\right)}{\lambda(t)}\left(\tau_j^{(X)}(t) + \tau_j^{(C)}(t)\right)\right] \\
& + V_2\,Q_U(t)\sum_{j=1}^{J} x_j(t)\,\lambda_j^{(C)}(t)\,P_{b,j}(t) + V_1\sum_{j=1}^{J} x_j(t)\left[B_j^{(X)}(t) + B_j^{(C)}(t)\right] \Bigg\}, \qquad (31)
\end{aligned}
\]

subject to

\[
x_j(t) \in \{0, 1\}, \qquad (32)
\]
\[
\sum_{j=1}^{J} x_j(t) = 1, \qquad (33)
\]
\[
\lambda^{(F)}(t) \le \mu^{(F)}(t), \qquad (34)
\]
\[
\lambda_j^{(C)}(t) \le \min\left(\mu_j^{(C)}(t),\ \mu_j^{(X)}(t)\right). \qquad (35)
\]
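For a fixed choice of cloud j and fog rate, the value of the one-slot objective (31) can be computed directly, as in the following sketch (our own illustration; all names are ours, and we assume the split λ_j^{(C)}(t) = λ(t) − λ^{(F)}(t)):

```python
def slot_objective(QD, QU, V1, V2, lam, lam_f, j,
                   tau_f, tau_x, tau_c, p_block, B_x, B_c):
    """Value of the one-slot objective (31) for a candidate split:
    lam_f sent to the fog, lam - lam_f sent to cloud j (x_j = 1)."""
    lam_c = lam - lam_f
    delay = QD * ((lam_f / lam) * tau_f
                  + (lam_c / lam) * (tau_x[j] + tau_c[j]))
    loss = V2 * QU * lam_c * p_block[j]
    cost = V1 * (B_x[j] + B_c[j])
    return delay + loss + cost
```

The three summands mirror the three terms of (31): the backlog-weighted response time, the backlog-weighted expected loss, and the monetary cost of the chosen channel and cloud.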
In the next subsections, we first show that minimizing (30) at each time slot achieves a quantified near-optimal solution, and we then provide the solution to P1.

5.4. Optimality Analysis

Let † denote any S-only offloading policy, i.e., any policy whose decisions on λ^{(F)}(t), λ_j^{(C)}(t), x_j(t), ∀j ∈ J depend only on the system state at t (i.e., λ(t), μ^{(F)}(t), μ_j^{(X)}(t), μ_j^{(C)}(t), τ_j^{(X)}(t), B_j^{(X)}(t), B_j^{(C)}(t), ∀j ∈ J) and not on Q(t). Let τ†(t), ε†(t) and B†(t) denote the average response time, average number of application losses and average cost at t under policy †. Note also that, in each time slot, the average response time and average application loss are independent of those in the previous time slot. Then

\[
\begin{aligned}
\Delta(Q(t)) + V_1\,\mathbb{E}[B(t) \mid Q(t)]
&\le M + Q_D(t)\,\mathbb{E}[\tau(t)-D \mid Q(t)] + V_2\,Q_U(t)\,\mathbb{E}[\varepsilon(t)-U \mid Q(t)] + V_1\,\mathbb{E}[B(t) \mid Q(t)] \qquad (36)\\
&\le M + Q_D(t)\,\mathbb{E}\left[\tau^{\dagger}(t)-D \mid Q(t)\right] + V_2\,Q_U(t)\,\mathbb{E}\left[\varepsilon^{\dagger}(t)-U \mid Q(t)\right] + V_1\,\mathbb{E}\left[B^{\dagger}(t) \mid Q(t)\right] \qquad (37)\\
&= M + Q_D(t)\,\mathbb{E}\left[\tau^{\dagger}(t)-D\right] + V_2\,Q_U(t)\,\mathbb{E}\left[\varepsilon^{\dagger}(t)-U\right] + V_1\,\mathbb{E}\left[B^{\dagger}(t)\right]. \qquad (38)
\end{aligned}
\]

Now assume there exists δ > 0 such that \mathbb{E}[\tau^{\dagger}(t)] + V_2\,\mathbb{E}[\varepsilon^{\dagger}(t)] \le R - \delta can be achieved by an S-only policy [11, 23], where R = D + V_2 U, and let B^{*}(\delta) be the optimal average cost among all feasible S-only policies. We then have

\[
(37) \le M - Q(t)\,\delta + V_1\,B^{*}(\delta). \qquad (39)
\]

Taking expectations of (39), summing over t ∈ [0, T−1] for some positive integer T, then using the law of iterated expectations and telescoping the drift terms, we have

\[
\mathbb{E}[L(Q(T))] - \mathbb{E}[L(Q(0))] + V_1\sum_{t=0}^{T-1}\mathbb{E}[B(t)] \le TM - \delta\sum_{t=0}^{T-1}\mathbb{E}[Q(t)] + V_1\,T\,B^{*}(\delta). \qquad (40)
\]

Rearranging the terms in the above and neglecting non-negative quantities where appropriate yields the following two inequalities:

\[
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q(t)] \le \frac{M + V_1\,B^{*}(\delta) - \frac{V_1}{T}\sum_{t=0}^{T-1}\mathbb{E}[B(t)]}{\delta} + \frac{\mathbb{E}[L(Q(0))]}{T\delta}, \qquad (41)
\]

and

\[
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[B(t)] \le \frac{M}{V_1} + B^{*}(\delta) + \frac{\mathbb{E}[L(Q(0))]}{V_1 T}, \qquad (42)
\]

where the first inequality follows by dividing (40) by Tδ and the second by dividing (40) by V_1 T. Taking limits as T → ∞ shows that

\[
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q(t)] \le \frac{\left[M_1 + V_2 M_2\right] + V_1\left[B^{*}(\delta) - B^{*}\right]}{\delta}, \qquad (43)
\]

where B^{*} is the optimal long-term average cost achieved by any policy. We also have

\[
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[B(t)] \le \frac{M_1 + V_2 M_2}{V_1} + B^{*}(\delta). \qquad (44)
\]
The bounds (43) and (44) demonstrate an [O(V_1), O(1/V_1), O(V_2)] three-way tradeoff among average response time, average number of application losses and average cost.

5.5. The Solution to P1 (Unit-slot Optimization)

In this subsection, we solve problem P1 in (31)–(35); the pseudocode of our proposed Unit-slot Optimization algorithm is given in Algorithm 1. First, we traverse j to find the j with x_j(t) = 1. Given x_j(t) = 1, our aim is to optimize λ^{(F)}(t), with λ_j^{(C)}(t) = λ(t) − λ^{(F)}(t). This is a simple optimization problem: we examine the feasible solutions at which the first derivative is zero; i.e., the λ^{(F)}(t) at which the first derivative of (31) with respect to λ^{(F)}(t) is zero guarantees the minimal value of (31).

6. Performance Evaluation
In this section, we first provide the details of our simulation environment, and then investigate the performance of our proposed algorithm by evaluating several performance metrics, e.g., average response time, average cost, average number of application losses, and backlogs under different settings.
Algorithm 1 Unit-slot Optimization Algorithm
1: Set Fopt = ∞
2: for each j ∈ J do
3:   Set x_j(t) = 1 and set x_i(t) = 0 for all i ∈ J with i ≠ j
4:   Given the above x_j(t), ∀j ∈ J, optimize (31) under the constraints (32), (33), (34) and (35). Derive the optimal λ^{(F)}(t), and denote the optimal value of (31) by F
5:   if F ≤ Fopt then
6:     Set Fopt = F
7:     Set x*_j = x_j(t), ∀j ∈ J
8:     Set λ^{(F)*} = λ^{(F)}(t)
9:   end if
10: end for
11: The optimal solution is x_j(t) = x*_j, ∀j ∈ J, and λ^{(F)}(t) = λ^{(F)*}
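Algorithm 1 can be sketched in code as follows. This is our own simplified illustration: it replaces the closed-form derivative condition with a grid search over λ^{(F)}(t), folds the per-cloud parameters into an illustrative dict, and omits the K_j(t) decision.

```python
def unit_slot_opt(lam, mu_f, clouds, QD, QU, V1, V2, tau_f, steps=200):
    """Sketch of Algorithm 1: for each cloud j, search the fog rate
    lam_f in [0, min(lam, mu_f)] (constraint (34)) and keep the pair
    (j, lam_f) minimizing the one-slot objective (31). Each entry of
    `clouds` is a dict with illustrative keys tau_x, tau_c, p_block,
    B_x, B_c and mu_cap (the min of channel and cloud service rates)."""
    best = (float("inf"), None, None)
    for j, c in enumerate(clouds):
        hi = min(lam, mu_f)
        for k in range(steps + 1):
            lam_f = hi * k / steps
            lam_c = lam - lam_f
            if lam_c > c["mu_cap"]:          # constraint (35)
                continue
            val = (QD * ((lam_f / lam) * tau_f
                         + (lam_c / lam) * (c["tau_x"] + c["tau_c"]))
                   + V2 * QU * lam_c * c["p_block"]
                   + V1 * (c["B_x"] + c["B_c"]))
            if val < best[0]:
                best = (val, j, lam_f)
    return best  # (optimal value Fopt, chosen cloud j, chosen lam_f)
```

With empty virtual queues (QD = QU = 0) the objective reduces to the monetary cost alone, so the search simply picks the cheapest cloud; as the backlogs grow, the delay and loss terms start to dominate the choice.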
6.1. Simulation Environment Setting

In order to evaluate the performance of our proposed algorithm, we carried out a discrete event simulation (DES) using SimPy [35]. We first established the three-tier CoT system shown in Figure 2. A total of 300 things were generated in the things tier by default, and each of them connects to a fog node. All the generated fog nodes have a fixed service rate, as shown in Table 2. We also created 10 different levels of communication channels connected to 10 different clouds, where each level has a different transmission rate, and a cloud that offers more computation resources has a more expensive rental price. In addition, we preset the maximum queue length in both fog nodes and communication channels to 50. Regarding our proposed algorithm, the two penalty terms V_1 and V_2 are used to adjust how much we prioritize the average cost and the average number of application losses, respectively. The key parameters used in the simulations are shown in Table 2.

To evaluate the performance of our proposed unit-slot optimization algorithm, we also implemented three other algorithms for comparison, namely Fog-First, Cloud-First and Round Robin. In the "Fog-First" algorithm, each newly arrived application is sent to a fog node for processing by default, until the preconfigured queue length on the fog node is reached. After that, the remaining applications are sent to the cloud for processing until the
Table 2: Default simulation settings

Symbol    Definition                                                            Default
N         The number of applications to be processed per time slot.             300
lamb      The arrival rate in the fog node.                                     8.6
mu_f      The service rate in the fog node.                                     1.9
mu_x      The service rate in communication channel j (j = 1).                  0.68
mu_c      The service rate in cloud node j (j = 1).                             0.33
J         The number of communication channels and corresponding cloud nodes.   10
V_1       The penalty term on cost.                                             10
V_2       The penalty term on application loss.                                 0.1
D_fog     The backlog limit in the fog node.                                    50
D_cloud   The backlog limit in the cloud node.                                  50
range     Number of experiment repetitions.                                     5000
queueing length on the fog node decreases. In the "Cloud-First" algorithm, each newly arrived application is sent to the cloud for processing by default. Due to the monetary cost, the upper bound on the number of dropped applications and the predefined backlog limit, some of the applications are instead sent to the fog tier for processing. In the "Round Robin" algorithm, the first application is sent to the fog for processing, the next one is sent to the cloud, and the remaining applications follow this distribution rule repeatedly.

Additionally, in order to better study our proposed approach, we used three types of input data to evaluate the system performance. First, we generated the input using a Poisson distribution, which strictly follows our theoretical analysis, to support its correctness. Second, we generated the input using a general probability distribution, e.g., the Erlang distribution. Finally, we used real data traces of light from the dataset provided by the Intel Lab located in Berkeley (IBRL) [36]. It provides data sets collected from 54 Mica2Dot sensor nodes distributed in the IBRL facilities. The observations/samplings were performed between February 28 and April 5, 2004, and include temperature, humidity, light and voltage data, with sampling intervals of approximately 31 seconds. In the following, all the presented data are averaged over 5000 runs; consequently, a 95% confidence interval
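The behavior of the benchmark dispatch policies can be sketched with a toy slotted simulation (our own pure-Python illustration, not the authors' SimPy model; the rates are loosely based on Table 2, with mu_c = 3.3 standing in for the aggregate service rate of the 10 clouds):

```python
import random

def simulate(prefer_fog, T=5000, lam=8.6, mu_f=1.9, mu_c=3.3,
             fog_cap=50, cloud_cap=50, seed=0):
    """Toy slotted simulation: each slot, a Poisson number of applications
    arrives; each is dispatched by the policy prefer_fog(fog, cloud) with
    overflow to the other tier, and is dropped if both backlogs are full.
    Each queue serves a Poisson number of jobs per slot. Returns the
    fraction of dropped applications."""
    rng = random.Random(seed)

    def poisson(rate):
        # Knuth's method; adequate for the small rates used here
        limit, k, p = 2.718281828459045 ** (-rate), 0, 1.0
        while p > limit:
            k += 1
            p *= rng.random()
        return k - 1

    fog = cloud = dropped = arrived = 0
    for _ in range(T):
        for _ in range(poisson(lam)):
            arrived += 1
            if prefer_fog(fog, cloud):
                if fog < fog_cap:
                    fog += 1
                elif cloud < cloud_cap:
                    cloud += 1
                else:
                    dropped += 1
            else:
                if cloud < cloud_cap:
                    cloud += 1
                elif fog < fog_cap:
                    fog += 1
                else:
                    dropped += 1
        fog = max(fog - poisson(mu_f), 0)
        cloud = max(cloud - poisson(mu_c), 0)
    return dropped / arrived

fog_first = lambda fog, cloud: True     # always try the fog queue first
cloud_first = lambda fog, cloud: False  # always try the cloud queue first
```

Since the illustrative total service rate (1.9 + 3.3) is below the arrival rate 8.6, both static policies saturate their backlogs and drop a substantial fraction of applications, which is precisely the regime where a dynamic tradeoff policy matters.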
with a 5% (or better) precision is achieved. 6.2. The Overall Performance Comparison among Algorithms
[Figure 3 near here: grouped bars of the overall optimization aim (average response time, average total cost, average number of application losses) for Fog-First, Cloud-First, Round Robin and unit-slot optimization, under waiting-queue backlog limits of 50, 60, 80 and 120.]

Figure 3: The overall performance comparison among the unit-slot optimization algorithm and the benchmark algorithms
In Figure 3, the change of the overall optimization aim (the weighted sum of average response time, average cost and average number of application losses in cloud) for the different algorithms is shown under different backlog limits. This figure demonstrates that our algorithm has the best overall performance among the benchmark algorithms. More precisely, our algorithm not only provides a cost-effective service, but also satisfies each individual QoS metric as far as possible. In each group, the right-most column represents the result of the "Round Robin" algorithm, the left-most column represents the result of the "Fog-First" algorithm, the second-right column represents the result of the "Cloud-First" algorithm, and the second-left column represents the result of our proposed unit-slot optimization algorithm. For all columns, the bottom part (in red)
indicates the average response time, the middle part (in black) indicates the average total cost, and the top part (in blue) indicates the average number of application losses in cloud.

For the "Fog-First" algorithm, more applications are preferentially processed at fog nodes as the backlog limit increases. With the cost-free but relatively slow processing at the fog nodes, the average monetary cost and the average number of application losses in cloud thus decrease, while the average response time increases accordingly. For the "Cloud-First" algorithm, in contrast, both the average response time and the average number of application losses in cloud decrease as the backlog limit increases. This is due to the increasing number of applications that attempt to be processed at the cloud nodes, which have higher service rates. The average number of application losses also decreases as the backlog limit increases. Meanwhile, the "Round Robin" algorithm is the fairest algorithm, since all the applications are distributed evenly between fog and cloud. As shown in Figure 3, the overall performance of this algorithm is the worst, since no sophisticated optimization-driven techniques are used in this approach.

Compared to these three benchmark algorithms, the overall performance of our proposed unit-slot optimization algorithm is the best in all cases. This result is a consequence of our algorithm being capable of dynamically finding an optimized tradeoff among average cost, average response time and average number of application losses in cloud. The other three approaches make their application allocation decisions according to the static status of the system, more precisely, the preference for using fog or cloud to process the incoming applications.

6.3. The Impact of Different Penalty Terms on the Performance of the Proposed Algorithm

In order to investigate the effects of the penalty terms V_1 and V_2 in our proposed unit-slot optimization algorithm, we varied their values and observed how the average cost, average response time and average number of application losses in cloud change accordingly. In addition, we employed three types of data, namely Poisson-distributed input, generally distributed input and real data traces, to test the proposed algorithm.

In these experiments, the value of V_1 represents the importance of the average (monetary) cost, while the value of V_2 represents the importance of the average number of application losses in cloud. By varying these values, we can analyse how the different priorities given by the user to these different
[Figure 4 near here: average cost, average response time and average number of application losses for V1 ∈ {10, 20, 30, 40, 50} with V2 = 0.1. Panels: (a) applications generated by Poisson distributions; (b) applications generated by general distributions; (c) real data trace.]

Figure 4: The performance tradeoff on processing applications in the unit-slot algorithm when varying V1
[Figure 5 near here: average cost, average response time and average number of application losses for V1 = 10 with V2 ∈ {0.1, 0.3, 0.5, 0.7, 0.9}. Panels: (a) applications generated by Poisson distributions; (b) applications generated by general distributions; (c) real data trace.]

Figure 5: The performance tradeoff on processing applications in the unit-slot algorithm when varying V2
parameters will impact the system performance. Such analysis is important to show how our proposal can be used for processing various types of data (including real data traces) as a decision-support tool for managing the tradeoffs involved in a three-tier CoT system. By properly managing such tradeoffs, it is possible to avoid violating SLAs and to tune the system to deliver the best possible QoS.

As depicted in Figure 4(a), by fixing the value of V_2 and setting V_1 to a small value, we observed that the average response time is low while the average total cost and the average number of application losses are high. This occurs because the penalty on monetary cost is gentle, so the incoming applications have a higher probability of being sent to the cloud for processing in order to obtain a better response time. Under such an allocation scheme, the number of application losses in cloud also increases, because more applications cannot be completed on time. By setting V_1 to a large value, a decrease of the average total cost and of the average number of application losses in cloud is observed, along with an increase of the average response time. Similarly, we can observe from Figure 4(b) and Figure 4(c) that this result also holds for both the general distribution model and the real data traces.

In addition, in Figure 5, we study the performance of our proposed algorithm by fixing the value of V_1 and adjusting V_2. We first set V_2 to a small value, and observe from Figure 5(a) that the average response time and the average total cost remain small, but the average number of application losses is large. This is because the penalty on monetary cost is large while the penalty on throughput is small, so the applications are more likely to be sent to the fog for processing without generating monetary cost; once the threshold is reached, a cloud with a lower total cost is used to process the applications. By setting V_2 to a large value, in contrast, an increase of the average cost and a decrease of the average number of application losses are observed, along with an increase of the average response time. The trend observed in Figure 5(a) is also found in Figure 5(b) and Figure 5(c), even when the data inputs are changed to the general distribution and the real data traces.

6.4. The Change of Backlog in Three-Tier CoT Systems

Apart from studying the impact of V_1 and V_2 on the average cost, average response time and average number of application losses in cloud, we also investigated how the backlog of the queues is affected, which clearly shows where a specific application is sent for processing and how the backlog in the fog node or cloud node changes accordingly. Figure 6 depicts the overall changes of backlog in the fog node and the cloud node when V_1 and V_2 are equally important in the system.
The embedded figure (at the top right corner) depicts the change of backlog in the cloud node when the length of the waiting queue has already reached its limit and newly arriving applications start being dropped. As shown in the figure, the backlog in the Fog tier increases from 0 to the preset backlog limit (set to 50 in our experiment) very quickly. After that, applications are sent to the cloud for processing until space becomes available in the fog node or the waiting queue in the cloud reaches the cloud backlog limit.
[Figure 6 near here: backlog in fog and backlog in cloud over time; the inset shows the cloud backlog saturating at the limit of 50.]

Figure 6: The backlog number of processing applications by the unit-slot optimization algorithm
6.5. The Application Offloading Between Fog Node and Cloud Node

Figure 7 provides a global view of how the applications are processed within the system when different values are configured for the penalty terms V_1 and V_2, showing how the applications are allocated among the tiers of the three-tier CoT system. When V_1 is smaller than V_2, more than half of the applications are allocated to the Fog tier for processing. In such a setting, we pay more attention to the application success rate within the system; in other words, we expect fewer application losses in cloud to occur, so that a better quality of service can be achieved. We also notice that, for a fixed value of V_1, the number of application losses in cloud decreases as V_2 increases. On the contrary, V_1 being greater than V_2 means that a cost-effective service is preferred and the number of application losses is of less concern. As shown in the figure, with a fixed value of V_2, the more money we want to save (by increasing the value of V_1), the more applications are sent directly to the fog node for processing.
[Figure 7 near here: stacked horizontal bars splitting the 300 applications among fog, cloud and loss, for (V1, V2) ∈ {(0.1, 2), (0.1, 10), (10, 0.1), (50, 0.1)}.]

Figure 7: The application offloading among three-tier CoT systems by the unit-slot optimization algorithm
6.6. The Impact of Different Lengths of Time Slot on the Performance of the Proposed Algorithm

Figure 8 depicts how the average response time, average total cost and average number of application losses in cloud of the unit-slot optimization algorithm change as the length of the time slot increases. In this experiment, the length of the time slot is set by the number of applications expected to be processed within it. As shown in the upper part of the figure, the average response time of our proposed algorithm increases almost linearly, since more applications need to be processed while the processing capacity of the system remains the same. The middle part of the figure shows that the average cost follows the same trend, since the more applications need to be processed, the more money has to be spent. The lower part of the figure presents the trend of the average number of application losses in the cloud: with the increase of arrived applications, more of them may be dropped, because a fixed processing capacity at the cloud is provided within each time slot. That is to say, the length of the time slot
[Figure 8 near here: average response time, average cost and average number of application losses versus the number N of applications per time slot (100 to 1000).]

Figure 8: The performance of processing applications by using the unit-slot optimization algorithm
will not affect the delivered QoS, because the performance scales almost linearly with the length of the time slot.

7. Conclusion
The three-tier Cloud of Things is a promising model that, if well exploited, can provide efficient data processing for IoT applications. An increasing number of smart devices are emerging and joining the Internet, giving rise to a huge amount of data. These data have to be processed in either the Fog or the Cloud tier as fast as possible, with reasonable cost and application loss. In this paper, we studied the problem of providing a cost-effective data processing service, and proposed an efficient and effective online algorithm for balancing the three-way tradeoff among application processing time, cost and application loss. Simulation results showed that the proposed algorithm successfully achieves its goals. In the future, we plan to model the entire system with different queueing models, e.g., G/M/1 and G/G/1, for
the performance analysis. We also plan to investigate the uncertainties of the communication among the components within three-tier Cloud of Things systems.

8. Acknowledgment
Dr. Wei Li's work is supported by the Faculty of Engineering and IT Early Career Researcher Scheme, The University of Sydney, and by the Faculty of Engineering & Information Technologies, The University of Sydney, under the Faculty Research Cluster Program. Professor Zomaya's work is supported by an Australian Research Council Discovery Grant (DP130104591). Flavia C. Delicato and Paulo F. Pires are CNPq fellows.

References

[1] L. Atzori, A. Iera, G. Morabito, The internet of things: A survey, Computer Networks 54 (15) (2010) 2787–2805. doi:10.1016/j.comnet.2010.05.010.
[2] M. Aazam, I. Khan, A. A. Alsaffar, E. N. Huh, Cloud of things: Integrating internet of things and cloud computing and the issues involved, in: Proceedings of 2014 11th International Bhurban Conference on Applied Sciences Technology (IBCAST), Islamabad, Pakistan, 14th-18th January 2014, 2014, pp. 414–419. doi:10.1109/IBCAST.2014.6778179.

[3] S. Distefano, G. Merlino, A. Puliafito, Enabling the cloud of things, in: Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2012 Sixth International Conference on, 2012, pp. 858–863. doi:10.1109/IMIS.2012.61.

[4] M. Aazam, E. N. Huh, Fog computing and smart gateway based communication for cloud of things, in: Future Internet of Things and Cloud (FiCloud), 2014 International Conference on, 2014, pp. 464–470. doi:10.1109/FiCloud.2014.83.

[5] B. Zhang, N. Mor, J. Kolb, D. S. Chan, K. Lutz, E. Allman, J. Wawrzynek, E. Lee, J. Kubiatowicz, The cloud is not enough: Saving IoT from the cloud, in: 7th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 15), USENIX Association, Santa Clara, CA, 2015.
[6] P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber, E. Riviere, Edge-centric computing: Vision and challenges, SIGCOMM Comput. Commun. Rev. 45 (5) (2015) 37–42. doi:10.1145/2831347.2831354.

[7] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, Fog computing and its role in the internet of things, in: Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, MCC '12, ACM, New York, NY, USA, 2012, pp. 13–16. doi:10.1145/2342509.2342513.

[8] L. Wang, G. von Laszewski, A. Younge, X. He, M. Kunze, J. Tao, C. Fu, Cloud computing: a perspective study, New Generation Computing 28 (2) (2010) 137–146. doi:10.1007/s00354-008-0081-5.
[9] H. Khazaei, J. Misic, V. B. Misic, Performance analysis of cloud computing centers using M/G/m/m+r queuing systems, IEEE Transactions on Parallel and Distributed Systems 23 (5) (2012) 936–943. doi:10.1109/TPDS.2011.199.

[10] H. Khazaei, J. Misic, V. B. Misic, A fine-grained performance model of cloud computing centers, IEEE Transactions on Parallel and Distributed Systems 24 (11) (2013) 2138–2147. doi:10.1109/TPDS.2012.280.

[11] M. J. Neely, Stochastic network optimization with application to communication and queueing systems, Synthesis Lectures on Communication Networks 3 (1) (2010) 1–211.
[12] C. M. D. Farias, W. Li, F. C. Delicato, L. Pirmez, A. Y. Zomaya, P. F. Pires, J. N. D. Souza, A systematic review of shared sensor networks, ACM Comput. Surv. 48 (4) (2016) 51:1–51:50. doi:10.1145/2851510.

[13] A. Botta, W. de Donato, V. Persico, A. Pescape, Integration of cloud computing and internet of things: A survey, Future Generation Computer Systems 56 (2016) 684–700. doi:10.1016/j.future.2015.09.021.

[14] S. Deng, L. Huang, J. Taheri, A. Y. Zomaya, Computation offloading for service workflow in mobile cloud computing, IEEE Transactions on Parallel and Distributed Systems 26 (12) (2015) 3317–3329. doi:10.1109/TPDS.2014.2381640.
[15] L. Gu, D. Zeng, S. Guo, A. Barnawi, Y. Xiang, Cost efficient resource management in fog computing supported medical cyber-physical system, IEEE Transactions on Emerging Topics in Computing 5 (1) (2017) 108–119. doi:10.1109/TETC.2015.2508382.
[16] W. Li, I. Santos, F. C. Delicato, P. F. Pires, L. Pirmez, W. Wei, H. Song, A. Zomaya, S. Khan, System modelling and performance evaluation of a three-tier cloud of things, Future Generation Computer Systems (2016). doi:10.1016/j.future.2016.06.019.

[17] S. Sarkar, S. Chatterjee, S. Misra, Assessment of the suitability of fog computing in the context of internet of things, IEEE Transactions on Cloud Computing PP (99) (2015) 1–1. doi:10.1109/TCC.2015.2485206.

[18] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, K. K. Leung, Dynamic service migration and workload scheduling in edge-clouds, Performance Evaluation 91 (2015) 205–228, Special Issue: Performance 2015. doi:10.1016/j.peva.2015.06.013.

[19] D. Zeng, L. Gu, S. Guo, Z. Cheng, S. Yu, Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded system, IEEE Transactions on Computers PP (99) (2016) 1–1. doi:10.1109/TC.2016.2536019.

[20] M. J. Neely, E. Modiano, C. P. Li, Fairness and optimal stochastic control for heterogeneous networks, IEEE/ACM Transactions on Networking 16 (2) (2008) 396–409. doi:10.1109/TNET.2007.900405.
[21] M. J. Neely, Energy optimal control for time-varying wireless networks, IEEE Transactions on Information Theory 52 (7) (2006) 2915–2934. doi:10.1109/TIT.2006.876219. [22] D. Huang, P. Wang, D. Niyato, A dynamic offloading algorithm for mobile computing, IEEE Transactions on Wireless Communications 11 (6) (2012) 1991–1995. doi:10.1109/TWC.2012.041912.110912.
[23] Y. Niu, B. Luo, F. Liu, J. Liu, B. Li, When hybrid cloud meets flash crowd: Towards cost-effective service provisioning, in: 2015 IEEE Conference on Computer Communications (INFOCOM), 2015, pp. 1044–1052. doi:10.1109/INFOCOM.2015.7218477.
[24] S. Li, Y. Zhou, L. Jiao, X. Yan, X. Wang, M. R. T. Lyu, Towards operational cost minimization in hybrid clouds for dynamic resource provisioning with delay-aware optimization, IEEE Transactions on Services Computing 8 (3) (2015) 398–409. doi:10.1109/TSC.2015.2390413. [25] Y. Mao, J. Zhang, K. B. Letaief, Dynamic computation offloading for mobile-edge computing with energy harvesting devices, arXiv preprint arXiv:1605.05488. [26] A. Destounis, G. S. Paschos, I. Koutsopoulos, Streaming big data meets backpressure in distributed network computation, CoRR abs/1601.03876.
[27] Y. Nan, W. Li, W. Bao, F. Delicato, P. Pires, A. Zomaya, Cost-effective processing for delay-sensitive applications in cloud of things systems, in: 2016 IEEE 15th International Symposium on Network Computing and Applications (NCA), 2016. [28] A. Zanella, N. Bui, A. Castellani, L. Vangelista, M. Zorzi, Internet of things for smart cities, IEEE Internet of Things Journal 1 (1) (2014) 22–32. doi:10.1109/JIOT.2014.2306328. [29] K. E. Skouby, P. Lynggaard, Smart home and smart city solutions enabled by 5G, IoT, AAI and CoT services, in: 2014 International Conference on Contemporary Computing and Informatics (IC3I), 2014, pp. 874–878. doi:10.1109/IC3I.2014.7019822.
[30] M. Castro, A. J. Jara, A. F. G. Skarmeta, Smart lighting solutions for smart cities, in: 2013 27th International Conference on Advanced Information Networking and Applications Workshops, 2013, pp. 1374– 1379. doi:10.1109/WAINA.2013.254.
[31] Which IoT applications work best with fog computing? URL http://www.networkworld.com/article/3147085/internet-of-things/which-iot[32] H. Zhang, Y. Wang, Architecture, key technologies and applications of NB-IoT-based fog computing, ZTE Technology Journal 23 (1) (2017) 32–36.
[33] A. Botta, A. Pescapé, IP packet interleaving for UDP bursty losses, Journal of Systems and Software 109 (2015) 177–191. doi:10.1016/j.jss.2015.07.048. URL http://www.sciencedirect.com/science/article/pii/S0164121215001673 [34] P. Bocharov, Queueing Theory, Modern Probability and Statistics, VSP, 2004.
[35] N. Matloff, Introduction to discrete-event simulation and the SimPy language, Dept. of Computer Science, University of California at Davis, Davis, CA, 2008 (retrieved August 2, 2009). [36] MIT Computer Science and Artificial Intelligence Lab: Intel lab sensor data. URL http://db.csail.mit.edu/labdata/labdata.html
Yucen Nan received her B.E. degree in Computer Science Engineering from Southwest Jiaotong University, Chengdu, China, in 2015. She is currently a first-year Master of Philosophy student in the Centre for Distributed and High Performance Computing, School of Information Technologies, The University of Sydney. Her current research interests are in the areas of cloud computing, the Internet of Things and optimization.
Wei Li received his Ph.D. degree from the School of Information Technologies at the University of Sydney in 2012. He is currently a research associate in the Centre for Distributed and High Performance Computing, The University of Sydney. His current research interests include wireless sensor networks, the Internet of Things, task scheduling, nature-inspired algorithms, and optimization. He is a senior member of the IEEE and the ACM.
Wei Bao received his B.E. degree in Communications Engineering from Beijing University of Posts and Telecommunications, Beijing, China, in 2009; his M.A.Sc. degree in Electrical and Computer Engineering from the University of British Columbia, Vancouver, Canada, in 2011; and his Ph.D. degree in Electrical and Computer Engineering from the University of Toronto, Toronto, Canada, in 2016. He is currently a lecturer at the School of Information Technologies, University of Sydney, Sydney, Australia. His current research interests are in networked systems and mobile computing. He received best-paper awards at the ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM) in 2013 and the IEEE International Symposium on Network Computing and Applications (NCA) in 2016.
Flavia C. Delicato received her PhD in electrical and computer engineering (2005) from the Federal University of Rio de Janeiro. She is an associate professor of computer science at the Federal University of Rio de Janeiro, Brazil. She is the author of more than 100 papers and participates in several research projects funded by international and Brazilian government agencies. Her research interests include wireless sensor networks, the Internet of Things, adaptive systems, middleware, and software engineering techniques applied to ubiquitous systems. She is a level 1 Research Fellow of the Brazilian National Council for Scientific and Technological Development (CNPq).
Paulo F. Pires (PhD 2002) is an Associate Professor at the Department of Computer Science at Federal University of Rio de Janeiro (UFRJ), Brazil. Dr. Pires leads the Ubicomp laboratory at UFRJ, where he coordinates research on cyber-physical systems and on software engineering techniques applied to the development of complex distributed systems. Dr. Pires has coordinated several cooperative industry–university projects as well as research projects in those areas, with funding from several international and Brazilian government agencies, including CNPq, CAPES, FINEP, FAPERJ, Microsoft, Fundación Carolina (Spain), and the Australian Government (Australian Leadership Awards - ALA). He has published over 100 refereed international journal articles, conference papers and book chapters. He is currently the Editor-in-Chief of the International Journal of Semantic and Infrastructure Services. He has held a technological innovation productivity scholarship from the Brazilian Research Council (CNPq) since 2010 and is a member of the Brazilian Computer Society (SBC).
Albert Y. Zomaya is the Chair Professor of High Performance Computing & Networking in the School of Information Technologies, University of Sydney, and he also serves as the Director of the Centre for Distributed and High Performance Computing. Professor Zomaya has published more than 600 scientific papers and articles and is author, co-author or editor of more than 20 books. He is the Founding Editor-in-Chief of the IEEE Transactions on Sustainable Computing and serves as an associate editor for more than 20 leading journals. He served as Editor-in-Chief of the IEEE Transactions on Computers (2011–2014). Professor Zomaya is the recipient of the IEEE Technical Committee on Parallel Processing Outstanding Service Award (2011), the IEEE Technical Committee on Scalable Computing Medal for Excellence in Scalable Computing (2011), and the IEEE Computer Society Technical Achievement Award (2014). He is a Chartered Engineer and a Fellow of the AAAS, IEEE, and IET. His research interests are in the areas of parallel and distributed computing and complex systems.
Highlights of the paper A Dynamic Tradeoff Data Processing Framework for Delay-sensitive Applications in Cloud of Things Systems 1) We develop an online decision-making framework for three-tier Cloud of Things systems that provides cost-effective processing, in terms of response time, cost and application loss, for delay-sensitive applications. 2) The framework can dynamically adjust the penalty terms according to short-term changes while still guaranteeing that the long-term optimal system performance is achieved. 3) Results from our extensive simulations show that the proposed algorithm outperforms two benchmark algorithms. Furthermore, we can vary the penalty term to observe the corresponding changes in average response time, average cost and average application loss.