Future Generation Computer Systems 107 (2020) 498–508
EdgeABC: An architecture for task offloading and resource allocation in the Internet of Things
Kaile Xiao a, Zhipeng Gao a,∗, Weisong Shi b, Xuesong Qiu a, Yang Yang a, Lanlan Rui a
a State Key Laboratory of Networking & Switching Technology, Beijing University of Posts and Telecommunications, China
b Department of Computer Science, Wayne State University, Detroit, MI, USA
Article history: Received 6 September 2019; Received in revised form 30 December 2019; Accepted 7 February 2020; Available online 14 February 2020.
Keywords: Internet of Things; Edge computing; Blockchain; Resource allocation; Fairness; Profits of service providers.
Abstract: The evolving Internet of Things (IoT) faces considerable challenges in terms of the sensitive delay requirements of tasks, the reasonable allocation of resources, and the reliability of resource transactions. Considering these problems, we propose an emerging IoT architecture, EdgeABC, which introduces the blockchain to ensure the integrity of resource transaction data and the profits of service providers, and we also propose a Task Offloading and Resource Allocation (TO-RA) algorithm, implemented on the blockchain in the form of smart contracts. In other words, the proposed architecture optimizes resource allocation in the IoT based on the advantages of the blockchain. Specifically, we first propose a subtask-virtual machine mapping strategy to complete the Task Offloading (TO) and the first allocation of resources; then, aiming at the possible load imbalance of the system, we propose a stack cache supplement mechanism to complete the Resource Allocation (RA) based on the TO strategy. Finally, simulation experiments verify that the TO-RA algorithm is superior to traditional algorithms in fairness, user satisfaction, and system utility. © 2020 Elsevier B.V. All rights reserved.
1. Introduction

Nowadays, most smart devices have many built-in sensors that enable them to detect information about their surroundings in real time. These Internet-connected devices are called the Internet of Things (IoT) [1]. In order to achieve the specific goals of a system, IoT devices (such as sensors, mobile phones, laptops, smart cars, and industrial modules) can communicate with each other and complete information interaction in the smart home, smart transportation, smart building [2], etc., and their tasks are analyzed and processed by processors. The results of data processing can give us much information. The IoT provides us with conveniences and changes the way we live and work. However, the evolving IoT faces enormous challenges in terms of the sensitive delay requirements of tasks, the reasonable allocation of resources, and the reliability of resource transactions. Firstly, in the IoT, tasks have higher and higher demands on delay. The tasks of many devices must be processed within a specified time to avoid adverse consequences. In some exceptional cases (such as autonomous driving, smart factories, and so on), if the delay requirement of task processing is not met, an accident may occur.
[email protected] (Z. Gao). https://doi.org/10.1016/j.future.2020.02.026 0167-739X/© 2020 Elsevier B.V. All rights reserved.
Due to the weak processing power of IoT devices, a large amount of the data generated by the devices may need to be transmitted to the remote cloud for processing via the Internet [3]. Nevertheless, the existing centralized cloud computing model struggles to meet the sensitive delay requirements of tasks, because the amount of data is large and the distance between the IoT device and the cloud is long. Secondly, each request from an IoT device generates a processing record. The sequence from the request of the device to the feedback from the server is defined as a resource transaction. Completion of the resource transaction is related to resource allocation, the processing quality of tasks, the satisfaction of users, and the profits of service providers. The actual resource transaction is generally stored in a centralized database, which may be attacked and tampered with, directly leading to the loss of the service provider's profits. Thirdly, the servers (the offloading destinations of the IoT devices' tasks, including cloud servers and others) contain abundant computing and storage resources, so they have excellent capabilities to process tasks. However, if the resources of the processor are not well utilized (the usage rate is too high or too low), this may directly affect the processing quality of the task and even affect the stability of the entire system. That is to say, in the IoT system, the tasks generated by the devices need to be offloaded, the resources of the server need to be allocated reasonably, and the profits of the service providers need to be guaranteed. Considering the above issues, we try the following solutions.
We propose a three-tier IoT architecture that takes into account the sensitive requirements of the various roles in the IoT. First, considering the sensitive requirements of the device role (the first layer) and the service provider role (the second layer), we introduce edge computing [4,5] and the blockchain architecture [6] to solve the delay-sensitivity problem of tasks and ensure the profits of service providers. Also, considering the sensitive requirements of the edge computing server role (the third layer), we implement the TO-RA algorithm in the second layer in the form of a smart contract to control and allocate the resources of the edge computing servers. In particular, we propose a stack cache supplement mechanism to improve the fairness of resource usage among edge computing servers. The main contributions of this paper are:
• We build a three-layer IoT architecture, EdgeABC, by using blockchain at the agent controller layer, which inherits the tamper-proof advantage of the blockchain and ensures the integrity of resource transaction data. Furthermore, in this layer, the proposed TO-RA algorithm is implemented in the form of smart contracts.
• We divide the task into multiple subtasks based on the application workflow, virtualize the resources of the edge computing servers, and simplify the task offloading problem of smart devices into a subtask-VM mapping problem.
• In order to solve the delay problem that the traditional blockchain may bring, we optimize the proposed TO strategy and propose a stack cache supplement mechanism to improve resource allocation, system utility, and fairness among edge computing servers.

The rest of this paper is organized as follows: in the next section, we discuss related work. In Section 3, we present the proposed three-layer system architecture EdgeABC. In Section 4, we describe the basic system model, including the communication model and the computation model. Section 5 describes the proposed TO-RA algorithm, including the task offloading of smart devices and the resource allocation of edge computing servers. Experiments and analysis are shown in Section 6, and conclusions are in Section 7.

2. Related work

The IoT provides connectivity for anyone at any time and place to anything at any time and place [7]. Regarding IoT architecture, [8] proposes a smart-city IoT architecture that integrates multiple information and communication technologies and IoT solutions to improve the quality of life of citizens, covering smart business, smart education, smart economy, and more. Considering the challenges brought by cloud security, [9] proposes an implementation method based on the interaction of private and public clouds in the IoT. This work is further extended in [10].
In the IoT, [10] summarizes fog computing as an emerging architecture for computing, storage, control, and networking. Besides edge computing and the IoT, many papers also consider the combination of edge computing and blockchain. The blockchain has attracted more and more attention because of its unique properties, such as tamper-proofness, effectiveness, and traceability [11]. [12] proposes an economical approach to edge computing resource management and a prototype blockchain system. [13] constructs a multi-layer neural network based on the optimal auction analytical solution to optimize the loss function of the expected profits of the edge computing service provider. Similarly, [14] also proposes an auction-based resource
market for service providers in edge computing, which maximizes social welfare while ensuring truthfulness, individual rationality, and computational efficiency. Besides, many papers study the problem of task offloading and resource allocation in the IoT or edge computing. [15] uses biologically inspired cognitive or intelligent models to optimize task scheduling for IoT applications in a heterogeneous multiprocessor cloud. Their research is further extended in [16]. In order to handle the resource requirements and the mobility of devices, [16] proposes an autonomous management framework based on deep Q-learning; furthermore, they allow agents to learn from the environment. In terms of resource allocation, [17] considers the heterogeneity of resources and the challenges brought by the requests of big data applications, and proposes a mobility-based resource allocation architecture, Mobi-Het. In the field of edge computing, in order to minimize mobile energy consumption, [18] proposes a dynamic offloading algorithm based on the Lyapunov optimization technique. Moreover, [19] finds that the monotonic properties of the CPU-cycle frequencies are related to the battery energy level, and provides a viable approach to designing edge computing systems with renewable energy devices. Based on the theory of Markov decision processes, [20] designs a delay-optimal task scheduling strategy, which controls the state of the local processing and transmission units based on the task buffer queue length and the channel state. These excellent works have studied the IoT, blockchain, and edge computing. However, the problems of how to accomplish task offloading of smart devices, ensure the profits of the service providers, and optimize the resource allocation of edge computing servers in an IoT architecture have not been well studied, which prompts us to build a new framework to meet the sensitive requirements of each role in the IoT.

3. The proposed EdgeABC architecture

Now "Internet of Everything" is not just a vision.
The critical development of the IoT prompts us to face a variety of challenges and take action [21]. Based on the research on related work, we find that the existing architectures cannot meet some requirements, including the sensitive delay of tasks, the reliability of transaction data, and the rationality of resource allocation. Therefore, we propose a new three-layer architecture to solve these problems. Firstly, by introducing edge computing into the IoT, we can solve the problem that tasks are not processed in a timely manner in clouds. In the IoT, the tasks of smart devices are initiated on the edge side, and the edge computing server (as task processor) can generate faster service responses, meeting the basic requirements of real-time performance. Edge computing can bring more computing, networking, and storage resources close to IoT smart devices. Moreover, it has advantages in response speed, data processing and pre-processing, reducing backbone traffic, and extending edge intelligence. Secondly, the blockchain architecture brings some inspiration for transaction reliability in the IoT. Blockchain has now been introduced in various fields [22], such as Bitcoin, smart grid power systems, and the financial industry. The direct reason for the increasing interest in blockchain is that its natural advantages can guarantee the reliability of data. Furthermore, resource transaction records (the resource usage situation) are directly related to the profits of the service provider, so ensuring the integrity of transaction data is critical. Another reason is that the completion of a transaction currently relies on a third party to create trust, and the blockchain can complete the transaction without third-party involvement. Today, we can replace existing systems with blockchain, eliminating reliance on third parties and making transactions faster and safer.
Fig. 1. The proposed EdgeABC architecture.
To this end, we propose an architecture called EdgeABC (Edge computing And BlockChain), which includes three layers, "IoT Device–Agent Controller–Edge Computing Server", as shown in Fig. 1. At the first layer, smart devices collect environmental parameters and information through various sensors, then generate data and tasks, and send task requirements to the upper layer. At the second layer, after receiving the processing requirements and parameters of the task (such as task size, device movement, next destination of the mobile device, etc.), the agent assigns the task according to the proposed task offloading and resource allocation (TO-RA) strategy, which will be described in Section 5. We believe that each agent covers a small connected community and is responsible for timely event detection, task scheduling, and service delivery. At the same time, in order to ensure the integrity of the resource transaction data and the profits of the service provider, we also build a distributed agent controller structure based on the blockchain, which will be described in Section 3.2. After the agent determines the task allocation plan, the solution is sent to the smart device and the corresponding edge computing server (in the third layer). In a single cycle, all servers may work simultaneously to meet the delay requirements of the tasks. Like agents, each edge computing server also covers a small connected community and can analyze data and provide services quickly. Below we will describe each layer in detail.

3.1. IoT smart device layer

In the IoT, the smart device plays an important role. There are many IoT application scenarios, including the following four typical applications [23,24]:
• Smart Healthcare: most computing tasks in healthcare can benefit from edge computing. In smart healthcare, the IoT can provide excellent mobility for devices, and edge computing can provide quick feedback for the requests from devices.
• Smart Home: the data generated by various household appliances need to be processed. Introducing edge computing into the IoT can provide a framework for smart home management.
• Smart Building: it makes full use of the intelligent management methods created by the Internet and the IoT and integrates video surveillance, intrusion alarm, environmental monitoring, and other subsystems, so that decision-makers can see everything at a glance.
• Smart Factory: it uses the IoT and monitoring technology to strengthen information management and service. A smart factory can clearly grasp the production and sales process, improve the controllability of the production process, reduce manual intervention on the production line, and collect production-line data timely and correctly.

3.2. Distributed agent controller architecture based on blockchain

Transaction data is traditionally stored in a centralized architecture: if there is a problem at the transaction center, the related transaction process will be interrupted. The blockchain system solves this problem: the failure of any single node will not affect the regular operation of the entire blockchain system. Moreover, the blockchain can complete a transaction without depending on a third party. By eliminating dependence on third parties, the blockchain architecture enables transactions to be completed faster. In traditional blockchains, the timestamp is essential to mark every transaction. However, if the agent controller also allocated tasks according to the timestamp, many delay-sensitive tasks could not be completed on time, which would seriously affect the user's quality of experience. Therefore, based on the blockchain architecture, we propose a new task offloading and resource allocation algorithm, TO-RA, in the form of a smart contract at the agent controller level. This algorithm overcomes the shortcoming of using only timestamps to order transactions in traditional blockchain architectures and ensures that delay-sensitive tasks receive effective feedback in time, as will be described in Section 5.
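To picture the difference between plain timestamp ordering and the deadline-aware ordering that TO-RA aims for, the following Python sketch schedules tasks by their maximum allowable delay rather than their arrival time. The `Task` structure and its fields are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    deadline: float                          # maximum allowable delay (priority key)
    timestamp: float = field(compare=False)  # arrival time, as a plain blockchain would order by
    name: str = field(compare=False, default="")

def deadline_aware_order(tasks):
    """Pop tasks by earliest deadline rather than by arrival timestamp."""
    heap = list(tasks)
    heapq.heapify(heap)
    return [heapq.heappop(heap).name for _ in range(len(heap))]

tasks = [Task(deadline=5.0, timestamp=1.0, name="video-upload"),
         Task(deadline=0.2, timestamp=2.0, name="collision-alert"),
         Task(deadline=1.0, timestamp=3.0, name="sensor-batch")]
print(deadline_aware_order(tasks))  # the delay-sensitive task is served first
```

A timestamp-only ordering would serve "video-upload" first even though "collision-alert" has the tightest deadline; the deadline key inverts that.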
Blockchains (such as Ethereum) generally rely on a consensus protocol (proof of work, proof of stake, etc.) to reach an agreement, spontaneously and honestly abide by the pre-established rules of the protocol, and judge the authenticity of
each record. Moreover, in the blockchain-based agent controller architecture, the contributions of an agent are actions that occur outside of the blockchain (request responses, resource allocation, etc.), which means that another protocol is needed to prove the occurrence and existence of contributions. We introduce "Proof of Service" [25] as the consensus protocol, which is inspired by two typical consensus protocols, proof of work and proof of stake, to ensure that legitimate participants control most resources, such as computing resources. Next, we explain how to ensure data reliability. In our model, the agent controllers belong to the same company; therefore, they are equal and trust each other. The agent controllers encrypt the resource transaction data, store it in distributed databases outside the blockchain in the form of a hash table, and store the hash pointers in the blocks. We assume that agents can manage keys securely. We then consider the data reliability of the proposed architecture in the following cases:
Case 1: The proposed architecture is based on blockchain technology and inherits the advantages of the blockchain's decentralization and tamper-proofness [6,11,22]. It ensures that malicious nodes cannot impersonate legitimate nodes unless they control most of the network resources [26].
Case 2: Malicious nodes cannot obtain resource transaction data from the ledger, because only the hash pointers are stored in the ledger.
Case 3: Even if malicious nodes get the stored data from the distributed databases, they still cannot tamper with the original data because they do not have the key.
Case 4: We also consider the worst case. If a malicious node gets an agent's key and digital signature, only a small part of the data is damaged, because we can mitigate this with dynamic keys; for example, every 50 transaction records are encrypted with a different key.
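The storage scheme described above — encrypted records off-chain, only hash pointers on-chain, with key rotation every 50 records — can be sketched as follows. The `AgentLedger` class and its toy keystream cipher are illustrative assumptions for exposition, not production cryptography and not the paper's implementation:

```python
import hashlib
import json
import os

KEY_ROTATION = 50  # per the text: a fresh key for every 50 transaction records

def encrypt(record: bytes, key: bytes) -> bytes:
    """Toy XOR keystream cipher (involutive) -- for illustration only."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(record):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(record, stream))

class AgentLedger:
    def __init__(self):
        self.off_chain = {}  # hash pointer -> encrypted record (stand-in for the distributed DB)
        self.chain = []      # blocks hold only the hash pointers
        self.keys = {}       # epoch -> key (assumed to be managed securely by the agent)

    def _key_for(self, index: int) -> bytes:
        epoch = index // KEY_ROTATION
        if epoch not in self.keys:
            self.keys[epoch] = os.urandom(32)
        return self.keys[epoch]

    def register(self, record: dict) -> str:
        raw = json.dumps(record, sort_keys=True).encode()
        blob = encrypt(raw, self._key_for(len(self.chain)))
        pointer = hashlib.sha256(blob).hexdigest()
        self.off_chain[pointer] = blob
        self.chain.append(pointer)  # only the pointer goes on-chain (Case 2)
        return pointer

ledger = AgentLedger()
p = ledger.register({"device": "d1", "vm": "vm3", "tokens": 2})
assert p in ledger.off_chain and ledger.chain[-1] == p
```

A node that steals one epoch's key can read at most `KEY_ROTATION` records (Case 4), and a node holding only the ledger sees nothing but hash pointers (Case 2).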
Therefore, in addition to inheriting the advantages of the blockchain, we also provide tamper-proof protections for the resource transaction data to ensure its reliability. The architecture also helps to improve the transparency of the model by rewarding the agents (to be considered in Eq. (18)), making each agent's contribution visible.

3.3. Hierarchical edge computing servers

The IoT smart devices that generate data and tasks also have some computing and storage capabilities. Therefore, for some small tasks, we can try to process them on the smart device, which is called local computing. Most smart devices have the ability to process locally. This change of role from "data generator" to "data processor" and "data consumer" is also a remarkable feature of the new-generation IoT. However, the computing and storage resources of smart devices are limited, and it is impossible to perform high-complexity computing tasks for a long time. Besides, smart devices usually have other essential work, and additional task processing may affect the performance of the device. Therefore, after judging the overall situation, the tasks of the smart devices can be selected to offload to resource-rich servers. In other words, the existence of edge computing servers (the third layer) is also reasonable. In the third layer, we take full account of the cost of the service provider. Because resource requirements differ from region to region, deploying the same capacity and number of edge computing servers in each community may increase the cost of service providers and leave many resources idle. Therefore, deploying servers with different capacities based on resource requirements is an excellent way to reduce the costs of the service provider. We believe that edge computing servers are
Table 1
Symbols used in the model.

Symbol      Description
M           The number of subtasks
N           The number of VMs
K           The number of servers
bm,n(t)     Whether subtask m is offloaded or not
ym,n(t)     Whether subtask m is offloaded to VM n or not (if m is offloaded)
an,k        Whether VM n belongs to server k or not
Ssub        The size of subtask m
Rsub        The amount of resources required for subtask m
Usub(t)     The average utility of subtasks
Rl(t)       The amount of resources available locally
Tl(t)       Time when the subtask is executed locally
El(t)       Energy consumption of subtask m when processed locally
Ttotal(t)   The total time when subtask m is offloaded to VM n for processing
Ttr(t)      Transfer time of subtask m to VM n
Texe(t)     Execution time of subtask m on VM n
Etr(t)      The energy consumed by the smart device to transmit data
vtr(t)      Transfer rate of subtask m to VM n
Rk          Total amount of resources of server k
Rn          The amount of resources of VM n belonging to server k
Tnmax       Maximum allowable delay that VM n can process (abstract meaning)
Tmmax       Maximum allowable delay of subtask m
divided into three categories based on resources and processing capability: large-size servers, medium-size servers, and small-size servers. At the same time, we virtualize the resources of the edge computing server and assume that each server can be divided into multiple VMs that share the resources of the same server. Each VM has some computing resources, storage resources, and so on.

3.4. How the proposed system and architecture work

The working process of the system and architecture is shown in Fig. 2. In EdgeABC, it can be summarized as the following four steps: receiving task requirements from devices, selecting servers, registering transactions, and paying. In the first step, the agent controller stays active to receive task requirements from the smart devices in the community. Once the agent receives a task request, the second step is taken immediately. Generally, agents receive multiple tasks at the same time. In the second step, the agent allocates the tasks and provides the necessary services to the users: the agent obtains the mapping result between subtasks and VMs by using the TO-RA algorithm and notifies the corresponding servers to receive the specific tasks. At the same time, the transaction record is frozen in the mempool. After receiving the command of the agent, the servers start processing tasks and return the results to the smart devices. After completing task processing, the agents register the transaction in the form of a block and share transactions with all agent controller nodes. In this process, transaction information is recorded in the blockchain. After the transaction is completed, the user of the smart device pays with virtual tokens. Then the records in the mempool are also updated.

4. Basic models and solution

Before describing how the agent performs the task offloading and resource allocation plan, we need to build some basic models to describe the problem, including the communication model, the computing model, and the system utility model. Table 1 summarizes the symbols and descriptions used in our models.
Fig. 2. The working process of the system in EdgeABC.
4.1. The model of communication In the communication model, we consider noise and interference. The data transmission rate vtr (t) of the task is as shown in Eq. (1).
vtr(t) = ωm,n(t) log(1 + ptr(t)gm,n(t) / (L(t) + No))    (1)
where ωm,n (t) is the bandwidth, ptr (t) is the transmission power, L(t) is the ground disturbance, No is the noise power, and gm,n (t) is the channel gain between the subtask m and the VM n, and gm,n (t) = dm,n (t)−a , where dm,n (t) is the physical distance between the subtask m and the VM n, and a is a fixed parameter.
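To make Eq. (1) concrete, the following Python sketch evaluates the transmission rate for illustrative, made-up parameter values (not from the paper's experiments); the channel gain follows the definition gm,n(t) = d^(−a) above, and the logarithm is assumed to be base 2, as is usual for Shannon-type rate formulas:

```python
import math

def channel_gain(distance_m: float, a: float) -> float:
    """Channel gain g_{m,n}(t) = d^(-a), where a is the fixed path-loss parameter."""
    return distance_m ** (-a)

def transmission_rate(bandwidth_hz: float, tx_power_w: float,
                      gain: float, interference_w: float, noise_w: float) -> float:
    """Eq. (1): v_tr(t) = w_{m,n}(t) * log(1 + p_tr(t)*g_{m,n}(t) / (L(t) + N_o))."""
    sinr = tx_power_w * gain / (interference_w + noise_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

# Illustrative values (assumptions, not from the paper)
g = channel_gain(distance_m=50.0, a=2.0)
rate = transmission_rate(bandwidth_hz=10e6, tx_power_w=0.5, gain=g,
                         interference_w=1e-9, noise_w=1e-9)
print(f"transmission rate = {rate / 1e6:.1f} Mbit/s")
```

Note that increasing the ground disturbance L(t) lowers the SINR and hence the achievable rate, which is why the transfer time Ttr(t) below depends on the channel condition.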
4.2. The model of computing

In this section, we introduce the computing model. After completing the splitting of the task, the size of each subtask can be calculated. Moreover, we assume that subtasks are inseparable: a subtask can only be processed locally or by an edge computing server. The basic parameters of a subtask Nsub are its size and the resources needed to handle it, so we can describe a subtask with the 2-tuple Nsub = (Ssub, Rsub). We discuss two cases: local processing and processing by an edge computing server.

4.2.1. Local processing
If the subtask m is directly processed on the smart device, the time and energy consumption of the subtask can be obtained according to the capacity of the smart device's processor. The equations are as follows:

Tl(t) = (1 − bm,n(t)) Rsub / Rl(t)    (2)

El(t) = el(t)Tl(t) = (1 − bm,n(t)) el(t)Rsub / Rl(t)    (3)

where el(t) is the energy consumption of the smart device in each CPU cycle, and el(t) = 10^(−11) Rl(t).

4.2.2. Processing by VM in edge computing server
On the edge computing server, the delay and energy consumption mainly come from the transmission and execution of the subtask m. After virtualizing the resources of the edge computing server, we consider the resources of VM n as:

Rn = Rk / ∑_{n=1}^{N} an,k    (4)

The total processing time of the subtask m includes the transmission time, the execution time, and the time to feed the result back from the server to the smart device. Usually, the result data is much smaller than the raw data, so in order to simplify the model, the feedback time of the result is not considered. Therefore, the total time of processing subtask m is:

Ttotal(t) = Ttr(t) + Texe(t)    (5)

The transmission time of the subtask m is determined by the channel condition and the size of the subtask, and it can be expressed as:

Ttr(t) = bm,n(t) Ssub / vtr(t)    (6)

The execution time of subtask m depends on the processing capacity of the VM n, and it can be expressed as:

Texe(t) = bm,n(t) Rsub / Rn = bm,n(t) Rsub ∑_{n=1}^{N} an,k / Rk    (7)

If the subtask m is processed on the VM n, the energy consumption of the smart device is generated by transmitting data, and it can be expressed as:

Etr(t) = ptr(t)Ttr(t)    (8)
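As a sanity check on Eqs. (2)–(8), the following Python sketch compares local execution against offloading for one subtask. All numeric values are made-up assumptions for illustration, and the VM resources follow the even-sharing reading of Eq. (4):

```python
def local_time(r_sub: float, r_local: float) -> float:
    """Eq. (2) with b_{m,n}(t) = 0: T_l = R_sub / R_l."""
    return r_sub / r_local

def local_energy(r_sub: float, r_local: float) -> float:
    """Eq. (3): E_l = e_l * T_l, with e_l = 1e-11 * R_l per CPU cycle."""
    e_l = 1e-11 * r_local
    return e_l * local_time(r_sub, r_local)

def vm_resources(r_server: float, vms_on_server: int) -> float:
    """Eq. (4): the server's resources R_k shared among its VMs."""
    return r_server / vms_on_server

def offload_time(s_sub: float, v_tr: float, r_sub: float, r_vm: float) -> float:
    """Eqs. (5)-(7) with b_{m,n}(t) = 1: T_total = S_sub/v_tr + R_sub/R_n."""
    return s_sub / v_tr + r_sub / r_vm

def offload_energy(p_tr: float, s_sub: float, v_tr: float) -> float:
    """Eq. (8): E_tr = p_tr * T_tr."""
    return p_tr * s_sub / v_tr

# Illustrative subtask: 8 Mbit of data, 1e9 cycles of work (assumed values)
t_local = local_time(1e9, 1e8)                  # weak device CPU: 1e8 cycles/s
t_off = offload_time(8e6, 1e8, 1e9, vm_resources(4e10, 4))
print(t_local, t_off)   # offloading wins when t_off < t_local
```

With these numbers the transfer takes 0.08 s and the VM execution 0.1 s, far below the 10 s local time, so the offloading conditions in Section 4.3 would be satisfied.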
4.3. System utility

We assume that the subtask is offloaded to the VM for execution when the following conditions are met:

Ttotal(t) ≤ Tl(t),  Etr(t) ≤ El(t)    (9)

According to [27], we consider time consumption and energy consumption separately, and we have:

λT(t) = (Tl(t) − Ttotal(t)) / Tl(t)    (10)

λE(t) = (El(t) − Etr(t)) / El(t)    (11)
According to the above equations, if Tl(t) − Ttotal(t) > 0 and El(t) − Etr(t) > 0, the subtask is more suitable for execution on the server. We define the adjustment function D(t) according to the condition of the device, and it can be expressed as:

D(t) = αλT(t) + βλE(t)    (12)
In Eq. (12), α is the weight of delay and β is the weight of energy consumption, with α, β ∈ [0, 1]. The values of α and β can be adjusted based on the state of the smart device. For example, if the smart device is using augmented reality technology, requiring tasks to be processed in real time, then α > β; if the smart device is not connected to the power supply, and the
According to Fig. 3, the agent layer can store subtask code, all VM addresses, and idle VM address, where the subtask code is related to the content in the virtual subtask pool. We first assume that there are subtask pools to store subtasks. According to the different allowable delay of a subtask, we put the subtasks into the different subtask pool. Moreover, the number of subtasks that can be stored in each task pool is 2N. However, the storage of subtasks requires a large number of storage resources, so we set the subtask pool to be virtual, called virtual subtask pools in Fig. 3. Specifically, at the agent controller level, the agent is responsible for encoding subtasks based on some parameters, including subtask size and delay. And then, the agent gives each subtask a serial number based on the parameters. The subtask serial number and the corresponding subtask parameters are combined as a subtask code, stored in the subtask code storage. The agent performs the subtask offloading strategy in the order of subtask serial number. We believe that each subtask has a maximum allowable delay Tmmax , and the total Tmmax of all subtasks cannot exceed the maximum allowable delay Ttask of the task, as shown in Eq. (15). Fig. 3. Storages at the agent controller layer.
Ttask ≥
M ∑
Tmmax
(15)
m=1
real-time requirements of the running application are not high, then α < β . Also, when α = β , we believe that the energy and delay have the same urgent needs. In the research, we found that the task in this state (when α = β ) accounts for the largest proportion. According to [28], we define D(t) value of this state as the average subtasks utility Usub (t). Usub (t) = αλT (t) + βλE (t) (α = β )
(13)
The state of a subtask may not indicate the state of the system. We define system utility in Eq. (14). Usys (t) =
M N ∑ ∑
Usub (t)ym,n (t)
(14)
m=1 n=1
In Eq. (14), the system utility is the sum of the state of task processing in all virtual subtask pools for a certain period.
Subtasks can be executed locally (at smart devices) or offloaded to the edge computing servers for processing. By dividing tasks and virtualizing the resources of the edge computing servers, we simplify the task offloading model: the new task offloading strategy finds a mapping set between subtasks and edge computing nodes (including some smart devices with computing capability and the servers) and realizes the mapping between subtasks and offloading nodes to complete subtask offloading. In our task offloading strategy, we use the maximum system utility as the mapping condition. It can be expressed as:

F:  max Σ_{m=1}^{M} Σ_{n=1}^{N} U_sub(t) y_{m,n}(t)    (15)

s.t.  y_{m,n}(t) ∈ {0, 1}, ∀m, n
      Σ_{n=1}^{N} y_{m,n}(t) = 1, ∀m
      1 ≤ Σ_{m=1}^{M} y_{m,n}(t) ≤ M, ∀n    (16)

The second restrictive condition indicates that a subtask can only be processed by one VM in the same cycle. The third restrictive condition indicates that a VM can process multiple subtasks and processes at least one subtask in the same cycle. Our mapping model is a mapping problem between the elements of two sets; therefore, we transform it into the optimal matching problem of a bipartite graph. A standard solving method is the Kuhn–Munkres (KM) algorithm. The traditional KM algorithm requires the numbers of elements in the two sets to be equal. To make the KM algorithm applicable, we introduce virtual elements as virtual nodes into the set with fewer elements. In our algorithm, the number of subtasks in a single cycle is larger than the number of VMs; therefore, we supplement virtual nodes in the set of VMs until the numbers of subtasks and VMs are equal in one cycle. Through this transformation, the improved KM algorithm can be used to solve the proposed problem.

5. Task offloading (TO) - resource allocation (RA) algorithm

In this section, based on the basic model, we describe how the agent controller layer with the TO-RA algorithm minimizes the processing time of tasks (Section 5.1) and maximizes system utility (Section 5.2), and then give a solution. In the agent controller layer of EdgeABC, we implement the proposed TO-RA algorithm in the form of a smart contract. The main job of the agent controller is to allocate the resources of the edge computing servers. In one unit cycle, the servers complete task processing, and the agent determines which tasks will be processed in the next cycle. We refer to the subtask-VM mapping strategy (for task offloading) together with the stack cache supplement mechanism (for resource allocation) as the TO-RA algorithm. In the TO-RA algorithm, a node finds n augmentation paths; in a specific augmentation path, the top mark changes at most n times, and the slack of the top mark is modified n^2 times. Therefore, the maximum time complexity of the TO-RA algorithm is O(n^4).
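As a hedged illustration of the virtual-node transformation described above, the sketch below pads the VM set with zero-utility virtual VMs and brute-forces the maximum-utility assignment. This is a stand-in for the KM algorithm, usable only on tiny instances; all names are ours.

```python
from itertools import permutations

def best_mapping(utility):
    """utility[m][n]: utility of subtask m on VM n, with M >= N.
    Pads the VM set with zero-utility virtual VMs until it is square,
    then brute-forces the maximum-utility one-to-one assignment
    (an illustrative stand-in for the Kuhn-Munkres algorithm)."""
    m_cnt, n_cnt = len(utility), len(utility[0])
    padded = [row + [0] * (m_cnt - n_cnt) for row in utility]
    best, best_val = None, float("-inf")
    for perm in permutations(range(m_cnt)):  # perm[m] = VM for subtask m
        val = sum(padded[m][perm[m]] for m in range(m_cnt))
        if val > best_val:
            best_val, best = val, perm
    # Subtasks matched to virtual VMs (index >= n_cnt) wait for a
    # later cycle; keep only the real assignments.
    real = {m: vm for m, vm in enumerate(best) if vm < n_cnt}
    return real, best_val

mapping, total = best_mapping([[4, 1], [2, 3], [5, 2]])
print(mapping, total)  # {1: 1, 2: 0} 8
```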
5.1. Task offloading (TO) strategy and solution

In this section, we propose a subtask-VM mapping strategy to complete the task offloading, which significantly simplifies the task offloading model.

5.2. Resource allocation (RA) mechanism and solution
In Section 5.1, the agent obtains the mapping relationship between the subtask and the VM through the task offloading
strategy. Before starting to process the tasks of the next cycle, the system needs to finish all the subtasks of the current cycle. This plan may leave the resources of many VMs idle for a long time in a particular cycle; therefore, fairness between VMs may be poor, and the system may not achieve high concurrency and efficiency. In Fig. 4, the left part of the figure is the result of the subtask-VM mapping strategy. After finishing the mapping between subtasks and VM addresses, the agent controller checks and judges the state of the current VMs. If more than 50% of the VMs are in long-term starvation, the supplement mechanism is enabled. As shown in Fig. 4, a small yellow triangle indicates that this VM address will be pushed into the stack cache. We set up the stack cache to store the addresses of the hungry VMs, and then use the supplemental mechanism to allocate other subtasks to these VMs in the unit cycle by the subtask coding index. After the first subtask allocation based on the task offloading strategy, the agent detects the remaining resources and sorts the idle VM addresses accordingly: the VM address with the least remaining resources is placed at the bottom of the stack cache, and the VM address with the most remaining resources is placed at the top. The purpose of the stack is to allow VMs with more resources to enter the next round of task processing earlier. In the supplementary strategy, the VM whose address is at the top of the stack cache obtains a supplementary subtask in the next cycle first. The sum of the maximum allowable delays T^max_{m',n}(t) of the supplemental subtasks and the maximum allowable delay T_exe(t) of the subtask allocated first is approximately equal to the maximum delay (in the abstract sense) T^max_n of the tasks that the VM can process. It can be described as:
Σ_{m'∈M} T^max_{m',n}(t) = T^max_n − T_exe(t)    (17)
where m' represents a supplemental subtask and T^max_{m',n}(t) represents the maximum time consumption of the supplemental subtask. Subtasks that satisfy Eq. (17) are allocated to idle VMs for processing. Through this supplementary mechanism, the VMs with more remaining resources in a single cycle are re-allocated, which makes resource usage fuller and task processing faster. The procedure is shown in Algorithm 1.

Algorithm 1 Stack cache supplementary mechanism for the subtask-VM mapping strategy
1: initialize the record of the selected matrix H
2: for m = 1 to M do
3:    get the utility Usys(t) of each subtask on each VM
4: end for
5: add virtual VMs and update Usys(t)
6: execute the KM algorithm
7: get the selection result
8: update the selected matrix H
9: if more than 50% of the VMs have more than 50% idle resources then
10:   start the supplemental strategy
11:   for n = 1 to N do
12:      if the VM has more than 50% idle resources then
13:         push this VM address into the stack
14:      end if
15:   end for
16:   find extra subtasks for the VM addresses in the stack cache
17: end if
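The supplementary phase of Algorithm 1 (steps 9–17) can be sketched as follows. The 50% thresholds follow the text; the data layout and names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the stack cache supplementary mechanism (Algorithm 1,
# steps 9-17). A VM is "hungry" when more than 50% of its resources
# are idle; hungry VM addresses are stacked with the most idle VM on
# top, so it receives a supplementary subtask first. Names are ours.

def supplement(vms, extra_subtasks, threshold=0.5):
    """vms: list of dicts with 'addr' and 'idle' (idle resource fraction).
    Returns {vm_addr: subtask}, or {} if the mechanism is not triggered."""
    hungry = [vm for vm in vms if vm["idle"] > threshold]
    if len(hungry) <= len(vms) / 2:  # fewer than 50% of VMs hungry: skip
        return {}
    # Least idle at the bottom, most idle on top of the stack cache.
    stack = [vm["addr"] for vm in sorted(hungry, key=lambda v: v["idle"])]
    assignments = {}
    while stack and extra_subtasks:
        assignments[stack.pop()] = extra_subtasks.pop(0)
    return assignments

vms = [{"addr": "vm0", "idle": 0.8}, {"addr": "vm1", "idle": 0.6},
       {"addr": "vm2", "idle": 0.1}]
print(supplement(vms, ["s7", "s8"]))  # vm0 (most idle) is served first
```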
6. Experiment and analysis

In this section, we verify the performance of our algorithm through simulation experiments. Before starting the experiments, we explain the performance criteria; after completing the experiments, we analyze the results.

6.1. Description of performance criteria

In the IoT system, the processing quality of each task is related to user satisfaction. Besides, user satisfaction is directly related to the cost and benefit of the user. In the context of resource allocation, user satisfaction is introduced as the "soft demand" of the system; in [29,30], the same notion is described as "service quality" or "quality of service". In EdgeABC, user satisfaction can be regarded as a criterion for experimentation, and it is determined by the processing states of multiple subtasks. In addition to the direct impact of the utility, user satisfaction is also related to the user cost, which includes the reward paid to the blockchain: the more the user pays, the higher the user cost and, accordingly, the smaller the user satisfaction. On this basis, the parameters of the blockchain are introduced into the user satisfaction function, which can be described by Eq. (18):

H_task(t) = Σ_{m=1}^{M} (U_sub(t) − η S_sub)    (18)
where η represents the per-KB task reward cost determined by the blockchain. Based on Eq. (18), we normalize all satisfaction values and convert them to values between 0 and 10 [31]. Besides, the stack cache supplementary mechanism mainly solves the load balancing problem of the processing nodes, so we introduce the indicator FN to measure fairness:
F_N = (Σ_{n=1}^{N} m_n)^2 / (N Σ_{n=1}^{N} m_n^2)    (19)
where m_n represents the number of subtasks processed by VM n. Obviously, if and only if m_1 = m_2 = · · · = m_N, the fairness index F_N = 1; at this moment, all VMs are completely fair.

In this paper, we use simulation experiments to verify the performance of the system and of the TO-RA algorithm. The necessary experimental settings are as follows. The number of servers K is 5, and each server contains five virtual machines. The transmission bandwidth ω_{m,n}(t) is 5 MHz, the transmission power p_tr(t) of a smart device is 80 mW, and the channel noise N_o is −100 dBm/Hz. The CPU frequency R_n of a virtual machine is 7 GHz/5 GHz/3 GHz, the average CPU frequency R_l(t) of a smart device is 1.3 GHz (because smart devices have their own exclusive work), and the size of a task is 50–500 kB.

6.2. Experimental results and analysis

In the traditional blockchain, the agent completes service transactions according to the timestamp; that is, the earlier a transaction occurs, the earlier it is processed. We call this method the blockchain-based timestamp algorithm (BBTA). We first compared the task processing capabilities of the BBTA and TO-RA algorithms by simulating task requests; the results are shown in Fig. 5.

Fig. 4. The principle of the stack cache supplementary mechanism.

Fig. 5. Comparison between the TO-RA algorithm and BBTA.

As we can see in Fig. 5, for the same number of tasks, the average delay of the TO-RA algorithm is shorter than that of BBTA; as the number of tasks increases, BBTA shows an exponentially increasing trend, while the TO-RA algorithm increases linearly with a shallow slope. That is, the task processing capability of the TO-RA algorithm is stronger than that of BBTA. BBTA focuses only on the transaction time, regardless of the delay sensitivity of tasks and the availability of VM resources, so its total processing time is extended. The TO-RA algorithm can process multiple subtasks in a short time and realizes parallel processing of the subtasks of a task; therefore, it shortens the total processing time, and its processing capability is superior to that of BBTA. Then we compare the TO-RA algorithm with the following:
• Local processing: all tasks are performed locally (by IoT smart devices).
• All offloading: the IoT smart devices offload all tasks to the edge computing server, and the task is not split [32,33].
• Independent target priority algorithm (ITPA): these algorithms focus on energy or delay [18–20], and the best value is recorded as the result. We assume that ITPA satisfies Eq. (9).

Firstly, we compare the time and energy consumption of the four algorithms. The variables are the size of the task and the distance (L) between the subtask and the server. Due to limited space, only the experimental results for 100 m and 300 m are listed here. In Fig. 6(a) and (c), for the same task size, local processing consumes the most energy. As the task size increases (from 50 kB to 300 kB), the energy consumption of all four algorithms also increases; among them, the energy consumption curve of local processing rises fastest. At the same time, we can also see that the distance directly affects the energy consumption of smart devices (local processing). The energy consumption of data transmission of the other three algorithms
increases with distance but is still lower than that of local processing. Compared with local processing, the energy of the offloading algorithms is mainly consumed by the transmission of tasks. Correspondingly, the time consumption of the four algorithms is shown in Fig. 6(b) and (d). When the task is small, the differences in the time consumption of the four algorithms are also small. As the size of the task increases, the time consumption also increases, and the increase for local processing is most pronounced: because the local processing power is weak, it is challenging to handle too many tasks. The time consumption of the TO-RA algorithm is less than that of the others because we divide the task according to the application workflow and process its subtasks in parallel, which significantly saves task processing time. Therefore, it is wise to allow some tasks (not all tasks) to be offloaded to the edge computing server. Then we introduce the fairness index F_N to compare the three algorithms (local processing does not involve the fairness of the servers, so it does not participate in the comparison). In Fig. 7, we can see that the fairness of the TO-RA algorithm is higher than that of the other algorithms, with only a fluctuation of about 0.07. As the number of iterations increases, the fairness of the TO-RA algorithm tends to stabilize. This is because ITPA and the all-offloading algorithm consider latency or energy consumption and ignore the fairness of edge computing servers. The TO-RA algorithm divides tasks into subtasks to better match the corresponding virtual machine resources, and when a system load imbalance is detected, the stack cache supplementary mechanism is immediately enabled to realize the fairness of the servers through subtask re-allocation and supplement, which ensures that all server resources are fully utilized for some time.
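The two experiment criteria used in these comparisons can be computed directly from Eqs. (18) and (19); Eq. (19) is Jain's fairness index. This is a minimal sketch with illustrative names and values:

```python
def user_satisfaction(u_subs, s_subs, eta):
    """Eq. (18): H_task = sum_m (U_sub - eta * S_sub), where eta is the
    per-KB reward cost and S_sub the subtask size in KB."""
    return sum(u - eta * s for u, s in zip(u_subs, s_subs))

def fairness_index(m):
    """Eq. (19), Jain's index: (sum m_n)^2 / (N * sum m_n^2).
    m[n] is the number of subtasks processed by VM n; the index equals 1
    if and only if every VM processed the same number of subtasks."""
    n = len(m)
    return sum(m) ** 2 / (n * sum(x * x for x in m))

print(user_satisfaction([5.0, 4.0], [100, 200], 0.01))  # 9.0 - 3.0 = 6.0
print(fairness_index([4, 4, 4]))                        # 1.0 (fully fair)
print(fairness_index([6, 3, 3]))                        # 0.888...
```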
Next, we compare the system utility Usys(t) of the three algorithms (local processing does not participate in the comparison). In Fig. 8, we can see that, among the three algorithms, the TO-RA algorithm has the highest system utility. This is because the subtask-VM mapping strategy in the TO-RA algorithm dramatically simplifies and accelerates task offloading, enabling tasks to be processed in parallel as quickly as possible. Also, the supplementary mechanism in the TO-RA algorithm allocates corresponding subtasks to idle resources in a single iteration, which directly improves the utility and concurrency of the system.

Fig. 6. Time consumption of task processing and energy consumption of smart devices.

Fig. 7. Fairness of algorithms.

Fig. 8. System utility of algorithms.

Finally, the user satisfaction Htask(t) of the four algorithms is compared. In Fig. 9, we can see that the differences in user satisfaction of the four algorithms are small when the task size is small; the user satisfaction of ITPA is approximately equal to that of the TO-RA algorithm for small tasks. As the task size increases, the differences in user satisfaction become larger, and the fastest decline is that of local processing. Compared to the other algorithms, the user satisfaction advantage of the TO-RA algorithm becomes more and more apparent; its value is stable between 8.5 and 9.5. This is because the processing capability of smart devices (local processing) is weak: as tasks increase, local processing becomes more and more laborious. The larger the task, the larger the processor load, and user satisfaction decreases as the task processing time is extended. At the same time, the main work of the smart devices may be affected by the additional processing tasks, which also affects the experience quality of the user. Moreover, when the task size is small, the advantage of task segmentation in the
TO-RA algorithm is not apparent. Compared to local processing, due to the activation of the supplement mechanism, the TO-RA algorithm can complete more subtasks in a unit cycle. Therefore, the TO-RA algorithm achieves more stable and higher user satisfaction than the other algorithms.

Fig. 9. Comparison of user satisfaction of algorithms.

7. Conclusions

In this paper, we consider the sensitive requirements of the various roles in the IoT to design the architecture and algorithms. We propose a blockchain-based architecture, EdgeABC, and an algorithm, TO-RA, to meet the latency sensitivity requirements of tasks, the satisfaction requirements of users, the fairness requirements of edge computing servers, and the requirements of system utility, where the algorithm is implemented in the second layer of the architecture in the form of smart contracts.

7.1. The merits of the proposed EdgeABC and TO-RA algorithm

The introduction of the blockchain enables EdgeABC to inherit the advantages of the blockchain, for example, the tamper-proof nature of block data, which ensures the integrity of service data. The TO-RA algorithm mainly completes the task offloading of smart devices and the resource allocation of edge computing servers, and is implemented in the EdgeABC architecture in the form of smart contracts. Specifically, considering the delay requirements of tasks (for the smart device role), we first divide a task into multiple subtasks based on the application workflow and then propose the subtask-VM mapping strategy, which simplifies task offloading. The mapping strategy allows subtasks that satisfy the parallel workflow relationship to be processed in parallel, which significantly improves the processing efficiency of tasks as well as system efficiency and user satisfaction. In this step, we transform the solution problem into an optimal matching problem for bipartite graphs to complete the task offloading of IoT devices. Next, given the resource allocation requirements of the edge computing server role, we propose a stack cache supplement mechanism to improve system concurrency and efficiency. This mechanism is an optimization of and complement to the proposed mapping strategy. During the working state, the system periodically checks the resource utilization of the edge computing servers, decides whether to start the supplementary mechanism according to the load balancing state, and fills idle virtual machines by re-allocating subtasks.

In summary, in the IoT, the proposed EdgeABC architecture and TO-RA algorithm can ensure the reliability of service data and the profits of service providers, realize the rapid offloading and execution of smart device tasks, and ensure the servers' load balance and system utility.

7.2. Future research work

In the future, we will explore designing task offloading algorithms that ensure the fairness of edge computing servers while maximizing user satisfaction and system utility. Besides, we will also consider the challenges of task offloading and resource allocation in specific IoT scenarios (smart cities, smart buildings, smart driving, etc.).

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work is supported by the BUPT Excellent Ph.D. Students Foundation, the National Science & Technology Pillar Program, China (2015BAH03F02), the National Key Research and Development Program of China (2016YFE0204500), and the Industrial Internet Project of the Ministry of Industry and Information Technology, China.

References
[1] C. Stergiou, K.E. Psannis, A.P. Plageras, Y. Ishibashi, B.-G. Kim, Algorithms for efficient digital media transmission over IoT and cloud networking, J. Multimedia Inf. Syst. 5 (1) (2018) 1–10.
[2] A.P. Plageras, K.E. Psannis, C. Stergiou, H. Wang, B.B. Gupta, Efficient IoT-based sensor BIG data collection-processing and analysis in smart buildings, Future Gener. Comput. Syst. 82 (2018) 349–357.
[3] Brij Bhooshan Gupta, Kostas E. Psannis, Advanced media-based smart big data on intelligent cloud systems, IEEE Trans. Sustain. Comput. 4 (1) (2019) 77–87.
[4] W. Shi, C. Jie, Z. Quan, et al., Edge computing: Vision and challenges, IEEE Internet Things J. 3 (5) (2016) 637–646.
[5] Weisong Shi, Schahram Dustdar, The promise of edge computing, IEEE Comput. 49 (5) (2016) 78–81.
[6] M. Iansiti, K.R. Lakhani, The truth about blockchain, Harv. Bus. Rev. 95 (1) (2017) 118–127.
[7] R. Khan, S.U. Khan, R. Zaheer, et al., Future internet: The internet of things architecture, possible applications and key challenges, in: Frontiers of Information Technology (FIT), 2012 10th IEEE International Conference.
[8] Vasileios Memos, Kostas E. Psannis, Yutaka Ishibashi, Byung-Gyu Kim, Brij Gupta, An efficient algorithm for media-based surveillance system (EAMSuS) in IoT smart city framework, Future Gener. Comput. Syst. (2017).
[9] J. Gubbi, R. Buyya, S. Marusic, et al., Internet of Things (IoT): A vision, architectural elements, and future directions, Future Gener. Comput. Syst. 29 (7) (2013) 1645–1660.
[10] M. Chiang, T. Zhang, Fog and IoT: An overview of research opportunities, IEEE Internet Things J. 3 (6) (2016) 854–864.
[11] Arvind Narayanan, et al., Bitcoin and Cryptocurrency Technologies, Princeton Press, 2016.
[12] Z. Xiong, et al., When mobile blockchain meets edge computing, IEEE Commun. Mag. 56 (8) (2018) 33–39.
[13] N.C. Luong, et al., Optimal auction for edge computing resource management in mobile blockchain networks: A deep learning approach, in: Proc. IEEE ICC'18, Kansas City, MO, 2018, pp. 1–6.
[14] Y. Jiao, et al., Social welfare maximization auction in edge computing resource allocation for mobile blockchain, in: Proc. IEEE ICC'18, Kansas City, MO, 2018, pp. 1–6.
[15] S. Basu, M. Karuppiah, K. Selvakumar, K.C. Li, S.H. Islam, M.M. Hassan, M.Z.A. Bhuiyan, An intelligent/cognitive model of task scheduling for IoT applications in cloud computing environment, Future Gener. Comput. Syst. 88 (2018) 254–261.
[16] M.G.R. Alam, M.M. Hassan, M.Z. Uddin, A. Almogren, G. Fortino, Autonomic computation offloading in mobile edge for IoT applications, Future Gener. Comput. Syst. 90 (2019) 149–157.
[17] A. Enayet, M.A. Razzaque, M.M. Hassan, A. Alamri, G. Fortino, A mobility-aware optimal resource allocation architecture for big data task execution on mobile cloud in smart cities, IEEE Commun. Mag. 56 (2) (2018) 110–117.
[18] D. Huang, P. Wang, D. Niyato, A dynamic offloading algorithm for mobile computing, IEEE Trans. Wirel. Commun. 11 (6) (2012) 1991–1995.
[19] Yuyi Mao, Jun Zhang, Khaled Ben Letaief, Dynamic computation offloading for mobile-edge computing with energy harvesting devices, IEEE J. Sel. Areas Commun. 34 (12) (2016) 3590–3605.
[20] J. Liu, Y. Mao, J. Zhang, K.B. Letaief, Delay-optimal computation task scheduling for mobile-edge computing systems, in: Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, 2016, pp. 1451–1455.
[21] C. Stergiou, K.E. Psannis, B.-G. Kim, B. Gupta, Secure integration of IoT and cloud computing, Future Gener. Comput. Syst. 78 (3) (2018) 964–975.
[22] M. Crosby, P. Pattanayak, S. Verma, V. Kalyanaraman, Blockchain technology: beyond bitcoin, Appl. Innov. 2 (2016) 6–10.
[23] Md. Abu Sayeed, Saraju P. Mohanty, Elias Kougianos, Hitten P. Zaveri, eSeiz: An edge-device for accurate seizure detection for smart healthcare, IEEE Trans. Consum. Electron. 65 (3) (2019) 379–387.
[24] Mahbuba Afrin, Jiong Jin, Ashfaqur Rahman, Yu-Chu Tian, Ambarish Kulkarni, Multi-objective resource allocation for edge cloud based robotic workflow in smart factory, Future Gener. Comput. Syst. 97 (2019) 119–130.
[25] Pradip Kumar Sharma, Mu-Yen Chen, Jong Hyuk Park, A software defined fog node based distributed blockchain cloud architecture for IoT, IEEE Access 6 (2018) 115–124.
[26] Ishan Garg, Vitalik's new Consensus Algorithm to make 51% attack obsolete, requires 99% nodes for attack. [Online]. Available: https://blockmanity.com/news/ethereum/vitaliksnewconsensusalgorithmmake51attackobsoleterequires99nodesattack.
[27] Xinchen Lyu, Hui Tian, Cigdem Sengul, Ping Zhang, Multiuser joint task offloading and resource optimization in proximate clouds, IEEE Trans. Veh. Technol. 66 (4) (2017) 3435–3447.
[28] Jinlai Xu, Balaji Palanisamy, Heiko Ludwig, Qingyang Wang, Zenith: Utility-aware resource allocation for edge computing, in: IEEE International Conference on Edge Computing, IEEE Computer Society, 2017, pp. 47–54.
[29] Wei Li, Flávia C. Delicato, Paulo F. Pires, Young Choon Lee, Albert Y. Zomaya, Claudi Miceli, Luci Pirmez, Efficient allocation of resources in multiple heterogeneous wireless sensor networks, Parallel Distrib. Comput. 74 (1) (2014) 1775–1788.
[30] Farzad Samie, Vasileios Tsoutsouras, Sotirios Xydis, Lars Bauer, Dimitrios Soudris, Jörg Henkel, Distributed QoS management for internet of things under resource constraints, in: Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS'16).
[31] Ing-Ray Chen, Jia Guo, Fenye Bao, Trust management for SOA-based IoT and its application to service composition, IEEE Trans. Serv. Comput. 9 (3) (2016) 482–495.
[32] S. Sardellitti, G. Scutari, S. Barbarossa, Distributed joint optimization of radio and computational resources for mobile cloud computing, in: Proc. 3rd Int. Conf. CloudNet, 2014, pp. 211–216.
[33] S. Sardellitti, G. Scutari, S. Barbarossa, Joint optimization of radio and computational resources for multicell mobile-edge computing, IEEE Trans. Signal Inf. Process. Over Netw. 1 (2) (2015) 89–103.
Kaile Xiao is a Ph.D. candidate at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications (BUPT), Beijing, China. She received her M.S. degree from Beijing University of Posts and Telecommunications (BUPT) in 2017. Her interests include edge computing for IoT scenarios, including task offloading and resource allocation.
Zhipeng Gao received the Ph.D. degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2007. He is currently a Professor and the Ph.D. Supervisor at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications (BUPT). He presides over a series of key research projects on network and service management, including the projects supported by the National Natural Science Foundation and the National High-Tech Research and Development Program of China. He received eight provincial scientific and technical awards.
Weisong Shi is an IEEE Fellow and ACM Distinguished Scientist. He is a Charles H. Gershenson Distinguished Faculty Fellow and a professor of Computer Science at Wayne State University. His research interests include Edge Computing, Computer Systems, energy-efficiency, and wireless health. He received his BS from Xidian University in 1995, and Ph.D. from the Chinese Academy of Sciences in 2000, both in Computer Engineering. He is a recipient of the National Outstanding Ph.D. dissertation award of China and the NSF CAREER award.
Xuesong Qiu was born in 1973. He received the Ph.D. degree from the Beijing University of Posts and Telecommunications, Beijing, China, in 2000. Since 2013, he has been the Deputy Director with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, where he is currently a Professor and Ph.D. Supervisor. He has authored over 100 SCI/EI index papers. His current research interests include network management and service management. He is presiding over a series of key research projects on network and service management, including the projects supported by the National Natural Science Foundation and the National High-Tech Research and Development Program of China. His awards and honors include 13 national and provincial scientific and technical awards, including the national scientific and technical awards (second-class) twice.
Yang Yang received the Ph.D. degree from the Beijing University of Posts and Telecommunications (BUPT). She was an Instructor with the Beijing University of Posts and Telecommunications in 2011. She holds a post-doctoral position at the University of Science and Technology Beijing. She is currently an Associate Professor with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. Her current research interests are cooperation and management in MANETs.
Lanlan Rui received the Ph.D. degree in computer science and technology from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2010. She is currently an Associate Professor with the State Key Laboratory of Networking and Switching Technology, BUPT, China. Her research interests include IoT, MEC, content-based measurement and analysis, quality of service (QoS), and the intelligent theory and technology of network services. As a result of her standardization work in network management, she received the 3GPP SA5 Outstanding Contribution Award.