Computation Offloading Scheme to Improve QoE in Vehicular Networks with Mobile Edge Computing

Qiaorong Liu+, Zhou Su+,∗, and Yilong Hui+
+School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, P. R. China
∗Corresponding Author:
[email protected]
Abstract—Mobile edge computing (MEC) is a new paradigm to improve the quality of vehicular services by providing computation offloading close to vehicular terminals (VTs). However, due to the limited computation capacity of MEC servers, how to optimally utilize their computation resources while maintaining a high quality of experience (QoE) for the VTs becomes a challenge. To address this problem, we investigate a novel computation offloading scheme based on an MEC offloading framework in vehicular networks. Firstly, the utility of VTs for offloading their computation tasks is presented, where the utility is jointly determined by the execution time, the computation resources, and the energy for completing the computation tasks. Next, with a theoretical analysis of this utility, the QoE of each VT can be guaranteed. Then, combined with the pricing scheme of the MEC servers, we propose an efficient distributed computation offloading algorithm to make the optimal offloading decisions for VTs, where the utility of the MEC servers is maximized and the QoE of the VTs is enhanced. In addition, simulation results demonstrate that the proposal leads to a higher utility for MEC servers than the conventional schemes.
Index Terms—Vehicular networks, computation offloading, mobile edge computing.
I. INTRODUCTION
With the development of the Internet of Things (IoT) and wireless communication technologies, vehicular networks have become an important part of future intelligent transportation systems (ITS) [1], [2]. Compared to current transportation systems, vehicular networks provide information exchange via vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, which may induce enormous Internet traffic. To cope with the increasing traffic demands of vehicular terminals (VTs), vehicular cloud networks have been proposed, which can offload some contents or computation tasks of VTs to remote cloud servers. With adequate computation capacities, vehicular cloud networks can be recognized as an effective paradigm for improving the performance of vehicular services [3]–[5]. Although vehicular cloud networks have greatly improved both resource utilization and computation performance [6], the delay fluctuation of long-distance transmission between VTs and remote cloud servers may decrease the offloading efficiency and bring extra network operation cost. Besides, the extensive application demands from VTs may cause congestion in the backbone networks. To overcome these problems, mobile edge computing (MEC) is proposed as a promising solution [7]–[10], enabling computing services close to the VTs. Due to the physical proximity, MEC servers realize a
low-latency connection and a quick interactive response by offloading the contents or computation tasks to an adjacent computing center rather than merely relying on the remote cloud servers. MEC has therefore been considered an efficient technology to improve the performance of vehicular services for both MEC servers and VTs. However, there still exist some fundamental challenges. 1) Based on the various resource demands of different types of computation tasks, how to determine the computation offloading strategies that maximize the QoE of each VT should be taken into account. 2) The intermittent connectivity between VTs and MEC servers means that not all VTs are willing to offload their computation tasks. To obtain more utility, how to design an efficient pricing scheme that encourages more VTs to offload their tasks is essential. It is necessary to resolve these issues jointly to obtain the optimal offloading strategies of VTs and the optimal pricing scheme of MEC servers with a high utility.
In this paper, we propose an MEC offloading framework in vehicular networks, where a novel computation offloading scheme between VTs and MEC servers is investigated. Firstly, we present the utility of VTs for offloading their computation tasks. Next, by proving that there always exists a maximum value of this utility, the QoE of VTs can be guaranteed. Then, combined with the optimal pricing scheme of MEC servers, an efficient distributed computation offloading algorithm is developed. By using this algorithm, the utility of MEC servers can be maximized and the QoE of VTs can be greatly improved. In addition, the simulation results demonstrate the efficiency of our proposal, especially when massive computation tasks need to be offloaded.
The rest of the paper is organized as follows. In Section II, related work is reviewed. Section III describes the system model of the MEC offloading framework in vehicular networks. In Section IV, based on the optimal offloading decisions of VTs and the pricing scheme of MEC servers, we propose the distributed computation offloading algorithm. The simulation results are discussed in Section V. Finally, we conclude the paper in Section VI.

II. RELATED WORK
A. Computation Offloading in Vehicular Networks
Nowadays, with the explosive data traffic demands of VTs, computation offloading has become an attractive scheme in vehicular networks. Zhu et al. [11] study a novel traffic
offloading system with the integration of cellular networks and opportunistic communications in vehicular networks. Cheng et al. [12] consider an opportunistic WiFi offloading scheme in a vehicular environment, where the data traffic of vehicular users (VUs) is offloaded within the coverage areas of WiFi hotspots. Baron et al. [13] take advantage of the mobility and storage capacities of private vehicles, by which large amounts of delay-tolerant traffic can be offloaded.

B. Mobile Edge Computing
In recent years, with the advantage of reducing latency, the emerging MEC technology has drawn much attention. Kiani et al. [14] combine MEC and non-orthogonal multiple access techniques to jointly improve the quality of 5G wireless networks. Su et al. [15] adopt an incentive edge mechanism to optimally deliver the layered contents of mobile users, where the contents are stored on cache nodes in advance. Zhang et al. [16] propose a hierarchical vehicular edge computing (VEC) offloading scheme to optimize the utility of both vehicles and the computing centers.
Although computation offloading schemes in vehicular networks have been widely studied, few of the existing works consider the QoE-based offloading decisions of VTs. Furthermore, how to jointly consider the utility of MEC servers and the QoE of VTs has not yet been studied.

III. SYSTEM MODEL
In this section, the system model is described. We first present the network model in detail. Then the communication model and the computation model are introduced, respectively.

A. Network Model
A scenario of the MEC computation offloading system in vehicular networks is shown in Fig. 1, where MEC servers are deployed at the roadside units (RSUs) or base stations (BSs).

Fig. 1. Network model.

Although their computation capacities are limited, the MEC servers are also connected to the backbone networks. In this paper, we denote the set of MEC servers as $\mathcal{M} = \{1, 2, ..., M\}$ and their computation capacities as $\{F_1^{max}, F_2^{max}, ..., F_M^{max}\}$. VT $i$ ($i \in \mathcal{N} = \{1, 2, ..., N\}$) can execute its computation task locally on its own resources or offload the task to an MEC server in proximity. Each VT has one computation task to accomplish, and a VT can connect to only one MEC server for offloading. The set of VTs in the coverage of MEC server $m$ ($m \in \mathcal{M}$) is denoted as $\mathcal{N}_m$, with $\mathcal{N}_m \cap \mathcal{N}_{m'} = \emptyset$, $\forall m, m' \in \mathcal{M}$. Furthermore, an MEC server may provide services for multiple VTs as long as it can support the requested resources.
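To make the above notation concrete, the following minimal sketch (not part of the original paper; all class and variable names are hypothetical) mirrors the sets $\mathcal{M}$, $\mathcal{N}_m$ and the capacities $F_m^{max}$ described above:

```python
# Illustrative data structures for the network model; names and values are
# assumptions for this sketch, not definitions from the paper.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MECServer:
    server_id: int
    f_max: float                                           # computation capacity F_m^max
    covered_vts: List[int] = field(default_factory=list)   # the set N_m

@dataclass
class VehicularTerminal:
    vt_id: int
    f_local: float                                          # local capability f_{i,k}^l
    server_id: Optional[int] = None                         # at most one serving MEC server

# Example: two MEC servers and three VTs; VT 3 is outside any coverage area.
servers: Dict[int, MECServer] = {1: MECServer(1, 10e9), 2: MECServer(2, 8e9)}
vts = [VehicularTerminal(1, 1e9, 1), VehicularTerminal(2, 1.5e9, 1),
       VehicularTerminal(3, 2e9, None)]
for vt in vts:
    if vt.server_id is not None:                            # the sets N_m are disjoint by construction
        servers[vt.server_id].covered_vts.append(vt.vt_id)
```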
B. Communication Model
Each VT can offload its computation task only when it is within the communication coverage of the corresponding MEC server, where the transmission suffers from the interference caused by other VTs and from noise. Let $x_{i,m} \in \{0, 1\}$, $\forall i \in \mathcal{N}_m, m \in \mathcal{M}$ denote the computation offloading decision of VT $i$. Specifically, $x_{i,m} = 1$ indicates that MEC server $m$ can be a candidate for the task offloading service of VT $i$. Conversely, VT $i$ decides to execute its computation task locally on its own computation resources when $x_{i,m} = 0$. Considering the mutual interference generated by other VTs and the noise, according to the Shannon bound, the achievable transmission rate between VT $i$ and its connected MEC server $m$ can be obtained as
$$r_{i,m} = B_m \log_2\left(1 + \frac{p_i h_{i,m}}{\sum_{j \in \mathcal{N}_m, j \neq i} p_j h_{j,m} + \sigma^2}\right) \qquad (1)$$
where $B_m$ denotes the spectrum bandwidth allocated to MEC server $m$ from the backbone networks, $p_i$ is the transmission power density of VT $i$, and $h_{i,m}$ is the channel gain between VT $i$ and the corresponding MEC server $m$. $\sigma^2$ represents the power spectral density of the additive white Gaussian noise.
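As an illustrative sketch of Eq. (1) (not code from the paper; the numerical values are placeholders), the achievable rate of a VT under interference from the other VTs served by the same MEC server can be computed as follows:

```python
# Sketch of the rate in Eq. (1); powers, gains, bandwidth and noise below are
# placeholder values, not parameters from the paper.
import math

def transmission_rate(bandwidth_hz, powers, gains, vt_id, vts_on_server, noise):
    """Achievable rate r_{i,m} (bits/s) of VT vt_id towards MEC server m."""
    interference = sum(powers[j] * gains[j] for j in vts_on_server if j != vt_id)
    sinr = powers[vt_id] * gains[vt_id] / (interference + noise)
    return bandwidth_hz * math.log2(1.0 + sinr)

powers = {1: 0.1, 2: 0.1, 3: 0.2}        # transmit powers p_i (W)
gains = {1: 1e-6, 2: 5e-7, 3: 8e-7}      # channel gains h_{i,m}
r_1m = transmission_rate(10e6, powers, gains, 1, [1, 2, 3], noise=1e-13)
```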
C. Computation Model
Different VTs have different types of computation tasks since their demands are not always the same. For VT $i$ within the coverage of MEC server $m$, its computation task can be accomplished either locally on its own computation resources or remotely on the connected MEC server $m$ through computation offloading. Here, we consider that each computation task must be completed within a certain delay constraint. In our model, there are $K$ types of computation tasks, and for each type $k \in \mathcal{K} = \{1, 2, ..., K\}$ the required resources may be different. Let $T_{i,k} \triangleq \{Z_{i,k}^{in}, D_{i,k}, t_{i,k}^{max}\}$ denote the computation task requested by VT $i$, where $Z_{i,k}^{in}$ is the size of the input data of the task, including program codes, input files, etc. $D_{i,k}$ denotes the computation load of the task, i.e., the total number of CPU cycles required to complete the task $T_{i,k}$, and $t_{i,k}^{max}$ stands for the maximum latency constraint of the task. We formulate the computation load as
$$D_{i,k} = h_k Z_{i,k}^{in} \qquad (2)$$
where $h_k$ is a coefficient related to the type of the task.
Local computing: When VT $i$ chooses to complete its computation task locally on its own resources, the computation execution time can be presented as
$$t_{i,k}^{l} = D_{i,k}/f_{i,k}^{l} = h_k Z_{i,k}^{in}/f_{i,k}^{l} \qquad (3)$$
where $f_{i,k}^{l}$ denotes the computation capability of VT $i$, i.e., the number of CPU cycles executed within a unit time.
MEC server computing: If VT $i$ determines to offload its computation task to MEC server $m$, both the transmission time and the execution time need to be considered in the offloading overhead. VT $i$ should submit the input data of its computation task to the connected MEC server $m$ within the transmission time, denoted as $t_{i,k,m,off}^{o}$. Furthermore, the execution time, i.e., $t_{i,k,m,exe}^{o}$, is the time to execute the offloaded computation task of VT $i$ on MEC server $m$. They can be expressed as follows:
$$t_{i,k,m,off}^{o} = Z_{i,k}^{in}/r_{i,m} \qquad (4)$$
$$t_{i,k,m,exe}^{o} = D_{i,k}/f_{i,k,m}^{o} = h_k Z_{i,k}^{in}/f_{i,k,m}^{o} \qquad (5)$$
where $f_{i,k,m}^{o}$ is the amount of computation resources allocated to VT $i$ by MEC server $m$. Due to the limited computation capacities of the MEC servers, the total resources allocated by MEC server $m$ must satisfy $\sum_{i=1}^{N_m} f_{i,k,m}^{o} \leq F_m^{max}$, $m \in \mathcal{M}$.
As a result, the total time consumed by offloading the computation task of VT $i$ to MEC server $m$ can be obtained by
$$t_{i,k,m}^{o} = t_{i,k,m,off}^{o} + t_{i,k,m,exe}^{o} \qquad (6)$$
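For illustration, the timing model in Eqs. (2)-(6) can be evaluated as in the short sketch below (not from the paper; all parameter values are placeholders):

```python
# Sketch of Eqs. (2)-(6): local execution time versus total offloading time.
def local_time(h_k, z_in, f_local):
    """t^l_{i,k} = D_{i,k} / f^l_{i,k}, with D_{i,k} = h_k * Z^in_{i,k} (Eqs. (2)-(3))."""
    return h_k * z_in / f_local

def offload_time(h_k, z_in, rate, f_alloc):
    """t^o_{i,k,m} = Z^in_{i,k} / r_{i,m} + D_{i,k} / f^o_{i,k,m} (Eqs. (4)-(6))."""
    t_off = z_in / rate                  # transmission time, Eq. (4)
    t_exe = h_k * z_in / f_alloc         # remote execution time, Eq. (5)
    return t_off + t_exe                 # total offloading time, Eq. (6)

# Example: 1 MB (8e6 bit) input, 100 cycles/bit, 1 GHz locally, 5 GHz remotely.
t_l = local_time(100.0, 8e6, 1e9)          # 0.8 s when executed locally
t_o = offload_time(100.0, 8e6, 20e6, 5e9)  # 0.4 s transmission + 0.16 s execution
```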
IV. OPTIMAL OFFLOADING DECISION AND PRICING SCHEME OF MEC OFFLOADING SYSTEM
In this section, we first introduce the optimal offloading decision of each VT, aiming at improving its QoE. After that, combined with the pricing scheme of MEC servers, we propose a distributed computation offloading algorithm to maximize the utility of both VTs and MEC servers.

A. Optimal Offloading Decision
Constrained by its limited computation resources, a VT is sometimes not able to accomplish its computation task locally, especially when the task needs a lot of resources. In this case, VTs choose to offload their computation tasks to nearby MEC servers, which helps to save the computation resources of VTs and reduces the transmission latency compared to conventional remote servers. When VT $i$ decides to offload its computation task to the nearby MEC server $m$, it must be guaranteed that the offloading time does not exceed the local execution time. Thus, the minimum offloading rate from VT $i$ to MEC server $m$ should be
$$\bar{r}_{i,m} = \frac{Z_{i,k}^{in}}{t_{i,k}^{l} - t_{i,k,m,exe}^{o}} \qquad (7)$$
The actual instantaneous transmission rate must then satisfy $r_{i,m} \geq \bar{r}_{i,m}$ ($i \in \mathcal{N}_m, m \in \mathcal{M}$).
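The following small check (an illustrative sketch under the reading of Eq. (7) given above, not code from the paper) tests whether a given link rate meets the minimum offloading rate:

```python
# Sketch of the minimum-rate condition around Eq. (7): offloading only pays off
# if the input data can be delivered within the time saved by remote execution.
def min_offloading_rate(z_in, t_local, t_exe_remote):
    """r_bar = Z^in_{i,k} / (t^l_{i,k} - t^o_{i,k,m,exe}), following Eq. (7)."""
    budget = t_local - t_exe_remote
    return float("inf") if budget <= 0 else z_in / budget

r_bar = min_offloading_rate(8e6, 0.8, 0.16)   # at least 12.5 Mbit/s is required
feasible = 20e6 >= r_bar                      # r_{i,m} >= r_bar must hold
```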
In this paper, since each VT has only one computation task, we consider binary offloading, i.e., each computation task is executed completely either locally or remotely on an MEC server. Under the aforementioned minimum transmission rate condition, VT $i$ makes its offloading decision by comparing its local execution time with the maximum latency constraint of the task. We have
$$x_{i,m} = \begin{cases} 1, & \text{if } t_{i,k}^{l} > t_{i,k}^{max} \text{ and } t_{i,k,m}^{o} < t_{i,k}^{max}, \ \forall m \in \mathcal{M} \\ 0, & \text{if } t_{i,k}^{l} \leq t_{i,k}^{max} \end{cases} \qquad (8)$$
That is, VT $i$ decides to offload its computation task when its local execution time exceeds the maximum latency constraint and the offloading time satisfies this constraint, i.e., $t_{i,k}^{l} > t_{i,k}^{max}$ and $t_{i,k,m}^{o} < t_{i,k}^{max}$.
On the other hand, the offloading strategies of VTs are also influenced by the pricing scheme. The VTs decide whether to offload their computation tasks based on the unit resource price charged by the MEC servers. If the unit price is high, few VTs will offload their computation tasks, in order to decrease their cost. On the contrary, when the MEC servers charge a low price, each VT wants to offload its computation task, aiming at saving computation resources and improving its QoE. Therefore, the utility of VT $i$ is determined by its QoE and its cost, which can be denoted as
$$U_{i,k} = \phi\left(f_{i,k,m}^{o}\right) - \psi\left(p_k, f_{i,k,m}^{o}\right) \qquad (9)$$
where $U_{i,k}$ is the utility function of VT $i$ when it offloads the $k$-th type of computation task to MEC server $m$. We consider that each MEC server provides the same computation capacity for a specific type of computation task. $\phi(f_{i,k,m}^{o})$ represents the total utility that VT $i$ obtains from computation offloading, which is related to the computation resources allocated by MEC server $m$. This is because the VTs always want the lowest latency for a high QoE, so they are mainly concerned about the allocated computation resources.
We consider the utility from three aspects. The first part is the utility of saving execution time by computation offloading, i.e., $\phi_1(f_{i,k,m}^{o})$. The second aspect is the utility of saving computation resources, i.e., $\phi_2(f_{i,k,m}^{o})$; obviously, the larger the computation capacities of the MEC servers are, the more computation resources can be saved by the VTs. Finally, the utility of saving energy also needs to be considered, which can be denoted by $\phi_3(f_{i,k,m}^{o})$. The details are as follows:
$$\phi_1\left(f_{i,k,m}^{o}\right) = \lambda_1\left(t_{i,k}^{l} - t_{i,k,m}^{o}\right) \qquad (10)$$
$$\phi_2\left(f_{i,k,m}^{o}\right) = \lambda_2 \ln\left(\alpha f_{i,k,m}^{o} + \beta\right) \qquad (11)$$
$$\phi_3\left(f_{i,k,m}^{o}\right) = \lambda_3 e_0 f_{i,k,m}^{o} \qquad (12)$$
In (11), $\alpha$ and $\beta$ are positive coefficients related to the utility of the VTs. $e_0$ in (12) is the unit utility of saving energy by computation offloading. Besides, $\lambda_x$ ($x \in \{1, 2, 3\}$) are weight factors with $0 < \lambda_x < 1$ and, without loss of generality, $\lambda_1 + \lambda_2 + \lambda_3 = 1$.
$\psi(p_k, f_{i,k,m}^{o})$ in (9) is the payment that VT $i$ should make to MEC server $m$. It can be presented as
$$\psi\left(p_k, f_{i,k,m}^{o}\right) = p_k \cdot f_{i,k,m}^{o}, \ \forall i \in \mathcal{N}_m \qquad (13)$$
where $p_k$ is the unit resource price charged for the $k$-th type of computation task. When the price is given, the payment of VT $i$ is positively related to the amount of resources allocated by MEC server $m$ for executing task $T_{i,k}$. Thus, the utility function of VT $i$ can be rewritten as
$$U_{i,k} = \phi_1\left(f_{i,k,m}^{o}\right) + \phi_2\left(f_{i,k,m}^{o}\right) + \phi_3\left(f_{i,k,m}^{o}\right) - p_k \cdot f_{i,k,m}^{o} = \lambda_1\left(\frac{D_{i,k}}{f_{i,k}^{l}} - \frac{Z_{i,k}^{in}}{r_{i,m}} - \frac{D_{i,k}}{f_{i,k,m}^{o}}\right) + \lambda_2 \ln\left(\alpha f_{i,k,m}^{o} + \beta\right) + \lambda_3 e_0 f_{i,k,m}^{o} - p_k \cdot f_{i,k,m}^{o} \qquad (14)$$
for $i \in \mathcal{N}_m$, $m \in \mathcal{M}$, $k \in \mathcal{K}$.
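For concreteness, the utility in Eq. (14) can be evaluated as in the sketch below (illustrative only; the weight factors follow the values later listed in Table I, while the remaining inputs are placeholders):

```python
# Sketch of the VT utility U_{i,k} in Eqs. (10)-(14); all inputs use abstract
# resource/time units and are placeholders, not values from the paper.
import math

def vt_utility(f_alloc, d_load, z_in, rate, f_local, p_k,
               lam=(0.4, 0.3, 0.3), alpha=5.0, beta=10.0, e0=0.1):
    t_local = d_load / f_local                        # t^l_{i,k}
    t_offload = z_in / rate + d_load / f_alloc        # t^o_{i,k,m}
    phi1 = lam[0] * (t_local - t_offload)             # time saving, Eq. (10)
    phi2 = lam[1] * math.log(alpha * f_alloc + beta)  # resource saving, Eq. (11)
    phi3 = lam[2] * e0 * f_alloc                      # energy saving, Eq. (12)
    return phi1 + phi2 + phi3 - p_k * f_alloc         # minus the payment, Eq. (13)

# Example in abstract units: offloading yields a positive utility here.
u = vt_utility(f_alloc=3.0, d_load=12.0, z_in=4.0, rate=2.0, f_local=1.0, p_k=0.5)
```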
Apparently, the utility function of VT $i$ is mainly related to the computation resources allocated by MEC server $m$, i.e., $f_{i,k,m}^{o}$. When MEC server $m$ allocates more computation resources to VT $i$, VT $i$ will obtain a higher utility.
Theorem 1. The utility function of VT $i$ in (14) has a maximum value.
Proof. The first-order derivative of $U_{i,k}$ with respect to $f_{i,k,m}^{o}$ is
$$\frac{\partial U_{i,k}}{\partial f_{i,k,m}^{o}} = \lambda_1 D_{i,k} \cdot \frac{1}{\left(f_{i,k,m}^{o}\right)^{2}} + \lambda_2 \frac{\alpha}{\alpha f_{i,k,m}^{o} + \beta} + \lambda_3 e_0 - p_k \qquad (15)$$
Due to the limited computation capacities of the MEC servers, the resources allocated to VT $i$ by MEC server $m$ must be greater than zero but no larger than $F_m^{max}$, i.e., $f_{i,k,m}^{o} \in (0, F_m^{max}]$. Then, the second-order derivative is
$$\frac{\partial^{2} U_{i,k}}{\partial \left(f_{i,k,m}^{o}\right)^{2}} = -2\lambda_1 D_{i,k} \cdot \frac{1}{\left(f_{i,k,m}^{o}\right)^{3}} - \lambda_2 \frac{\alpha^{2}}{\left(\alpha f_{i,k,m}^{o} + \beta\right)^{2}} \qquad (16)$$
Because all the parameters in (16) are positive, the second-order derivative of the utility function is lower than zero, namely, $\partial^{2} U_{i,k} / \partial (f_{i,k,m}^{o})^{2} < 0$. Therefore, the utility function of VT $i$ is a concave function, which has a maximum value. Its corresponding optimal solution $f_{i,k,m}^{o*}$ can be obtained by solving $\partial U_{i,k} / \partial f_{i,k,m}^{o} = 0$.
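Since $U_{i,k}$ is concave, its maximizer on $(0, F_m^{max}]$ can be found numerically from the first-order condition. The sketch below (illustrative only; it reuses the coefficient values later listed in Table I and hypothetical task values) does this by bisection on the derivative in (15):

```python
# Sketch: solve dU_{i,k}/df = 0 from Eq. (15) by bisection; U is concave, so the
# derivative is strictly decreasing in f. Values are illustrative placeholders.
def du_df(f, d_load, p_k, lam=(0.4, 0.3, 0.3), alpha=5.0, beta=10.0, e0=0.1):
    """Eq. (15): lam1*D/f^2 + lam2*alpha/(alpha*f + beta) + lam3*e0 - p_k."""
    return lam[0] * d_load / f**2 + lam[1] * alpha / (alpha * f + beta) + lam[2] * e0 - p_k

def optimal_demand(d_load, p_k, f_max, tol=1e-6):
    """Optimal resource demand f^{o*}_{i,k,m}, capped at the server capacity F_m^max."""
    if du_df(f_max, d_load, p_k) >= 0:     # derivative still positive at the cap
        return f_max
    lo, hi = tol, f_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if du_df(mid, d_load, p_k) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

f_star = optimal_demand(d_load=12.0, p_k=0.5, f_max=50.0)   # about 3.4 resource units
```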
B. Pricing Scheme
In general, the MEC servers are operated and maintained by network operators, which aim to gain more utility by providing computation offloading services to VTs. In this paper, we adopt an optimal pricing scheme for the MEC servers: they charge the VTs according to their required computation resources. As a matter of fact, the MEC servers also have a cost to consider. When providing $z$ units of computation resources, the cost of MEC server $m$ can be given by
$$c_m(z) = \mu_m z, \ m \in \mathcal{M} \qquad (17)$$
where $\mu_m > 0$. We consider that each VT has a certain type of computation task, and different VTs can have the same type of computation task. Thus, based on the computation offloading services, MEC server $m$ can gain
$$U_m = \sum_{k=1}^{K} \sum_{i=1}^{N_m} p_k f_{i,k,m}^{o} - \mu_m \cdot \min\left(\sum_{i=1}^{N_m} f_{i,k,m}^{o}, \ F_m^{max}\right) \qquad (18)$$
Here, $\min(\sum_{i=1}^{N_m} f_{i,k,m}^{o}, F_m^{max})$ indicates that the amount of computation resources allocated to the VTs cannot exceed the computation capacity of MEC server $m$.
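As an illustrative sketch of Eqs. (17)-(18) for a single task type (not code from the paper; all values are placeholders), the server-side utility can be computed as:

```python
# Sketch of the MEC server utility for one task type k: revenue p_k * sum(f^o)
# minus the linear cost mu_m * min(sum(f^o), F_m^max) from Eqs. (17)-(18).
def server_utility(p_k, demands, f_max, mu_m=0.2):
    total_demand = sum(demands)                 # aggregate requested resources
    cost = mu_m * min(total_demand, f_max)      # cost c_m(z) = mu_m * z, Eq. (17)
    return p_k * total_demand - cost            # Eq. (18) restricted to one type k

u_m = server_utility(p_k=0.5, demands=[3.4, 2.8, 4.1], f_max=50.0)
```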
From (18), it can be seen that the utility of MEC server $m$ increases linearly with the unit price $p_k$. However, it is noteworthy that the resource demand of VT $i$, i.e., $f_{i,k,m}^{o}$, decreases as the unit price grows. When the price is set very high, few VTs are willing to offload their computation tasks. To solve this problem, we propose a distributed computation offloading algorithm in an iterative way, by which we can obtain the optimal price strategies of the MEC servers and the optimal demand response solutions of the VTs. The details are presented in Algorithm 1, where $\nu$ is the iteration step of the unit price.

Algorithm 1 Distributed computation offloading algorithm
1: Initialization: the number of VTs $N$; the computation tasks $\{T_{i,k} \triangleq \{Z_{i,k}^{in}, D_{i,k}, t_{i,k}^{max}\}, i \in \mathcal{N}, k \in \mathcal{K}\}$; the computation capacities of the MEC servers $\{F_m^{max}, m \in \mathcal{M}\}$.
2: For $t=0$, MEC server $m$ arbitrarily chooses a price $p_k(t)$ and announces it to the VTs inside its coverage;
3: Repeat for $t=1, 2, 3, ...$;
4: for VT $i=1, 2, ..., N$ do
5:   Compute its optimal resource demand $f_{i,k,m}^{o}(t)$ using $\partial U_{i,k}/\partial f_{i,k,m}^{o} = 0$, and report it to MEC server $m$;
6: end for
7: for MEC server $m=1, 2, ..., M$ do
8:   if $\sum_{i=1}^{N_m} f_{i,k,m}^{o}(t) \geq F_m^{max}$ then
9:     Increase the price, $p_k(t+1) = p_k(t) + \nu$;
10:  else
11:    Decrease the price, $p_k(t+1) = p_k(t) - \nu$;
12:  end if
13: end for
14: if $p_k(t+1) == p_k(t)$ then
15:   Send a no-change signal to the VTs;
16:   Break;
17: else
18:   Report the new price $p_k(t+1)$ to the VTs;
19:   Back to 3;
20: end if
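A rough sketch of the iterative interaction in Algorithm 1 is given below (illustrative only; it reuses the optimal_demand() helper from the previous sketch, and its termination rule, stopping once the ±ν price adjustment starts to oscillate, is a simplification of the no-change condition in the listing):

```python
# Sketch of the price-update loop of Algorithm 1 for one MEC server and one
# task type; task loads, capacity and the initial price are placeholders.
def run_pricing(task_loads, f_max, p0=0.5, nu=0.1, max_iter=200):
    p_k, last_dir, best = p0, 0, None
    for _ in range(max_iter):
        # Steps 4-6: each VT reports its optimal demand at the current price.
        demands = [optimal_demand(d, p_k, f_max) for d in task_loads]
        if sum(demands) >= f_max:
            direction = +1                        # Step 9: over-demand, raise the price
        else:
            direction, best = -1, (p_k, demands)  # Step 11: capacity suffices, lower the price
        if last_dir and direction != last_dir:
            break                                 # price now oscillates within nu: stop
        p_k, last_dir = max(p_k + direction * nu, nu), direction
    return best if best else (p_k, demands)

price, final_demands = run_pricing(task_loads=[12.0, 15.0, 10.0], f_max=20.0)
```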
V. SIMULATION RESULTS
In this section, the performance of our proposal is evaluated. The simulation scenario is firstly introduced and then the results are discussed.

A. Simulation Setup
To evaluate the efficiency of the proposed distributed computation offloading algorithm, we consider a scenario where each VT wants to offload its computation task to the nearby MEC server. The detailed parameters and values used are shown in Table I.

TABLE I
SIMULATION PARAMETERS
Parameter    Value
λ1           0.4
λ2           0.3
λ3           0.3
α            5
β            10
e0           0.1
µm           0.2
pk(0)        0.5
ν            0.1
Di,k         [10, 15]

In this paper, we compare the utility of our proposal with two conventional pricing schemes.
(i) Fixed scheme: the unit price of computation offloading is a constant value. (ii) Random scheme: the unit price is charged randomly, so the utility of the MEC servers also changes randomly.

B. Simulation Results
As shown in Fig. 2, with the proposed scheme, the utility of the MEC servers is higher than that of the other schemes. When the VTs have few computation tasks, the utility of the three schemes differs little. But with a higher number of VTs, our proposal obtains more utility. This is because a high price in the fixed scheme or the random scheme may cause a sharp reduction in demands, whereas in our proposed scheme the price is adaptively determined by the resource demands. For this reason, our proposal always performs the best among the three schemes. Besides, due to the randomness, the utility of the random scheme may exceed that of the fixed scheme. As depicted in Fig. 2, when there are 20 computation tasks, the utility of the random scheme is higher than that of the fixed scheme.

Fig. 2. Total utility of MEC servers with different pricing schemes.

VI. CONCLUSION
In this paper, we have investigated a QoE-based computation offloading scheme featured by the MEC framework in vehicular networks. Firstly, the utility of VTs for offloading their computation tasks is presented. Next, the interaction between the optimal resource demands of the VTs and the unit price of the MEC servers is analyzed, by which the QoE of the VTs can be guaranteed. Then, in order to obtain the optimal price strategies of the MEC servers and the demand response solutions of the VTs, a distributed computation offloading algorithm is proposed. In addition, the simulation results verify the efficiency of our proposal. For future work, we will consider the input data transmission through V2V communications and cooperative offloading schemes among MEC servers to further improve the utility of MEC servers and the QoE of VTs.

ACKNOWLEDGEMENT
This work is supported in part by NSFC (no. 91746114, 61571286).

REFERENCES
[1] K. Zheng, Q. Zheng, P. Chatzimisios, W. Xiang and Y. Zhou, “Heterogeneous Vehicular Networking: A Survey on Architecture, Challenges, and Solutions,” IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2377-2396, 2015.
[2] Z. Su, Y. Hui and Q. Yang, “The Next Generation Vehicular Networks: A Content-Centric Framework,” IEEE Wireless Communications, vol. 24, no. 1, pp. 60-66, Feb. 2017.
[3] R. Yu, Y. Zhang, S. Gjessing, W. Xia and K. Yang, “Toward cloud-based vehicular networks with efficient resource management,” IEEE Network, vol. 27, no. 5, pp. 48-55, Sep.-Oct. 2013.
[4] Z. Su, Y. Wang, Q. Xu, M. Fei, Y. Tian, and N. Zhang, “A secure charging scheme for electric vehicles with smart communities in energy blockchain,” IEEE Internet of Things Journal, DOI: 10.1109/JIOT.2018.2869297.
[5] K. Xu, K. C. Wang, R. Amin, J. Martin and R. Izard, “A Fast Cloud-Based Network Selection Scheme Using Coalition Formation Games in Vehicular Networks,” IEEE Transactions on Vehicular Technology, vol. 64, no. 11, pp. 5327-5339, Nov. 2015.
[6] Z. Su, Y. Hui, and T. H. Luan, “Distributed Task Allocation to Enable Collaborative Autonomous Driving with Network Softwarization,” IEEE Journal on Selected Areas in Communications, in press.
[7] J. Liu, J. Wan, B. Zeng, Q. Wang, H. Song and M. Qiu, “A Scalable and Quick-Response Software Defined Vehicular Network Assisted by Mobile Edge Computing,” IEEE Communications Magazine, vol. 55, no. 7, pp. 94-100, 2017.
[8] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin and S. Chen, “Vehicular Fog Computing: A Viewpoint of Vehicles as the Infrastructures,” IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 3860-3873, June 2016.
[9] M. Sookhak et al., “Fog Vehicular Computing: Augmentation of Fog Computing Using Vehicular Cloud Computing,” IEEE Vehicular Technology Magazine, vol. 12, no. 3, pp. 55-64, Sept. 2017.
[10] K. Zhang, Y. Mao, S. Leng, Y. He and Y. Zhang, “Mobile-Edge Computing for Vehicular Networks: A Promising Network Paradigm with Predictive Off-Loading,” IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36-44, June 2017.
[11] X. Zhu, Y. Li, D. Jin and J. Lu, “Contact-Aware Optimal Resource Allocation for Mobile Data Offloading in Opportunistic Vehicular Networks,” IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7384-7399, Aug. 2017.
[12] N. Cheng, N. Lu, N. Zhang, X. Zhang, X. S. Shen and J. W. Mark, “Opportunistic WiFi Offloading in Vehicular Environment: A Game-Theory Approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 1944-1955, July 2016.
[13] B. Baron, P. Spathis, H. Rivano, M. D. de Amorim, Y. Viniotis and M. H. Ammar, “Centrally Controlled Mass Data Offloading Using Vehicular Traffic,” IEEE Transactions on Network and Service Management, vol. 14, no. 2, pp. 401-415, June 2017.
[14] A. Kiani and N. Ansari, “Edge Computing Aware NOMA for 5G Networks,” IEEE Internet of Things Journal, vol. 5, no. 2, pp. 1299-1306, April 2018.
[15] Z. Su, Q. Xu, F. Hou, Q. Yang and Q. Qi, “Edge Caching for Layered Video Contents in Mobile Social Networks,” IEEE Transactions on Multimedia, vol. 19, no. 10, pp. 2210-2221, Oct. 2017.
[16] K. Zhang, Y. Mao, S. Leng, S. Maharjan and Y. Zhang, “Optimal delay constrained offloading for vehicular edge computing networks,” 2017 IEEE International Conference on Communications (ICC), Paris, pp. 1-6, 2017.