Accepted Manuscript

Ada-Things: An adaptive virtual machine monitoring and migration strategy for internet of things applications

Zhong Wang, Daniel Sun, Guangtao Xue, Shiyou Qian, Guoqiang Li, Minglu Li
PII: S0743-7315(18)30440-4
DOI: https://doi.org/10.1016/j.jpdc.2018.06.009
Reference: YJPDC 3901
To appear in: J. Parallel Distrib. Comput.
Received date: 25 May 2017
Revised date: 25 February 2018
Accepted date: 6 June 2018

Please cite this article as: Z. Wang, D. Sun, G. Xue, S. Qian, G. Li, M. Li, Ada-Things: An adaptive virtual machine monitoring and migration strategy for internet of things applications, J. Parallel Distrib. Comput. (2018), https://doi.org/10.1016/j.jpdc.2018.06.009
Ada-Things: An Adaptive Virtual Machine Monitoring and Migration Strategy for Internet of Things Applications

Zhong Wang a, Daniel Sun b, Guangtao Xue a,*, Shiyou Qian a, Guoqiang Li c, Minglu Li a

a Dept. of Computer Science and Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, 200240 Shanghai, P. R. China
b Data61, CSIRO, Australia
c School of Software, Shanghai Jiao Tong University, 800 Dongchuan Road, 200240 Shanghai, P. R. China
Abstract
Internet of Things (IoT) applications running on mobile devices are subject to low storage capacity and short battery lifetime. Edge clouds (ECs) provide a way to offload computation tasks and reduce network latency for these applications. The main challenge in such ecosystems is how to efficiently monitor and allocate virtual machine (VM) resources to realize load balancing among edge clouds. In this paper, we propose Ada-Things, an adaptive VM monitoring and live migration strategy for IoT applications in the edge cloud architecture. The basic idea of Ada-Things is that the migration method for a VM should be determined by its workload characteristics. Specifically, based on the variation of the current memory dirty page rate of IoT applications, Ada-Things adaptively selects the most appropriate migration method to copy memory pages, thus addressing the two limitations (application generality and performance imbalance) of existing VM migration methods in edge clouds. Evaluation results show that, compared with traditional methods, Ada-Things significantly reduces the total migration time by 21%, the VM downtime by 38% and the amount of pages transferred by 29% on average.
* Corresponding author. Tel.: +86 18616339913; fax: +xx xxx xx.
E-mail address: [email protected]
Keywords: Internet of Things; Cloud computing; Live migration; Virtual machine; Memory copy strategy.
1. Introduction
With the rapid development of Internet of Things (IoT) technology, more and more IoT and mobile applications, such as social networking and streaming video, need to be processed in real time at the edge of the network [1]. Edge cloud (EC) [2], a cloud computing platform deployed near the network edge, provides a feasible way to reduce network latency and computation cost for these applications. Similar to widely used conventional cloud computing platforms such as Amazon Web Services (AWS) and Microsoft Azure, an EC manages many resources such as memory and network bandwidth. To allocate these resources efficiently, virtual machine (VM) live migration has become an essential mechanism for load balancing and system maintenance in modern data centers and edge cloud architectures [2-4]. VM migration works by starting a new VM on the destination host (the destination VM) and synchronizing all internal state, such as memory, CPU, I/O and other hardware device status, from the VM on the source host (the source VM) [5,6]. Figure 1 shows a typical VM monitoring and live migration architecture between two edge clouds. It must be ensured that the IoT applications or services running on the VM are never (or only very briefly) interrupted during the migration process [7]. Hence, it is important to resume the migrating VM on the destination host as quickly as possible. Among all VM states, the memory state is particularly crucial and difficult to handle in live migration [8]. Copying memory pages accounts for the largest part of the total transferred data and consumes most of the migration time, and these two metrics are the main benchmarks for evaluating the performance of VM migration [9,10]. Therefore, a fast and stable memory copy strategy for VM live migration between edge clouds is of great significance.
There has been a great deal of research [25-29,42,43,45,46] on optimizing VM migration over the past decades. However, existing methods have two major limitations: (1) application generality: they are only suitable for specific migration scenarios; (2) performance imbalance: they usually optimize one migration metric at the expense of the others. For example, pre-copy [11] can reduce VM downtime compared with live migration methods such as post-copy [12,13], but it is not effective for some memory-intensive workloads in IoT applications. This is due to the existence of a small set of dirty pages that are frequently updated: in the iterative copy stage of pre-copy, these dirty pages may cause a surge in total migration time, which can significantly affect the service deployed on the VM. Meanwhile, post-copy is good at decreasing the total migration time and the amount of memory pages transferred, but it results in longer VM downtime because the non-pageable memory must be transferred while the VM is stopped. This severely degrades the user experience, making it unsuitable for some compute-intensive applications. We briefly introduce these two live migration methods in Section 2. In short, both methods have their advantages and disadvantages. Naturally, is it feasible to apply a general memory copy strategy to different IoT application scenarios that improves all three key metrics: migration time, VM downtime and memory pages transferred? The memory dirty page rate is one of the most important characteristics during live migration, but it is rarely considered as a factor in VM live migration, and few migration optimization studies have focused on it. In this paper, we explore taking the memory dirty page rate into account as the main factor when determining the memory copy method. A memory page (page for short) is a fixed-size
(e.g., 4 KB) contiguous block of memory and the smallest memory management unit for data exchange [44] during VM live migration. Dirty pages are memory pages that have been modified after being brought into the VM's page cache [10]. Based on the three basic stages of memory copy [11], the push (iterative) copy, stop-and-copy and pull copy stages, we present Ada-Things (Adaptive Things), an adaptive memory copy strategy. Our basic goal is to achieve generality across most VM migration scenarios without losing migration performance. Different kinds of IoT application scenarios or workloads generate diverse dirty page rates during VM live migration, and Ada-Things flexibly selects the optimal memory copy stage according to the current dirty page rate, which directly addresses the problem of application generality. Furthermore, owing to the proposed initial one-round push copy and the dirty page rate comparison mechanism during the memory iteration process, Ada-Things improves all the key migration metrics compared with pre-copy and post-copy. These strategies address the second limitation, performance imbalance. We evaluate the effectiveness of Ada-Things using a series of migration tasks with different dirty page rates, and then apply it to diverse VM memory usage scenarios. Ada-Things is compared with state-of-the-art memory copy methods on an edge cloud computing platform. The experimental results show that, for memory-intensive or time-intensive scenarios, Ada-Things achieves less VM downtime than post-copy and transfers fewer memory pages than pre-copy. The total migration time is also shortened on average. The contributions of this paper are summarized as follows: 1) We identify the need for VM migration for load balancing among edge clouds in IoT ecosystems.
2) We analyze the two major limitations, application generality and performance imbalance, of many existing migration methods such as pre-copy and post-copy. 3) We present Ada-Things, a general memory copy strategy for VM monitoring and live migration, to address these two limitations. Ada-Things realizes fast and stable VM migration in different kinds of IoT application scenarios based on the dirty page rate. 4) We implement Ada-Things on an OpenStack [37] based edge cloud computing management platform. It is simple to implement on top of the three general stages of memory transfer, without extra computation overhead or additional hardware cost. The rest of this paper is organized as follows. Section 2 gives a brief description of the background of edge clouds and VM live migration, including a brief introduction to edge clouds and IoT, the main VM migration metrics, the memory transfer stages and the typical VM migration methods. Section 3 explains in detail how our proposed Ada-Things method adaptively monitors the dirty page rate and selects an optimal migration method. The experimental setup and migration tests are described in Section 4. Section 5 introduces several related research works and their optimizations of VM live migration. Finally, we discuss and conclude our study in Section 6.
2. Background of Edge Cloud and VM Migration
In this section, we give an overview of edge clouds and VM live migration mechanisms. We start by describing the Internet of Things with edge clouds and the three main performance metrics of VM live migration, then summarize the three general stages of memory transfer, which mostly affect migration performance. In addition, we briefly introduce the two major current memory copy methods: pre-copy and post-copy.
2.1 IoT with Edge Cloud
Edge cloud computing places cloud computing capability at the edge of the network [14]. It moves the storage and computation of IoT applications closer to the network edge, similar to cloudlets [15] or fog computing [16]. Figure 2 shows a typical edge cloud architecture. Internet of Things technology aims to connect physical end users and edge devices (mobile phones and IoT devices) in homes, schools and government to the virtual cyber world, thereby building intelligent systems such as the smart home, smart campus, smart city and even the digital earth [17]. However, these edge devices (such as smartphones, laptops and tablets) have limited resources owing to low storage capacity and short battery lifetime, so they need to offload some storage and computation tasks to a public cloud computing platform. Instead of accessing the data center through a long core network link, edge cloud computing reduces latency by bringing storage and computation tasks to the edge network near the edge devices. The edge cloud provides resources in the form of VMs on which the edge devices run their application tasks. Unlike a public cloud, which has abundant resources, each edge cloud has a limited amount of resources, so it is necessary to monitor and allocate resources properly to achieve load balancing and maintain quality of service in each edge cloud. VM live migration is an efficient way to realize load balancing without service interruption: we can migrate specific VMs from a resource-constrained EC to a nearby EC with available resources. It is thus important to find an efficient and reliable VM migration strategy that meets the requirements of IoT application tasks between different ECs. This is one of the motivations of this paper for presenting Ada-Things.
2.2 VM Migration Metrics
There are three key metrics in VM live migration by which the performance of most live migration methods can be measured [49]. (1) Total Migration Time (TMT): the time from the start of the migration process on the source host until the VM successfully resumes on the destination host. Total migration typically takes several seconds to a few minutes, depending on the workloads running on the VM. (2) VM Down Time (DT): the time between VM suspension and resumption. During this interval, the VM cannot provide services to users because its execution is stopped. This is an important metric of VM migration performance. (3) Total Pages Transferred (TPT): the total amount of memory pages transferred from the source VM to the destination VM during the live migration process, measured in MB over the network. As with migration time, the amount of transferred memory pages also depends on the workloads running on the VM. The goal of VM live migration optimization is to minimize the total migration time and downtime of the VM while reducing the total amount of memory pages transferred during the migration process.
2.3 Memory Transfer Stages
Generally, there are three stages of memory transfer: the push, stop-and-copy and pull stages [11]. (1) Push copy (iterative copy) stage: the source VM continues running while certain memory pages are pushed to the destination VM. To ensure state consistency, pages dirtied (modified) during this process must be retransferred. In short, this is an iterative memory copy process.
(2) Stop-and-copy stage: the source VM is stopped, its pages are copied to the destination VM, and then the new VM is started on the destination host. All services running on the VM are stopped during this process. (3) Pull copy (on-demand copy) stage: a new VM is first restarted on the destination host; if certain memory pages have not yet been copied to the destination VM, they are pulled from the source VM. This is an on-demand page copy process and will typically incur page faults across the network. In practice, not all three stages are used in any single migration method; most memory copy methods contain one or two of them. For instance, pre-copy includes two stages, push copy and stop-and-copy, while post-copy involves the stop-and-copy and pull copy stages. We discuss these two methods in the following subsection. As another example, stop-and-copy alone is a static memory VM migration process, as opposed to live migration: the services or IoT applications running on the VM only resume after the destination VM is restarted. This method is also called non-live or cold VM migration, which is out of the scope of this paper.
2.4 Pre-Copy and Post-Copy
Pre-Copy Live Migration: as mentioned before, pre-copy has two stages: a push copy stage and a very short stop-and-copy stage. The timeline of this method is shown in Figure 3. In the push copy stage, memory pages are pushed (transferred) to the destination VM over multiple iterations, so this stage is also called the memory iteration copy stage. Several rounds of iterative copying are needed in this process to guarantee a very short downtime in the stop-and-copy stage. In general, the number of iterations can be defined by the data center administrator. For instance, we can set a threshold on the dirty page rate of 5% of the total memory pages; when the rate is under this threshold, the migration moves
to the next stage. We can also set a maximum number of iterations, and not stop the source VM until the number of copy iterations exceeds this limit. If there is a set of very frequently modified pages in the source VM, the iterative copy process may last a very long time, which can severely degrade service performance. Hence pre-copy is not suitable for load-intensive tasks or memory-intensive workloads. In the stop-and-copy stage, the source VM is suspended, and all the pages dirtied during the last iteration, together with the other device states of the source VM, are transferred to the destination VM. After these two stages, the new VM is resumed on the destination host and provides services to users.
Post-Copy Live Migration: post-copy has two stages: stop-and-copy and pull copy. First, in the stop-and-copy stage, the source VM is suspended, the minimal execution state of the running VM (such as CPU and hardware device state) is transferred to the destination host immediately, and the VM is started on the destination host. As in pre-copy migration, to guarantee service continuity, the time spent in this stage should be as short as possible. This stage may be longer than in pre-copy migration because all the non-pageable memory is transferred during this period. After this stage, the destination VM fetches the needed memory pages from the source VM on demand or in the background; this is the pull copy stage. A needed page is any memory page that the destination VM faults on and that has not yet been copied. This on-demand memory copy always incurs page faults in the destination VM, which ensures that each page in the source VM is transferred at most once, but it may significantly degrade VM service performance. A tradeoff must therefore be made between the pages transferred and VM performance in this stage. The post-copy migration timeline is also shown in Figure 3.
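To make the pre-copy termination rules above concrete, the following is a minimal Python sketch of the push-stage loop. The `dirty_pages_after_round` hook and all parameter values are illustrative assumptions, not part of any real hypervisor API.

```python
def pre_copy_push_phase(dirty_pages_after_round, total_pages,
                        threshold=0.05, max_rounds=10):
    """Sketch of pre-copy's iterative push stage.

    dirty_pages_after_round(r): hypothetical monitor hook returning how
    many pages were dirtied while round r was being copied.
    Iteration stops when the dirty set drops below `threshold` of total
    memory (e.g. 5%) or `max_rounds` is reached, after which the short
    stop-and-copy stage would transfer the remainder.
    """
    remaining = total_pages              # round 0 pushes the whole RAM
    rounds = 0
    for rounds in range(1, max_rounds + 1):
        remaining = dirty_pages_after_round(rounds)
        if remaining / total_pages < threshold:
            break                        # dirty set is small enough
    return rounds, remaining
```

With a workload whose dirty set halves every round, the loop stops after a handful of iterations; a workload with a hot, frequently rewritten page set never falls below the threshold and runs the full `max_rounds`, which is exactly the pathology Ada-Things targets.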
In conclusion, pre-copy has a long total migration time and transfers a large number of memory pages because of the iterative copy process. Post-copy may have less migration time and fewer transferred pages than pre-copy, but sacrifices VM downtime. That is, both pre-copy and post-copy suffer from performance imbalance during VM live migration, and neither can be applied generally to different application scenarios. These two limitations, performance imbalance and application generality, motivated us to present Ada-Things in this paper. Apart from pre-copy and post-copy, many other memory copy methods [19,25-29,45,46] have been presented in the past few years, and they more or less improve migration performance compared with pre-copy and post-copy. However, most of them are based on, or are optimizations of, these two methods. We therefore consider only these two typical and widely used live migration methods in this paper, which can be easily implemented and evaluated against our proposed Ada-Things method.
3. Adaptive Monitoring and Migration: Ada-Things
We present an adaptive memory copy strategy for VM live migration called Ada-Things (Adaptive Things). Our approach adaptively selects the migration method based on the current dirty page rate. The basic idea of this method is shown in Figure 4. It realizes VM migration in three simple steps based on the aforementioned three typical memory transfer stages, without extra computation overhead or other hardware cost. In this section, we first describe an AR prediction model for the dirty page rate, and then introduce the three steps in detail.
3.1 Dirty Page Rate Model
The dirty page rate is the most important parameter in our method, so we build a prediction model for it. As mentioned before, a dirty page is a modified fixed-size memory
page, and the dirty page rate is a time-varying value. Thus we use an autoregressive (AR) model to predict the dirty page rate. The AR model [47] describes certain time-varying random processes, and is therefore also called a time series model. The output of an AR model is a linear combination of its previous values and a stochastic variable. This model is a stochastic difference equation and is a special case of the more complicated ARMA (Autoregressive Moving Average) model [48]. We use it to predict the dirty page rate $R_{d,t}$ at the $t$-th iteration copy, defined as
$$R_{d,t} = c + \varphi_1 R_{d,t-1} + \varphi_2 R_{d,t-2} + \cdots + \varphi_k R_{d,t-k} + \varepsilon_t = c + \sum_{i=1}^{k} \varphi_i R_{d,t-i} + \varepsilon_t, \qquad (1)$$
where $c$ is a constant, $\varepsilon_t$ is a random variable (white noise) with zero mean, $\varphi_1, \varphi_2, \cdots, \varphi_k$ are the model parameters over the latest $k$ iteration copies ($1 \le k \le t-1$), and $R_{d,t-1}, R_{d,t-2}, \cdots, R_{d,t-k}$ are the corresponding dirty page rates. This is a $k$-th order AR($k$) model. Consider the lag operator $L$,
$$L^i R_{d,t} = R_{d,t-i}, \qquad (2)$$
and the AR lag operator polynomial
$$\varphi(L) = 1 - \varphi_1 L^1 - \varphi_2 L^2 - \cdots - \varphi_k L^k = 1 - \sum_{i=1}^{k} \varphi_i L^i; \qquad (3)$$
then the AR model can be written as
$$\varphi(L) R_{d,t} = c + \varepsilon_t. \qquad (4)$$
So the dirty page rate $R_{d,t}$ can be converted into
$$R_{d,t} = \frac{c + \varepsilon_t}{\varphi(L)} = \mu + \varphi^{-1}(L)\varepsilon_t = \mu + \psi(L)\varepsilon_t, \qquad (5)$$
where
$$\mu = \frac{c}{1 - \varphi_1 - \varphi_2 - \cdots - \varphi_k}, \qquad (6)$$
$$\psi(L) = 1 + \psi_1 L^1 + \psi_2 L^2 + \cdots + \psi_k L^k = 1 + \sum_{i=1}^{k} \psi_i L^i, \qquad (7)$$
and $\psi(L)$ is a $k$-degree lag operator polynomial with model parameters $\psi_1, \psi_2, \cdots, \psi_k$.
3.2 Ada-Things Strategy with Comparison Mechanism
Our proposed Ada-Things strategy can be divided into the following three steps, and its flowchart is shown in Figure 4. Ada-Things first performs a single push copy, which transfers a large number of memory pages; it therefore has less non-pageable memory to transfer in the stop-and-copy stage than post-copy, and so needs less downtime than post-copy to restart the VM on the destination host. Moreover, thanks to this initial push copy, Ada-Things also incurs fewer page faults than post-copy in the pull copy stage and thus transfers fewer memory pages, which in turn decreases the total migration time. Compared with pre-copy, Ada-Things introduces a comparison mechanism during the iterative copy stage. This mechanism identifies applications with a high dirty page rate running in the VM, and transfers their memory pages only in the final iteration or once their dirty page rate falls below a threshold (this threshold $d'$ is defined in Step 3). It thereby significantly reduces the number of retransmissions of high dirty page rate memory and decreases the total amount of memory pages transferred. The total migration time and VM downtime are also reduced compared with pre-copy because of the finite number of iterations. In a word, Ada-Things achieves application generality and addresses the performance imbalance that cannot be resolved by pre-copy and post-copy. The concrete implementation can be summarized in the following three steps.
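As an aside, the online use of the AR model of Section 3.1 can be sketched in Python as a one-step forecast of equation (1); the coefficient values in the usage below are purely illustrative and assumed to have been estimated already (e.g. by least squares over earlier iterations).

```python
def ar_predict(history, phi, c=0.0):
    """One-step AR(k) forecast of the dirty page rate (equation (1),
    with the unobserved noise term taken at its zero mean):
        R_hat[t] = c + sum_i phi[i] * R[t - 1 - i]
    history: past dirty page rates, most recent first.
    phi: estimated coefficients phi_1 .. phi_k.
    """
    k = len(phi)
    if len(history) < k:
        raise ValueError("need at least k past observations")
    return c + sum(p * r for p, r in zip(phi, history[:k]))
```

For example, `ar_predict([0.4, 0.5], [0.6, 0.3], c=0.05)` forecasts the next dirty page rate from the two most recent observations under an assumed AR(2) fit.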
(1) Step 1: Ada-Things begins with a push copy stage, performing memory iteration copy exactly once from the source VM. This step transfers a large amount of memory pages to the destination VM. Meanwhile, dirty pages are generated in the source VM, their amount varying with the workloads running on it. We obtain the current VM memory dirty page rate $R_d$ from equation (5).
(2) Step 2: Ada-Things then compares $R_d$ with the current dirty page threshold $d$. This threshold is obtained dynamically from the IoT applications or services currently running on the VM. Assume there are $m$ IoT applications running on the source VM, each with a dirty page rate $R_{d,1}, R_{d,2}, \cdots, R_{d,m}$ traced by a custom probe in the VM. We define the threshold $d$ as
$$d = \frac{R_{d,1} + R_{d,2} + \cdots + R_{d,m}}{m}. \qquad (8)$$
It is the average of all the IoT applications' dirty page rates, which approximately reflects the current memory workload state of the source VM.
(3) Step 3: This is the most important step of the Ada-Things method. In this step, Ada-Things adaptively selects the following copy stage based on the comparison result. This selection is very short, and its influence on migration time can be neglected. There are two situations:
1) Situation 1: $R_d > d$. If the current dirty page rate $R_d$ exceeds the threshold $d$, the source VM is likely under relatively high memory usage with frequent page modification, so Ada-Things enters the stop-and-copy and pull copy stages: it suspends the source VM, restarts the destination VM, and then resumes all the IoT applications running in the VM in the pull copy stage by on-demand copying. This process is similar to the post-copy method, although
the migration performance can differ clearly between them. The reason is that the preceding single push copy stage may have transferred a large amount of memory pages to the destination VM. Hence less downtime is needed to start the destination VM in the stop-and-copy stage, and there may be fewer page faults than in post-copy during the pull copy stage. Ada-Things thus achieves less migration time and transfers fewer memory pages than post-copy. After these stages, a complete VM migration is finished.
2) Situation 2: $R_d < d$. If the current dirty page rate $R_d$ is below the threshold $d$, Ada-Things performs the iterative copy stage. In this stage, we also set a threshold $d'_t$ for each iteration $t$:
$$d'_t = \frac{R_{d'_t,1} + R_{d'_t,2} + \cdots + R_{d'_t,j} + \cdots + R_{d'_t,m}}{m} = \frac{\sum_{j=1}^{m} R_{d'_t,j}}{m} \quad (1 \le j \le m). \qquad (9)$$
We then apply a comparison mechanism in each iteration to identify services with a high dirty page rate. For instance, if $R_{d'_t,j} > d'_t$, the $j$-th application running in the source VM is a high dirty page rate service, and its memory pages have a large probability of being modified in the following iteration. If this high dirty page rate memory were transferred in the current iteration, it would very likely be retransmitted in the next several iterations, sharply increasing the total amount of transferred pages and the total migration time. This is the biggest weakness of the traditional pre-copy method. To address this problem, the Ada-Things comparison mechanism does not transfer this application's memory pages in the current iteration, waiting until $R_{d'_t,j} < d'_t$ in a following iteration. This mechanism gives high priority to transferring memory with a low dirty page rate and reduces the retransmission probability of memory with a high dirty page rate. That is, among the $m$ IoT applications, the memory of applications with low dirty page rates is always transferred preferentially in each iteration. If an application's memory keeps a high dirty page rate throughout, it is transferred only in the final iteration.
After $n$ iterations, the VM dirty page rate $R_{d,n}$ can be calculated as
$$R_{d,n} = c + \varphi_1 R_{d,n-1} + \varphi_2 R_{d,n-2} + \cdots + \varphi_k R_{d,n-k} + \varepsilon_n = c + \sum_{i=1}^{k} \varphi_i R_{d,n-i} + \varepsilon_n = \mu + \psi(L)\varepsilon_n, \qquad (10)$$
and at the same time the dirty page threshold $d'_n$ is
$$d'_n = \frac{R_{d'_n,1} + R_{d'_n,2} + \cdots + R_{d'_n,j} + \cdots + R_{d'_n,m}}{m} = \frac{\sum_{j=1}^{m} R_{d'_n,j}}{m} \quad (1 \le j \le m). \qquad (11)$$
If $R_{d,n} < d'_n$, the current VM dirty page rate is low enough, and Ada-Things performs the stop-and-copy stage to copy all the remaining memory pages and other VM states, then resumes the destination VM and finishes the migration. The maximum number of iterations $n$ should be fixed in advance. Unlike pre-copy, this process is a real-time monitoring and estimation process; thanks to the comparison mechanism, it effectively mitigates the redundant iterative copying of pre-copy. Ada-Things thus achieves the goals of reducing migration time and decreasing the total amount of pages transferred. Overall, the Ada-Things method provides a more flexible and efficient memory copy strategy that fits different kinds of application scenarios, such as memory-intensive or time-intensive workloads, and thereby resolves the two major limitations of existing works mentioned before.
4. Evaluation
In this section, we present our evaluation in detail. We first describe the experimental setup and workloads, then evaluate our migration method under different dirty page rates and diverse memory usage scenarios.
4.1 Setup and Workloads
In our setup we implement VM migration based on OpenStack, a widely used open source cloud computing management platform.
We use three SUGON-I620-G10 blade servers (Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz, 64GB RAM, 300GB disk) running Ubuntu 14.04 LTS, connected by a 1 Gb/s Ethernet network, to build the edge cloud system. One server is set as the controller node and the other two as edge cloud nodes. Both edge cloud nodes run the newly released OpenStack Newton with KVM/libvirt [37]. The controller node is configured with the full set of OpenStack Nova services, such as nova-network (network controller), nova-scheduler (scheduler), nova-api (API server) and nova-objectstore. The two edge cloud nodes run only the OpenStack nova-compute (compute worker) service [38], and a separate gigabit Ethernet network between these two nodes is used exclusively for the live migration process to guarantee adequate network bandwidth for memory transfer. The system storage is based on NFS (Network File System), a shared storage system, so we do not need to transfer the VMs' disk images during live migration. This is essential for reducing total VM migration time, because transferring disk images can take a long time. The overall edge cloud system framework is shown in Figure 1. We choose five typical VM flavors (Tiny, Small, Middle, Large and Extra Large) with memory sizes (RAM) from 0.5GB to 16GB in OpenStack at each edge cloud node to evaluate migration performance. Each VM also has its own number of virtual CPUs (VCPUs) and disk image size. We use typical IoT applications, streaming video (a video player application) and a web server (web browser application), as the working set (workloads). The detailed parameters are shown in Table 1. We configure our experiments in the following four typical migration scenarios:
(1) Ada-Things live migration: our proposed adaptive memory copy migration method; the dirty page rate threshold is fixed at 50% and the iteration loop runs no more than 10 times. (2) Pre-copy live migration: we use the widely adopted pre-copy method presented by Clark et al. [11] in our evaluation. (3) Post-copy live migration: the post-copy method presented by Hines et al. [12] is also a well-known memory copy method; we use it without any modification. (4) Stop-and-copy migration: as described in the previous section, this is a so-called cold or non-live migration method. We use it as a baseline for total migration time and total transferred pages against the other three migration methods. We repeated each performance experiment at least five times and report the average of these values, so that the evaluation results are general and representative.
4.2 Migration Tests under Different Dirty Page Rates
We perform migration tests under several simple task scenarios with different dirty page rates: 25%, 45%, 65% and 85%. These dirty page rates are approximate monitored values, obtained by using certain kinds of writable working sets (WWS) and running a streaming video application driven by a script in the VM at each edge cloud node. We then evaluate the four migration methods under these dirty page rates; the three performance metrics are reported as follows.
Total Migration Time: as shown in Figure 5, total migration time increases as the memory dirty page rate grows. The higher the dirty page rate, the more dirty pages the VM may
19
generate during migration. Thus it needs more time to transfer these extra memory pages. We can see that Ada-Things method always use less migration time than the other two live migration methods in each dirty page rate scenario. The more increasing in memory size, the more reducing on total migration time. It is due to the pre-judgment of the current dirty page rate after the once push copy stage. Thus Ada-Things can adaptively select the best fitted memory copy method at the next stage based on the judgment result. The migration time of stop-and-copy method present no difference in each dirty page rate scenario because it is a cold migration process, and costs the same time to copy the certain size of memory pages (only copy the VM RAM) in each scenario. It is the minimal time to transfer all the memory pages without page iteration and page faults. Ada-Things method gradually approaching this minimal migration time as the dirty page rate decreased. They spend almost the same amount of time in a low dirty page rate scenario. It is a huge performance improvement for VM live migration. The experimental results show that Ada-Things method effectively reduces 26% of the total migration time compare with pre-copy method and 17% of migration time compare with post-copy method in average. VM Downtime: We only implement the VM downtime metric on three live migration methods except the stop-and-copy method. Because the VM downtime of stop-and-copy method is actually the same as total migration time and can't quantitatively compared in live migration. The evaluation results are shown in figure 6. It is obvious that Ada-Things achieves less downtime than pre-copy or post-copy in each dirty page rate scenario. AdaThings still maintain a relatively low downtime even in high dirty page. While the downtime of pre-copy or post-copy take several seconds may severely impact the services performance which running on VM. This is cannot be acceptable by the VM users. From the
20
evaluation, Ada-Things has decrease 42% of downtime compare with pre-copy method and decrease 52% of downtime compare with post-copy method in average. Total Pages Transferred: Figure 7 shows the total amount of transferred memory pages during VM migration. We know that stop-and-copy method transfers all the VM RAM in each dirty page rate scenario. Pre-copy method transfers the largest amount of memory pages because of several times of iteration copy. Since post-copy method fetch the memory pages from source VM on-demand when needed, so it transfers less pages than pre-copy. Among all three live migration methods, Ada-Things method transfers the least memory pages in each dirty page rate scenario due to the dirty page rate prediction model and the comparison mechanism. It can select an optimal migration strategy depending on the current scenario of dirty page rate, thus transfers relatively less memory pages in each corresponding scenario. It gradually approaching the amount of pages transferred in stop-and-copy method as the dirty page rate decreased. From extensive experimental verification on our system platform, Ada-Things method can significantly reduce more than 35% amount of memory pages compare with pre-copy method and nearly 23% amount of memory pages compare with post-copy method in average. 4.3 Implementation on Diverse Memory Usage Scenarios We further implement our presented Ada-Things method on four IoT application scenarios with different memory usage rate: 20%, 40%, 60% and 80%. They are based on diverse kind of dynamic web server applications running in VMs and a memory stress test workload. These different memory usage rates will incur variation of dirty page rate on running VM during the migration duration. Hence we can use them to verify the performance of migration methods. To achieve these memory usage rate, we use a memory stress tool memtester [39] and running web server application HTTPerf [40] in VM to
21
generate a stable memory workloads during live migration. The parameters of VM flavors and other system configuration workloads are similar with section 4.2. We evaluate four migration methods by the three migration performance metrics as well. Total Migration Time: The total migration time can be varying under different VM memory usage scenarios. In figure 8 we can see that total migration time of all three live migration methods reduced as the VM memory usage decreased. This is obvious that the less memory usage rate of VM, the less probability of dirty page rate it has and hence the less time it needed to transfer the memory pages. The migration time of stop-and-copy method is still the minimal time without page iteration and page faults in each memory usage scenario. We can see that Ada-Things method gradually approaching this minimal time as the VM memory usage decreased. This is a dramatically performance improvement in VM live migration. According to our experiments, Ada-Things method has reduced the total migration time by 22.4% compare with pre-copy method and 15.8% compare with post-copy method in average. VM Downtime: Just like aforementioned reason in different dirty page rate test, we implement the VM downtime metric on three live migration methods. As it shown in figure 9, our presented Ada-Things method has less VM downtime than the other two migration methods. The downtime in each memory usage scenario were increased with the rise of VM memory size. While Ada-Things method have no more than 0.5 seconds downtime under all memory size in each memory usage scenario. This downtime interval granularity can be acceptable for most users and it is also particularly important for the ceaseless running of services on the VM. It has decrease 38% of VM downtime compare with pre-copy method and decrease 54% of VM downtime compare with post-copy method in average.
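As a concrete illustration, the adaptive selection behavior evaluated above can be sketched as the following decision loop. This is a minimal sketch under stated assumptions, not our implementation: the monitor callback and the page accounting are hypothetical placeholders, while the 50% dirty page rate threshold and the 10-iteration cap come from the Ada-Things configuration described above.

```python
# Minimal sketch of an adaptive memory-copy decision loop in the spirit of
# Ada-Things. After each push-copy round, the observed dirty page rate is
# compared against a fixed threshold (50%) to pick the next stage: keep
# iterating pre-copy style, or switch to a post-copy style finish.
# All names below are illustrative placeholders, not a real API.

DIRTY_RATE_THRESHOLD = 0.50   # fixed threshold from the setup above
MAX_ITERATIONS = 10           # iteration cap from the setup above

def adaptive_migrate(total_pages, dirty_rate_of):
    """Return (strategy, pages_transferred) for a simulated migration.

    total_pages   -- number of memory pages in the VM RAM
    dirty_rate_of -- callback: iteration index -> observed dirty page rate
    """
    transferred = total_pages          # first full push-copy of all RAM
    remaining = total_pages
    for i in range(MAX_ITERATIONS):
        rate = dirty_rate_of(i)        # monitored after this copy round
        remaining = int(remaining * rate)  # pages dirtied during the round
        if rate >= DIRTY_RATE_THRESHOLD:
            # Too many pages are being re-dirtied: iterating is wasteful,
            # so stop the VM briefly and fetch the rest post-copy style.
            return "post-copy-finish", transferred + remaining
        if remaining == 0:
            return "pre-copy-converged", transferred
        transferred += remaining       # another pre-copy iteration
    # Iteration cap reached: fall back to a stop-and-copy of the residue.
    return "stop-and-copy-finish", transferred + remaining

# A low, stable dirty page rate converges via pre-copy iterations;
# a high rate switches to a post-copy finish after the first round.
print(adaptive_migrate(1000, lambda i: 0.25))
print(adaptive_migrate(1000, lambda i: 0.85))
```

The sketch also makes the page-count behavior in figures 7 and 10 plausible: a low dirty page rate keeps the transferred total close to the stop-and-copy baseline, while a high rate avoids the iterative blow-up of pure pre-copy.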
Total Pages Transferred: The total amount of memory pages transferred during migration is shown in figure 10. The pre-copy method transfers the largest amount of memory pages, and post-copy transfers fewer pages than pre-copy. The stop-and-copy method transfers a fixed amount of memory pages in each memory usage scenario. Among the three live migration methods, Ada-Things transfers the fewest memory pages in every memory usage scenario, owing to the dirty page rate comparison mechanism, and it gradually approaches the amount of pages transferred by stop-and-copy as VM memory usage decreases. Experimental results show that Ada-Things significantly reduces the amount of transferred memory pages by more than 32% compared with pre-copy and by nearly 21% compared with post-copy on average. In addition to the above metrics, the Ada-Things method also suffers less performance degradation from memory page faults and network bandwidth variation than the other migration methods; a quantitative analysis of these effects is left to future work.
5. Related Work
Edge cloud and edge cloud computing aim to extend cloud computing to the edge of the network [2]. It was first proposed as fog computing by Bonomi et al. [18] to explain the role of the edge cloud in the IoT ecosystem and the era of big data [41]. As a main virtualization technique in cloud computing, VM live migration enables load balancing, fault tolerance and online maintenance between clusters in the edge cloud [3]. There have been many research publications on live migration in the past decades. Clark et al. [11] first introduced the basic concept of VM live migration and proposed the pre-copy migration method. Pre-copy can greatly reduce VM downtime, but it incurs a high migration time and a large amount of transferred memory pages due to its several rounds of iterative memory copying. Hines et al. [12,13] presented post-copy live migration of VMs, giving a detailed description of post-copy from design to implementation. Their evaluation shows that post-copy can significantly reduce the total migration time and the number of memory pages transferred for some time-intensive workloads, while it may not outperform pre-copy in VM downtime. In practice, these two methods are each suited only to particular workloads and usually cannot simultaneously optimize all three performance metrics of VM live migration. To reduce the high resource costs and application performance penalties of live migration, Hou et al. [22] proposed an application-assisted VM live migration scheme called JAVMM, which migrates VMs running different types of Java applications. However, this method needs extra application support to realize live migration, so it applies only to some special occasions and lacks the generality of Ada-Things. There are also other types of application-based migration, such as JavaScript web applications [21], memory-intensive applications [19,28], Madeus [24], large enterprise applications [20], distribution-based workloads [25], and so on. Nathan et al. [31] combined existing pre-copy optimizations such as page skipping [35,36], deduplication [32-34], delta compression [26] and data compression [29], and proposed a recommendation on selecting the right optimizations for VM live migration to reduce migration time and network traffic. However, this approach requires complicated optimization methods in advance, incurring a large computational overhead in the system. The other main disadvantage of the existing research mentioned above is that it always optimizes one performance metric while sacrificing the others. Jin et al. [29] presented a zero-aware, characteristics-based memory compression method for VM live migration.
It can adaptively transfer compressed data and decompress them on the destination node. This memory compression method can reduce the total amount of data transferred but leads to a large increase in system costs. Liu et al. [23] proposed a full-system checkpointing/recovery and trace/replay migration technology to achieve fast and reliable live migration. This mechanism incurs extra hardware costs to realize the system trace and replay technology. Abe et al. [30] presented an improved post-copy mechanism called enlightened post-copy, which uses the aggregate performance of the affected VMs as the migration metric instead of downtime. However, this method requires appropriate changes in the guest Linux kernel, which greatly increases the difficulty of system implementation and degrades system performance. In contrast, the Ada-Things strategy improves migration performance in all three main metrics in an edge cloud system.
6. Discussion and Conclusions
Traditional live migration methods such as pre-copy and post-copy have their own limitations. In this paper we present Ada-Things, an adaptive VM monitoring and migration strategy. Ada-Things can adaptively select the memory transfer stage to achieve optimal live migration based on the current dirty page rate. We implement and evaluate our Ada-Things algorithm and three other typical migration methods on an open-source, OpenStack-based edge cloud computing platform. The evaluation results show that Ada-Things applies to a wider range of IoT workload scenarios, and that it significantly reduces the total migration time, VM downtime and the total amount of memory pages transferred compared with pre-copy and post-copy. Besides the dirty page rate, memory page faults [12] and network bandwidth [50,51] can also affect migration performance. Due to space limitations, we will investigate them in our future work.
Acknowledgement This work was supported by the National Key R&D Program of China (2017YFC0803700), the Joint Key Project of the National Natural Science Foundation of China (U1736207), and NSFC (61572324).
References
[1] Botta, A., De Donato, W., Persico, V., and Pescapé, A. Integration of cloud computing and internet of things: a survey. Future Generation Computer Systems, 56, 684-700, 2016.
[2] Li, Q., Niu, H., Papathanassiou, A., and Wu, G. Edge cloud and underlay networks: Empowering 5G cell-less wireless architecture. In Proceedings of the 20th European Wireless Conference, pp. 1-6, May 2014.
[3] Satyanarayanan, M., Bahl, P., Caceres, R., and Davies, N. The case for VM-based cloudlets in mobile computing. IEEE Pervasive Computing, 8(4), pp. 14-23.
[4] Khoshkbarforoushha, A., Khosravian, A., and Ranjan, R. Elasticity management of streaming data analytics flows on clouds. Journal of Computer and System Sciences, 89, 24-40, 2016.
[5] Voorsluys, W., Broberg, J., Venugopal, S., and Buyya, R. Cost of virtual machine live migration in clouds: A performance evaluation. In IEEE International Conference on Cloud Computing (CLOUD '09), Bangalore, India, September 2009.
[6] Weerasiri, D., Barukh, M. C., Benatallah, B., Sheng, Q. Z., and Ranjan, R. A taxonomy and survey of cloud resource orchestration techniques. ACM Computing Surveys (CSUR), 50(2), 26, 2017.
[7] Strunk, A. Costs of virtual machine live migration: A survey. In 2012 IEEE 8th World Congress on Services (SERVICES '12), Honolulu, HI, USA, June 2012.
[8] Leelipushpam, P. G. J., and Sharmila, J. Live VM migration techniques in cloud environment: a survey. In 2013 IEEE Conference on Information and Communication Technologies (ICT '13), Thuckalay, Tamil Nadu, India, April 2013.
[9] Wu, Y., and Zhao, M. Performance modeling of virtual machine live migration. In IEEE International Conference on Cloud Computing (CLOUD '11), Washington, DC, USA, July 2011.
[10] Kapil, D., Pilli, E. S., and Joshi, R. C. Live virtual machine migration techniques: Survey and research challenges. In 2013 IEEE 3rd International Advance Computing Conference (IACC '13), Ghaziabad, India, February 2013.
[11] Clark, C., Fraser, K., Hand, S., Hansen, J. G., Jul, E., Limpach, C., Pratt, I., and Warfield, A. Live migration of virtual machines. In Proceedings of the 2nd Conference on Symposium on Networked Systems Design and Implementation (NSDI '05), Boston, MA, USA, May 2005.
[12] Hines, M. R., Deshpande, U., and Gopalan, K. Post-copy live migration of virtual machines. SIGOPS Operating Systems Review, 43(3), July 2009.
[13] Hines, M. R., and Gopalan, K. Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '09), Washington, DC, USA, March 2009.
[14] Hu, Y. C., Patel, M., Sabella, D., Sprecher, N., and Young, V. Mobile edge computing: A key technology towards 5G. ETSI White Paper, 11, 2015.
[15] Lewis, G., Echeverría, S., Simanta, S., Bradshaw, B., and Root, J. Tactical cloudlets: Moving cloud computing to the edge. In 2014 IEEE Military Communications Conference (MILCOM '14), pp. 1440-1446, October 2014.
[16] Luan, T. H., Gao, L., Li, Z., Xiang, Y., Wei, G., and Sun, L. Fog computing: Focusing on mobile users at the edge. 2015.
[17] Wang, L., Ma, Y., Zomaya, A. Y., Ranjan, R., and Chen, D. A parallel file system with application-aware data layout policies for massive remote sensing image processing in digital earth. IEEE Transactions on Parallel and Distributed Systems, 26(6), 1497-1508, 2015.
[18] Bonomi, F., Milito, R., Zhu, J., and Addepalli, S. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (MCC '12), ACM, New York, NY, USA.
[19] Ibrahim, K. Z., Hofmeyr, S., Iancu, C., and Roman, E. Optimized pre-copy live migration for memory intensive applications. In Proceedings of the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC '11), Seattle, WA, USA, November 2011.
[20] Hacking, S., and Hudzia, B. Improving the live migration process of large enterprise applications. In Proceedings of the 3rd International Workshop on Virtualization Technologies in Distributed Computing (VTDC '09), Barcelona, Spain, June 2009.
[21] Lo, J., Wohlstadter, E., and Mesbah, A. Live migration of JavaScript web apps. In Proceedings of the 22nd International Conference on World Wide Web (WWW '13), Rio de Janeiro, Brazil, May 2013.
[22] Hou, K.-Y., Shin, K. G., and Sung, J.-L. Application-assisted live migration of virtual machines with Java applications. In Proceedings of the 10th ACM European Conference on Computer Systems (EuroSys '15), Bordeaux, France, April 2015.
[23] Liu, H., Jin, H., Liao, X., Hu, L., and Yu, C. Live migration of virtual machine based on full system trace and replay. In Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing (HPDC '09), Garching, Germany, June 2009.
[24] Mishima, T., and Fujiwara, Y. Madeus: Database live migration middleware under heavy workloads for cloud environment. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD '15), Melbourne, Australia, May 2015.
[25] Khoshkbarforoushha, A., Ranjan, R., Gaire, R., Abbasnejad, E., Wang, L., and Zomaya, A. Y. Distribution based workload modelling of continuous queries in clouds. IEEE Transactions on Emerging Topics in Computing, 5(1), 120-133, 2017.
[26] Svard, P., Hudzia, B., Tordsson, J., and Elmroth, E. Evaluation of delta compression techniques for efficient live migration of large virtual machines. In Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '11), Newport Beach, CA, USA, March 2011.
[27] Song, X., Shi, J., Liu, R., Yang, J., and Chen, H. Parallelizing live migration of virtual machines. In Proceedings of the 9th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '13), Houston, TX, USA, March 2013.
[28] Shribman, A., and Hudzia, B. Pre-copy and post-copy VM live migration for memory intensive applications. In Proceedings of the 18th International Conference on Parallel Processing Workshops (Euro-Par '12), Rhodes Island, Greece, August 2012.
[29] Jin, H., Li, D., Wu, S., Shi, X., and Pan, X. Live virtual machine migration with adaptive memory compression. In Proceedings of the IEEE International Conference on Cluster Computing and Workshops (CLUSTER '09), September 2009.
[30] Abe, Y., Geambasu, R., Joshi, K., and Satyanarayanan, M. Urgent virtual machine eviction with enlightened post-copy. In Proceedings of the 12th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '16), Atlanta, GA, USA, March 2016.
[31] Nathan, S., Bellur, U., and Kulkarni, P. On selecting the right optimizations for virtual machine migration. In Proceedings of the 12th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '16), Atlanta, GA, USA, March 2016.
[32] Li, Y.-K., Xu, M., Ng, C.-H., and Lee, P. P. C. Efficient hybrid inline and out-of-line deduplication for backup storage. ACM Transactions on Storage, 2014.
[33] Koller, R., and Rangaswami, R. I/O deduplication: Utilizing content similarity to improve I/O performance. ACM Transactions on Storage, 2010.
[34] Mao, B., Jiang, H., Wu, S., Fu, Y., and Tian, L. Read-performance optimization for deduplication-based storage systems in the cloud. ACM Transactions on Storage, 2014.
[35] Nathan, S., Bellur, U., and Kulkarni, P. Towards a comprehensive performance model of virtual machine live migration. In 2015 ACM Symposium on Cloud Computing (SoCC '15), Kohala Coast, HI, USA, August 2015.
[36] Nathan, S., Kulkarni, P., and Bellur, U. Resource availability based performance benchmarking of virtual machine migrations. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE '13), Prague, Czech Republic, April 2013.
[37] OpenStack. http://docs.openstack.org/
[38] OpenStack Nova Compute. http://docs.openstack.org/adminguide/compute.html
[39] Memtester. http://pyropus.ca/software/memtester/
[40] Mosberger, D., and Jin, T. httperf: a tool for measuring web server performance. Measurement and Modeling of Computer Systems, 26(3), pages 31-37, 1998.
[41] Wang, L., Geng, H., Liu, P., Lu, K., Kolodziej, J., Ranjan, R., and Zomaya, A. Y. Particle swarm optimization based dictionary learning for remote sensing big data. Knowledge-Based Systems, 79, 43-50, 2015.
[42] Akoush, S., Sohan, R., Rice, A., Moore, A. W., and Hopper, A. Predicting the performance of virtual machine migration. In Proceedings of the 18th Annual IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS '10), Miami, FL, USA, August 2010.
[43] Park, S. Y., Jung, D., Kang, J. U., Kim, J. S., and Lee, J. CFLRU: a replacement algorithm for flash memory. In Proceedings of the 2006 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES '06), Seoul, Korea, October 2006.
[44] Memory page. https://en.wikipedia.org/wiki/Page
[45] Zhang, J., Ren, F., and Lin, C. Delay guaranteed live migration of virtual machines. In 2014 IEEE Conference on Computer Communications (INFOCOM '14), Toronto, Canada, April 2014.
[46] Gerofi, B., Fujita, H., and Ishikawa, Y. An efficient process live migration mechanism for load balanced distributed virtual environments. In 2010 IEEE International Conference on Cluster Computing (CLUSTER '10), Heraklion, Greece, September 2010.
[47] Akaike, H. Fitting autoregressive models for prediction. Annals of the Institute of Statistical Mathematics, 21(1), pages 243-247, 1969.
[48] Johansen, S. Likelihood-based inference in cointegrated vector autoregressive models. Oxford University Press on Demand, 1995.
[49] Medina, V., and García, J. M. A survey of migration mechanisms of virtual machines. ACM Computing Surveys (CSUR), 46(3), 30, 2014.
[50] Keller, E., Ghorbani, S., Caesar, M., and Rexford, J. Live migration of an entire network (and its hosts). In Proceedings of the 11th ACM Workshop on Hot Topics in Networks (HotNets '12), Redmond, WA, USA, October 2012.
[51] Travostino, F., Daspit, P., Gommans, L., Jog, C., De Laat, C., Mambretti, J., and Wang, P. Y. Seamless live migration of virtual machines over the MAN/WAN. Future Generation Computer Systems, 22(8), pages 901-907, 2006.
Figure Captions
Fig. 1 VM monitor and live migration architecture on the edge cloud. A VM can be migrated from a resource-constrained edge cloud (EC) to a nearby EC with available resources.
Fig. 2 A typical architecture of an edge cloud with IoT end users and edge devices.
Fig. 3 Pre-copy and post-copy live migration methods.
Fig. 4 The flowchart of Ada-Things strategy.
(a) Total migration time in 25% memory dirty page rate.
(b) Total migration time in 45% memory dirty page rate.
(c) Total migration time in 65% memory dirty page rate.
(d) Total migration time in 85% memory dirty page rate.
Fig. 5 Total migration time of four migration scenarios in different memory dirty page rates.
(a) VM downtime in 25% memory dirty page rate.
(b) VM downtime in 45% memory dirty page rate.
(c) VM downtime in 65% memory dirty page rate.
(d) VM downtime in 85% memory dirty page rate.
Fig. 6 VM downtime of four migration scenarios in different memory dirty page rates.
(a) Total pages transferred in 25% memory dirty page rate.
(b) Total pages transferred in 45% memory dirty page rate.
(c) Total pages transferred in 65% memory dirty page rate.
(d) Total pages transferred in 85% memory dirty page rate.
Fig. 7 Total pages transferred of four migration scenarios in different memory dirty page rates.
(a) Total migration time in 20% memory usage.
(b) Total migration time in 40% memory usage.
(c) Total migration time in 60% memory usage.
(d) Total migration time in 80% memory usage.
Fig. 8 Total migration time of different migration scenarios in diverse memory usages.
(a) VM downtime in 20% memory usage.
(b) VM downtime in 40% memory usage.
(c) VM downtime in 60% memory usage.
(d) VM downtime in 80% memory usage.
Fig. 9 VM downtime of different migration scenarios in diverse memory usages.
(a) Total pages transferred in 20% memory usage.
(b) Total pages transferred in 40% memory usage.
(c) Total pages transferred in 60% memory usage.
(d) Total pages transferred in 80% memory usage.
Fig. 10 Total pages transferred of different migration scenarios in diverse memory usages.
Table 1 Typical VM flavors and their configuration parameters in OpenStack, including RAM (GB), number of VCPUs and disk size (GB).

VM               RAM(GB)  VCPUs  Disk(GB)
Tiny(T)          0.5      1      1
Small(S)         2        1      20
Middle(M)        4        2      40
Large(L)         8        4      80
Extra Large(XL)  16       8      160
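The flavor configuration in Table 1 also lets the stop-and-copy baseline be sanity-checked: since stop-and-copy transfers exactly the VM RAM, the number of pages it moves follows directly from the flavor. A small sketch (the 4KB page size is the common x86 default, an assumption not stated in the table):

```python
# Table 1 flavors as data: RAM in GB, VCPU count, disk in GB.
FLAVORS = {
    "Tiny":        {"ram_gb": 0.5, "vcpus": 1, "disk_gb": 1},
    "Small":       {"ram_gb": 2,   "vcpus": 1, "disk_gb": 20},
    "Middle":      {"ram_gb": 4,   "vcpus": 2, "disk_gb": 40},
    "Large":       {"ram_gb": 8,   "vcpus": 4, "disk_gb": 80},
    "Extra Large": {"ram_gb": 16,  "vcpus": 8, "disk_gb": 160},
}

PAGE_SIZE = 4096  # bytes; assumed x86 default page size

def stop_and_copy_pages(flavor):
    """Pages a stop-and-copy migration transfers: the whole VM RAM."""
    ram_bytes = int(FLAVORS[flavor]["ram_gb"] * 1024**3)
    return ram_bytes // PAGE_SIZE

for name in FLAVORS:
    print(f"{name}: {stop_and_copy_pages(name)} pages")
```

This baseline is the lower bound that the live methods approach in figures 7 and 10 as the dirty page rate or memory usage decreases.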
Highlights
1. We identify the need for VM migration for load balancing among edge clouds in IoT ecosystems.
2. We analyze two major limitations of many existing migration methods: application generality and performance imbalance.
3. We present Ada-Things, a general memory copy strategy for VM monitoring and live migration, to address these two limitations.
4. We implement Ada-Things on an OpenStack-based edge cloud computing management platform.
Author Biography
Zhong Wang received the master's degree from the College of Computer Science and Electronic Engineering in Hunan University. He is currently working toward the PhD degree at the Department of Computer Science and Engineering in Shanghai Jiao Tong University. His research interests include virtual machine migration and resources allocation in cloud computing.
Daniel Sun is a research scientist in Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia. He is also a conjoint lecturer in School of Computer Science and Engineering, the University of New South Wales, Australia, and a visiting researcher at Shanghai Jiao Tong University. He received his Ph.D. in Information Science from Japan Advanced Institute of Science and Technology (JAIST) in 2008. From 2008 to 2012, he was an assistant research manager in NEC central laboratories in Japan. From 2013 to 2016, he was a researcher in National ICT Australia (NICTA).
Guangtao Xue obtained his PhD from the Department of Computer Science and Engineering at the Shanghai Jiao Tong University in 2004. He is a Professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University, China. His research interests include vehicular ad hoc networks, wireless networks, mobile computing and distributed computing. He is a member of the IEEE Computer and the IEEE Communication Society.
Shiyou Qian received the PhD from the Department of Computer Science and Engineering at the Shanghai Jiao Tong University in 2015. He is now a research assistant in the Department of Computer Science and Engineering in Shanghai Jiao Tong University. His research interests include matching in pub/sub systems and driving recommendation with vehicular networks.
Guoqiang Li received the B.S., M.S., and Ph.D. degrees from Taiyuan University of Technology, Shanghai Jiao Tong University, and Japan Advanced Institute of Science and Technology in 2001, 2005, and 2008, respectively. He is now an associate professor in school of software, Shanghai Jiao Tong University, and a guest associate professor in Kyushu University. His research interests include formal verification, programming language theory and computational learning theory.
Minglu Li received the PhD degree from Shanghai Jiao Tong University in 1996. He is currently a professor at the Department of Computer and Engineering in Shanghai Jiao Tong University. He is the director of the IBM-SJTU Grid Research Center at Shanghai Jiao Tong University. His main research topics include grid computing, image processing, and e-commerce.