Modeling and application of moderate prefetching strategy based on video slicing for P2P VoD systems

The Journal of China Universities of Posts and Telecommunications, April 2012, 19(2): 57–66
www.sciencedirect.com/science/journal/10058885
http://jcupt.xsw.bupt.cn

DENG Guang-qing1, WEI Ting1, CHEN Chang-jia1, ZHU Wei2, WANG Bin2, WU Deng-rong2
1. School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2. Department of PPLive P2P-CDN R&D, Shanghai 201203, China

Abstract In peer-to-peer (P2P) video-on-demand (VoD) streaming systems, each peer contributes a fixed amount of hard disk storage (usually 2 GB) to store viewed videos and then uploads them to other requesting peers. However, the daily hits (namely the popularity) of different segments of a video are highly diverse, which means that taking the whole video as the basic storage unit may lead to a redundancy of unpopular segment replicas and a scarcity of popular segment replicas in the P2P storage network. To address this issue, we propose a video slicing mechanism (VSM) in which the whole video is sliced into small blocks (20 MB, for instance). Under VSM, peers can moderately remove unpopular blocks from and accordingly add popular ones into their contributed hard disk storage, which increases the usage of peers' contributed resources (storage and bandwidth). To reasonably assign bandwidth among peers with different download capacities, we propose a moderate prefetching strategy (MPS) based on VSM. Under MPS, when the amount of prefetched content reaches a predefined threshold, peers immediately stop prefetching video content and release the occupied bandwidth for others. A stochastic model is established to analyze the performance of the MPS, and it is found that near-perfect playback continuity can be achieved under MPS. The MPS is then applied to the PPLive VoD system (one of the largest P2P VoD systems in China), and measurement results demonstrate that low server load and good user satisfaction are achieved. Also, the server bandwidth contribution of the PPLive VoD system under MPS (namely 5%) is much lower than that of the UUSee VoD system (namely 30%).

Keywords

bandwidth, P2P, VoD, slicing, prefetch

1 Introduction

In P2P VoD streaming systems, every peer contributes some upload bandwidth and a fixed amount of hard disk storage (usually 2 GB) to serve other peers, and thus the entire viewer population forms a distributed P2P storage system (DPSS). When the contributed storage is full, peers use a replica replacement algorithm [1–2] to remove outdated videos from, and add newly viewed videos into, the contributed storage. In this way, each peer can download needed video content from and upload stored video content to other peers, and the server stress is alleviated. Many studies [1,3–4] take the entire video as the

Received date: 09-10-2011 Corresponding author: DENG Guang-qing, E-mail: [email protected] DOI: 10.1016/S1005-8885(11)60246-X

basic storage unit of the DPSS. However, through network measurement of the PPLive VoD system (http://www.pptv.com), it is observed that the popularity of different segments of a video is highly diverse, as shown in Fig. 1. Specifically, the number of daily hits of some segments is more than twice that of other segments of the same video. Due to this popularity difference, storing the whole viewed video in peers' contributed storage may lead to a redundancy of unpopular segment replicas and a scarcity of popular segment replicas in the DPSS. If too many unpopular segments and too few popular ones are stored on the hard disk, a peer will be visited by only a few other peers and its upload bandwidth will be wasted. To address this issue, we propose a VSM that slices the whole video into small blocks (20 MB, for instance). Under VSM, peers can moderately remove


unpopular blocks from and accordingly add popular blocks into their contributed storage, which increases the usage of peers' contributed resources (storage and bandwidth).

Fig. 1 Popularity diversity of videos' different blocks

Several works [5–8] address prefetching strategies in P2P VoD systems; these works mostly focus on predicting the segments that will be viewed soon and then prefetching them as quickly as possible to ensure playback continuity. However, to the best of our knowledge, no prior work studies how much content should be prefetched by a peer to guarantee playback continuity while not adversely affecting other peers. In P2P VoD systems, peers usually have different download capacities; for example, the download capacity of asymmetric digital subscriber line (ADSL) users can be 1 Mbit/s, 2 Mbit/s or 4 Mbit/s in mainland China. Without a prefetching limitation, high download capacity (HDC) peers will continuously occupy too much upload bandwidth (provided by the P2P network), and the upload bandwidth left for low download capacity (LDC) peers will not be enough to support normal playback when the total network upload capacity is limited. To address this issue, we present an MPS based on the VSM. Recall that under VSM the video is sliced into blocks (whose size can be variable or fixed). Under MPS, peers download the currently viewed block and then the next one at full speed to guarantee playback continuity. After the next block is fully prefetched, peers immediately stop prefetching content and release their occupied bandwidth, which will soon be assigned to peers with only a little prefetched content. Under MPS, upload bandwidth can be reasonably assigned among HDC and LDC peers, and thus all peers can achieve fluent playback. To analyze the performance of MPS, we establish a stochastic model and find that under MPS peers can obtain smooth playback. Then we apply MPS to the PPLive VoD streaming system, and actual network measurement demonstrates that low server load and good user satisfaction can be achieved under MPS.

The remainder of this paper is organized as follows. First, we present the video slicing mechanism in Sect. 2. Then we study the moderate prefetching strategy in Sect. 3. In Sect. 4, we present a stochastic model of the MPS. The application of the MPS in the PPLive VoD system is presented in Sect. 5. Finally, we summarize our results in Sect. 6.

2 Video slicing mechanism

In P2P VoD systems, peers use a replica replacement algorithm [1–4] or a prefetching algorithm [5–8] to regulate the video replicas stored on their hard disk storage. In general, peers should store those videos which will be frequently requested by other peers, so as to make full use of their contributed hard disk storage and upload bandwidth. For the sake of simplicity, many studies [1,3–4] take the entire video as the basic storage unit of the DPSS, which may lead to low usage of storage space and upload bandwidth due to the popularity difference of video blocks.

2.1 Popularity difference of video blocks

To analyze the popularity difference within a video, we randomly choose two videos from the PPLive VoD system and slice them into 27 and 31 blocks with a fixed size of 20 MB, respectively. We then analyze the number of hits of the blocks on March 15th, 2010, as shown in Fig. 1. It is observed that different blocks have different daily hits, and the daily hits of blocks at the beginning of a video are usually much larger than those at the end of the video. Specifically, the daily hits of the first block of each video are almost twice those of the last block. Besides, the daily hits of blocks in the middle of each video also differ.

2.2 Video slicing method

As shown in the previous subsection, the popularity of different blocks of a video is highly diverse, which means that the entire video is not a suitable basic storage unit for the DPSS. To make full use of peers' resources (storage and bandwidth), slicing the entire video into blocks is a natural choice. Generally, the methods to slice videos can be classified into two categories:


fixed size block slicing (FSBS) and variable size block slicing (VSBS). Under FSBS, the video is sliced into fixed size blocks. In fact, FSBS is currently used by the PPLive VoD streaming system; specifically, every video in the PPLive VoD streaming system is cut into blocks of 20 MB, and we will analyze the performance of the PPLive VoD system in the remaining part of this paper. Under VSBS, the video is sliced into variable size blocks. To provide constant image quality, variable bit rate (VBR) video is widely used in P2P VoD systems. At the beginning of the viewing of a video (or a block), peers have fetched no video content, and thus their ability to absorb fluctuations in download speed and video playback rate is very weak. To improve the user perceived QoS (UPQoS) at the beginning, the playback rate at the beginning of each block should be as low as possible. VSBS can be used to slice VBR video: thanks to the variable block size (from 15 MB to 25 MB, for instance), the slicing points can be adjusted so that the playback rate at each block head is as low as possible. In a word, slicing the video into blocks is very important for improving the usage of peer resources (storage and bandwidth) in a P2P VoD system, and FSBS and VSBS are two fundamental slicing methods. In fact, a block in a P2P VoD streaming system can be considered an independent small video. Taking the block rather than the entire video as the basic storage unit of the DPSS, peers can remove unpopular blocks from and add popular blocks into the contributed hard disk storage. In this way, peers' resources can be fully used and server bandwidth is saved.
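As an illustration, a minimal Python sketch of FSBS follows; the function name and block layout are our own, not the deployed PPLive code.

```python
# Hypothetical sketch of fixed-size block slicing (FSBS): each block is
# identified by an id and a byte range; the last block may be smaller.
def slice_fsbs(video_size_mb, block_size_mb=20):
    """Return a list of (block_id, start_mb, end_mb) tuples."""
    blocks = []
    start = 0
    block_id = 0
    while start < video_size_mb:
        end = min(start + block_size_mb, video_size_mb)
        blocks.append((block_id, start, end))
        block_id += 1
        start = end
    return blocks

# A 530 MB video yields 27 blocks: 26 full 20 MB blocks plus a 10 MB tail.
blocks = slice_fsbs(530)
print(len(blocks), blocks[0], blocks[-1])  # -> 27 (0, 0, 20) (26, 520, 530)
```

Under VSBS, the loop above would instead pick each `end` near a low-bitrate position within a tolerance window (e.g. ±5 MB around the nominal cut).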

3 Moderate prefetching strategy

Taking the block as the basic storage unit of the DPSS can improve resource usage and finally increase the total upload capacity of the P2P network. In this section, we present the moderate prefetching strategy based on VSM.

3.1 Difference between P2P live and VoD streaming systems

In both P2P live and VoD streaming systems, each peer stores more or less video replicas to provide uploading service to other peers; at the same time, every peer downloads its needed video content from other peers. So each peer in a P2P network is both a service requester and


provider. However, P2P live and VoD streaming systems differ. Due to the synchronization of peers' playback points, peers in P2P live systems need to store only a small amount of video content (tens of MB, for instance) and their upload bandwidth can still be fully used. In P2P VoD systems, however, peers can view any video at any time, and thus an on-demand video rarely has many concurrent viewers. To be able to serve requests from other peers at any time, peers in P2P VoD systems usually have to contribute a large amount of hard disk storage (2 GB in the PPLive VoD streaming system, for instance) to store viewed videos. In P2P live streaming systems, peers use a greedy strategy to download chunks [1] as quickly as possible so as to promote chunk propagation among peers. Under the greedy strategy, each peer tries its best to prefetch chunks and therefore continuously increases its playback continuity (the probability of continuous playback). Generally, the chunks newly distributed by the server are fewer than those previously distributed, so peers whose playback progress is close to the source have lower download speeds. Due to the limitation of the source playback progress, each peer has a similar average download speed (namely the average playback rate), even though their download capacities are heterogeneous. Different from P2P live systems, peers in P2P VoD systems have no limitation on prefetching content: peers can download any part of any video at any time. Besides, different peers have different download capacities; for example, the download capacity of ADSL users can be 1 Mbit/s, 2 Mbit/s or 4 Mbit/s in mainland China. If the greedy strategy were used, HDC peers would consume much more upload bandwidth (provided by their neighbor peers) and enjoy better playback performance, while LDC peers might not be assigned enough upload bandwidth and would then suffer interruptions when the total network upload capacity is limited.
Suppose that the time length and average playback rate of a video are 2 h and 500 kbit/s, respectively. For a peer with a download capacity of 4 Mbit/s, the download speed can be as high as 4 Mbit/s, which is much larger than the video playback rate. It then takes only about a quarter of an hour for the peer to completely download the video, and during this period other LDC peers may have to seek help from servers because too much bandwidth is consumed by the HDC peer. Therefore, the greedy strategy is not the best bandwidth allocation strategy for P2P VoD systems, even though it performs well in P2P live systems.
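The arithmetic behind this example can be checked directly:

```python
# Back-of-the-envelope check: a 2 h video at an average playback rate of
# 500 kbit/s, downloaded greedily at the full 4 Mbit/s capacity.
video_bits = 2 * 3600 * 500e3        # total size: 3.6 Gbit (~450 MB)
download_s = video_bits / 4e6        # seconds to fetch at 4 Mbit/s
print(download_s / 60)               # -> 15.0 (minutes)
```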


3.2 Moderate prefetching strategy

To soundly assign the upload bandwidth among peers with different download capacities, we propose in this subsection a moderate prefetching strategy based on the video slicing mechanism. The details of MPS are shown in Algorithm 1. First, all videos are sliced into blocks using FSBS or VSBS. For the sake of simplicity, suppose that FSBS is used, so that all blocks are of the same size; recall that FSBS is currently used by the PPLive VoD streaming system with a block size of 20 MB. In fact, each block can be considered an independent small video. In the viewing process, a peer downloads the currently viewed block and then the next one at full speed. After the next block is fully prefetched, the peer stops downloading and releases its occupied bandwidth for other peers with only a little prefetched content. In a word, each peer can hold at most two blocks in flight (the currently viewed one and the one next to it) at a time, no matter how large its download capacity is. Only when a block is currently being played can the one after it be prefetched; the one after next cannot. Under MPS, HDC peers release occupied bandwidth once a certain amount of video content is prefetched and thus smooth playback is ensured; LDC peers also receive enough bandwidth and obtain the same playback continuity as HDC peers.

Algorithm 1 Moderate prefetching
  Determine the currently viewed block b_i
  if block b_i has not been completely downloaded then
    download block b_i at full speed
  else if block b_{i+1} has not been completely downloaded then
    download block b_{i+1} at full speed
  else
    stop downloading blocks
  end if
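Algorithm 1 can be sketched as a pure decision function; the function and callback names below are illustrative, not PPLive's actual interface.

```python
# A minimal sketch of the moderate prefetching rule (Algorithm 1).
def next_download_target(current_block, is_complete):
    """Return the block id to download at full speed, or None to idle.

    current_block -- id of the block at the playback point
    is_complete   -- callable: block id -> True if fully downloaded
    """
    if not is_complete(current_block):
        return current_block            # finish the block being played
    if not is_complete(current_block + 1):
        return current_block + 1        # prefetch exactly one block ahead
    return None                         # both done: release the bandwidth

done = {3: True, 4: False}
print(next_download_target(3, lambda b: done.get(b, False)))  # -> 4
```

Returning `None` is where MPS differs from a greedy strategy: the peer goes idle instead of racing ahead, so its neighbors' upload slots become available to peers with little prefetched content.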

4 Stochastic model of MPS

In this section, we present a stochastic model to analyze the performance of the VSM based MPS before applying this strategy to the PPLive VoD streaming system.

4.1 Model of download and playback process

Great efforts have been made to model the traffic in IP networks [9–11]. Local-area and wide-area network traffic have the properties of long range dependence [9] and


self-similarity [11]. According to Ref. [10], World Wide Web traffic was self-similar in the mid 1990s because file sizes in the web are heavy-tailed. In 2003, the authors of Ref. [10] first found that wide area network traffic is dominated by P2P applications; the predominance of P2P traffic tends to remove long range dependence as well as self-similarity in IP networks because of the way P2P protocols operate and the short files they transfer. In P2P applications, including both file downloading and video streaming, the download speed of peers is highly diverse over time [12–13]. According to Ref. [12], although the number of concurrent neighbors is almost unchanged, the instant download speed is highly diverse. The authors of Ref. [13] study client performance variations in BitTorrent and find that the download speed fluctuates significantly because of the random arrival of downloaders and the random departure of seeds. In fact, peers in the PPLive VoD streaming system (one of the largest P2P VoD service providers in China) concurrently connect to 32 neighbor peers to download video content, and a peer's download speed is equal to the sum of the data transfer rates from its neighbor peers. Moreover, the authors of Ref. [10] point out that TCP connections carrying P2P traffic have small bit rates and their superposition process can be well approximated by a smooth Gaussian process.

Motivated by the above observations, we suppose that the downloading process of each peer is a stochastic process. Let D(t) be the amount of video content downloaded within t seconds. We suppose that the downloading process {D(t); t ≥ 0} is a Brownian motion with drift [14], i.e., D(t) ~ N(μt, σ²t). Then we have

D(t) = μt + σW(t)    (1)

where W(t) is a standard Wiener process with W(t) ~ N(0, t), and μ and σ are the mean and the standard deviation of the download speed, respectively. If a peer's download speed is constant, σ is equal to zero and D(t) = μt, which is the simplest case of the downloading process. However, due to the churn of peers and the network, peer download speed fluctuates strongly and thus σ is larger than zero. The parameter σ thus measures the fluctuation of the peer download speed; the more intense the fluctuation, the larger σ. For the sake of simplicity, suppose the video is a constant bit rate (CBR) video with playback rate ν. Let Q(t) be the amount of CBR video content that has been played out within t s. Then we have

Q(t) = νt    (2)
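The download model of Eq. (1) can be simulated with independent Gaussian increments; the following Python sketch (our own, for illustration) uses the parameter values adopted later in Sect. 4.4.

```python
import math, random

# Simulating D(t) = mu*t + sigma*W(t): one Gaussian increment per step.
# mu and sigma are the mean and std of the download speed (kbit/s).
def simulate_download(mu, sigma, t_end, dt=1.0, seed=0):
    rng = random.Random(seed)
    d, t, path = 0.0, 0.0, [0.0]
    while t < t_end:
        d += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
        path.append(d)
    return path

# With mu = 515 kbit/s and sigma = 171 kbit/s, the amount downloaded in
# 320 s is N(515*320, 171^2*320) distributed; one sample path:
path = simulate_download(515.0, 171.0, 320.0)
print(path[-1])
```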

4.2 Distribution of the amount of prefetched content of blocks

In this subsection, we establish a playback continuity model under FSBS. Suppose the block size is M and a video is sliced into N blocks. Let T be the playback time length of a block; then we have

T = M/ν    (3)

Under MPS, peers prefetch the block after the currently viewed one. Suppose the amount of prefetched content of the ith block is B_i (1 ≤ i ≤ N), and the amount of content downloaded within time T is D(T). If B_i + D(T) < M, there are playback interruptions during the playback of the ith block; in this case B_{i+1} = 0, which means that the amount of prefetched content of the (i+1)th block is zero:

B_{i+1} = 0;  B_i + D(T) < M    (4)

If M ≤ B_i + D(T) < 2M, the ith block is fully downloaded within time T and some content of the (i+1)th block is also prefetched, namely B_{i+1} = B_i + D(T) − M, which means that the (i+1)th block has not been fully prefetched:

B_{i+1} = B_i + D(T) − M;  M ≤ B_i + D(T) < 2M    (5)

If B_i + D(T) ≥ 2M, the ith block is fully downloaded within time T and the (i+1)th block is also fully prefetched. In this case, B_{i+1} = M:

B_{i+1} = M;  B_i + D(T) ≥ 2M    (6)

Given B_i, combining Eqs. (4), (5) and (6) yields

B_{i+1} =
  0;               B_i + D(T) < M
  B_i + D(T) − M;  M ≤ B_i + D(T) < 2M    (7)
  M;               B_i + D(T) ≥ 2M

Recall that D(t) ~ N(μt, σ²t), so the distribution of B_{i+1} can be derived through D(T). Rewriting Eq. (7) in terms of D(T) gives

B_{i+1} =
  0;               D(T) < M − B_i
  B_i + D(T) − M;  M − B_i ≤ D(T) < 2M − B_i    (8)
  M;               D(T) ≥ 2M − B_i

According to the characteristics of the normal distribution,

the distribution of B_{i+1} (namely F(B_{i+1} ≤ x | B_i), 0 ≤ x ≤ M) can be derived. Then we have

F(B_{i+1} ≤ x | B_i) =
  Φ((M − B_i − μT)/(σ√T));      x = 0
  Φ((x + M − B_i − μT)/(σ√T));  0 < x < M    (9)
  1;                             x = M

Finally, the probability density function of B_{i+1} (namely f(B_{i+1} = x | B_i), 0 ≤ x ≤ M) can be derived from Eq. (9):

f(B_{i+1} = x | B_i) =
  Φ((M − B_i − μT)/(σ√T));                              x = 0
  (1/(σ√(2πT))) exp{−(x + M − B_i − μT)²/(2σ²T)};       0 < x < M    (10)
  1 − Φ((2M − B_i − μT)/(σ√T));                          x = M

Note that B_{i+1} has point masses at x = 0 and x = M and a continuous density in between.
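The recursion of Eq. (7) can also be explored by Monte Carlo simulation, propagating a population of B_i samples block by block; the sketch below uses the parameter values of Sect. 4.4 (μ = 515 kbit/s, σ = 171 kbit/s, ν = 500 kbit/s, M = 20 MB) and is our own illustration.

```python
import math, random

# Monte Carlo sketch of the prefetch recursion (Eq. (7)): given samples of
# B_i, draw D(T) ~ N(mu*T, sigma^2*T) and map to B_{i+1} in [0, M].
def step_prefetch(samples_bi, mu, sigma, big_t, big_m, rng):
    out = []
    for bi in samples_bi:
        d = rng.gauss(mu * big_t, sigma * math.sqrt(big_t))
        if bi + d < big_m:
            out.append(0.0)                    # interruption: B_{i+1} = 0
        else:
            out.append(min(bi + d - big_m, big_m))
    return out

rng = random.Random(1)
M = 20 * 8000.0            # 20 MB block in kbit
T = M / 500.0              # playback time of one block: 320 s
b = [0.0] * 10000          # the first block starts with nothing prefetched
for _ in range(30):        # advance 30 blocks into the video
    b = step_prefetch(b, 515.0, 171.0, T, M, rng)
avg_mb = sum(b) / len(b) / 8000.0
print(avg_mb)              # mean prefetched amount (MB) after 30 blocks
```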
4.3 Playback continuity

During peers' viewing process, an interruption occurs if a chunk cannot be obtained before its deadline. In this subsection, we take the fluency, defined as the probability of no interruption during the playback of a block, as the metric to evaluate UPQoS. Intuitively, the larger the fluency, the better the UPQoS; if the fluency is equal to 100%, peers experience no interruptions during the viewing process. Recall that {D(t); t ≥ 0} is a Brownian motion with drift, and take the ith block as an example. Let C(t) = D(t) − Q(t); then C(t) is also a Brownian motion with drift [14] and we have

C(t) ~ N((μ − ν)t, σ²t)    (11)

The first interruption time is the time when C(t) first hits level zero from the starting state B_i (1 ≤ i ≤ N). Following Ref. [15] but with our notation, the probability density function g(t; B_i) of the first interruption time is given by


g(t; B_i) = (B_i / √(2πσ²t³)) exp{−[B_i + (μ − ν)t]²/(2σ²t)};  0 < t < ∞    (12)

Recall that the playback time length of a block is T; the fluency is the probability that the first interruption time is larger than T. Denoting the fluency by f, we have

f = ∫_T^∞ g(t; B_i) dt    (13)

or equivalently

f = 1 − ∫_0^T g(t; B_i) dt    (14)

Recall that the distribution of B_i can be obtained from Eq. (10), and the fluency can then be computed from Eq. (14).

4.4 Numerical evaluation

On the basis of Eqs. (10) and (14), some numerical evaluations are performed to illustrate the evolution of the amount of prefetched content (abbreviated as APC hereafter) and the fluency of blocks during the viewing process. The model (Eqs. (10) and (14)) can be solved numerically, e.g., using Matlab. From a peer's point of view, we fix the peer download speed and then study the relationship among the APC, the fluency and the block size in a video. Specifically, the video playback rate is 500 kbit/s and the block size ranges from 1 MB to 20 MB. The mean and standard deviation of the download speed are 515 kbit/s and 171 kbit/s, respectively; the standard deviation is thus one third of the mean. According to the characteristics of the normal distribution, 99.73% of the values of the download speed fall in the interval from zero to double the mean. The results are shown in Figs. 2 and 3.
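As a sketch of such a numerical evaluation, the fluency of Eq. (14) can be computed by a simple quadrature of the first-passage density of Eq. (12); we use Python here instead of Matlab, with the parameter values from the text (all rates in kbit/s, 1 MB taken as 8 000 kbit).

```python
import math

# Numerically evaluating Eq. (14): fluency = 1 - integral_0^T g(t; B_i) dt,
# with g the first-passage density (Eq. (12)) of a drifted Brownian motion.
def first_passage_density(t, b_i, mu, nu, sigma):
    return (b_i / math.sqrt(2 * math.pi * sigma**2 * t**3)
            * math.exp(-(b_i + (mu - nu) * t)**2 / (2 * sigma**2 * t)))

def fluency(b_i, mu, nu, sigma, big_t, steps=20000):
    # midpoint rule; the integrand vanishes as t -> 0+
    dt = big_t / steps
    total = sum(first_passage_density((k + 0.5) * dt, b_i, mu, nu, sigma)
                for k in range(steps)) * dt
    return 1.0 - total

# 20 MB block (T = 320 s), mu = 515, nu = 500, sigma = 171 kbit/s.
T = 20 * 8000.0 / 500.0
print(fluency(5 * 8000.0, 515.0, 500.0, 171.0, T))  # 5 MB prefetched
print(fluency(512.0, 515.0, 500.0, 171.0, T))       # only 512 kbit prefetched
```

Consistent with Fig. 3, a peer with 5 MB of prefetched content has fluency near 100%, while a nearly empty buffer leaves a substantial interruption probability.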

Fig. 2 Expected prefetched content amount of each block

Fig. 3 Fluency of each block

To evaluate the performance of MPS, we first compute the mathematical expectation of the APC of each block, as shown in Fig. 2. Here, the horizontal axis is the identity of blocks, i.e., the sequence number of blocks in a video, and the vertical axis is the expected APC computed from Eq. (10). When the block size is given (10 MB, for instance), the expected APC increases linearly over the first tens of blocks, then increases slowly, and finally approaches its upper bound (namely the block size). The reason is that the average peer download speed (515 kbit/s) is larger than the average video playback rate (500 kbit/s). When the block ID is given (20, for instance), the expected APC increases with the block size. For example, the expected APC is larger than 10 MB when the block size is 20 MB but less than 5 MB when the block size is 10 MB. That is because blocks of different sizes have different playback times: the playback time of a 20 MB block is 320 s while that of a 10 MB block is 160 s. The playback continuity of a peer in a P2P VoD system is highly associated with its APC. Specifically, when the download speed suddenly decreases, a peer with a small APC may suffer playback interruptions with high probability; in the same situation, a peer with a large enough APC can avoid interruptions, because the already prefetched video content can be played even if the peer's download speed drops to zero. So increasing the APC is an effective way to improve playback continuity. The question, then, is whether increasing the APC continuously improves playback continuity; in other words, how much prefetched content is needed to obtain acceptable playback continuity? With this question in mind, we compute the fluency of each block, as shown in Fig. 3. Here, the horizontal axis is the sequence number of blocks in a video and the vertical axis is the fluency, namely the probability of no playback


interruptions during the viewing process of a block. When the block size is given (5 MB, for instance), the fluency increases quickly with the sequence number of blocks at the beginning, which means that playback interruptions are concentrated in the first several blocks of a video. The reason is that the APC of the blocks at the beginning of a video is very small, so the ability to overcome the dynamics of the download speed is relatively weak. As prefetched content accumulates, this ability becomes stronger and the fluency grows. When the block size is 5 MB, the fluency is near 100% for blocks whose sequence number is larger than 10. From Fig. 2, it is observed that the APC is about 1.5 MB for the block (of size 5 MB) with sequence number 10. So acceptable playback continuity can be obtained once the APC reaches a threshold, which is not as large as one might expect. Also, when the APC is large enough, further increasing the APC has no obvious effect on playback continuity, as shown in Figs. 2 and 3. When the block ID is given (5, for instance), the fluency increases with the block size, as shown in Fig. 3. For example, the fluency is about 90% when the block size is 1 MB but near 100% when it is 20 MB, and the fluency is almost the same (near 100%) for block sizes larger than 10 MB (for example 10 MB, 15 MB and 20 MB). From Fig. 2, it is observed that the APC is about 1.5 MB for the block (of size 10 MB) with sequence number 5. In other words, if the APC is larger than about 1.5 MB, the fluency is high no matter what the block size is. So the APC is the key parameter determining playback continuity when other parameters are given. Of course, the APC is tightly coupled with the block size, since the APC must be smaller than the block size under MPS.


5 Application of MPS in the PPLive VoD streaming system

5.1 Brief introduction of the PPLive VoD streaming system

On the basis of the stochastic model presented in the previous section, we apply MPS and VSM to the PPLive VoD streaming system, one of the largest P2P VoD systems in China. The videos in the PPLive VoD system can be classified into Blu-ray video, high-definition (HD) video and general video, with average bit rates of 1 300 kbit/s, 800 kbit/s and 500 kbit/s, respectively. As of March 15th, 2010, this VoD system had more than twenty thousand videos and ten million daily hits. The download capacity of users is highly diverse, ranging from 512 kbit/s to tens of Mbit/s. In the PPLive VoD streaming system, peers contribute some storage and bandwidth resources to serve others while receiving services from others, and thus every peer in the same P2P VoD network can watch any video at any time.

5.2 VSM and MPS in the PPLive VoD streaming system

For the sake of simplicity in software design, FSBS is chosen to slice the videos in the PPLive VoD streaming system into fixed size blocks. Intuitively, the block size is the key parameter of the video slicing mechanism: if the block size is too small, the signaling overhead in the P2P VoD network will be high; on the contrary, if the block size is too large, the utilization of peers' contributed hard disk storage will be low. Based on the above stochastic model, 20 MB is chosen as the block size to gain acceptable fluency and facilitate rapid deployment. In the PPLive VoD system, each peer contributes 2 GB of hard disk storage, in which about 100 blocks can be stored. When the contributed hard disk storage is full, peers use a replica replacement algorithm similar to LRU to remove unpopular blocks and fill the vacated storage with more popular blocks, so as to make full use of peers' storage and bandwidth resources. When watching a video, peers download the currently watched block and then the next one at full speed. Once the next block is fully prefetched, peers stop downloading any content and spare the occupied bandwidth for other peers with only a little prefetched content; the details of MPS are shown in Algorithm 1. In a word, each peer can prefetch only one block ahead at a time no matter how large its download capacity is: only when a block is being played can the one after it be prefetched, and the one after next cannot.

5.3 Measurement results

The scale of the PPLive VoD streaming system is so large that it is very difficult to measure instant quantities (download speed, prefetched content amount, for instance) for each peer. We therefore focus on two fundamental metrics of a P2P VoD system: the server bandwidth contribution (SBC) and the user satisfaction (US). Specifically, the SBC is quantified by the percentage of server bandwidth contributed for video content distribution

64

The Journal of China Universities of Posts and Telecommunications

and the US is quantified by the interruption times per view (ITPV). The computation of SBC and US is as below. Let Sij

2012

alleviated.

and Pij be the amount of video content downloaded by peer i when it watches video j from the server and peers, respectively. Let I ij be the total number of interruptions experienced by peer i when it watches video j. Recall that the interruption occurs only if a chunk can not be received before its playback deadline. Let N and M be the number of peers and videos, respectively. Then N

RSBC =

M

∑∑ S N

i =1 j =1 M

∑∑ ( S i =1 j =1

ij

ij

+ Pij )

Fig. 4

Server bandwidth contribution

(15)

Intuitively, the quantity RSBC is the percentage of server bandwidth contribution in a P2P VoD streaming system and thus quantifies the SBC. For the sake of simplicity of presentation, we define a function Oij and ⎧ x; peer i watches video j for x times Oij = ⎨ ⎩0; peer i does not watch video j Then we have N

RUS =

(16)

M

∑∑ I i =1 j =1 N M

Fig. 5

ij

∑∑ Oij

User satisfaction

(17)

i =1 j =1

Intuitively, the quantity R_US is just the average number of interruptions per view, which reflects user satisfaction. The R_SBC and R_US in each hour on March 15th, 2010 are shown in Figs. 4 and 5, respectively. From 0 to 7 o'clock, the R_SBC is higher than at other times on that day; the reason is that most peers in China are offline in this period and the system scale is rather small. The efficiency of content distribution among peers therefore becomes low, and more peers seek help from servers to guarantee a satisfactory user experience. The number of views (namely video clicks) of all videos in each hour on that day is shown in Fig. 6. It is observed that the number of views from 0 to 7 o'clock is relatively small. Specifically, the number of views during 22:00–23:00 is about 8 times larger than that during 04:00–05:00. In a word, the R_SBC during 00:00–07:00 is relatively large, but the proportion of views in this period is low. So the R_SBC over the whole day is still low (5.07%, in fact), which means that about 95% of the video content downloaded by peers comes from the P2P network and thus the server stress is rather low.

Fig. 6  Number of views in each hour
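The daily R_SBC is a traffic-weighted average of the hourly values, which is why hours with high R_SBC but little traffic barely move the daily figure. A small illustrative computation (the byte counts are hypothetical, not measured values from the paper):

```python
# Sketch: the daily R_SBC weights each hour by its traffic volume, so a
# quiet night-time hour with high server reliance barely affects the
# daily figure. Numbers below are illustrative only.
hourly = [
    # (hour, total_bytes_downloaded, bytes_from_servers)
    (4,  1.0e12, 0.20e12),   # small night-time traffic, 20% from servers
    (22, 8.0e12, 0.24e12),   # peak-hour traffic, only 3% from servers
]

daily_rsbc = sum(s for _, _, s in hourly) / sum(t for _, t, _ in hourly)
print(f"daily R_SBC = {daily_rsbc:.1%}")  # dominated by the busy hour
```

Even though the night-time hour has a 20% server share, the daily ratio stays close to the peak-hour value because the peak hour carries most of the bytes.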

From Fig. 5, it is observed that the ITPV (interruption times per view) in each hour is about 2. In other words, peers experience about two interruptions per view on average, so the user satisfaction of the PPLive VoD streaming system is fairly good.

5.4 Performance comparison between the UUSee and PPLive VoD streaming systems

UUSee VoD system (http://www.uusee.com/) is another very popular P2P VoD system in China. The system parameters of UUSee and PPLive are listed in Table 1. The number of videos in UUSee is 993 [16], while PPLive hosts more than 20 000. According to Ref. [16], videos in UUSee are encoded at streaming bit rates between 264 kbit/s and 1.3 Mbit/s, so the video playback rate of UUSee is similar to that of PPLive. According to Ref. [3], for a P2P VoD system, the server load increases with the number of videos when the other system parameters remain unchanged, so it is more challenging for the PPLive VoD system to attain a low server bandwidth contribution. UUSee and PPLive take the whole video and the block (20 MB in size), respectively, as the basic unit of DPSS. Usually, a peer watches multiple videos in a day. According to Ref. [16], in the UUSee VoD streaming system the numbers of unique peers and video views every day are 1 872 754 and 8 707 329, respectively; in the PPLive VoD streaming system they are 1 633 588 and 10 865 228. So the scales of UUSee and PPLive are almost the same. What interests us is the SBC of these two famous P2P VoD streaming systems, for SBC is one of the most important indicators of the efficiency of a P2P VoD streaming system. According to Ref. [16], the average server bandwidth contribution of UUSee is about 30%; in contrast, that of the PPLive VoD streaming system under MPS is just 5%. Under VSM, peers in the PPLive VoD system choose the video block rather than the whole video as the basic storage unit of DPSS, so peers can moderately remove unpopular blocks from, and accordingly add popular blocks into, their contributed hard disk storage. The usage of peers' contributed resources (bandwidth and storage) is thereby increased, and server bandwidth is saved.
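The VSM replacement idea described above can be sketched as popularity-based cache admission: a peer evicts its least popular cached block when a strictly more popular one arrives. The counter structure and capacity below are hypothetical illustrations, not the actual PPLive implementation.

```python
# Sketch of VSM-style replacement: each peer caches fixed-size video
# blocks (e.g. 20 MB) and replaces the least popular cached block when a
# more popular block is downloaded. Popularity counters and the capacity
# value are hypothetical.

def admit(cache, popularity, new_block, capacity=100):
    """Add new_block to the cache if there is room, or if it is more
    popular than the least popular cached block. Returns the evicted
    block, or None if nothing was evicted."""
    if new_block in cache:
        return None
    if len(cache) < capacity:
        cache.add(new_block)
        return None
    # Full cache: evict the least popular block only if the newcomer
    # is strictly more popular, keeping replacement "moderate".
    victim = min(cache, key=lambda b: popularity.get(b, 0))
    if popularity.get(new_block, 0) > popularity.get(victim, 0):
        cache.remove(victim)
        cache.add(new_block)
        return victim
    return None
```

Over time this drives the number of replicas of each block toward its popularity, which is what raises the utilization of peers' contributed storage and upload bandwidth.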
Under MPS, HDC peers in the PPLive VoD system release the occupied bandwidth (provided by other peers in the P2P network) when the currently viewed block and the one next to it are completely prefetched, so more bandwidth is assigned to LDC peers. LDC peers therefore seldom seek help from servers, and server bandwidth is saved.

Table 1  System parameters of the UUSee and PPLive VoD streaming systems

Parameter                        UUSee        PPLive
Cache storage unit               Whole video  Video block
Number of videos                 993          20 793
Number of unique daily peers     1 872 754    1 633 588
Number of daily video views      8 707 329    10 865 228
Server bandwidth contribution/%  30           5
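The MPS release rule described above (stop prefetching once the currently viewed block and the next one are complete) can be sketched as follows, assuming blocks are downloaded in playback order; the function and parameter names are hypothetical.

```python
# Sketch of the MPS prefetching-stop rule: an HDC peer keeps prefetching
# only until the currently viewed block and the block after it are fully
# downloaded, then releases its neighbours' upload bandwidth for LDC
# peers. Assumes sequential, in-order block download.

BLOCK_SIZE = 20 * 1024 * 1024  # 20 MB blocks, as in VSM

def should_prefetch(downloaded_bytes, current_block):
    """Return True while the peer should keep prefetching.

    With in-order download, the current block and the next one are both
    complete once (current_block + 2) * BLOCK_SIZE bytes have arrived;
    beyond that threshold the peer stops and frees the bandwidth."""
    threshold = (current_block + 2) * BLOCK_SIZE
    return downloaded_bytes < threshold
```

As playback advances to the next block, the threshold moves forward and the peer resumes prefetching, so each HDC peer only ever holds bandwidth for a bounded amount of prefetched content.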


6 Conclusions

In a P2P VoD system, each peer contributes some storage and bandwidth resources to serve other peers, and thus the server stress is alleviated. How to make full use of peers' contributed resources while reasonably assigning bandwidth among peers with different download capacities is still a challenging issue for both researchers and engineers. In this paper, we propose the VSM to increase the usage of peers' contributed hard disk storage and upload bandwidth by adjusting the replica distribution of blocks according to their popularity. Further, we present a moderate prefetching strategy based on VSM to reasonably assign bandwidth among peers. Under MPS, peers with a large amount of prefetched content spare their occupied bandwidth for peers with less prefetched content, which allows all peers to achieve perfect playback continuity. The stochastic model and actual network measurements demonstrate that low server load and excellent user satisfaction can be achieved under MPS.

Acknowledgements

This work was supported by the National Basic Research Program of China (2007CB307101), and the National Natural Science Foundation of China (60672069, 60772043).

References
1. Huang Y, Fu T Z J, Chiu D M, et al. Challenges, design and analysis of a large-scale P2P VoD system. Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM'08), Aug 17−22, 2008, Seattle, WA, USA. New York, NY, USA: ACM, 2008: 375−388
2. Li H T, Xu K, Seng J, et al. Towards health of replication in large-scale P2P-VoD systems. Proceedings of the 28th Performance, Computing, and Communications Conference (IPCCC'09), Dec 14−16, 2009, Phoenix, AZ, USA. Los Alamitos, CA, USA: IEEE Computer Society, 2009: 323−330
3. Zhou Y P, Fu T Z J, Chiu D M. Statistical modeling and analysis of P2P replication to support VoD service. Proceedings of the 30th Annual Joint Conference of the IEEE Computer and Communications (INFOCOM'11), Apr 10−15, 2011, Shanghai, China. Piscataway, NJ, USA: IEEE, 2011: 945−953
4. Tewari S, Kleinrock L. Proportional replication in peer-to-peer networks. Proceedings of the 25th Annual Joint Conference of the IEEE Computer and Communications (INFOCOM'06), Apr 23−29, 2006, Barcelona, Spain. Piscataway, NJ, USA: IEEE, 2006: 12p
5. Zhang T Y, Li Z H, Cheng X Q, et al. Multi-task downloading for P2P-VoD: an empirical perspective. Proceedings of the 16th International Conference on Parallel and Distributed Systems (ICPADS'10), Dec 8−10, 2010, Shanghai, China. Los Alamitos, CA, USA: IEEE Computer Society, 2010: 484−491
6. Xu T Y, Wang W W, Ye B L, et al. Prediction-based prefetching to support VCR-like operations in Gossip-based P2P VoD system. Proceedings of the 15th International Conference on Parallel and Distributed Systems (ICPADS'09), Dec 8−11, 2009, Shenzhen, China. Los Alamitos, CA, USA: IEEE Computer Society, 2009: 8p
7. He Y F, Guan L. Prefetching optimization in P2P VoD applications. Proceedings of the 1st International Conference on Advances in Multimedia (MMEDIA'09), Jul 20−25, 2009, Colmar, France. Piscataway, NJ, USA: IEEE, 2009: 110−115
8. He Y, Liu Y H. VOVO: VCR-oriented video-on-demand in large-scale peer-to-peer networks. IEEE Transactions on Parallel and Distributed Systems, 2009, 20(4): 528−539
9. Garrett M W, Willinger W. Analysis, modeling and generation of self-similar VBR video traffic. Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM'94), Aug 31−Sep 2, 1994, London, UK. New York, NY, USA: ACM, 1994: 269−280
10. Azzouna N B, Guillemin F. Experimental analysis of the impact of peer-to-peer applications on traffic in commercial IP networks. European Transactions on Telecommunications, 2004, 15(6): 511−522
11. Crovella M, Bestavros A. Self-similarity in World Wide Web traffic: evidence and possible causes. IEEE/ACM Transactions on Networking,



1997, 5(6): 835−846
12. Wang H Y, Liu J C, Xu K. Measurement and enhancement of BitTorrent-based video file swarming. Peer-to-Peer Networking and Applications, 2010, 3(3): 237−253
13. Guo L, Chen S Q, Xiao Z, et al. Measurements, analysis, and modeling of BitTorrent-like systems. Proceedings of the 5th Internet Measurement Conference (IMC'05), Oct 19−21, 2005, Berkeley, CA, USA. New York, NY, USA: ACM, 2005: 1−14
14. Karlin S, Taylor H M. A first course in stochastic processes. New York, NY, USA: Academic Press, 1975: 355−356
15. Ingersoll J E. Theory of financial decision making. Totowa, NJ, USA: Rowman & Littlefield Publishers, 1987: 353−354
16. Liu Z M, Wu C, Li B C, et al. UUSee: large-scale operational on-demand streaming with random network coding. Proceedings of the 29th Annual Joint Conference of the IEEE Computer and Communications (INFOCOM'10), Mar 14−19, 2010, San Diego, CA, USA. Piscataway, NJ, USA: IEEE, 2010: 9p

(Editor: ZHANG Ying)
