Data-driven QoE prediction for IPTV service

Ruochen Huang ⁎,a, Xin Wei ⁎,a,b, Yun Gao a, Chaoping Lv a, Jiali Mao a, Qiuxia Bao a

a College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
b National Engineering Research Center of Communications and Networking, Nanjing 210003, China
Keywords: QoE; key influencing factors; artificial neural network; IPTV

Abstract
With the development of Internet and multimedia technology, more and more families enjoy smart multimedia services provided by Internet Protocol TV (IPTV). It is crucial for operators and content service providers to find the key indicators and to improve quality of experience (QoE) for users. In this paper, we propose data-driven QoE prediction for IPTV service. Specifically, we first define QoE to evaluate the user experience of IPTV in a data-driven approach. Then we analyze user's interests and device indicators to understand when and how they affect user experience. Based on user interest lists for both regular users and new users, we propose the uindex to quantify user's interests in Live TV. Finally, we build a personal QoE model based on an artificial neural network (ANN). Experimental results show that the uindex improves the integrity of the QoE description. Moreover, the model can predict QoE with an accuracy of 83.93% for regular users and 83.90% for new users at the record level, better than those of competing algorithms.
1. Introduction

With the rapid development of video services, more and more telecommunication operators are interested in providing high-quality services to attract subscribers. Many service providers and network operators measure user satisfaction by quality of service (QoS), which only indicates the condition of network performance. However, when users evaluate a provided video service, they also consider price, quality of content, ease of use and so on. Quality of experience (QoE) has been proposed to evaluate the provided service from the perspective of human perception. ITU-T defines QoE as the "overall acceptability of an application or service, as perceived subjectively by the end user" [1]. QoS only assures the quality of the video at the network level. Compared with QoS, QoE evaluates video quality from the user's perspective, i.e., at the user level. It can help network operators and content providers find ways to provide better services for customers. Many papers also discuss the relationship between QoS and QoE [2–4].

The approaches for evaluating QoE can be categorized into three classes: subjective tests, objective quality models and data-driven analysis [5]. Subjective tests estimate user experience directly from users by questionnaires [6–8]. However, the drawbacks of subjective tests are obvious: high cost, a limited number of assessors and inapplicability to online QoE estimation. Objective quality models fit QoE using mathematical tools. Most objective quality models are based on the Human Visual System or on reference-based classification methods. In [9], a QoE model based on a decision tree is proposed to predict the user's acceptability and pleasantness. In [10], the authors use the Weber-Fechner law to describe the relationship between QoS and QoE. There are two drawbacks of objective quality models. Firstly, an objective quality model is built for special scenarios, so its generalization ability is poor. Secondly, an objective quality model usually considers only one aspect of the factors that affect QoE, such as visual or psychological factors. Therefore, it may encounter challenges when used in 5G [11–13] and wireless sensor networks [14–18].

Data-driven analysis has emerged for the evaluation of QoE with the large-scale deployment of online video services. Data-driven approaches measure user experience with quantifiable metrics that can be readily applied to real-world conditions. Many researchers focus on user behavior in large-scale measurement studies [19–23]. In [24], the authors propose a QoE model based on linear regression with QoS metrics. In [25], the authors find the relationship between network quality metrics and QoE, and then build a regression tree to predict QoE. Moreover, artificial neural networks (ANN) have recently been studied for predicting user QoE in data-driven approaches. In [26], a back propagation neural network (BPNN) is used to build a three-level quantitative QoE evaluation model; the results suggest that the BPNN model achieves higher correlation coefficients than linear regression. In [27], the authors build a multilayer perceptron (MLP) neural network with real data from a high-speed packet access (HSPA) network. They also discuss the
⁎ Corresponding authors.
E-mail addresses: [email protected] (R. Huang), [email protected] (X. Wei), [email protected] (Y. Gao), [email protected] (C. Lv), [email protected] (J. Mao), [email protected] (Q. Bao).
https://doi.org/10.1016/j.comcom.2017.11.013
Received 1 May 2017; Received in revised form 1 November 2017; Accepted 24 November 2017
0140-3664/ © 2017 Elsevier B.V. All rights reserved.
influence of some key performance indicators on QoE, such as the average user throughput or the number of active users. In [28], the authors use nPerf to obtain network parameters and a network assessment tool to obtain QoE; they then build a two-layer feed-forward neural network to predict QoE.

However, the schemes mentioned above have several limitations. Firstly, they hardly consider user's interests in the IPTV system. Secondly, QoE models based on ANNs are usually trained on small-scale datasets, which cannot take full advantage of their non-linear fitting capacity. Thirdly, researchers mostly use basic ANN models and hardly consider accessory techniques such as regularization, cost-function optimization and weight initialization. Different from the existing works, we obtain a dataset which contains QoS metrics associated with 1 million users from China Telecom operators. Firstly, we give the definition of QoE in IPTV for multimedia service providers and analyze the influence of different parameters on QoE. Then we propose an improved personal QoE model based on an ANN with fine-tuned hyper-parameters. The personal QoE model can be applied in various scenarios such as mobile social networks [29–31], media cloud [32–34], device-to-device communications [35,36] and CDN [37].

The rest of this paper is organized as follows. In Section 2, we introduce the way the dataset is collected and define QoE in IPTV. In Section 3, we analyze the influence of different parameters on QoE. In Section 4, we propose a personal QoE model based on an ANN, fine-tune its hyper-parameters for predicting QoE, and provide and discuss experimental results. In Section 5, we give the conclusion.
Table 1
Data attributes.

Type                   Attribute        Meaning
Network related data   MLR              The loss rate of media packets
                       DF               The media stream delay
                       JITTER           The jitter of the network
Channel related data   COLLECT_TIME     The start time of one record
                       START_TIME       The start time of one channel
                       END_TIME         The end time of one channel
                       CHANNEL_TYPE     The service type of the channel
                       CHANNEL_ID       The ID of the channel
User complaint data    COMPLAINT_TIME   The time when the user makes a complaint by phone call
                       COMPLAINT        The complaint from the user
Device related data    CPU_USAGE        The CPU usage of the IPTV set-top box
2. Data collection and definition of QoE

2.1. Data collection

The process of data collection is shown in Fig. 1. The influence indicators originate from four aspects: network related data, device related data, channel related data and user complaint data. The first three types of data are collected by the IPTV set-top box. Specifically, network related data such as jitter and delay are extracted by libpcap from the data packets. Device related data are collected by system monitoring modules. Channel related data such as collect_time, start_time, end_time and channel_id are obtained from the VOD server. All these data are uploaded to the database by the IPTV set-top box. User complaint data can be easily obtained from the consumer service departments. All data collected in this paper are shown in Table 1.

Fig. 1. The process of data collection.

Fig. 2 shows a typical session in IPTV. A user first opens the TV and starts a session by watching a first channel. Then he tunes the channel or chooses a video in VOD until IPTV is closed. In this paper, when the user changes the channel or video, we define that he finishes the prior view and starts a new view. When the user closes the TV, we define that he finishes a session. For example, in Fig. 2, the user has finished two sessions: the first session contains 6 views and the second session contains 3 views. In our dataset, the service types can be split into three types: Live TV, Video on Demand (VOD) and time-shift TV (TS TV). Live TV distributes the same content to different users by multicasting, similar to traditional TV. VOD allows users to select video content from the video library provided by operators. TS TV allows users to watch TV shows after the original television broadcast.

2.2. Data preprocessing

After collecting the large-scale data, we clean erroneous data and label the data.

2.2.1. Cleaning erroneous data and useless data
In the dataset, there are three kinds of erroneous data: missing data, anomalous data and duplicated data. Missing data mean losses in attributes. Anomalous data mean attribute values beyond the normal range. Duplicated data mean identical records in the dataset, which lead to overfitting and waste resources. After finding these erroneous data, we delete them.
Fig. 2. The typical user behaviours in IPTV in a day.
We also delete useless data and attributes, such as the media address and service IP, to reduce computational resource consumption.
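As an illustration of this cleaning step, the sketch below uses pandas under the assumption that the records are loaded into a DataFrame with the attributes of Table 1; the value checks and the MEDIA_ADDRESS/SERVICE_IP column names are hypothetical.

import pandas as pd

def clean_records(df):
    # Missing data: drop records whose mandatory attributes are empty.
    df = df.dropna(subset=['MLR', 'DF', 'JITTER', 'START_TIME', 'END_TIME', 'CHANNEL_ID'])
    # Anomalous data: keep only values inside a plausible range (illustrative bounds).
    df = df[df['CPU_USAGE'].between(0, 100)]
    df = df[df['END_TIME'] >= df['START_TIME']]
    # Duplicated data: identical records waste resources and bias the model, so keep one copy.
    df = df.drop_duplicates()
    # Useless attributes such as the media address or service IP are not kept.
    return df.drop(columns=['MEDIA_ADDRESS', 'SERVICE_IP'], errors='ignore')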
2.2.2. Labeling data
If a user spends more time watching one channel, it is more likely that he will subscribe to the IPTV service. It also provides a wealth of data to advertisers about the behaviors of their target audiences. Therefore, the viewing ratio can represent QoE for users. Specifically, the viewing ratio (Vratio) is calculated as follows:

Vratio = (end_time − start_time) / (collect_time + interval − start_time),   (1)

where start_time is the start time of one record and end_time is the end time of the same record. The difference between them is the viewing length of one channel in one record. The IPTV set-top box packages all these records every 5 min into a record package and uploads it to the database. The collect_time is the start time of the first record in a record package. In our scenario, interval denotes the interval between two packages and is set to 300 s. If a user watches a channel or video for more than 300 s, the view is split into several records in several packages. The QoE of a view is the average Vratio of all records in this view, and the QoE of a session is the average Vratio of all records in this session. For example, if a user watches channel_1 from 8:31 to 8:32 and channel_2 from 8:32 to 8:40, there will be two packages. The first package, which contains two records, covers 8:30 (collect_time) to 8:35. One record is channel_1 from 8:31 to 8:32 with Vratio = (8:32 − 8:31) / (8:30 + 5 min − 8:31) = 60 s / 240 s = 0.25. The other record is channel_2 from 8:32 to 8:35 with Vratio = (8:35 − 8:32) / (8:30 + 5 min − 8:32) = 180 s / 180 s = 1. The second package, which contains one record, covers 8:35 (collect_time) to 8:40. The record is channel_2 with Vratio = (8:40 − 8:35) / (8:35 + 5 min − 8:35) = 300 s / 300 s = 1. The viewing ratio of the view in channel_1 is 0.25 and that in channel_2 is 1, i.e., the average over its two records in the two packages. Then we map the mean of Vratio to QoE, represented as the MOS value shown in Table 2.

Table 2
The mean of Vratio mapping to QoE.

Mean of Vratio   QoE (MOS value)
[0 ∼ 0.2)        1
(0.2 ∼ 0.4]      2
(0.4 ∼ 0.6]      3
(0.6 ∼ 0.8]      4
(0.8 ∼ 1.0]      5
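The following short sketch illustrates how Eq. (1) and the mapping of Table 2 can be computed for a record; the 300 s interval follows the text, while the function names and the use of seconds-since-midnight timestamps are our own conventions.

INTERVAL = 300  # seconds between two record packages

def vratio(start_time, end_time, collect_time, interval=INTERVAL):
    # Eq. (1): watched fraction of the time available to the record in its package.
    return (end_time - start_time) / (collect_time + interval - start_time)

def mos_from_mean_vratio(mean_vratio):
    # Table 2: map the mean Vratio of a view or session to a MOS value in 1..5.
    if mean_vratio < 0.2:
        return 1
    elif mean_vratio <= 0.4:
        return 2
    elif mean_vratio <= 0.6:
        return 3
    elif mean_vratio <= 0.8:
        return 4
    return 5

# Worked example from the text: channel_1 watched from 8:31 to 8:32 in the package
# collected at 8:30 gives vratio(8*3600 + 31*60, 8*3600 + 32*60, 8*3600 + 30*60) = 0.25.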
3. Understanding key factors of QoE

In this section, we mainly discuss user interest indicators from the channel related data and device indicators from the device related data.

3.1. User access patterns

In our dataset, there are 150 channels and 16,904 videos. To understand user's interests, we first present statistics on the number of unique channels or videos watched by a user over different time periods (daily, weekly, monthly) in Table 3. Considering that our dataset contains 150 channels and 16,904 videos, a user only watches 6.4% of the channels and 0.07% of the videos in a day. We can also observe that the numbers for all service types increase at different rates and that users are more likely to watch the same channels in Live TV but different videos in VOD. TS TV is a supplementary approach to Live TV, which is one of the reasons why TS TV is unpopular among users. Therefore, we conclude that the service type seriously affects user behaviour in IPTV.

Table 3
The number of unique channels and videos watched by a user.

          Daily   Weekly   Monthly
All       13.61   48.08    121.14
Live TV   9.62    26.39    49.31
VOD       11.9    37.28    96.66
TS TV     1.78    2.47     3.97

3.1.1. Usage by hour of a day
In order to find user access patterns in different service types, we count the number of requests in all service types and consider the popularity of the different types. We plot the normalized number of requests in Fig. 3. From this figure, we observe that the peak period of Live TV (7PM ∼ 9PM) is longer than that of TS TV (8PM ∼ 9PM) and VOD (7PM ∼ 8PM). The churn rate of VOD around midnight (0AM ∼ 5AM) is higher than that of Live TV and TS TV. The main reason for this phenomenon is that many users do not close the IPTV box when they sleep, leaving Live TV playing a 24 h channel, whereas VOD stops when the video is over.
Fig. 3. Hourly normal scores distribution of requests in all service types.
3.1.2. Usage by day of a week
The usage by hour of a day represents the fine granularity of user behavior. Fig. 4 provides a more coarse-grained view of each day of a week.

Fig. 4. The percentage of requests in days of a week.

From Fig. 4, we can observe that the percentage of requests varies smoothly across the week. From Fig. 5, we can see that users spend more time watching IPTV on weekends than on weekdays. The numbers of Live TV views and VOD views on weekends are higher than on weekdays by 35.8% and 55.8%, respectively. This indicates that users spend more time watching TV and are more likely to watch videos on weekends. Combining Fig. 4 with Fig. 5, we conclude that different days reflect different user access patterns in different service types, that different service types have different levels of sensitivity to the day, and that user experience is related to both service type and day.

Fig. 5. Number of views in different days.
3.2. Channel and video popularity

In this paper, we use the number of requests to represent the popularity of a channel. The video popularity can be approximated by the Pareto principle [38]. We plot the fraction of total requests for each day contributed by the Top 10% and the Top 20% of channels and videos. From Fig. 6(a), we can see that the Top 10% and the Top 20% of channels respectively contribute 60% and 73% of the total requests in Live TV. In contrast, the Top 10% and the Top 20% of videos in VOD account for 42% and 55%, respectively. Therefore, we conclude that popular channels are more attractive than popular videos. From Fig. 7, we can see that the popularity of channels has a pronounced long-tailed distribution. From Fig. 7(a) and (b) we can draw the following conclusions:

1. The popularity of Live TV and VOD are at different levels. Users are more interested in Live TV than in VOD.
2. Compared with Live TV, the popularity of VOD has a more pronounced long-tailed distribution.

We think there are two reasons for this phenomenon. Firstly, there are many more videos in VOD than channels in Live TV, and users always choose a small set of videos in VOD to watch. Secondly, it is impossible for all videos to have the chance to be shown; exposure mainly depends on popularity and on the recommender system in IPTV.

In order to find the differences between personal interests and general interests, we choose the Top 10 channels and videos from each service type and each user during one week and plot Fig. 8, which shows the CDF of the number of channels and videos shared between a user's individual Top 10 and the general Top 10 in all service types. We can observe that in Live TV, 80% of users have at least 3 channels of the Top 10 in common. In TS TV, 76% of users have no channels of the Top 10 in common. In VOD, only 10% of users have 1 or 2 videos of the Top 10 in common, and 90% of users are not interested in the Top 10 videos. Fig. 10(a) shows that the rate of user interest change within the Top 10 channels is less than 25%. On the other hand, user interests in VOD change over time. Compared with Fig. 3, we observe that user interest change is related to user access patterns: when more users watch videos in VOD, the Top 10 videos change more frequently. It is therefore necessary for operators to provide personalized multimedia services to meet user demands.
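For reference, the contribution of the Top 10% and Top 20% most requested channels or videos shown in Fig. 6 can be computed as sketched below; the request-count Series passed in is an assumed input format, not part of the paper.

def top_fraction_contribution(request_counts, frac=0.1):
    # request_counts: pandas Series indexed by channel_id or video_id with request counts.
    ranked = request_counts.sort_values(ascending=False)
    k = max(1, int(len(ranked) * frac))
    # Share of all requests contributed by the most popular `frac` of items.
    return ranked.iloc[:k].sum() / ranked.sum()

# Hypothetical usage: counts = views.groupby('channel_id').size()
# top_fraction_contribution(counts, 0.1), top_fraction_contribution(counts, 0.2)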
3.3. Broadcasting quality

In our dataset, many channels in Live TV are available in two broadcasting qualities: Standard Definition (SD) and High Definition (HD). Fig. 11 presents the distribution of the SD ratio over the days of a week. It can be seen that most users are more likely to choose SD instead of HD channels. However, this does not mean that users are more interested in content than in quality. From Fig. 9, we find that the ratio of HD is lower than 10% for 80% of the channels, while for 10% of the channels HD is still popular among 15% of users. This means that some users want to balance content and quality on certain channels.

3.4. Early quitter

It is natural for users to quit watching a channel or video in the middle [39,40]. For videos on a website, the viewing ratio is often used to measure the amount of a video that was watched. However, in the IPTV scenario it is impossible for operators to obtain the video length or channel length: all information about the video length is controlled by the video source providers, and a channel consists of dynamic programs and advertisements running for 24 h. In order to study early quitting in IPTV, we therefore use the viewing time of each view to measure user quitting behavior.

3.4.1. Service type in early quitter
In Fig. 12, we draw the CDF of viewing time for all service types. We find that early quitting is sensitive to the service type. In the first 10 s, some users quit watching the TS channel, which rarely happens in Live TV and VOD. This suggests that some users may enter TS TV by mistake. In the first 10 min, the viewing time of VOD is longer than that of Live TV and TS TV. In other words, compared with Live TV and TS TV, users are initially more likely to spend time in VOD. However, when the viewing time exceeds 800 s, users spend more time watching Live TV and TS TV. The reasons may be that the length of a video is limited and that short videos are more popular. We can also observe that views in Live TV and VOD rarely end within 10 s. According to our definition of QoE in Section 2, a view ending within 10 s will lead to low QoE, but such a short time cannot represent the user's real QoE. Therefore, we discard the records whose viewing time is less than 10 s.
Fig. 6. Contribution of the Top 10% and the Top 20% channels or videos in (a) Live TV, (b) TS TV and (c) VOD.
3.4.2. Popularity in early quitter
Fig. 13 shows the CDF of viewing time for channels and videos of different popularity in Live TV and VOD. The vertical axis is the proportion of all views; the horizontal axis is the viewing time of a view on a logarithmic scale. We define the popularity of videos or channels in early quitting as follows: (a) the Top 10% of items in terms of requests are popular, (b) the Top 11% ∼ 30% are normal, and (c) the Top 31% ∼ 100% are unpopular. We can observe that in the first 10 s there is no difference among the three popularity classes and users watch channels patiently. Then, from 10 s up to 300 s, users spend more time watching channels with high popularity. With increasing viewing time, the gap between unpopular and mid-popular channels closes. Different from Fig. 13(a), we can conclude that early quitting in VOD is less related to popularity. When users watch more than 200 s in VOD, the unpopular videos are more likely to hold the user's attention than popular and normal videos, whereas in the first 60 s all users show the same interest. These results are consistent with the conclusion that user interests in VOD are diverse.

3.5. CPU usage

In our dataset, CPU usage is the main device indicator. The range of CPU usage is from 0% to 100%: 0% means that the CPU is idle, while 100% means that the CPU is fully loaded. According to Fig. 3, we conclude that the fluctuation of CPU usage is related to user access patterns. However, Fig. 14 shows that CPU usage stays at a low level, between 16% and 22%. This indicates that the computing power of the CPU in the IPTV set-top box is more than sufficient. Operators should therefore focus on the software of the IPTV set-top box to improve user experience.
Fig. 7. The distribution of user access number in Live TV (a) and VOD (b).
Fig. 8. CDF of the number of the same channels and videos between user individual Top 10 and general Top 10.
Fig. 9. The ratio of HD in all channels.
Fig. 10. The ratio of user interest change of the Top 10 in Live TV (a) and VOD (b) during 24 h.

3.6. Discussion
In this section, we have discussed the influence of different factors on QoE, namely user access patterns, the popularity of channels and videos, broadcasting quality, early quitting and CPU usage. We have the following main observations:
• Importance of service type: According to the analysis, the service type seriously affects user access patterns, the distribution of popularity of videos or channels, and early quitting in IPTV. For example, more users quit the view in the first 10 min in Live TV and TS TV than in VOD.
• A degree of overlap between personal interests and general interests in Live TV: Nearly 70% of users can find at least 3 of the Top 10 channels in Live TV among their favorite channels. In VOD, only 10% of users can find 1 or 2 such channels.
• Stabilization of general interests in Live TV: The ratio of general interest change of the Top 10 channels within 24 h is stable. It is also related to user access patterns.

Fig. 11. The ratio of SD in days.
Fig. 12. CDF for viewing time in all service types.

Fig. 14. The average of CPU usage in 24 h.
These observations can help us build the QoE model for different service types, since the differences between service types must be considered. They are also useful for generating suggestion lists for users in IPTV. For example, when the recommender system encounters a new user, it can generate a suggestion list for that user from the Top 10 channels in Live TV.

Fig. 13. CDF of viewing time for channels or videos of different popularity in (a) Live TV and (b) VOD.

4. QoE model based on ANN

4.1. Personal QoE model based on ANN

After analyzing the influence of user interest indicators and device indicators, in this section we build a personal QoE model based on these indicators. The model can help network operators and content providers find ways to improve QoE in the corresponding fields. Because of the complicated relationship between the indicators and QoE, we propose a personal QoE model based on both subjective and objective indicators. As mentioned in Section 3, device indicators such as CPU usage do not affect user experience. Here, network indicators are used to represent the objective aspects, and the user interest indicator (uindex) in Live TV is used to represent the subjective aspects. In order to combine the personal QoE model with the data collection described in Section 2, the personal QoE model is split into three main steps, shown in Fig. 15. When a user starts a view, the user_id is taken as input to generate the user interest list. The uindex is then calculated to measure the user's interest in the channel. Subsequently, an ANN model is chosen and the associated parameters are trained with the user interest indicators and network indicators. Further details of this method are described as follows.

Fig. 15. The process of predicting QoE: data → generate user interest list → get uindex (user interest index) → QoE prediction model → get QoE in a view.
Step 1: Generate user interest list. In order to study user's interests quantitatively, we first generate a user interest list. In this process, whether the user is a regular user or a new user is decided by searching the viewing history with the user_id. If it is the first day for the user to enjoy the multimedia service, the model treats the user as a new user. In order to generate user interest lists with both relevance and diversity, the model generates the lists from the channels in the viewing history. For new users, the model generates a public user interest list containing the Top 10 channels over all users in the previous day; all new users get the same public list. For regular users, the model generates a personal interest list containing the user's own Top 10 channels from the viewing history of the previous day, which maintains personalization and reflects the variation of the user's interests. Each regular user gets his or her own personal user interest list, which changes the next day according to the viewing history of the current day.

Step 2: Get user interest index. From Fig. 10, we can see that most users have their individual interests in channels and it is hard for users to change their preferences in a short time. From Table 3, we conclude that each user watches fewer than 10 channels in Live TV per day. Therefore, a user interest list containing the Top 10 channels of the previous day can represent the user's interests in Live TV for one day. The uindex is proposed to measure and quantify the user's interest in a specific channel as the ratio between the viewing time of that channel and the viewing time of all channels in the user interest list. The uindex is defined as follows:

uindex = Vtime / Vsum  if channel_id is in the interest list,  and  uindex = 0  otherwise,   (2)

where Vtime denotes the user's total viewing time of the channel in the previous day and Vsum denotes the user's total viewing time of all channels in the user interest list in the previous day. Every channel in the user interest list gets its own uindex: the more time the user spends on a channel, the larger the uindex of that channel. If the channel_id is not found in the user interest list, the uindex is 0.
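To make Steps 1 and 2 concrete, the following pandas sketch builds the previous-day Top 10 interest lists and computes the uindex of Eq. (2). The DataFrame layout (columns user_id, channel_id and view_time in seconds) and the use of viewing time to rank the public list are our assumptions, not details given in the paper.

import pandas as pd

def build_interest_lists(history, top_n=10):
    # history: one row per record of the previous day with user_id, channel_id, view_time.
    per_user = (history.groupby(['user_id', 'channel_id'])['view_time']
                       .sum().reset_index())
    # Personal Top-N list for each regular user.
    personal = {uid: grp.nlargest(top_n, 'view_time')
                for uid, grp in per_user.groupby('user_id')}
    # Public Top-N list over all users, assigned to new users.
    public = (history.groupby('channel_id')['view_time']
                     .sum().nlargest(top_n).reset_index())
    return personal, public

def uindex(interest_list, channel_id):
    # Eq. (2): Vtime / Vsum if the channel is in the interest list, otherwise 0.
    v_sum = interest_list['view_time'].sum()
    row = interest_list[interest_list['channel_id'] == channel_id]
    if row.empty or v_sum == 0:
        return 0.0
    return float(row['view_time'].iloc[0]) / float(v_sum)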
Step 3: Train a personal QoE model on the ANN. The ANN is one of the most popular neural network models owing to its acceptable computational complexity. Its training is based on the gradient descent method, which uses the chain rule to iteratively compute the gradients for each layer. The basic equations of the ANN training algorithm are:

W(n) = W(n − 1) − ΔW(n),   (3)

ΔW(n) = η ∂E(n − 1)/∂W + α ΔW(n − 1),   (4)

where W denotes the weights, ΔW denotes the weight update, η denotes the learning rate and controls the step size, α denotes the momentum coefficient and E denotes the error function. As we can see from Eqs. (3) and (4), the ANN updates the weights according to each sample rather than all samples, which accelerates the search for the optimal solution. By adapting the learning rate, we may obtain the best model. Here, we use the following error function:

C = (1/(2n)) Σ_x ||y(x) − a^L(x)||²,   (5)

where n is the number of training samples, y(x) is the desired output for sample x and a^L(x) is the actual output of the last layer L.
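For concreteness, the following minimal NumPy sketch shows one weight update of Eqs. (3) and (4) and the quadratic cost of Eq. (5). The momentum value α = 0.9 and the function names are our own illustrative choices, not values reported in the paper.

import numpy as np

def quadratic_cost(y, a):
    # Eq. (5): C = 1/(2n) * sum_x ||y(x) - a^L(x)||^2
    n = y.shape[0]
    return np.sum((y - a) ** 2) / (2.0 * n)

def sgd_momentum_step(W, grad_E, prev_delta, eta=0.1, alpha=0.9):
    # Eq. (4): Delta_W(n) = eta * dE(n-1)/dW + alpha * Delta_W(n-1)
    delta_W = eta * grad_E + alpha * prev_delta
    # Eq. (3): W(n) = W(n-1) - Delta_W(n)
    return W - delta_W, delta_W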
In this section, we train a three-layer ANN based on Stochastic Gradient Descent (SGD) to find the best parameters from the network indicators and user interest indicators, as shown in Fig. 16. The network consists of an input layer, a hidden layer and an output layer. X1, …, Xn are the input neurons, which take the indicators as input. H1, …, Hn are the hidden neurons. The network has five output nodes Y1, …, Y5 to represent QoE, i.e., {1, 2, 3, 4, 5} in MOS. The input indicators act on the output nodes through the hidden layer, and the output QoE is obtained by a nonlinear transformation. A typical QoE model based on an ANN finds the smallest output error on the dataset by adjusting the weights between the input nodes, the hidden nodes and the thresholds [41,42]. We tune the hyper-parameters of the ANN model and find the best choice for building a QoE model.

Fig. 16. The structure of the ANN (inputs X1, …, Xn: network indicators and user interest indicators; hidden neurons H1, …, Hn; output nodes Y1, …, Y5).

4.2. Performance evaluation

The goal of a QoE model is to map user experience into the range from 1 to 5. This is a typical classification problem and many machine learning models can be adopted. We mainly take k-Nearest Neighbor (kNN) and Support Vector Machine (SVM) as baselines to predict QoE. We use scikit-learn to build the personal QoE model with kNN and SVM, and we build the personal QoE model on the ANN with Keras, a high-level neural network library running on top of TensorFlow [43]. We predict QoE at both the record level and the viewing level.

During the optimization process of the ANN, we first fix the learning rate of SGD to 0.1 and then adapt the number of layers and the number of neurons in the hidden layer. We finally settle on 20 units in 1 hidden layer, which works well with an accuracy of about 78%. We also find that the prediction accuracy is related to the learning rate η. From Fig. 17, we can see that the accuracy changes with the learning rate, and the highest accuracy is obtained at 0.1 for both the viewing level and the record level. The final ANN model contains 5 units in the input layer, 20 units in the hidden layer and 5 units in the output layer. The learning rate is 0.1 and the activation function is the ReLU function.

Fig. 17. Predicting QoE with different learning rates in ANN.
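As an illustration of the final 5-20-5 architecture described above, the following sketch builds it with the Keras Sequential API on top of TensorFlow. The use of one-hot MOS labels, the mean squared error loss (matching the quadratic cost of Eq. (5)) and the training settings in the usage comment are our assumptions rather than details reported in the paper.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.utils import to_categorical

def build_personal_qoe_model():
    model = Sequential()
    # 5 input indicators (network indicators plus uindex), 20 hidden ReLU units.
    model.add(Dense(20, input_dim=5, activation='relu'))
    # 5 output nodes Y1..Y5, one per MOS value.
    model.add(Dense(5, activation='softmax'))
    model.compile(optimizer=SGD(lr=0.1),        # learning rate 0.1, as in the paper
                  loss='mean_squared_error',    # quadratic cost, cf. Eq. (5)
                  metrics=['accuracy'])
    return model

# Hypothetical usage: X has shape (num_records, 5); mos contains labels in {1, ..., 5}.
# model = build_personal_qoe_model()
# model.fit(X, to_categorical(mos - 1, num_classes=5), epochs=50, batch_size=128)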
In Fig. 18, we observe that the proposed ANN achieves higher accuracy than SVM and KNN in Live TV at the record level. The uindex indeed improves the prediction results, by 1.27% and 1.24% for KNN, 1.24% and 0.13% for SVM, and 0.07% and 0.05% for ANN. In comparison, KNN has lower performance due to overfitting on the training data, which makes it difficult to handle the testing data. The result shows that the uindex derived from user interest factors can indeed help predict user experience at the record level. It is necessary for us to evaluate more features from subjective factors to measure QoE.

From Fig. 19, we observe a similar result: the uindex for regular users improves accuracy by 1.1% for KNN, 0.84% for SVM and 1.02% for ANN at the viewing level. However, the uindex for new users yields lower accuracy for SVM, KNN and ANN. This shows that the uindex can improve the accuracy of QoE prediction for regular users but is invalid for new users at the viewing level. Considering that the uindex achieves higher accuracy for regular users than for new users at the record level, the uindex should be improved to address the cold start problem.
Fig. 18. Accuracy comparison of SVM, KNN and ANN in Live TV at the record level.
Fig. 19. Accuracy comparison of SVM, KNN and ANN in Live TV at the viewing level.
5. Conclusion

In this paper, we first introduce the data collection and the definition of QoE in IPTV. Then we discuss the relationship between key indicators and QoE, and observe that the service type and user's interests can seriously influence QoE. In order to quantify user's interests in Live TV, we propose the uindex to measure the user's interest in a specific channel. Finally, we build a personal QoE model based on an ANN to measure QoE. Using 10-fold cross validation, we find that the fine-tuned multi-layer neural network works better than the other two models. The results show that the uindex indeed improves the accuracy of QoE prediction for regular users. In the future, we will focus on measuring user's interests and QoE in VOD.

Acknowledgment

This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 61322104, 61571240), the Priority Academic Program Development of Jiangsu Higher Education Institutions, the Natural Science Foundation of Jiangsu Province (Grant No. BK20161517), the National Engineering Research Center of Communications and Networking (Nanjing University of Posts and Telecommunications, Grant No. TXKY17003), the Qing Lan Project, and the Scientific Research Foundation of NUPT (Grant No. NY217022).

References

[1] ITU-T Recommendation P.10/G.100, Vocabulary for performance and quality of service, Amendment 2: New definitions for inclusion in Recommendation ITU-T P.10/G.100, 2008.
[2] S.S. Joshi, O.B.V. Ramanaiah, An integrated QoE and QoS based approach for web service selection, 2016 International Conference on ICT in Business Industry Government (ICTBIG), (2016), pp. 1–7.
[3] F.Z. Yousaf, et al., Network slicing with flexible mobility and QoS/QoE support for 5G networks, 2017 IEEE International Conference on Communications Workshops (ICC Workshops), (2017), pp. 1195–1201.
[4] V. Chervenets, V. Romanchuk, H. Beshley, A. Khudyy, QoS/QoE correlation modified model for QoE evaluation on video service, 2016 13th International Conference on Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET), (2016), pp. 664–666.
[5] Y. Chen, K. Wu, Q. Zhang, From QoS to QoE: a tutorial on video quality assessment, IEEE Commun. Surv. Tutorials 17 (2) (2015) 1126–1165.
[6] ITU-T Recommendation P.800, Methods for subjective determination of transmission quality, 1996.
[7] M.H. Pinson, Comparing subjective video quality testing methodologies, Visual Communications and Image Processing, (2003), pp. 573–582.
[8] M. Li, C.Y. Lee, A cost-effective and real-time QoE evaluation method for multimedia streaming services, Telecommun. Syst. 59 (3) (2015) 317–327.
[9] A. Balachandran, V. Sekar, A. Akella, S. Seshan, I. Stoica, H. Zhang, Developing a predictive model of quality of experience for internet video, Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM, New York, NY, USA, (2013), pp. 339–350.
[10] P. Reichl, B. Tuffin, R. Schatz, Logarithmic laws in service quality perception: where microeconomics meets psychophysics and quality of experience, Telecommun. Syst. 52 (2) (2013) 587–600.
[11] S. Chen, J. Zhao, The requirements, challenges, and technologies for 5G of terrestrial mobile telecommunication, IEEE Commun. Mag. 52 (5) (2014) 36–43.
[12] S. Chen, F. Qin, B. Hu, X. Li, Z. Chen, User-centric ultra-dense networks for 5G: challenges, methodologies, and directions, IEEE Wireless Commun. 23 (2) (2016) 78–85.
[13] H. Wang, S. Chen, H. Xu, M. Ai, Y. Shi, SoftNet: a software defined decentralized mobile network architecture toward 5G, IEEE Netw. 29 (2) (2015) 16–22.
[14] O. Diallo, J.J.P.C. Rodrigues, M. Sene, Real-time data management on wireless sensor networks: a survey, J. Netw. Comput. Appl. 35 (3) (2012) 1013–1021.
[15] Y. Wen, X. Zhu, J.J.P.C. Rodrigues, C.W. Chen, Cloud mobile media: reflections and outlook, IEEE Trans. Multimed. 16 (4) (2014) 885–902.
[16] J.J.P.C. Rodrigues, P.A.C.S. Neves, A survey on IP-based wireless sensor network solutions, Int. J. Commun. Syst. 23 (8) (2010) 963–981.
[17] Y. Wang, et al., A data-driven architecture for personalized QoE management in 5G wireless networks, IEEE Wireless Commun. 24 (1) (2017) 102–110.
[18] Y. Xu, S.E. Elayoubi, E. Altman, R. El-Azouzi, Y. Yu, Flow-level QoE of video streaming in wireless networks, IEEE Trans. Mob. Comput. 15 (11) (2016) 2762–2780.
[19] A. Balachandran, V. Sekar, A. Akella, S. Seshan, Analyzing the potential benefits of CDN augmentation strategies for internet video workloads, Conference on Internet Measurement Conference, (2013), pp. 43–56.
[20] T.D. Pessemier, K.D. Moor, W. Joseph, L.D. Marez, L. Martens, Quantifying the influence of rebuffering interruptions on the user's quality of experience during mobile video watching, IEEE Trans. Broadcast. 59 (1) (2013) 47–61.
[21] P. Gill, M. Arlitt, Z. Li, A. Mahanti, YouTube traffic characterization: a view from the edge, ACM SIGCOMM Conference on Internet Measurement 2007, San Diego, California, USA, October, (2007), pp. 15–28.
[22] H. Yin, et al., Inside the Bird's Nest: measurements of large-scale live VoD from the 2008 Olympics, Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, New York, NY, USA, (2009), pp. 442–455.
[23] S.S. Krishnan, R.K. Sitaraman, Video stream quality impacts viewer behavior: inferring causality using quasi-experimental designs, IEEE/ACM Trans. Netw. 21 (6) (2013) 2001–2014.
[24] F. Dobrian, et al., Understanding the impact of video quality on user engagement, Proceedings of the ACM SIGCOMM 2011 Conference, New York, NY, USA, (2011), pp. 362–373.
[25] M.Z. Shafiq, J. Erman, L. Ji, A.X. Liu, J. Pang, J. Wang, Understanding the impact of network dynamics on mobile video user engagement, The 2014 ACM International Conference on Measurement and Modeling of Computer Systems, New York, NY, USA, (2014), pp. 367–379.
[26] W. Kaiyu, W. Yumei, Z. Lin, A new three-layer QoE modeling method for HTTP video streaming over wireless networks, 2014 4th IEEE International Conference on Network Infrastructure and Digital Content, (2014), pp. 56–60.
[27] L. Pierucci, D. Micheli, A neural network for quality of experience estimation in mobile communications, IEEE MultiMedia 23 (4) (2016) 42–49.
[28] P. Anchuen, P. Uthansakul, M. Uthansakul, QoE model in cellular networks based on QoS measurements using neural network approach, 2016 13th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), (2016), pp. 1–5.
[29] H. Hu, Y. Wen, D. Niyato, Public cloud storage-assisted mobile social video sharing: a supermodular game approach, IEEE J. Sel. Areas Commun. 35 (3) (2017) 545–556.
[30] H. Hu, Y. Wen, D. Niyato, Spectrum allocation and bitrate adjustment for mobile social video sharing: potential game with online QoS learning approach, IEEE J. Sel. Areas Commun. 35 (4) (2017) 935–948.
[31] Z. Su, Q. Xu, Q. Qi, Big data in mobile social networks: a QoE-oriented framework, IEEE Netw. 30 (1) (2016) 52–57.
[32] Z. Liang, Specific- versus diverse-computing in media cloud, IEEE Trans. Circuits Syst. Video Technol. 25 (12) (2015) 1888–1899.
[33] Z. Liang, QoE-driven delay announcement for cloud mobile media, IEEE Trans. Circuits Syst. Video Technol. 27 (1) (2017) 84–94.
[34] Z. Liang, On data-driven delay estimation for media cloud, IEEE Trans. Multimedia 18 (5) (2016) 905–915.
[35] Z. Liang, Mobile device-to-device video distribution: theory and application, ACM Trans. Multimed. Comput. Commun. Appl. 12 (3) (2015) 1253–1271.
[36] D. Wu, L. Zhou, Y. Cai, Social-aware rate based content sharing mode selection for D2D content sharing scenarios, IEEE Trans. Multimedia 19 (11) (2017) 2571–2582.
[37] H. Hu, Y. Wen, T.S. Chua, J. Huang, W. Zhu, X. Li, Joint content replication and request routing for social video distribution over cloud CDN: a community clustering method, IEEE Trans. Circuits Syst. Video Technol. 26 (7) (2016) 1320–1333.
[38] M. Cha, H. Kwak, P. Rodriguez, Y.-Y. Ahn, S. Moon, I tube, you tube, everybody tubes: analyzing the world's largest user generated content video system, Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, New York, NY, USA, (2007), pp. 1–14.
[39] L. Chen, Y. Zhou, D.M. Chiu, Video browsing - a study of user behavior in online VoD services, 2013 22nd International Conference on Computer Communication and Networks (ICCCN), (2013), pp. 1–7.
[40] A. Finamore, M. Mellia, M.M. Munafò, R. Torres, S.G. Rao, YouTube everywhere: impact of device and infrastructure synergies on user experience, Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, New York, NY, USA, (2011), pp. 345–360.
[41] K. Zheng, X. Zhang, Q. Zheng, W. Xiang, L. Hanzo, Quality-of-experience assessment and its application to video services in LTE networks, IEEE Wireless Commun. 22 (1) (2015) 70–78.
[42] Y. Kang, H. Chen, L. Xie, An artificial-neural-network-based QoE estimation model for video streaming over wireless networks, 2013 IEEE/CIC International Conference on Communications in China (ICCC), (2013), pp. 264–269.
[43] M. Abadi, et al., TensorFlow: large-scale machine learning on heterogeneous distributed systems, 2016.