J. Vis. Commun. Image R. 33 (2015) 78–84
Improved SAP based on adaptive directional prediction for HEVC lossless intra prediction

Xiao-Peng Xia a,b,*, En-Hai Liu a, Jun-Ju Qin c

a Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
b University of Chinese Academy of Sciences, Beijing 100049, China
c Chengdu Normal University, Chengdu 610072, China
Article info

Article history: Received 30 March 2015; Accepted 7 September 2015; Available online 12 September 2015

Keywords: HEVC; Intra-coding; Lossless video coding; SAP; Linear interpolation; Prediction modes; ADSAP; Frame-level pretreatment
Abstract

In HEVC (High Efficiency Video Coding), linear interpolation of the boundary pixels is used as the predictor and all pixels within the same PU (Prediction Unit) share the same prediction direction. When the PU block is large or has complex texture, the prediction performance worsens. Although many algorithms use pixel-based weighted averaging or interpolation operations to perform the prediction, which notably improves the bitrate saving, there is still room for further improvement. This paper proposes an improved SAP (Sample-based Angular Prediction) algorithm based on adaptive directional prediction (ADSAP). The contribution lies in two aspects: a new method to estimate the best prediction direction of the current pixel, and the concept of "frame-level pretreatment", which greatly improves the encoding speed. Experimental results show that the proposed algorithm saves about 9.4158% of the output bitrate compared with the HEVC lossless intra prediction algorithm, which is better than other typical algorithms. Besides, the encoding and decoding time are reduced by about 11%. Moreover, when the minimum PU size gets larger, the increase of the output bitrate of ADSAP is much smaller than that of HEVC and the conventional SAP algorithm, which makes the proposed algorithm well suited to the compression of high resolution videos.

© 2015 Elsevier Inc. All rights reserved.
1. Introduction

HEVC is the new generation video compression standard, which outperforms previous standards: it reduces the output bitrate by about 50% while maintaining similar quality compared with AVC/H.264 [1,2]. However, the lossless intra compression efficiency of HEVC is inferior to that of MPEG-AVC/H.264 [3]. To make HEVC more competitive and adaptable to different compression applications, many algorithms have been proposed to improve the performance of HEVC lossless intra prediction [3]. One method is Sample-based Angular Prediction (SAP) [4–7]. It uses the pixels neighboring the current pixel to be predicted to perform the interpolation and obtain the predicted value. Although it achieves considerable improvement over HEVC, it still uses the same prediction direction for every pixel in the whole PU block.
This paper has been recommended for acceptance by Yehoshua Zeevi.
* Corresponding author at: Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China. E-mail address: [email protected] (X.-P. Xia).
http://dx.doi.org/10.1016/j.jvcir.2015.09.003
1047-3203/© 2015 Elsevier Inc. All rights reserved.
To fully exploit the spatial redundancy, the work in [8] put forward a template-based adaptive-weighted intra prediction which achieves about 8.51% bitrate reduction. Besides, the works in [9,10] proposed a pixel-wise spatial interleave prediction algorithm and a hierarchical prediction algorithm based on adaptive interpolation filtering, respectively, which improve the prediction performance to some degree. In addition, other algorithms based on residual prediction [11,12] and residual transforms [13–15] have also been proposed; by analyzing the characteristics of the residual signals and applying transforms to them, they achieve notable improvement. These algorithms further improve the compression gain of intra coding; nevertheless, there is still room for improvement. On the other hand, as high resolution videos become more and more widely used, one way to speed up the encoding process is to use larger prediction blocks (PUs). However, the prediction performance of traditional block-based algorithms worsens when the PU block becomes larger. Considering the above reasons, this paper puts forward a new algorithm which not only achieves impressive compression gain and encoding/decoding speed, but also performs well when the PU block becomes larger.
The organization of this paper is as follows: in Section 2, the angular prediction used in HEVC is briefly introduced and the conventional SAP algorithm is described. In Section 3, the proposed ADSAP algorithm is explained. The experimental results are discussed in Section 4. Some optimizations for higher encoding speed are introduced in Section 5. Finally, Section 6 concludes this paper.

2. HEVC intra-coding and SAP algorithm
Fig. 2. Linear interpolation of the angular prediction.
2.1. Angular prediction in HEVC

There are at most 35 modes for luma intra prediction to be checked (Fig. 1). Mode 1 is the DC prediction, which uses the average of the boundary pixels in the neighboring column and row of the current PU block as the predictor. The planar mode (mode 0) generates the prediction considering the distance between the current pixel to be predicted and the corresponding boundary pixels in the neighboring row and column of the PU block. The other angular prediction modes use 1/32 interpolated prediction values, deduced from the block boundary integer pixels, as the predictors, as shown in Eq. (1) [16]:
p = ((32 − iFact) × a + iFact × b + 16) >> 5        (1)

Fig. 3. Reference samples selection of SAP for vertical prediction angles.
where a, b are the reference samples for the current sample x, p is the prediction sample, and iFact is the 1/32-sample distance between p and a, as depicted in Fig. 2. HEVC uses only the block boundary pixels to perform the prediction, which does not fully exploit the spatial redundancy, so sample-based prediction has been proposed; the SAP algorithm is a good example.

2.2. Sample-based Angular Prediction (SAP)

Figs. 3 and 4 show that SAP performs the prediction sample by sample. The reference samples denote available reconstructed samples, while the padded samples denote unavailable samples. In SAP, a padded sample is replaced by its neighboring pixel inside the PU block. The reference samples of the current pixel (x) are its neighboring pixels (a and b), not the boundary pixels, which is the main difference between SAP and the angular prediction modes of HEVC. For vertical prediction angles (Fig. 3), SAP is processed row by row within a PU, while for horizontal prediction angles (Fig. 4) it is processed column by column within a PU [4–7].
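To make the interpolation of Eq. (1) concrete, the following C++ sketch shows the operation as we read it from the equation; the function name and the assumption that iFact lies in [0, 32] are ours, and the code is not taken from the HM reference software. In SAP the same formula is applied, with a and b being the immediate neighbors of the current sample rather than block boundary pixels.

#include <cstdint>

// Minimal sketch of the 1/32-sample linear interpolation of Eq. (1).
// a and b are the two reference samples bracketing the prediction direction;
// iFact (assumed to lie in 0..32) is the fractional distance of the projected
// position from a. Names are illustrative, not taken from the HM code base.
inline int angularInterpolate(int a, int b, int iFact)
{
    // Weighted average of a and b, rounded to the nearest integer,
    // then scaled back by a right shift of 5 (division by 32).
    return ((32 - iFact) * a + iFact * b + 16) >> 5;
}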
Fig. 4. Reference samples selection of SAP for horizontal prediction angles.
Fig. 5. The current PU block S (N × N pixels) to be predicted.
3. Proposed ADSAP algorithm

3.1. Analysis of pixel-based adaptive directional prediction
Fig. 1. Prediction modes in luma for the intra prediction.
Experimental results show that SAP is very effective for exploiting the spatial redundancy, especially for the test sequences of Classes D, E and F. However, the compression gain for the sequences of Classes A, B and C is relatively low [4]. Besides, all the pixels within the same PU block still share the same prediction direction, which is not always the best in actual situations.
Assume that the block S shown in Fig. 5 is the PU block to be predicted. In HEVC or SAP, the predicted block S′ can be expressed as follows:
S′ = f_cur_dir(S),    S′_i = f_cur_dir(S_i),    i = 1, 2, ..., l        (2)
where l is the number of pixels in the current PU block, i is the index of each pixel, cur_dir denotes the current prediction direction of the PU, f_cur_dir denotes the interpolation operation in the current prediction direction and S′ is the predicted PU block. Obviously, when the current PU block is quite small or has uniform texture, Eq. (2) can achieve ideal prediction performance because each part of the PU is similar and shares the same prediction direction. However, when the PU block is large or has complex texture, the best prediction direction of each part inside the PU block may differ, which worsens the prediction performance. In this situation, the best predictor should be:
S′_i = f_cur_dir_i(S_i),    i = 1, 2, ..., l        (3)
where cur_dir_i denotes the best prediction direction of the i-th pixel (S_i) and f_cur_dir_i denotes the interpolation operation in the direction of cur_dir_i. Apparently, the predictor shown in Eq. (3) would be quite good if cur_dir_i could be determined accurately at each position within the PU block. In the following, we discuss how to estimate the best prediction direction of each pixel in the PU block.

3.2. Estimation of the best prediction direction for current pixel

It is quite suitable to use linear interpolation to predict the current pixel if the texture direction is known. Unfortunately, the texture information at the current position is not available; however, it can be estimated from the neighboring pixels. Although there are algorithms such as LOCO [11] and PWAP [8] which use the neighboring pixels of the current pixel to perform the prediction, they are not outstanding enough. The best estimated direction of the local texture should make the sum of the prediction errors of the neighboring pixels least. Here, a new method to estimate the best prediction direction of each pixel is proposed. As shown in Fig. 6, the best prediction direction for the current pixel C (marked with a green arrow) is estimated from the prediction directions of its 4 neighboring pixels D, q, B, A, because they are most closely related to the current pixel. When the current pixel C is in the right column of the PU block (marked in red in Fig. 6), only the 3 neighboring pixels D, q, B are used to estimate the prediction direction of C. The estimating process is as follows. Firstly, perform the linear interpolation prediction of D, q, B, (A) and calculate the sum of the prediction errors D–D′, q–q′, B–B′, (A–A′) for each angular direction (modes 2–34). Here, D, q, B, A are the neighboring pixels of the current pixel C and their predicted values are D′, q′, B′, A′.
Fig. 6. Reference samples selection of proposed algorithm.
Fig. 7. Padding the reference samples of ADSAP.
Then choose the direction (prediction mode) which makes the sum of prediction errors least as the best estimated prediction direction for the current pixel C. Secondly, use the estimated direction (estimated_mode), marked with a green arrow in Fig. 6, to do the linear interpolation among D, q, B, A and obtain the predicted value C′. For example, if estimated_mode ∈ (26, 34], then iFact = (34 − estimated_mode) × 8 and the predicted value C′ is ((32 − iFact) × B + iFact × A + 16) >> 5. By using this method, it is guaranteed that the prediction adapts to the local texture as much as possible. From Fig. 6 it can be seen that the proposed algorithm is similar to the traditional SAP algorithm because both of them use pixel-based angular prediction. The only difference is that SAP uses the same prediction direction for the whole PU block, while the proposed algorithm estimates the prediction direction of each pixel. To improve the performance of the conventional SAP and fully exploit the spatial redundancy, the proposed adaptive directional prediction is integrated into the conventional SAP, replacing one angular prediction mode. Since the proportion of mode 34 among all the prediction modes is quite small (shown in Table 4) and it has relatively little effect on the coding performance, we replace it with the proposed prediction method, expecting an obvious improvement in coding performance.

3.3. Complement of ADSAP algorithm

3.3.1. Reduce encoding time

As can be seen from Section 3.2, the proposed ADSAP needs many interpolation operations to estimate the best prediction direction of each pixel, which has relatively high complexity and is time-consuming. In order to reduce the encoding time, a threshold is used to speed up the encoding process: if the absolute value of (A + D − q − B) is smaller than a threshold, chosen as 5 in this paper, the current local area is considered to have relatively low complexity, and the LOCO predictor shown in Eq. (4), which has relatively low computational complexity, is adopted to perform the prediction. The threshold was determined by comparing different values at a constant interval and choosing the one that keeps a good balance between compression gain and encoding speed.
C′ = min(B, D)    if q > max(B, D)
C′ = max(B, D)    if q < min(B, D)        (4)
C′ = B + D − q    otherwise
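For reference, a small C++ sketch of the predictor in Eq. (4), which is the median (LOCO-I/JPEG-LS) edge detector, together with the threshold test of Section 3.3.1; the helper names and the int sample type are our own choices, not part of the paper or the HM software.

#include <algorithm>
#include <cstdlib>

// LOCO (median) predictor of Eq. (4): B is the sample above the current
// pixel, D the sample to its left and q the sample above-left.
inline int locoPredict(int B, int D, int q)
{
    if (q > std::max(B, D)) return std::min(B, D);
    if (q < std::min(B, D)) return std::max(B, D);
    return B + D - q;
}

// Shortcut of Section 3.3.1: when the local activity |A + D - q - B| is below
// the threshold (5 in the paper), the area is treated as low-complexity and
// the LOCO predictor above is used instead of the directional estimation.
inline bool isLowComplexityArea(int A, int B, int D, int q, int threshold = 5)
{
    return std::abs(A + D - q - B) < threshold;
}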
3.3.2. Complete directional prediction

In the proposed ADSAP, the best prediction direction of the current pixel is estimated by comparing the sum of the prediction errors D–D′, q–q′, B–B′, (A–A′) in each direction (modes 2–34). However, the interpolation prediction is impracticable for modes 2–9 (shown in Fig. 6), because the pixel marked with a cross on the left side of the PU, which D would need, and the other pixel marked with a cross, which A would need, have not been reconstructed and are therefore not available. In order to fairly consider all
the prediction modes (modes 2–34), the LOCO predictor is used to replace the interpolation prediction for modes 2–9.

3.3.3. Padding reference samples

As shown in Fig. 7, the reference samples marked with circles are reconstructed samples which are available. R0, R1, ..., RN are the reference pixels which are not available. For example, if the current pixel is at position C1, the reference pixels R2 and R3 are needed to predict it; when the current pixel is at position C2, the pixels R3 and R4 are needed. Since those reference pixels are unavailable, they have to be estimated, as follows:
R0 = m,  R1 = n
R2 = min(R1, t)    if n > max(R1, t)
R2 = max(R1, t)    if n < min(R1, t)        (LOCO predictor)        (5)
R2 = R1 + t − n    otherwise
R3, ..., RN are estimated with the LOCO predictor in the same way.

The whole flowchart of the proposed algorithm is shown in Fig. 8.
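A possible C++ reading of the padding rule in Eq. (5) is sketched below; m, n and t follow Fig. 7. Since the text only states that R3, ..., RN reuse the LOCO predictor without giving the exact sample triplets, the sketch stops at R2, and all names are illustrative.

#include <algorithm>

// Sketch of the reference padding of Eq. (5). m, n and t are reconstructed
// samples from Fig. 7; R0, R1 and R2 are the first padded reference samples.
// R3..RN are, per the text, estimated with the same LOCO predictor, but the
// exact samples they combine are not spelled out here.
void padFirstReferences(int m, int n, int t, int& R0, int& R1, int& R2)
{
    R0 = m;
    R1 = n;
    // LOCO/median predictor applied to the pair (R1, t) with corner sample n.
    if (n > std::max(R1, t))      R2 = std::min(R1, t);
    else if (n < std::min(R1, t)) R2 = std::max(R1, t);
    else                          R2 = R1 + t - n;
}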
3.4. Decoding process of ADSAP algorithm

At the decoder side, the pixels in the current PU block are decoded from left to right and top to bottom. If the angular prediction mode of the current PU block is not 34, the decoding process is the same as that of the conventional SAP algorithm; otherwise, the decoding process is as follows. Firstly, if the absolute value of (A + D − q − B) is smaller than the threshold of 5, Eq. (4) is used to obtain the predictor C′, and the reconstructed pixel C is obtained by adding the predictor and the residual. Secondly, if the absolute value of (A + D − q − B) is not smaller than the threshold, the best prediction direction of the current pixel is estimated first: the sum of the prediction errors D–D′, q–q′, B–B′, (A–A′) is calculated for each angular direction, and the direction that makes this sum least is taken as the best estimated direction of the current pixel. The estimated direction is then used to perform the linear interpolation and obtain the predictor C′; adding C′ and the residual gives the reconstructed pixel C. The linear interpolation prediction is the same as in the encoding process and is illustrated in Fig. 9 and Eq. (6).
C′ = ((32 − iFact) × B + iFact × A + 16) >> 5,  iFact = (34 − estimated_mode) × 8,  estimated_mode ∈ (26, 34]
C′ = ((32 − iFact) × q + iFact × B + 16) >> 5,  iFact = (26 − estimated_mode) × 8,  estimated_mode ∈ (18, 26]
C′ = ((32 − iFact) × D + iFact × q + 16) >> 5,  iFact = (18 − estimated_mode) × 8,  estimated_mode ∈ [10, 18]
C′ = LOCO predictor,                                                                 estimated_mode ∈ [2, 10)
(6)
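The per-pixel rule of Sections 3.2–3.4 can be summarized in the C++ sketch below. It mirrors Eq. (6) literally; the use of absolute differences for the neighbor prediction errors, the array layout and all function names are our assumptions rather than details given in the paper.

#include <algorithm>
#include <climits>
#include <cstdlib>

// LOCO predictor of Eq. (4), reused for smooth areas and for modes 2-9.
static int locoPredict(int B, int D, int q)
{
    if (q > std::max(B, D)) return std::min(B, D);
    if (q < std::min(B, D)) return std::max(B, D);
    return B + D - q;
}

// Linear interpolation of Eq. (6) for an estimated mode in [2, 34].
// D, q, B, A are the left, above-left, above and above-right neighbors.
static int interpolateByMode(int mode, int D, int q, int B, int A)
{
    int iFact, s0, s1;
    if (mode > 26)       { iFact = (34 - mode) * 8; s0 = B; s1 = A; }
    else if (mode > 18)  { iFact = (26 - mode) * 8; s0 = q; s1 = B; }
    else if (mode >= 10) { iFact = (18 - mode) * 8; s0 = D; s1 = q; }
    else                   return locoPredict(B, D, q);  // modes 2-9 (Section 3.3.2)
    return ((32 - iFact) * s0 + iFact * s1 + 16) >> 5;
}

// Direction estimation of Section 3.2: pick the candidate mode whose summed
// prediction error over the reconstructed neighbors (D, q, B and, when
// available, A) is smallest. neighbors[k] is the reconstructed value and
// predByMode[m - 2][k] its prediction under candidate mode m (the computation
// of these neighbor predictions is omitted here).
static int estimateBestMode(const int neighbors[4],
                            const int predByMode[33][4], int numNeighbors)
{
    int bestMode = 26, bestCost = INT_MAX;
    for (int mode = 2; mode <= 34; ++mode) {
        int cost = 0;
        for (int k = 0; k < numNeighbors; ++k)
            cost += std::abs(neighbors[k] - predByMode[mode - 2][k]);
        if (cost < bestCost) { bestCost = cost; bestMode = mode; }
    }
    return bestMode;
}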
4. Test results and analysis

The experiments were conducted with the HEVC reference software (HM12.0) integrated with the proposed ADSAP algorithm. The software was built with VS2008 and run on Windows XP, on a PC with an Intel Core i5 CPU @ 3.10 GHz and 3.33 GB RAM. The test conditions follow the common test conditions of JCTVC-K1003 [17]: intra mode, main configuration. The formats of the test sequences are listed in Table 1. They include Classes A, B, C, D, E and F, with resolutions of 2560 × 1600, 1920 × 1080, 832 × 480, 416 × 240 and 1280 × 720, respectively (Class F contains mixed resolutions). As the proposed algorithm targets lossless coding, the transform and quantization of the residual signals are skipped. Table 2 shows the output bitrate performance of the different algorithms mentioned in Section 1.
Fig. 8. The flowchart of the proposed ADSAP algorithm.
Fig. 9. The process of linear interpolation prediction using estimated direction.
Table 1. The formats of the test sequences.

Class  Resolution    Sequence name        Frame count  Frame rate  Bit depth
A      2560 × 1600   Traffic              150          30          8
A      2560 × 1600   PeopleOnStreet       150          30          8
A      2560 × 1600   Nebuta               300          60          10
A      2560 × 1600   SteamLocomotive      300          60          10
B      1920 × 1080   Kimono               240          24          8
B      1920 × 1080   ParkScene            240          24          8
B      1920 × 1080   Cactus               500          50          8
B      1920 × 1080   BQTerrace            600          60          8
B      1920 × 1080   BasketballDrive      500          50          8
C      832 × 480     RaceHorses           300          30          8
C      832 × 480     BQMall               600          60          8
C      832 × 480     PartyScene           500          50          8
C      832 × 480     BasketballDrill      500          50          8
D      416 × 240     RaceHorses           300          30          8
D      416 × 240     BQSquare             600          60          8
D      416 × 240     BlowingBubbles       500          50          8
D      416 × 240     BasketballPass       500          50          8
E      1280 × 720    FourPeople           600          60          8
E      1280 × 720    Johnny               600          60          8
E      1280 × 720    KristenAndSara       600          60          8
F      832 × 480     BasketballDrillText  500          50          8
F      1024 × 768    ChinaSpeed           500          30          8
F      1280 × 720    SlideEditing         300          30          8
F      1280 × 720    SlideShow            500          20          8

Table 2. The output bitrate performance of different algorithms (bitrate saving in %, minimum PU size = 4).

Class  LOCO [11]  SAP [4]  CROSS [15]  PWAP [8]  ADSAP
A      11.5       5.4      11.3        10.88     11.11
B      2.4        5.1      3.91        5.98      5.454
C      3.3        6.9      4.76        7.39      6.4725
D      6.8        8.4      6.91        8.29      8.855
E      8.8        10.6     9.72        9.48      10.8467
F      6.8        12.2     10.11       9.02      13.7565
Avg    6.3        7.9      8.43        8.51      9.4158

Table 3. The performance of encoding and decoding time (change relative to HM12.0 in %; negative values denote a time reduction).

        Encoding time (%)  Decoding time (%)
SAP     −3.7668            −11.884
ADSAP   +8.3381            −11.7056

Fig. 10. HEVC bitrate performance with the PU size (Δbitrate in kbps ×1000 vs. sequence index, for minimum PU_size = 8 and 16).
The "LOCO" algorithm uses the LOCO predictor in the residual domain, the "CROSS" algorithm applies the cross transform to the vertical and horizontal prediction residuals, and "PWAP" is the pixel-based weighted averaging prediction. Among these typical algorithms, the proposed ADSAP achieves the best bitrate saving, ranging from 5.454% to 13.7565% and reaching about 9.4158% on average, which outperforms the other algorithms. It is worth mentioning that it achieves quite high bitrate savings of over 10.8467% for Classes A, E and F, which is much better than SAP. This also demonstrates that the proposed pixel-based adaptive directional prediction is very effective for exploiting the spatial redundancy. Table 3 shows the relative encoding and decoding time of SAP and the proposed ADSAP compared with HEVC (HM12.0). It is clear that SAP can improve both the encoding and decoding speed, especially the decoding speed; however, it only achieves a bitrate saving of about 7.9%, which is not good enough. The proposed ADSAP achieves a very impressive compression gain and decreases the decoding time by about 11.7056%, because the notably lower output bitrate greatly cuts down the entropy decoding time of the decoder. The only shortcoming is that it increases the encoding time by about 8.3381%, which results from its relatively higher complexity; the solution to this problem is discussed in Section 5. Table 4 shows the prediction mode distribution of HM12.0, SAP and the proposed ADSAP. It is easy to see that both SAP and ADSAP decrease the number of PUs using the Planar and DC modes and increase the number of PUs using angular modes, which indicates that they make the prediction fit better with the actual texture; that is why they achieve higher compression gain than HEVC. Compared with SAP, the proportion of PUs using mode 34 increases from 0.392% to 39.43% in ADSAP, which shows that the proposed prediction method is effective and outperforms the conventional SAP prediction in a large proportion of PUs.
Table 4. Intra prediction mode distribution (%).

        HM12.0                                       SAP                                         ADSAP
Class   Planar   DC       Ang (except 34)  34        Planar   DC       Ang (except 34)  34       Planar   DC       Ang (except 34)  34
A       15.2709  8.2283   74.1831          2.3177    5.4173   1.667    92.5018          0.414    1.8983   0.719    28.4124          68.97
B       24.9175  10.5647  62.3103          2.2075    19.3857  9.1136   71.3524          0.148    19.8326  11.1602  45.9235          23.08
C       19.5105  9.3589   69.124           2.0067    7.6519   3.4189   88.712           0.217    5.0306   3.6367   49.6436          41.69
D       10.5969  4.2626   84.398           0.7425    0.7469   0.2249   98.9077          0.121    0.2558   0.0183   53.097           46.63
E       17.7481  12.4316  68.6058          1.2145    8.5974   5.3695   85.8939          0.139    4.3922   5.8777   50.7356          38.99
F       12.4694  4.5697   80.7401          2.2208    5.7394   1.6729   91.2768          1.311    5.6046   1.7956   75.3791          17.22
Avg     16.7522  8.236    73.2269          1.785     7.9231   3.5778   88.1074          0.392    6.169    3.8679   50.5319          39.43
Fig. 11. SAP bitrate performance with the PU size (Δbitrate in kbps ×1000 vs. sequence index, for minimum PU_size = 8 and 16).

Fig. 12. ADSAP bitrate performance with the PU size (Δbitrate in kbps ×1000 vs. sequence index, for minimum PU_size = 8 and 16).

Fig. 13. The optimization of ADSAP.

Table 5. Time performance of the optimized ADSAP (time reduction relative to HM12.0).

Encoding time (%)  Decoding time (%)
11.83              10.96
Figs. 10–12 show the bitrate increase of HEVC, SAP and ADSAP when the minimum PU size changes from 4 to 16. The horizontal axis denotes the sequence index (Traffic, PeopleOnStreet, NebutaFestival, SteamLocomotiveTrain, RaceHorsesC, BQMall, PartyScene, BasketballDrill, ChinaSpeed, BasketballDrillText) and the vertical axis denotes the increase of the output bitrate. It is obvious that when the minimum PU size becomes larger, the prediction performance worsens and the output bitrate increases. It can also be noticed that the output bitrate of HEVC increases the most, SAP increases less, and ADSAP increases the least, only about 1/4 of the increase of SAP and 1/14 of that of HEVC. These results show that the proposed ADSAP not only achieves the best compression gain, but also shows the smallest bitrate increase when the PU block becomes larger. Considering that high resolution videos are widely used in many applications and that using larger PU blocks can greatly reduce the encoding time, the proposed ADSAP could play an important role in future applications.

5. Optimization of ADSAP

As mentioned before, although the proposed algorithm performs quite well in compression gain and decoding speed, the encoding process is time-consuming to some extent. The reason is that, for each pixel in the PU block, many interpolation operations are needed to estimate its best prediction direction before the estimated direction is used to predict the current pixel. Besides, the same pixel can be predicted repeatedly in the encoding process when the PU block is tested as a 64×64/32×32/16×16/8×8/4×4 block. Therefore, in order to further speed up the encoding process, we propose a novel
method based on "frame-level pretreatment", which is illustrated in Fig. 13. Firstly, the input frame is divided into equal-size sub-blocks and ADSAP is applied to each sub-block. Secondly, in the encoding process, if the prediction mode of the current PU block is equal to 34, the predicted values are taken directly from the corresponding block in the reference frame (Fig. 13); this step does not need any interpolation operations. By using this method, we avoid repeatedly predicting the same pixel, which greatly decreases the encoding time. The coding time performance of the optimized ADSAP algorithm is shown in Table 5.
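Since the paper describes the frame-level pretreatment only at a high level, the following C++ sketch shows one possible structure under that description; every type and function name here is hypothetical, and adsapPredictBlock stands in for the per-pixel prediction of Section 3.

#include <cstdint>
#include <vector>

// Possible structure of the "frame-level pretreatment" of Section 5 (all
// names are ours). The frame is split into equal-size sub-blocks, ADSAP is
// applied once to each sub-block, and the resulting prediction frame is kept
// as a reference. During encoding, a PU tested with mode 34 copies its
// predictors from this reference instead of repeating the per-pixel
// interpolation for every candidate PU size.
struct PredictionFrame {
    int width = 0, height = 0;
    std::vector<int16_t> pred;            // pre-computed ADSAP predictions

    int16_t at(int x, int y) const { return pred[y * width + x]; }
};

// Hypothetical hook: called once per frame before the PU/CU decision loop.
PredictionFrame pretreatFrame(const std::vector<int16_t>& frame,
                              int width, int height, int subBlockSize)
{
    PredictionFrame ref;
    ref.width = width;
    ref.height = height;
    ref.pred.resize(frame.size());
    for (int y0 = 0; y0 < height; y0 += subBlockSize)
        for (int x0 = 0; x0 < width; x0 += subBlockSize) {
            // Apply the per-pixel ADSAP prediction (Sections 3.2-3.3) to this
            // sub-block; adsapPredictBlock is a placeholder for that routine.
            // adsapPredictBlock(frame, ref.pred, width, x0, y0, subBlockSize);
        }
    return ref;
}

// During mode decision: if the candidate mode is 34, the predictor of each
// pixel is simply read back from the pre-computed reference frame.
int16_t predictorForMode34(const PredictionFrame& ref, int x, int y)
{
    return ref.at(x, y);
}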
6. Conclusions

In this paper, an improved SAP algorithm based on adaptive directional prediction is proposed. By replacing one angular prediction mode of SAP with a new prediction method which uses the estimated texture direction of the current local area to perform the linear interpolation prediction, the proposed algorithm achieves a bitrate saving of about 9.4158% on average, which is better than other typical algorithms. Moreover, among HEVC, SAP and ADSAP, the increase of the output bitrate of ADSAP is much smaller than that of HEVC and the conventional SAP when the minimum PU size gets larger, which makes it quite suitable for the prediction of large blocks. Besides, a novel method called "frame-level pretreatment" is also proposed; it decreases the encoding time greatly, making the proposed ADSAP much faster than HEVC. The only shortcoming is that the proposed ADSAP cannot be used in lossy compression because it is incompatible with the block-based transforms; we will focus on this issue in our later work. Even so, we believe that the advantages mentioned above make the proposed method a very useful tool in future applications.
Acknowledgment This research was in part supported by National Key Basic Research Program of China (2014CB744200).
References

[1] G.J. Sullivan, J.-R. Ohm, W.-J. Han, T. Wiegand, Overview of the high efficiency video coding (HEVC) standard, IEEE Trans. Circ. Syst. Video Technol. 22 (12) (2012) 1649–1668.
[2] P. Wu, M. Li, Introduction to the high-efficiency video coding standard, ZTE Commun. 110 (2) (2012) 1–4.
[3] W. Gao, M. Jiang, H. Yu, On lossless coding for HEVC, in: Proceedings of the SPIE - The International Society for Optical Engineering, 2013, pp. 8666–8674.
[4] M. Zhou, W. Gao, M. Jiang, H. Yu, HEVC lossless coding and improvements, IEEE Trans. Circ. Syst. Video Technol. 22 (12) (2012) 1839–1843.
[5] M. Zhou, AHG19: Method of frame-based lossless coding mode for HEVC, JCT-VC document, JCTVC-H0083, San Jose, CA, February 2012.
[6] W. Gao, M. Jiang, H. Yu, M. Zhou, AHG19: A lossless coding solution for HEVC, JCT-VC document, JCTVC-H0530, San Jose, CA, February 2012.
[7] M. Zhou, AHG22: Sample-based angular prediction (SAP) for HEVC lossless coding, JCT-VC document, JCTVC-G093, Geneva, November 2011.
[8] E. Wige, G. Yammine, P. Amon, A. Hutter, A. Kaup, Pixel-based averaging predictor for HEVC lossless coding, in: IEEE International Conference on Image Processing (ICIP), September 2013, pp. 1806–1815.
[9] S. Li, Z. Luo, C. Xiong, Improving lossless intra coding of H.264/AVC by pixel-wise spatial interleave prediction, IEEE Trans. Circ. Syst. Video Technol. 21 (12) (2011) 1924–1928.
[10] L.-L. Wang, W.-C. Siu, Improved hierarchical intra prediction based on adaptive interpolation filtering for lossless compression, in: IEEE International Symposium on Circuits and Systems, May 2013, pp. 265–268.
[11] Y.-H. Tan, C. Yeo, Z. Li, Lossless coding with residual sample-based prediction, JCT-VC document, JCTVC-K0157, Shanghai, China, October 2012.
[12] Q. Zhang, Y. Dai, C.-C.J. Kuo, Lossless video compression with residual image prediction and coding (RIPC), in: IEEE International Symposium on Circuits and Systems (ISCAS), Taipei, Taiwan, May 2009, pp. 617–620.
[13] J. Kwak, Y.-L. Lee, Secondary residual transform for lossless intra coding in HEVC, J. Broadcast. Eng. 17 (5) (2012) 734–741.
[14] Y.H. Tan, C. Yeo, Z. Li, Residual DPCM for lossless coding in HEVC, in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2013, pp. 2021–2025.
[15] S.-W. Hong, J.H. Kwak, Y.-L. Lee, Cross residual transform for lossless intra-coding for HEVC, Signal Process.: Image Commun. 9 (4) (2013) 1–7.
[16] L. Dong, W. Liu, et al., Improved chroma intra mode signaling, JCT-VC document, JCTVC-D255, January 2011.
[17] F. Bossen, Common test conditions and software reference configurations, JCT-VC document, JCTVC-I1100, May 2012.