Image vector quantization codec indices recovery using Lagrange interpolation

Image and Vision Computing 26 (2008) 1171–1177

Yung-Gi Wu a,*, Chia-Hao Wu b

a Department of Computer Science & Information Engineering, Leader University, No. 188, Sec. 5, An-Chung Road, Tainan City 709, Taiwan
b Institute of Applied Information, Leader University, Taiwan

Article info

Article history: Received 18 July 2006; received in revised form 26 November 2007; accepted 25 February 2008.

Keywords: Vector quantization; Image communication; Lagrange interpolation; Index recovery

Abstract

Vector quantization (VQ), which is widely used in video and image coding, is an efficient coding algorithm because of its fast decoding. During transmission, however, VQ indices may be lost owing to signal interference. In this paper, we propose an efficient estimation method that uses the Lagrange interpolation formula to conceal and recover lost indices at the decoder instead of re-transmitting the whole image. If the image or video is valid only for a limited period, re-transmission wastes time and occupies network bandwidth. Using the correctly received data to estimate and recover the lost data is therefore efficient in time-constrained situations such as network conferencing or mobile transmission. In natural images, neighboring pixels are correlated; because VQ partitions the image into sub-blocks and quantizes them into the transmitted indices, the correlation between adjacent indices is also strong. The proposed method has two parts: preprocessing and the estimation process. In preprocessing, we reorder the code-vectors in the VQ codebook to increase the correlation among neighboring vectors. In the estimation process, the Lagrange interpolation formula constitutes a polynomial that describes the tendency of the VQ indices, and this polynomial is used to estimate the lost indices at the decoder. Compressing Lenna with conventional VQ and transmitting it without any index loss yields a PSNR of 30.154 dB at the decoder. The simulation results demonstrate that our method can efficiently estimate the lost indices, achieving a PSNR of 28.418 dB at a loss rate of 5% and 29.735 dB at a loss rate of 1%.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Consumer electronic products are increasingly popular, and digital cameras and digital video have become essentials of daily life. Image processing is important because it acts as a bridge between natural scenes and users. Once an image is captured by a camera, image processing plays an important role in many applications, such as image filtering, image compression, and image scaling, to meet users' requirements. High-quality digital image processing tools are needed to fulfill these functionalities, and many papers have sought better algorithms for them. Many algorithms have been proposed for image compression, because natural images are large and therefore costly to store and transmit. To decrease the image size, data compression techniques are used to reduce the image data. Many image compression algorithms have been proposed and devised. Lossless compression is not needed in most applications; lossy compression schemes can meet the requirements of real

* Corresponding author. E-mail address: [email protected] (Y.-G. Wu).
0262-8856/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.imavis.2008.02.007

applications. Lossy coding includes the discrete cosine transform (DCT), block truncation coding (BTC), wavelet coding, and vector quantization (VQ), among others. Gersho and Gray developed vector quantization in 1980, and many researchers have since worked on VQ [1–7]. VQ remains one of the most successful signal processing techniques because decoding is quick and simple, so many applications that require fast decoding use it to compress data before transmission. Although the technique reduces image fidelity, the result is still acceptable in many applications. While offering a high compression rate, VQ does not suffer from error propagation, because it is a fixed-length source coding technique. The image VQ coding steps are shown in Fig. 1. During encoding, the encoder searches the codebook for the best-matched code-vector for each input vector and transmits the index of that code-vector over the channel. The decoder uses the received indices to fetch code-vectors from the codebook and pastes them together to reconstruct the decoded image. VQ can be viewed as a form of pattern recognition in which an input pattern is "approximated" by one of a predetermined set of standard patterns, by matching it with one of a stored set of vector indices in the codebook. It is often assumed that VQ indices are transmitted to the receiver
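The encode/decode loop just described can be sketched in a few lines. This is a generic full-search illustration; the function names and the NumPy array layout are ours, not the paper's:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Full-search VQ encoding: map each input vector to the index of its
    nearest code-vector under squared Euclidean distance."""
    # vectors: (N, d) sub-block vectors; codebook: (K, d) code-vectors
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)          # one index per input vector

def vq_decode(indices, codebook):
    """Decoding is a plain table lookup: paste the indexed code-vectors."""
    return codebook[indices]
```

Decoding is a table lookup, which is why VQ decoding is fast enough for the time-constrained applications discussed here.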


on noiseless channels, so that the receiver can reconstruct the image correctly. However, when the indices are transmitted over a noisy channel, which is obviously most often the case, transmission errors usually occur. When a transmission error occurs, re-transmission is one way to reconstruct the image; under time constraints, however, re-transmission takes time and consumes bandwidth, so recovering the image without re-transmission becomes an attractive alternative. Transmission efficiency becomes very poor if errors in the transmitted VQ indices occur frequently over a noisy channel. Hence, several techniques have been explored for transmitting VQ indices over noisy channels [8–13]. Essentially, one can take one of two approaches to recover lost indices; we refer to these as VQ with prevention and VQ without prevention. In the first approach, VQ is trained for a noiseless channel and is subsequently made robust against channel errors by an index assignment algorithm [11–13]. In the second approach [8–10], noise detection and recovery are based on the observation that natural images usually have high correlation between adjacent pixels. Examples of VQ with prevention are pseudo-gray coding (PGC) [11] and anti-gray coding (AGC) [12]. Our method follows the first approach, so PGC and AGC are briefly discussed in the following. PGC assigns n-bit binary indices to the 2^n codewords in a Euclidean space. When the binary indices are used as channel symbols on a discrete memoryless channel, PGC provides redundancy-free error protection for VQ; it achieves noise robustness through its codeword assignment scheme. In some cases, however, the noise in a PGC-encoded image is visible. AGC was proposed to solve this problem and improve image quality.

In AGC, binary indices are assigned to the codewords in such a way that the 1-bit neighbors of a code-vector are as far apart as possible, so that once an error corrupts a codeword, the decoded code-vector differs greatly from what it should be. Errors under AGC are therefore easy to detect, and AGC achieves a high detection rate when the bit error rate (BER) is low. When the BER is high, however, the quality is not good enough, and some errors are not detectable. To overcome this drawback, Hung and Chang [13] proposed a method in which verification information is embedded in the indices of the VQ-encoded data; when the indices are transmitted over a noisy channel, this information helps detect errors. This paper develops a new method based on Lagrange interpolation to conceal and recover lost indices. The Lagrange interpolation formula was first published by Waring in 1779, rediscovered by Euler in 1783, and published by Lagrange in 1795 [14–16]. When constructing interpolating polynomials, there is a tradeoff between obtaining a better fit and obtaining a smooth, well-behaved fitting function. The more data points used in the interpolation, the higher the degree of the resulting polynomial, and therefore the greater the oscillation it exhibits between the data points. A high-degree interpolation may thus be a poor predictor of the function between points, although the accuracy at the data points themselves is "perfect". The Lagrange interpolating polynomial P(x) of degree n−1 that passes through the n points is given by

P(x) = \sum_{i=1}^{n} P_i(x) \qquad (1)

where

P_i(x) = y_i \prod_{k=1, k \neq i}^{n} \frac{x - x_k}{x_i - x_k} \qquad (2)

Written explicitly,

P(x) = \frac{(x - x_2)(x - x_3) \cdots (x - x_n)}{(x_1 - x_2)(x_1 - x_3) \cdots (x_1 - x_n)} y_1 + \frac{(x - x_1)(x - x_3) \cdots (x - x_n)}{(x_2 - x_1)(x_2 - x_3) \cdots (x_2 - x_n)} y_2 + \cdots + \frac{(x - x_1)(x - x_2) \cdots (x - x_{n-1})}{(x_n - x_1)(x_n - x_2) \cdots (x_n - x_{n-1})} y_n \qquad (3)

Fig. 1. VQ coding diagram.
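Eqs. (1) and (2) translate directly into code. The following is a minimal, generic evaluator (our own sketch, not the paper's implementation):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial of Eqs. (1)-(2):
    P(x) = sum_i y_i * prod_{k != i} (x - x_k) / (x_i - x_k)."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for k, xk in enumerate(xs):
            if k != i:
                term *= (x - xk) / (xi - xk)  # one basis-polynomial factor
        total += term
    return total
```

Because the degree-3 interpolant through four points reproduces any polynomial of degree at most 3, interpolating samples of y = x² returns exact values (up to floating-point rounding).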

The major topic of this paper is the use of this Lagrange interpolating polynomial to recover VQ indices. Section 2 first describes the preprocessing of the codebook and then presents the Lagrange interpolating polynomial used to recover VQ indices, which is the major part of this paper. Section 3 shows simulation results for situations in which VQ indices are lost, with several methods used to recover them for comparison. Section 4 concludes the paper.

2. Vector quantization indices recovery

During Internet transmission, random noise may cause transmitted indices to be lost. The receiver can ask for the indices to be re-transmitted; however, to save time, we propose a method that estimates and recovers the lost data instead of transmitting the whole data again. A large data loss may cause the receiver to conclude that the network has disconnected; such large losses are handled by the network protocols. The main idea of the proposed method is to decrease network traffic while maintaining quality as well as possible. In general, automatic recovery is not attempted when data loss is severe, in which case the system re-transmits the data. The proposed method is efficient with respect to time-constrained situations and bandwidth usage. We use the Lagrange interpolation formula to estimate and recover the lost VQ data. There are two important parts to the proposed method. The first is preprocessing, which sorts the codebook to improve the correlation between neighboring vectors; note that the codebook is sorted only once, before any other processing, so the preprocessing is off-line. The second part is the recovery process, which constitutes a polynomial to describe the tendency of the image data and estimates the lost data at the decoder. Fig. 2 shows the steps of the preprocessing.
First, we classify the codewords in the codebook by calculating the spread D_i of each code-vector:

D_i = \max_j \{V_{ij}\} - \min_j \{V_{ij}\}, \quad 1 \le i \le n \qquad (4)

\text{class}_i = \begin{cases} \text{Smooth}, & D_i \le \epsilon \\ \text{Complicated}, & \text{otherwise} \end{cases}
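The preprocessing of Fig. 2 and Eq. (4) can be sketched as follows, assuming the codebook is an array of gray-level code-vectors (the function name and array layout are our own):

```python
import numpy as np

def preprocess_codebook(codebook, eps=128):
    """Permute the codebook per Section 2: split code-vectors into a
    smooth class (D_i <= eps) and a complicated class by the max-min
    spread of Eq. (4), then sort each class by its mean gray value."""
    spread = codebook.max(axis=1) - codebook.min(axis=1)   # D_i
    means = codebook.mean(axis=1)                          # m_i
    smooth = np.flatnonzero(spread <= eps)
    complicated = np.flatnonzero(spread > eps)
    order = np.concatenate([smooth[np.argsort(means[smooth])],
                            complicated[np.argsort(means[complicated])]])
    return codebook[order], order   # sorted codebook and the permutation
```

The returned permutation is applied once, off-line, and the sorted codebook is then used for all subsequent encoding and decoding.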

Table 1
Corresponding coordinates for each index

y_i:  L1   L2   M   R2   R1
x_i:  −2   −1   0    1    2

where V_ij represents a code-vector in the codebook, n denotes the codebook size, i denotes the i-th code-vector, and j indexes the dimensions of the code-vector. Eq. (4) selects the maximum and minimum gray values in each code-vector and computes their difference. We classify the code-vectors into two classes, smooth and complicated, according to the difference D_i and a given threshold ε. We then sort the code-vectors of each class by their mean values m_i. Finally, the sorted code-vectors are stored in the codebook and used for all subsequent coding. The preprocessing of codebook classification dominates the performance of index recovery. Note that the codebook is global: it is generated once and used to encode all images, so the preprocessing is performed once as well; it does not have to be repeated for every index recovery. In conventional LBG codebook training, the code-vectors in the codebook have no special relationship to one another. After the proposed processing, the code-vectors are related by block mean and complexity: the codebook contains two classes, one smooth and one of complicated code-vectors, and the code-vectors in each class are sorted by their individual means. The mean difference between adjacent code-vectors is therefore tiny, which improves recovery performance. No index estimation method can guarantee that the recovered index equals the index found by full search; a good estimator should come as close as possible to the best result, but some bias is unavoidable in any estimation. Even when the estimated index is not the best one, the mean difference between the predicted and best code-vectors is small after the permutation of the codebook, so the visual quality of the proposed method is acceptable.

If we did not divide the code-vectors into two classes and only sorted them by mean value, one problematic scenario would arise: the mean difference between nearby code-vectors would be small, but their complexity could differ, which would seriously affect visual quality. The two-class classification avoids this by ensuring that adjacent code-vectors have similar complexity. When some VQ indices are lost during transmission, the proposed method uses the correct indices to estimate and recover the lost ones at the decoder via the Lagrange interpolating polynomial. The estimation of a lost index is based on the correlation of neighboring code-vectors. The corresponding coordinates of the indices are listed in Table 1. M denotes the lost index at position [m, n], and L1, L2, R2, R1 are the correct indices. We constitute a polynomial using the four correct indices as follows:

P(x) = -\frac{(x+1)(x-1)(x-2)}{12} L1 + \frac{(x+2)(x-1)(x-2)}{6} L2 - \frac{(x+2)(x+1)(x-2)}{6} R2 + \frac{(x+2)(x+1)(x-1)}{12} R1 \qquad (5)

Fig. 2. Preprocessing diagram.
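Evaluating Eq. (5) at x = 0, the coordinate of M in Table 1, collapses the estimate to fixed weights. A small sketch (the helper name is ours):

```python
def estimate_center(L1, L2, R2, R1):
    """Eq. (5) evaluated at x = 0: the cubic Lagrange estimate of the
    middle sample collapses to the fixed weights (-1, 4, 4, -1) / 6."""
    return (-L1 + 4 * L2 + 4 * R2 - R1) / 6.0
```

Any sequence that is linear over the coordinates −2, −1, 1, 2 is reproduced exactly at 0, which is why the estimate tracks the local tendency of the index map.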

Then we use a two-way polynomial to estimate the lost index M, obtaining a different set of four coefficients for each way. In our method there are two ways, the vertical and horizontal directions, to predict the value of M. Table 2 shows the positions of the reference points for the two ways of estimation. Table 2 lists the ideal situation, in which all the indices surrounding M exist. Other situations can occur; referring to Fig. 3a–d, a black block denotes that the index at that position is lost, and the gray blocks are reference points with correct or recovered indices. Recovery scans the indices from top to bottom and left to right, so the data at positions [m−1, n−1], [m−1, n], [m−1, n+1], and [m, n−1] are correct or already recovered before position [m, n] is processed. In Fig. 3, where the index at position [m, n+1] is lost, the value at [m, n+2] is padded into position [m, n+1]; if both values at [m, n+1] and [m, n+2] are lost, the value at [m, n+3] is padded into [m, n+1]. Using Eq. (5) and Table 2, we obtain the two values P_h(x) and P_v(x) from Eqs. (6a) and (6b). Because both ways are used to estimate the lost index, the final estimate is given by Eq. (7):

P_h(x) = -\frac{(x+1)(x-1)(x-2)}{12} [m-1, n-1] + \frac{(x+2)(x-1)(x-2)}{6} [m, n-1] - \frac{(x+2)(x+1)(x-2)}{6} [m, n+1] + \frac{(x+2)(x+1)(x-1)}{12} [m+1, n+1] \qquad (6a)

P_v(x) = -\frac{(x+1)(x-1)(x-2)}{12} [m-1, n+1] + \frac{(x+2)(x-1)(x-2)}{6} [m-1, n] - \frac{(x+2)(x+1)(x-2)}{6} [m+1, n] + \frac{(x+2)(x+1)(x-1)}{12} [m+1, n-1] \qquad (6b)

Table 2
The positions of the reference points

            L1          L2         M        R2         R1
Horizontal  [m−1, n−1]  [m, n−1]   [m, n]   [m, n+1]   [m+1, n+1]
Vertical    [m−1, n+1]  [m−1, n]   [m, n]   [m+1, n]   [m+1, n−1]


Fig. 3. (a) Reference points when the data [m, n+1] is not correct. (b) Reference points when the data [m+1, n-1] is not correct. (c) Reference points when the data [m+1, n] is not correct. (d) Reference points when the data [m+1, n+1] is not correct.



M = \frac{P_h(0) + P_v(0)}{2} \qquad (7)
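The recovery step of Eqs. (6a)–(7), including the padding of lost reference points, can be sketched as follows. This is a simplified illustration with our own names; in particular, the diagonal padding direction used for the corner references is our assumption:

```python
def recover_index(idx, lost, m, n):
    """Two-way recovery of the lost index at [m, n] (Eqs. (6a)-(7)).
    Reference positions follow Table 2; when a not-yet-processed
    reference is itself lost, the next index along the same direction
    is padded in, up to two steps farther (simplified sketch)."""
    H, W = len(idx), len(idx[0])

    def ref(i, j, di, dj):
        # pad from up to two steps farther when [i, j] is lost
        for s in range(3):
            ii, jj = i + di * s, j + dj * s
            if 0 <= ii < H and 0 <= jj < W and (ii, jj) not in lost:
                return idx[ii][jj]
        return idx[i][j]  # fall back to the stored value

    w = lambda a, b, c, d: (-a + 4 * b + 4 * c - d) / 6.0    # Eq. (5) at x = 0
    ph = w(ref(m - 1, n - 1, -1, -1), ref(m, n - 1, 0, -1),
           ref(m, n + 1, 0, 1), ref(m + 1, n + 1, 1, 1))     # Eq. (6a)
    pv = w(ref(m - 1, n + 1, -1, 1), ref(m - 1, n, -1, 0),
           ref(m + 1, n, 1, 0), ref(m + 1, n - 1, 1, -1))    # Eq. (6b)
    return (ph + pv) / 2.0                                   # Eq. (7)
```

On an index map that varies linearly in both directions, the horizontal and vertical estimates agree and reproduce the missing value exactly.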

The lost index is estimated and recovered by the above steps. Fig. 4 shows the steps of the whole recovery process, and Fig. 5 shows the whole system configuration. Before any index recovery technique can be applied at the decoder, it is first necessary to find out whether and where a transmission error has occurred. In [17], detection is performed at the transport coder/decoder by adding header information. The indices are packetized into packets, each containing a header and a payload field. The header contains a sequence-number subfield that is consecutive for sequentially transmitted packets; at the transport decoder, the sequence number is used for packet-loss detection. Forward error correction (FEC) can also be used to detect errors at the transport level: error-correction encoding is applied to segments of the encoder's output bit stream, and error-correction decoding at the decoder detects and corrects some bit errors. Anti-gray coding (AGC) [12] has a high detection rate when the channel's random bit error rate (BER) is low. In AGC, binary indices are assigned to the codewords so that the 1-bit neighbors of a code-vector are as far apart as possible; once an error corrupts a codeword, the decoded code-vector differs greatly from what it should be, so the error can be detected easily. However, AGC cannot detect errors completely. We use the algorithm in [13] to correct or conceal channel errors in a VQ-encoded image: verification information (a checksum) is embedded in the indices of the VQ-encoded data, and when the indices are transmitted over a noisy channel, this information helps with error detection. Once indices are judged erroneous, the system conceals them by padding in the new indices yielded by the interpolation.

Fig. 4. Recovery process diagram.

3. Simulation result

The quality measure for a reconstructed image is the peak signal-to-noise ratio (PSNR):

\text{PSNR} = 10 \log_{10} \left( \frac{255^2}{\text{MSE}} \right) \; (\text{dB}) \qquad (8)

Fig. 5. System diagram of VQ indices recovery.

where the MSE is given as follows:

\text{MSE} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} (y_{ij} - \hat{y}_{ij})^2 \qquad (9)
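Eqs. (8) and (9) amount to the following (a routine sketch, our own code):

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio of Eqs. (8)-(9) for 8-bit gray images."""
    orig = np.asarray(original, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((orig - rec) ** 2)            # Eq. (9)
    return 10.0 * np.log10(255.0 ** 2 / mse)    # Eq. (8)
```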

M × N is the image size, and y_ij and ŷ_ij denote the pixel values at location (i, j) of the original and reconstructed images, respectively. We select the 512 × 512 gray-level images Lenna, Tiffany, Boat, Landscape, and Grand Canyon as the test images. The codebook size is 256, and after repeated tests we select the threshold value ε = 128 to classify the codebook as described in the previous section; the choice of ε influences the performance of the system. The codebook is generated by the LBG algorithm, and the training images are Lenna, Fruit, and F-16. In the following experiments, all images are encoded with the same codebook. To show the performance, we compare the proposed method with the method of [13], a low-pass filter, one-way Lagrange interpolation, and the proposed two-way Lagrange interpolation. Table 3 lists simulation results at different loss rates and the PSNR values of the images reconstructed by the different methods. Because a low-pass filter is well known to suppress random noise, we also include it in the tests. Table 4 gives the results of recovering Tiffany and Boat by the different methods at a loss rate of 5%. From Tables 3 and 4, we can see that the proposed method recovers images efficiently. The differences among the low-pass filter and the one-way and two-way Lagrange interpolations are not obvious at low loss rates, but at high loss rates, or when the loss falls in edge blocks, the two-way Lagrange interpolation gives the best results of all the compared methods. We now show the experimental results of the preprocessing that permutes the codebook. Fig. 6 is the original 512 × 512 Lenna image. The codebook used to encode Lenna was generated by LBG with 256 code-vectors, so eight bits represent each index. Fig. 7 shows the collection of indices after VQ encoding.

It looks like a reduced version of the Lenna image, as can be observed from Fig. 7. Each pixel in Fig. 7 is the index of a 4 × 4 code-vector in the codebook, so the size of Fig. 7 is 128 × 128. We can see that the data in the index map have a high correlation between adjacent pixels. Without the codebook preprocessing, the index map is chaotic and its values are distributed at random, as shown in Fig. 8; in that case, all prediction methods for index recovery give poor results. Fig. 9 is the index map at

Fig. 6. Original Lenna image.

a loss rate of 1%. Fig. 10 is the index map reconstructed by the proposed Lagrange interpolation method. The conventional VQ-decoded Lenna without any index loss is shown in Fig. 11; its PSNR is 30.154 dB. Figs. 12a–d and 13a–d show the test images at loss rates of 1% and 5%. Fig. 12b shows the image reconstructed by the method of [13], a recently published approach, at a loss rate of 1%; Fig. 13b is the corresponding result at a loss rate of 5%. Figs. 12c and 13c show the images recovered at the two loss rates by a low-pass filter, which removes the noise-like lost indices; because the low-pass filter is applied to the whole image, its computational burden is higher than that of the proposed method. Figs. 12d and 13d show the test images at the two loss rates with the lost indices estimated and recovered by two-way Lagrange interpolation. The difference between Figs. 11 and 12d is quite small; the visual difference is almost negligible. Figs. 14–19 show the Landscape and Grand Canyon test images, the images recovered by the proposed method, and the original VQ-decoded images. Note that the permutated codebook is the same one used to encode Lenna, Boat, and Tiffany. Figs. 15 and 18 are the conventional VQ-reconstructed images without any index loss; the loss rate is 1% for Figs. 16 and 19. The PSNR degradation of the proposed method relative to the no-loss case is quite small. Our

Fig. 7. Indices map of Lenna with codebook permutation.

Table 3
The quality of the reconstructed image (Lenna), in dB, for different methods and loss rates

Lost rate (%)      0       0.1     0.5     1       5       10
No recovery        30.154  29      26.31   24.307  18.438  15.802
[13]               30.154  30.115  29.942  29.671  28.013  26.142
Low-pass filter    30.154  30.097  29.822  29.607  27.331  25.150
One-way Lagrange   30.154  30.105  29.828  29.698  26.921  23.701
Two-way Lagrange   30.154  30.134  30.0    29.735  28.418  27.215

Table 4
The quality of the reconstructed image, in dB, for different methods at a loss rate of 5%

Method             Tiffany  Boat
No recovery        20.134   17.819
[13]               27.364   26.369
Low-pass filter    26.215   26.092
One-way Lagrange   28.583   26.512
Two-way Lagrange   28.934   27.101

Fig. 8. Indices map without codebook permutation.

Fig. 9. Indices map at lost rate = 1%.

Fig. 10. Recovered index map.


Fig. 11. The VQ reconstructed Lenna image. PSNR = 30.154 dB.

Fig. 14. Original Landscape image.

Fig. 15. VQ reconstructed Landscape image. PSNR = 25.10 dB.

Fig. 12. (a) Lost rate 1% (Lenna). PSNR = 24.307 dB. (b) Recovered image by [13] at lost rate 1%. PSNR = 29.671 dB. (c) Low-pass filter at lost rate 1%. PSNR = 29.607 dB. (d) Two-way Lagrange recovery at lost rate 1%. PSNR = 29.735 dB.

Fig. 16. Recovered Landscape image. PSNR = 24.94 dB.

Fig. 17. Original Grand Canyon image.

Fig. 13. (a) Lost rate 5% (Lenna). PSNR = 18.438 dB. (b) Recovered image by [13] at lost rate 5%. PSNR = 28.013 dB. (c) Low-pass filter at lost rate 5%. PSNR = 27.331 dB. (d) Two-way Lagrange recovery at lost rate 5%. PSNR = 28.418 dB.

Fig. 18. VQ reconstructed Grand Canyon image. PSNR = 26.99 dB.

results show that the proposed method achieves better results than the other methods. In addition, the recovery time of the proposed method is quite short, which shows that the method could be used in time-constrained applications.

Fig. 19. Recovered Grand Canyon image. PSNR = 26.77 dB.

4. Conclusion

We present an efficient data recovery method for VQ-encoded image transmission. During data transmission, if data are lost, the sender is usually asked to transmit them again; but if the image or video is valid only for a limited period, re-transmitting the data wastes time. Hence, using the correctly received data to estimate and recover the lost data is efficient in time-constrained situations. We hide verification information (i.e., checksums) in the indices of the VQ-encoded data; when the indices are transmitted over a noisy channel, this information helps with loss detection. In our simulations, constituting a Lagrange interpolating polynomial from the correct indices and using it to estimate the lost indices reconstructs images of good quality; the recovered image is almost indistinguishable from the original VQ-decoded images. These results show that our method can efficiently improve transmission quality when a time constraint must be met. The application of the Lagrange polynomial to this topic is successful, and the method is suitable for noisy channels such as mobile ones.

References

[1] K.R. Rao, J.J. Hwang, Techniques and Standards for Image, Video, and Audio Coding, Prentice Hall, Englewood Cliffs, NJ, 1996.
[2] K. Sayood, Introduction to Data Compression, Morgan Kaufmann, San Francisco, CA, USA, 2000.
[3] A. Gersho, R.M. Gray, Vector Quantization and Signal Compression, Kluwer, Norwell, MA, 1992.
[4] Y.G. Wu, Design of a fast vector quantization image encoder, Opt. Eng. 46 (8) (2007) 08700801–08700812.
[5] N.M. Nasrabadi, R.A. King, Image coding using vector quantization: a review, IEEE Trans. Commun. 36 (8) (1988) 957–971.
[6] Y. Linde, A. Buzo, R.M. Gray, An algorithm for vector quantizer design, IEEE Trans. Commun. COM-28 (1) (1980) 84–95.
[7] Y.G. Wu, K.L. Fan, Fast vector quantization image coding by mean value predictive algorithm, J. Electron. Imaging 13 (2) (2004) 324–333.
[8] K.L. Hung, C.C. Chang, T.S. Chen, Reconstruction of lost blocks using codeword estimation, IEEE Trans. Consum. Electron. 45 (4) (1999) 1190–1199.
[9] X. Lee, Y.Q. Zhang, A. Leon-Garcia, Information loss recovery for block-based image coding techniques: a fuzzy logic approach, IEEE Trans. Image Process. 4 (3) (1995) 259–273.
[10] W.J. Zeng, Y.F. Huang, Boundary matching detection for recovering erroneously received VQ indices over noisy channels, IEEE Trans. Circ. Syst. Vid. Tech. 6 (1) (1996) 108–113.
[11] K. Zeger, A. Gersho, Pseudo-gray coding, IEEE Trans. Commun. 38 (1990) 2147–2158.
[12] C.J. Kuo, C.H. Lin, C.H. Yeh, Noise reduction of VQ encoded images through anti-gray coding, IEEE Trans. Image Process. 8 (1) (1999) 33–40.
[13] K.L. Hung, C.C. Chang, Error prevention and resilience of VQ encoded images, Signal Process. 83 (2003) 431–437.
[14] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed., Dover, New York, 1972 (pp. 878–879 and 883).
[15] H. Jeffreys, B.S. Jeffreys, Methods of Mathematical Physics, 3rd ed., Cambridge University Press, Cambridge, England, 1988 (pp. 260–261).
[16] Available from: .
[17] Y. Wang, Q.F. Zhu, Error control and concealment for video communication: a review, Proc. IEEE 86 (5) (1998) 974–997.