An efficient block based lossless compression of medical images


Optik 127 (2016) 754–758


Optik journal homepage: www.elsevier.de/ijleo

D. Venugopal a,∗, S. Mohan a, Sivanantha Raja b

a Department of ECE, K.L.N. College of Information Technology, Madurai, Tamilnadu, India
b Department of ECE, A.C. College of Engineering & Technology, Karaikudi, Tamilnadu, India

Article info

Article history:
Received 23 January 2015
Accepted 26 October 2015

Keywords:
Integer wavelet transform
Walsh Hadamard transform
Truncated integer part
Truncated residue part
DC prediction

Abstract

Medical images play a significant role in the diagnosis of diseases and require a simple and efficient compression technique. This paper proposes a block based lossless image compression algorithm using the Hadamard transform and Huffman encoding, a simple algorithm of low complexity. The input image is first decomposed by the integer wavelet transform (IWT), and the LL sub band is transformed by the lossless Hadamard transform (LHT) to eliminate the correlation inside each block. DC prediction (DCP) is then used to remove the correlation between adjacent blocks. The non-LL sub bands are validated for non-transformed blocks (NTB) based on a threshold. The main contributions of this method are a simple DCP, effective NTB validation and truncation. Depending on the NTB result, a non-LL band is either encoded directly or first transformed by LHT and truncated. Finally, all coefficients are compressed with a Huffman encoder. Simulation results show that the proposed algorithm yields a better compression ratio than existing lossless compression algorithms such as JPEG 2000. The algorithm is tested with standard non-medical images as well as a set of medical images, and provides optimum compression ratios efficiently.

© 2015 Elsevier GmbH. All rights reserved.

1. Introduction

Diagnosis of diseases with the aid of medical images, and the storage of those images, is highly important but consumes considerable bandwidth. For telemedicine applications, medical images need to be transmitted to different destinations. Modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Electrocardiography (ECG) and Positron Emission Tomography (PET) must be stored and, when necessary, sent for review by another medical expert. These vast amounts of data occupy a large amount of memory and increase transmission time and traffic. Hence, medical image compression is essential to reduce the storage and bandwidth requirements.

Many advanced image compression methods have been proposed in response to the increasing demand for medical images. JPEG 2000 is a high performance image compression algorithm [2]. SPIHT is a good compression algorithm, but it requires image-level access and cannot eliminate the correlation inside the sub bands [3]. Most importantly, these are lossy schemes and are not efficient for medical images. A 1-D modified Hadamard transformation (MHT) based on JPEG 2000 has been proposed, but it is suitable only for the LL sub band [4]. A lossless method introduced in [5], based on hierarchical prediction and significant bit truncation (SBT), occupies a huge amount of memory to buffer the sub band coefficients. A lossless algorithm using sub band decomposition with 1-D MHT is proposed in [6–8]. 1-D MHT can decorrelate within a vector, but for vectors between two adjacent rows it cannot further improve the compression percentage (CP).

The objective of this work is to achieve an improved compression ratio for medical images. Decomposition is done through IWT, and LHT and DCP are performed to remove the correlation inside each block and between adjacent blocks, respectively. The second and third stages of transformation are applied only to the LL sub band, while the non-LL sub bands are validated for further truncation or direct encoding. After validation, the non-truncated non-LL bands are encoded directly; the LL band and the validated non-LL bands are truncated and encoded with a lossless Huffman encoder.

The remainder of the paper is organized as follows. Section 2 explains the technical background of this work. Section 3 describes the proposed compression method based on LHT and Huffman encoding, with the detailed workflow, the necessary representations and equations, and the stage by stage progress. Section 4 evaluates the performance, and Section 5 concludes the paper.

∗ Corresponding author. Tel.: +91 9843463232. E-mail address: [email protected] (D. Venugopal).
http://dx.doi.org/10.1016/j.ijleo.2015.10.154
0030-4026/© 2015 Elsevier GmbH. All rights reserved.


Table 2. Adjusted TIP and TRPARR.

TIP (8-bit)   TRPARR (2-bit)   Adjusted TIP (8-bit)   Adjusted TRPARR (3-bit)
10011011      11               10011011               011
11111010      01               11111011               101
11100010      10               11100011               110
11101110      00               11101111               100
11111101      00               11111110               100
11111001      10               11111010               110
00001000      00               00001000               000
00000001      00               00000001               000
11101101      10               11101110               110
00000101      00               00000101               000
00001000      00               00001000               000
11111111      10               00000000               110
11100101      00               11100110               100
00000011      00               00000011               000
00000111      11               00000111               011
00001100      01               00001100               001

Fig. 1. Basic filter bank for biorthogonal wavelet transform.

Fig. 2. Interpretation of the proposed algorithm.

2. Technical background

2.1. Integer wavelet transform for decomposition

A 3-D integer wavelet transform has been proposed, achieving a CR of 3.27 for an MRI image and 4.13 for a CT image [19]. An integer version of every wavelet transform employing finite filters can be built with a finite number of lifting steps [18]. A new set of decomposition bases was presented in [17], where a Lanczos filter is used in the lifting scheme; the CR was increased by about 10% for noisy images and about 30% for MRI images. The integer wavelet transform (IWT) is lossless and also simple to implement in practice [15]. The standard lifting scheme (LS) achieves an invertible lossless integer transformation [13] and can be easily implemented since it involves only simple arithmetic operations [14].

In the ordinary wavelet transform, the output coefficients of the filter are usually in floating point representation. Input images are matrices of integer values, but the filtered output no longer consists of integers, which results in quantization loss. For lossless coding it is therefore necessary to construct an invertible mapping from an integer image input to an integer wavelet representation. The wavelet transform can be considered a sub band transform and implemented with a filter bank; Fig. 1 shows the general block scheme of a one-dimensional biorthogonal wavelet transform. The 5/3 lifting wavelet transform (LWT) is utilized in this work, and its prediction and update equations are

$$Y(2n+1)_{\mathrm{prediction}} = \begin{cases} X(2n+1) - \left\lfloor \dfrac{X(2n) + X(2n+2)}{2} \right\rfloor & \text{normal} \\ X(2n+1) - X(2n) & \text{odd end} \end{cases} \tag{1}$$

$$Y(2n)_{\mathrm{update}} = \begin{cases} X(2n) + \left\lfloor \dfrac{Y(2n-1) + Y(2n+1) + 2}{4} \right\rfloor & \text{normal} \\ X(2n) + \left\lfloor \dfrac{Y(2n-1) + 1}{2} \right\rfloor & \text{even end} \end{cases} \tag{2}$$

where Y is the one dimensional transformation of X, which is extended periodically for the prediction and update separately; "odd end" and "even end" denote the embedded extensions used for the boundary extension problem [12]. When updating the first datum of the image, the "even begin" case of (2) becomes

$$Y(0) = X(0) + \left\lfloor \frac{Y(1) + 1}{2} \right\rfloor \tag{3}$$

where Y(1) is calculated by the "normal" case of (1):

$$Y(1) = X(1) - \left\lfloor \frac{X(0) + X(2)}{2} \right\rfloor \tag{4}$$
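The lifting steps of Eqs. (1)–(4) can be sketched in a few lines of integer code. The following is a minimal illustration, not the paper's implementation: the mirrored boundary handling is an assumption chosen to be consistent with Eqs. (3) and (4), and the function names are invented for the sketch.

```python
# Sketch of the 1-D integer 5/3 lifting transform of Eqs. (1)-(4).
# Boundary handling mirrors the missing neighbour, matching Eq. (3).

def iwt53_forward(x):
    """One level of the forward 1-D 5/3 lifting transform (integers only)."""
    n = len(x)
    y = list(x)
    # Prediction step (Eq. (1)): odd samples become detail coefficients.
    for i in range(1, n, 2):
        left = x[i - 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]   # "odd end" mirror
        y[i] = x[i] - (left + right) // 2
    # Update step (Eq. (2)): even samples become approximation coefficients.
    for i in range(0, n, 2):
        left = y[i - 1] if i - 1 >= 0 else y[i + 1]   # "even begin" mirror, Eq. (3)
        right = y[i + 1] if i + 1 < n else y[i - 1]   # "even end" mirror
        y[i] = x[i] + (left + right + 2) // 4
    return y

def iwt53_inverse(y):
    """Exact integer inverse: undo the update, then the prediction."""
    n = len(y)
    x = list(y)
    for i in range(0, n, 2):
        left = y[i - 1] if i - 1 >= 0 else y[i + 1]
        right = y[i + 1] if i + 1 < n else y[i - 1]
        x[i] = y[i] - (left + right + 2) // 4
    for i in range(1, n, 2):
        left = x[i - 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]
        x[i] = y[i] + (left + right) // 2
    return x

# Lossless round trip on one row of the worked example block:
signal = [93, 114, 112, 111, 112, 244, 227, 210]
assert iwt53_inverse(iwt53_forward(signal)) == signal
```

Because every step adds or subtracts the same floored integer quantity, the inverse recovers the input exactly, which is the invertible integer mapping required for lossless coding.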

Table 1. Stage wise results.

LL coeff. (8-bit)   2-D LHT coeff. (16-bit)   TIP (8-bit)   TRP (4-bit)   TRPARR (2-bit)
01011101            0000100110111110          10011011      1110          11
01110010            1111111110100100          11111010      0100          01
01110000            1111111000101000          11100010      1000          10
01101111            1111111011100010          11101110      0010          00
01110000            1111111111010000          11111101      0000          00
11110100            1111111110011010          11111001      1010          10
11100011            0000000010000010          00001000      0010          00
11010010            0000000000010000          00000001      0000          00
01101001            1111111011011000          11101101      1000          10
11000001            0000000001010010          00000101      0010          00
10111000            0000000010000010          00001000      0010          00
01111000            1111111111111000          11111111      1000          10
01100101            1111111001010010          11100101      0010          00
11101111            0000000000110000          00000011      0000          00
10101010            0000000001111100          00000111      1100          11
10011111            0000000011000110          00001100      0110          01
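The TIP/TRP/TRPARR split tabulated in Table 1 amounts to fixed bit-field extraction on each 16-bit LHT coefficient. The sketch below is illustrative; the exact bit positions (TIP as the middle 8 bits, TRP as the low 4 bits, TRPARR as the top 2 bits of TRP) are inferred from the tabulated values rather than stated as code in the paper.

```python
# Bit-field split behind Table 1 (positions inferred from the table).

def split_coefficient(c):
    """Split a 16-bit two's-complement LHT coefficient into (TIP, TRP, TRPARR)."""
    u = c & 0xFFFF            # 16-bit two's-complement view of the coefficient
    trp = u & 0xF             # truncated residue part: low 4 bits
    tip = (u >> 4) & 0xFF     # truncated integer part: middle 8 bits
    trparr = trp >> 2         # residue after redundancy reduction: 2 bits
    return tip, trp, trparr

# First coefficient of the worked example: 2494 = 0000 1001 1011 1110
assert split_coefficient(2494) == (0b10011011, 0b1110, 0b11)

# A negative coefficient, -92 = 1111 1111 1010 0100: the sign-extended TIP
# together with TRP reconstructs the coefficient exactly.
tip, trp, trparr = split_coefficient(-92)
assert (tip, trp, trparr) == (0b11111010, 0b0100, 0b01)
assert (tip - 256 if tip & 0x80 else tip) * 16 + trp == -92
```

Discarding the two low bits of TRP (keeping only TRPARR) is where the representation stops being exact on its own, which is why the adjusted values of Table 2 are introduced later.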


Hence, according to Eqs. (3) and (4), a row transformation can be performed with only three data values (X(0), X(1), X(2)). For the same reason, a column transformation also needs only three values. Thus, three rows of image data are enough to implement the 2-D IWT: there is no need to wait for the entire image before transforming, and only two rows of data must be buffered.

2.2. Lossless Hadamard transform for redundancy removal

The Hadamard transform is used for frequency decorrelation because of its simplicity and speed [6–8]. Neglecting the normalization factor, applying the Hadamard matrix involves only additions and subtractions [1]. To remove the correlation inside each block, the 2-D lossless Hadamard transform (LHT) is used, which also accumulates the DC component precisely and thus helps the further removal of correlation. The 2-D LHT can be expressed as

$$L = HSH^{T} \tag{5}$$

where

$$H = H^{T} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix} \tag{6}$$

Since

$$\det(H) = 16 \tag{7}$$

and

$$H^{T}LH = H^{T}HSH^{T}H = 16S \tag{8}$$

the output coefficients are 16 times larger than the inputs [10]. Philips and Denecker [11] proposed a lossless version of the Hadamard transform, which decomposes H as

$$H = A \cdot B \tag{9}$$

$$|\det(B)| = 1 \tag{10}$$

Then the 2-D LHT can be defined as

$$L = BSB^{T} \tag{11}$$

From [16], all 16 first LSBs of the transformed coefficients are the same, so a single bit is enough for the first LSB plane, while only 5, 11 and 15 bits are required for the second, third and fourth LSB planes respectively; the other bits are redundant. In total, only 32 bits out of 64 need to be encoded, which removes 50% of the redundancy. Gray scale medical image compression of various sizes has been implemented using the Ripplet transform combined with the Huffman coding algorithm [9]. Daubechies wavelet transform, lifting wavelet transform and Huffman encoding are used to compress images in [19], but the CR value is low. 2-D LHT with Huffman coding removes the redundancy in each macro block but is rather complex [1].

3. Proposed compression algorithm

The proposed compression algorithm applies three different transformations to remove redundancy, namely 2-D IWT, 2-D LHT and DCP. Truncation is then applied, and the coefficients are coded separately by a Huffman encoder to compress the image in a lossless manner. The overall scheme is shown in Fig. 2. The algorithm is as follows:

Step 1: Acquire the input medical images such as MRI, CT, etc.
Step 2: Decompose them by 2-D lossless IWT (LIWT) into the LL, HL, LH and HH sub bands.
Step 3: Validate each non-LL band for NTB based on a threshold.
Step 4: According to the result, either encode the band directly, or transform it again by 2-D LHT and truncate.
Step 5: For the LL band, apply 2-D LHT, predict the DC by DCP, and truncate.
Step 6: Encode the resulting DC and AC coefficients separately by the Huffman encoder.
Step 7: Analyze the compression ratio and compare with the existing algorithms.

Fig. 3. Proposed DC Prediction.

4. Simulation experiments

Natural as well as medical images are used for the simulation experiments. One CT image is taken for a detailed explanation of the algorithm flow, as follows. The CT image is decomposed by 2-D LIWT into sub bands. The 2-D LHT is first applied to each 4 × 4 macro block of the LL band alone, and all coefficients are written in binary form as in Table 1; LL coefficients are represented in 8 bits and 2-D LHT coefficients in 16 bits. For redundancy reduction, the first 4 bits and the last 4 bits of each 2-D LHT coefficient are then removed: the removed last 4 bits form the truncated residue part (TRP), and the remaining 8 bits form the truncated integer part (TIP). Then only two bits of each TRP are retained, giving TRPARR (truncated residue part after redundancy reduction). For the block considered, the stage wise results are tabulated in Table 1, and the resultant matrices are

$$\begin{bmatrix} 93 & 114 & 112 & 111 \\ 112 & 244 & 227 & 210 \\ 105 & 193 & 184 & 120 \\ 101 & 239 & 170 & 159 \end{bmatrix} \xrightarrow{\text{2-D LHT}} \begin{bmatrix} 2494 & -92 & -472 & -286 \\ -48 & -102 & 130 & 16 \\ -296 & 82 & 130 & -8 \\ -430 & 48 & 124 & 198 \end{bmatrix}$$

$$\rightarrow \begin{bmatrix} 155 & -6 & -30 & -18 \\ -3 & -7 & 8 & 1 \\ -19 & 5 & 8 & -1 \\ -27 & 3 & 7 & 12 \end{bmatrix} + \begin{bmatrix} -2 & 4 & -8 & 2 \\ 0 & -6 & 2 & 0 \\ -8 & 2 & 2 & -8 \\ 2 & 0 & -4 & 6 \end{bmatrix} \xrightarrow{\text{redundancy removed}} \begin{bmatrix} 155 & -6 & -30 & -18 \\ -3 & -7 & 8 & 1 \\ -19 & 5 & 8 & -1 \\ -27 & 3 & 7 & 12 \end{bmatrix} + \begin{bmatrix} 3 & 1 & 2 & 0 \\ 0 & 2 & 0 & 0 \\ 2 & 0 & 0 & 2 \\ 0 & 0 & 3 & 1 \end{bmatrix} \tag{12}$$

In fact, more zeros are preferred after truncation, so the following adjustment is made:

$$I_{i,j} = I_{i,j} + 1, \quad I_{i,j} < 0 \tag{13}$$
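As a numerical check of the 2-D Hadamard step on the worked 4 × 4 LL block, the matrices can simply be multiplied out. This sketch uses the plain (unfactored) Hadamard matrix of Eq. (6), not the lossless A·B factorization of [11], so it only verifies the arithmetic, not the lossless implementation.

```python
# Verify L = H S H^T for the worked 4x4 LL block (Eq. (5) with Eq. (6)).

H = [[1, 1, 1, 1],
     [1, 1, -1, -1],
     [1, -1, -1, 1],
     [1, -1, 1, -1]]          # symmetric, so H == H^T

S = [[93, 114, 112, 111],
     [112, 244, 227, 210],
     [105, 193, 184, 120],
     [101, 239, 170, 159]]    # LL macro block of the worked example

def matmul(A, B):
    """Plain integer matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

HT = [list(r) for r in zip(*H)]
L = matmul(matmul(H, S), HT)   # L = H S H^T

# First row matches the transformed block quoted in the text.
assert L[0] == [2494, -92, -472, -286]
# The DC term accumulates every pixel of the block.
assert L[0][0] == sum(sum(row) for row in S)
```

The DC check makes concrete the remark after Eq. (8): with the unnormalized H, the DC output is the full block sum, which is why the coefficients need 16 bits before truncation.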


Table 3. Performance comparison.

Image         Size          Existing algorithm (LHT + EG)   JPEG 2000   Proposed algorithm (LHT + Huffman)
House         256 × 256     45.2                            48.6        49.2
Finger print  256 × 256     43.4                            45.6        47.4
Lena          512 × 512     40.6                            45.8        46.1
Ocean         512 × 512     35.2                            38.1        38.8
Man           1024 × 1024   33.6                            37.6        37.7
Airplane      1024 × 1024   36.3                            39.5        39.5
Airfield      2048 × 2048   35.0                            38.1        38.1
Farmland      2048 × 2048   62.8                            65.7        66.2

Thus (12) is adjusted to (14), and Table 2 shows the adjusted TIP and TRPARR.

$$\begin{bmatrix} 155 & -6 & -30 & -18 \\ -3 & -7 & 8 & 1 \\ -19 & 5 & 8 & -1 \\ -27 & 3 & 7 & 12 \end{bmatrix} + \begin{bmatrix} 3 & 1 & 2 & 0 \\ 0 & 2 & 0 & 0 \\ 2 & 0 & 0 & 2 \\ 0 & 0 & 3 & 1 \end{bmatrix} \xrightarrow{\text{adjustment}} \begin{bmatrix} 155 & -5 & -29 & -17 \\ -2 & -6 & 8 & 1 \\ -18 & 5 & 8 & 0 \\ -26 & 3 & 7 & 12 \end{bmatrix} + \begin{bmatrix} 3 & 5 & 6 & 4 \\ 4 & 6 & 0 & 0 \\ 6 & 0 & 0 & 6 \\ 4 & 0 & 3 & 1 \end{bmatrix} \tag{14}$$

4.1. DC predictions for LL sub band coding

Each 4 × 4 macro block of TIP contains 15 AC coefficients and one DC coefficient as its first (top left) element. The DC coefficient is separated in every 4 × 4 macro block of TIP by masking the AC coefficients, i.e. simply multiplying the AC coefficients by zero. The proposed DCP is very simple to implement and is depicted in Fig. 3.

4.2. Validation for non-LL sub bands coding

The non-LL sub band coefficients of the 2-D IWT are validated for NTB before being passed directly to the Huffman encoder for lossless compression. Based on a threshold fixed by the user, the NTB check decides whether a sub band has optimum entropy with little data or minimum entropy with more data. In the first case the band is encoded directly; in the second case it is further transformed by 2-D LHT to remove the unwanted data and then encoded after truncation.

4.3. Huffman encoder

Huffman coding is a statistical lossless coding technique guaranteed to generate an exact replica of the input data stream. It is suitable for database records, spreadsheets, word processing files and image formats where the loss of even a single bit could be catastrophic. Medical images, in which loss of detail can lead to severe diagnostic failures, are therefore well suited to Huffman coding [17].

5. Results and discussion

The test images House, Fingerprint, Lena, Ocean, Man, Airplane, Airfield and Farmland were used as input images for compression, and the results are compared with the existing algorithm and with JPEG 2000 [16] in Table 3. The compression percentage is calculated by

$$\mathrm{CP} = 100\left(1 - \frac{\text{compressed file size}}{\text{original file size}}\right) \tag{15}$$

Fig. 4. Input medical images.
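The masking used for DC separation in Section 4.1 can be sketched as follows. The mask layout is an assumption based on the description (DC at the top left, the 15 AC positions multiplied by zero), and the helper name is illustrative.

```python
# Sketch of the DC/AC separation by masking (Section 4.1).

def separate_dc(tip_block):
    """Return (dc_plane, ac_block) for one 4x4 TIP macro block."""
    # Mask keeps only the top-left (DC) position; its complement keeps the AC.
    mask = [[1 if (r, c) == (0, 0) else 0 for c in range(4)] for r in range(4)]
    dc_plane = [[tip_block[r][c] * mask[r][c] for c in range(4)]
                for r in range(4)]
    ac_block = [[tip_block[r][c] * (1 - mask[r][c]) for c in range(4)]
                for r in range(4)]
    return dc_plane, ac_block

# Adjusted TIP block of Eq. (14):
tip = [[155, -5, -29, -17],
       [-2, -6, 8, 1],
       [-18, 5, 8, 0],
       [-26, 3, 7, 12]]
dc, ac = separate_dc(tip)
assert dc[0][0] == 155 and sum(map(sum, dc)) == 155  # only the DC survives
assert ac[0][0] == 0 and ac[1][2] == 8               # AC with DC masked out
```

Separating the two streams lets the DC values of neighbouring blocks be predicted by DCP while the AC values go to the entropy coder untouched.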

Table 3 implies that the proposed method is superior to the existing method and to JPEG 2000 in terms of compression percentage for non-medical images. Since the performance is better, the algorithm was further applied to a set of medical images. All simulation experiments were conducted in the same test environment, using a set of 50 standard medical images; some of the CT images are shown in Fig. 4. The proposed lossless compression algorithm provides optimum compression percentages for medical images, and is compared with JPEG 2000 in Table 4. Table 4 shows the better performance of the proposed algorithm for medical images: the compression percentage is considerably increased, and the method is less complex.
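A minimal Huffman coder of the kind Section 4.3 relies on can be sketched with a binary heap. This is illustrative only: the paper specifies no code tables or bitstream format, and the symbol stream below is just the AC part of the adjusted TIP block used as sample data.

```python
# Minimal Huffman code construction sketch (no bitstream format implied).
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a prefix-free code {symbol: bitstring} for the input stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                     # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # Heap entries: (count, unique tiebreak, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, i2, merged))
    return heap[0][2]

# AC coefficients of the adjusted TIP block in (14); DC 155 is coded separately.
ac = [-5, -29, -17, -2, -6, 8, 1, -18, 5, 8, 0, -26, 3, 7, 12]
code = huffman_code(ac)
assert set(code) == set(ac)
# Prefix-free: no codeword is a prefix of another, so decoding is unambiguous.
words = list(code.values())
assert all(not a.startswith(b) for a in words for b in words if a != b)
```

Because the code is prefix-free, the concatenated bitstring for the DC and AC streams can be decoded exactly, which is what makes the final stage lossless.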


Table 4. Computation of compression percentage.

Medical image             Size            JPEG 2000 (%)   Proposed algorithm (%)
Heart                     968 × 968       42.1            42.23
Knee                      1024 × 1024     54.66           56.87
Safire pulmonary          512 × 512       52.04           52.93
Body angio (front view)   968 × 968       54.98           55.70
Body angio (back view)    968 × 968       52.21           52.80
Safire pulmonary          632 × 560       45.20           46.28
DICOM images              Varying sizes   Average of 48   Average of 48.6

6. Conclusion

In the proposed algorithm, lossless compression of medical images is performed using LHT and Huffman coding, with simple validation of NTB, proper truncation and a fair evaluation of the DC coefficient. The algorithm was tested with various medical images, and the computed compression percentage is optimum. It provides a better compression ratio than the existing algorithm when tested with sets of images of different sizes and resolutions. It was also validated with three medical officers of different specializations (surgery, general medicine and pediatrics), who confirmed that it meets clinically acceptable standards. It is therefore concluded that this algorithm is well suited for medical images, satisfies the requirements of bandwidth-effective storage, and is useful for telemedicine applications. Future work may extend the proposed algorithm to color medical image compression and evaluate its performance.

Acknowledgement

We thank Dr. Arun, Medical Officer, Ramakrishna Hospital, Coimbatore, for providing all clinical support for this work. We also thank the other surgeon and the pediatrician for their timely support.

References

[1] T. Mochizuki, Bit pattern redundancy removal for Hadamard transformation coefficients and its application to lossless image coding, Electron. Commun. Jpn. (Part 3) 80 (6) (1997) 1–10.

[2] D. Taubman, High performance scalable image compression with EBCOT, IEEE Trans. Image Process. 9 (7) (2000) 1158–1170.
[3] C.-C. Cheng, P.-C. Tseng, L.-G. Chen, Multimode embedded compression codec engine for power-aware video coding system, IEEE Trans. Circuits Syst. Video Technol. 19 (2) (2009) 141–150.
[4] C.H. Son, J.W. Kim, S.G. Song, S.M. Park, Y.M. Kim, Low complexity embedded compression algorithm for reduction of memory size and bandwidth requirements in the JPEG2000 encoder, IEEE Trans. Consum. Electron. 56 (4) (2010) 2421–2429.
[5] J. Kim, C.-M. Kyung, A lossless embedded compression using significant bit truncation for HD video coding, IEEE Trans. Circuits Syst. Video Technol. 20 (6) (2010) 848–860.
[6] T.L.B. Yng, B.-G. Lee, H. Yoo, Low complexity, lossless frame memory compression using modified Hadamard transform and adaptive Golomb–Rice coding, IADIS Int. Conf. Comput. Gr. Vis. (2008) 89–96.
[7] T.L.B. Yng, B.-G. Lee, H. Yoo, A low complexity and lossless frame memory compression for display devices, IEEE Trans. Consum. Electron. 54 (3) (2008) 1453–1458.
[8] Y.-H. Lee, Y.-Y. Lee, H.-Z. Lin, T.-H. Tsai, A high-speed lossless embedded compression codec for high-end LCD applications, in: Proc. IEEE Asian Solid-State Circuits Conf. (A-SSCC), 2008, pp. 185–188.
[9] S. Juliet, E.B. Rajsingh, K. Ezra, A novel medical image compression using Ripplet transform, J. Real-Time Image Process. (2013), http://dx.doi.org/10.1007/s11554-013-0367-9.
[10] S. Takamura, Y. Yashima, H.264-based lossless video coding using adaptive transforms, Proc. IEEE ICASSP (2005) 301–304.
[11] W. Philips, K. Denecker, A lossless version of the Hadamard transform, in: Proc. ProRISC Workshop Circuits Syst. Signal Process., 1997, pp. 443–450.
[12] K.C.B. Tan, T. Arslan, Low power embedded extension algorithm for lifting based discrete wavelet transform in JPEG2000, Electron. Lett. 37 (25) (2001) 1328–1330.
[13] A.K. Al-Sulaifanie, A. Ahmadi, M. Zwolinski, Very large scale integration architecture for integer wavelet transform, IET Comput. Digit. Tech. 4 (6) (2010) 471–483.
[14] M. Grangetto, E. Magli, M. Martina, G. Olmo, Optimization and implementation of the integer wavelet transform for image coding, IEEE Trans. Signal Process. 11 (2002) 596–604.
[15] Q. Li, G. Ren, Q. Wu, X. Zhang, Rate pre-allocated compression for mapping image based on wavelet and rate-distortion, Int. J. Light Electron Opt. 124 (14) (2013) 1836–1840.
[16] L. Yang, X. He, G. Zhang, L. Qinga, T. Che, A low complexity block-based adaptive lossless image compression, Optik 124 (2013) 6545–6552.
[17] J. Taquet, C. Labit, Optimized decomposition basis using Lanczos filters for lossless compression of biomedical images, IEEE Int. Workshop Multimed. Signal Process. (2010) 122–127.
[18] C. Lin, B. Zhang, Y.F. Zheng, Packed integer wavelet transform constructed by lifting scheme, IEEE Trans. Circuits Syst. Video Technol. 10 (8) (2000) 1496–1501.
[19] D.Y. Venkataramani, S.P. Banu, An efficient hybrid image compression scheme based on correlation of pixels for storage and transmission of image, Int. J. Comput. Appl. 18 (3) (2011) 0975–8887.