Computers and Electrical Engineering 56 (2016) 30–45
A 2D electrocardiogram data compression method using a sample entropy-based complexity sorting approach

Anukul Pandey a,∗, Barjinder Singh Saini a, Butta Singh b, Neetu Sood a

a Department of Electronics and Communication Engineering, Dr B R Ambedkar National Institute of Technology, Jalandhar, India
b Department of Electronics and Communication Engineering, Guru Nanak Dev University, Regional Campus, Jalandhar, India
Article info

Article history: Received 31 March 2015; Revised 13 October 2016; Accepted 13 October 2016

Keywords: Electrocardiogram (ECG); Pre-processing; Data compression; Sample entropy (SampEn); JPEG2000; Quality score; Complexity sorting
Abstract

This paper proposes an effective sample entropy (SampEn) based complexity sorting pre-processing technique for two-dimensional (2D) electrocardiogram (ECG) data compression. The novelty of the approach lies in its ability to compress the quasi-periodic ECG signal by exploiting both the intra- and the inter-beat correlations. The proposed method comprises the following steps: (1) QRS detection, (2) length normalization, (3) Dc equalization, (4) SampEn based nonlinear complexity sorting and (5) compression using a JPEG2000 codec. The performance has been evaluated over the 48 records of the MIT-BIH arrhythmia database. The average quality score (QS) measurements at different residual errors were 42.25, 4.73 and 2.75 for the percentage root mean square difference (PRD), PRD1024 and PRD normalized (PRDN), respectively. The work also reports extensive experimentation with the compressor for various durations of the ECG records (5–30 min, in 5-min increments). The proposed algorithm demonstrates significantly better performance than the contemporary state-of-the-art works in the literature.
1. Introduction

The incorporation of wireless communication technology into tele-cardiological platforms has played a vital role in the timely monitoring of electrocardiogram (ECG) signals, especially in remote areas. The ECG signal, which records the electrical activity of the heart, is normally characterized by clinical features based on its characteristic points (P, QRS, T) and intervals (PR-interval, QT-interval and RR-interval) that reflect the rhythmic electrical depolarization and repolarization of the atria and ventricles [1]. The redundant information in an ECG signal is due to the presence of inter- and intra-beat correlations.

Broadly, ECG data compression methods fall into one of three classes: direct, transform-domain and parameter extraction methods [1]. Direct methods reduce the number of ECG sample points in the time domain. This category includes turning point (TP), amplitude zone time epoch coding (AZTEC), improved modified AZTEC [2], coordinate reduction time encoding, the Fan algorithm, and ASCII character based encoding systems [3,4]. Transform-domain methods analyse the energy distribution, or its compaction, in another domain; examples include the Fourier transform, the Fourier descriptor, the Karhunen-Loeve transform (KLT) [1], the Walsh transform, the discrete cosine transform (DCT), DCT with modified stages [5] and the wavelet transform
Reviews processed and recommended for publication to the Editor-in-Chief by Associate Editor Dr. M. R. Daliri.
∗ Corresponding author. E-mail address: [email protected] (A. Pandey).
http://dx.doi.org/10.1016/j.compeleceng.2016.10.012
[6]. Compressed sensing models [7], fractal models [8] and an accuracy-driven sparse model [9] have been introduced more recently. Parameter extraction methods are based on dominant features extracted from the raw ECG signal; they include neural or syntactic, peak-picking and linear prediction [10] based methods.

Numerous researchers have reported ECG data compression procedures that formulate two-dimensional (2D) arrays from the ECG signal so that the employed encoder can better exploit the inter- and intra-beat correlations [11-13]. The "cut and align beats approach with 2D DCT" and the "period normalization and truncated SVD algorithm" are existing pre-processing techniques that achieve decent ECG compression results, and state-of-the-art image encoders, such as JPEG2000, are usually paired with this type of pre-processing step. The pre-processing method proposed here is a modification of the technique presented in [11].

In [12], the authors proposed a lossy ECG compression method based on translating the ECG signal into a 2D ECG array. After translation, a period sorting pre-processing approach was applied, which organizes all periods by length. In this way, Chou et al. [12] exploited the inter-beat correlation of the 2D ECG: they formulated a 2D ECG array, sorted it in ascending order of period length, and thereby grouped beats of similar period length. The analysis of Filho et al. [11], however, verified that this assumption does not hold for a large set of ECG signals from pathological subjects. Another pre-processing method, designed in [11], consists of a QRS detector, period length normalization, Dc equalization, complexity sorting and image transformation; its pre-processing stage centres on reducing the high-frequency content in the vertical direction of the equalized 2D ECG array. Although the vertical high-frequency content of the formulated 2D ECG array is reduced by Dc equalization and variance based linear complexity sorting, adjacent lines may still be quite dissimilar after this pre-processing, and in some circumstances it fails to assure maximum similarity between adjacent periods. As the ECG is a quasi-stationary signal, the variance based linear complexity sorting used in [11] can be further improved by nonlinear processing, typically entropy based complexity sorting.

Physiological systems, such as the cardiovascular system, are composed of many nonlinearly interacting, interdependent subsystems, resulting in highly complex signals. Many methods, both linear and nonlinear, have been used to evaluate this complexity. Methods based on linear modelling are widely known and standardized; however, as a complex system, the cardiovascular system might be better assessed by nonlinear methods. Sample entropy (SampEn) is a nonlinear quantitative metric used to quantify the irregularity of a time series [14]. The main objective of the present paper is therefore to explore SampEn based nonlinear complexity indices for sorting and rearranging the rows of the 2D ECG array from the simplest to the most complex.

The rest of this paper is organized as follows: Section 2 describes the database and the performance measures; the justification for sample entropy based sorting is discussed in Section 3.
Section 4 describes the proposed 2D ECG processing mechanism; Section 5 demonstrates the experimental results; finally, Section 6 presents the paper's conclusions.

2. ECG database and performance measures

The Massachusetts Institute of Technology (MIT) provides useful resources for various research tasks, such as the analysis, processing and compression of different signals. These resources include various databases of physiological data, together with software for creating, viewing and analysing such recordings. In the present study, ECG records from the MIT-BIH Arrhythmia Database [15] (http://ecg.mit.edu) were used to evaluate the proposed method. The database contains 48 two-channel ambulatory ECG recordings of approximately thirty minutes each, obtained from 47 subjects studied by the BIH Arrhythmia Laboratory. Each channel was sampled at 360 Hz with 11-bit resolution over a 10 mV range. Tests were performed on the second channel of all 48 records of this database. To assess the efficiency of the proposed pre-processing technique, 216,000 samples (10 min duration at 360 Hz) of each record were used and evaluated with the following data compression metrics [5,9,10,16]. Performance statistics were also computed for varying durations of the ECG records (5, 10, 15, 20, 25 and 30 min), indicating the suitability of the designed approach for short, medium and long term ECG data. The various performance metrics are defined below.

2.1. Compression ratio (CR)

The CR indicates the extent to which the compressor eliminates redundant data: the higher the CR, the fewer bits are needed to store or transmit the data. CR is defined as follows:
CR = \frac{B_0}{B_c} \quad (1)
where B_0 is the total number of bits required to represent the original data and B_c is the total number of bits required to represent the compressed data, including the side information needed for retrieval of the original data. CR can also be defined as follows:
CR = \frac{11 \times f_s \times N}{B_{c2d} + B_s} \quad (2)
where f_s is the sampling frequency, N is the total number of samples in the one-dimensional (1D) ECG signal, B_{c2d} is the total number of bits in the compressed 2D ECG and B_s is the number of bits required to store the side information.
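As a worked illustration of Eq. (1), the short snippet below computes CR for a hypothetical 10-minute, 360 Hz record stored with 11 bits per sample; the compressed-size figures are placeholders chosen only for illustration, not results from the paper.

```python
fs = 360                 # sampling frequency (Hz)
N = 10 * 60 * fs         # samples in a hypothetical 10-minute record
B0 = 11 * N              # original size in bits (11-bit samples)
Bc2d = 35_000            # bits in the compressed 2D ECG array (placeholder)
Bs = 18_000              # bits of side information (placeholder)

CR = B0 / (Bc2d + Bs)    # Eq. (1), with Bc = Bc2d + Bs
print(f"CR = {CR:.1f}")  # about 44.8 for these placeholder sizes
```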
2.2. Percentage root mean square difference (PRD)

PRD measures the reconstruction fidelity, i.e., the degree of distortion introduced by the compression and decompression algorithm, and is defined as follows:
PRD(\%) = 100 \times \sqrt{\dfrac{\sum_{n=1}^{N} (X_s(n) - X_r(n))^2}{\sum_{n=1}^{N} (X_s(n))^2}} \quad (3)
where X_s(n) and X_r(n) are the original and reconstructed ECG data, respectively.

2.3. Percentage root mean square difference, with base removed (PRD1024)

The PRD1024 (Eq. (4)) is similar to PRD, except that it removes the base of 1024 that was added to the MIT-BIH recordings for storage purposes.
PRD1024(\%) = 100 \times \sqrt{\dfrac{\sum_{n=1}^{N} (X_s(n) - X_r(n))^2}{\sum_{n=1}^{N} (X_s(n) - 1024)^2}} \quad (4)
2.4. Percentage root mean square difference, normalized (PRDN)

PRDN (Eq. (5)) is the normalized version of PRD and does not depend on the signal mean value \bar{X}.
PRDN(\%) = 100 \times \sqrt{\dfrac{\sum_{n=1}^{N} (X_s(n) - X_r(n))^2}{\sum_{n=1}^{N} (X_s(n) - \bar{X})^2}} \quad (5)
2.5. Root mean square (RMS) error

The RMS error measure (Eq. (6)) expresses the reconstruction error relative to the number of samples.
RMS = \sqrt{\dfrac{\sum_{n=1}^{N} (X_s(n) - X_r(n))^2}{N - 1}} \quad (6)
2.6. Signal to noise ratio (SNR)

SNR measures, on the decibel (dB) scale, the noise energy introduced by the compression procedure. The SNR definition (Eq. (7)) agrees with those used in the literature by Lee et al. [5] and Zigel et al. [16].
SNR = 10 \times \log_{10}\left(\dfrac{\sum_{n=1}^{N} (X_s(n) - \bar{X})^2}{\sum_{n=1}^{N} (X_s(n) - X_r(n))^2}\right) \quad (7)
2.7. Quality score (QS)

The QS is the ratio of CR to PRD. It is an objective performance indicator when it is otherwise hard to decide which compression method is better, because it accounts for the compression achieved and the reconstruction error at the same time; the higher the score, the better the compression method. As shown in the literature, QS can be computed for the different versions of PRD, thereby facilitating fair comparison with other algorithms, as shown in Eqs. (8)-(10) [5].
QS_{PRD} = \frac{CR}{PRD} \quad (8)

QS_{PRD1024} = \frac{CR}{PRD1024} \quad (9)
QS_{PRDN} = \frac{CR}{PRDN} \quad (10)
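A minimal NumPy sketch of the distortion and quality measures defined in Eqs. (3)–(10) is given below; the function name, the dictionary return type and the assumption that the inputs are equal-length 1D arrays of raw MIT-BIH sample values (including the 1024 storage baseline) are our own choices, not part of the paper.

```python
import numpy as np

def distortion_metrics(x_s: np.ndarray, x_r: np.ndarray, cr: float) -> dict:
    """Fidelity measures of Eqs. (3)-(10) for an original signal x_s and its
    reconstruction x_r (raw MIT-BIH values, i.e. including the 1024 baseline)."""
    err2 = np.sum((x_s - x_r) ** 2)
    prd     = 100.0 * np.sqrt(err2 / np.sum(x_s ** 2))                      # Eq. (3)
    prd1024 = 100.0 * np.sqrt(err2 / np.sum((x_s - 1024.0) ** 2))           # Eq. (4)
    prdn    = 100.0 * np.sqrt(err2 / np.sum((x_s - x_s.mean()) ** 2))       # Eq. (5)
    rms     = np.sqrt(err2 / (len(x_s) - 1))                                # Eq. (6)
    snr     = 10.0 * np.log10(np.sum((x_s - x_s.mean()) ** 2) / err2)       # Eq. (7)
    return {"PRD": prd, "PRD1024": prd1024, "PRDN": prdn, "RMS": rms, "SNR": snr,
            "QS_PRD": cr / prd, "QS_PRDN": cr / prdn,
            "QS_PRD1024": cr / prd1024}                                     # Eqs. (8)-(10)
```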
2.8. Wavelet based weighted percentage root mean square difference (WWPRD)

WWPRD, first introduced by Al-Fahoum [17], is based on a 9/7 biorthogonal wavelet decomposition. The number of decomposition levels (L) depends upon the sampling frequency (f_s) and is defined as
L = \log_2(f_s) - 2.96 \quad (11)
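For the MIT-BIH sampling frequency of 360 Hz, Eq. (11) gives L = log2(360) − 2.96 ≈ 8.49 − 2.96 ≈ 5.5, i.e., roughly five to six decomposition levels depending on the rounding convention adopted.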
The average value of the original and the reconstructed ECG signals was subtracted from the respective signals before decomposition. Each sub-band contains certain diagnostic features according to its frequency content; lower-frequency features, such as the T-wave and part of the P-wave contribution, are reflected in the approximation sub-band. Data-dependent weights (w_j) were therefore computed, using heuristically chosen criteria, so that they represent the actual impact of each sub-band (j) independently of f_s, amplitude and shape, as in Eq. (13). PRD was then computed at the sub-band level.
WWPRD = \sum_{j=0}^{L} w_j \, PRD_j \quad (12)

where

w_j = \dfrac{\sum_{i=1}^{n_j} |a_j(i)|}{\sum_{j=1}^{L+1} \sum_{i=1}^{n_j} |a_j(i)|}, \quad j = 1, 2, \ldots, L+1 \quad (13)
Here j = 0 represents the approximation sub-band and j = 1 to L the detail sub-bands; n_j is the number of wavelet coefficients in the jth sub-band and a_j(i) is an original coefficient within the jth sub-band. This metric exhibits a high clinical correlation, as verified by statistical and qualitative analysis.

2.9. Wavelet energy based diagnostic distortion (WEDD)

The WWPRD values are affected by the presence of noise in the signal. To address this shortcoming, WEDD was proposed, in which the energy of the wavelet coefficients is used to compute a dynamic weight for each sub-band [18]. Mathematically it is given as
WEDD = \sum_{j=0}^{L} w_j^{*} \, PRD_j \quad (14)

w_j^{*} = \dfrac{\sum_{i=1}^{n_j} a_j^2(i)}{\sum_{j=1}^{L+1} \sum_{i=1}^{n_j} a_j^2(i)}, \quad j = 1, 2, \ldots, L+1 \quad (15)
where w_j^* is the weight of the jth sub-band. The good performance of WWPRD and WEDD is attributed to their ability to provide a more structured error estimate, focusing on the diagnostic quality of the reconstructed waveform rather than on a random error behaviour distributed equally over all samples of the reconstructed signal. The advantage of WEDD lies in its ability to localize the distortion of diagnostic features rather than the dispersed error in the ECG rhythm [18].

3. Why SampEn based sorting

There is a fundamental difference between regularity parameters, such as entropy measures like SampEn, and variability estimators, such as variance. SampEn is a regularity parameter, not a magnitude statistic [14]; its primary utility is to uncover subtle complexity, or alterations in long-term data, that are not otherwise apparent. An entropy rate measurement for noisy and short data, the approximate entropy (ApEn) family, was introduced by Pincus. ApEn is a biased estimate because it counts self-occurrences of patterns [14]; furthermore, it is inconsistent across different parameter values and time series lengths. SampEn eliminates the self-count bias and is more consistent than ApEn [14].

For the sake of illustration, both variance and SampEn based sorting of the first three row segments of record number 100 from the MIT-BIH arrhythmia database after Dc equalization are demonstrated in Fig. 1(a), which provides the complexity analysis for rows 1-3. The Dc equalization, together with amplitude normalization, limits the dynamic range of the ECG segments to the interval 0-1.
Fig. 1. The first three consecutive row segments, with SampEn, variance and Dc value comparisons, of record number 100 of the MIT-BIH ECG arrhythmia database: (a) row 1, row 2 and row 3; (b) ordering after variance and SampEn based sorting.

The normalized amplitude scale shown on the y axis of Fig. 1(a) was computed by dividing the sample values by the maximum amplitude, the maximum being taken over the complete ECG duration considered for the particular record. The Dc levels of the first three rows of record number 100 were computed as the mean of the respective ECG beats; the obtained values are given in the legend of Fig. 1(a). The minimum Dc level attained was 0.7143, to which an offset of 0.01 was added to ensure that each formulated ECG image has positive pixel values. The variance and SampEn of these three rows, computed over the normalized values of the ECG segments, are 7.57 × 10−4, 6.15 × 10−4, 7.01 × 10−4 and 1.0679, 1.0982 and 1.0791, respectively. Row 1 thus has the highest complexity in terms of variance, whereas row 2 has the highest complexity in terms of SampEn. After variance based sorting, the ascending order of the segments is row 2, row 3, row 1; for SampEn based sorting the order is row 1, row 3, row 2 (Fig. 1(b)). SampEn based sorting can also be validated by visual inspection of these three row segments: as depicted in Fig. 1(a), the tracing of row 1 appears
to be less complex than the other rows. SampEn based complexity sorting reorders the ECG beats in ascending order of their SampEn values, so that beats of similar regularity are placed next to one another and the similarity among adjacent beats is better exploited.

Fig. 2. Flow diagram of the proposed ECG compression and decompression technique.
4. Proposed 2D ECG pre-processing technique

As shown in Fig. 2, the proposed 2D ECG compression technique has four basic stages for compression and five stages for reconstruction. The compression stage is preceded by the acquisition of ECG data from the standard MIT-BIH arrhythmia database, which facilitates the comparative analysis and validation of the proposed technique; details of this database were given in Section 2. The first and second stages involve QRS detection and R-peak based ECG segmentation [19]. In ECG segmentation, the ECG samples from one R peak to the next are retained in one segmented block, and the blocks are stacked vertically, row by row, to form the 2D ECG array. The third stage is the conjunction of interpolation and Dc equalization: interpolation is needed for period normalization, and Dc equalization is required to reduce the vertical high-frequency content of the 2D ECG array. The fourth stage further reduces the high-frequency content between adjacent rows of the ECG array by SampEn based complexity sorting: the rows are reordered step by step in ascending order of inter-beat randomness, which improves the correlation among adjacent rows and, in turn, reduces the high-frequency content and improves compressor performance. The resulting array is encoded with a standard JPEG2000 encoder (MATLAB 2015a), which provides precise rate control and progressive quality. The resulting compressed 2D ECG array, together with the side information, can be transmitted from the remote location to cloud storage or a base station and easily reconstructed at the receiving end.

The first stage of the reconstruction process is to split the side information and the compressed ECG matrix from the merged data arriving over the communication channel; the second stage performs JPEG2000 decoding and SampEn based re-sorting using the side information. The third and fourth stages revert the Dc equalization and then decimate in the time domain, reversing the interpolation of the rows/periods of the 2D ECG array. The fifth stage of the reconstruction process is the R-peak based reassembly and estimation of the raw ECG data.
Fig. 3. R-peak detection and marking for record number 119, with varying periods, by the Pan-Tompkins method.
4.1. R-peak detection, segmentation and 2D ECG array formation

The peaks of the QRS complexes were detected to identify each RR interval and to map the 1D ECG signal onto the 2D ECG array. Several QRS detection algorithms have been proposed in the literature; in the present work, the RR interval time series was estimated with the Pan-Tompkins method [19], chosen for its simple implementation and high detection accuracy. Fig. 3 depicts the R-peak marking for a relatively irregular window segment, with varying period, taken from record number 119 (3000 samples in total). The segmentation and reassembly of the ECG signal into a 2D array were accomplished by choosing the R peak as the delineation boundary, so that half a peak is left at each end of a row; the row-oriented assembly retains the ECG samples from one R peak to the next. Fig. 4(a) shows the resulting row-wise stacked 2D array for record number 117 from the MIT-BIH arrhythmia ECG database.

4.2. Interpolation and Dc equalization

Because physiological and pathological changes in the patient's condition mean that the heart-beat segments do not all have the same period length, the segmented array is left aligned (Fig. 4(a)). To bring all periods to a common, universal length and to better exploit the inter-beat dependencies, period normalization was adopted, as in [11]. The original period lengths and the original row order before sorting are sent as side information along with the compressed file, since this information is sufficient to reconstruct the Dc equalized ECG image at the decompression side. Cubic spline interpolation was employed, based on the maximum heart-beat segment length of the 2D ECG array [11]; the interpolated array for record number 117 from the MIT-BIH arrhythmia ECG database is shown in Fig. 4(b). Adjacent lines of the length-normalized 2D ECG array may still have different Dc levels, which creates high frequencies in the vertical direction and deteriorates compressor performance. To counteract this limitation, all periods are clamped to the minimum of the original Dc levels of all periods, computed according to the following equation:
dc_k = \mathrm{mean}(x_k) \quad (16)
where x_k denotes the signal samples of the kth normalized period and dc_k is their average over the normalized length. Based on dc_k, the minimum Dc level and an offset, all segments are clamped to the minimum Dc level [11]. The Dc equalization process yields a smoother 2D ECG matrix by reducing the vertical high-frequency content. The original Dc levels, the minimum Dc level and the introduced offset are incorporated into the side information.

4.3. Sample entropy based complexity sorting

Although the Dc equalization clamping reduces the vertical high-frequency content of the resulting 2D ECG array (Fig. 4(c)), adjacent rows (periods) may still be very dissimilar, which tends to diminish the compressor efficiency. To address this problem, period sorting [12] and variance based complexity sorting [11] are available in the literature, but they still fail to assure maximum similarity between adjacent periods. The main contribution of this paper is to exploit the similarity among periods more efficiently through SampEn based complexity sorting. After interpolation and Dc equalization, the SampEn of every period is computed; the row with the smallest entropy is moved to the first index of the 2D ECG array, and the remaining rows follow in ascending order of randomness. This results in a vertical smoothing of the formulated 2D ECG array.
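A minimal Python/NumPy/SciPy sketch of the R-to-R segmentation, cubic-spline period normalization and Dc equalization described in Sections 4.1 and 4.2 is shown below. The function names, the 0.01 offset (taken from the value quoted in Section 3) and the exact clamping arithmetic are our reading of the description above and of [11], not code from the paper; the R-peak indices are assumed to come from a QRS detector such as Pan-Tompkins [19].

```python
import numpy as np
from scipy.interpolate import CubicSpline

def beats_to_rows(ecg: np.ndarray, r_peaks: np.ndarray) -> list:
    """Cut the 1D ECG into R-to-R segments (Section 4.1); r_peaks are the
    sample indices returned by a QRS detector such as Pan-Tompkins [19]."""
    return [ecg[r_peaks[k]:r_peaks[k + 1]] for k in range(len(r_peaks) - 1)]

def normalize_and_equalize(beats: list, offset: float = 0.01):
    """Period normalization by cubic-spline resampling to the longest beat,
    followed by Dc equalization (Section 4.2, Eq. (16)). Returns the 2D ECG
    array plus the side information needed to invert both steps."""
    lengths = np.array([len(b) for b in beats])
    L = int(lengths.max())                                 # universal period length
    rows = []
    for b in beats:
        spline = CubicSpline(np.arange(len(b)), b)
        rows.append(spline(np.linspace(0, len(b) - 1, L)))  # resample to length L
    ecg_2d = np.vstack(rows)

    dc = ecg_2d.mean(axis=1)                               # Eq. (16): per-period Dc level
    dc_min = dc.min()
    # clamp every period to the minimum Dc level plus a small positive offset
    ecg_2d = ecg_2d - dc[:, None] + dc_min + offset
    return ecg_2d, lengths, dc, dc_min
```

The returned lengths, per-period Dc levels and minimum Dc level correspond to the side information that the method transmits alongside the compressed array.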
Fig. 4. The 2D ECG array of record 117, with dimensions 505 × 464, at successive stages of the proposed approach: (a) array segmented at the originally detected QRS peaks; (b) after row-wise length normalization; (c) after Dc equalization; (d) after sample entropy based sorting applied to (c).
4.3.1. SampEn

SampEn is a modification of ApEn that avoids self-comparison [14]. Given a signal u(1), u(2), ..., u(N), where N is the total number of data points, m is a positive integer and r is a positive real number; for this study r was chosen as 20% of the standard deviation and m = 2. The SampEn algorithm [14] can be summarized in the following steps:

Step 1: Form the vectors X(1) to X(N − m + 1) of length m:
X(i) = [u_i \; u_{i+1} \; \ldots \; u_{i+m-1}], \quad 1 \le i \le N - m + 1

Step 2: Calculate the distance between two vectors as

d_m[X(i), X(j)] = \max_{k = 1, 2, \ldots, m} \left| u_{i+k-1} - u_{j+k-1} \right|

Step 3: Calculate the number of similar segments n_m(i), defined as the count of integers j distinct from i whose distance is within the tolerance r:

n_m(i) = \left|\{\, j : 1 \le j \le N - m + 1,\; j \ne i,\; d_m[X(i), X(j)] \le r \,\}\right|

n_{m+1}(i) = \left|\{\, j : 1 \le j \le N - m + 1,\; j \ne i,\; d_{m+1}[X(i), X(j)] \le r \,\}\right|

Step 4: Calculate the similarity measures of these segments, B_i^m(r) and A_i^m(r):

B_i^m(r) = \frac{n_m(i)}{N - m + 1}, \qquad A_i^m(r) = \frac{n_{m+1}(i)}{N - m + 1}

Step 5: Calculate the mean measures of similar signal segments:

B^m(r) = \frac{1}{N - m} \sum_{i=1}^{N - m} B_i^m(r), \qquad A^m(r) = \frac{1}{N - m} \sum_{i=1}^{N - m} A_i^m(r)

Step 6: The SampEn of a finite data series of length N can then be estimated as

SampEn = -\ln \frac{A^m(r)}{B^m(r)}

Fig. 5. (a) Original ECG from the MIT-BIH database, record number 100; (b) reconstructed ECG signal; (c) reconstruction error signal.
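Following the six steps above, the snippet below gives a straightforward (unoptimized) NumPy sketch of SampEn in the Richman-Moorman formulation [14], together with the row-sorting operation of Section 4.3; the function names, the stable sort and the handling of the degenerate zero-count case are our choices, not specified by the paper.

```python
import numpy as np

def sampen(u: np.ndarray, m: int = 2, r_factor: float = 0.2) -> float:
    """Sample entropy of a 1D series u with embedding dimension m and
    tolerance r = r_factor * std(u), as used in the paper (m = 2, r = 0.2*std)."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    r = r_factor * np.std(u)
    n_templates = N - m                 # same template count for lengths m and m+1

    def match_count(mm: int) -> int:
        # overlapping template vectors of length mm (Step 1)
        templates = np.array([u[i:i + mm] for i in range(n_templates)])
        count = 0
        for i in range(n_templates):
            # Chebyshev distance to all templates (Step 2), self-match excluded (Step 3)
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(d <= r)) - 1
        return count

    B = match_count(m)                  # similar pairs of length m   (Steps 4-5)
    A = match_count(m + 1)              # similar pairs of length m+1 (Steps 4-5)
    if A == 0 or B == 0:
        return np.inf                   # no matches: treat entropy as maximal
    return -np.log(A / B)               # Step 6

def sampen_sort(ecg_2d: np.ndarray, m: int = 2, r_factor: float = 0.2):
    """Section 4.3: reorder the rows of the Dc-equalized 2D ECG array in
    ascending order of SampEn; the returned order is kept as side information."""
    entropies = np.array([sampen(row, m, r_factor) for row in ecg_2d])
    order = np.argsort(entropies, kind="stable")
    return ecg_2d[order], order
```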
4.4. JPEG2000 encoding

JPEG2000, here used through MATLAB 2015a, was employed because this image compression standard simultaneously addresses the needs for progressive transmission by quality and for spatial locality [20]. The quantization step was fixed at 1.5259 × 10−5, and the default JPEG2000 parameters were employed together with the targeted CR. In the proposed method the side information is appended to the compressed data and coded losslessly with a Huffman encoding mechanism. Typical sizes for record number 100 are:

(1) size of the 2D ECG matrix: 760 × 358;
(2) total side information: 760 × 3, consisting of
(i) the original ECG beat ordering before complexity sorting (760 × 1),
(ii) the original Dc level of each ECG beat (760 × 1), and
(iii) the original length of each period (760 × 1).

4.5. Reconstruction procedure

The reconstruction of the raw ECG data from the compressed file follows the inverse of the compression stages, as shown in the right half of Fig. 2. The first stage of the restoration procedure separates the side information and the compressed ECG array from the received file; the second stage applies the inverse of the complexity sorting (reordering). The third stage reverses the two remaining compression steps: the inverse of Dc equalization, followed by spline decimation to recover the original period lengths in the time domain. Finally, the estimated ECG signal is available for cardiac analysis.
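As a sketch of the encoding/decoding round trip of Section 4.4, the snippet below uses the open-source glymur JPEG 2000 bindings in place of the MATLAB codec used by the authors (a substitution on our part, assuming glymur's write interface with the cratios option). The uint16 quantization corresponds to the step of 1/65536 ≈ 1.5259 × 10−5 quoted above; the side information (original beat order, Dc levels, period lengths) would be coded losslessly, e.g. by Huffman coding, in a separate stream.

```python
import numpy as np
import glymur  # assumes the glymur JPEG 2000 package (OpenJPEG bindings) is installed

def jpeg2000_roundtrip(ecg_2d: np.ndarray, target_cr: float, path: str = "ecg_2d.jp2"):
    """Encode the sorted 2D ECG array with JPEG 2000 at a target compression
    ratio and decode it again. Quantization to uint16 corresponds to the
    1/65536 = 1.5259e-05 step quoted in Section 4.4."""
    lo, hi = float(ecg_2d.min()), float(ecg_2d.max())
    img = np.round((ecg_2d - lo) / (hi - lo) * 65535.0).astype(np.uint16)

    glymur.Jp2k(path, data=img, cratios=[target_cr])   # rate-controlled encoding
    rec = glymur.Jp2k(path)[:].astype(float)           # decode the full image

    return rec / 65535.0 * (hi - lo) + lo              # map back to signal units
```

At the decoder, the inverse of the Dc equalization and the spline decimation described in Section 4.5 would then be applied using the transmitted side information.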
Fig. 6. (a) Original ECG from MIT-BIH database of record number 117, (b) reconstructed ECG signal and (c) reconstruction error signal.
Fig. 7. (a) Original ECG from MIT-BIH database of record number 119, (b) reconstructed ECG signal and (c) reconstruction error signal.
5. Experimental results and discussion

To evaluate the efficiency of the proposed pre-processing algorithm, all 48 ECG records of the MIT-BIH arrhythmia database were considered in the experimentation. Visual analysis is shown, and results are tabulated, for record numbers 100, 117 and 119; these records were selected because validation results for them are available in the literature [11,12]. It is implicitly assumed that reconstructed ECG signals are ultimately judged by visual inspection by the cardiologist; therefore, for visual assessment of the proposed method, the original, reconstructed and error signals are shown in Figs. 5-7 for the first 3000 samples of these records.
Table 1
Performance measures of the proposed technique for the 48 ECG records of the MIT-BIH arrhythmia database (10 min duration). Values are listed metric by metric, with the records in the order given in the Record row.

Record: 100 101 102 103 104 105 106 107 108 109 111 112 113 114 115 116 117 118 119 121 122 123 124 200 201 202 203 205 207 208 209 210 212 213 214 215 217 219 220 221 222 223 228 230 231 232 233 234
CR: 55.50 58.57 58.85 60.18 58.68 48.63 50.83 61.14 58.63 52.14 59.54 57.17 62.30 64.35 58.70 50.84 71.48 45.17 52.24 66.40 52.64 66.15 68.32 42.69 47.98 61.76 37.76 46.46 43.66 41.99 46.66 44.51 52.36 46.71 44.81 39.94 57.54 48.05 56.04 42.05 47.28 45.25 47.38 56.09 110.99 94.57 42.43 54.88
PRD: 0.55 1.78 1.09 0.66 2.21 0.69 1.88 2.97 1.98 0.84 1.05 0.65 1.58 1.80 1.01 1.81 0.73 3.96 1.19 0.80 0.77 3.81 1.79 2.62 1.14 2.58 3.68 0.47 3.76 2.56 1.11 0.89 1.57 2.51 4.35 1.17 2.37 4.58 0.92 2.73 0.85 1.02 2.98 1.40 4.12 1.78 3.00 0.70
PRDN: 14.89 29.32 27.74 10.16 37.92 10.68 26.92 17.71 46.22 9.50 20.40 12.12 17.83 59.41 13.32 12.04 13.10 38.65 9.59 12.68 9.050 55.58 14.15 35.24 30.33 53.36 34.46 11.72 44.05 27.40 21.32 17.44 23.68 18.86 47.43 20.78 19.62 43.21 13.43 43.89 28.17 12.79 39.84 20.20 71.05 59.46 27.27 10.06
PRD1024: 7.29 11.45 18.88 7.94 43.40 8.77 24.17 17.12 24.92 7.63 17.33 3.27 16.75 41.77 7.74 6.24 3.25 17.91 4.23 4.02 3.62 19.27 7.72 29.21 20.71 39.97 43.48 10.02 44.10 43.42 18.00 36.65 21.82 16.42 49.45 18.67 15.44 26.30 6.35 38.65 19.13 15.57 39.44 17.97 58.90 42.07 14.80 9.06
RMS: 5.33 17.22 10.65 6.49 21.61 6.74 18.62 29.40 19.42 8.23 10.44 5.59 15.69 17.86 9.35 15.29 6.26 33.96 10.22 6.87 6.64 32.94 15.57 26.32 11.30 25.59 36.58 4.48 37.41 25.52 11.04 8.86 15.67 24.61 43.40 11.65 23.64 41.56 8.45 27.10 8.45 9.40 29.74 13.94 40.67 17.76 29.94 6.97
SNR: 16.54 10.66 11.14 19.86 8.42 19.43 11.40 15.03 6.70 20.44 13.81 18.33 14.98 4.52 17.51 18.39 17.66 8.26 20.36 17.94 20.87 5.10 16.98 9.06 10.36 5.46 9.25 18.62 7.12 11.24 13.42 15.17 12.51 14.49 6.48 13.65 14.15 7.29 17.44 7.15 11.01 17.86 7.99 13.89 2.97 4.52 11.29 19.95
QSPRD: 100.07 32.95 53.80 90.91 26.61 70.88 27.07 20.61 29.56 62.18 56.71 87.73 39.54 35.77 58.23 28.04 97.85 11.40 43.92 83.34 67.99 17.38 38.10 16.27 42.03 23.91 10.26 99.64 11.61 16.40 41.93 49.90 33.27 18.57 10.30 34.17 24.31 10.49 60.68 15.42 55.56 44.50 15.90 39.94 26.93 53.04 14.15 78.30
QSPRDN: 3.75 3.69 1.90 6.16 0.86 4.53 1.87 3.45 1.31 4.89 2.87 4.67 3.53 1.71 4.34 4.48 5.98 1.17 6.59 5.15 5.75 1.18 4.92 1.30 1.69 1.28 0.77 2.31 1.01 0.80 2.19 0.92 2.18 2.64 0.87 1.90 3.02 1.13 4.33 0.97 1.69 1.53 1.17 2.80 1.55 1.54 2.28 5.55
QSPRD1024: 7.62 5.08 3.16 7.61 1.10 5.52 2.10 3.59 1.94 5.55 3.37 17.53 3.76 2.39 7.51 8.06 21.98 2.57 12.45 16.35 14.49 3.42 8.77 1.37 2.29 1.57 0.81 4.54 1.08 0.84 2.60 1.09 2.41 2.86 0.91 2.14 3.12 1.82 8.81 1.09 2.46 2.64 1.26 3.12 1.87 2.13 2.37 6.09
WWPRD: 0.23 0.27 0.40 0.16 0.75 0.20 0.36 0.25 0.59 0.17 0.36 0.21 0.23 0.79 0.18 0.14 0.21 0.72 0.15 0.25 0.19 0.82 0.21 0.40 0.51 0.82 0.80 0.26 0.83 0.71 0.33 0.79 0.34 0.22 0.92 0.30 0.28 0.70 0.18 0.76 0.42 0.53 0.73 0.34 0.82 0.79 0.34 0.16
WEDD: 0.12 0.14 0.27 0.08 0.54 0.09 0.23 0.15 0.33 0.07 0.20 0.10 0.15 0.57 0.12 0.09 0.11 0.38 0.07 0.11 0.07 0.53 0.11 0.18 0.27 0.47 0.38 0.13 0.41 0.33 0.19 0.37 0.22 0.13 0.46 0.15 0.14 0.39 0.11 0.37 0.27 0.24 0.38 0.22 0.67 0.54 0.17 0.08
The records 100, 117 and 119 were converted into matrices of dimensions 760 × 358, 505 × 464 and 659 × 506, respectively. Tables 1 and 2 provide the performance evaluation of all 48 records for short-term (10 min) and long-term (30 min) ECG data. The tables give insight into the efficiency in terms of all the prevalent performance estimators: CR, PRD, PRDN, PRD1024, RMS, SNR, QSPRD, QSPRDN, QSPRD1024, WWPRD and WEDD. The average QSPRD was 42.25 for the 10 min duration and 35.40 for the 30 min duration; the corresponding average QSPRDN and QSPRD1024 values were 2.75 and 4.73 (10 min) and 2.35 and 4.07 (30 min). Both durations justify the efficacy of the method for long window lengths, in which the non-linear characteristics of these signals are captured. The clinical diagnostic measures WWPRD and WEDD average 0.44 and 0.25 for the 10 min duration and 0.54 and 0.30 for the 30 min duration, respectively; the closer these values are to 0, the better the compression performance. A small drop in compressor efficiency is noticed as the time duration increases, because of the interpolation at the period normalization stage.

For a fair comparison using the existing performance metrics, i.e., PRD, PRD1024 and PRDN, the results are presented separately in Tables 3, 4 and 5, respectively, together with the corresponding QS values. Table 3 provides the performance comparison with recent works based on the DCT [5], fractal [21] and cloud enabled fractal [8] methods; for the representative record number 119, QSPRD is 43.90. Similarly, Table 4 assesses the algorithm using the PRD1024 metric; the QSPRD1024 for record 119 is 12.35. Lastly, Table 5 provides the analysis using the PRDN measure.
Table 2
Performance measures of the proposed technique for the 48 ECG records of the MIT-BIH arrhythmia database (30 min duration). Values are listed metric by metric, with the records in the order given in the Record row.

Record: 100 101 102 103 104 105 106 107 108 109 111 112 113 114 115 116 117 118 119 121 122 123 124 200 201 202 203 205 207 208 209 210 212 213 214 215 217 219 220 221 222 223 228 230 231 232 233 234
CR: 52.28 60.43 61.62 52.4 47.34 43.66 50.35 48.22 53.97 40.41 49.68 51.87 58.99 75.20 56.37 48.5 72.52 45.13 52.32 55.17 54.76 66.9 64.65 40.03 59.51 49.35 35.42 41.67 46.75 36.14 47.22 39.76 48.10 37.91 45.46 36.97 48.08 50.18 59.11 42.19 42.41 39.25 50.93 56.04 102.26 84.44 34.84 39.35
PRD: 0.55 0.75 1.00 0.68 2.55 2.30 3.82 7.28 2.28 0.67 2.55 0.62 1.33 1.75 1.02 3.44 0.70 5.88 1.19 0.86 0.71 3.99 4.33 1.40 1.86 3.06 4.36 1.28 3.55 4.13 1.11 2.66 1.35 1.79 5.23 0.80 5.25 5.34 0.96 3.00 1.70 3.66 2.94 1.70 3.44 1.82 1.94 0.55
PRDN: 13.61 13.97 25.54 10.32 49.84 27.95 51.50 42.12 37.67 6.64 49.05 11.84 16.10 57.97 13.03 22.25 12.45 58.87 9.56 13.01 8.31 59.58 40.55 18.58 47.59 51.09 43.74 31.31 49.78 43.01 20.54 50.57 20.26 13.19 55.34 14.09 42.83 44.19 13.81 49.44 45.58 41.57 42.27 23.77 63.88 55.65 17.82 8.17
PRD1024: 7.26 9.45 15.65 8.38 37.10 24.37 46.38 40.65 29.80 6.02 41.44 3.11 15.06 40.08 7.79 12.67 3.36 26.25 5.05 4.35 3.31 20.41 20.14 17.91 36.59 44.41 41.35 16.96 44.77 40.57 17.49 43.35 18.34 12.52 52.36 12.41 41.32 30.23 7.01 43.45 34.87 25.88 39.24 21.43 54.73 39.20 17.16 7.38
RMS: 5.26 7.30 9.73 6.64 24.96 22.54 37.9 72.25 22.38 6.61 25.28 5.3 13.28 17.41 9.47 29.10 5.95 50.36 10.25 7.44 6.08 34.61 37.55 14.12 18.49 30.35 43.35 12.33 35.23 41.13 11.01 26.43 13.41 17.71 52.12 7.95 52.38 48.77 8.80 29.79 16.92 33.84 29.34 16.84 34.14 18.01 19.39 5.51
SNR: 17.32 17.10 11.86 19.73 6.05 11.07 5.76 7.51 8.48 23.56 6.19 18.53 15.86 4.74 17.70 13.05 18.10 4.60 20.39 17.71 21.60 4.50 7.84 14.62 6.45 5.83 7.18 10.09 6.06 7.33 13.75 5.92 13.87 17.6 5.14 17.02 7.36 7.09 17.19 6.12 6.82 7.63 7.48 12.48 3.89 5.09 14.98 21.76
QSPRD: 95.81 80.18 61.78 77.26 18.59 19.02 13.17 6.62 23.63 60.09 19.51 84.29 44.24 42.91 55.34 14.09 104.23 7.67 43.82 64.08 77.28 16.75 14.93 28.51 31.95 16.13 8.13 32.57 13.17 8.74 42.58 14.95 35.70 21.17 8.69 46.29 9.16 9.39 61.69 14.07 24.9 10.73 17.33 33.04 29.75 46.51 17.96 70.94
QSPRDN: 3.84 4.33 2.41 5.08 0.95 1.56 0.98 1.14 1.43 6.09 1.01 4.38 3.66 1.30 4.33 2.18 5.83 0.77 5.47 4.24 6.59 1.12 1.59 2.15 1.25 0.97 0.81 1.33 0.94 0.84 2.30 0.79 2.37 2.88 0.82 2.62 1.12 1.14 4.28 0.85 0.93 0.94 1.20 2.36 1.60 1.52 1.95 4.82
QSPRD1024: 7.20 6.39 3.94 6.25 1.28 1.79 1.09 1.19 1.81 6.71 1.20 16.69 3.92 1.88 7.24 3.83 21.59 1.72 10.36 12.68 16.55 3.28 3.21 2.24 1.63 1.11 0.86 2.46 1.04 0.89 2.70 0.92 2.62 3.03 0.87 2.98 1.16 1.66 8.43 0.97 1.22 1.52 1.30 2.62 1.87 2.15 2.03 5.34
WWPRD: 0.22 0.23 0.36 0.18 0.67 0.59 0.75 0.71 0.69 0.15 0.84 0.22 0.24 0.79 0.19 0.30 0.24 0.97 0.17 0.30 0.17 0.87 0.75 0.36 0.84 0.91 0.82 0.58 0.89 0.79 0.32 0.97 0.32 0.22 1.07 0.25 0.70 0.74 0.19 0.88 0.71 0.73 0.74 0.34 0.79 0.82 0.34 0.15
WEDD: 0.12 0.13 0.24 0.09 0.47 0.24 0.50 0.39 0.35 0.06 0.46 0.10 0.15 0.57 0.12 0.21 0.11 0.55 0.09 0.12 0.07 0.57 0.37 0.16 0.45 0.48 0.39 0.27 0.47 0.39 0.19 0.46 0.19 0.12 0.51 0.12 0.39 0.41 0.12 0.46 0.44 0.38 0.40 0.22 0.63 0.54 0.16 0.07
Comparison with the recent works [5], [22] and [23] shows QSPRDN improvements of 4.27, 3.07 and 4.94, respectively, for record number 119 using the proposed scheme. The advantage of the proposed method is its superior CR, PRD and QS performance, in agreement with the wide consensus that QS is the best performance indicator because it takes into account both the reconstruction distortion (PRD) and a quantitative description of the compression (CR). The proposed method obtains a better QS than the other methods for all versions of PRD. From the results collected in Table 6 for different ECG databases, it can be concluded that the proposed method outperforms various lossy, lossless and near-lossless methods in terms of CR, PRD and QSPRD.

The novelty on the compression side, relative to the previous work [11], is the replacement of variance based sorting by the SampEn based complexity sorting mechanism, which increases the self-similarity along the vertical direction of the 2D ECG matrix. At the reconstruction end, any desired ECG beat can be extracted by selecting the appropriate complexity-sorted beat number together with the other side information; this enables customizable retrieval of beats. It does not necessarily reduce the overall decompression time, but it provides access to selected ECG data segments. Fig. 8 gives an extensive analysis for different durations of the ECG records; the graphical analysis shows that the algorithm works well over a wide range of time durations.
Table 3
Performance comparison of the proposed method with different compression techniques on normal and abnormal ECG records in terms of CR, PRD and QSPRD (–: performance measure value not available for that record in the literature).

Data | DCT-based scheme [5] (CR / PRD / QSPRD) | Fractal based technique [21] (CR / PRD / QSPRD) | Cloud enabled fractal based method [8] (CR / PRD / QSPRD) | Proposed (CR / PRD / QSPRD)
100 | 22.94 / 1.95 / 11.76 | 11.06 / 13.79 / 0.80 | 42 / 0.79 / 53.16 | 55.50 / 0.55 / 100.91
102 | 25.9 / 1.39 / 18.63 | 10.17 / 14.2 / 0.72 | 42 / 1.00 / 42.00 | 58.85 / 1.09 / 53.99
103 | 20.33 / 2.50 / 8.13 | 11.00 / 13.78 / 0.80 | 42 / 2.29 / 18.34 | 60.18 / 0.66 / 91.18
104 | 22.94 / 1.67 / 13.74 | 11.20 / 13.00 / 0.86 | 42 / 0.96 / 43.75 | 58.68 / 2.21 / 26.55
105 | 20.96 / 1.17 / 17.91 | 12.00 / 15.13 / 0.79 | 42 / 1.24 / 33.87 | 48.63 / 0.69 / 70.48
106 | 19.55 / 1.77 / 11.05 | – / – / – | 42 / 1.65 / 25.45 | 50.83 / 1.88 / 27.04
107 | 18.55 / 3.93 / 4.72 | – / – / – | 42 / 2.66 / 15.79 | 61.14 / 2.97 / 20.59
108 | 23.11 / 0.77 / 30.01 | 8.22 / 15.42 / 0.53 | 42 / 0.53 / 79.25 | 58.63 / 1.98 / 29.61
109 | 19.89 / 0.76 / 26.17 | 11.00 / 17.39 / 0.63 | 42 / 0.87 / 48.28 | 52.14 / 0.84 / 62.07
111 | 22.99 / 1.03 / 22.32 | – / – / – | 42 / 0.92 / 45.65 | 59.54 / 1.05 / 56.70
112 | 23.82 / 1.00 / 23.82 | – / – / – | 42 / 1.18 / 35.59 | 57.17 / 0.65 / 87.95
113 | 19.96 / 2.89 / 6.91 | – / – / – | 42 / 2.40 / 17.50 | 62.30 / 1.58 / 39.43
114 | 25.58 / 1.08 / 23.69 | – / – / – | 42 / 0.87 / 48.28 | 64.35 / 1.80 / 35.75
117 | 24.00 / 1.17 / 20.51 | 4.50 / 14.64 / 0.31 | 42 / 1.45 / 28.97 | 71.48 / 0.73 / 97.92
119 | 19.00 / 2.05 / 9.27 | – / – / – | 42 / 1.50 / 28.00 | 52.24 / 1.19 / 43.90
Table 4
Performance comparison of the proposed method with different compression techniques on normal and abnormal ECG records in terms of CR, PRD1024 and QSPRD1024 (P* = PRD1024, Q* = QSPRD1024).

Data | Dc equalization and complexity sorting [11] (CR / P* / Q*) | Mean extension and period normalization [12], Approach 1 (CR / P* / Q*) | Mean extension and period normalization [12], Approach 2 (CR / P* / Q*) | Proposed (CR / P* / Q*)
100 | 24 / 3.95 / 6.07 | 24 / 5.21 / 4.61 | 24 / 4.06 / 5.91 | 27.75 / 3.645 / 7.61
117 | 24 / 1.72 / 13.95 | 10 / 0.98 / 10.2 | 13 / 1.18 / 11.02 | 35.74 / 1.625 / 21.99
119 | 20 / 1.92 / 10.42 | 21.6 / 2.81 / 7.69 | 20.9 / 2.81 / 7.47 | 26.12 / 2.115 / 12.35
Table 5
Performance comparison of the proposed method with different compression techniques on normal and abnormal ECG records in terms of CR, PRDN and QSPRDN.

Data | DCT-based scheme [5] (CR / PRDN / QSPRDN) | Adaptive Fourier decomposition [22] (CR / PRDN / QSPRDN) | Natural basis k-coefficients via sparse decomposition [23] (CR / PRDN / QSPRDN) | Proposed (CR / PRDN / QSPRDN)
100 | 22.94 / 48.88 / 0.47 | 25.64 / 16.18 / 1.58 | 78.20 / 18.03 / 4.34 | 55.50 / 14.89 / 3.73
101 | 23.86 / 38.34 / 0.62 | 25.64 / 12.87 / 1.99 | 80.24 / 14.66 / 5.47 | 58.57 / 29.32 / 2.00
102 | 25.91 / 35.70 / 0.73 | 25.64 / 22.26 / 1.15 | 50.54 / 18.45 / 2.74 | 58.85 / 27.74 / 2.12
103 | 20.33 / 37.74 / 0.54 | 25.64 / 14.4 / 1.78 | 46.32 / 12.57 / 3.68 | 60.18 / 10.16 / 5.92
109 | 19.89 / 8.05 / 2.47 | 25.64 / 15.73 / 1.63 | 24.86 / 13.70 / 1.81 | 52.14 / 9.50 / 5.49
111 | 22.99 / 19.60 / 1.17 | 25.64 / 19.45 / 1.32 | 31.05 / 26.20 / 1.19 | 59.54 / 20.40 / 2.92
112 | 23.82 / 19.08 / 1.25 | 25.64 / 18.73 / 1.37 | 34.06 / 16.58 / 2.05 | 57.17 / 12.12 / 4.72
113 | 19.96 / 34.85 / 0.57 | 25.64 / 6.85 / 3.74 | 37.42 / 14.08 / 2.66 | 62.30 / 17.83 / 3.49
115 | 19.88 / 38.00 / 0.52 | 25.64 / 11.84 / 2.17 | 38.26 / 9.76 / 3.92 | 58.70 / 13.32 / 4.41
117 | 24.43 / 20.80 / 1.17 | 25.64 / 11.23 / 2.28 | 38.94 / 14.42 / 2.70 | 71.48 / 13.10 / 5.46
119 | 19.31 / 16.30 / 1.18 | 25.64 / 10.77 / 2.38 | 16.26 / 32.19 / 0.51 | 52.24 / 9.59 / 5.45
121 | 25.84 / 7.73 / 3.34 | 25.64 / 9.92 / 2.58 | 26.67 / 17.36 / 1.54 | 66.4 / 12.68 / 5.24
Table 6
Performance comparison of the proposed technique against average values reported in the literature (* MIT-BIH arrhythmia database, ** mean ± standard deviation).

Algorithm | Database | PRD | CR | QSPRD
m-AZTEC [2] | CSE | 25.50 | 5.6 | 0.22
ASCII encoding [3] | PTB [17] | 7.89 | 15.7 | 1.99
SPIHT [6] | * | 1.18 | 8.0 | 6.78
Linear prediction with wavelet [10] | * | 5.30 | 11.6 | 2.19
Wavelet and vector quantization [13] | * | 1.6 ± 0.98** | 12.0 | 5.00
JPEG2000 [20] | * | 3.26 | 20.0 | 6.13
Hilton [24] | * | 2.60 | 8.0 | 3.08
Neural networks [25] | * | 0.61 | 12.74 | 20.89
Proposed | * | 1.88 | 54.96 | 29.23
Fig. 8. Average and standard deviation (Std Dev) values of different performance metrics at varying ECG time durations.

Table 7
Step-by-step performance enhancement trend of the proposed method for record number 117.

Algorithm steps | QSPRD | QSPRD1024 | QSPRDN | Trend
After period normalization | 13.05 | 2.67 | 0.73 | –
After period normalization + Dc equalization | 58.94 | 12.07 | 3.29 | ↑
After period normalization + Dc equalization + sample entropy based complexity sorting | 107.32 | 21.98 | 5.98 | ↑
The step-by-step performance enhancement trend of the proposed method is shown in Table 7 for record number 117. The table shows an increase in QS after each intermediate step; the combination yielding the best QS is the third one, and the major contribution to this increase undoubtedly comes from the sample entropy based complexity sorting scheme.

6. Conclusions

A modified pre-processing algorithm for the compression of quasi-periodic ECG signals in the 2D domain was developed. The advantage of the sample entropy based complexity sorting is that it presents a more uniform 2D array to the JPEG2000 encoder, enabling efficient data compression. The proposed algorithm results in minimal loss of information by applying a SampEn based nonlinear complexity sorting mechanism, in addition to period normalization and Dc equalization of the 2D ECG matrix, to decrease the vertical high-frequency content. The formulated approach, together with the JPEG2000 encoder, outperformed other ECG data compression methods by achieving a high QS and low PRD, PRD1024 and PRDN performance measures. Moreover, the clinical diagnostic measures WWPRD and WEDD were also used to evaluate the performance of the proposed method; for the 10 and 30 min durations, the average WWPRD and WEDD values were 0.44, 0.22 and 0.54, 0.30, respectively. The investigation showed that the PRD derived performance metrics deteriorate slightly as the ECG duration increases, a shortcoming attributed to the interpolation used at the period normalization stage; it would therefore be worthwhile to develop a more efficient method that minimizes the reconstruction error introduced by period normalization. The modified pre-processing of the quasi-periodic ECG, represented as an image,
is a useful and powerful method to achieve higher compression with a low reconstruction error. Future work will focus on applying the designed approach to multichannel ECG signals as well as to other quasi-periodic signals.

References

[1] Jalaleddine SM, Hutchens CG, Strattan RD, Coberly WA. ECG data compression techniques-a unified approach. IEEE Trans Biomed Eng 1990;37:329–43.
[2] Kumar V, Saxena SC, Giri VK, Singh D. Improved modified AZTEC technique for ECG data compression: effect of length of parabolic filter on reconstructed signal. Comput Electr Eng 2005;31:334–44.
[3] Mukhopadhyay SK, Mitra S, Mitra M. An ECG signal compression technique using ASCII character encoding. Measurement 2012;45:1651–60.
[4] Pandey A, Singh B, Saini BS, Sood N. A joint application of optimal threshold based discrete cosine transform and ASCII encoding for ECG data compression with its inherent encryption. Australas Phys Eng Sci Med 2016:1–23. doi:10.1007/s13246-016-0476-4.
[5] Lee S, Kim J, Lee M. A real-time ECG data compression and transmission algorithm for an e-health device. IEEE Trans Biomed Eng 2011;58:2448–55.
[6] Lu Z, Kim DY, Pearlman WA. Wavelet compression of ECG signals by set partitioning in hierarchical trees algorithm. IEEE Trans Biomed Eng 2000;47:849–56.
[7] Polania LF, Carrillo RE, Blanco-Velasco M, Barner KE. Compressed sensing based method for ECG compression. In: 2011 IEEE international conference on acoustics, speech and signal processing; 2011. p. 761–4.
[8] Ibaida A, Al-shammary D, Khalil I. Cloud enabled fractal based ECG compression in wireless body sensor networks. Futur Gener Comput Syst 2014;35:91–101.
[9] Grossi G, Lanzarotti R, Lin J. High-rate compression of ECG signals by an accuracy-driven sparsity model relying on natural basis. Digit Signal Process 2015;45:96–106.
[10] Al-Shrouf A, Abo-Zahhad M, Ahmed SM. A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients. Digit Signal Process 2003;13:604–22.
[11] Filho EBL, Rodrigues NMM, da Silva EAB, de Faria SMM, de Carvalho MB. ECG signal compression based on Dc equalization and complexity sorting. IEEE Trans Biomed Eng 2008;55:1923–6.
[12] Chou H-H, Chen Y-J, Shiau Y-C, Kuo T-S. An effective and efficient compression algorithm for ECG signals with irregular periods. IEEE Trans Biomed Eng 2006;53:1198–205. doi:10.1109/TBME.2005.863961.
[13] Wang X, Meng J. A 2-D ECG compression algorithm based on wavelet transform and vector quantization. Digit Signal Process 2008;18:179–88.
[14] Richman JS, Moorman JR. Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol 2000;278:H2039–49.
[15] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PC, Mark RG, et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 2000;101:e215–20.
[16] Zigel Y, Cohen A, Katz A. The weighted diagnostic distortion (WDD) measure for ECG signal compression. IEEE Trans Biomed Eng 2000;47:1422–30.
[17] Al-Fahoum AS. Quality assessment of ECG compression techniques using a wavelet-based diagnostic measure. IEEE Trans Inf Technol Biomed 2006;10:182–91.
[18] Manikandan MS, Dandapat S. Wavelet energy based diagnostic distortion measure for ECG. Biomed Signal Process Control 2007;2:80–96. doi:10.1016/j.bspc.2007.05.001.
[19] Pan J, Tompkins WJ. A real-time QRS detection algorithm. IEEE Trans Biomed Eng 1985;32:230–6. doi:10.1109/TBME.1985.325532.
[20] Bilgin A, Marcellin MW, Altbach MI. Compression of electrocardiogram signals using JPEG2000. IEEE Trans Consum Electron 2003;49:833–40.
[21] Khalaj A, Naimi HM. New efficient fractal based compression method for electrocardiogram signals. In: Canadian conference on electrical and computer engineering. IEEE; 2009. p. 983–6. doi:10.1109/CCECE.2009.5090276.
[22] Ma J, Zhang T, Dong M. A novel ECG data compression method using adaptive Fourier decomposition with security guarantee in e-Health applications. IEEE J Biomed Health Inform 2015;19:986–94.
[23] Adamo A, Grossi G, Lanzarotti R, Lin J. ECG compression retaining the best natural basis k-coefficients via sparse decomposition. Biomed Signal Process Control 2015;15:11–17.
[24] Hilton ML. Wavelet and wavelet packet compression of electrocardiograms. IEEE Trans Biomed Eng 1997;44:394–402.
[25] Fira CM, Goras L. An ECG signals compression method and its validation using NNs. IEEE Trans Biomed Eng 2008;55:1319–26.
Anukul Pandey was born in Jamui, Bihar, India. He received his B.Tech and M.Tech degrees in Electronics and Communication Engineering in 2011 and 2013, respectively. He is currently working towards the Ph.D. degree at Dr B R Ambedkar National Institute of Technology, Jalandhar, India. His research interests include biomedical signal/image processing, data compression, steganography and wireless communications.

Barjinder Singh Saini was born in Jalandhar, India. He received his B.Tech and M.Tech degrees in Electronics & Communication Engineering, and then obtained his Ph.D. degree in Engineering on "Signal Processing of Heart Rate Variability" from Dr. B. R. Ambedkar National Institute of Technology, Jalandhar. His areas of interest are medical image processing, digital signal processing and embedded systems.

Butta Singh received his B.Tech, M.Tech and Ph.D. degrees from Guru Nanak Dev Engineering College, Ludhiana, SLIET, Longowal, and National Institute of Technology, Jalandhar, India, respectively. He is presently serving as an Assistant Professor at Guru Nanak Dev University, Regional Campus, Jalandhar, India. His research interests include signal and image processing, in particular applied to biomedical applications.

Neetu Sood has worked as an assistant professor in the Department of Electronics and Communication Engineering at Dr B R Ambedkar National Institute of Technology, Jalandhar, India, since 2007. Her research interests include wireless communication system design, OFDM based systems, channel modelling and its simulation, biomedical signal processing and plant consciousness.