ASCII-character-encoding based PPG compression for tele-monitoring system


Biomedical Signal Processing and Control 31 (2017) 470–482


ASCII-character-encoding based PPG compression for tele-monitoring system Sourav Kumar Mukhopadhyay, M. Omair Ahmad ∗ , M.N.S. Swamy Department of Electrical and Computer Engineering, Concordia University, 1455 De Maisonneuve Blvd. West, Montreal, Quebec, H3G 1M8, Canada

Article info

Article history: Received 15 June 2016; received in revised form 23 September 2016; accepted 26 September 2016.

Keywords: ASCII character; compression; heart rate; photoplethysmogram; tele-monitoring; web-application

Abstract

Photoplethysmogram (PPG) is becoming one of the most important techniques for assessing several vascular parameters, but the area of digitized PPG signal compression still remains largely unexplored. This paper presents an efficient PPG compression algorithm developed on the basis of five sequential steps: noise elimination through a Butterworth low-pass filter, down-sampling, inter-sample difference computation, grouping, and American Standard Code for Information Interchange (ASCII) character encoding of the reduced dataset. The compressed file is then either uploaded to a Hypertext Preprocessor (PHP)-based web-application or transmitted to the doctor's mobile phone as a short message service (SMS) message, depending upon the availability and feasibility of services. The efficiency of the compression algorithm is measured in terms of compression ratio, and the performance of the reconstruction protocol is tested through a variety of numerical assessments such as the cross-correlation coefficient and percent root-mean-square difference. As per the gold-standard subjective measure, the mean opinion score of the reconstructed signal is found to be 'very good'. Finally, heart rate is also calculated, with no more than 1.68% error, from the compressed data without actually reconstructing the signal. © 2016 Elsevier Ltd. All rights reserved.

1. Introduction

Photoplethysmogram (PPG) is a non-invasive measurement technique, developed by A.B. Hertzman in 1938 [1], that measures relative blood-volume changes in the blood vessels. The signal can be easily acquired by using a light source (usually infrared) to send light into the tissue and then measuring, via a photosensitive element, the amount of backscattered light corresponding to the variation of the blood volume [2]. The measurement is absolutely independent of individual skin color, thickness and blood volume [3]. Since 1987, the technique has been accepted as the standard measure of oxygen-saturation level by the International Standards Organization (ISO) and the European Committee for Standardization [4]. PPG is a low-frequency signal and does not contain clinical information above 15 Hz [5,6]. The use of low-pass or band-pass filters of different orders (up to 8th order) with cutoff frequencies up to 40 Hz has been reported for de-noising the PPG signal [7,8]. Recently, researchers have hypothesized the ability to extract additional information embedded in the PPG signal and it has

∗ Corresponding author. E-mail addresses: s [email protected] (S.K. Mukhopadhyay), [email protected] (M.O. Ahmad), [email protected] (M.N.S. Swamy). http://dx.doi.org/10.1016/j.bspc.2016.09.021 1746-8094/© 2016 Elsevier Ltd. All rights reserved.

become an indispensable tool for detecting and diagnosing heart rate (HR) [9], heart rate variability (HRV) [10], blood oxygen saturation (SpO2) [11], blood pressure [12], respiratory rate [13], sleep apnea [14], blood glucose [15], etc. In [14], a study of HRV for obstructive sleep apnea syndrome screening is presented. A PPG-based non-invasive blood-glucose estimation technique is reported in [15]. A robust PPG feature extraction algorithm (detection of systolic and diastolic waves) is proposed in [16] for estimating arterial stiffness, hypertension and cardiovascular ageing using these extracted features. Recent research findings have also shown that the PPG signal contains sufficient information to be used in biometric applications. A PPG-based robust face detection method in video streams is proposed by Gibert et al. in [17]. Since bio-signals are highly subjective, symptoms of physical or cardiac abnormalities may appear at random in the timescale [18]. Hence, bio-signal pattern analysis has to be carried out for extended periods of time (24-h monitoring) [10], and the volume of data handled is therefore enormous. Researchers from different parts of the world are continuing to develop dedicated, high-performance and reliable biomedical signal compression-reconstruction algorithms exploiting various signal processing techniques. Standard compression algorithms such as Huffman coding (HC) and run-length encoding (RLE) are sometimes used as part of a multi-layered compression algorithm [19,20].

S.K. Mukhopadhyay et al. / Biomedical Signal Processing and Control 31 (2017) 470–482

Only a few attempts have been made so far to address the issue of PPG data compression [19,21–25]. In [22,23], Reddy et al. proposed a technique for the removal of motion artifacts (MA) from a PPG signal using cycle-by-cycle Fourier series analysis (CFSA). The CFSA method represents each PPG cycle with a reduced set of Fourier coefficients, which enhances the storage efficiency by 12 times. In [21], delta modulation (DM) is used to compress the digitized PPG signal, where the step size is fixed at 1/2^16. A major disadvantage of using a fixed and very small step size is that it cannot track high-slope regions and introduces slope overload. On the other hand, to get better performance, i.e., less reconstruction error, the PPG signal must be sampled at a much higher rate, which increases the size of the data. Hence, DM-based techniques might not be suitable for PPG signals recorded at very low sampling rates. However, DM-based techniques produce a train of '1's at each rising edge of the PPG cycle, which can be used to count the number of systolic peaks and hence obtain the HR from the compressed data. A real-time, low-complexity and lossless PPG compression technique based on second-order delta encoding and HC is proposed in [19]. Alam et al. proposed a modified delta-modulation and RLE-based PPG compression technique in [25]. They categorize the whole PPG signal into two zones, viz., 'complex' or 'plain', based on thresholding. These zones are quantized and then a compression algorithm is applied. However, the compression ratio obtained using these two algorithms ([19,25]) is too low for real-time remote monitoring applications. Moreover, since HC and RLE transform the signal into a completely new domain, it is also not possible to calculate the HR from the compressed data using these two methods. In [39], a new paradigm, called the compressive sampling (CS) technique, is used to acquire the PPG signal.
Since the PPG signal is sparse in the frequency domain, the use of a non-uniform-sampling-based CS technique could be beneficial in acquiring the PPG signal [40]. This enables the acquisition of the PPG signal at a rate below the Nyquist rate and results in significant power savings. These advantages are exploited in [39], where it is shown that using CS-based techniques the data size can be reduced by a factor of 30 over that of uniform sampling. Moreover, the Lomb-Scargle periodogram technique is used in [39] for the spectral analysis of the compressively sampled PPG signals, and the spectrum thus obtained is used to calculate the average HR and HRV with standard accuracy. In [41], a very low-power (172 μW), fully integrated CS-based PPG acquisition and real-time HR and HRV extraction system was developed. The algorithms and techniques presented in [39,41] are able to calculate HR from the compressed PPG. As described in [39], the main problem associated with the CS-based data acquisition technique is that the compressed data need to be processed through a complex convex optimization technique to be fully restored.
The algorithm proposed in this paper differs from the above two in many aspects: (1) the compression operation is performed in [39,41] at the time of PPG acquisition, whereas the proposed compression algorithm is applicable to an already-acquired PPG signal; (2) the proposed algorithm reconstructs the time-domain PPG signal using the reverse of the compression algorithm, and the reconstruction process does not depend on the compression ratio, whereas the algorithm proposed in [39] is unable to reconstruct the PPG signal if the compression ratio exceeds a certain limit; and (3) since the proposed compression algorithm is applicable only to uniformly-sampled PPG data, the sampling instants do not have to be stored, whereas the technique proposed in [41] needs to store the non-uniform sampling instants corresponding to each compression ratio in a lookup table. Compression of biomedical signals is essential not only for reducing the storage overload, but also for efficient use of transmission bandwidth in real-time tele-monitoring applications over wired/wireless communication networks. In recent years, a number of mobile tele-monitoring systems have been designed using


different communication technologies such as the global system for mobile communication (GSM), code division multiple access and wireless mesh networks [26]. Short message service (SMS) was also used in [27,28] to transmit compressed biomedical signals to a remote healthcare center or to an expert. A number of web-applications and cloud computing-based tele-monitoring systems have been reported in recent years [29–31]. In [30], Hsieh et al. proposed a cloud and pervasive computing-based 12-lead ECG tele-diagnosis system. A system for monitoring discharged patients' health-related quality of life (HRQoL) and vital signals in domestic care is proposed in [31]. It is predicted that by the year 2050, 20% of the global population will be at least 60 years of age [32]. Tele-monitoring/tele-healthcare could be a way to defer institutionalizing older persons and to reduce overall expenditure. The objective of all these tele-monitoring systems is to continuously monitor patients' vital signs and health status to ensure better healthcare service. Insufficient healthcare services and an acute shortage of medically-trained personnel are serious healthcare issues in developing and underdeveloped nations [28]. The motivation behind the proposed PPG compression and tele-monitoring architecture is to develop a user-friendly and fast PPG processing system which can efficiently compress and transmit the digitized PPG signal, thereby spreading healthcare facilities beyond the hospital boundary. In this research work, dedicated PPG signal compression and reconstruction algorithms are developed. First, the PPG signal is de-noised using a Butterworth low-pass filter and the sampling frequency (SF) is reduced. Then the first difference of the down-sampled PPG signal is computed. These differences are amplified, neighboring integers are logically merged (grouping), and the result is finally written to the output as the corresponding ASCII characters.
It is well known that the size of an ASCII character is 1 byte (8 bits) and a single byte covers the numeric range 0 through 255. Hence, all the information that is to be printed in the output file must lie within this numeric range. Having ASCII characters in the output file offers several advantages. First, the file can be transmitted using most communication protocols (GSM, General Packet Radio Service (GPRS), web-based systems, SMS, etc.) without any further processing; second, standard ASCII compression algorithms (HC, RLE, etc.) can be used to further recompress the data. The protocols used in this research for remote data transmission are a web-application and SMS. The PPG signal is reconstructed using the reverse programming approach. The algorithm can also calculate the HR directly from the compressed PPG data, thereby reducing delay. A few ASCII-character-encoding-based ECG compression techniques have been proposed by Mukhopadhyay et al. in [33–35]. The PPG compression algorithm presented here differs from those in many aspects:

(1) The ECG compression techniques in [33–35] are implemented on the C platform, whereas MATLAB is used here for coding and developing the algorithm.
(2) The algorithms in [33,34] do not include any down-sampling operation.
(3) The main compression step, i.e., the technique for merging neighboring integers, is completely different. Several merging schemes, such as merging 2 integers in both the forward and the backward directions, are implemented in [33], whereas only the forward-merging technique is used in our PPG compression algorithm. The handling of non-merged integers also differs. Moreover, the algorithms in [33–35] store additional data-bytes with the merged integers for proper data reconstruction.
(4) In this compression algorithm, special care has been taken to encode the first PPG sample, which ensures better sig-


date [19]. Therefore, if the amplitude difference between neighboring samples is found to be ≥ ±0.1 mV, it is fixed at ±0.099 mV. As an example, let an array (say x) contain the PPG signal and another array (say d) contain its first difference:

x[1]–x[9]: −5.023  −5.022  −5.022  −5.034  −5.024  −5.015  −5.022  −5.002  −5.002
d[1]–d[8]:  0.001   0   −0.012   0.01   0.009   −0.007   0.02   0

2.2. Encoding the first sample

Fig. 1. Flowchart of the PPG compression algorithm.

nal reconstruction than that in the algorithms presented in [33–35].

The first-difference operation leaves the first-PPG-sample unchanged. If there is a drift in the PPG baseline or the signal is highly amplified during acquisition, the signal may start from higher amplitude and hence, it might not be possible to represent it by a valid ASCII character (0–255). As a consequence, the data reconstruction would not be perfect. To overcome this, the firstPPG-sample is encoded using 3-ASCII characters. The amplitude part of the sample is broken into two integers and the third byte is used to encode the ‘sign’ of the sample. A binary zero or one is used to denote whether the sample is positive or negative, respectively. The encoding algorithm is given below.
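The original listing of this encoding step does not survive the extraction. A minimal Python sketch is given below (the paper's implementation is in MATLAB; the helper name is ours), inferred from the decode example in Section 3.2, where F = 50, L = 23, sign = 1 recovers ((50 × 100) + 23)/1000 = 5.023 with the sign applied afterwards:

```python
def encode_first_sample(x):
    """Encode the first PPG sample as 3 bytes: two amplitude bytes (F, L)
    plus a sign byte (0 = positive, 1 = negative).

    Sketch inferred from the decode rule ((F * 100) + L) / 1000 in the
    reconstruction section; assumes |x| <= 25.5 so that F fits in a byte.
    """
    v = round(abs(x) * 1000)   # amplify to an integer (3 decimal places)
    F, L = divmod(v, 100)      # split the amplitude across two bytes
    sign = 1 if x < 0 else 0   # third byte encodes the sign
    return F, L, sign
```

For example, a first sample of −5.023 is encoded as (50, 23, 1), matching the worked example in Section 3.2.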

The remainder of this paper is organized as follows. The proposed PPG-signal compression algorithm is presented in Section 2 and the signal reconstruction algorithm is described in Section 3. The robustness and efficiency of the proposed algorithm at different sampling rates are demonstrated in Section 4. This section also describes the HR calculation technique, using an empirically determined method, from the compressed PPG data, and the application of the proposed PPG compression algorithm in a tele-monitoring system using a web-application and SMS. The time complexity of each of the PPG compression, reconstruction and HR calculation algorithms is also included in Section 4. The performance of other PPG compression algorithms is compared with the proposed method in Section 5. Finally, a few other advantages of using this PPG compression algorithm are discussed and conclusions are drawn in Section 6.

2. PPG compression algorithm

The compression algorithm is divided into five parts: preprocessing and first-difference computation (Section 2.1), encoding the first sample (Section 2.2), sign byte generation and amplification (Section 2.3), grouping (Section 2.4) and ASCII character encoding (Section 2.5). Each of these parts is described below. The flowchart of the compression algorithm is shown in Fig. 1.

2.1. Preprocessing and first-difference computation

At first, the PPG signal is passed through a Butterworth low-pass filter to eliminate high-frequency noise above 25 Hz, and then the SF is reduced to one half, i.e., if the SF of the original PPG signal is 500 Hz, it is reduced to 250 Hz. The first difference of the down-sampled PPG signal is computed using the following equation, where x(i) and x(i − 1) are the present and previous samples, respectively:

d(i) = x(i) − x(i − 1)    (1)
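The down-sampling and first-difference steps can be sketched as follows; this is our illustrative Python/NumPy reconstruction (the paper uses MATLAB), and it assumes the Butterworth low-pass filtering has already been applied:

```python
import numpy as np

def preprocess(x, factor=2):
    """Down-sample the (already low-pass filtered) PPG by `factor` and
    compute the inter-sample first difference d(i) = x(i) - x(i-1).

    Covers only the two array steps of Section 2.1; the filtering
    stage is assumed done upstream. Helper name is ours.
    """
    x = np.asarray(x, dtype=float)
    x_ds = x[::factor]    # keep every `factor`-th sample (halves the SF)
    d = np.diff(x_ds)     # first difference, length len(x_ds) - 1
    return x_ds, d
```

For a 500 Hz recording, `factor=2` yields the 250 Hz sequence used in the rest of the pipeline.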

Characteristically, PPG is a very low-frequency, low-amplitude signal, and it has been experimentally observed that the amplitude difference between two neighboring samples (after SF reduction) rarely exceeds ±0.1 mV even at a 125 Hz sampling rate, which is the lowest sampling rate for PPG signals reported in the literature to

2.3. Sign byte generation and amplification

The first sample is encoded using the algorithm explained above; the remaining portion of the signal is compressed in a different way. Since the size of an ASCII character is 8 bits, the compression algorithm deals with 8 samples at a time and generates ASCII characters in a logical manner. Every sample (except the first sample) of the above array (d) is checked to identify positive and negative numbers: a binary '0' or '1' is assigned to each positive or negative number, respectively. Following this rule, for the above d array, the binary string '00100100' is obtained and its decimal equivalent is calculated taking the leftmost bit as the MSB [33]. For this particular example, the decimal equivalent is 36, which is used as the sign byte of those 8 samples. After sign-byte generation, negative numbers of the d array (if any) are made positive, and every element of the d array is multiplied by 1000, i.e., d′[i] = 1000 × d[i], to get integer numbers. Here, the amplification factor is selected on the basis that standard biomedical databases record samples up to three decimal places of accuracy.
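These two operations can be sketched in Python (an illustrative reconstruction; the helper name is ours):

```python
def sign_byte_and_amplify(d8):
    """Build the sign byte for a block of 8 differences and amplify them.

    A '1' bit marks a negative difference (leftmost bit corresponds to
    the first of the 8 samples); the differences are then made positive
    and scaled by 1000 to integers, as in Section 2.3.
    """
    bits = ''.join('1' if v < 0 else '0' for v in d8)
    sign_byte = int(bits, 2)                     # e.g. '00100100' -> 36
    amplified = [round(abs(v) * 1000) for v in d8]
    return sign_byte, amplified
```

Applied to the example d array (0.001, 0, −0.012, 0.01, 0.009, −0.007, 0.02, 0), this yields the sign byte 36 and the amplified integers 1, 0, 12, 10, 9, 7, 20, 0.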


2.4. Grouping

Grouping plays an important role in achieving higher compression performance. The grouping technique represents those 8 amplified integers with a reduced number of values in a logical manner. Different grouping possibilities are considered here.

TYPE I: If all 8 integers in the d′ array are the same, they are represented by a single integer. This type of grouping arises when the signal exhibits a constant slope.

TYPE II: If TYPE I grouping is not possible, it is checked whether two neighboring integers of the d′ array are less than 10. If yes, these two are grouped into a single integer; otherwise, 100 is added to each of them. This addition is performed to differentiate between 'grouped' and 'non-grouped' integers. Grouping is done using the following technique: (d′[i] × 10) + d′[i + 1]. Grouped and non-grouped integers are saved in a different array (say the e array). In the compressed data file, ASCII-character sets are separated by an 'ENTER' character whose decimal equivalent is 10; therefore, if any grouped integer is found to be equal to 10, it is immediately changed to 11. Let us take an example
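The TYPE I and TYPE II rules can be sketched as below. This is our reading of Section 2.4 (helper name ours); in particular, we interpret "100 is added to each of them" as applying to both members of a pair that fails the <10 test, which keeps the output length between 4 and 8 as the paper states:

```python
def group(amplified):
    """TYPE I / TYPE II grouping of a block of 8 amplified integers.

    TYPE I: eight identical integers collapse to a single integer.
    TYPE II: each forward pair with both members below 10 merges into
    d'[i]*10 + d'[i+1]; otherwise 100 is added to each member so that
    grouped (<100) and non-grouped (>=100) values can be told apart.
    A grouped value equal to 10 would collide with the 'ENTER'
    separator (decimal 10), so it is changed to 11.
    """
    if len(set(amplified)) == 1:               # TYPE I: constant slope
        return [amplified[0]]
    e = []
    for i in range(0, len(amplified), 2):      # forward merging, in pairs
        a, b = amplified[i], amplified[i + 1]
        if a < 10 and b < 10:
            g = a * 10 + b
            e.append(11 if g == 10 else g)     # avoid the separator value
        else:
            e.extend((a + 100, b + 100))       # non-grouped, marked by +100
    return e
```

For the running example 1, 0, 12, 10, 9, 7, 20, 0 this produces the six values 11, 112, 110, 97, 120, 100; note that the first pair (1, 0) grouped to 10 and was bumped to 11, which is why the reconstructed difference array in Section 3 shows 1, 1 rather than 1, 0.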

Fig. 2. Flowchart of the PPG reconstruction algorithm.

(e.g. +1514443XXXX 500 240). The compression algorithm is executed iteratively until all the PPG samples are compressed. 3. PPG reconstruction algorithm

TYPE III: If all 8 integers of the d′ array are grouped and all of the grouped values are <16, they are then re-grouped according to their binary bit patterns. Let us take an example (this is a special case).

The PPG signal reconstruction algorithm is developed by following an approach, which is the reverse of the compression algorithm and here again, the algorithm is divided into five parts: reading ASCII characters (Section 3.1), ungrouping (Section 3.2), sign-byte regeneration (Section 3.3), PPG sample re-generation (Section 3.4) and time scale and output file generation (Section 3.5). Each of these parts is now described sequentially. Flowchart of the reconstruction algorithm is shown in Fig. 2. 3.1. Reading ASCII characters

Since all the four elements of the above e array are less than 16, each of them is representable by a 4-bit binary number (nibble) and two nibbles can be concatenated to form a byte (8-bits). According to the given example, e[1] contains 12 whose binary is ‘1100’ and e[2] contains 13 whose binary is ‘1101’. These two binary strings are concatenated. For this particular case we get ‘11001101’ whose decimal equivalent is 205. Then, the next two integers are also regrouped using the same technique.
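The nibble concatenation described above can be sketched as follows (an illustrative reconstruction; the helper name is ours):

```python
def regroup_nibbles(e4):
    """TYPE III re-grouping: four grouped values, each below 16, are
    packed pairwise as 4-bit nibbles into two bytes, e.g. 12 ('1100')
    and 13 ('1101') concatenate to '11001101' = 205."""
    assert len(e4) == 4 and all(0 <= v < 16 for v in e4)
    return [(e4[0] << 4) | e4[1], (e4[2] << 4) | e4[3]]
```

Thus a grouped array beginning 12, 13 produces 205 as its first output byte, matching the worked example.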

One set of characters is taken from the compressed PPG data file and their corresponding decimal values are saved in an array (say e array). The very first ASCII-character-set contains 3 characters. For example, let us suppose that the first character-set contains the following characters.

2.5. ASCII character encoding

Out of these three, the first one represents the sign and the next two represent the amplitude. From the second set onwards, every character set contains a variable number of characters ranging from 2 to 8. Let us take an example.

All the necessary information is printed in the output file in the form of ASCII character. At first, the sign byte is printed. The TYPE I grouping technique outputs only one integer and TYPE III grouping technique outputs only two re-grouped integers. In case of TYPE II, the number of grouped or non-grouped integers may vary from 4 to 8. However, all these grouped, non-grouped and re-grouped integers are printed in the output file. The name of the compressed PPG data file is given maintaining the format: ‘Patient’s Phone Number SF in Hertz Time duration of the original PPG signal in second(s)

Here, the first one represents the sign-byte and the next integers that follow are grouped/non-grouped values.


3.2. Ungrouping The first PPG sample is re-generated using just the reverse approach and the decoded sample is printed in the output file. The decoding algorithm is given below.

d′[1]–d′[8]:  1   1   −12   10   9   −7   20   0

3.4. PPG sample regeneration

Now, every element of the d′ array is divided by 1000, i.e., d[i] = d′[i]/1000. First differentiation of the PPG signal was performed in Section 2.1; hence, the reverse operation (cumulative summation starting from the decoded first sample) is performed to obtain the original samples. The original PPG samples are stored in another array (say the x array). Let us take an example.
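This inversion of the first difference can be sketched as below (our Python/NumPy reconstruction; the helper name is ours):

```python
import numpy as np

def regenerate_samples(first_sample, d_signed):
    """Invert the first difference of Section 2.1: rescale the signed,
    amplified differences by 1000 and cumulatively sum them starting
    from the decoded first PPG sample."""
    d = np.asarray(d_signed, dtype=float) / 1000.0
    return first_sample + np.concatenate(([0.0], np.cumsum(d)))
```

With the first sample −5.023 and the signed differences 1, 1, −12, 10, 9, −7, 20, 0 from the running example, this regenerates nine samples from −5.023 to −5.001 (the second sample is −5.022; the small deviation from the original −5.022, 0 pair traces back to the 10-to-11 separator substitution in Section 2.4).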

Here, for example, F = 50, L = 23 and sign = 1. Therefore, the first PPG sample = ((50 × 100) + 23)/1000 = 5.023 and, since sign = 1, the first PPG sample becomes −5.023. Grouped, non-grouped or re-grouped integers of the e array are ungrouped using the reverse programming logic of grouping, and saved in another array (say the d′ array). If the number of integers (excluding the sign byte) in the e array is one, it signifies that TYPE I grouping has been performed, and all 8 ungrouped integers are regenerated easily. If the number of integers (excluding the sign byte) in the e array is two, it signifies that TYPE III grouping has been performed: both integers are converted into their corresponding 8-bit binary equivalents, each 8-bit string is divided into two nibbles, and the decimal equivalents of those nibbles are evaluated. These 4 integers are then ungrouped into eight using the reverse programming approach of TYPE II grouping. If the number of integers (excluding the sign byte) in the e array varies from 4 to 8, it signifies that TYPE II grouping has been performed, and they are ungrouped using the reverse programming logic. Let us take an example.
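The three ungrouping cases can be sketched together (our illustrative reconstruction; the helper name is ours):

```python
def ungroup(e):
    """Invert the grouping of Section 2.4 (sign byte excluded).

    One value  -> TYPE I: repeat it 8 times.
    Two values -> TYPE III: split each byte back into two 4-bit
                  nibbles, then ungroup those as TYPE II values.
    4-8 values -> TYPE II: values >= 100 are non-grouped (subtract
                  100); the rest split into their tens/units digits.
    """
    if len(e) == 1:                            # TYPE I
        return [e[0]] * 8
    if len(e) == 2:                            # TYPE III: bytes -> nibbles
        e = [n for byte in e for n in (byte >> 4, byte & 0x0F)]
    out = []
    for v in e:                                # TYPE II
        if v >= 100:
            out.append(v - 100)                # non-grouped integer
        else:
            out.extend(divmod(v, 10))          # grouped pair: tens, units
    return out
```

Applied to the running example 11, 112, 110, 97, 120, 100, this recovers the eight unsigned integers 1, 1, 12, 10, 9, 7, 20, 0.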

Here, the 3rd, 4th, 6th and 7th positions of the e array contain integers that are greater than or equal to 100, which signifies that these are 'non-grouped' integers; hence, 100 is subtracted from each of them. The rest of the elements of the e array are 'grouped' integers and are ungrouped using the reverse algorithm.

3.3. Sign byte regeneration

In Section 2.3, negative numbers of the d array were made positive in order to group them properly. Hence, in the signal reconstruction stage, the reverse operation must be performed to properly reconstruct the PPG signal. In Section 2.3, the sign byte was generated by converting the 8 sign bits into their decimal equivalent; therefore, in this section, the sign byte is decoded back into its 8-bit binary equivalent. Here, for example, the sign byte is 36, whose binary is '00100100'. If any bit in the binary string is '1', the corresponding ungrouped integer is made negative. Let us take an example.
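The sign-byte decoding step can be sketched as below (our illustrative reconstruction; the helper name is ours):

```python
def apply_sign_byte(sign_byte, ungrouped8):
    """Decode the sign byte to its 8-bit binary string (leftmost bit
    first) and negate each ungrouped integer whose bit is '1'."""
    bits = format(sign_byte, '08b')            # e.g. 36 -> '00100100'
    return [-v if bit == '1' else v for bit, v in zip(bits, ungrouped8)]
```

For sign byte 36 and the ungrouped integers 1, 1, 12, 10, 9, 7, 20, 0, the third and sixth values are negated, giving 1, 1, −12, 10, 9, −7, 20, 0.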

Down-sampling was performed in Section 2.1 to reduce the sampling frequency of the PPG signal; hence, up-sampling must be performed at this stage. The up-sampling operation is implemented using the linear interpolation method [42]. For example, if the SF of the original PPG signal is 500 Hz, it is reduced to 250 Hz in Section 2.1 and finally restored to 500 Hz through up-sampling.

3.5. Time scale and output file generation

Since the SF of the original PPG signal is embedded in the file name, the sampling interval, and hence the time scale, can be generated easily. Finally, the reconstructed PPG samples are printed in the output file along with the generated time scale. The reconstruction algorithm is executed repeatedly until all the ASCII characters of the compressed data file are decoded.

4. Results

A variety of numerical measures, such as percent root-mean-square difference (PRD), normalized PRD (PRDN), cross-correlation (CC) and root-mean-square error (RMSE), are available to determine whether the reconstructed signal jeopardizes the clinical morphology [36]. PRDN is the amplitude-normalized version of PRD and does not depend on the signal's average value. The compression ratio (CR) is one of the figures of merit of a biomedical signal compression algorithm; it is the ratio of the size of the original data file to that of the compressed data file [27]. The quality score (QS) is another numerical measure often used to judge the overall performance of a biomedical signal compression algorithm; the higher the QS, the better the performance [33]. In probability theory and statistics, the coefficient of variation (Cv), which is the ratio of the standard deviation to the mean, is used for comparing the amount of deviation of one data series with another [35]; the lower the Cv, the better the performance.
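The fidelity measures named above can be computed as sketched below; the formulas follow the usual definitions (the paper cites [36] for the exact expressions, so treat these as assumptions), and the helper name is ours:

```python
import numpy as np

def fidelity_metrics(x, y):
    """PRD and PRDN (in %), cross-correlation and RMSE between an
    original signal x and its reconstruction y, using the standard
    definitions (assumed; see the references cited in the text)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    e = x - y
    prd  = 100 * np.sqrt(np.sum(e**2) / np.sum(x**2))
    prdn = 100 * np.sqrt(np.sum(e**2) / np.sum((x - x.mean())**2))
    cc   = np.corrcoef(x, y)[0, 1]
    rmse = np.sqrt(np.mean(e**2))
    return prd, prdn, cc, rmse
```

A perfect reconstruction gives PRD = PRDN = RMSE = 0 and CC = 1; the quality score is then obtained as QS = CR/PRD.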
Digitized PPG data collected from 4 different databases are used as the evaluation testbed of the implemented algorithm. The sampling frequencies of the collected PPG data vary across databases, ranging from 125 to 500 Hz. The average CR, PRD, PRDN, QS, CC and RMSE obtained for different types of PPG signals are given in Table 1; the Cv of each of these parameters is also calculated. After testing the compression algorithm on this large number of PPG samples (885 × 10^3), it can be concluded that the algorithm offers average CR, PRD, PRDN, QS, CC and RMSE of about 30.27, 0.22%, 0.33%, 511.33, 0.974344 and 0.06, respectively; the Cv values of CR, PRD, PRDN, QS, CC and RMSE are 0.14, 1.35, 1.02, 1.24, 0.03


Fig. 3. File: 055, Database: MIMICDB.

Fig. 4. File: t106s, Database: challenge/2015/set.

Fig. 5. File: 230, Database: MIMICDB.

Fig. 6. File: 444, Database: MIMICDB.


Table 1
CR, PRD (%), PRDN (%), QS, CC and RMSE obtained for different types of PPG signals.

File      CR      PRD    PRDN   QS        CC         RMSE

challenge/2010/set-a, sampling rate 125 Hz, 7.5 × 10^3 samples/file
a00       33.01   0.06   0.42   549.33    0.948953   0.03
a05       31.64   0.05   0.40   697.46    0.930837   0.02
a07       26.51   0.05   0.14   533.84    0.992585   0.02
a09       38.08   0.44   1.26   86.55     0.919678   0.02
a10       25.95   0.08   0.35   338.49    0.970401   0.04
a14       29.17   0.06   0.34   514.53    0.962439   0.03

challenge/2015/set, sampling rate 250 Hz, 15 × 10^3 samples/file
a103l     32.09   0.02   0.18   1604.50   0.983932   0.01
a104s     33.94   0.03   0.13   1131.33   0.991659   0.02
t106s     33.90   0.01   0.05   3390.00   0.998938   0.00
t107l     32.80   0.04   0.17   820.00    0.985044   0.02
t108s     39.78   0.02   0.14   1989.00   0.990366   0.01
t114s     27.32   0.03   0.09   910.67    0.996106   0.02
t117L     28.00   0.02   0.06   1400.00   0.998406   0.01
v100s     27.12   0.10   0.26   271.20    0.979805   0.03
v101L     27.41   0.13   0.34   210.85    0.962831   0.04
v102s     27.40   0.09   0.24   304.44    0.973152   0.02
v111L     29.94   0.03   0.08   998.00    0.996567   0.01
v113L     26.55   0.03   0.10   885.00    0.994827   0.02

mimic2wdb/31, sampling rate 125 Hz, 7.5 × 10^3 samples/file
3101815   26.36   0.06   0.13   439.33    0.99247    0.03
3103379   24.07   0.09   0.07   278.13    0.986342   0.04
3104031   26.57   0.05   0.14   531.40    0.990279   0.03
3104760   26.32   0.13   0.63   201.29    0.937462   0.06

mimicdb, sampling rate 500 Hz, 30 × 10^3 samples/file
209       28.48   0.48   0.52   59.29     0.967961   0.16
210       27.30   0.12   0.13   227.50    0.990654   0.05
221       33.01   0.96   0.97   34.36     0.903050   0.18
224       27.95   0.04   0.05   704.11    0.998930   0.01
225       27.89   0.46   0.49   61.16     0.973028   0.17
230       30.14   0.13   0.14   231.85    0.996845   0.05
240       30.47   0.07   0.08   435.29    0.997453   0.03
253       30.11   0.14   0.15   222.94    0.996157   0.05
284       27.01   0.21   0.22   130.00    0.993055   0.08
403       28.30   0.22   0.23   128.77    0.992522   0.09
408       27.09   0.67   0.74   40.69     0.930182   0.24
411       35.34   0.24   0.25   149.17    0.995149   0.05
437       27.55   0.25   0.27   110.32    0.992554   0.09
439       27.21   0.52   0.55   52.55     0.966709   0.19
453       28.09   0.12   0.12   234.08    0.991882   0.04
454       43.45   0.09   0.09   482.78    0.995584   0.01
466       28.97   0.09   0.10   321.89    0.995068   0.04
471       29.72   0.69   0.72   43.38     0.93102    0.25
476       35.86   1.16   1.16   30.94     0.95906    0.11
480       39.73   0.07   0.07   171.61    0.997801   0.01
484       34.18   1.18   1.38   29.00     0.849061   0.18
Average   30.27   0.22   0.33   511.33    0.974344   0.06

Table 2
Performance of the algorithm at different sampling rates.

SF (Hz)   CR      PRD (%)   PRDN (%)   QS        CC         RMSE
125       28.77   0.11      0.39       417.04    0.963145   0.03
250       30.52   0.05      0.15       1159.58   0.987636   0.02
500       30.85   0.38      0.40       185.79    0.972082   0.10

and 1.09, respectively, which are small. Such small Cv values suggest that the algorithm is consistent enough to process different types of PPG signals irrespective of the embedded clinical signature. A quantitative analysis of the clinical acceptability of a reconstructed bio-signal in terms of PRD is given in [25] and [28]; as per that analysis, the reconstructed signal is considered 'very good' if the PRD value is <2%. Figs. 3–6 show the original PPG, its reconstructed version and their difference. The compression algorithm also works efficiently on noise-corrupted PPG signals. Fig. 7 shows such a signal, which is partially corrupted by intense MA noise. From this figure, it can be noted that the algorithm may not be able to reconstruct the high-slope regions of the MA noise, but it can efficiently reconstruct the noise-free part of the signal without jeopardizing the morphology, i.e., the noisy part of the signal does not have any impact on the reconstruction of the noise-free section. The performance of the algorithm at different sampling rates is summarized in Table 2. Besides these, a subjective measure is also important, since it reflects the true quality of the reconstructed bio-signal. Hence, a semi-blind mean opinion score (MOS) test [37,38] was carried out with nine evaluators, including four medical doctors and five researchers working in the bio-signal, bio-image and digital signal processing domains. Three main PPG features, viz. the systolic peak, diastolic peak and dicrotic notch, are considered here for the MOS test. Eval-

S.K. Mukhopadhyay et al. / Biomedical Signal Processing and Control 31 (2017) 470–482

477

Fig. 7. File: 219, Database: MIMICDB. First-part of the signal is corrupted by intense MA noise. The noise-free section is efficiently reconstructed and the error in the noise-free section is shown separately.

uators were provided with the original and the reconstructed PPG signals and they were asked to compare these features and to give a score in accordance with their similarities. The quality ratings are 5 (Identical), 4 (Very good), 3 (Good), 2 (Not bad) and 1 (Completely different). MOS of the PPG signal is calculated using Eq. (2).

Table 3
MOS error (%).

MOS Error Type          Systolic Peak   Diastolic Peak   Dicrotic Notch
Selected PPG features   4.44%           6.68%            6.67%
Overall PPG signal      5.56%

MOS = \frac{1}{N_{ev} N_{fea}} \sum_{ev=1}^{N_{ev}} \sum_{fea=1}^{N_{fea}} Q_r(ev, fea)    (2)

where N_{ev} is the total number of evaluators, N_{fea} is the total number of PPG features, and Q_r(ev, fea) is the quality rating of the fea-th feature given by the ev-th evaluator.

The MOS value of a particular PPG feature, the fea-th, is calculated using Eq. (3):

MOS(fea) = \frac{1}{N_{ev}} \sum_{ev=1}^{N_{ev}} Q_r(ev, fea)    (3)

The gold-standard MOS errors for each feature and for the overall signal are given by Eqs. (4) and (5), respectively:

MOS_{fea} = \left(1 - \frac{MOS(fea)}{5}\right) \times 100\%    (4)

MOS_{ev} = \left(1 - \frac{MOS}{5}\right) \times 100\%    (5)
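Eqs. (2)–(5) reduce to a few lines of code. The sketch below is illustrative Python (not the authors' implementation); it assumes a ratings matrix indexed as ratings[ev][fea] with scores from 1 to 5.

```python
def mos_scores(ratings):
    """ratings[ev][fea]: quality rating (1-5) given by evaluator ev to feature fea.
    Returns (MOS, per-feature MOS, per-feature MOS error %, overall MOS error %)."""
    n_ev, n_fea = len(ratings), len(ratings[0])
    # Eq. (3): per-feature MOS, averaged over evaluators
    mos_fea = [sum(r[f] for r in ratings) / n_ev for f in range(n_fea)]
    # Eq. (2): overall MOS, averaged over evaluators and features
    mos = sum(mos_fea) / n_fea
    # Eqs. (4) and (5): gold-standard MOS errors
    err_fea = [(1.0 - m / 5.0) * 100.0 for m in mos_fea]
    err_all = (1.0 - mos / 5.0) * 100.0
    return mos, mos_fea, err_fea, err_all
```

For example, two evaluators rating two features [[5, 4], [4, 4]] yield per-feature MOS values of 4.5 and 4.0 and an overall MOS error of 15%.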

MOS errors of the overall PPG signal and of the various features are shown in Table 3. According to the MOS error criteria [37], both parameters fall under the 'very good' category.

4.1. HR calculation from compressed PPG data

In an emergency, it often becomes necessary to calculate the patient's HR immediately in order to take precautionary measures. The proposed algorithm serves efficiently in such situations: HR can be calculated directly from the compressed PPG data, without having to decompress the signal, thereby saving precious time. From the compressed data file, the ASCII values are read into an array. Two facts are hypothesized in the HR calculation: (1) grouping is less probable around each rising edge of the PPG near the systolic peak, because of the sharp slope, which results in higher ASCII values; and (2) Type I and Type III groupings are more probable around each falling edge, i.e., around the diastolic peak and dicrotic notch, because of the reduced slope, which results in smaller ASCII values. Fig. 8 demonstrates this. Consequently, small ASCII values in the array represent the falling edges and large values represent the rising edges of the original PPG signal. The maximum ASCII value in the array is found, and all values within 90% of that maximum are selected. From these selected samples, the local maximum within a window is identified and taken as a systolic peak. The proposed compression algorithm processes only PPG-voltage values, so the compressed data contain no time-domain information; the width of the window is therefore chosen empirically and is set to 40 characters. The compression ratio, threshold value and window size are strongly correlated: the higher the threshold, the wider the window should be, and vice versa; similarly, the higher the CR, the smaller the window. This combination of window size (40 characters) and threshold (90%) was chosen on the basis of our experiments. Both thresholds (amplitude and window size) are fixed in the algorithm, so neither the patient nor the doctor needs to select or modify them. The patient or the doctor



Fig. 8. Rising edge of the PPG signal has higher slope than that of the falling edge (the yellow shaded region). Compressed PPG-data corresponding to the rising edge contains larger ASCII values than that of the falling edge. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 9. Plot of compressed data (ASCII characters); local-maximum values are marked as the systolic peaks.

only needs to choose the HR-calculation option to measure the HR directly from the compressed data. The time length of the original PPG signal can be obtained from the file name, and hence HR can easily be calculated using Eq. (6). The systolic peak detection operation on compressed PPG is shown in Fig. 9.

HR = \frac{\text{Number of detected systolic peaks} \times 60}{\text{Time length of the original PPG signal (s)}}    (6)
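The peak-count procedure described above can be sketched as follows. This is illustrative Python, not the paper's MATLAB code; the 90% amplitude threshold and the 40-character window come from the text, while the window-scanning logic is an assumption.

```python
def heart_rate_from_ascii(codes, duration_s, frac=0.90, window=40):
    """codes: ASCII values of the compressed file, read into an array.
    duration_s: time length of the original PPG, taken from the file name.
    Returns (estimated HR in bpm, indices taken as systolic peaks)."""
    level = frac * max(codes)      # keep samples within 90% of the maximum
    peaks = []
    i = 0
    while i < len(codes):
        if codes[i] >= level:
            j = min(i + window, len(codes))
            # local maximum inside one 40-character window -> systolic peak
            peaks.append(max(range(i, j), key=lambda t: codes[t]))
            i = j                  # skip the remainder of this window
        else:
            i += 1
    return len(peaks) * 60.0 / duration_s, peaks   # Eq. (6)
```

On a synthetic 4 s stream with four well-separated large codes, the sketch returns four peaks and hence 60 bpm.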

A comparison between the actual HR and the estimated one is given in Table 4. From this comparison, it is clear that the algorithm can detect systolic peaks from compressed PPG with adequate accuracy (99.35%), and that the HR estimation error is also small (1.68%). The systolic peak detection accuracy and the HR estimation error are measured using Eqs. (7) and (8), respectively:

\text{Detection accuracy} = \frac{\text{Correctly detected systolic peaks}}{\text{Total no. of systolic peaks}} \times 100\%    (7)

\text{HR estimation error} = \frac{|\text{Total no. of peaks} - \text{Detected peaks}|}{\text{Total no. of peaks}} \times 100\%    (8)
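Using the per-record peak counts of Table 4, Eqs. (7) and (8) can be checked numerically. This is an illustrative Python sketch; it assumes Eq. (7) is evaluated on the column totals and Eq. (8) is applied per record and then averaged, which is how the reported 99.35% and 1.68% figures appear to be obtained.

```python
def peak_detection_stats(actual, detected):
    """actual/detected: per-record systolic peak counts (cf. Table 4)."""
    # Eq. (7): overall detection accuracy from the column totals
    accuracy = 100.0 * sum(detected) / sum(actual)
    # Eq. (8) per record, then averaged over all records
    errors = [100.0 * abs(a - d) / a for a, d in zip(actual, detected)]
    return accuracy, sum(errors) / len(errors)

# Per-record counts in Table 4's file order (039 ... 484)
actual   = [8, 7, 7, 5, 6, 6, 5, 5, 5, 8, 8, 7, 5, 5, 6, 9, 7, 6, 6, 6, 2, 5, 7, 8, 6]
detected = [8, 7, 6, 5, 6, 7, 5, 5, 5, 8, 8, 7, 5, 5, 6, 8, 7, 6, 6, 6, 2, 5, 7, 8, 6]
acc, err = peak_detection_stats(actual, detected)   # ~99.35 %, ~1.68 %
```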

Time complexity is another figure of merit; it specifies the run-time of an algorithm as a function of the length of the input data. Table 5 gives a run-time analysis of the PPG compression, reconstruction and HR calculation algorithms. It shows that the run-times of all three vary almost linearly with the number of PPG samples, and that the HR-estimation time is much shorter than the reconstruction time.

4.2. Tele-monitoring system

After compression, the compressed file can either be saved for later diagnostic purposes or sent to a diagnostic center or to an expert physician in an emergency. Two different communication protocols, a web-based system and SMS, are used for this purpose. The protocol through which the data is to



Table 4
Heart rate (beats per minute, bpm) estimation from the compressed PPG signal.
Database: mimicdb; duration of each PPG file: 4 s.

File    Systolic Peaks   Detected Systolic Peaks   Actual HR (bpm)   Estimated HR (bpm)   Error (%)
039     8                8                         120               120                  0
055     7                7                         105               105                  0
208     7                6                         105               90                   14.29
221     5                5                         75                75                   0
224     6                6                         90                90                   0
225     6                7                         90                105                  16.67
230     5                5                         75                75                   0
240     5                5                         75                75                   0
253     5                5                         75                75                   0
276     8                8                         120               120                  0
281     8                8                         120               120                  0
284     7                7                         105               105                  0
401     5                5                         75                75                   0
403     5                5                         75                75                   0
408     6                6                         90                90                   0
409     9                8                         135               120                  11.11
437     7                7                         105               105                  0
439     6                6                         90                90                   0
444     6                6                         90                90                   0
453     6                6                         90                90                   0
454     2                2                         30                30                   0
471     5                5                         75                75                   0
474     7                7                         105               105                  0
476     8                8                         120               120                  0
484     6                6                         90                90                   0
Total   155              154                                         Average              1.68

Table 5
Run time analysis.

No. of samples   File size (KB)   Compression run time   Reconstruction run time   HR calculation run time
500              6.16             0.08 s                 0.17 s                    0.001 s
1000             12.20            0.16 s                 0.28 s                    0.003 s
2000             25.30            0.33 s                 0.53 s                    0.004 s
4000             51.50            0.66 s                 1.1 s                     0.005 s
8000             104              1.33 s                 2.14 s                    0.008 s
16,000           214              2.71 s                 4.36 s                    0.012 s
32,000           441              5.36 s                 8.66 s                    0.014 s

be sent depends on a few important factors: (1) whether an internet connection is available at both the patient's and the doctor's sites, (2) the signal strength of the cellular network, and (3) the SMS and internet data-transfer costs.

4.2.1. Web-application based storage and transmission
A PHP (Laravel 5 framework)-based web-application has been developed (domain name: http://biotelemoni.com) for efficient and secure compressed-PPG storage (MySQL database) and remote tele-monitoring. The front-end of the system has been developed using JavaScript, Cascading Style Sheets version 3 (CSS3) and Hyper Text Markup Language version 5 (HTML5). The application serves two types of users: (1) administrators and (2) doctors/physicians. At the administrator's site, the compressed file is uploaded to the web-application. While uploading, a client-side data validator checks whether the entire file has been uploaded successfully. After a successful upload, the administrator can assign one or more doctors from the database to that particular PPG file, and an auto-generated email, sent using the standard email protocol, is delivered to the doctor. Uploaded files are archived automatically in zip format after seven days, or the administrator can delete them. Presently, the web server can store 20 GB of data, and the storage capacity can be increased if required. Only the administrator is allowed to edit, remove or insert doctors' details in the database, and only the concerned doctor(s) can download the compressed-PPG data file from the website after signing in with a registered user-ID and password. After reconstructing the compressed data, the doctor can visually inspect the PPG signal. Since the contact number is embedded in the file name, the doctor can also communicate with the patient if necessary. Fig. 10 shows a snapshot of the website.

4.2.2. Data transmission-reception through SMS
The protocol proposed by Mukhopadhyay et al. in [27] for compressed-ECG transmission by SMS is used here. A GSM transceiver module, connected to the computer through the RS-232 serial port, is used to send the SMS. The proposed algorithm splits the compressed PPG data file into a number of sub-files, and these sub-files are transmitted to the physician's mobile phone through the modem using AT commands.

The PPG compression, reconstruction and HR calculation algorithms are implemented on the MATLAB 7.1 platform with a low-resource computer having a Pentium 4 CPU at 2.66 GHz, a 32-bit Windows XP operating system and 1 GB of DDR1 RAM. Research on a low-power, low-resource hardware implementation of the proposed compression, reconstruction, HR calculation and tele-monitoring system is being carried out and will be reported in our future work.

5. Performance comparison
The CR, PRD, PRDN and QS of the proposed method and of other PPG compression methods are compared in Table 6. The numerical outputs of the other methods are taken from the respective literature. The performance of a PPG compression algorithm depends on various factors such as the signal bandwidth, the number of quantization levels, the noise level and the sampling frequency.

Fig. 10. Snapshot of the website.

Table 6
Performance comparison.

Method              SF (Hz)    Resolution   CR           PRD (%)       PRDN (%)   QS
Gupta et al. [19]   125        10-bit       2.223        0.127         0.187      17.50
Alam et al. [25]    125        10-bit       3.84         5.82          7.57       0.70
CFSA [22]           200        16-bit       12           –             –          –
DM [21]             1 k        16-bit       16           3.92 × 10⁻⁴   –          4.0816 × 10⁴
CS [39]             125        10–12 bit    8, 10, 30ᵃ   –             –          –
Proposed            125–500    10–12 bit    30.27        0.22          0.33       511.33

ᵃ Signal reconstruction is not possible at CR = 30.

The proposed method achieves a better compression ratio than the other methods reported in Table 6. The reason for calculating the PRDN is that, in the case of an added DC level (a shift in the signal baseline) in the original PPG signal, the denominator in the expression for the PRD becomes large, which in turn increases the QS value; hence the error is also represented by the PRDN. The algorithm also produces a lower PRD and PRDN than those reported in [25], and the QS of the proposed method is better than those reported in [19,25]. The CC and RMSE of the other methods are not available in the literature and hence are not included in Table 6. The lower the MOS error, the better the quality of the reconstructed signal; the small MOS_{fea} values of the systolic peak, diastolic peak and dicrotic notch confirm the fidelity of the reconstructed signal and the efficiency of the algorithm.

6. Conclusion and discussion
An efficient, reliable and low-complexity digitized PPG compression algorithm has been proposed in this paper. The algorithm attains an attractive compression ratio (30.27) without jeopardizing the clinical morphology, even after down-sampling. Since the algorithm provides an extremely low reconstruction error, it can be

considered as lossless. The error between the original and reconstructed PPG signals is mostly due to the down-sampling operation. Since the context of the proposed work is the wearable long-term monitoring of vital signals, the power budget is extremely limited. The proposed PPG processing system is primarily implemented on a software platform, and since both the compression and reconstruction algorithms involve only a few simple logical and numerical operations (addition, subtraction, decimal-to-binary and binary-to-decimal conversion, etc.), it should be straightforward to implement them on low-resource hardware with low power requirements. The compressed file contains ASCII characters, so a suitable ASCII compression algorithm (Huffman coding, run-length encoding, etc.) can be used to re-compress the data further. A major advantage is that the algorithm can process digitized PPG signals regardless of their sampling frequency. Moreover, the time-domain PPG signal is encoded into ASCII after compression, which acts as a form of signal encryption and also helps increase information security. Using SMS as a carrier, the compressed data can be sent across international boundaries, but the cost is high; data transfer through the designed web-application is much cheaper. Therefore, using the web-application, the signal can be reconstructed at any hospital or diagnostic center anywhere in the world, enabling the patient's party to consult with


specialists around the globe. The application software designed for the patient end is very simple to operate, and hence a person with basic computer proficiency or a semi-trained medical staff member can use it. Nowadays, most PPG acquisition systems provide an option to visualize and store the acquired data on a computer; such an acquisition unit, along with the proposed compression, transmission and HR calculation modules, could be a good choice for PPG-based tele-healthcare applications. The results and robustness analyses show that the proposed method is a promising candidate for both online and offline tele-monitoring systems. Research on PPG acquisition, noise (ambient light, MA noise etc.) elimination, feature extraction, disease classification and low-resource hardware implementation will be undertaken in our future investigations.

Acknowledgements
The authors express their deepest gratitude to Dr. Umapada Das (M.B.B.S., MD), Dr. Uma Mandal (M.B.B.S.), Dr. Suranjan Bhattacharya (M.B.B.S.) and Dr. Debarghya Chattopadhyay (M.B.B.S.), Kolkata, India, and to five researchers (Mr. Sujan Roy, Mr. Elnasser Abdelhafez, Md. Omar Faruqe, Mr. B. K. Shreyamsha Kumar and Mr. P. Venkataswamy) of the Department of Electrical and Computer Engineering, Concordia University, for their time and effort in carrying out the subjective quality estimation. The authors also convey their gratitude to Md. Omar Faruqe for his support in developing the web-application. This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and in part by the Regroupement Stratégique en Microélectronique du Québec (ReSMiQ).

References
[1] A.B. Hertzman, The blood supply of various skin areas as estimated by photoelectric plethysmograph, Am. J. Physiol. 124 (2) (1938) 328–340.
[2] M. Elgendi, On the analysis of fingertip photoplethysmogram signals, Curr. Cardiol. Rev.
8 (February (1)) (2012) 14–25, http://dx.doi.org/10.2174/ 157340312801215782. [3] M. Alnaeb, N. Alobaid, A. Seifalian, D. Mikhailidis, G. Hamilton, Optical techniques in the assessment of peripheral arterial disease, Curr. Vasc. Pharmacol. 5 (1) (2007) 53–59, http://dx.doi.org/10.2174/ 157016107779317242. [4] A.B. Shang, R.T. Kozikowski, A.W. Winslow, S. Weininger, Development of a standardized method for motion testing in pulse oximeters, Anesth. Analg. 105 (December (6)) (2007) S66–S77, http://dx.doi.org/10.1213/01.ane. 0000278132.12257.cd. [5] A. Reisner, P.A. Shaltis, D. McCombie, H.H. Asada, Utility of the photoplethysmogram in circulatory monitoring, Anesthesiology 108 (May (5)) (2008) 950–958, http://dx.doi.org/10.1097/ALN.0b013e31816c89e1. [6] J. Allen, Photoplethysmography and its application in clinical physiological measurement, Physiol. Meas. 28 (February (3)) (2007) 1–39, http://dx.doi.org/ 10.1088/0967-3334/28/3/R01. [7] S.A. Akar, S. Kara, F. Latifoglu, V. Bilgic, Spectral analysis of photoplethysmographic signals: the importance of preprocessing, Biomed. Signal Process. Control 8 (January (1)) (2013) 16–22, http://dx.doi.org/10. 1016/j.bspc.2012.04.002. [8] A. Keikhosravi, H. Aghajani, E. Zahedi, Discrimination of bilateral finger photoplethysmogram responses to reactive hyperemia in diabetic and healthy subjects using a differential vascular model framework, Physiol. Meas. 34 (May (5)) (2013) 513–525, http://dx.doi.org/10.1088/0967-3334/34/5/513. [9] E. Khan, F.A. Hossain, S.Z. Uddin, S.K. Alam, M.K. Hasan, A robust heart rate monitoring scheme using photoplethysmographic signals corrupted by intense motion artifacts, IEEE Trans. Biomed. Eng. 63 (March (3)) (2016) 550–562, http://dx.doi.org/10.1109/tbme.2015.2466075. [10] E. Gil, P. Laguna, J.P. Martinez, O.B. Perez, A.G. Alberola, L. Sornmo, Heart rate turbulence analysis based on photoplethysmography, IEEE Trans. Biomed. Eng. 60 (November (11)) (2013) 3149–3155, http://dx.doi.org/10.1109/tbme. 
2013.2270083. [11] K.A. Reddy, B. George, N.M. Mohan, V.J. Kumar, A novel calibration-free method of measurement of oxygen saturation in arterial blood, IEEE Trans. Instrum. Meas. 58 (May (5)) (2009) 1699–1705, http://dx.doi.org/10.1109/ tim.2009.2012934. [12] A. Suzuki, K. Ryu, Feature selection method for estimating systolic blood pressure using the Taguchi method, IEEE Trans. Ind. Inf. 10 (May (2)) (2014) 1077–1085, http://dx.doi.org/10.1109/tii.2013.2288498.


[13] W. Karlen, S. Raman, J.M. Ansermino, G.A. Dumont, Multiparameter respiratory rate estimation from the photoplethysmogram, IEEE Trans. Biomed. Eng. 60 (July (7)) (2013) 1946–1953, http://dx.doi.org/10.1109/tbme. 2013.2246160. [14] E. Gil, M. Mendez, J.M. Vergara, S. Cerutti, A.M. Bianchi, P. Laguna, Discrimination of sleep-apnea-related decreases in the amplitude fluctuations of PPG signal in children by HRV analysis, IEEE Trans. Biomed. Eng. 56 (April (4)) (2009) 1005–1014, http://dx.doi.org/10.1109/tbme.2008.2009340. [15] E.M. Moreno, Non-invasive estimate of blood glucose and blood pressure from a photoplethysmograph by means of machine learning techniques, Artif. Intell. Med. 53 (2) (2011) 127–138, http://dx.doi.org/10.1016/j.artmed.2011. 05.001. [16] M. Elgendi, Detection of c, d, and e waves in the acceleration photoplethysmogram, Comput. Methods Programs Biomed. 117 (November (2)) (2014) 125–136, http://dx.doi.org/10.1016/j.cmpb.2014.08.001. [17] G. Gibert, D.D. Alessandro, F. Lance, Face detection method based on photoplethysmography, in: 10th IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS-2013), Krakow, Poland, 27–30 August, 2013, pp. 449–453, http://dx.doi.org/10.1109/AVSS.2013.6636681. [18] U.R. Acharya, J.S. Suri, J.A.E. Spaan, S.M. Krishnan, Advances in Cardiac Signal Processing, Springer-Verlag, New York, Berlin Heidelberg, 2007, http://dx.doi. org/10.1007/978-3-540-36675-1. [19] R. Gupta, Lossless compression technique for real-time photoplethysmographic measurements, IEEE Trans. Instrum. Meas. 64 (March (4)) (2015) 975–983, http://dx.doi.org/10.1109/tim.2014.2362837. [20] T.C. Wu, K.C. Hung, H.W. Lee, C.T. Ku, Realization of RRO-NRDPWT-based ECG signal compression with modified run-length coding, in: 2nd Int. Conf. on Education Technology and Computer (ICETC-2010), 22–24 June, Shanghai, China, 2010, http://dx.doi.org/10.1109/ICETC.2010.5529944, V5-71–V5-75. [21] K.S. Chong, K.B. Gan, E. Zahedi, M.A.M. 
Ali, Data compression technique for high resolution wireless photoplethysmograph recording system, in: Proc. of the 2013 IEEE Int. Conf. on Space Science and Communication (IconSpace-2013), 1–3 July, Melaka, Malaysia, 2013, pp. 345–349, http://dx. doi.org/10.1109/IconSpace.2013.6599493. [22] K.A. Reddy, B. George, V.J. Kumar, Use of fourier series analysis for motion artifact reduction and data compression of photoplethysmographic signals, IEEE Trans. Instrum. Meas. 58 (May (5)) (2009) 1706–1711, http://dx.doi.org/ 10.1109/tim.2008.2009136. [23] K.A. Reddy, B. George, V.J. Kumar, Motion artifact reduction and data compression of photoplethysmo-graphic signals utilizing cycle by cycle fourier series analysis, in: IEEE Int. Conf. on Instrumentation and Measurement Technology (IMTC-2008), Victoria, Vancouver Island, Canada, May 12–15, 2008, pp. 176–179, http://dx.doi.org/10.1109/IMTC.2008. 4547026. [24] K.S. Chong, E. Zahedi, K.B. Gan, M.A.M. Ali, Evaluation of the effect of step size on delta modulation for photoplethysmogram compression, in: 4th Int. Conf. on Electrical Engineering and Informatics (ICEEI-2013), Malaysia, 2013, pp. 24–25, http://dx.doi.org/10.1016/j.protcy.2013.12.263, Procedia Technology, vol. 11, June, 815–822. [25] S. Alam, R. Gupta, Zonal complexity based measure for lossy compression of photoplethysmogram using delta encoding, in: IEEE Int. Conf. on Signal Processing, Informatics, Communication and Energy Systems (SPICES-2015), Kozhikode, India, 19–21 Feb, 2015, pp. 1–5, http://dx.doi.org/10.1109/SPICES. 2015.7091454. [26] M. Siraj, K.A. Bakar, Minimizing interference in wireless mesh networks based telemedicine system, J. Comput. Sci. 8 (8) (2012) 1263–1271, http://dx.doi. org/10.3844/jcssp.2012.1263.1271. [27] S.K. Mukhopadhyay, S. Mitra, M. Mitra, A combined application of lossless and lossy compression in ECG processing and transmission via GSM-based SMS, J. Med. Eng. Technol. 39 (February (2)) (2015) 105–122, http://dx.doi.org/10. 
3109/03091902.2014.990159. [28] B.S. Chandra, C.S. Sastry, S. Jana, Reliable resource-constrained telecardiology via compressive detection of anomalous ECG signals, Comput. Biol. Med. 66 (November) (2015) 144–153, http://dx.doi.org/10.1016/j.compbiomed.2015. 09.005. [29] X. Wang, Q. Gui, B. Liu, Z. Jin, Y. Chen, Enabling smart personalized healthcare: a hybrid mobile-cloud approach for ECG telemonitoring, IEEE J. Biomed. Health Inf. 18 (May (3)) (2014) 739–745, http://dx.doi.org/10.1109/JBHI.2013. 2286157. [30] J.C. Hsieh, M.W. Hsu, A cloud computing based 12-lead ECG telemedicine service, BMC Med. Inf. Decis. Making 12 (February (77)) (2012) 1–12, http:// dx.doi.org/10.1186/1472-6947-12-77. [31] C. Cheng, T.H. Stokes, M.D. Wang, caREMOTE: the design of a cancer reporting and monitoring telemedicine system for domestic care, in: Conf. Proc. IEEE Eng. Med. Biol. Soc. (IEEE EMBS-2011), Aug. 30–Sept. 3, Massachusetts, USA, 2011, pp. 3168–3171, http://dx.doi.org/10.1109/IEMBS.2011.6090863. [32] M. Chan, D. Esteve, C. Escriba, E. Campo, A review of smart homes-present state and future challenges, Comput. Methods Programs Biomed. 91 (July (1)) (2008) 55–81, http://dx.doi.org/10.1016/j.cmpb.2008.02.001. [33] S.K. Mukhopadhyay, S. Mitra, M. Mitra, A lossless ECG data compression technique using ASCII character encoding, Comput. Electr. Eng. 37 (July (4)) (2011) 486–497, http://dx.doi.org/10.1016/j.compeleceng.2011.05.004. [34] S.K. Mukhopadhyay, S. Mitra, M. Mitra, An ECG signal compression technique using ASCII character encoding, Measurement 45 (July (6)) (2012) 1651–1660, http://dx.doi.org/10.1016/j.measurement.2012.01.017.



[35] S.K. Mukhopadhyay, S. Mitra, M. Mitra, ECG signal compression using ASCII character encoding and transmission via SMS, Biomed. Signal Process. Control 8 (July (4)) (2013) 354–363, http://dx.doi.org/10.1016/j.bspc.2013.02.007. [36] A.S.A. Fahoum, Quality assessment of ECG compression techniques using a wavelet-based diagnostic measure, IEEE Trans. Inf. Technol. Biomed. 10 (January (1)) (2006) 182–191, http://dx.doi.org/10.1109/titb.2005.855554. [37] Y. Zigel, A. Cohen, A. Katz, The weighted diagnostic distortion (WDD) measure for ECG signal compression, IEEE Trans. Biomed. Eng. 47 (November (11)) (2000) 1422–1430, http://dx.doi.org/10.1109/tbme.2000.880093. [38] S. Padhy, L.N. Sharma, S. Dandapat, Multilead ECG data compression using SVD in multiresolution domain, Biomed. Signal Process. Control 23 (January (1)) (2016) 10–18, http://dx.doi.org/10.1016/j.bspc.2015.06.012. [39] V.R. Pamula, M. Verhelst, C.V. Hoof, R.F. Yazicioglu, A novel feature extraction algorithm for on the sensor node processing of compressive sampled photoplethysmography signals, in: Proc. of the IEEE Sensors 2015, 1–4 Nov., Busan, South Korea, 2015, pp. 1–4, http://dx.doi.org/10.1109/ICSENS.2015. 7370396.

[40] V.R. Pamula, M. Verhelst, C.V. Hoof, R.F. Yazicioglu, Computationally-efficient compressive sampling for low-power pulseoximeter system, in: Proc. of IEEE Biomedical Circuits and Systems Conference (BioCAS-2014), 22–24 Oct., Lausanne, Switzerland, 2014, pp. 69–72, http://dx.doi.org/10.1109/BioCAS. 2014.6981647. [41] P.V. Rajesh, J.M.V. Sarmiento, L. Yan, A. Bozkurt, C.V. Hoof, N.V. Helleputte, R.F. Yazicioglu, M. Verhelst, A 172 W compressive sampling photoplethysmographic readout with embedded direct heart-rate and variability extraction from compressively sampled data, in: IEEE International Solid-State Circuits Conference (ISSCC-2016), Jan. 31–Feb. 4, San Francisco, CA, 2016, pp. 386–387, http://dx.doi.org/10.1109/ISSCC.2016.7418069. [42] L. Tan, J. Jiang, Digital Signal Processing: Fundamentals and Applications, second ed., Elsevier, USA, 2013, Feb., ISBN-10: 0124158935.