Classification of lung sounds using higher-order statistics: A divide-and-conquer approach


Raphael Naves*, Bruno H.G. Barbosa, Danton D. Ferreira
Engineering Department, Federal University of Lavras, MG, Brazil

* Corresponding author at: Engineering Department, Federal University of Lavras, Campus Universitario, Caixa Postal 3037, CEP: 37200-000, Brazil. Tel.: +55 35 9194 4943. E-mail addresses: [email protected]fla.br (R. Naves), [email protected]fla.br (B.H.G. Barbosa), [email protected]fla.br (D.D. Ferreira).

Article history: Received 6 June 2015; received in revised form 18 February 2016; accepted 19 February 2016.

Keywords: Lung sounds; Pattern recognition; Higher-order statistics; Genetic Algorithm.

Abstract

Background and objective: Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training. If the physician does not have the proper training, he/she will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds.

Methods: We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used a cross-validation procedure to analyze the classifiers' performance and Tukey's Honestly Significant Difference criterion to compare the results.

Results: Our results showed that the Genetic Algorithm outperformed Fisher's Discriminant Ratio for feature selection. Moreover, each lung sound class had a distinct signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. In addition, the proposed divide-and-conquer approach can accurately classify different types of lung sounds. The best tree-based classifier obtained a classification accuracy of 98.1% on training data and 94.6% on validation data.

Conclusions: The proposed approach achieved good results even though it uses only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible.

© 2016 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Since antiquity, physicians have auscultated sounds inside the chest to identify signs of disease [1]. Lung sound auscultation by stethoscope is a simple, quick, and non-invasive method to provide diagnostic information about a patient's lungs [1]. However, proper use of this method depends on the ability of the physician to recognize normal and abnormal sounds generated by the human body. In addition, lung sounds are non-stationary signals, which makes them both difficult to analyze and hard to distinguish when using traditional auscultation methods.


Thus, the use of an electronic stethoscope together with a pattern recognition system helps to overcome the limitations of traditional auscultation, providing an efficient method for clinical diagnosis [2].

Lung sounds are classified as either normal (healthy individuals) or adventitious (abnormal). Adventitious sounds are divided into two categories: continuous sounds (wheezes and rhonchi) and discontinuous sounds (crackles). According to the American Thoracic Society (ATS) [3], wheezing sounds are defined as high-pitched continuous sounds and rhonchi as low-pitched continuous sounds. The ATS [3] specifies that a wheeze contains a dominant frequency above 400 Hz, while rhonchi are characterized by a dominant frequency of about 200 Hz or less. A wheeze can be monophonic, if it contains a single frequency, or polyphonic, when several frequencies are perceived simultaneously [4]. The diseases associated with wheezing sounds are asthma, pneumonia, and bronchitis.

Crackles are discontinuous adventitious sounds caused by the sudden opening of small airways that had collapsed [2]. Crackles are usually defined by their time-domain features, such as initial deflection width (IDW) and two-cycle duration (2CD). According to the ATS [3], the average durations of the IDW and 2CD of fine crackles are 0.7 and 5 milliseconds (ms), respectively, and those of coarse crackles are 1.5 and 10 ms, respectively [5]. Fine crackles are heard in patients with pneumonia, pulmonary fibrosis, and congestive heart failure (CHF) [6]. Other categories of lung sounds include pleural rub, stridor, and squawks. Pleural rub is the characteristic sound produced when inflamed pleural surfaces rub together during respiration. Stridor is a very loud wheeze, and squawks are short inspiratory wheezes [1].

Several authors have used a variety of techniques for filtering, feature extraction, separation, and classification of lung sounds. For instance, to increase the accuracy of the forced oscillation technique (FOT) in categorizing the level of airway obstruction in patients with chronic obstructive pulmonary disease (COPD), [7] developed a pattern recognition system using the k-Nearest Neighbors (k-NN), random forest (RF), and support vector machine (SVM) classifiers. They used the Receiver Operating Characteristic (ROC) curve for performance evaluation, and the results achieved were higher than 0.9. The SVM classifier was used in [2] and [8] to classify lung sounds. In [2], normal, crackle, and rhonchus sounds were classified, and the authors used the frequency ratio of Power Spectral Density (PSD) values and the Hilbert-Huang Transform (HHT) method for feature extraction. The classifier achieved an accuracy above 90%. In [8], the authors proposed a time-frequency and time-scale analysis of pulmonary signals. They also used the k-NN and Multilayer Perceptron classifiers to distinguish crackling from non-crackling sounds and concluded that the SVM classifier achieved the best result, with a classification accuracy of 97.5%. Instantaneous kurtosis, a discriminating function, and entropy were used in [9] to classify respiratory sounds ranging from normal to continuous adventitious, including wheezing, stridor, and rhonchi. The authors achieved high classification accuracy (between 97.7% and 98.8%).


In [10], the authors also classified normal and continuous adventitious signals, but only wheeze signals were used as continuous adventitious signals. Using Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction and a Gaussian Mixture Model (GMM) to classify the signals, the authors achieved a classification accuracy of 94.2%. Finally, a technique to obtain the time-frequency representation (TFR) of thoracic sounds was proposed by [11]. Using time-frequency patterns, the authors assessed the performance of ten TFRs for heart, adventitious, and normal lung sounds. After simulations, they concluded that the best TFR performance was obtained with the Hilbert-Huang spectrum (HHS).

The purpose of our paper is to develop a pattern recognition technique to classify lung sounds, including normal, fine crackle, coarse crackle, monophonic wheezes and polyphonic wheezes. We used Higher-Order Statistics [13] to extract features, Genetic Algorithms (GAs) [14] to reduce dimensionality, and the k-Nearest Neighbors [15] and Naive Bayes [16] classifiers to recognize lung sound events. Our application of HOS to lung sound classification is based on the study reported in [12], which used the parametric approach of bispectrum estimation (the double Fourier transform of the third-order moment sequence) to reveal information about lung sounds, such as deviations from normality. After performing several experiments, the authors concluded that bispectrum-based methods are more suitable and robust than second-order-statistic-based methods for lung sound analysis and characterization, due to their superior performance in noisy environments. However, unlike [12], we use a reduced number of second-, third-, and fourth-order cumulants extracted from the sound time series as features. In addition to their noise immunity, cumulant-based features have the following advantages: (i) they are not dependent on moments of lower order, in contrast to moment-based features; and (ii) from a computational perspective, they are simpler than polyspectrum-based features. As such, cumulants provide a compact and representative feature vector signature for each sound class.

The analysis of adventitious sounds is helpful in diagnosing pulmonary dysfunction. In contrast to other studies in this area, this paper seeks to classify the five most important classes of lung sounds, including two types of crackles and wheezes that are linked to several lung diseases. The importance of this paper lies in its ability to provide classification of lung sounds that can be easily implemented in embedded systems, using a new system based only on HOS and a divide-and-conquer approach. The system developed herein is capable of efficiently distinguishing several lung sounds.

The materials and methods proposed in this paper are described in Section 2 and the results are presented in Section 3. Finally, Section 4 presents the conclusions and future work.

2. Materials and methods

Pattern recognition is a science that deals with the classification and description of objects in particular classes by observing their features [17].


Fig. 1 – Proposed pattern recognition system to classify lung sounds.

The pattern recognition system developed in this study follows the sequence shown in Fig. 1, and each step is described in detail below.

2.1. Input data

The real data set (respiratory sounds) used in this study was obtained from [1], which provides vesicular sounds, fine crackle, coarse crackle, monophonic and polyphonic wheezing sounds. The data set is composed of 36 samples: 8 vesicular respiratory cycles, 7 respiratory cycles with fine crackle, 13 with coarse crackle, 4 with monophonic wheezes, and 4 with polyphonic wheezes. The signals are sampled at 8 kHz.

The pre-processing stage and the construction of the data set are summarized in Fig. 2. The available lung sounds (signals) were divided (splitting and windowing) into non-overlapping 320-sample segments, which represent about 40 ms, as shown in Fig. 2. This procedure was applied to the entire available data set. Since crackles are discontinuous sounds and, according to [18] and [8], the duration of a crackle is less than 20 ms, some of the 320-sample segments do not contain this type of adventitious sound, as noted in Fig. 2 (step 2). Thus, only the segments that contain crackles were selected for the next step, signal normalization. The other respiratory sounds, normal and wheezes, are continuous sounds, which means segment selection was not required.

It is worth mentioning that the selection of segments containing a crackle is employed only during the design stage of the proposed pattern recognition system and is performed manually by a lung-sound expert. The objective of this step is to guarantee that the classifiers are trained using proper lung sound patterns. At the operation stage, only steps 1 and 3 are performed and the selection step is not necessary. Thus, non-overlapping segments of the lung sound under evaluation are extracted and normalized. If at least one extracted event (segment) is a crackle, the recording will be properly classified as a coarse or fine crackle lung sound.

The next step (Fig. 2, step 3) involves signal normalization. The selected events were normalized to zero mean and unit variance. Thus, we obtained 102 vesicular sound signals (events), 57 fine crackle signals, 57 coarse crackle signals, 51 monophonic wheezing signals, and 51 polyphonic wheezing signals. Fig. 3 illustrates one event of each type of lung sound randomly selected from the input data set.
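As a rough sketch of steps 1 and 3 (splitting and normalization), the snippet below cuts a recording into non-overlapping 320-sample windows and standardizes each one. The function names and the synthetic signal are our own illustration, not code from the paper, and the manual crackle selection of step 2 is not automated here.

```python
import numpy as np

def split_into_segments(signal, segment_len=320):
    """Split a lung sound signal into non-overlapping segments of
    segment_len samples (about 40 ms at 8 kHz); the leftover tail is discarded."""
    n_segments = len(signal) // segment_len
    return [signal[i * segment_len:(i + 1) * segment_len] for i in range(n_segments)]

def normalize(segment):
    """Normalize one segment to zero mean and unit variance."""
    segment = np.asarray(segment, dtype=float)
    return (segment - segment.mean()) / segment.std()

# Example: a synthetic 1-second recording standing in for a real respiratory cycle
fs = 8000
x = np.random.randn(fs)
events = [normalize(s) for s in split_into_segments(x, 320)]
print(len(events), len(events[0]))   # 25 segments of 320 samples each
```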

2.2. Feature extraction

Our study uses HOS as its feature extraction tool. Methods based on HOS are better suited to dealing with non-Gaussian processes and nonlinear systems [13]. The major advantage of using HOS as a feature extraction method for pattern recognition is its noise immunity.

Let x be a random process composed of real random variables with zero mean. The second-, third-, and fourth-order cumulants are given, respectively, by [13]:

$$
\begin{aligned}
\operatorname{cum}(x_1, x_2) &= E(x_1 x_2),\\
\operatorname{cum}(x_1, x_2, x_3) &= E(x_1 x_2 x_3),\\
\operatorname{cum}(x_1, x_2, x_3, x_4) &= E(x_1 x_2 x_3 x_4) - E(x_1 x_2)E(x_3 x_4) - E(x_1 x_3)E(x_2 x_4) - E(x_1 x_4)E(x_2 x_3),
\end{aligned}
\tag{1}
$$

where $E(\cdot)$ is the expected value operator.

Suppose that $x(t)$ is a stationary random process with zero mean and that its cumulants of order $k$, denoted by $C_{k,x}(\tau_1, \tau_2, \ldots, \tau_{k-1})$, where $\tau_1, \ldots, \tau_{k-1}$ are lags in time, are defined in terms of the signals $x(t), x(t+\tau_1), \ldots, x(t+\tau_{k-1})$. Setting $\tau_1 = \tau_2 = \tau_3 = \tau$, from (1) the second-, third-, and fourth-order cumulants can be rewritten as:

$$
\begin{aligned}
C_{2,x}(\tau) &= E\{x(t)\,x(t+\tau)\},\\
C_{3,x}(\tau) &= E\{x(t)\,x^2(t+\tau)\},\\
C_{4,x}(\tau) &= E\{x(t)\,x^3(t+\tau)\} - 3\,C_{2,x}(\tau)\,C_{2,x}(0).
\end{aligned}
\tag{2}
$$

For a discrete signal $x[n]$, (2) can be approximated using the circular method proposed by [19]:

$$
\begin{aligned}
C_{2,x}(\tau) &= \frac{1}{N}\sum_{n=0}^{N-1} x[n]\,x[\operatorname{mod}(n+\tau, N)],\\
C_{3,x}(\tau) &= \frac{1}{N}\sum_{n=0}^{N-1} x[n]\,x^2[\operatorname{mod}(n+\tau, N)],\\
C_{4,x}(\tau) &= \frac{1}{N}\sum_{n=0}^{N-1} x[n]\,x^3[\operatorname{mod}(n+\tau, N)] - \frac{3}{N^2}\left(\sum_{n=0}^{N-1} x[n]\,x[\operatorname{mod}(n+\tau, N)]\right)\left(\sum_{n=0}^{N-1} x^2[n]\right),
\end{aligned}
\tag{3}
$$

where $x \in \mathbb{R}^N$, $\tau = 0, 1, \ldots, N-1$, and mod is the modulo operator (that is, the remainder after integer division).

Fig. 2 – Lung sounds data set construction, a crackle example.

Fig. 4 illustrates the mean values (±2 standard deviations) of the second-, third-, and fourth-order cumulants extracted from the available vesicular and adventitious events. It can be inferred that each class has a different signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. It is also clear from Fig. 4 that, although the extracted features of the lung sounds have different means, their large standard deviations can make classification difficult. The classification task is even harder when the objective is to distinguish between coarse and fine crackle or between monophonic and polyphonic wheeze.
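As a concrete rendering of Eq. (3), the following is a minimal NumPy sketch of the circular cumulant estimators applied to one normalized 320-sample event. The function name and the choice to keep all N lags of every order are ours; the paper retains 160 second-order and 320 third- and fourth-order cumulants, presumably exploiting the symmetry of the second-order estimate.

```python
import numpy as np

def cumulants(x):
    """Second-, third- and fourth-order cumulants of a zero-mean segment x,
    estimated with the circular (modulo-N) method of Eq. (3) for all lags tau."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    c2, c3, c4 = np.empty(N), np.empty(N), np.empty(N)
    sum_x2 = np.sum(x ** 2)
    for tau in range(N):
        x_shift = np.roll(x, -tau)              # x[mod(n + tau, N)]
        c2[tau] = np.mean(x * x_shift)
        c3[tau] = np.mean(x * x_shift ** 2)
        c4[tau] = (np.mean(x * x_shift ** 3)
                   - 3.0 / N ** 2 * np.sum(x * x_shift) * sum_x2)
    return c2, c3, c4

# Feature vector of one event: concatenated cumulants of the three orders
segment = np.random.randn(320)
segment = (segment - segment.mean()) / segment.std()
c2, c3, c4 = cumulants(segment)
features = np.concatenate([c2, c3, c4])
```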

2.3. Dimensionality reduction

After applying (3) to each lung sound event, 160 second-order cumulants, 320 third-order cumulants, and 320 fourth-order cumulants were obtained. Since this is a high-dimensional feature space (800 features), it is prudent to reduce its dimensionality. The core purpose of this step is to build a compact and representative cumulant-based signature vector for each sound class. Thus, in order to reduce the computational cost of the classification system, we implemented a Genetic Algorithm (GA) to reduce the feature space dimensionality while maximizing classification performance.

Genetic Algorithms, a branch of evolutionary algorithms, are based on the biological process of natural evolution. Although the first study involving GA was presented by [14], this type of research only became widely used in the 1980s. The basic operating principle of Genetic Algorithms can be summarized by the steps presented in Fig. 5. The first step of the algorithm is to randomly create an initial population of individuals. Secondly, the fitness of each population member is calculated based on its performance on an objective function; the higher the fitness value, the better the individual's performance and thus the greater its chance of survival over generations. The best individuals are then selected using a selection procedure. After selection, standard crossover and mutation genetic operators are applied to the population. Subsequently, the best individual from the current population is included in the new one (elitism strategy). These procedures are repeated until the user-defined maximum number of generations is reached.

In our study, the GA is designed to minimize a cost function that combines the number of features (cumulants) used by the classifiers and their classification performance.


Fig. 3 – Lung sound events: (a) vesicular, (b) fine crackle, (c) coarse crackle, (d) monophonic wheeze, and (e) polyphonic wheeze.

Fig. 4 – Mean values (solid lines) and two standard deviations (dashed lines) of the second-, third-, and fourth-order cumulants of the classes: (a) vesicular, (b) fine crackle, (c) coarse crackle, (d) monophonic wheeze, and (e) polyphonic wheeze.


Fig. 5 – Steps of the Genetic Algorithm.

Fig. 6 – Individual evaluation.

Thus, the evaluation of each individual is given as:

$$
\text{Evaluation} = \text{ClassificationError} + 0.001 \cdot \text{NumberCumulants},
\tag{4}
$$

where ClassificationError is the classification error (in percent) of the evaluated individual and NumberCumulants is the number of features (cumulants) the individual uses to perform its task. The latter term was added to reduce the number of cumulants selected by the classification method: when distinct individuals obtain the same classification error, the one that selects fewer cumulants receives a better evaluation.

Fig. 6 illustrates the individual evaluation process. Each individual of the GA population is a binary sequence with 800 positions, where each position indicates whether the respective cumulant is used. After this feature selection procedure, classification is performed by a previously defined classification method (in this study, the k-NN and Naive Bayes classifiers) and, subsequently, fitness is evaluated with (4). It is important to note that the GA is run only during the design stage. At the operation stage, only the selected features are extracted from the lung sound signal and presented to the classifier.
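To show how Eq. (4) and the 800-bit chromosomes of Fig. 6 fit together, the sketch below evaluates one individual with a k-NN classifier. This is a minimal Python rendering (the authors used MATLAB); how the classification error is estimated inside the GA, the function names, and the random stand-in data are our assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate_individual(mask, X_train, y_train, X_val, y_val):
    """Fitness of one GA individual (Eq. (4)): classification error in percent
    plus a small penalty on the number of selected cumulants.
    `mask` is a 0/1 vector with 800 positions, one per cumulant."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:                      # an empty mask cannot classify anything
        return 100.0
    clf = KNeighborsClassifier(n_neighbors=5)   # k = 5, as in the paper
    clf.fit(X_train[:, selected], y_train)
    error = 100.0 * (1.0 - clf.score(X_val[:, selected], y_val))
    return error + 0.001 * selected.size

# Random stand-in data with the paper's subset sizes (219 training, 99 validation events)
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(219, 800)), rng.integers(0, 5, 219)
X_val, y_val = rng.normal(size=(99, 800)), rng.integers(0, 5, 99)
mask = rng.integers(0, 2, 800)
print(evaluate_individual(mask, X_train, y_train, X_val, y_val))
```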

2.4. Classification

The signals were classified using two different classification methods: k-Nearest Neighbors and Naive Bayes.

1) k-Nearest Neighbors: the k-Nearest Neighbors (k-NN) classifier was introduced in 1951 by Fix and Hodges [15]. This simple classification method finds, according to a distance function, the k nearest neighbors of the unknown object in the data set and uses the majority vote approach to predict the object's label [20]. In our study, the Euclidean distance was used and k was chosen to be 5. The Euclidean distance $d(x, y)$ between two instances $x$ and $y$ is defined as:

$$
d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}.
\tag{5}
$$

According to [20], the basic steps of the k-NN algorithm are:

• calculate the distances between the unknown object and all previously labeled objects;
• select the k objects closest to the unknown object;
• use the majority vote approach to predict the unknown object's class.

2) Naive Bayes classifier: the Naive Bayes classifier [16] learns probability distributions from the data and classifies an unknown instance by choosing the class with the highest posterior probability. The class is chosen to satisfy:

$$
\mathcal{H}_{MAP} = \arg\max_{\mathcal{H}_i \in \mathcal{H}} p(x \mid \mathcal{H}_i)\, p(\mathcal{H}_i),
\tag{6}
$$

where $p(\mathcal{H}_i)$ is the a priori probability and $p(x \mid \mathcal{H}_i)$ is the conditional probability density function of class $\mathcal{H}_i$, in which $x$ is the attribute value and $i$ corresponds to the class number. The conditional probability density function is based on the Gaussian distribution expressed by:

$$
p(x \mid \mathcal{H}_i) = \frac{1}{(2\pi\sigma_i^2)^{1/2}} \exp\left(-\frac{(x - \mu_i)^2}{2\sigma_i^2}\right),
\tag{7}
$$

with $\mu_i$ being the mean value of class $\mathcal{H}_i$ and $\sigma_i^2$ being its variance.


Assuming uniform a priori probabilities, (6) is simplified such that

$$
\mathcal{H}_{ML} = \arg\max_{\mathcal{H}_i \in \mathcal{H}} p(x \mid \mathcal{H}_i),
\tag{8}
$$

where $\mathcal{H}_{ML}$ is the Maximum Likelihood hypothesis.

After selecting the cumulants through the GA individual's binary sequence, the k-Nearest Neighbors and Naive Bayes classifiers are used to classify the lung sounds into vesicular, fine crackle, coarse crackle, monophonic and polyphonic wheezes.
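For concreteness, the snippet below shows scikit-learn equivalents of the two classifiers described above applied to a matrix of GA-selected cumulants. This is a sketch under our own assumptions (the paper used MATLAB and does not prescribe these APIs), with random data standing in for the 318 lung sound events; fixing uniform class priors in the Naive Bayes model mirrors the Maximum Likelihood simplification of Eq. (8).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# X: events x GA-selected cumulants; y: class labels (hypothetical stand-in data)
# 0 = vesicular, 1 = fine crackle, 2 = coarse crackle,
# 3 = monophonic wheeze, 4 = polyphonic wheeze
rng = np.random.default_rng(1)
X, y = rng.normal(size=(318, 223)), rng.integers(0, 5, 318)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")   # Eq. (5), k = 5
bayes = GaussianNB(priors=np.full(5, 0.2))                      # Gaussian class models, Eqs. (7)-(8)

knn.fit(X, y)
bayes.fit(X, y)
print("k-NN accuracy  :", knn.score(X, y))
print("Bayes accuracy :", bayes.score(X, y))
```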

3. Results and discussion

The input data set was divided into two subsets: a training set with 70% of the samples and a validation set with 30%. The training set is composed of 71 vesicular samples, 39 fine crackle samples, 39 coarse crackle samples, 35 monophonic and 35 polyphonic wheeze samples. The validation set is composed of 31 vesicular, 18 fine crackle, 18 coarse crackle, 16 monophonic and 16 polyphonic wheeze samples. Using this data set, 40 different training and validation subsets were randomly created (holdout cross-validation procedure). They were generated to evaluate and compare the performance of the classifiers. We used MATLAB® software to run the tests.

With the aim of finding the best results, trial-and-error tests were performed to configure the GA. The selection operator, crossover operator, mutation operator, and their probabilities were chosen based on an analysis of GA performance over 50 executions of the algorithm using a population of 100 individuals. The selected GA configuration was:

• Selection function: stochastic uniform.
• Mutation function: uniform.
• Crossover function: two points.
• Crossover probability: 0.8.
• Mutation probability: 0.25.
• Number of generations: 100.

To define which classifier would be used to classify the five types of lung sounds studied herein, the first attempt was to use the k-NN or the Naive Bayes classifier to assign all possible classes at once, i.e., a single classifier with five possible outputs. After training, the mean training and validation classification accuracies (and standard deviations) over the 40 different subsets were 92.3 ± 0.9% and 87.2 ± 1.4% for the k-NN classifier and 94.4 ± 0.9% and 88.7 ± 1.3% for the Bayes classifier. The mean number of cumulants was 316 ± 10 for the former and 223 ± 9 for the latter. The number of cumulants varied because the training set is different for each run of the algorithm (we used 40 different subsets) and the GA selects the features (cumulants) according to the classification accuracy and the number of cumulants used (see Eq. (4)). Therefore, the feature vector size is not fixed and is unlikely to be the same in all executions of the algorithm.

To demonstrate the efficiency of the GA, we implemented the Fisher's Discriminant Ratio (FDR) [17] feature selection tool, considering its multiclass case and using the same data sets. The Bayes classifier was employed, and we compared the results by applying Tukey's Honestly Significant Difference (HSD) criterion, based on the Studentized range distribution (95% confidence level), on validation data.
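The sketch below illustrates this comparison protocol: validation accuracies from the 40 holdout splits for two feature selection methods are compared with Tukey's HSD at the 95% confidence level. The accuracy values are placeholders rather than the paper's results, the authors used MATLAB rather than statsmodels, and the variable names are ours.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder validation accuracies over the 40 random holdout splits
# (illustrative values only, not the results reported in the paper)
rng = np.random.default_rng(2)
acc_ga = rng.normal(loc=89.0, scale=1.3, size=40)    # GA-selected cumulants
acc_fdr = rng.normal(loc=86.0, scale=1.5, size=40)   # FDR-selected cumulants

scores = np.concatenate([acc_ga, acc_fdr])
groups = np.array(["GA"] * 40 + ["FDR"] * 40)

# Tukey's Honestly Significant Difference at a 95% confidence level
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```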

Fig. 7 – Comparison between GA and FDR as feature selection tools using Tukey’s HSD criterion (95% confidence level) on validation data.

Analyzing Fig. 7, we can conclude that the GA achieved better results than FDR. Moreover, the GA is able to automatically define an appropriate number of features.

To improve the classification accuracy, a divide-and-conquer approach was implemented. The objective was to divide the problem into smaller sub-problems, in the hope that the solutions of the sub-problems would be easier to find. After some experimentation, we defined a classification tree (Fig. 8) in which a different classification method is implemented in each node. We tested many combinations of classes and classification methods (k-NN or Bayes) to arrive at the final classification tree, which presented the best performance.

In the first step (node 1), the lung sound is classified into one of the following three classes: vesicular sound, wheeze (monophonic or polyphonic), or crackle (fine or coarse). The classification result is produced by a k-NN classifier. This classifier used on average 223 ± 11 cumulants, and its training and validation accuracies were 96.2 ± 1.8% and 92.1 ± 1.9%, respectively. Although the number of cumulants used by the classifier is still high, it is important to note that the feature space was considerably reduced (from 800 parameters to approximately 223), and the validation performance suggests the classifier achieved a good capability for generalization. After the first step, if the lung sound was classified as a wheeze, a separate classifier (node 2) determined the sound classification as either a monophonic or a polyphonic wheeze.

Fig. 8 – Classification tree.


Table 1 – Confusion matrix of the classification performance on the validation data set (rows: actual class; columns: predicted class).

                     Vesicular      Fine crackle   Coarse crackle  Monophonic wheeze  Polyphonic wheeze
Vesicular            94.4 ± 1.5%    0              0               5.6 ± 1.5%         0
Fine crackle         0              91.9 ± 2.8%    8.1 ± 2.8%      0                  0
Coarse crackle       0              9.2 ± 3.2%     90.8 ± 3.2%     0                  0
Monophonic wheeze    0              0              0               91.9 ± 2.3%        8.1 ± 2.3%
Polyphonic wheeze    0              0              0               9.7 ± 3.3%         90.3 ± 3.3%

A Naive Bayes classifier was designed to solve this classification problem (Bayes 1). This classifier used on average 166 ± 3 cumulants, and its training and validation accuracies were 94.2 ± 1.7% and 91.1 ± 2.3%, respectively. If the lung sound was classified as a crackle in the first step, another classifier (node 3) was implemented to classify the crackle sound as either a fine or coarse crackle. This task was also performed by a Naive Bayes classifier (Bayes 2). This classifier used on average 167 ± 5 cumulants, and its training and validation accuracies were 95.3 ± 1.4% and 91.4 ± 2.5%, respectively. The overall training and validation classification accuracies of the built classification tree were 95.3 ± 1.6% and 92.2 ± 2.4%, respectively.

The confusion matrix obtained using the validation sets is shown in Table 1. From Table 1 we can see that the proposed lung sound classification approach achieved good classification performance, and it outperformed the classifiers that do not use the divide-and-conquer strategy. Considering that only one classification tree would be implemented in a practical application, we selected one of the best sets of results, which achieved 98.1% classification accuracy on training data and 94.6% on validation data.

The number of cumulants required to perform the classification depends on the respiratory sound. If the sound was classified as a vesicular sound, only the k-NN classifier was used for evaluation and 210 cumulants were calculated. However, if the sound was classified as a monophonic or polyphonic wheeze, the k-NN and Bayes 1 classifiers were used and 314 cumulants were extracted. Finally, if the sound was classified as a fine or coarse crackle, the k-NN and Bayes 2 classifiers were employed and 328 cumulants were calculated. Thus, in the best case, 210 cumulants have to be extracted from the sound event and, in the worst case, 328 cumulants are required. Since the calculation of cumulants is a simple computational task [19], the implementation of the proposed classifier in an embedded system is feasible.

Comparing the results obtained in this paper with the results discussed in Section 1 is a difficult task, as each study classifies the respiratory sounds in a different way or uses a different data set. For instance, [2,8,10] use fewer classes. The classification accuracies obtained by [2,8] and [10] were 90%, 97.5% and 94.2%, respectively. However, it is important to note that the proposed approach achieved competitive results even using only one feature extraction tool (HOS). Furthermore, our study dealt with more lung sound classes than are usually found in research developed in this area.
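To make the structure of Fig. 8 concrete, the sketch below wires the three nodes together as described above: node 1 (k-NN) separates vesicular, wheeze, and crackle sounds, node 2 ("Bayes 1") splits the wheezes, and node 3 ("Bayes 2") splits the crackles. The class and function names, the string label encoding, and the idea of passing each node its own GA-selected feature indices are our assumptions; the paper's implementation was in MATLAB, and there each node's cumulant subset is chosen by its own GA run.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

class LungSoundTree:
    """Divide-and-conquer tree: node 1 (k-NN) -> vesicular / wheeze / crackle,
    node 2 (Bayes 1) -> monophonic vs. polyphonic wheeze,
    node 3 (Bayes 2) -> fine vs. coarse crackle."""

    def __init__(self, idx1, idx2, idx3):
        # GA-selected cumulant indices for each node (around 223, 166 and 167 in the paper)
        self.idx1, self.idx2, self.idx3 = idx1, idx2, idx3
        self.node1 = KNeighborsClassifier(n_neighbors=5)
        self.node2 = GaussianNB()   # "Bayes 1"
        self.node3 = GaussianNB()   # "Bayes 2"

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # y holds string labels; build the coarse grouping used by node 1
        group = np.where(np.isin(y, ["monophonic", "polyphonic"]), "wheeze",
                         np.where(np.isin(y, ["fine", "coarse"]), "crackle", "vesicular"))
        self.node1.fit(X[:, self.idx1], group)
        wheeze, crackle = group == "wheeze", group == "crackle"
        self.node2.fit(X[wheeze][:, self.idx2], y[wheeze])
        self.node3.fit(X[crackle][:, self.idx3], y[crackle])
        return self

    def predict_one(self, x):
        x = np.asarray(x).reshape(1, -1)
        coarse = self.node1.predict(x[:, self.idx1])[0]
        if coarse == "wheeze":
            return self.node2.predict(x[:, self.idx2])[0]
        if coarse == "crackle":
            return self.node3.predict(x[:, self.idx3])[0]
        return "vesicular"
```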

4. Conclusion

In this paper, a divide-and-conquer approach to classify lung sounds was developed.


HOS-based features (second-, third- and fourth-order cumulants) were used in combination with feature selection and classification methods. The selected cumulants were shown to be effective in representing lung sounds.

Two different classification methods were implemented to build a tree-based system to classify lung sounds: the k-Nearest Neighbors and the Naive Bayes classifiers. The k-NN classifier is simple, effective, intuitive and robust in noisy environments. Likewise, the Naive Bayes classifier is also simple and effective; it can be trained quickly and it classifies quickly. HOS were used for feature extraction and a GA was used to reduce dimensionality. GA individuals were evaluated using a fitness function that fulfilled its purpose of maximizing classification performance while reducing the feature space. The configuration of the GA parameters was extremely important for achieving satisfactory results, since poorly chosen parameters could negatively influence the results.

For future work, we expect to implement lung sound pattern recognition on a portable device and to classify even more types of lung sounds. Moreover, other feature extraction techniques and classification methods can be studied to improve classification performance.

Conflict of interest

The authors declare no conflict of interest.

Acknowledgments

The authors would like to thank the Brazilian agencies FAPEMIG, CNPq and CAPES for financial support, and Dr. Evelyn R. Nimmo for editing the manuscript.

References

[1] S. Lehrer, Understanding Lung Sounds, 3rd ed., WB Saunders, Philadelphia, 2002.
[2] S. İçer, S. Gengeç, Classification and analysis of non-stationary characteristics of crackle and rhonchus lung adventitious sounds, Dig. Signal Process. 28 (2014) 18–27.
[3] American Thoracic Society, Updated nomenclature for membership relation, ATS News 3 (1977) 5–6.
[4] A.R.A. Sovijärvi, F. Dalmasso, J. Vanderschoot, L.P. Malmberg, G. Righini, S.A.T. Stoneman, Definition of terms for applications of respiratory sounds, Eur. Respir. Rev. 10 (2000) 597–610.
[5] G. Charbonneau, E. Ademovic, B.M.G. Cheetham, L.P. Malmberg, J. Vanderschoot, A.R.A. Sovijärvi, Basic techniques for respiratory sound analysis, Eur. Respir. Rev. 10 (2000) 625–635.
[6] R. Palaniappan, K. Sundaraj, N.U. Ahamed, Machine learning in lung sound analysis: a systematic review, Biocybern. Biomed. Eng. 33 (3) (2013) 129–135.
[7] J.L.M. Amaral, A.J. Lopes, A.C.D. Faria, P.L. Melo, Machine learning algorithms and forced oscillation measurements to categorise the airway obstruction severity in chronic obstructive pulmonary disease, Comput. Methods Programs Biomed. 118 (2) (2015) 186–197.
[8] G. Serbes, C.O. Sakar, Y.P. Kahya, N. Aydin, Pulmonary crackle detection using time–frequency and time–scale analysis, Dig. Signal Process. 23 (3) (2013) 1012–1021.
[9] F. Jin, F. Sattar, D.Y.T. Goh, New approaches for spectro-temporal feature extraction with applications to respiratory sound classification, Neurocomputing 123 (10) (2014) 362–371.
[10] M. Bahoura, Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes, Comput. Biol. Med. 39 (9) (2009) 824–843.
[11] B.A. Reyes, S. Charleston-Villalobos, R. González-Camarena, T. Aljama-Corrales, Assessment of time–frequency representation techniques for thoracic sounds analysis, Comput. Methods Programs Biomed. 114 (3) (2014) 276–290.
[12] L.J. Hadjileontiadis, S.M. Panas, Higher-order statistics: a robust vehicle for diagnostic assessment and characterisation of lung sounds, Technol. Health Care J. 5 (5) (1997) 359–374.
[13] J.M. Mendel, Tutorial on higher-order statistics (spectra) in signal processing and system theory: theoretical results and some applications, Proc. IEEE 79 (1991) 278–305.
[14] J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, 1975.
[15] E. Fix, J.L. Hodges, Discriminatory Analysis, Nonparametric Discrimination: Consistency Properties, USAF School of Aviation Medicine, Randolph Field, TX, 1951.
[16] T. Mitchell, Machine Learning, 1st ed., McGraw-Hill Education, New York, 1997.
[17] S. Theodoridis, K. Koutroumbas, Pattern Recognition, 4th ed., Academic Press, USA, 2009.
[18] A.R.A. Sovijärvi, L.P. Malmberg, G. Charbonneau, J. Vanderschoot, F. Dalmasso, C. Sacco, M. Rossi, J.E. Earis, Characteristics of breath sounds and adventitious respiratory sounds, Eur. Respir. Rev. 10 (2000) 591–596.
[19] M.V. Ribeiro, C.A.G. Marques, C.A. Duque, A.S. Cerqueira, J.L.R. Pereira, Detection of disturbances in voltage signals for power quality analysis using HOS, EURASIP J. Adv. Signal Process. (2007).
[20] V. Garcia, E. Debreuve, F. Nielsen, M. Barlaud, K-nearest neighbor search: fast GPU-based implementations and application to high-dimensional feature matching, in: 17th IEEE International Conference on Image Processing (ICIP), 26–29 Sept., 2010, pp. 3757–3760.