An ensemble system for automatic sleep stage classification using single channel EEG signal




Computers in Biology and Medicine 42 (2012) 1186–1195



B. Koley a,*, D. Dey b

a Department of Instrumentation Engineering, Dr. B.C. Roy Engineering College, Durgapur 713206, West Bengal, India
b Department of Electrical Engineering, Jadavpur University, Kolkata, West Bengal, India


Abstract

Article history: Received 4 March 2012; accepted 30 September 2012

The present work aims at the automatic identification of various sleep stages, namely sleep stages 1 and 2, slow wave sleep (sleep stages 3 and 4), REM sleep and wakefulness, from the single channel EEG signal. Automatic scoring of sleep stages was performed with the help of a pattern recognition technique involving feature extraction, selection and finally classification. A total of 39 features from the time domain, the frequency domain and non-linear analysis were extracted. After feature extraction, the SVM based recursive feature elimination (RFE) technique was used to find the optimum feature subset that provides significant classification performance for the five sleep stages with a reduced number of features. Finally, for classification, binary SVMs were combined with the one-against-all (OAA) strategy. Careful extraction and selection of the optimum feature subset helped to reduce the classification error to 8.9% for the training dataset, validated by the k-fold cross-validation (CV) technique, and to 10.61% for the independent testing dataset. The agreement of the estimated sleep stages with those obtained by expert scoring was 0.877 over all sleep stages of the training dataset and 0.8572 for the independent testing dataset. The proposed ensemble SVM-based method could be used as an efficient and cost-effective method for sleep staging, with the advantage of reducing the stress and burden imposed on subjects. © 2012 Elsevier Ltd. All rights reserved.

Keywords: EEG; Recursive feature elimination; Sleep staging; Support vector machine

1. Introduction
Sleep is a circadian rhythm which follows its own program, with a sequence of sleep stages and autonomic nervous system functions related to them [1]. The performance of many basic activities of normal life, such as learning, memorization, productivity and concentration, is closely connected to good sleep quality [2–4]. Sustained sleep deprivation can increase the risk of hypertension, cardiovascular pathologies, metabolic dysregulation, obesity and diabetes, and leads to a decrease in the efficiency of the immune system [3]. Furthermore, sleep evaluation is important for the diagnosis of sleep disorders. The gold standard method for sleep evaluation is polysomnography (PSG), which requires the recording of many physiological signals such as EEG, ECG, electromyography (EMG), electrooculogram (EOG), pulse oximetry and respiration [4]. Through visual observation of the PSG, medical specialists evaluate an individual's sleep by assigning different sleep stages: wakefulness (WA), rapid eye movement (REM), and non-rapid eye movement (NREM) sleep including stages 1, 2, 3 and 4, according to the


* Corresponding author. Tel.: +91 9434959946. E-mail address: [email protected] (B. Koley).

0010-4825/$ - see front matter © 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.compbiomed.2012.09.012

Rechtschaffen and Kales (R–K) sleep staging criteria [4]. Sleep stages 3 and 4 are often merged together and termed slow wave sleep (SWS). The outcome of this scoring is a hypnogram, which represents the temporal profile of sleep stage evolution. However, PSG is an expensive, time-consuming and labor-intensive procedure. Moreover, it may be uncomfortable for the patients, as several adhesive electrodes and wires are attached to them to acquire physiological signals during sleep. Thus, the development of a simple and portable automatic sleep scoring system could be of great help. In the literature, several EEG based automatic sleep staging methods have been published, including amplitude analysis [5,6], frequency domain analysis [7–10] and period analysis of EEG signals [11–15]. Hjorth parameters [16] and Haar functions [17] are extensively used for sleep scoring. Heuristic methods for sleep staging are reported in many studies [18–20]. Moreover, several newer techniques such as the wavelet transform [21], fractal analysis [22], detrended fluctuation analysis [23] and the correlation dimension [24] have been explored in sleep studies. Non-linear analysis of EEG has been demonstrated to be a powerful tool for automatic sleep staging [25,26]. Multivariate pattern analysis techniques have also been applied in order to deal with several EEG measurements simultaneously [27–29]. Various machine learning based methods, i.e., quadratic discriminant analysis [29], Bayesian approach based methods


[30], neural networks [31,32], genetic fuzzy clustering [33] and k-means clustering [34], are also in use for sleep stage scoring. Although previous studies on sleep staging techniques yielded classification accuracies ranging between 85% and 93% [19], they suffer from database bias and low generalization. Moreover, in some studies the classification accuracy saturates around 80% even when proper linear and non-linear features are selected and combined [32].

The aim of the present study is to develop an automatic sleep staging system based on single-channel EEG, so that it can be realized in a portable device suitable for home environments and clinical care applications. In the present scheme, the EEG signal is acquired from the C4-A1 channel of the PSG machine and buffered for a duration of 30 s. Thereafter, several features in the time domain as well as in the frequency domain are extracted. Moreover, non-linear features are also extracted from the buffered signal, and all of these features are given as input to already trained support vector machine (SVM) based classifiers, to classify the corresponding signal into one of the five sleep stages, i.e., wake (WA), rapid-eye-movement (REM), S1, S2 and SWS (slow wave sleep, the combination of NREM sleep stages 3 and 4). In the present work, the use of SVM is three-fold: first, for the extraction of features using a margin based criterion, mainly for the non-linear features; second, for the selection of the optimum subset of features from the initially collected 39 features using the SVM based recursive feature elimination (SVM-RFE) technique; and third, for the classification of the different sleep stages into five classes using the one-against-all (OAA) strategy. The k-fold cross-validation technique was used to validate the classifier performance, and finally the system was tested with entirely unknown subjects' records.
The classification accuracy shows that the proposed algorithm can be used as an additional tool to assist the expert. This paper is organized as follows. The details of the subjects and the methodology are discussed in Sections 2 and 3 respectively. The obtained results are described in Section 4, where the results of automatic detection of sleep stages are compared with the "sleep profile" determined visually by sleep experts. In Section 5, the discussion of various observations and a brief comparison with other related works are presented.

2. Materials
Among the subjects referred by different physicians for an overnight sleep study, only the 28 subjects who gave their consent were included in the study. The physicians suspected the subjects of having sleep apnea on the basis of several symptoms noticed during discussions with them. The subjects were known to have one or more conditions such as diabetes, bronchial asthma, hypertension, hyperthyroidism and obesity, and were under treatment for their respective illnesses. The group of subjects can also be characterized as smoker/non-smoker, alcoholic/non-alcoholic and day worker/night-shift worker. The subjects' ages ranged from 35 to 56 years and their weights varied between 89 and 152 kg. All the recordings were performed at the Center for Sleep Disorder Diagnosis (CSDD) located in West Bengal, India, following clinical ethics. The PSG system used for the overnight recordings of the physiological signals was an Alice LE, part no. 1002287, by Philips Respironics. Using the PSG, sleep was scored for every 30 s epoch by two independent sleep experts according to the R–K criteria [4]. In this study, only the data of the C4-A1 EEG channel of the PSG system were used for the development of the single-channel sleep staging system. The sampling frequency of the EEG signal was 250 Hz. In this work, S3 and S4 were combined into one stage, SWS (slow wave sleep); thus each epoch was classified into one of the


Table 1
Distribution of sleep stages.

Records        Subjects   WA     REM   S1     S2     SWS   Total
Training set   16         2311   634   1486   3670   706   8807
Testing set    12         1729   465   1117   2837   586   6734

five sleep stages: WA, REM, S1, S2, and SWS. The distribution of the sleep epochs belonging to the 28 subjects is shown in Table 1. Among the 28 subjects, the recordings of 16 randomly chosen subjects were used as the training set, and the remaining 12 subjects' recordings were kept entirely separate for the final testing of the proposed model. An apnea-hypopnea index (AHI) ≥ 5 events per hour (e/h) from PSG was considered a positive diagnosis of sleep apnea. A positive diagnosis of sleep apnea was confirmed in 13 subjects, with an average AHI of 25.6 ± 12.6 e/h. The remaining 15 subjects composed the negative sleep apnea group, with an average AHI of 3.1 ± 1.6 e/h.
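The subject-wise hold-out described above (whole recordings go to either the training or the testing set, never both, which avoids leakage between sets) can be sketched as follows. This is a minimal illustration, not the authors' code; the subject IDs, epoch counts and RNG seed are toy values.

```python
import numpy as np

def split_by_subject(epochs_by_subject, n_train=16, seed=0):
    """Split epoch data subject-wise: every epoch of a given subject
    lands entirely in the training set or entirely in the testing set."""
    rng = np.random.default_rng(seed)
    subjects = sorted(epochs_by_subject)
    order = rng.permutation(len(subjects))
    train_ids = [subjects[i] for i in order[:n_train]]
    test_ids = [subjects[i] for i in order[n_train:]]
    train = [e for s in train_ids for e in epochs_by_subject[s]]
    test = [e for s in test_ids for e in epochs_by_subject[s]]
    return train_ids, test_ids, train, test

# toy data: 28 subjects with three (epoch, label) pairs each
data = {f"S{i:02d}": [(np.zeros(7500), "WA")] * 3 for i in range(28)}
train_ids, test_ids, train, test = split_by_subject(data)
```

With 28 subjects and n_train=16, this reproduces the 16/12 split of Table 1 at the subject level.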

3. Methods
3.1. Signal processing
The common physiological artifacts present in EEG recordings are muscle artifact, ECG, eye movements and eye blinking. These artifacts mix with the brain signal, making interpretation of the EEG difficult [35]. Using a simple out-of-bound test (with a ±120 μV threshold), the EEG data were processed to reject epochs that were grossly contaminated by muscle and/or eye movement artifacts. Finally, all recordings were digitally filtered with a band pass filter with cutoff frequencies at 0.2 and 40 Hz in order to minimize residual artifacts present in the EEG signal.

3.2. Feature extraction
The information contained in the EEG signal was summarized into a reduced set of measurements or features. In this study, most of the previously reported features were considered [5–29]; these features can be classified into three main categories: (A) time domain based features, (B) frequency domain based features and (C) non-linear features. Other features, such as time–frequency domain features based on wavelets and the STFT, are not considered here, as time and frequency domain based features are considered separately. The following sections briefly describe the different features used in the study. Each of these analyses, as summarized in Table 2, was performed on the time series amplitudes of the filtered EEG epochs of 30 s duration.

3.2.1. Time domain based features
Brief descriptions of the different time domain based measures used in this work are as follows:

3.2.1.1. Statistical measures. Among the various statistical properties of the EEG time series, the first- to fourth-order moments have already been used in preceding studies [28,29,32]. The variance of the EEG (M2) was found suitable for discriminating REM sleep from S2 and SWS [29].
These first- to fourth-order statistical parameters i.e., mean (M1), variance (M2), skewness (M3) and kurtosis (M4) were computed for the EEG epoch X(n) to quantify the central tendency, degree of dispersion, asymmetry and peakedness, respectively. These measurements were calculated using the formulas given in [36].
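The preprocessing of Section 3.1 and the moments M1–M4 can be sketched as follows. The band edges (0.2–40 Hz), the ±120 μV bound and the 250 Hz sampling rate are from the text; the filter order and helper names are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling frequency (Hz), as stated in Section 2

def preprocess(epoch, lo=0.2, hi=40.0, amp_limit=120.0):
    """Return the band-pass-filtered epoch (in uV), or None when the raw
    amplitude fails the out-of-bound test (gross muscle/eye artifact)."""
    if np.max(np.abs(epoch)) > amp_limit:
        return None
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, epoch)

def moments(x):
    """First- to fourth-order moments M1..M4 of one 30 s epoch:
    mean, variance, skewness and kurtosis."""
    m1 = x.mean()
    m2 = x.var()
    sd = np.sqrt(m2)
    m3 = np.mean((x - m1) ** 3) / sd ** 3   # skewness (asymmetry)
    m4 = np.mean((x - m1) ** 4) / sd ** 4   # kurtosis (peakedness)
    return m1, m2, m3, m4
```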



Table 2
List of extracted features.

Label                    Description                                                                 Type
F1, F2, F3 and F4        First- to fourth-order moments of the time series                           Time domain based features
F5                       Zero-crossing rate
F6, F7 and F8            Activity, mobility and complexity (Hjorth parameters)
F9                       75th percentile
F10, F11, F12 and F13    Absolute spectral powers in the δ, θ, α and β bands respectively            Frequency domain based features
F14, F15, F16 and F17    Relative spectral powers in the δ, θ, α and β bands respectively
F18 to F24               Ratios δ/α, δ/β, δ/θ, θ/α, θ/β, α/β and (δ+θ)/(α+β)
F25                      Spectral edge frequency
F26                      Peak power frequency
F27                      Median power frequency
F28                      Spectral entropy
F29, F30, F31 and F32    Spectral mean, spectral variance, spectral skewness and spectral kurtosis
F33 and F34              DFA scaling exponents αS and αL                                             Non-linear features
F35                      Correlation dimension D2
F36                      Approximate entropy
F37                      Lempel-Ziv complexity
F38                      Largest Lyapunov exponent
F39                      Higuchi fractal dimension

3.2.1.2. Zero crossing rate. It counts the number of times the EEG signal crosses the reference line obtained from the mean value. This simple measure is quite robust in the characterization of sleep spindles and has been demonstrated in the analysis of EEG in sleep studies [29,37]. In the present work, the zero crossing rate was obtained as the number of times the EEG signal crosses its moving average, computed over a 200-point window shifted by one sample at a time. The window length of 200 was determined on the basis of maximum margin, as discussed in Section 3.4.1.

3.2.1.3. Hjorth parameters. Among the different time domain based features, the Hjorth parameters (i.e., activity, mobility and complexity) are quite popular for the analysis of the EEG signal and have also been used in sleep stage classification [38,39]. The calculations of these parameters are taken from [39].

3.2.1.4. 75th percentile. This defines the value below which 75% of the random variable values are located. Preceding studies have already used this parameter for the identification of different sleep stages [32,38].

3.2.2. Frequency domain based features
It is well known that the lower frequencies of EEG become prominent with increasing depth of sleep [10]. Spectral measures of EEG for monitoring sleep cycles are extensively reported in the literature [7–10]. The calculation of all spectral measures in the present work is based on the Welch averaged periodogram method [40]. First, the method divides the signal into S overlapping segments of length M. Each data segment is then windowed by w(n), periodograms are calculated, and their average is taken. For an EEG epoch x(n) with data segments {x_i(n)}, i = 1, 2, ..., S, the Welch spectrum estimate is given by

\hat{p}_w(f) = \frac{1}{S} \sum_{i=1}^{S} \hat{p}_i(f)

where

\hat{p}_i(f) = \frac{1}{MP} \left| \sum_{n=1}^{M} w(n) \, x_i(n) \, \exp(-j 2\pi f n) \right|^2


Here \hat{p}_i(f) is the periodogram estimate of the ith segment, \hat{p}_w(f) is the Welch PSD estimate, and P is the window power normalization, P = \frac{1}{M} \sum_{n=1}^{M} |w(n)|^2. In this work, a 512-sample Hanning window with 50% overlap and 1024-point DFTs were used. From the PSD obtained for each epoch, the following features were extracted.

3.2.2.1. Spectral power in different frequency bands. The absolute power in the four significant frequency bands, delta (δ) from 0.5 to 4 Hz, theta (θ) from 4 to 8 Hz, alpha (α) from 8 to 12 Hz and beta (β) from 13 to 30 Hz, was computed from the obtained PSD of each epoch. From these absolute spectral power values, other derived features such as relative spectral powers and power ratios were obtained. Relative spectral powers (PR): relative spectral powers were computed in the four frequency bands {δ, θ, α, β} by dividing the absolute power in each band by the total spectral power. Relative powers in the respective frequency bands have been used for sleep stage classification in previous studies [28,32]. Power ratios: power ratios were computed between the relative spectral powers in the different frequency bands: (δ/α), (δ/β), (δ/θ), (θ/α), (θ/β), (α/β) and (δ+θ)/(α+β).

3.2.2.2. Spectral edge frequency (SEF95). SEF95 is the frequency below which 95% of the total spectral power is located. The use of this parameter for monitoring sleep cycles is found in [27,28,38].

3.2.2.3. Peak power frequency (PPF) and median frequency (MF). PPF and MF are usually applied for EEG based depth-of-anesthesia estimation [41]. PPF is the frequency at which the highest power occurs, and MF is the frequency at which half the spectral power lies above and half below.

3.2.2.4. Spectral entropy (SE). SE is a disorder quantifier related to the flatness of the PSD. It is computed based on Shannon's entropy [42]:

SE = -\sum_{i} p_i \ln(p_i)

where p_i is the normalized value of the PSD at each spectral component, with a bin width of one spectral unit [43].
This spectral feature was found suitable for separating S2 from SWS with high statistical significance, as also reported in [27].
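The Welch-based spectral features above can be sketched as follows, using SciPy's `welch` with the stated 512-sample Hanning window, 50% overlap and 1024-point DFT. The function and dictionary key names are our illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import welch

FS = 250
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}

def spectral_features(epoch):
    """Band powers, SEF95, PPF and spectral entropy from one 30 s epoch."""
    f, psd = welch(epoch, fs=FS, window="hann", nperseg=512,
                   noverlap=256, nfft=1024)
    df = f[1] - f[0]
    total = psd.sum() * df
    feats = {}
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        feats[f"abs_{name}"] = psd[m].sum() * df          # absolute power
        feats[f"rel_{name}"] = feats[f"abs_{name}"] / total  # relative power
    cum = np.cumsum(psd) / psd.sum()
    feats["sef95"] = f[np.searchsorted(cum, 0.95)]        # spectral edge (95%)
    feats["ppf"] = f[np.argmax(psd)]                      # peak power frequency
    p = psd / psd.sum()
    feats["se"] = -np.sum(p * np.log(p + 1e-12))          # spectral entropy
    return feats
```

Power ratios such as δ/α follow directly from the relative powers in the returned dictionary.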


3.2.2.5. Statistical features. The first- to fourth-order statistical moments in the frequency domain were calculated using the equations given in [39]. These spectral moments characterize the shape of the spectral density of the signal and have been applied to sleep study in another context [36].

3.2.3. Non-linear features
As EEG shows significantly complex behavior with strong non-linear and dynamical properties [26], the effectiveness of different non-linear measures such as the correlation dimension [44], the Lyapunov exponent [45], approximate entropy [46], detrended fluctuation analysis [47], the Higuchi fractal dimension [48] and Lempel-Ziv complexity [49] for sleep staging and EEG characterization has been demonstrated by various researchers in the past [27,28,50]. A brief description of each measure, along with its calculation method, is presented below. For the calculation of non-linear measures such as the correlation dimension (D2), the Lyapunov exponent (LE) and the approximate entropy (ApEn), a trajectory in the phase space needs to be formed, which requires two parameters, the time delay (τ) and the embedding dimension (m), to be fixed a priori. To find the proper value of τ, the average mutual information was used [44]: the optimum time delay was selected as the smallest τ at which the average mutual information assumes a minimum. For finding the appropriate embedding dimension, the false neighbors (FN) method [44] was adopted in the present work.

3.2.3.1. Correlation dimension (D2). The correlation dimension D2 is a statistical measure of dimension in a non-linear dynamical system. The estimation of D2, according to the proposal of Grassberger and Procaccia [51], proceeds by first calculating the correlation integral C(r):

C(r) = \lim_{N \to \infty} \frac{1}{N^2} \sum_{i,j=1,\, i \neq j}^{N} H\left( r - \| X(i) - X(j) \| \right)

where H is the Heaviside step function and r is the radius of the hypersphere in the m-dimensional phase space. X(i) and X(j) are points of the trajectory in the phase space. D2 is then defined as

D_2 = \lim_{r \to 0} \frac{\log C(r)}{\log r}

The value of D2 can be found as the slope of the log C(r) vs. log(r) plot for small values of r, where r is an externally supplied input parameter that must be chosen prior to the calculation. It was observed that D2 decreases with deeper sleep stages, confirming previous studies [27].

3.2.3.2. Largest Lyapunov exponent λmax. The Lyapunov exponent (LE) is a quantitative measure of the divergence of nearby trajectories; a system with a greater magnitude of LE is said to be more unpredictable [44]. The value of the largest Lyapunov exponent depends on the sleep stage [52], and this measure was reported to be more effective than spectral measures in discriminating sleep stages 1 and 2 [27]. Mathematically, LE is calculated for each dimension of the phase space as

\lambda = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \ln \frac{d\left( X(i+1), X(j+1) \right)}{d\left( X(i), X(j) \right)}

where d(X(i),X(j)) represents the initial distance between the nearest neighbors and d(X(iþ1), X(j þ1)) is the distance between X(iþ1) and X(j þ1) which are the next pair of neighbors on their trajectories. Among the whole spectrum of Lyapunov exponents, the largest one was considered for sleep staging. 3.2.3.3. Approximate entropy (ApEn). ApEn, introduced by Pincus [46], is a measure quantifying the unpredictability or randomness of the


signal. The authors in [50] applied this measure to classify different sleep stages and found that its mean value changes significantly with sleep stage. In the calculation of ApEn, two input parameters, m (the embedding dimension, obtained by the false neighbors method) and r (the tolerance width), must be chosen a priori. From the signal x(n), the N − m + 1 vectors X(i) = {x(i), x(i+1), ..., x(i+m−1)} are formed. Then for each i, 1 ≤ i ≤ N − m + 1, the regularity C_r^m(i) is expressed as

C_r^m(i) = \frac{N_m(i)}{N - m + 1}

where N_m(i) denotes the number of patterns X(j) of length m within a distance less than or equal to r from X(i). Next, the quantity \varphi^m(r) is calculated as

\varphi^m(r) = \frac{1}{N - m + 1} \sum_{i=1}^{N - m + 1} \ln C_r^m(i)

and finally

ApEn(m, r, N) = \varphi^m(r) - \varphi^{m+1}(r)

3.2.3.4. Detrended fluctuation analysis (DFA). DFA is a widely used technique for the detection of long-range correlations and fluctuations in a noisy and non-stationary time series [47]. Previous studies reported that the scaling exponents of EEG obtained through DFA increase gradually as sleep goes to deeper stages [23]. In the computation of DFA for a discrete time series x(n), n = 1 to N (N = 7500 for an epoch), first the integrated time series

y(n) = \sum_{t=1}^{n} \left\{ x(t) - \bar{x} \right\}

is computed, where \bar{x} is the average value of x(n). Then y(n) is divided into B equal windows, each containing k = integer part of (N/B) points. If y_b(n) represents the straight line obtained by a least-squares fit in the bth window (b = 1, ..., B), then the rms value of the locally detrended fluctuation over all windows is

F(k) = \sqrt{ \frac{1}{Bk} \sum_{b=1}^{B} \sum_{n=(b-1)k+1}^{bk} \left[ y(n) - y_b(n) \right]^2 }

The trend of F(k) over the variation of window size k indicates the nature of the fluctuations in the EEG epoch, represented by the scaling exponent α calculated from the slope of log F(k) vs. log k. From this plot for epochs of the various stages, it was observed that there are basically two distinct slopes, one for smaller k values in the range 10–300 and another for larger k values in the range 400–600. For this reason two scaling exponents, αS (for smaller k values) and αL (for larger k values), were obtained. The ranges of k values for the calculation of αS and αL were found to be sensitive and dependent on the nature of the signal. It was also observed that the mean value of αS increases with deeper sleep stages, whereas αL decreases significantly for REM and wake and remains roughly the same in the other cases.

3.2.3.5. Higuchi fractal dimension (HFD). This algorithm is regarded as one of the most stable estimators of fractal dimension and is computationally fast [48]. Previous authors found that the HFD can successfully discriminate between individual stages of sleep and is particularly suitable for distinguishing SWS from the other sleep stages [28]. In this measure, for a given signal x(n), p new time series x_m^p are constructed as follows:

x_m^p = \left\{ x(m), x(m+p), ..., x\left( m + \lfloor (N-m)/p \rfloor \cdot p \right) \right\}, \quad m = 1, 2, ..., p

Here m and p are integers indicating the initial time and the time interval respectively; the maximum value of p was fixed on the basis of the multiclass margin (discussed in Section 3.4.1). Then for each curve x_m^p, the average



length of the m curves is obtained by

L(p) = \frac{1}{p} \sum_{m=1}^{p} \left[ \left( \sum_{i=1}^{\lfloor (N-m)/p \rfloor} \left| x(m+ip) - x(m+(i-1)p) \right| \right) \frac{N-1}{\lfloor (N-m)/p \rfloor \, p} \right]

where (N-1)/(\lfloor (N-m)/p \rfloor \, p) is a normalization factor. From the curve of \log L(p) vs.

log(1/p), the slope of the least-squares linear best fit is the estimate of Higuchi's fractal dimension.

3.2.3.6. Lempel-Ziv complexity (LZC). LZC is a nonparametric measure of complexity, with larger values corresponding to higher-complexity data. It has been applied in the context of wake and sleep stage diagnosis during anesthesia [49]. To compute the LZC, the EEG signal x(n) is transformed into a sequence of symbols (zero and one) by comparing each sample with a predefined threshold Td. The binary sequence is then scanned from left to right, and the complexity counter c(N) is increased by one unit every time a new subsequence of consecutive characters is encountered. Finally, the normalized complexity is defined as

C(N) = \frac{c(N)}{N / \log_2 N}

In this study, the threshold Td was determined on the basis of maximum margin (discussed in Section 3.4.1) rather than using the median value. The non-linear parameters ApEn and LZC were found to be effective in distinguishing between the two hardly distinguishable stages: S1 and REM.

3.3. Theory of SVM and SVM-RFE technique
The motivation behind using SVM as a classifier is that many recent works on classification in high-dimensional feature spaces have pointed out the superiority of SVM classifiers over traditional statistical and neural classifiers [53,54]. SVM minimizes both structural and empirical risk, leading to better generalization on new data even with a limited training dataset. In the OAA strategy, each classifier was trained to find the optimum separating hyper-plane (OSH) by the principle of margin maximization between the feature vectors of a particular class c_i and the feature vectors of all other classes C = {c_1, c_2, ..., c_j, ..., c_C}, j ≠ i, in the feature/kernel space. The mathematical background for finding the separating hyper-plane is as follows. Let each of the K d-dimensional training feature vectors w_k ∈ {w_1, w_2, ..., w_K} be mapped to a higher-dimensional space by Θ(·), with the help of a kernel function K(w_k, v), and let a target value y_k ∈ {−1, +1} be assigned: +1 if the feature vector belongs to the particular class and −1 otherwise. Then, from the K training feature vectors, one needs to iteratively find the weight vector w and bias b such that y_k (w · Θ(w_k) + b) − 1 > 0 for k = 1, 2, ..., K. With the help of the training examples, the optimal separating hyper-plane (OSH), i.e., the decision surface, is obtained by minimizing the cost function

R(w, \xi) = \frac{1}{2} \| w \|^2 + \hat{C} \sum_{k=1}^{K} \xi_k \quad (1)

To overcome the problem of bias of the OSH when imbalanced training data are presented, cost-based learning strategies are implemented. The total misclassification cost \hat{C} \sum_{k=1}^{K} \xi_k is replaced with two terms, one for each class:

\hat{C} \sum_{k=1}^{K} \xi_k \;\rightarrow\; C_{+} \sum_{k \in y_{+}} \xi_k + C_{-} \sum_{k \in y_{-}} \xi_k

where C_+ and C_− are the soft-margin constants for the two classes y_+ (y_k = +1) and y_− (y_k = −1), subject to

y_k \left( w \cdot \Theta(w_k) + b \right) \geq 1 - \xi_k, \quad \xi_k \geq 0 \quad \text{for } k = 1, 2, ..., K \quad (2)

In order to give equal weight to each class, the penalty parameter for a particular class is adjusted according to the number of examples present in the respective training dataset. Eq. (1) is the combination of margin maximization and error minimization, where ξ_k is called the slack variable, allowing for examples that violate y_k (w · Θ(w_k) + b) ≥ 1.

\sum_k \xi_k is an upper bound on the number of training errors. The parameter \hat{C} is chosen by the user; a larger \hat{C} corresponds to assigning a higher penalty to errors. The quantity 1/\|w\| is called the margin, so maximizing the margin requires minimizing \|w\|. Since \|w\|^2 is convex, minimizing it under the linear constraints (2) can be achieved with Lagrange multipliers. Now, if \hat{w} and \hat{b} are the parameters of the obtained OSH, then the decision on whether a feature vector corresponds to a particular class is based on the value of the discriminant function D(w_k), where

D(w_k) = \hat{w} \cdot \Theta(w_k) + \hat{b} \quad (3)

After maximization, one can calculate

W^2(\Lambda^*) = \sum_{k,j=1}^{K} \Lambda_k^* \Lambda_j^* \, y_k y_j \, K(w_k, w_j)

which is a measure of the predictive ability of the SVM classifier [55], where \Lambda^* are the optimized Lagrange multipliers. This measure is inversely proportional to the margin, i.e., the separation between the two classes. The recursive feature elimination (RFE) technique [56] is based on the value of this margin for different sets of features. Feature selection is performed through a sequential backward elimination procedure governed by the margin maximization principle [56]. The idea of RFE is to start by training the SVM classifier with all the features and then successively eliminate, using some mathematical or heuristic rule, the features that least decrease the margin. This process of feature elimination is repeated until some stopping criterion is met. In this work, for a particular feature q to be removed from the feature subspace S (q ∈ S) during a certain iteration, the ranking score r_q was calculated as

r_q = \left| W^2(\Lambda^*) - W_{(q)}^2(\Lambda^*) \right|

where W_{(q)}^2 denotes the predictive ability of the SVM after removal of the qth feature from the feature vector. The feature q with the smallest ranking score was then eliminated from the feature subspace S.

3.4. Application of SVM
The use of SVM in the present work is three-fold.
The first is the extraction of suitable features from non-linear analysis of the time series data, using the maximum margin as the criterion. The second is the selection of the optimum subset from the 39 extracted features using the SVM-RFE technique. The third is the classification of the features into five classes using an ensemble of binary SVM classifiers in the OAA strategy.

3.4.1. Margin based selection of input parameters
It is well known that the numerical estimates of non-linear measures for EEG signals depend on the choice of input parameters [57], which must be chosen prior to the calculation. Previous researchers have shown different ways to heuristically obtain appropriate values of these input parameters for a particular time series. Following those procedures, a real challenge was faced, as different input parameter values were obtained for EEG time series of different stages, and sometimes for different subjects. For example, the numerical values of the scaling exponents αS and αL of DFA are altered by the choice of window sizes, as the appropriate values of these input parameters depend on the nature of the time series. Similarly, the numerical estimates of non-linear measures such as LE, D2, LZC, HFD and ApEn also depend on the corresponding input parameters as


Table 3
Parameters optimized according to margin.

Non-linear measure   Input parameters        Optimum value
DFA                  Window size for αS      135–430
                     Window size for αL      560–800
LE                   Threshold for FN        1.34
D2                   Radius (r)              12–32
LZC                  Threshold (Td)          1.24 × mean of epoch
HFD                  kmax                    12
ApEn                 Radius (r)              r = 1.8 × SD

SD, standard deviation.

listed in Table 3. Therefore, a reasonable interpretation of non-linear EEG measures should be based on statistically significant effects for different diagnostic groups, and never on the absolute values of these measures. In this work, a margin-based criterion was adopted for the calculation of proper input parameters using SVM. For this purpose, a small training dataset was drawn from the training data of the 16 subjects: 10 randomly chosen epochs were collected from each stage of each subject (drawing more from other subjects when a stage was absent), with care taken to keep the training data balanced for each class. To find the suitable input parameter value of any measure, the minimum (PNmin) and maximum (PNmax) values of the corresponding input parameter were first decided, and then a search was run for the parameter value of the corresponding non-linear measure that yields the maximum margin for the training dataset. The process is summarized in the following algorithm; the input parameters, along with the suitable values obtained for the individual measures, are presented in Table 3.

start
step 1: initialize i = 1, and set the input parameter of measure N, PN(i) = PNmin
step 2: apply measure N to calculate the corresponding numerical values for the training dataset, using parameter PN(i)
step 3: train the SVM with the obtained numerical values, and obtain the margin MN(i)
step 4: set PN(i+1) = PN(i) + η, where η is a small suitable positive increment
step 5: if PN(i+1) < PNmax then increment i = i + 1 and go to step 2; else find the PN(i) for which MN(i) is maximum
end
Repeat the sequence for all the non-linear measures.
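The search loop above can be sketched with a linear SVM, for which the margin is 1/||w||. This is a toy illustration assuming scikit-learn; the feature function, parameter range and step are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.svm import SVC

def margin_of(clf):
    """Margin of a fitted linear SVC: 1 / ||w||."""
    return 1.0 / np.linalg.norm(clf.coef_)

def tune_parameter(feature_fn, epochs, labels, p_min, p_max, step):
    """Scan candidate input-parameter values and keep the one whose
    resulting 1-D feature gives the largest SVM margin (steps 1-5)."""
    best_p, best_margin = None, -np.inf
    p = p_min
    while p < p_max:
        # step 2: compute the measure for every epoch with parameter p
        X = np.array([[feature_fn(e, p)] for e in epochs])
        # step 3: train the SVM and record the margin
        clf = SVC(kernel="linear", C=1.0).fit(X, labels)
        m = margin_of(clf)
        if m > best_margin:
            best_p, best_margin = p, m
        p += step  # step 4: increment the parameter
    return best_p, best_margin
```

In the paper the scanned parameters are the ones in Table 3 (DFA window sizes, the LZC threshold, kmax, etc.); here `feature_fn` stands in for any such parameterized non-linear measure.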

3.4.2. SVM-RFE based feature selection

Feature selection for classification in high-dimensional spaces can improve generalization ability, reduce classifier complexity, and identify the important discriminating features. In the present work, to avoid redundant features and to achieve a significant detection rate, the SVM-RFE technique was combined with the SVM classifiers. It is a popular feature selection technique, used in the bioinformatics area along with SVMs [56]. For feature ranking, the regularization parameter C of the SVM model and the γ of the RBF kernel function were chosen experimentally by training and testing with all 39 features. After finalizing the SVM model, feature selection with SVM-RFE was performed. Altogether 39 features were applied to the model. In the ith iteration of RFE, the ranking score rq of each of the remaining (39 − i) features was obtained; the feature with the lowest rq was then removed and assigned the rank R = 39 − i. The next cycle starts with 39 − (i + 1) features and removes the next least important feature, of rank R = 39 − (i + 1). This process continues until only one feature remains. At the end of the process, an overall ranking of all the features, based on margin and on their importance for classification, was obtained. Then the top-ranked feature alone was used for training and testing, and the average classification error was obtained. In the next cycle, the same process was repeated with the top two ranked features; this cyclic process was repeated for 39 cycles, so that rank-wise inclusion of one feature at a time yields classification errors for all the cumulative feature subsets. Fig. 1 shows the variation of the margin for the WA and SWS classes w.r.t. the number of features used in training, and the change in average classification error.

3.4.3. SVM based ensemble classification model

In the present work, an ensemble (parallel combination) of five binary SVM classifiers was used to classify the different sleep stages. Among the several techniques for solving a multiclass problem with binary SVM classifiers, one-against-all (OAA) is quite popular [54]. Thus, in the classification phase, the OAA strategy with the "winner-takes-all" rule [54] was used to decide which class label is assigned to a particular feature vector: the winning class is the one corresponding to the SVM classifier of the ensemble with the highest output (discriminant function value). A block diagram of the proposed sleep stage identification system is shown in Fig. 2.

3.4.3.1. Training, testing and SVM parameters optimization. For the training and testing process, features were drawn according to a k-fold (k = 4) cross-validation (CV) scheme [58]. In this scheme,
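The recursive elimination and the subsequent rank-wise evaluation can be sketched as below. This is an illustrative approximation: scikit-learn's `RFE` with a linear SVM is used for the ranking (the paper ranks via the margin of its finalized RBF model), and `cumulative_errors` re-trains on the top-k ranked features for k = 1…n, as in Table 4.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def rank_features(X, y):
    """SVM-RFE: recursively eliminate the feature with the smallest
    importance until one remains; returns a rank per feature (1 = best)."""
    rfe = RFE(estimator=SVC(kernel="linear", C=1.0),
              n_features_to_select=1, step=1)
    rfe.fit(X, y)
    return rfe.ranking_          # rank n = first feature removed

def cumulative_errors(X, y, ranking, clf_factory):
    """Re-train with the top-k ranked features for k = 1..n and record
    the (training) classification error of each cumulative subset."""
    order = np.argsort(ranking)  # feature indices, best first
    errors = []
    for k in range(1, X.shape[1] + 1):
        clf = clf_factory().fit(X[:, order[:k]], y)
        errors.append(1.0 - clf.score(X[:, order[:k]], y))
    return errors
```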

Fig. 1. Normalized values (%) vs. number of features retained for (A) average classification error, (B) margin for the WA class against the remaining classes and (C) margin for the SWS class against the remaining classes.

Fig. 2. Block diagram of the proposed system for identification of sleep stages.
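The one-against-all ensemble with the winner-takes-all rule can be sketched as follows; a minimal illustration, not the authors' code. One binary RBF-kernel SVM is trained per stage against the rest, and the stage whose classifier gives the largest discriminant-function value wins. The `C = 16` and `gamma = 0.25` defaults are the grid-search optima reported later in Section 4.2.

```python
import numpy as np
from sklearn.svm import SVC

STAGES = ["WA", "REM", "S1", "S2", "SWS"]

class OAAEnsemble:
    """One-against-all ensemble of five binary SVMs; the winning class is
    the one whose classifier outputs the highest discriminant value."""
    def __init__(self, C=16.0, gamma=0.25):
        self.models = {s: SVC(kernel="rbf", C=C, gamma=gamma) for s in STAGES}

    def fit(self, X, y):
        y = np.asarray(y)
        for s, clf in self.models.items():
            clf.fit(X, (y == s).astype(int))   # stage s vs. all other stages
        return self

    def predict(self, X):
        # decision_function = signed distance to each stage's hyperplane
        scores = np.column_stack(
            [self.models[s].decision_function(X) for s in STAGES])
        return [STAGES[i] for i in np.argmax(scores, axis=1)]  # winner-takes-all
```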


the whole training dataset was divided subject-wise into k subsets, with one subset used for testing and the remaining k − 1 subsets used to train and construct the SVM decision surface. This was repeated for the other subsets so that every subset was used as the testing sample, and the average classification error over the entire feature vector set was obtained. After the repetitive cycles of training and testing, the classification performance of the individual runs was averaged and presented for performance analysis. Four measures, namely sensitivity (SE), specificity (SP), predictivity (PR) and accuracy (AC), calculated from the class-wise false positive (FP), false negative (FN), true positive (TP) and true negative (TN) counts, were used to assess the performance, using the equations given in [59]. The receiver operating characteristic (ROC) curve was also analyzed to check for bias in the margin [60]. Among these measures, the sensitivity parameter was used to optimize the SVM model. For optimization, the kernel function needs to be chosen a priori. In this study, the Gaussian RBF kernel of the form K(w_k, v) = exp(−γ‖w_k − v‖²) was found to perform similarly to or better than polynomial and linear kernels. For the selection of a proper C for the SVM model and γ for the RBF kernel function, the values of C and γ were varied iteratively through a grid-search process [61]; in each iteration, the average classification performance was obtained by the CV technique.
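The four per-class measures follow the standard definitions from the class-wise confusion counts; a short helper (the formulas are the conventional ones, consistent with the values in Tables 5–7):

```python
def class_metrics(tp, tn, fp, fn):
    """Per-class sensitivity, specificity, predictivity and accuracy
    computed from the class-wise confusion counts."""
    se = tp / (tp + fn)                   # sensitivity (recall)
    sp = tn / (tn + fp)                   # specificity
    pr = tp / (tp + fp)                   # predictivity (positive predictive value)
    ac = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    return se, sp, pr, ac
```

As a check, the WA row of Table 5 (TP = 2217, TN = 6454, FP = 42, FN = 94) reproduces SE = 0.959, SP = 0.994, PR = 0.981 and AC = 0.985.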

4. Results and analysis

The present section summarizes the results obtained at the various stages of the work, starting with feature selection.

4.1. Feature selection

The values of average classification error w.r.t. rank-wise inclusion of features are shown in Table 4. From the second and third columns of Table 4, it can be noted that including the second-ranked feature F38 along with F19, the top-ranked feature, decreases the classification error from 36.7% to 32.9%. With further combinations involving the top 21 features, the classification error could be decreased significantly, to 11.5%. Adding more features, however, did not improve the performance substantially. It can therefore be concluded that with the combination of the top 21 ranked features, sleep stages can be identified with reasonable accuracy. Tests were also conducted to examine the performance of the SVM for different kernel functions (polynomial, linear and RBF) and different regularization parameters.

4.2. Analysis of performance and computation sensitive parameters

The optimum values of the SVM parameters C and γ were chosen through a grid search, which involves a lot of computation; thus, a coarse long-range grid search [61] was performed initially, followed by a fine grid search over a limited range. The variation of the sensitivity of the proposed system during the limited-range fine grid search is shown in Fig. 3. All the classifiers were trained and tested using the CV technique, as mentioned in the earlier section, with the top 21 features in the feature vector, for the selection of proper C and γ values. The values of C and γ were varied iteratively, and in each iteration the average classification error was obtained by the CV technique. The search was run over the ranges 0.1–100 for C and 0.01–10 for γ. The optimized values were found to be 16 and 0.25 for C and γ respectively, and the classification error with these parameter values was 8.96%.

4.3. SVM based classification

The details of the classification performance obtained using the CV technique for the training dataset are presented in Table 5. From Table 5, it is observed that the SWS stage achieved the highest sensitivity, 98.7%, with a specificity of 98.2%, followed by WA with a sensitivity and specificity of 95.9% and 99.4% respectively. The least recognizable stage was S1, with a sensitivity and specificity of 83.3% and 96.8% respectively; the reason is that S1 is a transitional stage between WA and S2, and most of the misclassified S1 epochs were scored as either REM or WA. This is consistent with previous studies that reported lower inter-rater agreement for S1 [31]. The two stages S1 and REM are hardly distinguishable from the EEG signal analyzed in the frequency domain. The non-linear measures D2 and λmax performed better in discriminating stages S2 and SWS, in agreement with previous studies [27].
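The coarse-to-fine search over C and γ can be sketched with scikit-learn's `GridSearchCV`; a minimal version covering only the coarse pass (a finer grid would then be run around the returned optimum), with the search ranges taken from the text:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def tune_svm(X, y, k=4):
    """Coarse log-spaced grid over C in [0.1, 100] and gamma in [0.01, 10],
    scored by k-fold cross-validation, as in the paper's grid search."""
    grid = {"C": np.logspace(-1, 2, 7),       # 0.1 ... 100
            "gamma": np.logspace(-2, 1, 7)}   # 0.01 ... 10
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=k)
    search.fit(X, y)
    return search.best_params_, search.best_score_
```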
In addition, the kappa coefficient (κ) [62] was calculated to assess the agreement between the classifier model and the human scorer. κ = 1 indicates perfect agreement, while κ = 0 means no agreement. To judge the

Table 4
Feature ranking and cumulative feature subset wise average classification error in %.

Rank  FI    Error    Rank  FI    Error    Rank  FI    Error
1     F19   36.7     11    F5    16.7     21    F6    11.5
2     F38   32.9     12    F21   14.9     22    F11   11.6
3     F17   29.4     13    F23   14.3     23    F13   11.8
4     F28   25.6     14    F37   13.2     24    F30   12.7
5     F2    24.2     15    F25   13.1     25    F10   11.4
6     F29   23.7     16    F14   13.0     26    F15   11.2
7     F35   22.1     17    F16   12.9     27    F27   10.6
8     F39   21.9     18    F9    12.6     28    F7    11.5
9     F18   19.5     19    F33   12.1     29    F8    11.9
10    F36   17.3     20    F36   11.5     30    F12   12.7

FI, feature index. Error in %.

Fig. 3. Optimization of average sensitivity for C and γ.

Table 5
Various parameters for classifier performance evaluation for training dataset.

Sleep stage   FN    FP    TP     TN     SE     SP     PR     AC
WA            94    42    2217   6454   0.959  0.994  0.981  0.985
REM           36    215   598    7958   0.943  0.974  0.736  0.971
S1            249   233   1238   7087   0.833  0.968  0.842  0.945
S2            401   150   3269   4987   0.891  0.971  0.956  0.937
SWS           9     149   696    7953   0.987  0.982  0.824  0.982

Error: 8.958%.


Table 6
Various parameters for classifier performance evaluation on testing dataset of non-apnea subjects.

Sleep stage   FN    FP    TP     TN     SE     SP     PR     AC
WA            101   33    775    2935   0.885  0.989  0.959  0.965
REM           40    114   228    3462   0.851  0.968  0.667  0.960
S1            74    85    580    3105   0.887  0.973  0.872  0.959
S2            128   63    1545   2108   0.923  0.971  0.961  0.950
SWS           23    71    350    3400   0.938  0.980  0.831  0.976

Error: 10.09%.

Table 7
Various parameters for classifier performance evaluation on testing dataset of apnea subjects.

Sleep stage   FN    FP    TP     TN     SE     SP     PR     AC
WA            96    25    757    2012   0.887  0.988  0.968  0.958
REM           33    106   164    2587   0.832  0.961  0.607  0.952
S1            55    72    408    2355   0.881  0.970  0.850  0.956
S2            103   63    1061   1663   0.912  0.963  0.944  0.943
SWS           35    56    178    2621   0.836  0.979  0.761  0.969

Error: 11.14%.

Fig. 4. Comparison of hypnogram and variation of delta-to-beta ratio w.r.t. different sleep stages; (A) hypnogram by the expert, (B) delta-to-beta ratio, (C) hypnogram by the proposed classifier, (D) correctly classified epochs and (E) misclassified epochs.
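The delta-to-beta ratio plotted in Fig. 4(B) can be estimated per epoch from a Welch periodogram; a sketch assuming the conventional band edges (delta 0.5–4 Hz, beta 13–30 Hz), which the paper does not spell out here:

```python
import numpy as np
from scipy.signal import welch

def delta_beta_ratio(epoch, fs, delta=(0.5, 4.0), beta=(13.0, 30.0)):
    """Ratio of delta to beta band power of one EEG epoch, estimated
    from a Welch PSD (band edges are assumed conventional values)."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 4 * int(fs)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f <= hi)
        return pxx[mask].sum()   # relative power; the common bin width cancels

    return band_power(*delta) / band_power(*beta)
```

A delta-dominant epoch (e.g. SWS) yields a ratio well above 1; a beta-dominant epoch yields a ratio below 1.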

performance of the classifier on an entirely new dataset, i.e., the situation the classifier will face when deployed, the optimized classifier was tested with the dataset obtained from 12 subjects. The performance measures obtained for the seven non-apneic subjects are shown in Table 6. The error obtained in this case was 10.09%, which is close to the error of 8.96% obtained with the CV technique. Table 7 shows the classification results for the five sleep apnea subjects; the classification error was found to be 11.14%. The classification performance was better for the non-apneic subjects than for the apneic subjects. This is due to the fact that sleep apnea subjects show disturbed sleep architecture, often with an increased number of awakenings and sleep–wake transitions. The kappa value was estimated as 0.877 for the training dataset, which indicates good agreement. For the non-apneic subjects in the testing dataset, the kappa coefficient was found to be 0.868, and for the apneic subjects it was 0.8461 (average κ = 0.8572). Fig. 4 displays the results of the automatic classification in comparison to the hypnogram of the visual scorers. It can be observed that the dynamics of the hypnogram are almost maintained in the automatically obtained sleep profile, and that the single best feature resembles the hypnogram; however, even this measure leads to mistakes in about 36.7% of the cases, so the use of an optimal subset of the extracted features is justified.
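The kappa agreement reported above between the automatic and expert scorings can be computed, for example, with scikit-learn; the eight epoch labels below are purely illustrative:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative epoch-by-epoch scorings (not data from the paper)
expert = ["WA", "S2", "S2", "REM", "SWS", "S1", "S2", "WA"]
auto   = ["WA", "S2", "S1", "REM", "SWS", "S1", "S2", "WA"]

kappa = cohen_kappa_score(expert, auto)   # 1 = perfect, 0 = chance agreement
```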

5. Discussion

The proposed automatic sleep estimation algorithm is based on a single channel EEG signal. A set of features (initially 39) was extracted from the EEG time series according to three approaches, i.e., time-domain, frequency-domain and non-linear analysis, as found suitable in earlier research [5–29]. The variations of these features were cross-verified against the respective original research works, and their classification performances were also evaluated individually. Some further features reported by other authors were rejected, either on the basis of individual classification performance or due to calculation complexity. A difficulty faced during extraction of suitable features from the non-linear analysis was the selection of proper values for the externally supplied input parameters, which were found to influence the numerical values of the non-linear measures. This problem was solved by calculating the margin over all the classes for a given training dataset. The effectiveness of this method was verified by training and testing the classifier with the non-linear features only: the classification error was 26%, compared to 32% when average values were chosen as the input parameters. An optimum set of only 21 of the 39 features was selected on the basis of performance and the computational power required, as using more features demands more computation and also increases the complexity of the system. Although using more features can benefit generalization ability and improve the margin, it fails to improve classification performance significantly. The proposed algorithm for classifying sleep stages showed higher performance, with an overall accuracy of 96.4%, than the techniques presented in [19,26,33]. The independent test result shows that the optimally designed SVM classifier performs well (κ = 0.8572).
The authors of [26,63] reported classification accuracies of 77% and 61% for automatic sleep staging using only non-linear features of the EEG signal. Previous efforts on automatic sleep staging using data mining methods reported classification accuracies below 86% [27,29,32,38]. However, these previous studies focused only on combinations of different features in trying to improve the classification performance. To the best of our knowledge, the present study is the first attempt at applying the maximum-margin criterion of the SVM to derive the input parameters for non-linear features. An important part of this paper is the identification of the most informative features and their extraction. Finally, the application of the SVM classifier boosts the generalization capability and robustness, giving better exploitation of the discrimination capability of the data between classes and leading to reasonably good classification accuracies compared to the schemes reported in the literature [33,64–66]. In the present study, error is calculated from the total number of misclassified epochs. Earlier single-channel EEG based automated sleep-staging methods achieved sensitivities in the order of 74–88% [33,64–66], whereas the present algorithm achieves an average sensitivity, specificity, accuracy and error of 88.32%, 97.42%, 95.88% and 10.61% respectively with respect to the scoring of the first expert in identifying five sleep stages, excluding the epochs containing movement artifacts. The agreement between expert 1 and expert 2 was found to be 91.6%. All three scoring results were found to agree statistically. Even though the error of the proposed scheme is fairly high and could result in


significant mis-scoring in clinical practice, there remains further scope for improving the results. It is worth mentioning here that the present study includes a variety of records (i.e., male/female, apnea/non-apnea, young/old), contrary to only healthy subjects as considered in [28,33,64,65] or only males as in [34]. In this sense, the present dataset has low bias and better generalization. Another important aspect of the proposed algorithm is that it requires only single channel EEG recordings as input, which makes it suitable for home care due to its portability. For home care, the proposed algorithm can be realized on a personal computer (PC) or a mobile device, which collects the EEG signal of the subject through some communication interface from the EEG recorder. Such portable EEG electrodes are already available in the market with Bluetooth/wireless connectivity. The measured analog EEG signals are converted into digital signals by analog-to-digital converters (ADCs) embedded with the sensor attachment. This type of system experiences little noise interference from non-physiological artifacts, owing to digitization at the measurement site, thus making the scheme suitable for home application. However, in the present study the performance of the scheme was not judged in the home environment, as the EEG data were collected from PSG recordings of the C4-A1 channel in a clinical environment (i.e., CSDD). In the clinical environment, certain precautions are taken which may be lacking in recordings made at home. In the clinic, EEG electrodes are placed after rubbing the skin and applying gel, so that the electrodes make good contact and signal quality improves. The main disadvantages of home recording may be the loss of the investigator's control over the sleeping environment and the inability to check and correct improper electrode connections, with possible loss of signal or noise contamination as a result.
However, with the advent of dry-type electrodes this problem can be avoided [67]. On the other hand, the subject gets a natural sleeping environment at home, which increases the comfort level, and this approach can provide more recording trials at a lower cost. A previous study [68] compared home and laboratory PSG and observed differences in sleep parameters between the two settings: laboratory PSG recordings showed shorter sleep time, a significant increase in stage 1 sleep and a decrease in REM sleep compared to those at home. Hence, such aspects may be included in the scope of future studies.

6. Conclusions

An ensemble of binary SVM classifiers has been used for automatic sleep staging from single channel EEG recordings. Several features from time, frequency and non-linear analyses provide valuable information for characterizing sleep cycles. The non-linear features that showed variability with certain externally supplied input parameters were suitably processed to improve the discriminating information they contain about the different sleep stages. Feature selection by means of the SVM-RFE approach has been shown to be a powerful tool for obtaining an optimum feature set from EEG recordings. The automatically selected optimum feature set significantly improved the classification performance. The results indicate high agreement between manual and automatic scoring. The present technique of using a large number of features along with an ensemble classifier system makes the method highly effective and reliable; on the other hand, it imposes a considerable computational burden, although the steadily decreasing cost and increasing availability of computational power mitigate this drawback. Additional studies on larger databases are required before the algorithm is introduced into clinical practice.

Conflict of interest statement

None declared.

References

[1] W.J. Randerath, B.M. Sanner, V.K. Somers, Sleep Apnea: Current Diagnosis and Treatment, Rochester, MN, 2006.
[2] D.P. White, Sleep apnea, Proc. Am. Thorac. Soc. 3 (2006) 124–128.
[3] T. Young, P.E. Peppard, D.G. Gottlieb, Epidemiology of obstructive sleep apnea, a population health perspective, Am. J. Respir. Crit. Care Med. 165 (2002) 1217–1239.
[4] A. Rechtschaffen, A. Kales, A Manual of Standardized Terminology, Techniques, and Scoring System for Sleep Stages of Human Subjects, UCLA, Brain Research Institute/Brain Information Service, Los Angeles, CA, 1968.
[5] H.W. Agnew Jr., J.C. Parker, W.B. Webb, R.L. Williams, Amplitude measurement of the sleep electroencephalogram, Electroenceph. Clin. Neurophysiol. 22 (1967) 84–86.
[6] R.L. Maulsby, J.D. Frost Jr., M.H. Graham, A simple electronic method for graphing EEG sleep patterns, Electroenceph. Clin. Neurophysiol. 21 (1966) 501–503.
[7] J.R. Knott, F.A. Gibbs, C.E. Henry, Fourier transforms of the electroencephalogram during sleep, J. Exp. Psychol. 31 (1942) 465–477.
[8] L.E. Larsen, D.O. Walter, On automatic methods of sleep staging by EEG spectra, Electroenceph. Clin. Neurophysiol. 28 (1970) 459–467.
[9] A. Lubin, L.C. Johnson, M.T. Austin, Discrimination among states of consciousness using EEG spectra, Agressologie 10 (1969) 593–600.
[10] D.J. Hord, L.C. Johnson, A. Lubin, M.T. Austin, Resolution and stability in the autospectra of EEG, Electroenceph. Clin. Neurophysiol. 19 (1965) 305–308.
[11] R. Roessler, F. Collins, R. Ostman, A period analysis classification of sleep stages, Electroenceph. Clin. Neurophysiol. 29 (1970) 358–362.
[12] T.M. Itil, D.M. Shapiro, M. Fink, D. Kassebaum, Digital computer classifications of EEG sleep stages, Electroenceph. Clin. Neurophysiol. 27 (1969) 76–83.
[13] N.R. Burch, Automatic analysis of the electroencephalogram: a review and classification of systems, Electroenceph. Clin. Neurophysiol. 11 (1959) 827–834.
[14] M. Fink, T.M. Itil, D.M. Shapiro, Digital computer analysis of the human EEG in psychiatric research, Compr. Psychiatry 8 (1967) 521–538.
[15] C.S. Lessard, H.M. Hughes, Digital Simulation Aid in Designing an Automatic EEG Analyzer, USAF Report SAM-TR-70-33, June 1970.
[16] B. Hjorth, EEG analysis based on time domain properties, Electroenceph. Clin. Neurophysiol. 29 (1970) 306–310.
[17] A.O. Bishop Jr., R.W. Snelsire, L.C. Wilcox, W.P. Wilson, The moving window approach to on-line real-time waveform recognition, in: Proceedings of the San Diego Biomedical Symposium, San Diego, CA, 1970, pp. 77–83.
[18] J.R. Smith, M. Negin, A.H. Nevis, Automatic analysis of sleep electroencephalograms by hybrid computation, IEEE Trans. Syst. Sci. Cybern. SSC-5 (1969) 278–284.
[19] R. Agarwal, J. Gotman, Computer-assisted sleep staging, IEEE Trans. Biomed. Eng. 48 (12) (2001) 1412–1423.
[20] J.D. Frost Jr., An automatic sleep analyzer, Electroenceph. Clin. Neurophysiol. 29 (1970) 88–92.
[21] E. Oropesa, H.L. Cycon, M. Jobert, Sleep Stage Classification Using Wavelet Transform and Neural Network, International Computer Science Institute, TR-99-008, March 1999.
[22] E. Pereda, A. Gamundi, R. Rial, J. Gonzalez, Non-linear behaviour of human EEG: fractal exponent versus correlation dimension in awake and sleep stages, Neurosci. Lett. 250 (1998) 91–94.
[23] J.M. Lee, D.J. Kim, I.Y. Kim, K.S. Park, S.I. Kim, Detrended fluctuation analysis of EEG in sleep apnea using MIT/BIH polysomnography data, Comput. Biol. Med. 32 (2002) 37–47.
[24] P. Achermann, R. Hartmann, A. Gunzinger, W. Guggenbuhl, A.A. Borbely, Correlation dimension of the human sleep electroencephalogram: cyclic changes in the course of the night, Eur. J. Neurosci. 6 (1994) 497–500.
[25] J. Fell, J. Roschke, Nonlinear dynamical aspects of the human sleep EEG, Int. J. Neurosci. 76 (1994) 109–129.
[26] C.-F.V. Latchoumane, J. Jeong, Quantification of brain macrostates using dynamical nonstationarity of physiological time series, IEEE Trans. Biomed. Eng. 58 (4) (2011) 1084–1093.
[27] J. Fell, J. Roschke, K. Mann, C. Schaffner, Discrimination of sleep stages: a comparison between spectral and nonlinear EEG measures, Electroenceph. Clin. Neurophysiol. 98 (1996) 401–410.
[28] K. Susmakova, A. Krakovska, Discrimination ability of individual measures used in sleep stages classification, Artif. Intell. Med. 44 (2008) 261–277.
[29] A. Krakovska, K. Mezeiova, Automatic sleep scoring: a search for an optimal combination of measures, Artif. Intell. Med. 53 (1) (2011) 25–33.


[30] E. Stanus, B. Lacroix, M. Kerkhofs, J. Mendlewicz, Automated sleep scoring: a comparative reliability study of algorithms, Electroenceph. Clin. Neurophysiol. 66 (1987) 448–456.
[31] N. Schaltenbrand, et al., Sleep stage scoring using the neural network model: comparison between visual and automatic analysis in normal subjects and patients, Sleep 19 (1996) 26–35.
[32] L. Zoubek, S. Charbonnier, S. Lesecq, A. Buguet, F. Chapotot, Feature selection for sleep/wake stages classification using data driven methods, Biomed. Signal Process. Control 2 (2007) 171–179.
[33] H.G. Jo, J.Y. Park, C.K. Lee, S.K. An, S.K. Yoo, Genetic fuzzy classifier for sleep stage identification, Comput. Biol. Med. 40 (2010) 629–634.
[34] S. Gunes, K. Polat, S. Yosunkaya, Efficient sleep stage recognition system based on EEG signal using k-means clustering based feature weighting, Exp. Syst. Appl. 37 (2010) 7922–7928.
[35] K.A. Kooi, Fundamentals of Electroencephalography, Harper and Row, New York, 1971.
[36] J.D. Jobson, Applied Multivariate Data Analysis, Vol. I: Regression and Experimental Design, Springer-Verlag, New York, 1991.
[37] J.R. Smith, I. Karacan, M. Yang, Automated EEG analysis with microcomputers, Sleep 1 (4) (1979) 435–443.
[38] S. Charbonnier, L. Zoubek, S. Lesecq, F. Chapotot, Self-evaluated automatic classifier as a decision-support tool for sleep/wake staging, Comput. Biol. Med. 41 (2011) 380–389.
[39] S.J. Redmond, C. Heneghan, Cardiorespiratory-based sleep staging in subjects with obstructive sleep apnea, IEEE Trans. Biomed. Eng. 53 (3) (2006).
[40] P.D. Welch, The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms, IEEE Trans. Audio Electroacoust. AU-15 (1967) 70–73.
[41] D. Jordan, G. Stockmanns, E. Kochs, G. Schneider, Median frequency revisited: an approach to improve a classic spectral electroencephalographic parameter for the separation of consciousness from unconsciousness, Anesthesiology 107 (2007) 397–405.
[42] T. Inouye, K. Shinosaki, H. Sakamoto, S. Toi, S. Ukai, A. Iyama, Y. Katsuda, M. Hirano, Quantification of EEG irregularity by use of the entropy of the power spectrum, Electroenceph. Clin. Neurophysiol. 79 (1991) 204–210.
[43] D. Abasolo, R. Hornero, P. Espin, D. Alvarez, J. Poza, Entropy analysis of the EEG background activity in Alzheimer's disease patients, Physiol. Meas. 27 (2006) 241–253.
[44] H.D.I. Abarbanel, Analysis of Observed Chaotic Data, first ed., Springer-Verlag, New York, 1996.
[45] A. Wolf, J.B. Swift, H.L. Swinney, J.A. Vastano, Determining Lyapunov exponents from a time series, Physica D 16 (1985) 285–317.
[46] S.M. Pincus, Approximate entropy as a measure of system complexity, Proc. Natl. Acad. Sci. 88 (1991) 2297–2301.
[47] K. Hu, P.Ch. Ivanov, Z. Chen, P. Carpena, H.E. Stanley, Effect of trends on detrended fluctuation analysis, Phys. Rev. E 64 (2001) 011114-1–011114-19.
[48] T. Higuchi, Approach to an irregular time series on the basis of the fractal theory, Physica D 31 (1988) 277–283.
[49] X.-S. Zhang, R.J. Roy, E.W. Jensen, EEG complexity as a measure of depth of anesthesia for patients, IEEE Trans. Biomed. Eng. 48 (12) (2001) 1424–1433.


[50] R. Acharya U., O. Faust, N. Kannathal, T. Chua, S. Laxminarayan, Non-linear analysis of EEG signals at various sleep stages, Comput. Methods Prog. Biomed. 80 (2005) 37–45.
[51] P. Grassberger, I. Procaccia, Characterization of strange attractors, Phys. Rev. Lett. 50 (1983) 346–349.
[52] J. Roschke, J. Fell, P. Beckmann, The calculation of the first positive Lyapunov exponent in sleep EEG data, Electroenceph. Clin. Neurophysiol. 86 (1993) 348–352.
[53] C. Huang, L.S. Davis, J.R.G. Townshend, An assessment of support vector machines for land cover classification, Int. J. Remote Sens. 23 (4) (2002) 725–749.
[54] F. Melgani, L. Bruzzone, Classification of hyperspectral remote sensing images with support vector machines, IEEE Trans. Geosci. Remote Sens. 42 (8) (2004) 1778–1790.
[55] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[56] I. Guyon, J. Weston, S. Barnhill, V. Vapnik, Gene selection for cancer classification using support vector machines, Mach. Learn. 46 (1–3) (2002) 389–422.
[57] J. Fell, J. Roschke, P. Beckmann, Deterministic chaos and the first positive Lyapunov exponent: a nonlinear analysis of the human electroencephalogram during sleep, Biol. Cybern. 69 (1993) 139–146.
[58] M. Stone, Cross-validatory choice and assessment of statistical predictions, J. R. Stat. Soc. Ser. B 36 (1) (1974) 111–147.
[59] C.C.C. Pang, A.R.M. Upton, G. Shine, M.V. Kamath, A comparison of algorithms for detection of spikes in the electroencephalogram, IEEE Trans. Biomed. Eng. 50 (4) (2003) 521–526.
[60] J.A. Hanley, B.J. McNeil, A method of comparing the areas under receiver operating characteristic curves derived from the same cases, Radiology 148 (1983) 839–843.
[61] H. Kim, S. Pang, H. Je, D. Kim, S.Y. Bang, Constructing support vector machine ensemble, Pattern Recogn. 36 (2003) 2757–2767.
[62] K.J. Berry, P.W. Mielke Jr., A generalization of Cohen's kappa agreement measure to interval measurement and multiple raters, Educ. Psychol. Meas. 48 (1988) 921–933.
[63] I.H. Song, D.S. Lee, S.I. Kim, Recurrence quantification analysis of sleep electroencephalogram in sleep apnea syndrome in humans, Neurosci. Lett. 366 (2004) 148–153.
[64] C. Berthomier, X. Drouot, M. Herman-Stoica, P. Berthomier, J. Prado, D. Bokar-Thire, O. Benoit, J. Mattout, M.P. d'Ortho, Automatic analysis of single-channel sleep EEG: validation in healthy individuals, Sleep 30 (11) (2007) 1587–1595.
[65] S.-F. Liang, C.-E. Kuo, Y.-H. Hu, Y.-H. Pang, Y.-H. Wang, Automatic stage scoring of single-channel sleep EEG by using multiscale entropy and autoregressive models, IEEE Trans. Instrum. Meas. 61 (6) (2012) 1649–1657.
[66] A. Flexer, G. Gruber, G. Dorffner, A reliable probabilistic sleep stager based on a single EEG signal, Artif. Intell. Med. 33 (3) (2005) 199–207.
[67] A. Searle, L. Kirkup, A direct comparison of wet, dry and insulating bioelectric recording electrodes, Physiol. Meas. 22 (2000) 71–83.
[68] C. Iber, S. Redline, A.M.G. Kaplan, S.F. Quan, L. Zhang, D.J. Gottlieb, D. Rapoport, H.E. Resnick, M. Sanders, P. Smith, Polysomnography performed in the unattended home versus the attended laboratory setting: Sleep Heart Health Study methodology, Sleep 27 (3) (2004) 536–540.