Fractional mesh-free linear diffusion method for image enhancement and segmentation for automatic tumor classification

Saroj Kumar Chandra*, Manish Kumar Bajpai
Computer Science and Engineering, Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, India

Article history: Received 26 September 2019; Received in revised form 11 December 2019; Accepted 31 December 2019

Keywords: Linear diffusion; Partial differential equation; Radial basis function; Brain tumor segmentation; Image enhancement; Computer aided diagnostic

Abstract

Computer aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases. Tumor identification is one of the most useful applications of a CAD model. Benign and malignant are the two categories of tumor cells. Both categories share some textural features, which makes tumor classification a complex and difficult task. In the present manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction and classification). All the modules are equally important in tumor classification. The manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. In the present work, a novel linear fractional mesh-free partial differential equation (FPDE) based image enhancement method is proposed to improve the quality of the images. The proposed enhancement model is able to preserve fine details in smooth areas and non-linearly increases high-frequency information. A novel fractional mesh-free segmentation method is proposed to extract the tumor region; it has been found that the method is able to segment the tumor region more accurately. Thirteen textural features have been used for training and testing, and a support vector machine (SVM) classifier has been used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation and CAD models has been carried out against other popular models, and higher performance has been observed with the proposed models.

© 2020 Elsevier Ltd. All rights reserved.

1. Introduction

Medical imaging has played a key role in the diagnosis and forecasting of diseases [1]. It is a set of mathematical procedures for acquiring the internal characteristics of the human body in the form of images [2]. Medical imaging uses different energy sources, such as X-rays, gamma rays, visible light, ultrasound, infrared, and magnetic and electric fields, to acquire internal details of the human body [3]. Abnormal tissue generation and abnormal tissue behavior can be analyzed by observing the acquired images. Magnetic resonance imaging (MRI) is a non-invasive medical imaging technique that is able to visualize abnormalities in the human body [4]. It produces multiple images of the same tissue region with different contrast visualization capabilities, which increases reliability in the disease confirmation process. A brain tumor is an abnormal growth of cells in the brain, which can be analyzed from acquired brain MRI images.

* Corresponding author. E-mail addresses: [email protected] (S.K. Chandra), [email protected] (M.K. Bajpai). https://doi.org/10.1016/j.bspc.2019.101841

Brain tumors are classified into primary brain tumors (when the tumor originates inside the brain) and secondary brain tumors (when tumor cells come from other parts of the body) [5]. Benign and malignant tumors are the two categories of primary brain tumor. Benign tumors have a slow growth rate and do not affect other parts of the body [6]; they are completely curable by complete surgical excision. Benign brain tumors have a very small intensity difference with respect to the surrounding cells. Malignant brain tumors have a sufficient intensity difference with respect to the surrounding cells and hence are easily identifiable. They have a faster growth rate than benign brain tumors and also spread to other parts of the body. Malignant brain tumors are treated using chemotherapy, radiotherapy or a combination of both. Once a benign brain tumor has been removed, it is very rare for it to recur, but this is not true for malignant brain tumors. The acquired images are inspected by a radiologist to confirm or rule out the presence of tumor cells. It is a very challenging task for the radiologist to differentiate between benign and malignant tumor cells. Hence, CAD models have been designed to automate the tumor classification process [7–9]. CAD models increase accuracy and also improve inter- and intra-reader variability. A CAD model involves image enhancement, segmentation, feature extraction and classification. The acquired MRI images contain many forms of artifacts, which must be reduced to increase the reliability of CAD models.


Degraded medical images complicate the radiologist's decision-making process during diagnosis. Image enhancement is one of the most important steps in most CAD models; it improves the quality of the images [10]. Enhancing local regions in low-contrast images while suppressing noise and without blurring the edges present in the image is a challenging and difficult task. Various methods can be found in the literature for image enhancement, such as adaptive neighborhood contrast enhancement, unsharp masking (UM), fuzzy based methods, and histogram-based contrast enhancement [11]. It has been found that the adaptive neighborhood approach is not noise resistant and produces artefacts. Histogram-based contrast limited adaptive histogram equalization (CLAHE) suffers from over-enhancement and blurs fine image details such as edges. Unsharp masking enhances fine details of the image at the expense of increased noise. In the present manuscript, a novel fractional mesh-free linear diffusion model is proposed for image enhancement. The Crank–Nicolson (C–N) method has been used to solve the proposed enhancement model. Fractional calculus based models non-linearly increase high-frequency information such as edges while preserving low-frequency information in smooth areas [12]. The proposed enhancement method has been used in the CAD model to increase the accuracy of tumor classification.

A segmentation method is used in the second module to detect and segment the suspicious region. Many segmentation methods, such as region based, watershed based, clustering based and PDE based methods, can be found in the literature [13–19]. It has been observed that existing models are unable to detect and segment low-variation regions such as benign brain tumors [20]. In the present manuscript, a novel FPDE based segmentation method is proposed and used in the proposed CAD model. It has been found that the proposed segmentation method is able to segment the tumor region more accurately. Feature extraction is performed in the third module on the extracted tumor region. A support vector machine (SVM) classifier is used in the last module for tumor classification. Results obtained from the proposed CAD model have been compared with other CAD models, and higher performance has been obtained by the proposed CAD model.

The organization of the manuscript is as follows. Section 2 introduces the proposed CAD model for tumor classification. Results and discussion are presented in Section 3, along with a comparative study with existing methods.

Fig. 1. CAD model for tumor classification.

2. Computer aided diagnostic system

The proposed CAD model has been divided into four modules: (I) image enhancement; (II) segmentation; (III) feature extraction; and (IV) classification. The proposed CAD model is shown in Fig. 1. Each module is described in the following sections.

2.1. Image enhancement

Image enhancement improves the visual quality of the images [10]. In the proposed work, a linear fractional mesh-free diffusion model has been designed for improving the quality of the images. Diffusion is a physical phenomenon that balances concentration differences while preserving the total mass of the system. Noise has high intensity values and hence gets diffused to nearby pixel locations, giving a smoother view. The linear diffusion process can be defined by a PDE as

\frac{\partial u(x,y,t)}{\partial t} = \operatorname{div}(\nabla u(x,y,t)) = \nabla^{2} u(x,y,t)    (1)

Here, div is the divergence operator, \nabla is the gradient operator and \nabla^{2} is the Laplacian operator:

\frac{\partial u(x,y,t)}{\partial t} = \frac{\partial^{2} u(x,y,t)}{\partial x^{2}} + \frac{\partial^{2} u(x,y,t)}{\partial y^{2}}    (2)

Here, u(x, y, t) is the image function at time t and u(x, y, 0) = u(x, y) is the original degraded image. In the present manuscript, a fractional Laplacian has been used and has been solved using a mesh-free approach. The proposed FPDE can be defined as

\frac{\partial u(x,y,t)}{\partial t} = \frac{\partial^{\alpha} u(x,y,t)}{\partial x^{\alpha}} + \frac{\partial^{\alpha} u(x,y,t)}{\partial y^{\alpha}}    (3)
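For intuition, the integer-order case of Eq. (2) is ordinary heat-equation smoothing. The following is a minimal sketch (not the authors' code) of one explicit realization in Python/NumPy, using a 5-point Laplacian with periodic boundaries; the step size and iteration count are illustrative. The paper instead uses the fractional Laplacian of Eq. (3) and solves it implicitly, as derived below.

```python
import numpy as np

def diffuse(u, steps=5, dt=0.2):
    """Explicit forward-Euler smoothing of the heat equation, Eq. (2).
    dt <= 0.25 keeps the explicit scheme stable; np.roll implies periodic boundaries."""
    u = u.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

# toy usage: the standard deviation of pure noise drops as it is diffused
noisy = np.random.default_rng(0).standard_normal((64, 64))
print(noisy.std(), diffuse(noisy).std())
```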

The forward difference method (FDM) has been used to discretize time, using a Taylor series expansion:

\frac{u(x,y,t+\Delta t) - u(x,y,t)}{\Delta t} = \frac{\partial}{\partial t} u(x,y,t)    (4)

The Crank–Nicolson implicit finite difference scheme is unconditionally stable and hence is used in the current work to calculate the arbitrary-order derivative of the image function u(x, y) [21]. Using Eqs. (3) and (4),

\frac{u^{n+1}_{x,y} - u^{n}_{x,y}}{\Delta t} = \frac{1}{2}\left[ (\delta_{\alpha,x} + \delta_{\alpha,y})\, u^{n+1}_{x,y} + (\delta_{\alpha,x} + \delta_{\alpha,y})\, u^{n}_{x,y} \right]    (5)

Here, \delta_{\alpha,x} and \delta_{\alpha,y} are directional differentiation operators of arbitrary order. Simplifying Eq. (5),

u^{n+1}_{x,y} - \frac{\Delta t}{2}(\delta_{\alpha,x} + \delta_{\alpha,y})\, u^{n+1}_{x,y} = u^{n}_{x,y} + \frac{\Delta t}{2}(\delta_{\alpha,x} + \delta_{\alpha,y})\, u^{n}_{x,y}    (6)

\left(1 - \frac{\Delta t}{2}(\delta_{\alpha,x} + \delta_{\alpha,y})\right) u^{n+1}_{x,y} = \left(1 + \frac{\Delta t}{2}(\delta_{\alpha,x} + \delta_{\alpha,y})\right) u^{n}_{x,y}    (7)

Let us assume P = \frac{\Delta t}{2}\delta_{\alpha,x} and Q = \frac{\Delta t}{2}\delta_{\alpha,y}. Factorizing each side of Eq. (7) gives equations of the form

(1 - P)(1 - Q)\, u^{n+1}_{x,y} - PQ\, u^{n+1}_{x,y} = (1 + P)(1 + Q)\, u^{n}_{x,y} - PQ\, u^{n}_{x,y}    (8)

(1 - P)(1 - Q)\, u^{n+1}_{x,y} = (1 + P)(1 + Q)\, u^{n}_{x,y} + PQ\, u^{n+1}_{x,y} - PQ\, u^{n}_{x,y}    (9)

(1 - P)(1 - Q)\, u^{n+1}_{x,y} = (1 + P)(1 + Q)\, u^{n}_{x,y} + PQ\,(u^{n+1}_{x,y} - u^{n}_{x,y})    (10)

Here, the last term in Eq. (10) can be written as

PQ\,(u^{n+1}_{x,y} - u^{n}_{x,y}) = \frac{\Delta t^{2}}{4}\, \delta_{\alpha,x}\delta_{\alpha,y}\,(u^{n+1}_{x,y} - u^{n}_{x,y})    (11)

PQ\,(u^{n+1}_{x,y} - u^{n}_{x,y}) = \frac{\Delta t^{3}}{4}\, \delta_{\alpha,x}\delta_{\alpha,y}\, \frac{u^{n+1}_{x,y} - u^{n}_{x,y}}{\Delta t}    (12)

\frac{u^{n+1}_{x,y} - u^{n}_{x,y}}{\Delta t} = O(\Delta t)    (13)
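Before this cross term is dropped (as done next), its size can be checked numerically: the factorized update (1 − P)(1 − Q)u^{n+1} = (1 + P)(1 + Q)u^{n} differs from the unfactorized Crank–Nicolson update of Eq. (7) only by PQ(u^{n+1} − u^{n}), which shrinks like O(Δt³) per step. The sketch below is purely illustrative: small random symmetric matrices stand in for δ_{α,x} and δ_{α,y}, and the matrix size and time steps are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
Dx = rng.standard_normal((n, n)); Dx = 0.5 * (Dx + Dx.T)   # stand-in for delta_{alpha,x}
Dy = rng.standard_normal((n, n)); Dy = 0.5 * (Dy + Dy.T)   # stand-in for delta_{alpha,y}
u0 = rng.standard_normal(n)
I = np.eye(n)

def difference(dt):
    P, Q = 0.5 * dt * Dx, 0.5 * dt * Dy
    cn  = np.linalg.solve(I - P - Q, (I + P + Q) @ u0)                # unfactorized, Eq. (7)
    adi = np.linalg.solve((I - P) @ (I - Q), (I + P) @ (I + Q) @ u0)  # factorized form
    return np.linalg.norm(cn - adi)

for dt in (0.1, 0.05, 0.025):
    # the gap drops roughly by a factor of 8 per halving of dt, i.e. O(dt^3)
    print(dt, difference(dt))
```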

The term given in Eq. (13) is the least significant term; its presence has very little impact on the final solution and hence it can be neglected. After neglecting this lower-order term, Eq. (10) can be written as

(1 - P)(1 - Q)\, u^{n+1}_{x,y} = (1 + P)(1 + Q)\, u^{n}_{x,y}    (14)

Eq. (14) is the final solution of Eq. (5). Substituting the values of the symbolic notation P and Q, the final equation is given as

\left(1 - \frac{\Delta t}{2}\delta_{\alpha,x}\right)\left(1 - \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n+1}_{x,y} = \left(1 + \frac{\Delta t}{2}\delta_{\alpha,x}\right)\left(1 + \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n}_{x,y}    (15)

The alternating direction implicit (ADI) approach has been used to solve Eq. (15) [22]. ADI requires two steps to solve the proposed FPDE for image enhancement. Eq. (15) is solved by breaking each iteration into two steps:

\left(1 - \frac{\Delta t}{2}\delta_{\alpha,x}\right) u^{n+\frac{1}{2}}_{x,y} = \left(1 + \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n}_{x,y}    (16)

\left(1 - \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n+1}_{x,y} = \left(1 + \frac{\Delta t}{2}\delta_{\alpha,x}\right) u^{n+\frac{1}{2}}_{x,y}    (17)

Eliminating the intermediate term u^{n+\frac{1}{2}}_{x,y} from Eqs. (16) and (17) reduces the scheme to its original form:

\left(1 - \frac{\Delta t}{2}\delta_{\alpha,x}\right)\left(1 - \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n+1}_{x,y} = \left(1 + \frac{\Delta t}{2}\delta_{\alpha,x}\right)\left(1 + \frac{\Delta t}{2}\delta_{\alpha,y}\right) u^{n}_{x,y}    (18)

The fractional directional derivative operators \delta_{\alpha,x} and \delta_{\alpha,y} are defined to calculate the derivative of the image function u(x, y). The derivative of the image function is approximated using a mesh-free method. Mesh-free methods provide easy computation and are also dimensionality independent [23]. The radial basis function (RBF) is a truly mesh-free method. It has been used in many scientific computing and engineering applications, such as multivariate scattered data processing, numerical solutions of PDEs, neural networks, and machine learning. Different PDEs, such as parabolic, hyperbolic and elliptic equations, have been solved using RBFs [24,25]. The multiquadric RBF has been used in the current work for solving the proposed FPDE [26]. The image function u(x, y) can be approximated by a linear combination of RBFs, defined as

u(x,y) = \sum_{j=1}^{N} \varphi(r_{j})\, \lambda_{j}    (19)

Here, u(x, y) is the image function, r_{j} = \sqrt{(x - x_{j})^{2} + (y - y_{j})^{2}} is the Euclidean distance between the centers and the evaluation points, \lambda_{j} are the unknown RBF expansion coefficients and N is the number of center points in \Omega. The \lambda are determined by the center points. Eq. (19) can be expressed as

u(x_{i}, y_{i}) = \sum_{j=1}^{N} \varphi\!\left(\sqrt{(x_{i} - x_{j})^{2} + (y_{i} - y_{j})^{2}}\right)\lambda_{j}, \quad 1 \le i, j \le N    (20)

In matrix–vector notation, Eq. (20) can be represented as

U = A\lambda    (21)

Here, A is the system (collocation) matrix of size N \times N over the scattered points, given by

A = \begin{bmatrix} \varphi\!\left(\sqrt{(x_{1}-x_{1})^{2}+(y_{1}-y_{1})^{2}}\right) & \cdots & \varphi\!\left(\sqrt{(x_{1}-x_{N})^{2}+(y_{1}-y_{N})^{2}}\right) \\ \varphi\!\left(\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}\right) & \cdots & \varphi\!\left(\sqrt{(x_{2}-x_{N})^{2}+(y_{2}-y_{N})^{2}}\right) \\ \vdots & \ddots & \vdots \\ \varphi\!\left(\sqrt{(x_{N}-x_{1})^{2}+(y_{N}-y_{1})^{2}}\right) & \cdots & \varphi\!\left(\sqrt{(x_{N}-x_{N})^{2}+(y_{N}-y_{N})^{2}}\right) \end{bmatrix}    (22)

The vector of unknowns \lambda can be determined by inverting Eq. (21):

\lambda = A^{-1} U    (23)

The derivative of the image function u(x, y) can then be calculated using the mesh-free method as

\delta_{\alpha,x}\, u(x,y) = \sum_{j=1}^{N} \frac{\partial^{\alpha}}{\partial x^{\alpha}}\, \varphi\!\left(\sqrt{(x_{i} - x_{j})^{2} + (y_{i} - y_{j})^{2}}\right)\lambda_{j}    (24)

\delta_{\alpha,y}\, u(x,y) = \sum_{j=1}^{N} \frac{\partial^{\alpha}}{\partial y^{\alpha}}\, \varphi\!\left(\sqrt{(x_{i} - x_{j})^{2} + (y_{i} - y_{j})^{2}}\right)\lambda_{j}    (25)

Here, \varphi(\cdot) is the RBF, which is used to approximate the derivative of the image function through the derivative of the RBF. The Riemann–Liouville fractional derivative definition has been used in the current work to calculate the fractional derivative of the RBF, owing to its unique property that the fractional derivative of a constant is non-zero [27,28]. The fractional derivative of a function u(x) over the interval L < x < R can be defined as

\frac{\partial^{\alpha}}{\partial x^{\alpha}} u(x) = \frac{1}{\Gamma(n-\alpha)}\, \frac{\partial^{n}}{\partial x^{n}} \int_{L}^{x} \frac{u(\xi)}{(x - \xi)^{\alpha - n + 1}}\, d\xi    (26)

Here, \frac{\partial^{\alpha}}{\partial x^{\alpha}} = \delta_{\alpha,x} has been taken for notational convenience, and n is an integer such that n - 1 < \alpha \le n. Fractional derivatives of popular RBFs can be found in [29]. The present manuscript uses the fractional derivative of the multiquadric RBF under the Riemann–Liouville definition to approximate the derivative of the image function. The fractional derivative of the multiquadric RBF under the R–L definition can be computed as

\frac{\partial^{\alpha}}{\partial r^{\alpha}} (1 + \varepsilon r^{2})^{\frac{1}{2}} = \varepsilon \sum_{n=0}^{\infty} \frac{(-1)^{n}\left(\frac{-1}{2}\right)_{n}}{n!}\, \frac{\partial^{\alpha}}{\partial r^{\alpha}}\, r^{2n}    (27)

Here, (x)_{n} is the Pochhammer function, whose value is calculated as

(x)_{n} = x(x-1)(x-2)\cdots(x-n+1) = \prod_{k=0}^{n-1} (x - k)

Simplifying Eq. (27),

\frac{\partial^{\alpha}}{\partial r^{\alpha}} (1 + \varepsilon r^{2})^{\frac{1}{2}} = \varepsilon \sum_{n=0}^{\infty} \frac{(-1)^{n}\left(\frac{-1}{2}\right)_{n}}{n!}\, \frac{\Gamma(2n+1)}{\Gamma(2n-\alpha+1)}\, r^{2n-\alpha}    (28)

\frac{\partial^{\alpha}}{\partial r^{\alpha}} (1 + \varepsilon r^{2})^{\frac{1}{2}} = \frac{\varepsilon\, r^{-\alpha}}{\Gamma(1-\alpha)} \sum_{n=0}^{\infty} \frac{(1)_{2n}\left(\frac{-1}{2}\right)_{n}}{(1-\alpha)_{2n}\, n!}\, (-r^{2})^{n}    (29)

\frac{\partial^{\alpha}}{\partial r^{\alpha}} (1 + \varepsilon r^{2})^{\frac{1}{2}} = \frac{\varepsilon\, r^{-\alpha}}{\Gamma(1-\alpha)} \sum_{n=0}^{\infty} \frac{\left(\frac{1}{2}\right)_{n}\,(1)_{n}\,\left(\frac{-1}{2}\right)_{n}}{\left(\frac{1-\alpha}{2}\right)_{n}\left(\frac{2-\alpha}{2}\right)_{n}\, n!}\, (-r^{2})^{n}    (30)

\frac{\partial^{\alpha}}{\partial r^{\alpha}} (1 + \varepsilon r^{2})^{\frac{1}{2}} = \frac{\varepsilon\, r^{-\alpha}}{\Gamma(1-\alpha)}\; {}_{3}F_{2}\!\left(\frac{1}{2},\, 1,\, \frac{-1}{2};\; \frac{1-\alpha}{2},\, \frac{2-\alpha}{2};\; -r^{2}\right)    (31)

Here, r is the Euclidean norm in two dimensions and \varepsilon is the shape parameter, whose value should be chosen by an optimization method. The value of \varepsilon is different for different applications; finding an optimal value, or a method to obtain it, for all kinds of applications remains an open problem. {}_{3}F_{2} is the generalized hypergeometric series. Eq. (31) has been used to obtain the derivative of the image function through the derivative of the RBF.
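As an illustration of the collocation machinery of Eqs. (19)–(25), the following minimal Python/NumPy sketch builds the system matrix A for a one-dimensional set of multiquadric centers, recovers the coefficients λ, and approximates a derivative of a sampled function. For verifiability it uses the classical first derivative of the multiquadric; the paper instead uses the Riemann–Liouville fractional derivative of Eq. (31). The point count and shape parameter here are illustrative choices, not the authors' implementation.

```python
import numpy as np

def multiquadric(r, eps=2.0):
    """Multiquadric RBF (1 + (eps*r)^2)^(1/2)."""
    return np.sqrt(1.0 + (eps * r) ** 2)

def d_multiquadric_dx(dx, eps=2.0):
    """Classical d/dx of the multiquadric centred at x_j, with dx = x - x_j."""
    return eps ** 2 * dx / np.sqrt(1.0 + (eps * dx) ** 2)

x = np.linspace(0.0, 1.0, 40)                 # collocation points (centres = evaluation points)
u = np.sin(2 * np.pi * x)                     # sampled "image" values on the centres

A  = multiquadric(np.abs(x[:, None] - x[None, :]))   # system matrix, Eq. (22)
Bx = d_multiquadric_dx(x[:, None] - x[None, :])      # derivative of the basis, Eq. (24)

lam = np.linalg.solve(A, u)                   # expansion coefficients, Eq. (23)
du  = Bx @ lam                                # mesh-free derivative, Eq. (24)

# worst-case error against the exact derivative (largest near the interval ends)
err = np.max(np.abs(du - 2 * np.pi * np.cos(2 * np.pi * x)))
print(f"max abs error of RBF derivative: {err:.3e}")
```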

An algorithm for image enhancement: the algorithmic steps are given in Algorithm 1.

Algorithm 1. Fractional image enhancement(Image, IMGENHNCD)

Input:
  u: original image
  α: fractional order
  t0: initial time
  tmax: end time for diffusion
  Δt: time step
  Identity: identity matrix
Output: IMGENHNCD
begin
  Initialize: u(x, y, t0) = u(x, y)
  Compute: fractional coefficient matrix H
  LM = (Identity − H)
  RM = (Identity + H)
  while t ≤ tmax do
    while i ≤ nx do
      Ix(x, y, t_{i+1}) = RM × u(:, i, t_i) × INV(LM)
      Iy(x, y, t_{i+1}) = RM × u(i, :, t_i) × INV(LM)
      i = i + 1
    end while
    u = u + Δt · (Ix + Iy)
    t = t + 1
  end while
  IMGENHNCD = u
end
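A compact Python reading of this enhancement loop is sketched below. Rather than a line-by-line transcription of Algorithm 1, it applies the two-sweep ADI splitting of Eqs. (16)–(17); a classical second-difference matrix stands in for the fractional RBF operator so that the example is self-contained, and all parameter values (image size, Δt, number of iterations) are illustrative assumptions.

```python
import numpy as np

def second_difference(n, h=1.0):
    """Classical 1-D second-difference matrix; a stand-in for the fractional
    operator delta_{alpha,.} (it is the alpha = 2 limit)."""
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return D / h ** 2

def adi_enhance(u, t_max=5, dt=0.1, Dx=None, Dy=None):
    """Two-sweep ADI iteration of Eqs. (16)-(17); Dx and Dy play the role of
    delta_{alpha,x} and delta_{alpha,y} (integer-order by default)."""
    nx, ny = u.shape
    Dx = second_difference(nx) if Dx is None else Dx
    Dy = second_difference(ny) if Dy is None else Dy
    LMx, RMx = np.eye(nx) - 0.5 * dt * Dx, np.eye(nx) + 0.5 * dt * Dx
    LMy, RMy = np.eye(ny) - 0.5 * dt * Dy, np.eye(ny) + 0.5 * dt * Dy
    u = u.astype(float)
    for _ in range(t_max):
        u_half = np.linalg.solve(LMx, u @ RMy.T)       # Eq. (16): implicit in x, explicit in y
        u = np.linalg.solve(LMy, (RMx @ u_half).T).T   # Eq. (17): implicit in y, explicit in x
    return u

# toy usage: smooth a noisy ramp image
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) + 0.1 * rng.standard_normal((64, 64))
smoothed = adi_enhance(img, t_max=5, dt=0.1)
print(float(img.std()), float(smoothed.std()))
```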

The algorithm uses several parameters: α, t0, tmax and Δt. Here, α is the arbitrary order of the spatial derivative, t0 is the initial time stamp of the diffusion process, tmax is the maximum number of iterations of the diffusion process and Δt is the time step. A method for choosing the optimal value of α has not yet been developed; different values of α yield different enhanced images. In the present work, an experimental study has been performed for values of α of 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2.0. It has been observed that at α = 1.5 the algorithm performs best in improving the visual quality of the input image. The image at t0 is the input image, tmax = 5 has been used to obtain the enhanced image and Δt = 1 has been used. The algorithm also uses the matrices u, H, RM, LM, IMGENHNCD and Identity. Here, u is the original image function taken as input for the enhancement operation, H is the matrix of fractional coefficient values used for calculating the derivative of the image function, and LM = (Identity − H) and RM = (Identity + H). Identity is an identity matrix of the image size. In the proposed enhancement method, Ix and Iy are the enhanced images in the X and Y directions, respectively. Finally, the resultant enhanced image IMGENHNCD is obtained by combining Ix and Iy.

2.2. Segmentation module

The enhanced image obtained in the first module is the input to the second (segmentation) module of the proposed CAD model. This module segments the tumor region from the enhanced image. In the proposed segmentation method, diffusion is used to obtain two diffused images at different time stamps. The difference of the diffused images is calculated to obtain the boundary of the tumor region, and a post-processing operation is performed to extract the tumor region. The fractional diffusion is defined as

u^{n+1}_{x,y,t} = u^{n}_{x,y,t} + D\left( \frac{\partial^{\alpha} u_{x,y,t}}{\partial x^{\alpha}} + \frac{\partial^{\alpha} u_{x,y,t}}{\partial y^{\alpha}} \right)    (32)

Eq. (32) gives the diffused image at time stamp n. Two different values of n are used to obtain two different diffused images. Finally, the difference of diffusion (DoD) is calculated for boundary detection, giving the boundary of the tumor region. The DoD can be defined as

DoD_{x,y,t} = D\left( u^{n+1}_{x,y,t} - u^{n}_{x,y,t} \right)    (33)

An algorithm for tumor segmentation: Algorithm 2 has been designed to segment the tumor region. It uses Eq. (33) to obtain the boundary of the tumor region. The algorithm uses several parameters, such as t1, t2, RM, LM and Identity. In the proposed work, t1 = 1 and t2 = 2 have been chosen to obtain two different diffused images, and the difference of these diffused images is calculated to obtain the boundary of the tumor region. In the algorithm, Identity, H, RM and LM are matrices: Identity is an identity matrix of the image size, H is the matrix of fractional coefficient values used for calculating the derivative of the image function, and LM = (Identity − H) and RM = (Identity + H). The values of u1 and u2 are both initialized with the input image; they store the diffused images at time stamps t1 and t2, respectively. In the proposed work, u2 is more diffused than u1, so that their difference is not null. Finally, the difference of u1 and u2 gives the boundary of the tumor region. In the algorithm, the expressions LM \ RM × u1 and LM \ RM × u2 calculate the fractional diffusion in the X and Y directions, and the partial diffused images in the two directions are combined to obtain the final diffused images. The result obtained by the DoD is further post-processed to segment the tumor region, as described in Algorithm 2. In Algorithm 2, regional properties are used to extract the tumor region: conncomp to obtain the connected regions in the image, labelmatrix to assign a different label to each connected region, and regionprops to extract region properties such as density and area. These regional properties are available as built-in methods in MATLAB. In the proposed segmentation method, the region with the higher values of both density and area is taken as the tumor region.

Algorithm 2. Tumor region segmentation(I, IDoD)

Input:
  I: original image
  α: order of the spatial derivative
  t0: initial time
  N: number of collocation points
  t1: time for diffusion
  t2: time for diffusion
  Identity: identity matrix
Output: IDoD
begin
  Initialize: u1(x, y, t0) = u(x, y); u2(x, y, t0) = u(x, y)
  Compute: LM = (Identity − H), RM = (Identity + H)
  for i = 1 : t1
    solve u1(x, y, t_{i+1}) = RM × u1(x, y, t_i) × INV(LM)
  end
  for i = 1 : t2
    solve u2(x, y, t_{i+1}) = RM × u2(x, y, t_i) × INV(LM)
  end
  IDoD(x, y) = u2(x, y) − u1(x, y)
  Obtain the tumor boundary by inverting the IDoD image
  Obtain the connected regions
  Label each region with a different intensity value
  Compute the average density of each region
  Find the region with the greatest average density
  Calculate the area of the region with the highest average density
  Find the regions with the highest average intensity
  Mark the regions as tumor
end
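A minimal Python sketch of the DoD idea of Eqs. (32)–(33) and the post-processing of Algorithm 2 is shown below. A simple explicit diffuser replaces the fractional diffusion so that the example is self-contained, and scipy.ndimage is used for the connected-component step that the paper performs with MATLAB built-ins; the threshold, region-selection rule (largest area only) and synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def diffuse(u, steps, dt=0.2):
    """Explicit heat-equation smoothing standing in for the fractional diffusion of Eq. (32)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        u += dt * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u

def dod_segment(img, t1=1, t2=2, thresh=None):
    """Difference-of-diffusion segmentation in the spirit of Eqs. (32)-(33):
    diffuse for t1 and t2 steps, subtract, threshold, keep the largest connected region."""
    u1, u2 = diffuse(img, t1), diffuse(img, t2)   # u2 is more diffused than u1
    dod = np.abs(u2 - u1)                         # boundary response, Eq. (33)
    if thresh is None:
        thresh = dod.mean() + 2 * dod.std()
    mask = dod > thresh
    labels, n = ndimage.label(mask)               # connected regions (cf. conncomp/labelmatrix)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (1 + np.argmax(sizes))       # region with the largest area

# toy usage: a bright circular blob standing in for a tumor region
yy, xx = np.mgrid[:96, :96]
phantom = 0.2 + 0.8 * ((xx - 60) ** 2 + (yy - 40) ** 2 < 12 ** 2)
print("segmented pixels:", int(dod_segment(phantom).sum()))
```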

2.3. Feature extraction and classification module

The proposed segmentation method gives the tumor region as a binary image. The extracted region has been superimposed on the original brain image to obtain the tumor region with its original textural features. Thirteen textural features, such as contrast, correlation, dissimilarity, energy, entropy, homogeneity, mean, variance, standard deviation, root mean square value, smoothness, kurtosis, skewness and inverse difference moment, have been extracted from the tumor region for training and testing the classifier [30]. The support vector machine (SVM) is a supervised learning classifier and has been used in the current work for tumor classification. A 5-fold mechanism has been used in the proposed CAD model for cross-validation.
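This classification stage can be prototyped with standard libraries, as in the following hedged sketch: scikit-image's grey-level co-occurrence matrix yields several of the textural features listed above (the remaining ones are simple intensity statistics), and scikit-learn provides the SVM with 5-fold cross-validation. The synthetic patches and labels are placeholders for the real extracted tumor regions, and the feature subset is illustrative rather than the authors' exact thirteen.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def texture_features(patch):
    """A subset of the textural features used by the paper: GLCM properties
    plus simple intensity statistics of the (uint8) region patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = [graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "correlation", "dissimilarity",
                       "energy", "homogeneity")]
    feats += [patch.mean(), patch.std(),
              np.sqrt(np.mean(patch.astype(float) ** 2))]   # mean, std, RMS
    return np.array(feats)

# placeholder data: random 8-bit patches with two dummy classes
rng = np.random.default_rng(0)
patches = [rng.integers(0, 120 + 120 * (i % 2), (32, 32), dtype=np.uint8)
           for i in range(40)]
labels = np.array([i % 2 for i in range(40)])   # 0 = benign, 1 = malignant (dummy)

X = np.stack([texture_features(p) for p in patches])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)   # 5-fold cross-validation
print("cross-validated accuracy:", scores.mean())
```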

3. Results and discussion

All the experimental work has been performed in MATLAB. The hardware configuration used for implementing the algorithms is tabulated in Table 1. The validation of the proposed image enhancement, segmentation and classification has been performed on the BRATS dataset [31]; the dataset information is tabulated in Table 2. The image enhancement obtained using the proposed FPDE method is shown in Fig. 2 for an arbitrary order of the spatial derivative α = 1.5. A comparative study has been performed against popular image enhancement methods, i.e. histogram equalization (HE), contrast-limited adaptive histogram equalization (CLAHE) and unsharp masking (UM). The qualitative differences among the different image enhancement methods are not clearly visible from Fig. 2; hence, quantitative analysis has been performed using the peak signal to noise ratio (PSNR) and the Structural Similarity Index (SSIM) [32]. The results obtained are tabulated in Table 3. It can be observed from the table that the proposed enhancement model obtains higher PSNR values than the HE and CLAHE methods. This is due to the fractional calculus, which suppresses noise while preserving low-frequency information in smooth areas.

Table 1. Hardware configuration used.

  Hardware            Capacity
  CPU clock speed     2.27 GHz
  RAM                 32 GB
  L1 cache memory     256 KB
  L2 cache memory     1 MB
  L3 cache memory     4 MB

Table 2. BRATS dataset information.

  Image format                           mha
  Total images                           243
  Number of high grade tumor images      135
  Number of low grade tumor images       108
  Image division                         T1-weighted, T1-weighted with contrast (T1c), T2-weighted (T2), FLAIR-weighted
  Ground truth included                  All images contain ground truth
  Image regions                          Each image has 5 regions: normal regions, necrosis, edema, non-enhancing tumor and enhancing tumor


Table 3. Enhancement results of images with α = 1.5.

  Image     Method                    PSNR
  Image 1   Histogram equalization    6.7449
            CLAHE                     21.7759
            Unsharp masking           32.4220
            Proposed                  32.8168
  Image 2   Histogram equalization    5.5476
            CLAHE                     20.8484
            Unsharp masking           37.7748
            Proposed                  35.6134
  Image 3   Histogram equalization    5.4975
            CLAHE                     20.9544
            Unsharp masking           36.4544
            Proposed                  35.3537
  Image 4   Histogram equalization    7.9712
            CLAHE                     19.4173
            Unsharp masking           33.6339
            Proposed                  32.0414
  Image 5   Histogram equalization    5.4617
            CLAHE                     19.0437
            Unsharp masking           35.1468
            Proposed                  33.0769
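PSNR and SSIM values of the kind reported in Table 3 can be computed with standard implementations; the snippet below uses scikit-image on a synthetic image pair purely to illustrate the metric calls (it does not reproduce the table's values, and the image sizes and noise level are arbitrary).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                       # stand-in for the original image
enhanced = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```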

However, the proposed model has almost similar performance to the UM method, while being computationally more efficient, since the proposed model does not use the compute-intensive convolution operation. The images of five patients have been taken into consideration for validating the proposed segmentation method. Tumor region extraction has been carried out using Algorithm 2, and the results obtained are shown in Fig. 3. It can be seen from Fig. 3 that the proposed method with α = 1.5 extracts the tumor region more accurately than the methods proposed by Hasan et al. [13], Oo et al. [14], Abdel-Maksoud et al. [15] and Deng et al. [16]. The quantitative performance of the proposed segmentation method against the other methods has been evaluated using the Dice coefficient, the Jaccard similarity index and the Hausdorff distance [33,34]. Table 4 tabulates the quantitative data obtained by the different methods. It can be seen that the proposed method has higher similarity to the ground truth images than the other methods used for comparison.

Table 4. Dice, Jaccard and Hausdorff entries for various methods and the proposed method (α = 1.5).

  Dice coefficient
  Method                      Image 1   Image 2   Image 3   Image 4   Image 5
  Hasan et al. [13]           0.5006    0.4123    0.9038    0.7682    0.8315
  Oo et al. [14]              0.9165    0.4123    0.9038    0.7682    0.8315
  Abdel-Maksoud et al. [15]   0.9165    0.3480    0.9627    0.7444    0.8113
  Deng et al. [16]            0.4923    0.2797    0.9080    0.7749    0.8331
  Proposed                    0.9273    0.8973    0.9602    0.9000    0.9148

  Jaccard index
  Method                      Image 1   Image 2   Image 3   Image 4   Image 5
  Hasan et al. [13]           0.3339    0.2597    0.8245    0.6236    0.7115
  Oo et al. [14]              0.8458    0.2597    0.8245    0.6236    0.7115
  Abdel-Maksoud et al. [15]   0.8443    0.2107    0.9281    0.7444    0.6825
  Deng et al. [16]            0.3266    0.1626    0.8315    0.6325    0.7140
  Proposed                    0.8644    0.8137    0.9234    0.8182    0.8431

  Hausdorff distance
  Method                      Image 1   Image 2   Image 3   Image 4   Image 5
  Hasan et al. [13]           8.8318    6.4031    3.9765    8.3666    4.5208
  Oo et al. [14]              3.8730    6.4031    3.9765    8.3666    4.5208
  Abdel-Maksoud et al. [15]   3.7417    7.0711    3.9131    5.5734    4.5208
  Deng et al. [16]            8.8318    8.3066    3.9765    8.4261    4.5208
  Proposed                    3.6056    3.3166    3.0000    4.7958    3.6056
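The three metrics reported in Table 4 can be computed from binary masks as in the following sketch; scipy's directed Hausdorff distance is applied to the pixel coordinate sets of the two masks, and the toy masks are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    pa = np.argwhere(a).astype(float)   # pixel coordinates of each mask
    pb = np.argwhere(b).astype(float)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# toy masks: two overlapping squares standing in for segmentation and ground truth
seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
gt  = np.zeros((64, 64), bool); gt[22:42, 18:38] = True
print(dice(seg, gt), jaccard(seg, gt), hausdorff(seg, gt))
```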


Fig. 2. Image enhancement results by various methods.

The Dice coefficient measures the percentage of overlapping pixels between the region extracted by a segmentation method and the ground truth image. It ranges from 0 (lowest similarity with the ground truth) to 1 (highest similarity); hence, values close to 1 indicate higher-similarity regions. The proposed method has extracted tumor regions with at most 96%, at least 89% and on average 91% overlapping pixels, as can be seen from the Dice coefficient entries in Table 4. The Jaccard index also measures the overlap between the pixels obtained by a segmentation method and the ground truth image; its value also lies between 0 and 1, with values close to 1 indicating higher overlap with the ground truth image. It can be observed from the Jaccard index entries in Table 4 that the proposed method has at most 84%, at least 68% and on average 73% overlapping pixels. The Hausdorff entry measures the dissimilarity of the region segmented by a segmentation method with respect to the ground truth image. Its value lies between 0 and ∞; a value close to 0 means low dissimilarity with the ground truth image. It is found that the proposed method has at most 4.5208, at least 3.6056 and on average 4.3377 as its dissimilarity measure, as can be seen from the Hausdorff entries in Table 4. Hence, the proposed method shows less dissimilarity to the ground truth images than the other methods used for comparison.

The quantitative data in Tables 3 and 4 show that the proposed image enhancement and segmentation methods achieve good performance; hence, the work has been extended to tumor classification. The quantitative analysis of the proposed CAD model for different values of α in image enhancement and segmentation is tabulated in Table 5. From Table 5, it can be seen that the performance of the CAD model improves from α = 1.1 to α = 1.5, and the experimental study shows that the CAD performance decreases from α = 1.5 to α = 2.0. Hence, the CAD model performs best at α = 1.5. A quantitative comparative study has also been performed with Gaussian smoothing in place of the proposed enhancement method; Table 6 presents the results obtained by the CAD model with Gaussian smoothing.


Fig. 3. Tumor region extraction by various methods.

It can be seen from Table 6 that the CAD model with Gaussian smoothing gives at most 95% accuracy, whereas the proposed CAD model with the proposed enhancement method gives 97% accuracy. Hence, it can be concluded from the experimental study that the proposed enhancement is more valuable for the CAD model. The highest accuracy is obtained at α = 1.5, and the model has been compared with other popular CAD models proposed by Cheng et al. [7], Zia et al. [8] and Muhammad et al. [9]; the quantitative results obtained are tabulated in Table 7. It can be observed from Table 7 that the CAD model with the proposed enhancement and segmentation achieves the highest accuracy, 98%. It is concluded that the performance of a CAD model is highly dependent on the performance of image enhancement and segmentation.


Table 5. Quantitative analysis of the CAD model for different values of α (values in %).

  α      Accuracy   Error rate   Specificity   FPR
  1.1    90.49      9.51         94.74         5.26
  1.2    91.10      8.90         90.80         9.20
  1.3    93.10      6.90         92.80         7.20
  1.4    95.15      4.85         94.08         5.92
  1.5    97.50      2.50         96.08         3.92
  1.6    96.90      3.10         94.74         5.26
  1.7    95.10      4.90         96.85         3.15
  1.8    92.10      7.90         93.80         7.20
  1.9    90.30      9.70         91.80         8.20
  2.0    89.10      10.90        90.40         9.60

Table 6. Quantitative analysis of the CAD model with the Gaussian smoothing method for different values of α in segmentation (values in %).

  α      Accuracy   Error rate   Specificity   FPR
  1.1    88.51      11.49        90.26         9.74
  1.2    90.45      9.55         88.35         11.65
  1.3    92.45      7.55         91.70         8.30
  1.4    94.10      5.90         92.85         7.15
  1.5    95.70      4.30         94.45         5.55
  1.6    93.10      6.90         92.26         7.74
  1.7    92.10      7.90         91.85         8.15
  1.8    91.00      8.00         90.80         9.20
  1.9    89.50      10.50        89.90         10.10
  2.0    88.10      11.90        87.10         12.90

Table 7. Performance of the proposed CAD model at α = 1.5 with FPDE-based image smoothing, compared with other popular CAD models (values in %).

  Method                 Accuracy   Error rate   Specificity   FPR
  Cheng et al. [7]       91.28      8.72         92.00         8.00
  Zia et al. [8]         85.69      14.31        90.90         9.10
  Muhammad et al. [9]    94.58      5.42         96.12         4.88
  Proposed (α = 1.5)     97.50      2.50         96.08         3.92

4. Conclusions

The present manuscript presents a novel CAD model for brain tumor classification. It mainly focuses on the effect of image enhancement and segmentation in the CAD model. A fractional linear diffusion model has been presented for image enhancement, and its performance has been analyzed. It has been found that the proposed model is able to preserve the fine details present in the image while removing noise. A novel image segmentation method has also been proposed, and it has been found to segment the tumor region more accurately. The proposed enhancement and segmentation methods have been used in the proposed CAD model, whose performance has been analyzed using sensitivity, specificity, accuracy, error rate and false positive rate. The performance of the proposed CAD model with the proposed enhancement and with Gaussian smoothing has been compared, and it has been found that the proposed enhancement method increases the accuracy of the CAD model. The proposed segmentation method accurately extracts the tumor, due to which the CAD model obtains high performance in tumor classification. It has been observed that image enhancement and segmentation have a large impact on tumor classification. The highest values of 97.50% accuracy, 96.08% specificity and 3.92% FPR have been obtained with the proposed smoothing and segmentation methods.

Conflict of interest

None declared.

References

[1] R. Siegel, K. Miller, A. Jemal, Cancer statistics, Cancer J. Clin. 65 (1) (2015) 5–29.
[2] R.S. Fager, K.V. Peddanarappagari, G.N. Kumar, Pixel-based reconstruction (PBR) promising simultaneous techniques for CT reconstructions, IEEE Trans. Med. Imaging MI-12 (1) (1993) 4–9, http://dx.doi.org/10.1109/42.222660.
[3] M. Bajpai, P. Munshi, P. Gupta, C. Schorr, M. Maisl, High resolution 3D image reconstruction using algebraic method for X-ray cone-beam geometry over circular and helical trajectories, NDT&E Int. 60 (2013) 62–69, http://dx.doi.org/10.1016/j.ndteint.2013.07.009.
[4] B. Devkota, A. Alsadoon, P. Prasad, A. Singh, A. Elchouemi, Image segmentation for early stage brain tumor detection using mathematical morphological reconstruction, Proc. Comput. Sci. 125 (2018) 115–123, http://dx.doi.org/10.1016/j.procs.2017.12.017 (The 6th International Conference on Smart Computing and Communications).
[5] American Brain Tumor Association. Available at http://www.abta.org/.
[6] S.K. Chandra, M. Kumar Bajpai, Effective algorithm for benign brain tumor detection using fractional calculus, TENCON 2018 – 2018 IEEE Region 10 Conference (2018) 2408–2413, http://dx.doi.org/10.1109/TENCON.2018.8650163.
[7] J. Cheng, W. Huang, S. Cao, R. Yang, W. Yang, Z. Yun, Z. Wang, Q. Feng, Enhanced performance of brain tumor classification via tumor region augmentation and partition, PLoS One 10 (10) (2015) 1–13, http://dx.doi.org/10.1371/journal.pone.0140381.
[8] R. Zia, A new rectangular window based image cropping method for generalization of brain neoplasm classification systems, Int. J. Imaging Syst. Technol. 28 (2017), http://dx.doi.org/10.1002/ima.22266.
[9] M. Sajjad, S. Khan, K. Muhammad, W. Wu, A. Ullah, S.W. Baik, Multi-grade brain tumor classification using deep CNN with extensive data augmentation, J. Comput. Sci. 30 (2019) 174–182, http://dx.doi.org/10.1016/j.jocs.2018.12.003.
[10] S.K. Chandra, M. Kumar Bajpai, Fractional anisotropic diffusion for image denoising, 2018 IEEE 8th International Advance Computing Conference (IACC) (2018) 344–348, http://dx.doi.org/10.1109/IADCC.2018.8692094.
[11] K.L. Kashyap, M.K. Bajpai, P. Khanna, G. Giakos, Mesh-free approach for enhancement of mammograms, IET Image Process. 12 (3) (2018) 299–306, http://dx.doi.org/10.1049/iet-ipr.2017.0326.
[12] P. Ghamisi, M.S. Couceiro, J.A. Benediktsson, N.M. Ferreira, An efficient method for segmentation of images based on fractional calculus and natural selection, Expert Syst. Appl. 39 (16) (2012) 12407–12417, http://dx.doi.org/10.1016/j.eswa.2012.04.078.
[13] A.M. Hasan, F. Meziane, R. Aspin, H.A. Jalab, Segmentation of brain tumors in MRI images using three-dimensional active contour without edge, Symmetry 8 (11) (2016).
[14] S.Z. Oo, A.S. Khaing, Brain tumor detection and segmentation using watershed segmentation and morphological operation, Int. J. Res. Eng. Technol. 3 (3) (2014) 367–374.
[15] E. Abdel-Maksoud, M. Elmogy, R. Al-Awadi, Brain tumor segmentation based on a hybrid clustering technique, Egypt. Informat. J. 16 (1) (2015) 71–81, http://dx.doi.org/10.1016/j.eij.2015.01.003.
[16] W. Deng, W. Xiao, H. Deng, J. Liu, MRI brain tumor segmentation with region growing method based on the gradients and variances along and inside of the boundary curve, 3rd International Conference on Biomedical Engineering and Informatics 1 (2010) 393–396, http://dx.doi.org/10.1109/BMEI.2010.5639536.
[17] E. Dong, Q. Zheng, W. Sun, Z. Li, L. Li, Constrained multiplicative graph cuts based active contour model for magnetic resonance brain image series segmentation, Signal Process. 104 (2014) 59–69, http://dx.doi.org/10.1016/j.sigpro.2014.03.038.
[18] Q. Zheng, E. Dong, Z. Cao, W. Sun, Z. Li, Active contour model driven by linear speed function for local segmentation with robust initialization and applications in MR brain images, Signal Process. 97 (2014) 117–133, http://dx.doi.org/10.1016/j.sigpro.2013.10.008.
[19] W. Sun, E. Dong, Kullback–Leibler distance and graph cuts based active contour model for local segmentation, Biomed. Signal Process. Control 52 (2019) 120–127, http://dx.doi.org/10.1016/j.bspc.2019.04.008.
[20] S.K. Chandra, M.K. Bajpai, Mesh free alternate directional implicit method based three dimensional super-diffusive model for benign brain tumor segmentation, Comput. Math. Appl. 77 (12) (2019) 3212–3223, http://dx.doi.org/10.1016/j.camwa.2019.02.009.
[21] N. Sweilam, M. Khader, A. Nagy, Numerical solution of two-sided space-fractional wave equation using finite difference method, J. Comput. Appl. Math. 235 (8) (2011) 2832–2841, http://dx.doi.org/10.1016/j.cam.2010.12.002.
[22] J. Qin, The new alternating direction implicit difference methods for solving three-dimensional parabolic equations, Appl. Math. Modell. 34 (4) (2010) 890–897, http://dx.doi.org/10.1016/j.apm.2009.07.006.
[23] Z.J. Fu, C. Chen, Recent Advances in Radial Basis Function Collocation Methods, Springer, 2014.
[24] E. Kansa, Multiquadrics – a scattered data approximation scheme with applications to computational fluid-dynamics – I. Surface approximations and partial derivative estimates, Comput. Math. Appl. 19 (8) (1990) 127–145, http://dx.doi.org/10.1016/0898-1221(90)90270-T.
[25] E. Kansa, Multiquadrics – a scattered data approximation scheme with applications to computational fluid-dynamics – II. Solutions to parabolic, hyperbolic and elliptic partial differential equations, Comput. Math. Appl. 19 (8) (1990) 147–161, http://dx.doi.org/10.1016/0898-1221(90)90271-K.
[26] J.H. Jung, S. Gottlieb, S.O. Kim, Iterative adaptive RBF methods for detection of edges in two-dimensional functions, Appl. Numer. Math. 61 (1) (2011) 77–91, http://dx.doi.org/10.1016/j.apnum.2010.08.006.
[27] K.B. Oldham, J. Spanier, The Fractional Calculus: Theory and Applications of Differentiation and Integration of Arbitrary Order, 1st ed., Academic Press, New York, 2006.
[28] R. Khalil, M.A. Horani, A. Yousef, M. Sababheh, A new definition of fractional derivative, J. Comput. Appl. Math. 264 (2014) 65–70, http://dx.doi.org/10.1016/j.cam.2014.01.002.
[29] M. Mohammadi, R. Schaback, On the Fractional Derivatives of Radial Basis Functions, 2013.
[30] K.L. Kashyap, M.K. Bajpai, P. Khanna, Globally supported radial basis function based collocation method for evolution of level set in mass segmentation using mammograms, Comput. Biol. Med. 87 (2017) 22–37, http://dx.doi.org/10.1016/j.compbiomed.2017.05.015.
[31] B.H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al., The multimodal brain tumor image segmentation benchmark (BRATS), IEEE Trans. Med. Imaging 34 (10) (2015) 1993–2024, http://dx.doi.org/10.1109/TMI.2014.2377694.
[32] G. Ghimpeteanu, T. Batard, M. Bertalmío, S. Levine, A decomposition framework for image denoising algorithms, IEEE Trans. Image Process. 25 (1) (2016) 388–399, http://dx.doi.org/10.1109/TIP.2015.2498413.
[33] M. Polak, H. Zhang, M. Pi, An evaluation metric for image segmentation of multiple objects, Image Vision Comput. 27 (8) (2009) 1223–1227, http://dx.doi.org/10.1016/j.imavis.2008.09.008.
[34] C. Zhao, W. Shi, Y. Deng, A new Hausdorff distance for image matching, Pattern Recogn. Lett. 26 (5) (2005) 581–586, http://dx.doi.org/10.1016/j.patrec.2004.09.022.