Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network


Pattern Recognition Letters 129 (2020) 115–122


Javaria Amin a, Muhammad Sharif a,∗, Nadia Gul b, Mussarat Yasmin a, Shafqat Ali Shad c

a Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
b Radiology, POF Hospital, Wah Cantt, Pakistan
c Department of Computer Science, Luther College, Decorah, Iowa, USA


Article history: Received 21 September 2019; Revised 10 November 2019; Accepted 13 November 2019

Keywords: Sequences, CNN, DWT, Global thresholding, Filter

Abstract

A tumor in the brain is a collection of anomalous cells and contributes to an increased death rate among humans. Therefore, in this manuscript, a fusion process is proposed to combine the structural and texture information of four MRI sequences (T1C, T1, Flair and T2) for the detection of brain tumor. A discrete wavelet transform (DWT) with a Daubechies wavelet kernel is utilized for the fusion process, which provides a more informative tumor region than any individual MRI sequence. After the fusion process, a partial differential diffusion filter (PDDF) is applied to remove noise. A global thresholding method is used to segment the tumor region, which is then fed to the proposed convolutional neural network (CNN) model for finally differentiating tumor and non-tumor regions. Five publicly available datasets, i.e., BRATS 2012, BRATS 2013, BRATS 2015, BRATS 2013 Leaderboard and BRATS 2018, are used for evaluating the proposed method. The results show that fused images provide better results than individual sequences on the benchmark datasets.

1. Introduction

The brain, the focal part of the nervous system, is composed of spongy, non-replaceable soft tissues [1]. It can be affected by various diseases, such as brain tumor. Major symptoms of brain tumor include difficulty in concentration, coordination problems, changes in speech, seizures, frequent headaches and memory loss. Brain tumors can be classified into different categories by rate of growth, origin and progression state [2–4]. The World Health Organization (WHO) grades brain tumors from grade I to grade IV [5]. The assigned grade describes particular features of a brain tumor that are associated with specific outcomes; for instance, it assists a doctor in evaluating the tumor's growth, aggressiveness and prognosis [6]. Glioma is a primary brain tumor type with two classes, benign (LGG) and malignant (HGG). Generally, low grade glioma (LGG) cells do not attack neighboring normal cells, whereas in HGG the tumor cells attack their adjacent cells [7,8]. Therefore, accurate glioma classification at an initial stage is a significant requirement [9]. Computerized methods are beneficial for detecting brain tumor in magnetic resonance imaging (MRI). However, segmenting and


classifying tumor in a single MRI scan or sequence is not an easy task [10]. Hence it is recommended to use multiple MRI sequences (T1, T1C, T2, Flair and DWI) for the purpose. Therefore, in the proposed method, different details from each sequence are fused to obtain a single image; the resultant fused image provides more information than the individual sequences. The individual and fused sequences are passed to the proposed CNN structure for tumor classification. The major contributions are as follows:

• The MRI sequences T1, T1C, T2 and Flair are fused using DWT with a Daubechies wavelet kernel into a single MR image in which the tumor region is more prominent than in the individual MR sequences.
• PDDF [11] is applied for noise reduction.
• Global thresholding is performed on the enhanced images for precise segmentation of the tumor region.
• The segmented images are supplied to the proposed 23-layer CNN architecture for brain tumor classification.

2. Related work

Recently, computer-aided diagnosis (CAD) systems [12–18] have been widely adopted for tumor detection [19–23]. However, brain tumor detection remains a challenge [24] due to the fuzzy borders of the tumor region [25]. Deformable models assist in the extraction of


tumor boundaries [19,26]. Geometric methods depend on detailed prior information about the appearance and spatial distribution of different tissue types [27]; such models build on the Osher–Sethian level set method [28]. The charged fluid model (CFM) [29], a region-based method with variational level set [30], a hybrid level set [31], convolutional restricted Boltzmann machines (cRBMs) [32] and variational techniques [33] have been utilized to segment the tumor region in MRI. Active contour (AC) or snake models are predominant [112]. They can segment, match and track anatomical structures in images based on prior knowledge of size, shape and location. The parametric models' representation allows a compact and accurate description of object shape, and their smoothness, connectivity and continuity compensate for noise and irregular object boundaries [19]. Dynamic gradient vector flow [34], fuzzy classification with spatial and symmetry constraints [24], a modified deformable region model [35], geometric and spatial constraints [36], expectation-maximization (EM) [37], the localized AC model (LACM) with hierarchical centroid shape descriptor (HCSD) [38], and models combining the Gaussian mixture model (GMM), FCM, WT, active contour and entropy [39] have been used for segmentation in the literature.

Deep learning (DL) methods have attained widespread acceptance in all areas of research. In contrast to conventional methods, they do not rely on hand-crafted feature extraction [17,40]; features are learned automatically in a hierarchy, including complicated features learned directly from the sample data. Deep learning methods currently achieve high segmentation performance [41]. Dvorak and Menze [42] utilized a CNN for tumor segmentation. A 3D CNN with a conditional random field (CRF) was used to segment brain lesions [43]. A cascaded CNN architecture [44] was presented to segment glioma and stroke lesions, with training performed on patches. The convolutional encoder network (CEN) [45], long short-term memory (LSTM) [46], CNN [47], U-Net CNN [48] and dual-force CNN [49] have been presented for detecting brain tumors.

3. Proposed methodology

The presented method contains four major steps. In the first step, four MRI sequences are fused into a single image. Second, PDDF is used for noise removal, while in the third step global thresholding is applied for segmentation. The resultant segmented images are given to the suggested CNN model in the final step. The presented model utilizes convolutional, batch normalization, ReLU, max-pooling, fully connected (FC) and softmax layers to classify non-tumor and tumor images. Fig. 1 shows the suggested model.

Fig. 1. Major proposed method steps (where 155 shows the number of MRI slices).

3.1. Proposed method for fusion of MRI sequences

DWT is well suited to image fusion because it supports numerical and functional analysis and captures information about both location and frequency. Therefore, in this work, DWT with a Daubechies kernel is utilized, and the MRI sequences (T1, T1C, T2 and Flair) are fused into a single image. The resultant fused image provides more detailed information than the individual MRI sequences because each sequence contains different texture and structural information. Multiresolution theory deals with image analysis and representation at more than one resolution: a scaling function creates approximated images, each differing by a factor of two from its nearest neighbor, and complementary functions (also known as wavelets) encode the difference between neighboring approximated images [51]. DWT decomposes an input sequence into four sub-bands, in which one band represents the average value, i.e., LL (low–low), and the remaining three represent the horizontal LH (low–high), vertical HL (high–low) and diagonal HH (high–high) orientations. Fig. 2 shows the DWT procedure: 2↓1 represents row-wise downsampling of a 240 × 240 input image to 240 × 121, and 1↓2 denotes column-wise downsampling of 240 × 121 to 121 × 121. In this process, the Daubechies wavelet kernel computes the coefficient matrices [cAverage = 121 × 121, cHorizontal = 121 × 121, cVertical = 121 × 121, cDiagonal = 121 × 121] obtained through wavelet decomposition. Finally, fused images are reconstructed using the inverse discrete wavelet transform (IDWT). Flair with T1 and T1C with T2 are fused into F1 and F2 respectively; F1 and F2 are then fused to form F3. As a result, the tumor region becomes more prominent than in any individual sequence. Fig. 3 depicts the whole fusion process.
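For concreteness, a minimal sketch of this pairwise fusion using the PyWavelets library is given below. The wavelet order ("db2", whose single-level decomposition of a 240 × 240 slice indeed yields 121 × 121 coefficient matrices) and the coefficient-averaging rule are assumptions, since the paper specifies only a Daubechies kernel and does not spell out the exact combination rule.

```python
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2"):
    """Fuse two registered MRI slices via a single-level 2D DWT."""
    # Decompose each slice into approximation and detail sub-bands:
    # cA (LL), cH (LH), cV (HL), cD (HH).
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    # Combine corresponding coefficient matrices (simple averaging here;
    # the paper does not state the fusion rule).
    coeffs = ((cA_a + cA_b) / 2,
              ((cH_a + cH_b) / 2, (cV_a + cV_b) / 2, (cD_a + cD_b) / 2))
    # Reconstruct the fused slice with the inverse DWT.
    return pywt.idwt2(coeffs, wavelet)

# Hierarchical fusion as described above: F1 = Flair+T1, F2 = T1C+T2, F3 = F1+F2.
flair, t1, t1c, t2 = (np.random.rand(240, 240) for _ in range(4))  # stand-in slices
f1 = fuse_pair(flair, t1)
f2 = fuse_pair(t1c, t2)
f3 = fuse_pair(f1, f2)
```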



3.2. Lesion enhancement and segmentation method

Lesion enhancement is a vital step for the segmentation of the lesion region [52]. PDDF [11] is applied for noise removal and enhancement of the lesion region. The enhanced images EI(x, y) are segmented by global thresholding, defined as follows:

$$f_{gI}(x,y) = \begin{cases} 1, & EI(x,y) > T \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

$$b_{gI}(x,y) = \begin{cases} 1, & EI(x,y) = 0 \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

In Eqs. (1) and (2), fgI and bgI denote the foreground and background pixels of the input image and T represents the threshold. The segmentation results are shown in Fig. 4.
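Eqs. (1) and (2) amount to the following NumPy computation; how the global threshold T is chosen is left open here, since the paper does not state a selection rule.

```python
import numpy as np

def global_threshold(enhanced, T):
    """Segment a PDDF-enhanced slice EI(x, y) into foreground/background masks."""
    fg = (enhanced > T).astype(np.uint8)   # Eq. (1): tumor (foreground) pixels
    bg = (enhanced == 0).astype(np.uint8)  # Eq. (2): background pixels
    return fg, bg

# Example on a stand-in enhanced slice, with an arbitrary threshold value.
EI = np.random.rand(240, 240)
fg, bg = global_threshold(EI, T=0.8)
```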


Fig. 2. Sub bands of input image using DWT.

Fig. 3. Fusion of image sequences on BRATS dataset.

Fig. 4. Proposed segmentation results on the Brats18_CBICA_AAB_1 case: (a) input image, (b) PDDF, (c) tumor localization, (d) segmentation, (e) boundary of tumor region, (f) annotation/marking.

3.3. Proposed CNN architecture

Images obtained from segmentation are passed to the suggested CNN model [61] given in Fig. 5. In this architecture, six types of layers, i.e., convolutional, batch normalization, ReLU, max-pooling, FC and softmax, are utilized with different parameters. The 2D convolutional layers slide filters over the given input slices. The batch normalization layers normalize the inputs fgI through the mean μB and variance σB² over a mini-batch. The normalized activations Na are computed as in Eq. (3):

$$N_a = \frac{f_{gI}(x,y) - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}} \tag{3}$$

where ε is used to improve numerical stability when the mini-batch variance is too small. The normalized activations are then scaled and shifted:

$$B_n = \tilde{\gamma} N_a + \tilde{\beta} \tag{4}$$

where β̃ and γ̃ denote the offset and scale factors. These factors are learnable parameters updated during network training.
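Eqs. (3) and (4) correspond to the following NumPy computation over a mini-batch (a training-time sketch; running statistics and per-channel broadcasting are omitted for brevity).

```python
import numpy as np

def batch_normalize(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x (batch on axis 0), then scale and shift."""
    mu = x.mean(axis=0)                   # mini-batch mean, Eq. (3)
    var = x.var(axis=0)                   # mini-batch variance, Eq. (3)
    n_a = (x - mu) / np.sqrt(var + eps)   # normalized activations N_a
    return gamma * n_a + beta             # Eq. (4): learnable scale and offset

activations = np.random.randn(32, 8)      # stand-in: batch of 32, 8 features
out = batch_normalize(activations, gamma=1.0, beta=0.0)
```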

Fig. 5. Proposed CNN model for classification of tumor and non-tumor.


Fig. 6. Block-wise architecture of proposed model (where 155 represents number of slices in each sequence).

ReLU performs the threshold operation given in Eq. (5):

$$f(x) = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{5}$$

The max-pooling layer performs downsampling. In the FC layer, all neurons of the preceding layers £ are connected, combining all the (local) features learned by the previous layers. The softmax layer ψ performs classification on the basis of probability, as in Eq. (6):

$$\psi_r(\pounds) = \frac{\exp(a_r \pounds)}{\sum_{j=1}^{N} \exp(a_j \pounds)} \tag{6}$$

where £ denotes the FC output, a_r represents the kernel vector of the rth neuron and N is the number of classes; the output ψ_r(£) is the probability of the rth class. The proposed CNN architecture comprises six blocks, shown in Fig. 6. The first five blocks are identical in structure, each containing convolutional, batch normalization, ReLU and max-pooling layers in order. The sixth block differs from the others: it contains an FC layer followed by a softmax layer. The convolutional kernel size in the first five blocks is height × width × channels, with height = 3, width = 3 and channels = 8, 16, 32, 64 and 128 in the five blocks respectively. 2 × 2 max-pooling with stride 2 is used in all five blocks, and batch normalization is applied in every block with 8, 16, 32, 64 and 128 channels respectively. The outputs of different convolutional layers are depicted in Fig. 7.
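This block structure translates directly into a deep-learning framework. The sketch below uses PyTorch as an assumed implementation (the paper does not name a framework); "same" padding and a single-channel 240 × 240 input are further assumptions.

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    # One of the five identical blocks: Conv -> BatchNorm -> ReLU -> MaxPool.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

class TumorCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        chans = [1, 8, 16, 32, 64, 128]  # channel progression of the five blocks
        self.features = nn.Sequential(*(block(chans[i], chans[i + 1]) for i in range(5)))
        # Five 2x poolings shrink a 240x240 slice to 7x7 (240->120->60->30->15->7).
        self.fc = nn.Linear(128 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        # Softmax (Eq. (6)) is folded into the cross-entropy loss at training time.
        return self.fc(x)

logits = TumorCNN()(torch.randn(1, 1, 240, 240))  # -> (1, 2): non-tumor vs. tumor
```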

4. Results and discussion

The performance of the presented technique is evaluated on the BRATS datasets. The BRATS 2012 Challenge dataset provides 10 HGG and 5 LGG cases. BRATS MICCAI later introduced the BRATS 2013 dataset, consisting of two subsets, Challenge and Leaderboard: the Challenge subset includes 20 HGG and 10 LGG cases for training and 15 (HGG and LGG) cases for testing, while the Leaderboard subset comprises 21 HGG and 4 LGG cases. The BRATS 2015 Challenge dataset has 384 cases, with 220 HGG and 54 LGG for training and 110 (HGG and LGG) for testing [53,54]. The BRATS 2018 Challenge consists of 191 HGG and 75 LGG cases for training and 66 cases for validation [55].

Fig. 7. Intermediate results of sequences at convolutional layers.

4.1. Experiment 1: pixel-based segmentation results

The proposed segmented images are assessed against the ground-truth annotations, as shown in Table 1; this experiment uses 67 cases of BRATS 2018, of which 32 are HGG and 35 LGG. A comparison of pixel-based outcomes in terms of DSC with [50] is given in Table 2.

4.2. Experiment 2: feature-based results using CNN

Experiments were performed to select the proposed CNN architecture with respect to training results and the sequence of layers (Table 3). The CNN model was trained on the BRATS 2018 dataset with 7, 11, 15, 19, 23 and 28 layers. In this experiment, the least VE is obtained at 23 layers; with fewer or more layers, VE increases. Therefore 23 layers are selected for the proposed CNN model in further experiments. The hyperparameters used in the proposed model are shown in Table 4, with a 0.01 learning rate and 40 training epochs. The network mostly becomes stable after 30 epochs, so 40 epochs in total are selected to obtain a satisfactory trained model. The model is trained with a 0.5 hold-out approach (50% of the data for training and 50% for testing). Training and VE on the BRATS 2015 dataset are shown in Fig. 8.
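A hedged sketch of the training setup implied by Table 4 (40 epochs, learning rate 0.01) is given below; the SGD optimizer, the batch size and the stand-in model are assumptions, and random tensors substitute for the BRATS hold-out split.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the 50% hold-out training split of segmented slices.
train_set = TensorDataset(torch.randn(16, 1, 240, 240), torch.randint(0, 2, (16,)))
train_loader = DataLoader(train_set, batch_size=4, shuffle=True)

# Small stand-in for the 23-layer CNN sketched in Section 3.3.
model = nn.Sequential(nn.Flatten(), nn.Linear(240 * 240, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # Table 4 learning rate
criterion = nn.CrossEntropyLoss()                         # applies softmax internally

for epoch in range(40):                                   # Table 4: max epochs = 40
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```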


Table 1. Proposed pixel-based segmentation outcomes in terms of DSC on the BRATS 2018 Challenge dataset (155/155 slices).

HGG case (Brats18_CBICA_)   Complete    LGG case (Brats18_TCIA)   Complete
AAB_1                       0.998       09_141_1                  0.984
AAG_1                       0.985       09_177_1                  0.983
AAL_1                       0.987       10_202_1                  0.971
AAP_1                       0.958       10_241_1                  0.982
ABB_1                       0.771       10_282_1                  0.994
ABE_1                       0.971       10_299_1                  0.997
ABM_1                       0.965       10_307_1                  0.981
ABN_1                       0.964       10_310_1                  0.992
ABO_1                       0.959       10_325_1                  0.894
ABY_1                       0.968       10_330_1                  0.924
ALN_1                       0.970       10_346_1                  0.993
ALU_1                       0.875       10_351_1                  0.942
ALX_1                       0.986       10_387_1                  0.973
AME_1                       0.995       10_393_1                  0.920
BHM_1                       0.968       10_408_1                  0.957
ANG_1                       0.932       10_410_1                  0.833
ANI_1                       0.860       10_413_1                  0.984
ANP_1                       0.967       10_420_1                  0.970
ANZ_1                       0.948       10_442_1                  0.981

HGG case (Brats18_TCIA)     Complete    LGG case (Brats18_TCIA)   Complete
01_131_1                    0.972       12_466_1                  0.992
01_147_1                    0.995       12_470_1                  0.978
01_499_1                    0.974       12_480_1                  0.971
02_168_1                    0.982       13_615_1                  0.982
02_283_1                    0.988       13_618_1                  0.986
02_290_1                    0.898       13_621_1                  0.997
02_331_1                    0.992       13_623_1                  0.970
02_368_1                    0.987       13_624_1                  0.986
02_370_1                    0.994       13_630_1                  0.981
02_471_1                    0.991       13_633_1                  0.993
02_473_1                    0.994       13_634_1                  0.992
02_491_1                    0.981       13_642_1                  0.994
02_605_1                    0.992       13_645_1                  0.981

Average                     0.967

Fig. 8. Accuracy with respect to loss rate.

Fig. 9. Confusion matrices of BRATS dataset based on fused sequences (a) BRATS 2012 Image (b) BRATS 2013 Challenge (c) BRATS 2015 Challenge (d) BRATS 2013 Leader board (e) BRATS 2018 Challenge.

Table 2. Results comparison on the BRATS 2018 Challenge.

Ref                Year    DSC
[50]               2019    0.91
Proposed method    –       0.96

Table 3. Validation error (VE) for selection of proposed CNN layers on the BRATS 2018 Challenge.

Total number of CNN layers    VE
7                             0.2153
11                            0.1153
15                            0.1001
19                            0.0380
23 (selected)                 0.0150
28                            0.0210

Table 4. Hyperparameters of the proposed CNN model.

Max epochs    Validation frequency    Learning rate    Patience
40            30                      0.01             5

Table 5. Proposed results on Flair, T1 and (Flair + T1) MRI sequences.

MRI sequence    BRATS dataset        SE      FNR     SP      ACC     FPR
Flair           2013 Challenge       1.00    0.00    0.17    0.46    0.83
Flair           2015 Challenge       0.93    0.07    0.30    0.50    0.70
Flair           2018 Challenge       1.00    0.00    0.51    0.71    0.49
Flair           2012 Image           0.90    0.10    0.92    0.91    0.08
Flair           2013 Leaderboard     0.92    0.08    0.90    0.93    0.10
T1              2013 Challenge       0.41    0.59    1.00    0.78    0.00
T1              2015 Challenge       0.82    0.18    0.98    0.58    0.02
T1              2018 Challenge       0.90    0.10    0.54    0.50    0.46
T1              2012 Image           1.00    0.00    0.47    0.78    0.53
T1              2013 Leaderboard     1.00    0.00    0.95    0.96    0.05
Flair + T1      2013 Challenge       0.91    0.09    0.99    0.89    0.01
Flair + T1      2015 Challenge       0.95    0.05    0.83    0.89    0.17
Flair + T1      2018 Challenge       0.92    0.08    0.80    0.87    0.20
Flair + T1      2012 Image           0.99    0.01    0.95    0.97    0.05
Flair + T1      2013 Leaderboard     0.93    0.07    0.99    0.95    0.01

The classification is performed for two classes, i.e., (1) non-tumor and (2) tumor, shown by the confusion matrices in Fig. 9, which are obtained by applying the proposed CNN model to the fused sequences of each benchmark dataset. In Fig. 8, the x-axis represents training epochs and the y-axis the loss (error) rate. The highest prediction scores of training and VE are achieved, which shows that the error rate and accuracy are reciprocal of each other. The results are validated on the performance metrics reported in Tables 5–7. Table 5 gives results on the Flair and T1 sequences individually as well as on the fused sequence (Flair + T1); the receiver operating characteristic (ROC) curves of the BRATS 2018 Challenge only are presented in Fig. 10 for the individual and fused sequences. Table 6 gives results on T1C and T2 individually as well as on the fused sequence (T1C + T2), with the corresponding BRATS 2018 Challenge ROC curves in Fig. 11. Table 7 gives results on the sequence [(Flair + T1) + (T1C + T2)] obtained by fusing the fused sequences (Flair + T1) and (T1C + T2); its ROC on the BRATS 2018 Challenge is plotted in Fig. 12. The proposed model's execution time on all fused sequences for each dataset is given in Table 8, which also validates the efficiency of the model through its low computational time.
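The metrics in Tables 1 and 5–7 follow the standard confusion-matrix definitions; a NumPy sketch under that assumption (the paper does not list the formulas explicitly):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient (DSC) between binary masks, as in Table 1."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def metrics(pred, truth):
    """SE, FNR, SP, ACC and FPR from binary labels, as in Tables 5-7."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    return {"SE": se, "FNR": 1 - se, "SP": sp, "ACC": acc, "FPR": 1 - sp}
```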


Fig. 10. ROC on BRATS 2018 Challenge (a) Flair (b) T1 (c) (Flair + T1) MRI sequences (where normal shows non-tumor and abnormal shows tumor images).

Fig. 11. ROC on BRATS 2018 Challenge (a) T1C (b) T2 (c) (T1C + T2) MRI sequences.


Fig. 12. ROC on BRATS 2018 Challenge (a) Flair, T1, Flair + T1 (b) T1C, T2, T1C + T2 (c) fusion of [(Flair + T1) + (T1C + T2)] MRI sequences.

Table 6. Proposed results on T1C, T2 and (T1C + T2) MRI sequences.

MRI sequence    BRATS dataset        SE      FNR     SP      ACC     FPR
T1C             2013 Challenge       1.00    0.00    0.65    0.62    0.35
T1C             2015 Challenge       0.74    0.26    0.90    0.50    0.10
T1C             2018 Challenge       0.20    0.80    1.00    0.65    0.00
T1C             2012 Image           0.75    0.25    1.00    0.86    0.00
T1C             2013 Leaderboard     0.64    0.36    1.00    0.73    0.00
T2              2013 Challenge       0.95    0.02    0.98    0.59    0.02
T2              2015 Challenge       0.52    0.48    1.00    0.45    0.00
T2              2018 Challenge       1.00    0.00    0.38    0.61    0.68
T2              2012 Image           1.00    0.00    0.56    0.77    0.44
T2              2013 Leaderboard     0.92    0.08    1.00    0.94    0.00
T1C + T2        2013 Challenge       0.68    0.32    1.00    0.86    0.00
T1C + T2        2015 Challenge       1.00    0.00    0.89    0.95    0.10
T1C + T2        2018 Challenge       0.95    0.05    0.94    0.95    0.06
T1C + T2        2012 Image           0.87    0.13    0.78    0.82    0.22
T1C + T2        2013 Leaderboard     0.91    0.09    1.00    0.93    0.00

Table 8. Computational time for testing.

BRATS dataset        Computation time
2013 Challenge       1 min 5 s
2015 Challenge       1 min 35 s
2018 Challenge       1 min 38 s
2012 Image           1 min 1 s
2013 Leaderboard     1 min 6 s

Table 7. Proposed results on the fusion of all [(Flair + T1) + (T1C + T2)] MRI sequences.

MRI sequence                   BRATS dataset        SE      FNR     SP      ACC     FPR
[(Flair + T1) + (T1C + T2)]    2013 Challenge       0.99    0.01    0.95    0.98    0.05
[(Flair + T1) + (T1C + T2)]    2015 Challenge       0.98    0.02    0.92    0.96    0.08
[(Flair + T1) + (T1C + T2)]    2018 Challenge       0.99    0.01    0.93    0.97    0.07
[(Flair + T1) + (T1C + T2)]    2012 Image           0.97    0.03    0.97    0.97    0.03
[(Flair + T1) + (T1C + T2)]    2013 Leaderboard     1.00    0.00    1.00    1.00    0.00

Table 9. Comparison of the presented method with state-of-the-art work.

BRATS dataset        Technique    Year    SE
2012 Challenge       [56]         2019    0.92
2012 Challenge       Proposed     –       0.97
2013 Leaderboard     [57]         2017    0.84
2013 Leaderboard     Proposed     –       1.00
2013 Challenge       [58]         2019    0.95
2013 Challenge       Proposed     –       0.99
2015 Challenge       [59]         2018    0.70
2015 Challenge       [60]         2019    0.88
2015 Challenge       Proposed     –       0.98
2018 Challenge       [50]         2019    0.94
2018 Challenge       Proposed     –       0.99

Table 9 presents the outcomes of the proposed method in comparison with the latest work on the same datasets. From Table 9, it is observed that existing methods achieve at most 0.95 SE on the BRATS 2013 Challenge, whereas the proposed method achieves 0.99 SE, which shows that the proposed method performs better.

5. Conclusion and future work

In the presented method, MRI sequences are fused at different levels using DWT to obtain detailed information that helps tumor detection. The fused images are passed to a CNN for automatic feature learning and classification of brain tumor. The method achieves its highest results on fused images: 0.97 ACC on BRATS 2012 Image, 0.98 ACC on BRATS 2013 Challenge, 0.96


ACC on BRATS 2013 Leaderboard, 1.00 ACC on BRATS 2015 Challenge and 0.97 ACC on BRATS 2018 Challenge. This work focuses on the fusion of MRI sequences only; in future, it can be extended to the fusion of other modalities, such as PET and CT images, to analyze the classification results.

Declaration of Competing Interest

None.

References

[1] J. Amin, et al., A distinctive approach in brain tumor detection and classification using MRI, Pattern Recognit. Lett. (2017).
[2] K. Sridhar, et al., Developing brain abnormality recognize system using multi-objective pattern producing neural network, J. Ambient Intell. Humaniz. Comput. (2018) 1–9.
[3] R. Anitha, D. Siva Sundhara Raja, Development of computer-aided approach for brain tumor detection using random forest classifier, Int. J. Imaging Syst. Technol. 28 (2018) 48–53.
[4] R. Grant, Medical management of adult glioma, in: Management of Adult Glioma in Nursing Practice, Springer, 2019, pp. 61–80.
[5] D.R. Johnson, et al., 2016 updates to the WHO brain tumor classification system: what the radiologist needs to know, Radiographics 37 (2017) 2164–2180.
[6] E. Wright, et al., Incidentally found brain tumors in the pediatric population: a case series and proposed treatment algorithm, J. Neurooncol. 141 (2019) 355–361.
[7] Y. Wu, et al., Grading glioma by radiomics with feature selection based on mutual information, J. Ambient Intell. Humaniz. Comput. (2018) 1–12.
[8] S. Lahmiri, Glioma detection based on multi-fractal features of segmented brain MRI by particle swarm optimization techniques, Biomed. Signal Process. Control 31 (2017) 148–155.
[9] P. Jampani, et al., Review on glioblastoma multiforme (GBM), 2018.
[10] N. Nida, et al., A framework for automatic colorization of medical imaging, IIOAB J. 7 (2016) 202–209.
[11] J. Amin, et al., Use of machine intelligence to conduct analysis of human brain data for detection of abnormalities in its cognitive functions, Multimed. Tools Appl. (2019) 1–19.
[12] S. Naqi, et al., Lung nodule detection using polygon approximation and hybrid features from CT images, Curr. Med. Imaging Rev. 14 (2018) 108–117.
[13] A. Liaqat, et al., Automated ulcer and bleeding classification from WCE images using multiple features fusion and selection, J. Mech. Med. Biol. 18 (2018) 1850038.
[14] M. Sharif, et al., A framework for offline signature verification system: best features selection approach, Pattern Recognit. Lett. (2018).
[15] J.H. Shah, et al., Facial expressions classification and false label reduction using LDA and threefold SVM, Pattern Recognit. Lett. (2017).
[16] M. Raza, et al., Appearance based pedestrians' gender recognition by employing stacked auto encoders in deep learning, Future Gener. Comput. Syst. 88 (2018) 28–39.
[17] G.J. Ansari, et al., A novel machine learning approach for scene text extraction, Future Gener. Comput. Syst. 87 (2018) 328–340.
[18] M. Sharif, et al., An overview of biometrics methods, in: Handbook of Multimedia Information Security: Techniques and Applications, Springer, 2019, pp. 15–35.
[19] N. Gordillo, et al., State of the art survey on MRI brain tumor segmentation, Magn. Reson. Imaging 31 (2013) 1426–1438.
[20] G. Mohan, M.M. Subashini, MRI based medical image analysis: survey on brain tumor grade classification, Biomed. Signal Process. Control 39 (2018) 139–161.
[21] T. Zhou, et al., Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis, Hum. Brain Mapp. 40 (2019) 1001–1016.
[22] J. Fan, et al., BIRNet: brain image registration using dual-supervised fully convolutional networks, Med. Image Anal. 54 (2019) 193–206.
[23] X. Zhang, et al., A survey on deep learning based brain computer interface: recent advances and new frontiers, 2019, arXiv:1905.04149.
[24] H. Khotanlou, et al., 3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models, Fuzzy Sets Syst. 160 (2009) 1457–1473.
[25] R. Rewari, Automatic tumor segmentation from MRI scans, Stanford University.
[26] T. McInerney, D. Terzopoulos, Deformable models, in: Bankman (Ed.), Handbook of Medical Imaging Processing and Analysis, Academic, New York, 2000.
[27] B.H. Menze, et al., A generative model for brain tumor segmentation in multi-modal images, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2010, pp. 151–159.
[28] S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations, J. Comput. Phys. 79 (1988) 12–49.
[29] H.-H. Chang, D.J. Valentino, An electrostatic deformable model for medical image segmentation, Comput. Med. Imaging Graphics 32 (2008) 22–35.
[30] C. Li, et al., Minimization of region-scalable fitting energy for image segmentation, IEEE Trans. Image Process. 17 (2008) 1940–1949.
[31] K. Xie, et al., Semi-automated brain tumor and edema segmentation using MRI, Eur. J. Radiol. 56 (2005) 12–19.
[32] M. Agn, et al., Brain tumor segmentation using a generative model with an RBM prior on tumor shape, in: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 2015, pp. 168–180.
[33] D. Cobzas, et al., 3D variational brain tumor segmentation using a high dimensional feature set, in: 2007 IEEE 11th International Conference on Computer Vision, 2007, pp. 1–8.
[34] S. Luo, et al., A new deformable model using dynamic gradient vector flow and adaptive balloon forces, in: APRS Workshop on Digital Image Computing, 2003, pp. 9–14.
[35] A.K. Law, et al., A fast deformable region model for brain tumor boundary extraction, in: Proceedings of the Second Joint EMBS/BMES Conference, 2002, pp. 1055–1056.
[36] M. Prastawa, et al., A brain tumor segmentation framework based on outlier detection, Med. Image Anal. 8 (2004) 275–283.
[37] T. Haeck, et al., ISLES challenge 2015: automated model-based segmentation of ischemic stroke in MR images, in: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 2015, pp. 246–253.
[38] E. Ilunga-Mbuyamba, et al., Localized active contour model with background intensity compensation applied on automatic MR brain tumor segmentation, Neurocomputing 220 (2017) 84–97.
[39] S. Tchoketch Kebir, et al., A fully automatic methodology for MRI brain tumour detection and segmentation, Imaging Sci. J. 67 (2019) 42–62.
[40] J. Amin, et al., Big data analysis for brain tumor detection: deep convolutional neural networks, Future Gener. Comput. Syst. 87 (2018) 290–297.
[41] Z. Akkus, et al., Deep learning for brain MRI segmentation: state of the art and future directions, J. Digit. Imaging 30 (2017) 449–459.
[42] P. Dvořák, B. Menze, Local structure prediction with convolutional neural networks for multimodal brain tumor segmentation, in: International MICCAI Workshop on Medical Computer Vision, 2015, pp. 59–71.
[43] K. Kamnitsas, et al., Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Med. Image Anal. 36 (2017) 61–78.
[44] H. Larochelle, P.-M. Jodoin, A convolutional neural network approach to brain tumor segmentation, in: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: First International Workshop, BrainLes 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Revised Selected Papers, 2016, p. 195.
[45] T. Brosch, et al., Deep convolutional encoder networks for multiple sclerosis lesion segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 3–11.
[46] M.F. Stollenga, et al., Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation, in: Advances in Neural Information Processing Systems, 2015, pp. 2998–3006.
[47] M.K. Abd-Ellah, et al., Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks, EURASIP J. Image Video Process. 2018 (2018) 97.
[48] H. Dong, et al., Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks, in: Annual Conference on Medical Image Understanding and Analysis, 2017, pp. 506–517.
[49] S. Chen, et al., Dual-force convolutional neural networks for accurate brain tumor segmentation, Pattern Recognit. 88 (2019) 90–100.
[50] Y. Wang, et al., Multimodal brain tumor image segmentation using WRN-PPNet, Comput. Med. Imaging Graphics 75 (2019) 56–65.
[51] W. Ayadi, et al., A hybrid feature extraction approach for brain MRI classification based on Bag-of-Words, Biomed. Signal Process. Control 48 (2019) 144–152.
[52] I. Despotović, et al., MRI segmentation of the human brain: challenges, methods, and applications, Comput. Math. Methods Med. 2015 (2015) 1–23.
[53] BRATS2012, https://www.smir.ch/BRATS/Start2012/ (accessed 13 May 2018).
[54] B.H. Menze, et al., The multimodal brain tumor image segmentation benchmark (BRATS), IEEE Trans. Med. Imaging 34 (2015) 1993.
[55] S. Bakas, et al., Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge, Comput. Vision Pattern Recognit. (2018) 1–49.
[56] Z.U. Rehman, et al., Fully automated multi-parametric brain tumour segmentation using superpixel based classification, Expert Syst. Appl. 118 (2019) 598–613.
[57] M. Havaei, et al., Brain tumor segmentation with deep neural networks, Med. Image Anal. 35 (2017) 18–31.
[58] J. Chang, et al., A mix-pooling CNN architecture with FCRF for brain tumor segmentation, 58 (2019) 316–322.
[59] A. Selvapandian, K. Manivannan, Fusion based glioma brain tumor detection and segmentation using ANFIS classification, Comput. Methods Programs Biomed. 166 (2018) 33–38.
[60] H. Li, et al., A novel end-to-end brain tumor segmentation method using improved fully convolutional networks, Comput. Biol. Med. 108 (2019) 150–160.
[61] D.H. Hubel, T.N. Wiesel, Receptive fields of single neurones in the cat's striate cortex, J. Physiol. 148 (1959) 574–591.