
Computers and Electronics in Agriculture 155 (2018) 220–236


Original papers

CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features


Muhammad Attique Khan a, Tallha Akram b,⁎, Muhammad Sharif c, Muhammad Awais b, Kashif Javed d, Hashim Ali a, Tanzila Saba e

a Department of Computer and Electrical Engineering, HITEC University, Museum Road, Taxila, Pakistan
b Department of EE, COMSATS University Islamabad, Wah Cantt, Pakistan
c Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
d Department of Robotics, SMME NUST, Pakistan
e College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia

ARTICLE INFO

ABSTRACT

Keywords:
Contrast enhancement
Disease segmentation
Disease extraction
Feature extraction
Features selection
Classification

In the agricultural farming business, plant diseases are a major cause of monetary losses around the globe. They are an important factor, as they cause significant diminution in both the quality and quantity of growing crops. Therefore, detection and taxonomy of various plant diseases is crucial and demands utmost attention. Fruits are a major source of nutrients worldwide; however, a wide range of diseases adversely affects both the production and the quality of fruits. Utilization of an efficient machine vision technology can not only detect diseases at their early stages but also classify them accordingly. This research primarily focuses on the detection and classification of various fruit diseases based on correlation coefficient and deep features (CCDF). The proposed technique incorporates two major steps: detection of infected regions, followed by feature extraction and classification. In the first step, the contrast of the input image is enhanced by a hybrid method, followed by a proposed correlation coefficient-based segmentation method which separates the infected regions from the background. In the second step, two pre-trained deep models (VGG16 and Caffe AlexNet) are utilized for feature extraction of selected diseases (apple scab, apple rot, banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf and fruit spot). A parallel feature fusion step is embedded to consolidate the extracted features prior to a max-pooling step. Selection of the most discriminant features is performed using a genetic algorithm before the final classification stage using multi-class SVM. Experiments are performed on publicly available datasets - Plant Village and CASC-IFW - achieving a classification accuracy of 98.60%. Qualitative analysis of the achieved results clearly shows that the proposed method outperforms several existing methods in terms of greater precision and improved classification accuracy.

1. Introduction

1.1. Background & motivation

Health monitoring of plants and fruits is essential for sustainable agriculture. Early detection of fruit diseases not only helps to avoid yield losses but also improves the quality of products. The classical method for fruit disease identification is based on visual inspection by agriculture experts. This method is prone to errors and suffers from high cost and time consumption. Moreover, in some cases visual inspection by experts is not feasible due to the presence of crops at distant locations. Automated detection and identification of plant diseases has gained

significant research interest in recent years. Sophisticated image processing coupled with advanced computer vision techniques results in accurate and fast identification with less human effort and labour cost. In this context, several approaches have been proposed for the identification and classification of various fruit diseases (e.g. apple scab, apple rot, banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf and fruit spot) (Pujari et al., 2013a, 2014; Singh and Misra, 2015). The major causes of fruit diseases are microbes, in particular fungi. The literature also features a number of surveys and comparative studies which discuss image processing based classification and identification of fruit diseases (Pujari et al., 2013b,c,d; Thakare et al., 2013; Bandi et al., 2013; Dubey and Jalal, 2014). Several

⁎ Corresponding author.
E-mail addresses: [email protected] (M.A. Khan), [email protected] (T. Akram), [email protected] (M. Sharif), [email protected] (M. Awais), [email protected] (K. Javed), [email protected] (H. Ali), [email protected] (T. Saba).

https://doi.org/10.1016/j.compag.2018.10.013 Received 27 February 2018; Received in revised form 5 September 2018; Accepted 7 October 2018 0168-1699/ © 2018 Published by Elsevier B.V.


segmentation techniques have also been proposed, including thresholding (Mehl et al., 2002), globally adaptive thresholding (Kim et al., 2005), genetic cellular neural network based segmentation (Ravindra Naik and Sivappagari, 2016a), snake segmentation (Sannakki and Rajpurohit, 2015), K-means clustering (Singh and Misra, 2017), the region growing method (Chuanlei et al., 2017), and edge detection (Revathi and Hemalatha, 2012). These techniques are capable of efficiently segmenting plant diseases from a simple input image; however, they suffer from poor accuracy for complex images. Feature extraction plays an important role in the representation of an image. A number of accurate feature classification techniques have been proposed in the literature for apple and banana fruits. Precisely extracted features can provide good results, but in some cases a high number of features degrades the system accuracy. The extracted features include local binary patterns (Samajpati and Degadwala, 2016), color features (Samajpati and Degadwala, 2016), texture features (Samajpati and Degadwala, 2016), global color histogram (Dubey and Jalal, 2012), color coherence vector features (Bhatt and Patalia, 2017), complete local binary patterns (Guo et al., 2010), histogram template features (Tigadi and Sharma, 2016), and HSV features (Ravindra Naik and Sivappagari, 2016b). The features are classified with the help of many classification methods such as the support vector machine (SVM) (Ravindra Naik and Sivappagari, 2016b), multi-class SVM (M-SVM) (Shuaibu et al., 2017), artificial neural network (ANN) (Tigadi and Sharma, 2016), and Bayesian classification networks (Kleynen et al., 2005). These classification methods perform well and provide high accuracy, but when the classes are closely related to each other (for example, apple rot is similar to apple rust) the accuracy of the whole system decreases.

1.2. Problem statement

Generally, computer-based detection and classification of fruit diseases consists of four major steps: pre-processing, segmentation, feature extraction and classification. For such systems, there exist a number of challenges that must be addressed to achieve high detection and classification accuracy. Poor visual quality of the input image is a major challenge for accurate disease spot segmentation. The low contrast and irregularity of disease spots, convex edges, and changes in the scale of the diseased area affect the accuracy of disease segmentation. For computer-based systems, robust and prominent features play a major role in the accurate classification of fruit diseases. Color and texture features are mostly used for the classification of apple and banana diseases. Proper selection of the extracted features is also needed to improve the efficiency of the classification step. Moreover, recent research in the domain of computer vision shows that the selection of the most prominent features is a major challenge for any computer-based application seeking to improve recognition accuracy and execution performance.

1.3. Contribution

To address the problems stated above, this work proposes an automatic system for the identification and classification of banana and apple diseases. The proposed system consists of two major steps, i.e. identification and classification. At first, contrast stretching is performed on the original image to enhance the visual quality of disease regions. Next, the disease regions are segmented using a novel correlation coefficient method. In the next step, a deep architecture is implemented using pre-trained models. The VGG-VD-16 and Caffe-AlexNet models are used for the extraction of deep features. The deep features are fused based on a parallel architecture. After fusion, max pooling of size 2 × 2 is performed and the highest-value features are selected. Further, the genetic algorithm (GA) is applied on the maximum selected features and the most prominent features are selected with a vector size of 1 × 1000. Finally, the multi-class SVM is utilized for classification. The major contributions of this work are listed as follows:

• For stretching the contrast of the input image, a hybrid method is implemented. At first, the top-bottom hat filter is applied on the input image and the intensity values of the improved image are adjusted. Next, the global maxima and minima of the improved image are found and the contrast range is defined. From the contrast range, low and high threshold values are found and fed into a final threshold function to obtain an enhanced image.
• For the identification of infected regions of fruits, color and texture features of the enhanced image are calculated and the correlation coefficient between them is found to obtain the most correlated features. Then, the harmonic mean (H.M) and mean deviation (M.D) of the correlated features are calculated and fed into an activation function. The activation function finds the infected regions of the fruits and assigns weights for the selection of background and infected regions.
• For classification of apple and banana diseases, a deep architecture is implemented in which pre-trained deep models such as VGG16 and Caffe-AlexNet are used for extraction of deep features and their fusion in a parallel architecture. After that, max pooling with window size 2 × 2 is performed on the fused vector and a maximum-valued feature vector of size about 1 × 2304 is selected. Finally, the GA is applied on the max-pooled feature vector and the most prominent features for classification are selected.
• The results are calculated for each extracted feature set and for the fused vector on each disease separately. Then, the fused-vector results are compared with the selected feature set in terms of several performance measures, and the performance is compared with some state-of-the-art methods.

1.4. Paper organization

The rest of the paper is organized as follows: Section 2 discusses the related work. In Section 3, the proposed method is discussed along with details of all its processing steps. Experimental results are presented in Section 4 and discussed in Section 5. Finally, the conclusions are drawn in Section 6.

2. Related work

In the literature, several techniques are presented for the identification and classification of banana and apple diseases using segmentation and classification methods. For identification of diseases, simple thresholding is mostly adopted (Mehl et al., 2002); however, this technique does not accurately segment the disease regions due to variation of the infected parts. Rozario et al. (2016) introduced a new method for identification of infected regions in fruits based on two steps. In the first step, the contrast of the input image is enhanced by using a median filter and histogram equalization. In the second step, a modified color based segmentation technique is implemented using K-means clustering and the infected regions are extracted. K-means clustering is a widely used method for the extraction of apple and banana diseases and provides better performance (Dubey and Jalal, 2016; Vipinadas and Thamizharasi, 2016). Amara et al. (2017) implemented a deep learning based method for automatic classification of banana diseases. The LeNet deep architecture is used for classification under complex background and high illumination and shows improved performance. Chuanlei et al. (2017) introduced a new method for the identification and classification of diseases using GA and correlation based feature selection. The proposed method consists of three major steps. In the first step, the contrast of the input image is enhanced and the infected regions are segmented with the help of a region growing method. In the second step, texture, color and shape features are extracted from spot images and their dimension is reduced by the implementation of GA


(Xue et al., 2016) and correlation. Finally, the reduced features are classified by SVM (Ravindra Naik and Sivappagari, 2016b), which results in improved system performance. Dubey and Jalal (2016) introduced a new method for classification of apple diseases using color, texture and shape features. The infected regions in the fruits are segmented with the K-means clustering method and then features are extracted for each infected region. Finally, the features are classified by M-SVM, and it is shown that the introduced method is better compared to other methods which use an individual set of features. From the above studies and literature survey, we conclude that the pre-processing step is necessary for enhancing the visual quality of infected parts, and most articles ignored this step. Secondly, K-means clustering, simple thresholding, region growing and edge detection segmentation methods are used for disease identification, but these techniques are rather simple and can degrade the system accuracy when the input image is complex. Thirdly, we observed that shape features do not perform well for classification, whereas color features are mostly used for this purpose and show better performance.

3. Proposed method

This section explains the proposed method, which broadly consists of two major steps, i.e. disease identification and disease classification. Each of these steps is further divided into a sequence of multiple sub-steps. The first step consists of contrast stretching and segmentation of disease regions, whereas the second step consists of deep architecture based classification of banana and apple diseases. Fig. 1 shows the overall system architecture of the proposed solution; each step is explained below.

3.1. Contrast stretching

Contrast stretching is applied to improve the image quality through variation in pixel intensity values. The purpose of contrast stretching is to enhance the local and global contrast of the input image to make the diseased regions more visible compared to the background. Overall system accuracy suffers due to several noise contributions. To overcome this issue, a three-step hybrid contrast stretching technique is adopted in this work. In the first step, the top-bottom hat filter is applied on the original image in order to adjust the pixel intensity values and improve the local contrast. The top-bottom hat filter acts on the background as well as the foreground objects, resulting in improved visual quality. The top-bottom hat filter is represented by Eqs. (1)-(4):

$T_h(x, y) = I(x, y) - \big(I(x, y) \circ \varphi\big)$   (1)

$B_h(x, y) = \big(I(x, y) \bullet \varphi\big) - I(x, y)$   (2)

$T_A(x, y) = T_h(x, y) + I(x, y) + \delta$   (3)

$T_F(x, y) = T_A(x, y) - B_h(x, y) + \delta$   (4)

where $T_h(x, y)$ and $B_h(x, y)$ represent the top hat and bottom hat filtered images respectively, $T_A(x, y)$ represents the enhanced image obtained after addition of the original image and the top hat image, and $T_F(x, y)$ is the enhanced RGB image. $\delta$ denotes a static parameter whose value always lies between 0 and 1, and $\circ$ and $\bullet$ are the opening and closing operators (with structuring element $\varphi$) that respectively improve the global and local contrast of the input image. The parameter $\delta$ is selected on the basis of the mean value of the image $T_F(x, y)$, and its value is changed for each iteration. Mostly, the value of $\delta$ lies in the range 0.4-0.469. In this work $\delta = 0.4$, and this value must be changed for other datasets. The mean and variance of the enhanced image $T_F(x, y)$ are calculated by Eqs. (5) and (6):

$\mu = \frac{1}{N}\sum_{i=1}^{N} X_i$   (5)

$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N} X_i^2 - \left(\frac{1}{N}\sum_{i=1}^{N} X_i\right)^2$   (6)

$T_{aj}(x, y) = \psi\big(T_F(x, y),\ \xi\big), \quad \xi \in \{\mu, \sigma^2\}$

where $T_{aj}(x, y)$ represents a further enhanced image compared to $T_F(x, y)$, $\psi$ represents the intensity value adjusting function, and the mean and variance of $T_F(x, y)$ are represented by $\mu$ and $\sigma^2$ respectively. The images obtained after performing the above contrast stretching steps are demonstrated in Fig. 2, along with their respective histograms. In the next step, the $T_{aj}(x, y)$ image is divided into RGB channels. This is done to improve the visual quality of the infected regions of the image. The contrast range of the enhanced image is calculated from the local maximum and minimum of each channel. The RGB channels are calculated as given in Eq. (7),

$R(x, y) = \frac{R}{\sum_{j=1}^{3}\phi_j}, \quad G(x, y) = \frac{G}{\sum_{j=1}^{3}\phi_j}, \quad B(x, y) = \frac{B}{\sum_{j=1}^{3}\phi_j}$   (7)

where $R(x, y)$, $G(x, y)$ and $B(x, y)$ represent the red, green and blue channels respectively, and $\phi_j$ with $j \in \{1, 2, 3\}$ denotes the respective channels R, G and B.

Fig. 1. A system architecture of the proposed disease identification and classification.

Fig. 2. Results of initial image enhancement steps. (a) Original image $I(x, y)$; (b) enhanced image $T_A(x, y)$ before adjusting the intensity values; (c) enhanced image after adjusting the intensity values $T_F(x, y)$.

The local maximum $L_{max}$ and local minimum $L_{min}$ are calculated by Eq. (8),

$L_{max} = \max_{j}(\phi_j), \quad L_{min} = \min_{j}(\phi_j), \quad \phi_j \in \{R(x, y), G(x, y), B(x, y)\}$   (8)

where $\max(\cdot)$ and $\min(\cdot)$ denote the maximum and minimum functions over $\phi_j$, which represents the red, green and blue channel pixels respectively. In the next step, the contrast range of the enhanced image is obtained by adding the local maximum and minimum values. Mathematically, the contrast range $L_c$ is defined by Eq. (9),

$L_c = L_{min}^{j} + L_{max}^{j}$   (9)

Finally, the lower and higher threshold values are obtained from the $T_{aj}(x, y)$ image and are given as inputs to the threshold function, which results in a completely enhanced image. Mathematically,

$F_{thr}(x, y) = \begin{cases} Obj & L_c \geq (HT - LT) \\ BG & L_c < (HT - LT) \end{cases}$   (10)

where $F_{thr}(x, y)$ represents the final threshold function, $Obj$ denotes the infected part and $BG$ represents the background or healthy regions of the image. $HT$ and $LT$ are the higher and lower threshold values respectively. From the above formulation (Eq. (10)), it is evident that when $L_c \geq (HT - LT)$, the effect on the infected parts is more pronounced, resulting in their enhanced visual quality. The fully thresholded enhanced image is shown in Fig. 3.

3.2. Disease extraction

Disease extraction means segmenting the disease spots in a given image. A number of segmentation techniques have been proposed in the recent computer vision literature, with applications in areas such as medical imaging and agriculture (Sharif et al., 2018a; Attique Khan et al., 2018; Nasir et al., 2018; Sharif et al., 2017). This work proposes a novel saliency framework for the extraction of the diseased part from the enhanced image. The proposed saliency framework comprises three primary steps: (a) extraction of texture and color features, including Segmentation-based Fractal Texture Analysis (SFTA) (Costa et al., 2012) and local binary patterns (LBP) (Sharif et al., 2018a); (b) applying the correlation coefficient technique on the extracted features; and (c) calculation of mean deviation (M.D) and harmonic mean (H.M) for correlated and uncorrelated features, which are fed into a weight assignment function. Initially, texture and color features such as LBP and SFTA are extracted from the enhanced image. Mostly, texture features are utilized for the classification of diseases; however, in our case they are important for the identification of disease spots in the given image. In the next step, three types of color features are extracted, which include RGB, HSV, and YCbCr. The mean, range, variance, kurtosis and skewness are calculated for each channel and these features are then concatenated with the texture features. The correlation coefficient is computed on these features with dimension 1 × 125. This function identifies the minimum and maximum correlated features with respect to each other. The features with minimum cross-correlation among them are considered as representatives of the diseased region, whereas the remaining features represent the background or healthy region. In the next step, H.M and M.D are calculated from both the minimum and maximum correlated features and given as inputs to the weighted function for assignment of weights. The features which have values higher than the assigned weights classify the diseased regions, whereas the other features classify the healthy regions. Mathematically, the above calculation steps are described as follows.


Fig. 3. Final enhanced image results with their respective histograms. (a) Red channel; (b) blue channel; (c) green channel; (d) final enhanced image.

Fig. 4. Final segmented results. (a) Original image; (b) enhanced image; (c) final segmented image; (d) original mapped image; (e) mesh graph image.

Let $F_{SFTA} \subseteq \{\text{SFTA features}\}$, $F_{LBP} \subseteq \{\text{LBP features}\}$, and $F_{color} \subseteq \{\text{color features}\}$, having dimensions $n \times 21$, $m \times 59$, and $l \times 45$ respectively. After concatenation, the length of the fused vector is $z \times 125$, $z$ being the total number of sample images. The correlation coefficient is computed on the integrated feature vector $F_{fused}(V)$ by Eqs. (11) and (12),

$\rho_{(SFTA, LBP, Color)}(V_{12}) = \frac{\sum_{i=1}^{n}(f_1 - \bar{f_i})(f_2 - \bar{f_i})}{\sqrt{\sum_{i=1}^{n}(f_1 - \bar{f_i})^2 \sum_{i=1}^{n}(f_2 - \bar{f_i})^2}}$   (11)

$\rho_{(SFTA, LBP, Color)}(V_{1n}) = \frac{\sum_{i=1}^{n}(f_1 - \bar{f_i})(f_n - \bar{f_i})}{\sqrt{\sum_{i=1}^{n}(f_1 - \bar{f_i})^2 \sum_{i=1}^{n}(f_n - \bar{f_i})^2}}$   (12)

where $f_1, f_2, \ldots, f_n$ represent the extracted features, $\rho$ shows the correlation symbol and $\bar{f_i}$ denotes the mean of the integrated vector $F_{fused}(V)$. For a total of 125 features, each one is correlated with the other remaining 124 features. This results in a total of 15,625 correlation operations. Highly correlated features classify the diseased parts of the image, whereas the other features classify the healthy parts. The H.M and M.D are calculated from the correlated and uncorrelated features and their values are fed as inputs to the weight assignment function. Mathematically, H.M and M.D are calculated by Eqs. (13) and (14), and the weight assignment function is calculated by Eq. (15):

$\hbar = \frac{n}{\sum_{i=1}^{n} \frac{1}{f_i}}$   (13)

$M.D = 0.7979\,\sigma$   (14)


Fig. 5. Final segmented image. (a) Saliency image; (b) 2D contour image; (c) area of saliency image; (d) mesh graph image; (e) 3D contour image; (f) mapped image.

Fig. 6. Compared with existing segmentation methods. (a) Otsu thresholding; (b) simple thresholding; (c) EM segmentation; (d) active contour segmentation; (e) proposed.

$W(f_i) = \frac{\ln(f_i)^2 + f_i \times \hbar\,\ln(f_i)}{M.D + \ln(M.D)^2}$   (15)

where $\hbar$ is the harmonic mean, $M.D$ is the mean deviation, and $W(f_i)$ denotes the assigned weight function. Finally, a threshold function for the final saliency image is selected as given in Eq. (16):

$F_{sal}(x, y) = \begin{cases} \text{Disease} & \text{if } f_i \geq W(f_i) \\ \text{Healthy} & \text{if } f_i < W(f_i) \end{cases}$   (16)

where $F_{sal}(x, y)$ is the final saliency image, as shown in Figs. 4 and 5. Algorithm 1 presents the main computation steps of the proposed saliency method.



Table 1
Disease identification results in terms of accuracy (average accuracy: 90.46%).

Image No   Accuracy (%)   Image No   Accuracy (%)   Image No   Accuracy (%)
1          93.56          11         93.17          21         92.76
2          94.25          12         91.59          22         93.45
3          90.23          13         90.34          23         98.00
4          90.17          14         91.46          24         94.90
5          92.19          15         97.45          25         94.89
6          93.25          16         94.50          26         95.13
7          95.47          17         95.35          27         93.19
8          97.35          18         96.24          28         89.23
9          95.56          19         90.11          29         89.10
10         96.12          20         96.10          30         88.00

Fig. 7. Proposed segmentation results compared with ground truth images. (a) Original image; (b) proposed segmented image; (c) mesh graph image; (d) ground truth image.



Algorithm 1. Disease Detection Algorithm
1: Output: $F_{sal}(x, y)$ (saliency image)
2: Input: $F_{thr}(x, y)$ (enhanced image)
3: While (i: 1 to N)
4:   $F_{SFTA}$ ← SFTA features
5:   $F_{LBP}$ ← LBP features
6:   $F_{Color}$ ← color features
7:   $F_{fused}(V)$ ← integrated features
8:   For (j: 1 to 125)
9:     $\rho_{(SFTA, LBP, Color)}(V_{12})$ ← correlated features
10:    $\rho_{(SFTA, LBP, Color)}(V_{1n})$ ← correlated features
11:    For (k: 1 to 15625)
12:      Check correlation
13:      Min correlation → Disease
14:      Max correlation → Healthy
15:      k = k + 1
16:    End For
17:    j = j + 1
18:  End For
19:  $\hbar = n / \sum_{i=1}^{n} (1/f_i)$
20:  $M.D = 0.7979\,\sigma$
21:  Put $\hbar$ and $M.D$ into Eq. (16)
22:  $F_{sal}(x, y)$ ← final saliency image
23: End While

Fig. 6 demonstrates that, for the considered dataset, the proposed saliency method outperforms other well-known segmentation techniques, i.e. Otsu thresholding, simple thresholding, expectation maximization (EM) segmentation and active contour based segmentation. Table 1 collects the disease identification results measured for 30 randomly selected images. The proposed saliency method achieves maximum and average accuracies of 98.0% and 90.46% respectively. The accuracy results for all images are calculated in comparison with their ground truth images as defined in Eq. (17), and visual results are demonstrated in Fig. 7. Mathematically, the accuracy is computed as

$\text{Accuracy} = \frac{TP_i}{F_{sal}(x, y)(i) + G_t(x, y)(i) - TP_i} \times 100$   (17)

where $TP_i$ represents the pixels common to the segmented and ground truth images, $F_{sal}(x, y)$ is the final saliency image, $G_t(x, y)$ is the ground truth image, and $i$ represents the total number of pixels in the segmented and ground truth images.

3.3. Deep features

Feature extraction plays an important role in object identification in the machine learning domain (Akram et al., 2018; Sharif et al., 2018b; Iqbal et al., 2018). In the literature, several features have been investigated for object identification purposes, such as shape (Zhang et al., 2017; Khan et al., 2018), texture (Zhang et al., 2017; Majdar and Ghassemian, 2017), color (Koviera and Bajla, 2013) and geometric features (Yousefi et al., 2017; Sharif et al., 2018c). Each feature has advantages in its specific context. For example, in the case of agricultural and biomedical imaging, shape features suffer from accuracy degradation for disease classification. This is because certain diseases have similar shapes but are visually distinct. In this article, a number of diseases of apple and banana fruits are analyzed, namely apple scab, apple rot, banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf and fruit spot. The banana diseases are apparently similar in their shape but differ in their color content. In order to overcome this problem, deep features are extracted with the help of the pre-trained models VGG16 and Caffe-AlexNet, as shown in Fig. 8. The VGG16 network consists of 16 convolutional layers (CLs) with kernel size 3 × 3, 5 max pooling layers of size 2 × 2, 3 fully connected layers (FCLs) and, finally, a linear softmax layer. At first, a feature vector of size 1 × 4096 is obtained from FCL7 with x35 as the output layer. Next, Caffe-AlexNet (Alex et al., 2012; Russakovsky et al., 2015) is applied, which is organized as 5 CLs, 3 max pooling layers, 3 FCLs and one softmax layer. From this architecture, a feature vector of size 1 × 4096 is extracted at

Fig. 8. Proposed deep architecture for features extraction and selection.


FCL8, where FCL8 is selected using fc6, fc7 and fc8, with x19 set as the output layer. Thereafter, the VGG16 and AlexNet deep CNN features are fused based on a parallel method. The primary purpose of feature fusion is to integrate the patterns into a single vector in order to obtain more significant results compared to an individual feature vector. In this article, we adopt a parallel fusion methodology, explained below. Let $FG^{1}(n \times 4096)$ and $FG^{2}(m \times 4096)$ represent the feature vectors extracted from VGG16 and Caffe-AlexNet, trained on the same dataset $\Delta$. In this work, as mentioned above, we employ a parallel approach for fusion of features having the same dimensions. The resultant fused vector is constructed by finding the maximum value, given as:

$\phi^{i} = \max\big([FG^{1i}, FG^{2i}]\big)$   (18)

The resultant feature vector, $\phi(1 \times 4096)$, incorporates maximum information from both feature vectors.

3.3.1. Pooling

Pooling is the next step after fusion, in which dominant features are determined. A 2 × 2 window size is selected for the fused vector and the highest value from each window operation is acquired. This is illustrated in Fig. 9, where the fused feature vector consists of a total of 16 blocks of window size 2 × 2. After max pooling, the highest-valued feature is selected from each block. For this particular example, the dimension of the resultant vector after the max pooling operation is reduced to 1 × 9. The max pooling operation is described mathematically by Eqs. (19) and (20),

$\text{Pooling} = \bigcup_{p=1}^{4096} \big[\text{Fusion}(V)_{2 \times 2}\big]$   (19)

$FS(V) = \max(\omega_{2 \times 2})$   (20)

where $\omega$ represents the max pooling operator for window size 2 × 2. For $n$ images, the highest-value feature vector has size $n \times 2304$.

3.3.2. Genetic Algorithm

The Genetic Algorithm (GA) is applied to optimize the feature vector obtained as a result of max pooling. The primary purpose of this optimization step is to determine, from the extracted feature vector, the most useful and prominent features while discarding the redundant ones. This work applies the GA for prominent feature selection for the classification of apple and banana diseases. The GA is selected to improve the classification accuracy in the presence of similar shapes but different color contents of diseased image regions. The classical GA consists of five major steps: (a) initialization of the population vector, which is 1000; (b) computation of the fitness of each individual of the population; (c) selection of the fittest individuals; (d) crossover and mutation of individuals; and (e) generation of a new population for the next iteration. A uniform crossover (Sastry et al., 2014) is performed in this work with a selection rate of 0.5. The crossover function is implemented as Eq. (21),

$C = \zeta(FS_i, FS_{i+1})$   (21)

where $\zeta$ is the crossover function, $FS_i = \alpha \times FS_1 + (1 - \alpha) \times FS_2$ and $FS_{i+1} = \alpha \times FS_2 + (1 - \alpha) \times FS_1$. The parameters $FS_i$ and $FS_{i+1}$ are the parent features and $\alpha$ is a value between 0 and 1. After crossover, uniform mutation (Sivanandam and Deepa, 2007) is performed with a mutation rate of 0.3. Then the selection is done by the Roulette wheel criterion (Lipowski and Lipowska, 2012). The probability of the Roulette wheel is calculated by Eq. (22):

$P = \frac{SF_i}{1 + \sum_{i=1}^{WF_i} SF_i}$   (22)

where $SF_i$ is the population sorted in descending order of its fitness values and $WF_i$ is the last number of features in the initialized population. The parent selection pressure $\lambda_1$ is set to 6. Finally, the fitness function is defined as in Eq. (23):

$\text{Fitness}(V) = FS_i \times \psi(FS_i), \quad \psi(FS_i) = \exp\!\left(-\frac{(FS_i - \overline{FS_i})^2}{\sigma^2}\right)$   (23)

where $FS_i$ denotes the set of selected features after applying the Roulette wheel selection criterion, and $\overline{FS_i}$ and $\sigma$ denote respectively the mean and standard deviation of the selected features. In this work, the fitness function of Eq. (23) yields an optimized feature vector of size 1 × 1000. Later, a multi-class SVM based classification step is performed on this vector, as discussed below.

3.3.3. Multi-class SVM

The support vector machine (SVM) is mostly used in classification and regression problems. In this work, six types of apple and banana diseases are classified. Therefore, a multi-class SVM (M-SVM) is used, which works as the base classifier. The M-SVM separates an M-class problem into a series of two-class problems. In this work, a one-against-all method is used for M-SVM classification. The working principle is based on selection of the highest value: when the M classes are compared with each other, the class with the highest probability value is selected as the winner by the classifier. A brief description of this method is as follows. Assume $k$ SVM models, where $k = 1, \ldots, C_N$ and $C_N$ is the maximum number of classes. The $N$th SVM is trained with positive labels to predict a set of $m$ training data $(x_1, y_1), \ldots, (x_i, y_j)$, where $x_i \in \mathbb{R}^n$, $i = 1, \ldots, m$, and $y_i \in \{1, 2, \ldots, k\}$ is the class of $x_i$. The optimization problem solved is given by Eq. (24), which defines the cost function of the SVM classifier, with the hyperplane constraints in Eq. (25); the threshold function in Eq. (26) is later used to assign the binary labels by Eq. (27) (Liu and Zheng, 2005):

$\min_{w^m, b^m, \xi^m} \ \frac{1}{2}(w^m)^T w^m + C \sum_{i=1}^{l} \xi_i^m$   (24)

$(w^m)^T \phi(x_i) + b^m \geq 1 - \xi_i^m \ \text{ if } y_i = m, \qquad (w^m)^T \phi(x_i) + b^m \leq -1 + \xi_i^m \ \text{ otherwise}$   (25)

$f(x_i) = (w^m)^T \phi(x_i) + b^m$   (26)

$y_i = \begin{cases} +1 & f(x_i) \geq 0 \\ -1 & f(x_i) < 0 \end{cases}$   (27)

where $y_i \in \{+1, -1\}$ are the class labels, $(w, b)$ are the pairs of parameters (weights and constant) which define the hyperplane, $\phi(x)$ is a non-linear mapping function, $f(x_i)$ is a decision function over the hyperplanes, and $x_i \in \mathbb{R}^n$ are the objects used for training the classifier (Liu and Zheng, 2005). The proposed M-SVM results are compared with a few state-of-the-art classification methods, namely Decision Tree (DT), Logistic Regression (LR), Linear SVM (L-SVM), Quadratic SVM (Q-SVM), Cubic SVM (C-SVM), Fine-KNN (F-KNN), Weighted-KNN (W-KNN) and Ensemble Boosted Tree (EBT).

4. Experimental results

The proposed method is verified on 6309 sample images of apple and banana fruits. The selected images are collected from the Plant Village (Islam et al., 2017), CASC IFW (Singh et al., 2011) and Plant Village for Banana (Plant Village Dataset; Amara et al., 2017) datasets. For classification on apple fruit, two types of disease are selected, i.e. apple scab and apple rot, together with healthy images. In addition, four types of banana diseases are selected for classification, which include banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf spot. Both healthy and diseased images are used for the selected fruit datasets. Fig. 10 shows a few sample images from the selected datasets. For the experimental results, the performance of the M-SVM classifier is


Fig. 9. An example of max pooling of window size 2 × 2 .

Fig. 10. Sample images of selected diseases.

compared with the other classifiers, including DT, L-SVM, Q-SVM, C-SVM, F-KNN, W-KNN, and EBT. Experiments are performed to obtain the performance results of all stages, i.e. (a) VGG16, (b) Caffe-AlexNet, (c) the fusion process and (d) the proposed max pooling and GA based selection method. In order to make comparisons, six experiments with each set of extracted features (i.e. VGG-VD-16, Caffe-AlexNet, fused vector) are performed for each selected disease. Five performance metrics are calculated for each experiment: sensitivity (Sen), specificity (Spec), precision (Prec), false positive rate (FPR) and accuracy (Acc). All the experiments are performed using Matlab 2017a on a personal computer with a 3.40 GHz Intel Core i7 microprocessor. A GPU is attached to the system and all the results are obtained in a high-speed environment: a Sapphire R9-290X Tri-X OC Edition with 4 GB of GDDR5 video RAM and a 512-bit memory bus.

4.1. Experiment 1: Apple scab

A total of 2688 samples of apple scab disease, including 630 diseased and 2058 healthy images, are collected for classification. For testing the performance, a 50:50 strategy is adopted for training and testing; that is, 315 diseased and 1029 healthy images are used for training and the same numbers for testing. In testing, the extracted deep features from VGG16 and Caffe-AlexNet are first individually utilized for the classification of healthy and unhealthy samples of apple scab using 10-fold cross validation. The classification results for the individual feature sets are presented in Table 2, showing maximum accuracies of 93.4% and 94.3% on M-SVM with FPR rates of 0.06 and 0.05 respectively. After performing parallel fusion, the classification accuracy improves to 94.4% with an FPR of 0.04 and a specificity of 100% on M-SVM. This shows improved performance compared to the individual features, as shown in Fig. 11. Finally, the proposed selected features are utilized for classification and show improved performance, with a maximum accuracy of 96.2%, an FPR of 0.03 and a sensitivity of 97.0% on M-SVM. The proposed results are sufficiently good compared to the individual and fused feature vectors, as presented in Table 2 and Fig. 11. The execution time of the proposed method is 2.732 images per second (IPS) and the overall training time is 132.456 s.

4.2. Experiment 2: Apple rot

In this experiment, classification performance is evaluated for apple rot disease together with healthy images. A total of 2679 samples are collected, including 621 diseased leaf and 2058 healthy images, for evaluation. The 50:50 strategy is used for training and testing the proposed method with 10-fold cross validation; that is, 310 diseased images and 1029 healthy images are selected for both training and testing the classifier. The results are presented in Table 3 and show a maximum accuracy of 98.1% with an FPR of 0.02. A value of 98.0% is achieved each


Table 2
Apple Scab: results on the extracted features and the fused vector. Pro denotes the proposed selected features. Each row reports one metric for the AlexNet, VGG16, fused and Pro feature sets.

Method   Metric     AlexNet   VGG16   Fused   Pro
DT       Sen (%)    89.1      86.0    89.0    91.5
DT       Spec (%)   93.6      89.0    89.0    89.0
DT       Prec (%)   89.8      86.0    89.0    91.5
DT       FPR        0.10      0.14    0.11    0.10
DT       Acc (%)    90.4      86.1    88.9    91.7
LR       Sen (%)    83.5      75.0    80.6    89.1
LR       Spec (%)   89.0      83.0    72.0    93.6
LR       Prec (%)   84.0      75.5    81.5    89.8
LR       FPR        0.11      0.17    0.28    0.10
LR       Acc (%)    83.3      75.0    80.6    90.4
L-SVM    Sen (%)    88.5      89.0    92.0    94.6
L-SVM    Spec (%)   94.0      89.0    92.0    93.6
L-SVM    Prec (%)   89.5      89.0    92.0    93.5
L-SVM    FPR        0.06      0.11    0.08    0.06
L-SVM    Acc (%)    88.9      88.9    92.4    94.5
Q-SVM    Sen (%)    91.5      91.5    91.5    92.0
Q-SVM    Spec (%)   94.0      89.0    94.0    93.4
Q-SVM    Prec (%)   91.5      91.5    91.5    92.0
Q-SVM    FPR        0.06      0.11    0.06    0.05
Q-SVM    Acc (%)    91.7      91.7    91.7    92.6
C-SVM    Sen (%)    91.5      88.5    92.0    89.1
C-SVM    Spec (%)   94.0      89.5    94.0    93.6
C-SVM    Prec (%)   91.5      94.0    92.0    89.8
C-SVM    FPR        0.06      0.10    0.08    0.10
C-SVM    Acc (%)    91.7      88.9    91.9    90.4
F-KNN    Sen (%)    83.0      85.0    86.0    86.0
F-KNN    Spec (%)   94.0      92.0    94.0    94.4
F-KNN    Prec (%)   85.0      86.0    87.0    87.7
F-KNN    FPR        0.11      0.08    0.06    0.06
F-KNN    Acc (%)    83.3      84.3    86.1    86.6
W-KNN    Sen (%)    80.5      89.0    83.0    89.1
W-KNN    Spec (%)   94.0      89.0    94.0    93.6
W-KNN    Prec (%)   83.0      89.0    85.0    89.8
W-KNN    FPR        0.06      0.11    0.06    0.16
W-KNN    Acc (%)    80.6      88.9    83.3    90.4
EBT      Sen (%)    89.7      88.5    89.0    91.5
EBT      Spec (%)   90.2      94.0    100     94.0
EBT      Prec (%)   89.60     89.5    91.0    91.5
EBT      FPR        0.09      0.06    0.05    0.06
EBT      Acc (%)    90.6      88.9    88.9    91.7
M-SVM    Sen (%)    93.5      94.0    94.5    97.0
M-SVM    Spec (%)   98.0      94.0    100     94.2
M-SVM    Prec (%)   93.0      94.0    95.0    96.0
M-SVM    FPR        0.05      0.06    0.04    0.03
M-SVM    Acc (%)    93.4      94.2    94.4    96.9

Fig. 11. Comparison of proposed feature selection method with individual features and fused features.



Table 3
Apple Rot: results with the proposed feature selection.

Method                 Sensitivity (%)   Specificity (%)   Precision (%)   FPR    Accuracy (%)
Decision Tree          92.5              92.0              91.5            0.07   92.5
Logistic Regression    90.4              82.3              87.0            0.10   87.7
Linear SVM             95.0              92.0              93.5            0.05   94.3
Quadratic SVM          97.5              97.0              96.5            0.03   97.2
Cubic SVM              97.5              97.0              96.5            0.03   97.2
Fine KNN               87.5              77.0              85.0            0.13   84.9
Weighted KNN           89.0              80.0              86.5            0.12   86.8
EBT                    96.5              95.0              95.5            0.04   96.2
M-Class SVM            98.0              98.0              98.0            0.02   98.1

Table 4
Banana Sigatoka disease: classification results on the extracted features and the proposed selected features. Each row reports one metric for the AlexNet, VGG16, fused and Pro feature sets.

Method   Metric     AlexNet   VGG16   Fused   Pro
DT       Sen (%)    83.0      85.0    90.3    91.1
DT       Spec (%)   83.0      80.0    89.1    90.0
DT       Prec (%)   83.0      85.5    91.0    91.5
DT       FPR        0.17      0.08    0.09    0.08
DT       Acc (%)    83.3      85.0    91.0    91.7
LR       Sen (%)    83.5      87.0    88.5    100
LR       Spec (%)   77.0      97.0    97.0    100
LR       Prec (%)   83.5      88.5    89.5    100
LR       FPR        0.16      0.13    0.12    0.00
LR       Acc (%)    83.3      86.7    88.3    100
L-SVM    Sen (%)    85.0      83.0    95.0    100
L-SVM    Spec (%)   83.0      83.0    90.0    100
L-SVM    Prec (%)   85.0      83.0    95.5    100
L-SVM    FPR        0.15      0.17    0.05    0.00
L-SVM    Acc (%)    85.0      83.0    95.0    100
Q-SVM    Sen (%)    88.0      90.0    90.0    100
Q-SVM    Spec (%)   93.0      93.0    93.0    100
Q-SVM    Prec (%)   89.0      90.5    90.5    100
Q-SVM    FPR        0.12      0.13    0.10    0.00
Q-SVM    Acc (%)    88.3      86.7    90.0    100
C-SVM    Sen (%)    88.0      93.0    86.5    100
C-SVM    Spec (%)   93.0      93.0    90.0    100
C-SVM    Prec (%)   89.0      93.0    86.5    100
C-SVM    FPR        0.12      0.07    0.13    0.00
C-SVM    Acc (%)    88.3      93.3    86.7    100
F-KNN    Sen (%)    85.0      83.5    88.5    98.7
F-KNN    Spec (%)   87.0      87.0    90.0    96.0
F-KNN    Prec (%)   85.0      83.5    88.5    99.0
F-KNN    FPR        0.15      0.17    0.12    0.01
F-KNN    Acc (%)    85.0      83.3    88.3    98.7
W-KNN    Sen (%)    92.0      92.0    91.5    98.5
W-KNN    Spec (%)   97.0      97.0    93.0    97.7
W-KNN    Prec (%)   92.0      92.0    91.5    99.0
W-KNN    FPR        0.08      0.08    0.07    0.01
W-KNN    Acc (%)    91.7      91.7    91.7    98.7
EBT      Sen (%)    85.0      92.0    95.0    95.5
EBT      Spec (%)   80.0      93.0    96.0    97.0
EBT      Prec (%)   85.5      92.0    95.5    97.5
EBT      FPR        0.15      0.08    0.05    0.03
EBT      Acc (%)    85.0      91.7    95.0    97.4
M-SVM    Sen (%)    95.0      95.0    96.5    100
M-SVM    Spec (%)   90.0      93.0    93.0    100
M-SVM    Prec (%)   95.5      95.5    97.0    100
M-SVM    FPR        0.05      0.05    0.03    0.00
M-SVM    Acc (%)    95.0      95.0    96.7    100

for sensitivity, specificity and precision. The Q-SVM and C-SVM also perform well, with a classification accuracy of 97.2%. Table 3 clearly indicates that the proposed method achieves improved performance on M-SVM by utilizing the selected features. The execution time of M-SVM is 2.673 IPS and the average execution time over all selected classification methods is 2.934 IPS. The training execution time is 152.467 s, which is slightly higher compared to that of Experiment 1.

4.3. Experiment 3: Banana Sigatoka

In this experiment, classification is performed on the images of banana Sigatoka. A total of 1200 images are collected from the Plant Village dataset with an equal percentage of healthy and unhealthy images. For evaluation of results, 10-fold cross validation is performed and a 50:50 strategy is adopted for training and testing; that is, 300 healthy

Fig. 12. Comparison of the proposed selected features results with the individual feature sets and the fused vector.

Table 5
Banana cordial leaf spot: classification results. Each row reports one metric for the AlexNet, VGG16, fused and Pro feature sets.

Method   Metric     AlexNet   VGG16   Fused   Pro
DT       Sen (%)    87.5      70.0    88.5    92.5
DT       Spec (%)   90.0      85.0    84.0    90.0
DT       Prec (%)   87.5      72.0    89.0    92.5
DT       FPR        0.12      0.30    0.11    0.07
DT       Acc (%)    87.5      70.0    89.2    92.5
LR       Sen (%)    72.5      70.0    90.0    97.0
LR       Spec (%)   75.0      85.0    95.0    95.6
LR       Prec (%)   72.5      72.0    90.0    97.3
LR       FPR        0.26      0.30    0.05    0.02
LR       Acc (%)    72.5      70.0    90.0    97.7
L-SVM    Sen (%)    75.0      80.0    95.0    100
L-SVM    Spec (%)   80.0      85.0    90.0    100
L-SVM    Prec (%)   75.5      80.0    95.5    100
L-SVM    FPR        0.25      0.20    0.05    0.00
L-SVM    Acc (%)    75.5      80.0    95.0    100
Q-SVM    Sen (%)    85.0      85.0    95.0    100
Q-SVM    Spec (%)   85.0      85.0    90.0    100
Q-SVM    Prec (%)   85.0      85.0    95.5    100
Q-SVM    FPR        0.15      0.15    0.05    0.00
Q-SVM    Acc (%)    85.0      85.0    95.5    100
C-SVM    Sen (%)    82.5      82.5    95.0    96.0
C-SVM    Spec (%)   85.0      85.0    90.0    92.0
C-SVM    Prec (%)   82.5      82.5    95.5    97.5
C-SVM    FPR        0.17      0.17    0.05    0.04
C-SVM    Acc (%)    82.5      82.5    95.0    96.9
F-KNN    Sen (%)    82.5      77.5    85.0    92.0
F-KNN    Spec (%)   90.0      85.0    80.0    84.0
F-KNN    Prec (%)   83.0      78.0    85.5    95.5
F-KNN    FPR        0.35      0.22    0.15    0.08
F-KNN    Acc (%)    82.5      77.5    85.5    93.8
W-KNN    Sen (%)    77.5      75.0    76.0    92.5
W-KNN    Spec (%)   85.0      90.0    52.0    85.0
W-KNN    Prec (%)   78.0      77.5    88.5    93.5
W-KNN    FPR        0.22      0.25    0.24    0.07
W-KNN    Acc (%)    77.5      77.0    81.5    92.5
EBT      Sen (%)    85.0      72.5    95.0    97.0
EBT      Spec (%)   90.0      80.0    90.0    96.0
EBT      Prec (%)   85.5      73.0    95.0    97.0
EBT      FPR        0.15      0.27    0.15    0.03
EBT      Acc (%)    85.0      72.5    95.0    96.9
M-SVM    Sen (%)    85.0      87.5    97.5    100
M-SVM    Spec (%)   85.0      85.0    100     100
M-SVM    Prec (%)   85.0      87.5    97.5    100
M-SVM    FPR        0.05      0.13    0.025   0.00
M-SVM    Acc (%)    85.0      87.5    97.5    100

and 300 diseased images are selected from total images. The experimental results are performed on AlexNet, VGG16, fused vector and presented in Table 4. The results are also compared with the proposed feature selection method. The proposed feature selection method results are compared with each extracted feature set and fused feature vector as shown in Fig. 12. The proposed method achieves a maximum

classification accuracy of 100% on M-SVM. The LR, L-SVM, Q-SVM, and C-SVM also perform well on selected features and achieve 100% accuracy. The execution time of M-SVM is 3.145 IPS and average execution time of other classification methods is 3.752 IPS. Also, the average training time is 147.896 s.

Fig. 13. Comparison of the proposed selected features results with the individual feature sets and the fused vector for banana cordial leaf spot.

Table 6
Deightoniella leaf spot: results with the proposed feature selection.

Method                 Sensitivity (%)   Specificity (%)   Precision (%)   FPR    Accuracy (%)
Decision Tree          86.9              92.0              86.9            0.13   88.9
Logistic Regression    88.9              96.0              91.1            0.11   91.7
Linear SVM             88.9              96.0              91.1            0.11   91.7
Quadratic SVM          79.8              96.0              86.6            0.20   86.1
Cubic SVM              88.9              96.0              91.1            0.11   91.7
Fine KNN               89.4              88.0              86.3            0.10   88.9
Weighted KNN           86.3              100               94.65           0.13   91.7
EBT                    79.8              96.0              86.6            0.20   86.1
M-Class SVM            90.9              100               96.3            0.09   94.4

Table 7
Banana diamond leaf spot: classification results with the proposed selected features.

Method                 Sensitivity (%)   Specificity (%)   Precision (%)   FPR    Accuracy (%)
Decision Tree          94.9              96.0              94.5            0.05   94.7
Logistic Regression    89.5              100               89.0            0.10   87.7
Linear SVM             92.2              100               91.65           0.07   91.2
Quadratic SVM          96.9              100               96.3            0.03   96.5
Cubic SVM              96.9              100               96.3            0.03   96.5
Fine KNN               94.9              96.0              94.5            0.05   94.7
Weighted KNN           93.75             100               93.1            0.06   93.0
EBT                    91.75             96.0              91.15           0.08   91.2
M-Class SVM            98.45             100               98.1            0.01   98.2

4.4. Experiment 4: Banana cordial leaf spot

In this experiment, the banana cordial leaf spot disease images are classified along with healthy banana leaves. A total of 1200 banana disease spot images are collected with an equal percentage of healthy and unhealthy images. For the experimental results, a 50:50 strategy is adopted and all results are obtained with 10-fold cross validation. The classification is performed for each individual feature set (VGG16 and AlexNet) and the performance results are compared with the fused feature vector, as shown in Table 5 and Fig. 13. An accuracy and sensitivity of 100% are achieved with M-SVM for the proposed selected features. The comparison of the proposed feature selection method with the individual features and the fused feature vector is clearly shown in Fig. 13. The execution time of this experiment is 1.934 IPS and the average time of the other methods is 2.103 IPS.

4.5. Experiment 5: Deightoniella leaf spot disease of banana

Deightoniella leaf spot disease is very common, especially in old banana farms. This disease is identified by the appearance of an inverted 'V' formed along the leaf edges. A total of 1200 leaf images (600 healthy and 600 unhealthy) are collected, with a 50:50 strategy for training and testing. All classification results are obtained with 10-fold cross validation. The classification is performed with the selected classifiers, as shown by the results in Table 6, achieving a maximum accuracy of 94.4% with an FPR of 0.09, a sensitivity of 90.0%, a specificity of 100% and a precision rate of 96.3% on M-SVM. The best execution time of this experiment is 2.145 IPS.

4.6. Experiment 6: Banana diamond leaf spot

Banana diamond leaf spot is caused by Cercospora hayi, often in conjunction with Fusarium spp. The hidden localities become prominent lesions during transportation. Initially, this disease appears as a yellow patch and affects fruit growth. For this experiment, a total of 1200 leaf images are collected with an equal percentage of healthy and diseased images. Again, a 50:50 strategy is adopted for training and testing, and all results are obtained using 10-fold cross validation. The classification results with the proposed feature selection method are presented in Table 7, which show a maximum accuracy of 98.2% with an FPR of 0.01 on M-SVM. As indicated by the table, the selected features


Table 8
Classification results on all selected diseases. Each row reports one metric for the AlexNet, VGG16, fused and Pro feature sets.

Method   Metric     AlexNet   VGG16   Fused   Pro
DT       Sen (%)    87.5      88.5    91.0    91.0
DT       Spec (%)   91.7      77.0    82.0    87.0
DT       Prec (%)   87.7      90.5    92.5    91.5
DT       FPR        0.5       0.11    0.09    0.09
DT       Acc (%)    87.5      88.6    90.9    91.3
LR       Sen (%)    81.5      82.0    87.5    89.0
LR       Spec (%)   86.0      82.0    91.7    83.0
LR       Prec (%)   82.0      82.0    87.7    90.5
LR       FPR        0.18      0.18    0.12    0.11
LR       Acc (%)    81.8      81.8    87.5    89.9
L-SVM    Sen (%)    83.3      93.0    93.0    97.5
L-SVM    Spec (%)   91.7      86.0    90.0    95.0
L-SVM    Prec (%)   84.3      94.0    94.0    98.0
L-SVM    FPR        0.16      0.07    0.07    0.02
L-SVM    Acc (%)    83.3      93.2    93.2    97.7
Q-SVM    Sen (%)    91.7      95.5    93.0    97.3
Q-SVM    Spec (%)   91.7      91.0    86.0    95.0
Q-SVM    Prec (%)   91.7      96.0    94.0    97.4
Q-SVM    FPR        0.08      0.04    0.07    0.02
Q-SVM    Acc (%)    91.7      95.5    93.2    97.2
C-SVM    Sen (%)    91.7      93.0    93.0    93.5
C-SVM    Spec (%)   91.7      86.0    86.0    90.0
C-SVM    Prec (%)   91.7      94.0    94.0    94.5
C-SVM    FPR        0.08      0.07    0.07    0.06
C-SVM    Acc (%)    91.7      93.2    93.2    96.1
F-KNN    Sen (%)    79.1      86.5    81.5    88.5
F-KNN    Spec (%)   90.0      82.0    77.0    80.0
F-KNN    Prec (%)   85.3      86.5    82.0    91.0
F-KNN    FPR        0.20      0.13    0.18    0.11
F-KNN    Acc (%)    79.2      86.4    81.8    89.9
W-KNN    Sen (%)    83.3      90.5    90.5    92.5
W-KNN    Spec (%)   83.3      86.0    86.0    90.0
W-KNN    Prec (%)   83.3      91.5    91.5    93.0
W-KNN    FPR        0.20      0.09    0.09    0.07
W-KNN    Acc (%)    83.3      90.9    90.9    92.8
EBT      Sen (%)    83.3      88.5    91.0    97.0
EBT      Spec (%)   83.3      82.0    84.0    97.0
EBT      Prec (%)   83.3      89.5    92.5    97.0
EBT      FPR        0.16      0.11    0.09    0.03
EBT      Acc (%)    83.3      88.6    90.9    97.1
M-SVM    Sen (%)    92.2      97.3    97.5    98.5
M-SVM    Spec (%)   92.6      95.0    95.0    100
M-SVM    Prec (%)   92.3      97.4    97.4    98.5
M-SVM    FPR        0.07      0.02    0.02    0.01
M-SVM    Acc (%)    92.6      97.2    97.2    98.6

Fig. 14. Comparison of proposed method on all selected apple and banana diseases.

perform efficiently on M-SVM as compared to the other classification methods. The execution time of the proposed method for M-SVM is 1.675 IPS, whereas the average time is 1.745 IPS, which is better compared to the previous experiments.

4.7. Experiment 7: Classification of all selected diseases

In this experiment, the classification results are obtained for all selected types of diseases, i.e. apple scab, apple rot, banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf and fruit spot. A 50:50 strategy is adopted for training and testing


Table 9
Comparison with existing methods.

Reference                  Year   Disease Name                        Accuracy (%)
Dubey and Jalal (2012)     2012   Apple Rot                           90.71
Nachtigall et al. (2016)   2016   Apple Scab                          96.60
Dubey and Jalal (2012)     2012   Apple Scab                          96.66
Dubey and Jalal (2016)     2016   Apple Scab                          93.75
Amara et al. (2017)        2017   Sigatoka                            98.61
Proposed                   2017   Apple Scab                          96.90
                                  Apple Rot                           98.10
                                  Banana Sigatoka                     100
                                  Banana cordial leaf spot            100
                                  Banana diamond leaf spot            94.40
                                  Deightoniella leaf and fruit spot   98.20
                                  Overall                             98.60

the classifier. The 10-fold cross validation is used for the experimental results for each feature vector. The results obtained on the individual feature vectors are presented in Table 8 and compared with the fused feature vector as shown in Fig. 14. The proposed selected feature results of Table 8 show a maximum accuracy of 98.6% with an FPR of 0.01, a sensitivity of 98.5%, a specificity of 100% and a precision of 98.5%. The graphical comparison is also shown in Fig. 14. The execution time of Experiment 7 is 1.456 IPS and the average time is 1.546 IPS, which is significantly better compared to the other experiments.

Fig. 15. Confidence interval plot of accuracy of experiments obtained after performing Monte Carlo simulations for 50 iterations.

5. Discussion

This section interprets the performance of the proposed fruit disease detection and classification method. As discussed earlier, the proposed method consists of two major steps: (a) disease detection from the input image and (b) disease classification. The overall architecture of the proposed method is shown in Fig. 1. The first step, i.e. disease detection, is further divided into contrast stretching and disease extraction steps. The results of the proposed contrast stretching and disease extraction approaches are shown in Figs. 2-5. Fig. 6 demonstrates that the proposed segmentation approach yields better visual performance compared to other approaches. Moreover, the disease detection results on randomly selected images are presented in Table 1, which shows a maximum accuracy of 98.0%. The visual results with ground-truth images are demonstrated in Fig. 7. The second step performs the classification of the selected apple and banana diseases. The deep features are extracted from healthy and unhealthy sample images and the best features are selected by max pooling. Further, GA is adopted to optimize the selected features, as demonstrated in Fig. 8. For each disease, an individual classification experiment is performed, which covers VGG16 features, Caffe-AlexNet features, their fusion vector and the proposed selected feature vector. The classification results for the individual experiments are presented in Tables 2-7. Figs. 11-13 clearly demonstrate that the fused vector classification results are better compared to the individual feature sets. In addition, the proposed selected feature vector results perform significantly better than all other feature vectors. The final experiment is performed for classification of all selected diseases and the results are presented in Table 8 and Fig. 14, showing a maximum accuracy of 98.6% on M-SVM. The comparison of the proposed method with existing approaches is shown in Table 9; the efficiency and accuracy of the proposed approach are validated by the promising comparison results of Table 9. Finally, we perform Monte Carlo simulations of our experiments on M-SVM. The results are obtained for 50 runs of each experiment. For each experiment, the minimum, average and maximum values are obtained for the different performance metrics (i.e. sensitivity, specificity, precision, FPR, and accuracy) and the results are presented in Table 10. The basic aim of this test is to check the range of the classification results after N = 50 iterations. Finally, Fig. 15 shows the plot of the confidence interval range of accuracy obtained from the Monte Carlo runs for all experiments. The results demonstrate that a high accuracy is obtained by the proposed method.

6. Conclusion

This article investigates machine learning methods to detect and classify apple and banana diseases. Six types of diseases are selected for classification: apple scab, apple rot, banana Sigatoka, banana cordial leaf spot, banana diamond leaf spot and Deightoniella leaf and fruit spot. In the proposed method, a series of steps is combined, including contrast stretching, disease segmentation, deep feature extraction and classification. The primary reason behind contrast stretching is to aid the segmentation process in accurately identifying the diseased regions using the correlation coefficient method in combination with color and texture features, H.M and M.D. For classification, the selected features are utilized to produce the best possible results. In our proposed method, the primary focus is on the segmentation and the feature selection steps. We sincerely believe that the mentioned

Table 10
Minimum, average and maximum values of various performance metrics of the proposed selection method on M-SVM for all selected experiments (Min, Avg, Max).

Exp. #   Sensitivity (%)       Specificity (%)       Precision (%)         FPR                    Accuracy (%)
1        (96, 96.82, 97)       (93, 93.76, 94.20)    (95, 95.92, 96.00)    (0.03, 0.04, 0.05)     (96, 96.48, 96.90)
2        (97, 97.86, 98.00)    (97, 97.92, 98.00)    (97, 97.90, 98.00)    (0.02, 0.03, 0.04)     (98, 98.02, 98.10)
3        (99, 99.98, 100)      (99, 99.96, 100)      (99, 99.98, 100)      (0.00, 0.001, 0.002)   (99, 99.99, 100)
4        (99, 99.97, 100)      (99, 99.99, 100)      (99, 99.99, 100)      (0.00, 0.001, 0.02)    (99, 99.99, 100)
5        (90, 90.04, 90.90)    (99, 99.96, 100)      (99, 95.78, 96.30)    (0.09, 0.12, 0.14)     (94, 94.01, 94.40)
6        (97, 97.92, 98.45)    (99, 99.97, 100)      (97, 97.70, 98.10)    (0.01, 0.03, 0.04)     (97.90, 98.00, 98.20)
7        (97, 97.90, 98.50)    (99, 99.94, 100)      (98, 98.10, 98.50)    (0.01, 0.02, 0.03)     (98, 98.20, 98.60)



areas still have room for improvement in accuracy; therefore, as future work we will cover the mentioned domains as well as the feature extraction step. Moreover, a few more diseases will be added in order to further validate the proposed method.

References

Akram, Tallha, Khan, Muhammad Attique, Sharif, Muhammad, Yasmin, Mussarat, 2018. Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. J. Ambient Intell. Hum. Comput. 1–20.
Alex, Krizhevsky, Sutskever, Ilya, Hinton, Geoffrey E., 2012. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105.
Amara, Jihen, Bouaziz, Bassem, Algergawy, Alsayed, 2017. A deep learning-based approach for banana leaf diseases classification. In: BTW (Workshops), pp. 79–88.
Attique Khan, M., Akram, Tallha, Sharif, Muhammad, Shahzad, Aamir, Aurangzeb, Khursheed, Alhussein, Musaed, Haider, Syed Irtaza, Altamrah, Abdualziz, 2018. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 18 (1), 638.
Bandi, Sudheer Reddy, Varadharajan, A., Chinnasamy, A., 2013. Performance evaluation of various statistical classifiers in detecting the diseased citrus leaves. Int. J. Eng. Sci. Technol. 5 (2), 298–307.
Bhatt, Malay S., Patalia, Tejas P., 2017. Species and variety classification of leaves from images. Int. J. Comput. Sci. Network Secur. (IJCSNS) 17 (1), 109.
Chuanlei, Zhang, Shanwen, Zhang, Jucheng, Yang, Yancui, Shi, Jia, Chen, 2017. Apple leaf disease identification using genetic algorithm and correlation based feature selection method. Int. J. Agric. Biol. Eng. 10 (2), 74–83.
Costa, Alceu Ferraz, Humpire-Mamani, Gabriel, Traina, Agma Juci Machado, 2012. An efficient algorithm for fractal analysis of textures. In: 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, pp. 39–46.
Dubey, Shiv Ram, Jalal, Anand Singh, 2012. Detection and classification of apple fruit diseases using complete local binary patterns. In: 2012 Third International Conference on Computer and Communication Technology (ICCCT). IEEE, pp. 346–351.
Dubey, Shiv Ram, Jalal, Anand Singh, 2014. Adapted approach for fruit disease identification using images. Available from: arXiv preprint.
Dubey, Shiv Ram, Jalal, Anand Singh, 2016. Apple disease classification using color, texture and shape features from images. SIViP 10 (5), 819–826.
Guo, Zhenhua, Zhang, Lei, Zhang, David, 2010. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 19 (6), 1657–1663.
Iqbal, Zahid, Khan, Muhammad Attique, Sharif, Muhammad, Shah, Jamal Hussain, Rehman, Muhammad Habib ur, Javed, Kashif, 2018. An automated detection and classification of citrus plant diseases using image processing techniques: a review. Comput. Electron. Agric. 153, 12–32.
Islam, Monzurul, Dinh, Anh, Wahid, Khan, Bhowmik, Pankaj, 2017. Detection of potato diseases using image segmentation and multiclass support vector machine. In: 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, pp. 1–4.
Khan, Muhammad Attique, Akram, Tallha, Sharif, Muhammad, Javed, Muhammad Younus, Muhammad, Nazeer, Yasmin, Mussarat, 2018. An implementation of optimized framework for action classification using multilayers neural network on selected fused features. Pattern Anal. Appl. 1–21.
Kim, Moon S., Lefcourt, Alan M., Chen, Yud-Ren, Tao, Yang, 2005. Automated detection of fecal contamination of apples based on multispectral fluorescence image fusion. J. Food Eng. 71 (1), 85–91.
Kleynen, Olivier, Leemans, Vincent, Destain, M-F., 2005. Development of a multi-spectral vision system for the detection of defects on apples. J. Food Eng. 69 (1), 41–49.
Koviera, R., Bajla, I., 2013. Image classification based on hierarchical temporal memory and color features. In: Proceedings of the 9th Int. Conf. on Measurement, pp. 63–66.
Lipowski, Adam, Lipowska, Dorota, 2012. Roulette-wheel selection via stochastic acceptance. Physica A: Stat. Mech. Appl. 391 (6), 2193–2196.
Liu, Yi, Zheng, Yuan F., 2005. One-against-all multi-class SVM classification using reliability measures. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN'05), vol. 2. IEEE, pp. 849–854.
Majdar, Reza Seifi, Ghassemian, Hassan, 2017. A probabilistic SVM approach for hyperspectral image classification using spectral and texture features. Int. J. Remote Sens. 38 (15), 4265–4284.
Mehl, P.M., Chao, K., Kim, M., Chen, Y.R., 2002. Detection of defects on selected apple cultivars using hyperspectral and multispectral image analysis. Appl. Eng. Agric. 18 (2), 219.
Nachtigall, Lucas G., Araujo, Ricardo M., Nachtigall, Gilmar R., 2016. Classification of apple tree disorders using convolutional neural networks. In: 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, pp. 472–476.
Nasir, Muhammad, Khan, Muhammad Attique, Sharif, Muhammad, Lali, Ikram Ullah, Saba, Tanzila, Iqbal, Tassawar, 2018. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach. Microsc. Res. Tech.
Plant Village Dataset. <https://plantvillage.org/topics/banana/infos>.
Pujari, Jagadeesh D., Yakkundimath, Rajesh Siddaramayya, Byadgi, Abdulmunaf Syedhusain, 2013a. Statistical methods for quantitatively detecting fungal disease from fruits images. Int. J. Intell. Syst. Appl. Eng. 1 (4), 60–67.
Pujari, Jagadeesh Devdas, Yakkundimath, Rajesh, Byadgi, Abdulmunaf Syedhusain, 2013b. Grading and classification of anthracnose fungal disease of fruits based on statistical texture features. Int. J. Adv. Sci. Technol. 52 (1), 121–132.
Pujari, Jagadeesh D., Yakkundimath, Rajesh, Byadgi, Abdulmunaf Syedhusain, 2013d. Automatic fungal disease detection based on wavelet feature extraction and PCA analysis in commercial crops. Int. J. Image Graph. Signal Process. 6 (1), 24.
Pujari, Jagadeesh D., Yakkundimath, Rajesh, Byadgi, Abdulmunaf S., 2014. Identification and classification of fungal disease affected on agriculture/horticulture crops using image processing techniques. In: 2014 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, pp. 1–4.
Ravindra Naik, M., Sivappagari, Chandra Mohan Reddy, 2016a. Plant leaf and disease detection by using HSV features and SVM classifier. Int. J. Eng. Sci. 3794.
Revathi, P., Hemalatha, M., 2012. Homogenous segmentation based edge detection techniques for proficient identification of the cotton leaf spot diseases. Int. J. Comput. Appl. (0975-888).
Rozario, Liton Jude, Rahman, Tanzila, Uddin, Mohammad Shorif, 2016. Segmentation of the region of defects in fruits and vegetables. Int. J. Comput. Sci. Inform. Secur. 14 (5), 399.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, et al., 2015. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115 (3), 211–252.
Samajpati, Bhavini J., Degadwala, Sheshang D., 2016. Hybrid approach for apple fruit diseases detection and classification using random forest classifier. In: 2016 International Conference on Communication and Signal Processing (ICCSP). IEEE, pp. 1015–1019.
Sannakki, S.S., Rajpurohit, V.S., 2015. Classification of pomegranate diseases based on back propagation neural network. Int. Res. J. Eng. Technol. (IRJET) 2.
Sastry, Kumara, Goldberg, David E., Kendall, Graham, 2014. Genetic algorithms. In: Search Methodologies. Springer US, pp. 93–117.
Sharif, Muhammad, Khan, Muhammad Attique, Akram, Tallha, Javed, Muhammad Younus, Saba, Tanzila, Rehman, Amjad, 2017. A framework of human detection and action recognition based on uniform segmentation and combination of Euclidean distance and joint entropy-based features selection. EURASIP J. Image Video Process. 2017 (1), 89.
Sharif, Muhammad, Khan, Muhammad Attique, Iqbal, Zahid, Azam, Muhammad Faisal, Ikram Ullah Lali, M., Javed, Muhammad Younus, 2018a. Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection. Comput. Electron. Agric. 150, 220–234.
Sharif, Muhammad, Tanvir, Uroosha, Munir, Ehsan Ullah, Khan, Muhammad Attique, Yasmin, Mussarat, 2018b. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. J. Ambient Intell. Hum. Comput. 1–20.
Sharif, Muhammad, Khan, Muhammad Attique, Faisal, Muhammad, Yasmin, Mussarat, Fernandes, Steven Lawrence, 2018c. A framework for offline signature verification system: best features selection approach. Pattern Recogn. Lett.
Shuaibu, Mubarakat, Lee, Won Suk, Hong, Young Ki, Kim, Sangcheol, 2017. Detection of Apple Marssonina blotch disease using particle swarm optimization. Trans. ASABE 60 (2), 303–312.
Singh, Vijai, Misra, A.K., 2015. Detection of unhealthy region of plant leaves using image processing and genetic algorithm. In: 2015 International Conference on Advances in Computer Engineering and Applications (ICACEA). IEEE, pp. 1028–1032.
Singh, Vijai, Misra, A.K., 2017. Detection of plant leaf diseases using image segmentation and soft computing techniques. Inform. Process. Agric. 4 (1), 41–49.
Singh, S., Bergerman, M., Hoheisel, G., Lewis, K., Baugher, T., 2011. Comprehensive automation for specialty crops (CASC): developing a more sustainable and profitable US specialty crop industry.
Sivanandam, S.N., Deepa, S.N., 2007. Introduction to Genetic Algorithms. Springer Science & Business Media.
Thakare, Vishal S., Patil, Nitin N., Sonawane, Jayshri S., 2013. Survey on image texture classification techniques. Int. J. Advancem. Technol. 4 (1), 97–104.
Tigadi, Basavaraj, Sharma, Bhavana, 2016. Banana plant disease detection and grading using image processing. Int. J. Eng. Sci. 6512.
Vipinadas, M.J., Thamizharasi, A., 2016. Detection and grading of diseases in banana leaves using machine learning. Int. J. Sci. Eng. Res. 7 (7).
Xue, Bing, Zhang, Mengjie, Browne, Will N., Yao, Xin, 2016. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 20 (4), 606–626.
Yousefi, Ehsan, Baleghi, Yasser, Sakhaei, Sayed Mahmoud, 2017. Rotation invariant wavelet descriptors, a new set of features to enhance plant leaves classification. Comput. Electron. Agric. 140, 70–76.
Zhang, Shanwen, Wu, Xiaowei, You, Zhuhong, Zhang, Liqing, 2017. Leaf image based cucumber disease recognition using sparse representation classification. Comput. Electron. Agric. 134, 135–141.
