Enhancement of template-based method for overlapping rubber tree leaf identification

Computers and Electronics in Agriculture 122 (2016) 176–184



Sule T. Anjomshoae *, Mohd Shafry Mohd Rahim
Dept of Software Engineering, Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia

Article history: Received 6 September 2015; received in revised form 24 January 2016; accepted 2 February 2016

Keywords: Rubber tree leaves; Template matching; Overlapping object; Key point detection; SIFT

Abstract

The position of rubber tree leaflets is one of the critical features for rubber clone classification. These leaflets exist in three possible positions: overlapping, touching, or separated. This paper proposes a template-based method for overlapping rubber tree leaf identification. Initially, a key point based feature extraction method is adopted. The key features of overlapping and non-overlapping leaves assist in identifying similar shapes through comparison, using the nearest neighbor algorithm. This process is implemented by constructing a directory which consists of various rubber leaf images with different positions. Next, the key points in the input leaf image are compared with the key points of the template image to identify the position of the leaflets. The outcome of this study proves that the template-based method is suitable for overlapping and non-overlapping rubber tree leaf identification.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

Rubber clone inspection and verification is an important process because it must guarantee that recommended rubber clones produce the maximum yield in future (Shigematsu et al., 2011). Plant experts examine rubber tree clones based on physical characteristics of leaves. Currently, researchers are investigating the automation of clone classification based on rubber tree leaflets (Ong et al., 2012). There are several features of the rubber tree leaf that differentiate clone types, such as the leaf tip, leaf base, form of the leaf, and leaf margin. Another unique feature is the leaflet position, which is created by three compound leaflets radiating from one mutual leaf base (palmate leaflets) (Anjomshoae et al., 2015). These leaflets exist in three positions: overlapping, touching, or separated. In our previous study (Anjomshoae, 2014), we discussed that these positions can be classified into two categories, overlapping and non-overlapping leaflets, based on the angle ranges of the petioles. The identification of overlapping object structures is important not only for feature classification, but also for data analysis in general (Silva and Zhao, 2013). The recognition of overlapping objects in an image is challenging due to the inherent complexity of shape and background noise (Saba et al., 2010). It requires inferring the

* Corresponding author at: U8A-802, Kolej Perdana, UTM, Skudai, Johor, Malaysia. Tel.: +60 183193890. E-mail addresses: [email protected] (S.T. Anjomshoae), [email protected] (M.S.M. Rahim).
http://dx.doi.org/10.1016/j.compag.2016.02.001
0168-1699/© 2016 Elsevier B.V. All rights reserved.

contour of the occluded part in certain cases. Several techniques have been used to retrieve information about overlapping object boundaries, involving Law's texture and Canny's edge detection (Canny, 1986; Kumar Mishra et al., 2012; Caballero and Carmen Aranda, 2012). Other researchers have also used contour-based retrieval methods for overlapping object identification (Zhang et al., 2012; Sun et al., 2013). However, a single contour-based retrieval method or a boundary detection method alone may not be suitable for overlapping leaf identification, because leaves have similar intensity levels. Several researchers have confirmed the good fit of key point extraction methods for object recognition problems. Key point extraction methods are used for contour-based retrieval methods as well as template-based methods (Ye and Shan, 2014; Yoo et al., 2014; De Araújo and Kim, 2011). To the best of our knowledge, template-based recognition is an effective method for rubber tree leaflet position identification. This method is able to extract additional features such as corners, edges, and blobs. The key features of overlapping and non-overlapping leaves can be used to identify similar shapes through comparison with the nearest neighbor algorithm, based on various similarity measures. Therefore, this study addresses the problem of overlapping leaf identification using a template matching method through comparison of key points. A common problem with the template matching method is increasing the accuracy of the matching results. Wrong matches can be reduced by setting up a larger directory. However, the long iteration throughout the matching process is another consideration


for the key feature based identification. Therefore, speeding up this process is crucial during the matching process. To increase the accuracy and speed up the process, this research enhanced the template matching result through several factors. First, decrementing the distance ratio within a loop allowed more key points to be matched, so a limited number of directory images was sufficient for the leaflet position identification. Second, the contrast adjustment process in the background of rubber tree leaf images increased the accuracy of the matching results. The findings of this study indicate that reducing the number of template images decreases the execution time without affecting the template matching result. Based on the ground-truth dataset validation process, the results of this study showed that the proposed framework outperforms the previous method.

The paper is organized as follows. The next section presents a general overview of overlapping object recognition, overlapping leaf segmentation, and key point extraction methods for template-based recognition. Section 3 introduces the algorithm that is adopted in the template matching process. In Section 4, the template-based overlapping rubber tree leaf identification framework is introduced. The experimental results and the performance evaluations are shown in Section 5. Section 6 concludes this paper with an overview of the proposed framework.

2. Related works

There are several research areas where object recognition methods have been applied successfully, such as image segmentation (Li et al., 2012), fingerprint classification (Munir et al., 2012), large-scale data analysis (Alzate and Suykens, 2011) and medical image analysis (Fabbri et al., 2014), among others. Although several object recognition algorithms have been proposed in recent years, most of them considered only single objects, not touching or overlapping objects.
However, there are some applications, such as shape analysis (Clausen et al., 2007), text segmentation (Abella-Pérez and Medina-Pagola, 2010) and medical image analysis (Park et al., 2013), among others, where it is common that one or more objects might overlap. For these types of applications, overlapping object recognition is essential. There are two major steps in the overlapping object recognition method: (1) segment the extracted contour into clusters; (2) group the segmented clusters which belong to the same class. However, there is no perfect method for segmentation, because segmentation serves different purposes (Melen and Ozanian, 1993). Contour segmentation mainly relies on two kinds of geometrical-feature methods: the Hough transformation method and the curvature-based method. The Hough transform can project the contour pixels on a 2D image plane onto the parameter plane to find object segments (Ballard, 1981; Pei and Horng, 1995). Though the randomized Hough transform method (Xu et al., 1990) could reduce the computational complexity, a random selection of contour pixels might cause a detection of false shapes when the objects are overlapped. On the other hand, several researchers have developed various methods of overlapping leaf segmentation. Valliammal proposed an approach to extract the leaf edges through a combination of a thresholding method and the H-maxima transformation (Valliammal and Geethalakshmi, 2011). Morphological filters like the h-maxima transform belong to the class of connected operators. They preserve contour information; however, whether this is an advantage or a disadvantage depends on the application. Furthermore, Valliammal and Geethalakshmi (2012) worked on a non-linear k-means algorithm to improve the segmentation results for high resolution images. At the first level of segmentation, K-means clustering is carried out to distinguish the structure of the leaf.
The process continued with a Sobel edge detector to eliminate the unwanted segments and extract the exact part of the leaf


shape. However, this method does not separate touching objects well, specifically if the overlapping objects have a similar level of intensity. Jie-Yun and Hong (2011) used color features for segmentation purposes. The traditional transformation from the RGB model to the HSI model is improved, and at the same time the leaf color information is extracted using the similarity distance between pixels. It has a high degree of accuracy on 24-bit true color images; on the other hand, the algorithm does not produce rapid results. The active contour model is another method for image segmentation that Cerutti et al. (2011) used in their project, with a particular enhancement of the method proposed by Guillaume et al. (2011). They presented a system for leaf segmentation in natural images that combines basic segmentation steps with an estimation of descriptors that shows the general shape of a simple leaf. The algorithm has promising results compared to the standard active contour; however, it is not suitable for overlapping objects. Although several clustering algorithms have been reported in the literature addressing the problem of overlapping clusters, they have limitations in practical problems. For this reason, we have considered addressing the problem of overlapping clustering through a template-based method using SIFT features.

SIFT is the shortened form of "Scale-invariant feature transform". The dominant proponent of SIFT is Lowe, who, in 1999, proposed SIFT to address feature matching challenges that arise due to scaling, rotation, and transformation. The SIFT descriptor is robust to moderate perspective transformations and illumination variations. The standard SIFT algorithm first detects interest points by searching for the scale-space extrema of a difference-of-Gaussians (DoG) function. Next, the position-dependent histograms of local gradient directions around the interest points are statistically accumulated as the SIFT descriptor.
In the end, the SIFT descriptor is utilized to match the corresponding interest points between different images. Experimentally, the SIFT algorithm has been proven to be useful in practice for image matching and object recognition, including image copy detection (Ling et al., 2013), multi-object recognition (Kim et al., 2012), image stitching (Brown and Lowe, 2007), neurosurgery (Qian et al., 2013), human action recognition (Liu et al., 2013), and video tracking (Saeedi et al., 2006). Recently, several SIFT-based template matching methods were proposed. De Araújo and Kim (2011) presented a grayscale template matching algorithm for matching colored objects. However, the execution time is the main drawback of this method. Gurjal and Kunnur (2012) used the template matching algorithm for hand gesture recognition. This algorithm compares the given data with a data set to determine whether or not it can be classified as a member of the stored data set. This method is computationally appropriate and can be quite accurate for a small number of gestures. The algorithm loses its efficiency, however, for a larger data set.

3. Methodology

In this work, the SIFT key point detection method was used to detect the main features of the rubber tree leaves, such as the occluded region of the leaflets and edges. If the region is occluded, SIFT not only extracts the edge and corner key points of that region, it extracts the blob and ridge features as well. If the leaves are separated, the region between leaflets remains blank and no feature will be detected, as demonstrated in Fig. 1. The SIFT algorithm is defined through four main phases: finding the scale-space extrema, key point localization, orientation assignment, and key point descriptor.

Finding scale-space extrema: This phase involves examinations on all scales and image locations that are implemented by a difference-of-Gaussian (DoG) function. In a 3 × 3 region, the greatest or the smallest value between them can be found by


comparison of the current scale with the upper and lower scales. The results of these comparisons are considered as potential interest points. The Gaussian function comprises the scale-space function and the DoG. The variable-scale Gaussian is defined as G(x, y, σ), and the scale space of an image, L(x, y, σ), is produced by convolving the variable-scale Gaussian with an input image, I(x, y):

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2} \quad (1)$$

$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y) \quad (2)$$

Then, the difference-of-Gaussian (DoG) operation is produced by the following formulation:

$$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma) \quad (3)$$

Fig. 1. Key point feature extraction.

Key point localization: Once a potential key point has been spotted by comparison of the neighboring pixels, the next stage is to perform a detailed fit to the nearby data for location, scale, and ratio of principal curvatures. A 3D quadratic function is used to refine the location of the key point candidates by expanding the scale-space function D(x, y, σ), shifted so that the origin is at the sample point:

$$D(\mathbf{x}) = D + \frac{\partial D^{T}}{\partial \mathbf{x}} \mathbf{x} + \frac{1}{2} \mathbf{x}^{T} \frac{\partial^2 D}{\partial \mathbf{x}^2} \mathbf{x} \quad (4)$$

where \(\mathbf{x} = (x, y, \sigma)^{T}\) is the offset from the sample point at which D and its derivatives are evaluated. The location of the extremum, \(\hat{\mathbf{x}}\), is obtained by taking the derivative of this function with respect to \(\mathbf{x}\) and setting it to zero:

$$\hat{\mathbf{x}} = -\left(\frac{\partial^2 D}{\partial \mathbf{x}^2}\right)^{-1} \frac{\partial D}{\partial \mathbf{x}} \quad (5)$$

Hereafter, the process removes insignificant candidates from the list of key points. Poorly localized key points and candidates with low contrast are eliminated at this stage. The function value at the extremum, \(D(\hat{\mathbf{x}})\), is used to reject low-contrast points against a default threshold and is obtained by substituting Eq. (5) into Eq. (4):

$$D(\hat{\mathbf{x}}) = D + \frac{1}{2} \frac{\partial D^{T}}{\partial \mathbf{x}} \hat{\mathbf{x}} \quad (6)$$

If the value of \(|D(\hat{\mathbf{x}})|\) is below 0.03, the point is discarded. Poorly localized points along the edges obtained from the DoG function, which have a small principal curvature in the perpendicular direction, are also removed. The Hessian matrix is used to compute the ratio of principal curvatures at the position of the key point:

$$\mathbf{H} = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix} \quad (7)$$

The ratio of principal curvatures can then be tested with this formula, where r is the ratio of the largest to the smallest eigenvalue:

$$\frac{(D_{xx} + D_{yy})^2}{D_{xx} D_{yy} - (D_{xy})^2} < \frac{(r + 1)^2}{r} \quad (8)$$

Consequently, if the inequality fails, the key point is discarded from the potential key point list.

Orientation assignment: The remaining points are assigned an orientation based on local image gradient directions. These equations describe the process:

$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2} \quad (9)$$

$$\theta(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right) \quad (10)$$
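The first phase, building the DoG scale space of Eqs. (1)-(3) and searching each 3 × 3 × 3 neighborhood for extrema, can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation; the function names and parameter values (σ = 1.6, k = √2, five scales) are assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution with edge padding (NumPy only)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    smooth = lambda v: np.convolve(np.pad(v, radius, mode="edge"), kernel, mode="valid")
    rows = np.apply_along_axis(smooth, 1, img.astype(float))
    return np.apply_along_axis(smooth, 0, rows)

def dog_pyramid(image, sigma=1.6, k=2 ** 0.5, num_scales=5):
    """L(x, y, sigma) for a geometric ladder of scales, then
    D = L(k * sigma_i) - L(sigma_i), as in Eqs. (1)-(3)."""
    blurred = [gaussian_blur(image, sigma * k ** i) for i in range(num_scales)]
    return [blurred[i + 1] - blurred[i] for i in range(num_scales - 1)]

def local_extrema(dogs):
    """Pixels strictly greater or smaller than all 26 neighbours in the
    3x3x3 cube spanning the current, upper, and lower DoG scales."""
    stack = np.stack(dogs)
    points = []
    for s in range(1, stack.shape[0] - 1):
        for y in range(1, stack.shape[1] - 1):
            for x in range(1, stack.shape[2] - 1):
                cube = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].ravel()
                neighbours = np.delete(cube, 13)  # drop the centre voxel
                v = stack[s, y, x]
                if v > neighbours.max() or v < neighbours.min():
                    points.append((s, y, x))
    return points

# A spike in the middle DoG layer is the unique 3x3x3 extremum.
layers = [np.zeros((5, 5)), np.zeros((5, 5)), np.zeros((5, 5))]
layers[1][2, 2] = 1.0
print(local_extrema(layers))  # -> [(1, 2, 2)]
```

The surviving candidates would then go through the localization and edge-rejection tests of Eqs. (4)-(8), which this sketch omits.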

For the image L(x, y), the gradient magnitude, m(x, y), and the orientation, θ(x, y), are computed by pixel differences. The gradient orientations of sample points around the key points form a gradient histogram, which covers the full 360° range of orientations. The samples are added to the histogram based on their gradient magnitude weight and a Gaussian-weighted circular window. The maximum point in the histogram is identified, and any other local peak within 80% of the maximum value is used to generate a key point with that orientation. Key points at the same location can therefore have more than one direction.

Key point descriptor: The image gradients are measured at the selected scale in the region around each key point. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.

4. Template-based leaflet position identification

The design of template-based classification is one of the more practical ways of overlapping object identification. Once overlapping and non-overlapping leaflet positions are defined, similar positions can be identified accordingly using the nearest neighbor algorithm. Therefore, this study suggested the shape-context-based feature extraction method for computing similarity between the input image and the template image to identify the leaflet positions. For the template matching, the directory was constructed using rubber leaf images that comprise both overlapping and non-overlapping leaves. Initially, the proposed framework converts images to grayscale and applies a threshold operation to obtain a better result for the key point detection. In this study, Otsu's method was used to perform the histogram-based thresholding (Otsu, 1979). The algorithm assumes that the image to be thresholded contains two classes of pixels, foreground and background. Then, it calculates the optimum threshold separating these two classes.
The algorithm searches for the threshold that minimizes the within-class variance, defined as a weighted sum of the variances of the two classes:

$$\sigma_w^2(t) = w_1(t)\,\sigma_1^2(t) + w_2(t)\,\sigma_2^2(t) \quad (11)$$
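The brute-force search for the threshold minimizing Eq. (11) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function name and the toy image are ours.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t minimising w1(t)*var1(t) + w2(t)*var2(t)
    over 8-bit intensities, as in Eq. (11)."""
    pixels = gray.ravel()
    best_t, best_score = 0, np.inf
    for t in range(1, 256):
        below, above = pixels[pixels < t], pixels[pixels >= t]
        if below.size == 0 or above.size == 0:
            continue  # a valid threshold must split off two classes
        w1, w2 = below.size / pixels.size, above.size / pixels.size
        score = w1 * below.var() + w2 * above.var()  # Eq. (11)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Two clearly separated intensity classes: the threshold falls between them.
img = np.array([[10, 12, 11, 200], [198, 202, 12, 201]])
print(10 < otsu_threshold(img) <= 198)  # -> True
```

In practice an equivalent search over the 256-bin histogram (rather than the raw pixels) is used for speed; the result is the same.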

Weights \(w_i\) are the probabilities of the two classes separated by a threshold t, and \(\sigma_i^2\) are the variances of these classes. Fig. 2 demonstrates the general procedure of the leaflet position identification process. In the template matching process, identifying leaflet positions using the approximate nearest neighbor search includes two steps: (1) key point extraction of the template image and the input image; (2) key point matching. The matching process consists of four main modules: key point extraction, key point matching, iteration



[Fig. 2. Matching process: template directory → search for match → overlapping / separate → matching result.]

process, and elimination process. Finally, the matched images will be displayed. The flowchart of the template matching process is illustrated in Fig. 3.

The first module reads images and extracts SIFT key points. In this module, key points of the input image and key points of the directory image are extracted simultaneously. It returns the following parameters: the image array in double format; the descriptors, where each row keeps the invariant descriptor data for each key point; and the location. The location parameter stores four values: row, column, scale, and orientation of the key point. The orientation value is allocated in the range [−π, π] radians. After extracting all the features, the feature matching process is executed between the input image and the directory image. The features of the directory image are loaded and the features of the input image are correlated with the directory image features to identify the overlapping and non-overlapping leaves using the

nearest neighbor algorithm. Fig. 4 illustrates the correlation of key points between the template directory and the input image. The nearest neighbor operation is produced by the following formulation:

$$d_{T1} = \sum_{i=1}^{M} d_{i1}, \qquad d_{T2} = \sum_{i=1}^{M} d_{i2} \quad (12)$$

$$Ratio_1 = \left(\frac{d_{11}}{d_{T1}}, \frac{d_{21}}{d_{T1}}, \frac{d_{31}}{d_{T1}}, \ldots\right), \qquad Ratio_2 = \left(\frac{d_{12}}{d_{T2}}, \frac{d_{22}}{d_{T2}}, \frac{d_{32}}{d_{T2}}, \ldots\right) \quad (13)$$

where d is distance, T is the number of input images to match, and M is the number of matched key points. This module returns the locations of maximum 'depth' values and stores them in the database array. If the number of matched key points is greater than 2, it calculates the distances between the matched key points and the center of the key points.

$$Distance = \left|Ratio_1 - Ratio_2\right| < Threshold \quad (14)$$

$$Valid\ Points = \sum Distance \quad (15)$$

$$Validity\ Ratio = \frac{Number\ of\ Valid\ Points}{Number\ of\ Matched\ Points} \quad (16)$$

Next, it calculates the validity ratio of the key points simply by dividing the number of valid matched key points by the number of matched key points. Once the validity ratio is determined, distances whose differences are above the threshold are rejected. This procedure is implemented to

Fig. 3. Template matching flowchart.

Fig. 4. Correlation of key points on rubber leaves (input image and directory image).


control the similar pattern of the matching key points from the center. The total of the differences which are below the threshold is accepted as valid matching key points. The algorithm selects the best candidates and checks for the next possible iteration over the selected candidates. Consequently, it increases the distance ratio in the next loop within the selected template images to identify more corresponding key points. Furthermore, the threshold value decreases to 0.01 to identify additional matching key points. The number of candidates is stored in the mask and the iteration continues until only one candidate remains. Fig. 5 demonstrates the result of key feature matching between the input image and the database image.
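The validity-ratio computation of Eqs. (12)-(16) can be sketched as follows. This is an illustrative reading of the formulation, not the authors' implementation; the function name, the vectorized form of the ratios, and the toy distances are assumptions.

```python
import numpy as np

def validity_ratio(dist_input, dist_template, threshold=0.05):
    """dist_input, dist_template: distances from each matched key point to
    the centre of the key points in the input and template images.
    Each distance is normalised by the total (Eqs. (12)-(13)); a match is
    valid when the normalised patterns agree within the threshold."""
    d1 = np.asarray(dist_input, float)
    d2 = np.asarray(dist_template, float)
    ratio1 = d1 / d1.sum()                       # Eq. (12)-(13)
    ratio2 = d2 / d2.sum()
    valid = np.abs(ratio1 - ratio2) < threshold  # Eq. (14)
    return valid.sum() / len(valid)              # Eqs. (15)-(16)

# A template whose key-point layout resembles the input scores high;
# a dissimilar layout scores low.
inp = [10.0, 20.0, 30.0, 40.0]
good = [11.0, 19.0, 31.0, 39.0]
bad = [40.0, 5.0, 5.0, 50.0]
print(validity_ratio(inp, good), validity_ratio(inp, bad))  # -> 1.0 0.0
```

Because the ratios are normalized by the total distance, this comparison is insensitive to a uniform scaling of the key-point layout between the input and the template.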

5. Experimental results

[Fig. 6. Distributions of positions (overlap, touching, separate) according to clone type; y-axis: number of leaves; clones: PB 350, RRIM 2001, RRIM 2002, RRIM 2025, RRIM 3001.]

The leaf samples were collected from the Rubber Research Institute of Malaysia (RRIM). The sampled data contains five different rubber tree clones, namely RRIM 3001, RRIM 2025, RRIM 2001, RRIM 2002, and PB 350. The classified samples were scanned and a new dataset was created with a total of 250 rubber tree leaf images. Based on our data set, distributions of the positions according to clone type are as shown in Fig. 6. Clones PB 350, RRIM 2001, RRIM 2002, and RRIM 3001 have overlapping leaflets while clone RRIM 2025 featured separated leaflet positions. Several positions were considered while the new dataset was created. The background noise was removed to avoid unnecessary key point detection and to increase the quality of the identification result. The data sets are separated into two groups, namely training and validation. The training part is constructed using fifty samples to verify that the program works without any mistakes. Further, to compare and validate our proposed method with the

previous work, a dataset with another fifty samples from different leaflet positions has been tested. The division of the dataset and its characteristics are given in Table 1. The template directory includes eleven rubber tree leaves and the algorithm makes a decision among these images. The statistical results of the experiments are demonstrated in Fig. 7 for overlapping leaves and non-overlapping leaves, respectively. This figure shows the corresponding correct key point matches between the input image and the directory image. The red line of the graph shows the number of key points that are repeatably detected features. The blue line is the present method and the green line is the previous method; they show the number of key points that have their descriptors correctly matched to a directory image. Here, we evaluated the stage of key point detection and localization.

Fig. 5. Key feature matching process.


Table 1. Specification of the dataset.

Dataset        Features    Samples    Training    Validation
Overlapping    4           160        25          25
Separated      4           90         25          25


The detected key points are put in correspondence between the input image and the directory image by comparing a pixel to its neighborhoods at the current and adjacent scales. In Fig. 7, we provide the number of key points that are detected by the previous and the proposed method. Compared with the previous method, the proposed method obtains more matching key points. This evidences that the proposed method is better at key point matching. Then, aiming at the next stage, we calculated the identification rate for each key point in every image. Table 2 shows the measures in terms of percentage rate for the rubber tree leaflet position identification. The features matched by

the previous method yielded a lower accuracy percentage for both overlapping and non-overlapping leaves; this causes an undesirable compromise for the identification task. At the highest, the accuracy ratio between our method and the previous method is 97.5% to 79.5%. This means that our method is better at key point matching when the number of key points found by the previous method is not higher than the number found by the present method. Fig. 8 shows the comparison between the previous method results and the present method results. Based on the visual comparison, the proposed method displayed more acceptable matching performance than the previous method. The matching results from the previous method displayed either wrong or ambiguous matching, whereas the result of the proposed method is closer to the input image, which is considered the best matching result. The most striking result to emerge from this comparison is that although the present method uses fewer templates, the matching results are more satisfactory. The previous method uses a larger template directory, which increases the processing time, and it also hardly finds


Fig. 7. (a) Overlapping and (b) non-overlapping rubber tree leaflet position recognition results based on matched key points. The red line of the graph shows the number of key points that are repeatably detected features. The blue line (Model 1, the present method) and the green line (Model 2, the previous method) show the number of key points that have their descriptors correctly matched to a directory image. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Table 2. The identification accuracy for the rubber tree leaflet positions.

Overlapping leaves:
Clone ID       Present method (%)   Previous method (%)
350_T2_04      80.8                 58.4
350_T3_09      96.3                 69.8
2001_T1_03     84.2                 56.1
2001_T1_06     97.0                 66.1
2001_T1_09     98.7                 76.6
2001_T2_01     97.7                 69.7
2001_T2_03     98.1                 68.3
2001_T2_04     93.3                 70.9
2001_T2_05     72.9                 54.1
2001_T2_06     81.7                 67.6
2001_T2_07     98.2                 54.0
2001_T2_09     97.1                 64.4
2001_T3_06     80.4                 71.3
2001_T4_09     94.9                 48.0
2002_T1_08     93.0                 42.5
2002_T2_06     95.8                 57.7
2002_T2_08     99.3                 44.1
2002_T2_11     82.5                 66.2
2002_T4_02     76.3                 69.8
2002_T5_07     61.7                 61.5
2025_T3_01     75.9                 70.5
3001_T2_06     96.0                 52.7
3001_T4_01     84.3                 62.7
3001_T4_10     91.5                 65.4
3001_T5_09     94.4                 52.5

Non-overlapping leaves:
Clone ID       Present method (%)   Previous method (%)
350_T1_04      85.2                 70.8
350_T1_09      80.7                 55.4
350_T2_02      97.2                 52.0
350_T3_05      87.6                 62.3
350_T3_06      89.4                 45.3
350_T4_05      98.3                 76.4
350_T4_12      81.4                 70.8
2001_T3_03     97.5                 79.5
2001_T4_05     84.7                 64.4
2001_T4_12     90.1                 73.8
2002_T3_04     97.2                 71.2
2002_T3_12     96.9                 62.5
2002_T5_02     95.0                 60.9
2002_T5_05     88.6                 52.0
2025_T1_01     96.2                 67.3
2025_T1_04     83.3                 58.0
2025_T3_09     92.6                 73.6
2025_T3_10     72.0                 51.3
2025_T4_02     82.3                 61.7
2025_T4_05     97.3                 70.8
2025_T4_06     87.4                 57.4
2025_T5_02     80.3                 64.7
2025_T5_10     96.3                 64.0
3001_T3_10     83.5                 56.6
3001_T5_02     93.1                 51.2


the correct matches. Further analysis showed that it is not reliable to use a black background as recommended by the previous method. From the comparison of the two results, it can be seen that the blank background images produced consistent results for the rubber leaf position identification (see Fig. 8). For further performance evaluation, our identification method was compared with a ground truth dataset in an image benchmarking experiment. The identification of overlapping and non-overlapping leaves was manually annotated in the ground truth dataset before the benchmarking process was carried out. Based on observation, outcomes were measured in terms of how accurately the algorithm finds matches. From the image matching process, three statuses can be concluded: wrong match (WM), ambiguous (AM), and correct match (CM). If the image corresponds with an opposite-position template, it is considered a wrong match. If the image matches with a template that is not a correct match yet is not a wrong match, the status is called ambiguous. The last status is that if the image corresponds to an exact position, it is called the correct match. All the non-overlapping leaf images were matched with the correct templates, while some uncertain matches were observed for overlapping leaves. For the overlapping leaves, twenty out of twenty-five images were matched with the correct (CM) template and the remainder were ambiguous (AM). An ambiguous match is when an image with a one-side overlapping leaf was matched with a template image in which both sides overlap. Some of the representative results from each dataset are given in Fig. 9.


6. Discussions and conclusions

The study explored the concept of overlapping leaf identification with regard to the development of an automated rubber clone inspection system. Rubber tree leaflet positions exist in three forms: overlapping, touching, or separated. This study focused on the feature extraction of rubber tree leaflet positions and their identification. The feature extraction phase is one of the most difficult tasks in image processing, and as many features as possible should be extracted for better object recognition. The rest of the following process, including object recognition and classification, all depends on the quality of the feature extraction process (Bala, 2010; Mahmood, 2012). There are several challenges and issues that need to be addressed in feature extraction of the overlapping rubber tree leaf. One of the challenges is that overlapping objects might have similar intensity levels. Therefore, it is challenging to infer the contour of the occluded part (Zhang et al., 2012). This study found the key point extraction method to be the best method to overcome this issue. The SIFT method was adopted in this research to extract key features to distinguish between the three leaf shape forms. SIFT is a practical method of detecting blob and ridge features, which are critical throughout the matching process (Song and Lu, 2013). The results of this study indicated that SIFT can obtain leaf shape information, including edge, corner, blob, and ridge features.


Fig. 8. Comparison of rubber leaflet position identification results between the previous method and the present method. Results show input images with the matched corresponding template images.


Fig. 9. Examples of results of rubber leaflet position identification based on extracted key features. Results show input images along with the matched corresponding template images. Images are scaled to 370 × 300 pixels.

The findings from this study enhanced the results of the template matching method in several ways. We tuned the original algorithm to obtain the optimum match from the directory, and the proposed algorithm uses fewer template images for rubber tree leaflet position identification, which reduces execution time without affecting the template matching result. Additionally, higher key point matching accuracy is achieved by removing noise from the background of the rubber tree leaf images. In our opinion, although the template-based method alone is not practical for rubber clone classification, its superior accuracy makes it suitable for overlapping object identification. Further studies should be directed toward classifying clones by detecting other important features of rubber tree leaves, such as the leaf tip, leaf base, leaf form, and leaf margin, which would greatly enhance the accuracy of rubber clone classification.
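The key point comparison against a directory of templates can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test, where the template accumulating the most good matches determines the predicted leaflet position. The sketch below is illustrative, not the paper's implementation: the function names are ours, and we assume descriptors are plain NumPy arrays (one row per key point).

```python
import numpy as np

def good_matches(query_desc, templ_desc, ratio=0.75):
    """Count query descriptors whose nearest template descriptor is
    clearly closer than the second nearest (Lowe's ratio test)."""
    # pairwise Euclidean distances, shape (n_query, n_template)
    d = np.linalg.norm(query_desc[:, None, :] - templ_desc[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    rows = np.arange(d.shape[0])
    nearest = d[rows, order[:, 0]]
    second = d[rows, order[:, 1]]
    return int(np.sum(nearest < ratio * second))

def identify_position(query_desc, template_dir):
    """Return the label of the template with the most good matches."""
    return max(template_dir,
               key=lambda label: good_matches(query_desc, template_dir[label]))

# Synthetic check: a query whose descriptors are near template A's
# should be identified as A rather than an unrelated template B.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 8))
B = rng.normal(size=(10, 8))
q = A + rng.normal(scale=0.01, size=A.shape)
label = identify_position(q, {"overlapping": A, "separated": B})
```

Using the match count (rather than, say, mean distance) makes the decision robust to key points on the occluded part that have no counterpart in the template.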

References

Abella-Pérez, R., Medina-Pagola, J.E., 2010. An incremental text segmentation by clustering cohesion. In: Proceedings of HaCDAIS, pp. 65–72.
Alzate, C., Suykens, J.A.K., 2011. Sparse kernel spectral clustering models for large-scale data analysis. Neurocomputing 74, 1382–1390. http://dx.doi.org/10.1016/j.neucom.2011.01.001.
Anjomshoae, S.T., Rahim, M.S.M., Javanmardi, A., 2015. Hevea leaf boundary identification based on morphological transformation and edge detection. Pattern Recognit. Image Anal. 25 (2), 291–294.
Anjomshoae, S., 2014. A new feature extraction algorithm for overlapping leaves of rubber tree. Masters Thesis, Universiti Teknologi Malaysia, Faculty of Computing.
Bala, A., 2010. An improved watershed image segmentation technique using MATLAB. Int. J. Sci. Eng. Res. 3 (6).
Ballard, D., 1981. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 13, 111–122.
Brown, M., Lowe, D.G., 2007. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vision, 59–73.

Caballero, C., Aranda, M.C., 2012. WAPSI: Web application for plant species identification using fuzzy image retrieval. In: Advances on Computational Intelligence. Springer, Berlin Heidelberg, pp. 250–259.
Canny, J., 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8 (6), 679–698.
Cerutti, G., Tougne, L., Vacavant, A., Coquin, D., 2011. A parametric active polygon for leaf segmentation and shape estimation. In: Advances in Visual Computing (ISVC'11), LNCS 6938. Springer, Berlin Heidelberg, pp. 202–213.
Clausen, S., Greiner, K., Andersen, O., Lie, K.A., Schulerud, H., Kavli, T., 2007. Automatic segmentation of overlapping fish using shape priors. In: Image Analysis. Springer, Berlin Heidelberg, pp. 11–20.
De Araújo, Kim, H.Y., 2011. Ciratefi: An RST-invariant template matching with extension to color images. Integr. Comput.-Aided Eng. 18 (1), 75–90.
Fabbri, S., Strnad, L., Caramazza, A., Lingnau, A., 2014. Overlapping representations for grip type and reach direction. Neuroimage 94, 138–146.
Gurjal, P., Kunnur, K., 2012. Real time hand gesture recognition using SIFT. Int. J. Electron. Electrical Eng. 2 (3), 19–33.
Jie-Yun, B., Hong, E.R., 2011. An algorithm of leaf image segmentation based on color features. Key Eng. Mater. 474–476, 846–851.
Kim, D., Rho, S., Hwang, E., 2012. Local feature-based multi-object recognition scheme for surveillance. Eng. Appl. Artif. Intell. 25, 1373–1380.
Kumar Mishra, P., Kumar Maurya, S., Kumar Singh, R., Kumar Misra, A., 2012. A semi automatic plant identification based on digital leaf and flower images. In: Advances in Engineering, Science and Management (ICAESM), 2012 International Conference, IEEE, pp. 68–73.
Ling, H.F., Yan, L.Y., Zou, F.H., Liu, C., Feng, H., 2013. Fast image copy detection approach based on local fingerprint defined visual words. Signal Process. 93, 2328–2338.
Liu, L., Shao, L., Rockett, P., 2013. Human action recognition based on boosted feature selection and naïve Bayes nearest-neighbor classification. Signal Process. 93, 1521–1530.
Li, Y., Shi, H., Jiao, L., Liu, R., 2012. Quantum evolutionary clustering algorithm based on watershed applied to SAR image segmentation. Neurocomputing 87, 90–98. http://dx.doi.org/10.1016/j.neucom.2012.02.008.
Lowe, D.G., 1999. Object recognition from local scale-invariant features. In: Proceedings of the 7th IEEE International Conference on Computer Vision, pp. 1150–1157.
Munir, M.U., Javed, M.Y., Khan, S.A., 2012. A hierarchical k-means clustering based fingerprint quality classification. Neurocomputing 85, 62–67. http://dx.doi.org/10.1016/j.neucom.2012.01.002.


Mahmood, N.H., 2012. Ultrasound liver image enhancement using watershed segmentation method. Int. J. Eng. Res. Appl. (IJERA) 2 (3), 691–694.
Melen, T., Ozanian, T., 1993. A fast algorithm for dominant point detection on chain coded contours. In: Fifth International Conference on Computer Analysis of Images and Patterns, LNCS 719. Springer, Berlin, pp. 245–253.
Ong Chin Wei, et al., 2012. Digital image recognition system for rubber clones produced in Malaysia. In: IRC 2012 International Rubber Conference.
Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9 (1), 62–66.
Park, C., Huang, J.Z., Ji, J.X., Ding, Y., 2013. Segmentation, inference and classification of partially overlapping nanoparticles. IEEE Trans. Pattern Anal. Mach. Intell. 35 (3).
Pei, S., Horng, J., 1995. Circular arc detection based on the Hough transform. Pattern Recogn. Lett. 16, 615–625.
Qian, Y., Hui, R., Gao, X.H., 2013. 3D CBIR with sparse coding for image-guided neurosurgery. Signal Process. 93, 1673–1683.
Saba, T., Rehman, A., Sulong, G., 2010. Non-linear segmentation of touched roman characters based on genetic algorithm. Int. J. Comput. Sci. Eng. 2 (6), 2167–2172.
Saeedi, P.P., Lawrence, D., Lowe, D.G., 2006. Vision-based 3-D trajectory tracking for unknown environments. IEEE Trans. Robotics 22 (1), 119–136.
Shigematsu, A., Mizoue, N., et al., 2011. Importance of rubberwood in wood export of Malaysia and Thailand. New Forest. 41 (2), 179–189.
Silva, T.C., Zhao, L., 2013. Uncovering overlapping cluster structures via stochastic competitive learning. Inform. Sci. 247, 40–61.

Song, F., Lu, B., 2013. An automatic video image mosaic algorithm based on SIFT feature matching. In: Proceedings of the 2012 International Conference on Communication, Electronics and Automation Engineering. Springer, Berlin Heidelberg, pp. 879–886.
Sun, S.W., Wang, Y.C.F., Huang, F., Liao, H.Y.M., 2013. Moving foreground object detection via robust SIFT trajectories. J. Vis. Commun. Image Represent. 24 (3), 232–243.
Valliammal, N., Geethalakshmi, S.N., 2012. Plant leaf segmentation using non linear K means clustering. Int. J. Comput. Sci. 9 (3).
Valliammal, N., Geethalakshmi, S.N., 2011. Hybrid image segmentation algorithm for leaf recognition and characterization. In: Proceedings of the International Conference on Process Automation, Control and Computing (PACC).
Xu, L., Oja, E., Kultanen, P., 1990. A new curve detection method: randomized Hough transform (RHT). Pattern Recogn. Lett. 11, 331–338.
Ye, Y., Shan, J., 2014. A local descriptor based registration method for multispectral remote sensing images with non-linear intensity differences. ISPRS J. Photogramm. Remote Sens. 90, 83–95.
Yoo, J., Hwang, S.S., Kim, S.D., Ki, M.S., Cha, J., 2014. Scale-invariant template matching using histogram of dominant gradients. Pattern Recogn. 47 (9), 3006–3018.
Zhang, H., Yanne, P., Liang, S., 2012. Plant species classification using leaf shape and texture. In: Industrial Control and Electronics Engineering (ICICEE), 2012 International Conference, IEEE, pp. 2025–2028.