Computers and Electronics in Agriculture 162 (2019) 493–504
Segmentation and counting algorithm for touching hybrid rice grains

Suiyan Tan a,b, Xu Ma b,⁎, Zhijie Mai, Long Qi, Yuwei Wang

a College of Electronic Engineering, South China Agricultural University, Guangzhou 501642, China
b College of Engineering, South China Agricultural University, Guangzhou 501642, China

⁎ Corresponding author. E-mail address: [email protected] (X. Ma).

https://doi.org/10.1016/j.compag.2019.04.030
Received 20 June 2018; Received in revised form 11 April 2019; Accepted 23 April 2019; Available online 02 May 2019
0168-1699/ © 2019 Elsevier B.V. All rights reserved.
Keywords: Hybrid rice; Watershed algorithm; Neural network; Corner point; Circular template

Abstract
The ability to segment and count touching hybrid rice grains can enable the automatic evaluation of seeding performance. In this paper, an algorithm that separates and counts touching rice grains is presented, consisting of the watershed algorithm, an improved corner point detection algorithm, and a neural network classification algorithm. To reduce the over-segmentation regions caused by the watershed algorithm, wavelet transform and a Gaussian filter are first applied to enhance the contrast intensity of the grayscale image and to reduce noise, followed by an improved corner point detection algorithm based on an adaptive-radius circular template. The over-segmentation regions are identified and merged by detecting whether the endpoints of the splitting lines coincide with the corner points. Considering that regions of different grain quantity vary in appearance and corner point characteristics, a Back Propagation (BP) neural network classifier is employed to classify the under-segmentation regions into five categories: one grain, two grains, three grains, four grains, and more than four grains. The proposed algorithm was tested on three hybrid rice varieties under different realistic touching scenarios formed in the sowing process. The test results showed that the corner point detection algorithm using an adaptive-radius circular template achieved better corner point accuracy than that using a fixed-radius template, and that the over-segmentation regions were more accurately merged. For grain regions of different grain quantity, the BP neural classifier achieved an average classification accuracy of 92.4%, which was suitable for counting rice grains in under-segmentation regions. The overall segmentation and counting method proposed in this study achieved an average accuracy of 94.63%, verified against manual counting results.
1. Introduction
Hybrid rice is one of the most important grain crops in China. Currently, commercial farming of hybrid rice mostly employs transplanting techniques, where seeds are sown and raised into seedlings in nursery trays, and the seedlings are later transplanted using compatible machinery. Due to hybrid rice's strong tillering ability, a proper seeding rate of around one to three grains in each hole of the nursery tray is desired for optimum growth and efficient utilization of rice grains (He et al., 2018). In the actual sowing process, the seeding performance is hard to predict, since it is influenced by many parameters, including the operational parameters of the precision seeder and the physical properties of the rice (Zhan et al., 2015). Physical properties of rice grains, such as size, moisture and weight, are affected by the external environment, such as hot weather. In order to ensure a proper and precise seeding rate, it is necessary to accurately evaluate the seeding performance, especially the seeding quantity in each hole of the trays. In this way, when the mechanical seeding performance degrades due to external conditions, manual or mechanical reseeding, or automatic adjustment of the seeder operation parameters, can be implemented in a timely manner.

However, automatically counting rice grains in a seeding tray is challenging: once hybrid rice grains are sown onto the soil in the nursery tray through the precision seeder, seeds usually touch each other. When imaged under a camera, only some grains appear as single grains, while the others appear as clusters of multiple touching grains that adhere to, overlap, or cross-link with one another. Therefore, accurate segmentation and counting of the hybrid rice grains is the key to the precise evaluation of seeding performance.

In the literature, previous studies have been conducted on the automatic segmentation and counting of touching objects, including touching grains and cells. Techniques for segmenting and counting touching objects mainly include ellipse fitting, the Active Contour Model (ACM), the watershed algorithm, and concave point detection. Zhang et al. (2005) utilized an ellipse-fitting algorithm to separate and count touching grains: the edges between touching grains were fitted by a direct least-square ellipse-fitting method, and the touching grains were separated by a morphology transform with the representative ellipse. Wang and Chou (2004) developed an ACM to automatically segment touching rice kernels.
In their method, an Inverse Gradient Vector Flow (IGVF) was first introduced to automatically generate a field center for each individual rice kernel; the centers were then employed as references for setting the initial deformable contours required for building the ACM. The final contours of the rice kernels were calculated, and the touching rice kernels were successfully separated. Duan et al. (2011) used the watershed algorithm to segment and count touching rice spikelets. Qin et al. (2013) proposed a modified watershed segmentation algorithm based on the extended-maxima transform to segment touching corn kernels, which improved the over-segmentation problem of the traditional watershed algorithm. Mebatsion and Paliwal (2011) employed a Fourier analysis and edge curvature algorithm to separate touching kernels: they applied Fourier approximation to smooth the boundary contours, calculated the curvature values along the boundary, and detected the concave points. For multiple concave points, nearest-neighbor and radian critical distance difference criteria were used to draw segmentation lines. Visen et al. (2001) segmented occluding grains using classification and concave point detection. Grains were first characterized as either an isolated kernel or a group of occluding kernels by determining the degree of their inertial equivalent ellipse; then, for the occluding kernels, concave points were detected and splitting lines were drawn under multiple criteria. Zhong et al. (2009) employed a combined watershed and concave point algorithm to segment touching rice kernels. Similar algorithms have been used for cell separation. Bai et al. (2009) developed a cell segmentation technique based on concave point detection and ellipse fitting. Wang et al. (2012) segmented touching cells by building a classifier and detecting the bottleneck of the cells: a classifier was first constructed to classify the shape of the cells and determine whether they needed to be segmented, and the segmentation line was then determined by detecting the bottleneck part of the touching cells.

The above algorithms for segmenting and counting touching objects have their own advantages and disadvantages. The ellipse-fitting method demands that the object have an approximately regular ellipsoid shape; otherwise the splitting rate decreases and the time consumption increases (Grbić et al., 2016). The watershed algorithm is widely used for separation; it uses topological theory and identifies separation lines through a simulation of water flooding (Wang and Paliwal, 2006; Bleau and Leon, 2000). Its major deficiency is that it is sensitive to noise and rough boundaries in the digital image, and hence prone to over-segmentation. In addition, when the grains are tightly stuck to each other, it is likely to cause under-segmentation. Concave point detection is based on the principle that touching objects generally form concave points at the adhesion part and that splitting lines can be generated through concave point pair search (Zhang and Li, 2017). The concave points are obtained by detecting abrupt changes of the boundary curvature of the touching object. However, previous research (Hobson et al., 2009; Mebatsion and Paliwal, 2011) has shown that the boundary curvature is severely influenced by noise and boundary roughness.
To address these problems, Liu et al. (2017) used changes in the response value of a circular template as it moves along the boundary to detect concave points, which is less affected by noise. However, when the grains are under complicated touching conditions, a fixed-radius circular template may lead to false or missed corner point detection, and the corner point detection accuracy decreases. Furthermore, concave point pair matching is the hardest step in the separation process, and the determination of splitting lines considers multiple criteria, for example the nearest-neighbor criterion and the radian critical distance criterion (Lin et al., 2014; Yao et al., 2017). In summary, different techniques have been applied to the separation and counting of touching grains. Among them, the watershed algorithm and concave point detection are the most effective, yet they have limitations in several aspects. For example, the watershed algorithm may lead to over-segmentation and under-segmentation, while the concave point method can serve as a supplemental method that improves segmentation accuracy by searching for concave point pairs.
Despite this, applications of watershed-based segmentation algorithms have been limited to scenarios of simple touching grains. In one study, grains were placed on a black or white background instead of a more realistic environment, so the noise level was low and the image contrast was good. In another study, grains were placed manually or by a vibrator, so the touching pattern was generated artificially and most grains only slightly adhered to each other.

To address the problem of segmenting and counting rice grains under more realistic and challenging overlapping conditions, we worked directly on hybrid rice grains that were sown onto the soil by the precision seeder from a height of 20–30 cm. The grains touched each other in natural and complex patterns, including serious and slight adhesion, cross-linking, and overlapping. To overcome the limitations of the watershed algorithm and the concave point method, an improved algorithm that separates and counts touching grains is proposed, combining the watershed algorithm, an adaptive-radius circular-template corner point algorithm, and neural network classification of under-segmentation rice grain regions. First, image preprocessing is performed to improve image quality. Then, after the watershed algorithm, over-segmentation regions are identified and merged based on whether the endpoints of the splitting lines coincide with corner points; for this purpose, an improved corner point detection algorithm using an adaptive-radius circular template is proposed. Considering that regions with different numbers of rice grains vary in appearance and corner point characteristics, the under-segmentation regions are classified by a BP neural network into five categories: single grain, two grains, three grains, four grains, and more than four grains. Finally, the accuracy of the proposed method is assessed by comparing its counting results with manual counts. The proposed algorithm overcomes the over-segmentation and under-segmentation problems that are inherent in the watershed method. In this paper, seeding images were acquired with a low-cost webcam, which is economically suitable for applications in actual agricultural production. The objective of this paper is to develop an algorithm to separate and count touching rice grains, providing a reference for the precise evaluation of hybrid rice seeding performance.

2. Materials and methods

2.1. Test system

The test system consisted of an HD Logitech C920 webcam, a three-dimensional rail, an illuminant cabinet, and a computer, as presented in Fig. 1. The Logitech C920, which featured autofocus, was mounted on the illuminant cabinet and aimed vertically downward to capture the top view of the nursery seeding tray. The range of view could be adjusted through the three-dimensional rail. The computer was connected to the camera with a USB cable, through which the nursery tray images were received and processed. The illuminant cabinet was installed in the captured area. Five LED surface illuminants were fixed on the top and four sides of the cabinet, which ensured a uniform distribution of illumination. To obtain good quality images, the illuminants were adjusted for optimal illumination. The test system was installed on the 2SJB-500 automatic sowing line, between the precision seeder and the soil coverer device.
The hybrid rice grains were sown by the precision seeder onto the soil on the nursery tray, resulting in different patterns of touching grains. Matlab 2014a was used as the image processing tool, running on a computer with a 3.19 GHz Intel Core i5-4200U processor.

2.2. Test materials

Hybrid rice grains conventionally have an elliptical shape. To explore the adaptability of the proposed algorithm to different types of rice grains, three hybrid rice varieties, Teyou 338, Peizataifeng, and Taifengyou 208, were used in the tests.
Fig. 1. Test system and its structure diagram: 1. Precision seeder, 2. Soil coverer, 3. Digital camera, 4. Illuminants, 5. Illuminant cabinet, 6. Nursery tray, 7. Computer.
Among these, Teyou 338 was relatively plump and round with a length-width ratio of 2.6, Peizataifeng was slender with a length-width ratio of 3.6, and Taifengyou 208 was even more slender with a length-width ratio of 4.4. A total of 300 seeds of each variety were randomly selected to measure their length, width and thickness, and the thousand-grain weight of each variety was measured 20 times. The physical properties of the rice grains are shown in Table 1. Fig. 2a1, a2 and a3 show images of the three varieties of hybrid rice. Rice grains were soaked for several hours before the sowing process and then germinated to the chest-breaking stage, with little or no bud present.
Table 1
Mean and standard errors of grain physical properties.

Variety          Length (mm)    Width (mm)     Thickness (mm)   Thousand grain weight (g)
Teyou 338        8.15 ± 0.36    3.10 ± 0.14    2.17 ± 0.09      28.8 ± 0.5
Peizataifeng     8.77 ± 0.45    2.44 ± 0.10    1.94 ± 0.06      25.0 ± 0.7
Taifengyou 208   9.58 ± 0.40    2.18 ± 0.14    1.89 ± 0.07      24.8 ± 0.6
Fig. 2. Image preprocessing and the watershed algorithm: Row (1) Teyou 338, Row (2) Peizataifeng, Row (3) Taifengyou 208, Column (a) RGB image of seeding tray, Column (b) Grayscale image, Column (c) Watershed algorithm applied directly to the grayscale image, Column (d) Watershed algorithm applied after image preprocessing.
2.3. Image preprocessing

The watershed algorithm is a classical segmentation algorithm that identifies separation lines between different regions through the simulation of a water flooding process. However, the watershed algorithm is susceptible to noise and to inconspicuous grayscale contrast in the image (Liu et al., 2016). Noise produces many spurious lines, forming many over-segmentation regions, while inconspicuous grayscale contrast leads to the loss of correct separation lines, forming under-segmentation regions. In Fig. 2, column (a) shows the RGB seeding images and column (b) shows the corresponding grayscale images. When the watershed algorithm is applied directly to the grayscale image, many spurious separation lines are identified, leading to serious over-segmentation, as shown in column (c). The contrast problem can be reduced by enhancing the contrast intensity of the grayscale image. To this end, wavelet transform is applied to enhance the edge information and change the gray scale of different regions in the images (Kim et al., 2016). Since the wavelet transform may amplify noise and details in the images, a Gaussian filter is then applied to reduce noise and smooth rough edges. Finally, the watershed algorithm is applied. After these preprocessing steps on the grayscale image, the over-segmentation is significantly alleviated, as shown in column (d) of Fig. 2.

Although the image preprocessing steps improve the segmentation quality, over-segmentation problems are still present due to noise, fine textures and rough image boundaries. Furthermore, under-segmentation problems caused by touching rice grains still need to be solved. Solving the over-segmentation and under-segmentation problems is crucial for the accurate automatic counting of hybrid rice grains.
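As an illustration, the enhancement-filter-watershed chain of this section could be sketched as follows. This is a Python sketch (the authors worked in MATLAB); the wavelet ('haar'), the detail gain of 1.5, the Gaussian sigma and the distance-transform markers are assumptions rather than the paper's exact settings, and "seeding_tray.png" is a hypothetical file name.

```python
# Minimal sketch: wavelet-based contrast enhancement, Gaussian filtering,
# then a marker-controlled watershed. All parameter values are illustrative.
import numpy as np
import pywt
from scipy import ndimage as ndi
from skimage import color, filters, io
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

gray = color.rgb2gray(io.imread("seeding_tray.png"))

# Wavelet transform: amplify the detail sub-bands to sharpen edges, then reconstruct.
cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
enhanced = pywt.idwt2((cA, (1.5 * cH, 1.5 * cV, 1.5 * cD)), "haar")

# Gaussian filter suppresses the noise amplified by the enhancement step.
smooth = ndi.gaussian_filter(enhanced, sigma=1.0)

# Binarize with Otsu's threshold and run a watershed seeded from distance-transform peaks.
binary = smooth > filters.threshold_otsu(smooth)
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=5, labels=ndi.label(binary)[0])
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
regions = watershed(-distance, markers, mask=binary)
print("candidate grain regions:", regions.max())
```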
2.4. Merging of over-segmentation regions based on an improved corner point detection

A corner point refers to a point with a sharp change of boundary curvature, which can be either the intersection of two edges or a feature point with two main directions in its neighborhood. According to the direction of the curvature variation, corner points can be divided into concave points and convex points. Fig. 3 shows images of some touching rice grains. Hybrid rice grains are elliptic and have two obvious convex points at both ends of the long axis. When two hybrid rice grains touch each other, concave points are formed at the touching points; in some cases, convex points may also be formed. More than two touching grains sometimes form a closed region, as shown in Fig. 3c, whose boundary pixels are referred to as a hole. It was previously reported that the endpoints of splitting lines must be concave points, convex points or holes (Liu et al., 2017). Since spurious splitting lines whose endpoints are not concave points, convex points or holes can be removed, the key to identifying and merging over-segmentation regions is to detect the corner points along the grain boundaries and the endpoints of the splitting lines.

2.4.1. Corner point detection along rice grain boundaries using a circular template

In this study, a circular template is used to perform corner point detection. First, the binary image separating rice grains from the background is obtained by thresholding with Otsu's method (Otsu, 1979), as shown in Fig. 4b, and the grain boundary pixels are obtained using the Canny operator (Canny, 1986), as shown in Fig. 4c. Next, an appropriate radius of the circular template is selected, and the center of the template moves along the grain boundary. At each boundary pixel p, a Corner Response Function (CRF) is computed to identify corner points. At a boundary point p, its CRF value is
CRF(p) = np / ap                                                        (1)
where ap is the area (in pixels) of the circular template and np is the grain area (in pixels) inside the circular template; hence CRF(p) is a ratio between two areas. If the radius of the template is fixed, then ap is fixed. The CRF(p) values at different points along the boundary of a rice grain are shown in Fig. 5, where it can be observed that the changes of the CRF are in accordance with the changes of boundary curvature. An abrupt change of the CRF value indicates a corner point. Hence, when a boundary pixel is a local maximum of the CRF curve with a CRF value greater than a threshold, it can be considered a concave point. Similarly, when a boundary pixel is a local minimum of the CRF curve with a CRF value less than a threshold, it can be considered a convex point (Zhong et al., 2009). To avoid identifying the two local maxima of the CRF on the flat sides of a grain as corner points, such as points p2 and p6 in Fig. 5, the local maximum threshold for concave points and the local minimum threshold for convex points are set to 0.6 and 0.4 respectively, based on simulation and test results.

2.4.2. Drawbacks of the fixed-radius circular template method

Although the circular template is widely adopted in the literature for concave point detection, it cannot correctly identify corner points in all situations. Since, with a circular template, an abrupt change of the CRF indicates a corner point, only one continuous grain boundary, which encloses the grain area np, is allowed in the circular template. Otherwise, an abrupt change of the CRF might not correspond to an abrupt change of grain boundary curvature, which may lead to false or missed detection of corner points. For example, Fig. 6a shows a binary image of three touching grains and the corner points detected by a fixed-radius circular template. Fig. 6f shows the CRF along the boundary, where points b' and e' are not corner points. However, points b' and e' are local maxima of the CRF curve with CRF values greater than the threshold of 0.6, and hence are falsely identified as concave points by the fixed-radius circular template method. This is because, as the circular template moves to b' or e', as shown in Fig. 6b and c, two different grains very close to each other are both within the circular template, so the corresponding np values are made up of two disjoint areas enclosed by two separate grain boundaries, denoted as S0 and S1 in Fig. 6b and c. The abrupt changes in the CRF values at positions b' and e' do not reflect abrupt changes in the curvature of the local boundary segments on which the center point is located. Similarly, although points c' and k' are concave points, they are not correctly identified as concave points by the fixed-radius circular template method.
Fig. 3. Touching rice grains and corner points: (a), (b) Concave and convex points formed at the touching points, (c) Closed region enclosed by three touching grains. Note that concave points are marked with a blue 'o' and convex points with a red '+'. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 4. Image processing: (a) RGB image, (b) Binary image after Otsu’s method, (c) Grain boundary image using Canny operator.
Fig. 5. Corner point detection with circular template and the CRF.
Fig. 6. Corner points detection using circular template with fixed radius: (a) Corner points detection using fixed-radius circular template. Concave points marked with blue ‘o’ and convex points marked with red ‘+’, (b) Circular templates at position b’, (c) Circular templates at position e’, (d) Circular template at position c’, (e) Circular template at position k’, (f) CRF along the boundary. Note that: In image (b), (c), (d) and (e), grain boundary pixels marked with green ‘*’, circular template center marked with blue ‘o’, and circular template boundary marked with yellow ‘*’. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
c' is supposed to be a local maximum of the CRF curve, but it is detected as a non-local maximum. Although k' is a local maximum of the CRF curve, its CRF value is less than the 0.6 threshold. This is because, when the circular template moves to c' or k', as shown in Fig. 6d and e, parts of two different grains are within the circular template and the corresponding np value is the area enclosed by two separate boundaries, denoted as S0 and S1 in Fig. 6d and e. The abrupt changes in the CRF values at positions c' and k' do not reflect abrupt changes in the curvature of the local boundary segments on which the center point is located.
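For concreteness, the CRF of Eq. (1) with a fixed-radius template and the 0.6/0.4 thresholding of Section 2.4.1 could be sketched as below. This is an assumed Python implementation, not the authors' code: the boundary is traced with scikit-image rather than the Otsu/Canny chain of Fig. 4, and the local-extremum window (order=3) is a guess. Because ap never changes here, the sketch reproduces the failure cases above whenever a second, disjoint boundary enters the template.

```python
# Sketch of CRF(p) = n_p / a_p with a fixed-radius circular template and the
# concave/convex thresholds of Section 2.4.1. Tracing and window size are assumed.
import numpy as np
from scipy.signal import argrelextrema
from skimage import measure

def crf_fixed_radius(binary, radius=5):
    """binary: boolean grain mask. Returns (contour, CRF values) for each boundary."""
    h, w = binary.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy ** 2 + xx ** 2) <= radius ** 2          # circular template mask
    area = disk.sum()                                  # a_p (fixed when the radius is fixed)
    results = []
    for contour in measure.find_contours(binary.astype(float), 0.5):
        crf = []
        for y, x in np.round(contour).astype(int):
            y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
            window = binary[y0:y1, x0:x1]
            tpl = disk[y0 - y + radius:y1 - y + radius, x0 - x + radius:x1 - x + radius]
            crf.append((window & tpl).sum() / area)    # n_p / a_p
        results.append((contour, np.array(crf)))
    return results

def corner_points(crf, concave_th=0.6, convex_th=0.4):
    """Concave points: local maxima above 0.6; convex points: local minima below 0.4."""
    maxima = argrelextrema(crf, np.greater, order=3)[0]
    minima = argrelextrema(crf, np.less, order=3)[0]
    return maxima[crf[maxima] > concave_th], minima[crf[minima] < convex_th]
```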
2.4.3. An improved adaptive-radius circular template method

To alleviate the aforementioned problems of the fixed-radius circular template method for corner point detection, an improved corner point detection algorithm, which uses an adaptive-radius circular template, is proposed. The radius of the template adjusts adaptively according to the local situation to ensure that only one continuous grain boundary is within the template at all times. Thus, the np value corresponds to one area enclosed by one continuous grain boundary.
Fig. 7. Corner points detection using circular template with adaptive radius: (a) Corner points detection using adaptive-radius circular template. Concave points marked with blue ‘o’ and convex points marked with red ‘+’, (b) The adaptive-radius circular templates at corner points, (c) The discontinuous boundaries S0 and S1 within circular template at position k, (d) The discontinuous boundaries S0 and S1 within circular template at position c, (f) CRF along the boundary. Note that: In image (c) and (d), the initial radius template boundary marked with yellow ‘*’ and adaptive-radius template boundary marked with pink ‘*’. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
The change of the CRF can then accurately reflect the change of the grain boundary curvature. The improved algorithm is applied to the same grain region as in Fig. 6a, with the result shown in Fig. 7a. The algorithm is described in detail as follows. First, an initial value r is set as the template radius, for example r = 5, which corresponds to a circular template with a radius of 5 pixels. Then, the radius of the circular template centered at p is adjusted as it moves along the grain boundaries. If one continuous grain boundary is within the template with radius r, the radius remains at the initial value. Otherwise, the radius adjusts to an appropriate value according to the following rules. Among the n + 1 (n > 0) disjoint grain boundaries in the template with radius r, there is only one boundary that contains the center point p; it is marked as S0 and categorized as Group 1. Apart from S0, the other n discontinuous boundaries are categorized as Group 2 and marked as S1, S2, …, Sn. In most cases, there are two discontinuous grain boundaries in the template: the boundary that contains the center point p, marked as S0, and one other boundary in Group 2, marked as S1, as shown in Fig. 7c and d. The discontinuous boundaries can be obtained by applying the AND operation to the grain boundary image and the circular template image. The distances from the template center p to each point of the n boundaries in Group 2 are calculated and denoted as d1, d2, …, dm, where m is the total number of pixels in all n boundaries. Among these distances, the minimum distance dmin = min(d1, d2, …, dm) is taken as the appropriate radius of the circular template. Fig. 7b shows the adaptive radius of the circular templates at different corner points. Fig. 7a shows the corner points detected by circular templates with adaptive radius, and Fig. 7f shows the corresponding CRF curve along the boundary of the touching rice grains. Here, the adaptive-radius template ensures that the np value is one area enclosed by the grain boundary on which the template center is located, and the change of the CRF is in accordance with the change of local boundary curvature. Thus, if a local maximum or a local minimum of the CRF satisfies the threshold condition, it can be correctly detected as a concave or convex point. Table 2 lists the CRF values and template radii when the fixed-radius and adaptive-radius methods are applied to Figs. 6a and 7a respectively.
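A sketch of the adaptive-radius rule is given below, under assumed data structures (a binary boundary image such as the Canny output of Fig. 4c); the helper name and the 8-connectivity used to group boundary segments are assumptions. The CRF of Eq. (1) is then recomputed at p with this radius, so np is always enclosed by the single boundary on which p lies.

```python
# Sketch of the adaptive-radius rule of Section 2.4.3: if the template centred
# at boundary point p contains more than one disjoint boundary segment, shrink
# the radius to d_min, the minimum distance from p to the Group 2 segments
# that do not contain p. Function name and connectivity are assumptions.
import numpy as np
from scipy import ndimage as ndi

def adaptive_radius(boundary, p, r_init=5):
    """boundary: boolean edge image (e.g. Canny output); p = (row, col) on the boundary."""
    h, w = boundary.shape
    y, x = p
    yy, xx = np.mgrid[0:h, 0:w]
    disk = (yy - y) ** 2 + (xx - x) ** 2 <= r_init ** 2
    local = boundary & disk                                  # boundary pixels inside the template
    labels, n = ndi.label(local, structure=np.ones((3, 3)))  # disjoint boundary segments
    if n <= 1:
        return r_init                                        # one continuous boundary: keep r
    group2 = (labels > 0) & (labels != labels[y, x])         # segments S1..Sn not containing p
    ys, xs = np.nonzero(group2)
    d = np.sqrt((ys - y) ** 2 + (xs - x) ** 2)               # distances d1..dm from p
    return max(1, int(np.floor(d.min())))                    # d_min becomes the new radius
```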
2.4.4. Merging of over-segmentation regions

When multiple grains touch each other, concave or convex points occur at the touching points. As introduced in Section 2.4, the endpoints of splitting lines must be corner points or holes. In this paper, the over-segmentation regions are identified from the individual splitting lines. A splitting line whose endpoints are holes or corner points is identified as a correct splitting line, while a splitting line whose endpoints are neither holes nor corner points is identified as an over-splitting line. The two grain regions that share an over-splitting line are identified as over-segmented regions and merged into one region using morphological operations.

The merging of over-segmentation regions is achieved using the following steps:

(1) Firstly, corner points along the grain boundaries are identified by the improved corner point method using the adaptive-radius circular template, and the coordinates of the corner points are saved, as shown in Fig. 8b. Closed regions in the image are then found by subtracting the binary image from the filled grain image obtained by the hole-filling algorithm (Soille, 2003), and the hole coordinates are saved.
(2) Secondly, the splitting line images are obtained from the watershed algorithm. The splitting lines are then skeletonized (Liu et al., 2016) and the endpoints of the splitting lines are extracted, as shown in Fig. 8c. The coordinates of the endpoints are saved.
(3) Next, the distances between each endpoint and all corner points are calculated. Since the CRF curve is smoothed, the endpoints of a correct splitting line might not coincide with a corner point exactly. Therefore, if the distance between an endpoint and a corner point is less than two pixels, the endpoint is considered to coincide with the corner point and is eliminated (a sketch of this test follows the list). As shown in Fig. 8d, the remaining over-splitting lines and their remaining endpoints are plotted together with the correct splitting lines.
(4) Finally, the over-segmentation regions are merged. Two regions that share an identified over-splitting line are merged using a morphological close operation, and the resulting merged regions are added to the binary image, as shown in Fig. 8g.
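The coincidence test in step (3) could be sketched as follows, following the criterion of this section that a correct splitting line has both endpoints on corner points or holes. The function name and the use of scipy's cdist are assumptions, and hole pixels could simply be appended to the corner point list.

```python
# Sketch of step (3): keep an endpoint only if it lies within two pixels of a
# detected corner point; a splitting line with a surviving (non-coinciding)
# endpoint is treated as an over-splitting line, and the two regions sharing
# it are merged in step (4). Names are illustrative, not from the paper.
import numpy as np
from scipy.spatial.distance import cdist

def find_over_splitting_lines(line_endpoints, corner_points, tol=2.0):
    """line_endpoints: list of ((y1, x1), (y2, x2)); corner_points: (N, 2) array."""
    corners = np.asarray(corner_points, dtype=float)
    over_splitting = []
    for idx, (p1, p2) in enumerate(line_endpoints):
        ends = np.array([p1, p2], dtype=float)
        nearest = cdist(ends, corners).min(axis=1)   # distance to the closest corner point
        if np.any(nearest > tol):                    # an endpoint coincides with no corner
            over_splitting.append(idx)
    return over_splitting
```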
2.5. Classification of the under-segmentation regions based on a BP neural network

After the previous segmentation and merging steps, some under-segmentation regions are left, either because some correct splitting lines are not formed by the watershed algorithm, or because correct splitting lines are misjudged as over-splitting lines. In the literature, splitting lines for under-segmentation regions are identified by concave point pair search. Multiple criteria have to be considered, and the algorithm is complicated; in some cases, the splitting lines formed under different criteria give wrong counting results. For example, Fig. 9a shows two cross-linking grains; after concave point pair search and the drawing of splitting lines, the counting results shown in Fig. 9b–d are incorrect, namely three grains, three grains and four grains respectively.

Regions formed by different numbers of rice grains vary in appearance, which can be characterized by features such as area and corner point characteristics (Liu et al., 2017). Features extracted from under-segmentation grain regions, such as area, the numbers of concave and convex points and the number of closed regions, are useful indicators for counting the quantity of grains within. To exploit all available features of the under-segmentation regions and achieve a high-accuracy counting result, a Back Propagation (BP) neural network classifier is employed to classify the under-segmentation regions into five categories.
Table 2
CRF values and template radius using the fixed-radius and adaptive-radius methods.

Circular template with fixed radius                 Circular template with adaptive radius
p    radius/pixel  np/pixel  ap/pixel  CRF(p)       p    radius/pixel  np/pixel  ap/pixel  CRF(p)
a'   5             21        71        0.29         a    5             21        71        0.29
b'   5             50        71        0.71         b    2             6         11        0.54
c'   5             52        71        0.73         c    2             8         11        0.73
d'   5             53        71        0.75         d    2             8         11        0.77
e'   5             50        71        0.71         e    2             6         11        0.58
f'   5             25        71        0.35         f    5             25        71        0.35
g'   5             36        71        0.51         g    5             38        71        0.52
h'   5             42        71        0.6          h    2             7         11        0.65
i'   5             25        71        0.36         i    5             26        71        0.36
j'   5             26        71        0.37         j    5             27        71        0.38
k'   5             42        71        0.59         k    2             7         11        0.65
Fig. 8. Determining and merging of the over-segmentation regions: (a) Part of RGB image of seeding tray, (b) Watershed segmentation after image preprocessing, and corner point detection, (c) Splitting lines and endpoints, (d) Over-splitting lines and their remaining endpoints, (e) Partial enlargement of image (c), (f) Partial enlargement of image (d), (g) Merging result of over-segmentation regions. Note that endpoints of splitting lines marked with green ‘Δ’, convex points marked with red ‘+’ and concave points marked with blue ‘o’. Part of the seeding tray was captured, and the shapes of grain regions that connected to image edges were irregular. Therefore, these grain regions were eliminated. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Neural networks can learn and model non-linear and complex relationships between input and output variables, and hence are suitable for this task. Neural network classifiers have previously been implemented for grain quality inspection and variety classification (Chaugule and Mali, 2017; Kurtulmuş and Ünal, 2015). The network used in this study is a two-layer BP neural network. The number of input neurons is four, representing the four input features: area, number of concave points, number of convex points, and number of closed regions. The hidden layer has six neurons. The number of output neurons is five, corresponding to the five categories of under-segmentation regions: those containing one grain, two grains, three grains, four grains, or more than four grains.
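A sketch of this 4-6-5 classifier is given below, using scikit-learn's MLPClassifier as a stand-in for the authors' MATLAB network; the activation, solver and feature vectors are placeholders, and in practice the network is trained on the labelled regions described in Section 3.2.

```python
# Sketch of the two-layer BP network: 4 input features (area, number of
# concave points, number of convex points, number of closed regions),
# 6 hidden neurons, 5 output classes. Training data are placeholders,
# not measurements from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

X_train = np.array([[310, 0, 2, 0],      # assumed feature vector of a one-grain region
                    [640, 2, 3, 0],      # two grains
                    [950, 4, 5, 0],      # three grains
                    [1280, 6, 6, 1],     # four grains
                    [1800, 9, 8, 2]])    # more than four grains
y_train = np.array([1, 2, 3, 4, 5])      # class labels: grain count category

clf = MLPClassifier(hidden_layer_sizes=(6,), activation="logistic",
                    solver="lbfgs", max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

region = np.array([[620, 2, 3, 0]])      # features of one under-segmentation region
print("predicted category:", clf.predict(region)[0])
```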
2.6. The overall process for segmentation and counting of touching hybrid rice grains

In conclusion, the complete process of separating and counting the touching grains is shown in Fig. 10, comprising image preprocessing, watershed segmentation, merging of over-segmentation regions based on corner point detection, and classification of under-segmentation regions using the BP neural network. In the corner point step, Otsu's method and the Canny operator are first used to obtain the grain boundary pixels. Since most regions contain a single grain after the watershed algorithm and the merging of over-segmentation regions, grain regions whose area is less than a certain threshold are considered to contain one grain and are excluded from the under-segmentation classification step in order to speed up the algorithm; the remaining regions are classified and counted by the BP neural network. The threshold for identifying single-grain regions is 1.2 times the average area of 500 single-grain regions of the three hybrid rice varieties.
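This area-based pre-filter could be sketched as follows; `labels` is assumed to be the labelled region image produced by the watershed and merging steps, and the function name is illustrative.

```python
# Sketch of the single-grain pre-filter of Section 2.6: regions smaller than
# 1.2 times the mean single-grain area are counted directly as one grain, the
# rest are passed to the BP classifier (Section 3.3 reports a resulting
# threshold of about 220 pixels for the paper's images).
import numpy as np
from skimage import measure

def area_prefilter(labels, reference_single_areas, factor=1.2):
    """labels: labelled region image; reference_single_areas: areas of known single grains."""
    threshold = factor * np.mean(reference_single_areas)
    singles, to_classify = 0, []
    for region in measure.regionprops(labels):
        if region.area < threshold:
            singles += 1                    # small region: counted as one grain
        else:
            to_classify.append(region)      # large region: classified by the BP network
    return singles, to_classify
```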
3. Results and analysis

3.1. Performance of corner point detection methods

To demonstrate the performance of the proposed corner point algorithm, two seeding images of each hybrid rice variety were acquired from the test system, and the fixed-radius and adaptive-radius circular template methods were compared in separating touching grains. After applying image preprocessing and the watershed algorithm, the corner points along the grain boundaries were detected using the circular template with a fixed radius and with an adaptive radius, respectively. Then, the over-segmentation regions were determined and merged. The results, including the number of concave and convex points, the number of regions merged, and the number of under-segmentation regions left to be counted, are listed in Table 3.
Fig. 9. Error counting results after concave point pair search.
Fig. 10. Block diagram of separating and counting the touching hybrid rice grains.

Table 3
Comparison results of corner point methods.

                             Circular template with a fixed radius                Circular template with adaptive radius
Hybrid rice     Image No.    Concave   Convex   Regions   Under-segmentation      Concave   Convex   Regions   Under-segmentation
                             points    points   merged    regions left            points    points   merged    regions left
Taifengyou 208  1            232       965      461       58                      450       1341     404       32
Taifengyou 208  2            213       1020     445       48                      414       956      383       27
Teyou 338       1            273       1132     477       38                      367       1116     440       20
Teyou 338       2            252       1075     464       27                      348       1060     432       15
Peizataifeng    1            244       954      324       50                      351       939      286       33
Peizataifeng    2            203       907      408       32                      284       898      385       19
Table 4
Grain regions classification result of Teyou 338.

Results                       1 grain   2 grains   3 grains   4 grains   The others
Size of training set          2000      600        300        150        100
Size of test set              2852      691        349        204        108
Correct classification        2835      685        318        171        88
False classification          17        6          31         33         20
Classification accuracy/%     99.4      99.1       91.2       83.8       81.5
Fig. 11. Two misclassified results.
3.2. Classification results of grain regions based on the BP neural network

In this study, the BP neural network was employed to classify and count the number of rice grains in under-segmentation regions. To demonstrate the classification accuracy, twenty-five seeding images of each hybrid rice variety were acquired from the test system. The single-grain, two-grain, three-grain, four-grain, and more-than-four-grain regions were extracted, and these five types of regions were divided into a training set and a test set. Four feature parameters, namely area, number of concave points, number of convex points, and number of closed regions, were extracted automatically for each region. The BP neural network was first trained using only the training set and later tested using the test set. Tables 4–6 show the classification results on the test set for the different hybrid rice varieties. Overall, the mean classification accuracy was 91% for Teyou 338, 92.92% for Peizataifeng and 93.2% for Taifengyou 208, giving an overall average accuracy of 92.4% over the three varieties. In general, the accuracy decreased somewhat for regions of more than three grains. The more grains stick together, the more complicated their appearance may be; for instance, grains may stand upright or overlap vertically after the sowing process, and such regions are prone to misclassification. As shown in Fig. 11, grain region 1 has four grains stuck together, two of them heavily overlapping vertically, and it is therefore misclassified as three grains. In the other case, grain region 2 has five grains; one of the grains is mildewed and its color is similar to the soil (background), and therefore grain region 2 is misclassified as four grains.

Table 5
Grain regions classification result of Peizataifeng.

Results                       1 grain   2 grains   3 grains   4 grains   The others
Size of training set          2300      650        275        150        150
Size of test set              2327      650        283        146        155
Correct classification        2313      633        248        127        149
False classification          14        17         35         19         27
Classification accuracy/%     93.2      99.4       97.4       87.6       87.0
Table 6
Grain regions classification result of Taifengyou 208.

Results                       1 grain   2 grains   3 grains   4 grains   The others
Size of training set          2000      600        270        150        150
Size of test set              1952      609        299        173        181
Correct classification        1949      599        265        140        178
False classification          3         10         34         33         3
Classification accuracy/%     99.8      98.4       88.6       80.9       98.3
Table 7
Counting accuracy of the proposed method.

                  Peizataifeng                Teyou 338                   Taifengyou 208
Image No.         1       2       3           1       2       3           1       2       3
Manual count      466     462     552         568     550     613         445     560     455
Proposed method   498     485     568         597     518     656         464     593     480
Accuracy/%        93.13   95.02   97.1        94.89   94.18   92.98       95.73   94.10   94.50
Fig. 12. Segmentation and counting result of Teyou 338: (a) RGB image of Teyou 338, (b) After detection of corner points and splitting-line endpoints, the remaining over-splitting lines are marked in pink, (c) Determining and merging of over-segmentation regions, (d) Classification result of under-segmentation regions.
3.3. Accuracy of the overall algorithm for segmentation and counting of touching rice grains

Finally, the overall accuracy of the proposed algorithm in segmenting and counting touching rice grains, which combines the watershed algorithm, corner point detection and neural network classification, was verified by manual counting. The proposed method was tested on three RGB images for each of the three hybrid varieties Peizataifeng, Teyou 338 and Taifengyou 208, each image containing from 400 to 700 grains. The counting results of the proposed algorithm were compared to those of manual counting. The accuracy of the proposed method is shown in Table 7, indicating that the average accuracy over the three varieties was 94.63%. Fig. 12 shows the intermediate steps of separating and counting the touching grains. Fig. 12a shows one of the RGB seeding images of Teyou 338, and Fig. 12b shows the image after watershed segmentation; after detection of the corner points and the endpoints of the splitting lines, the over-splitting lines left were marked in pink. The merging results of the over-segmentation regions are shown in Fig. 12c, and the overall counting results in Fig. 12d. In this study, grain regions whose area was less than 220 pixels were considered single-grain regions and filled with yellow, as shown in Fig. 12d; these regions were counted and excluded from the classification step. The remaining regions were then classified using the BP neural network: regions classified as one-grain regions were filled with green, regions classified as two-grain regions were filled with red, and regions classified as three-grain regions were filled with blue.
3.4. Discussion

(1) From the test results in Table 3, corner point detection using the adaptive-radius circular template has better accuracy than that using the fixed-radius circular template. The fixed-radius circular template is likely to cause false or missed detection of corner points, resulting in correct splitting lines being misjudged as over-splitting lines and wrong grain regions being merged. From Table 3, the number of under-segmentation regions left after merging is significantly lower when the adaptive-radius template method is used, even though more regions are merged when the fixed-radius circular template method is used. Therefore, the improved corner point method using the adaptive-radius circular template improves the overall segmentation and counting of touching rice grains.
(2) The average classification accuracy of grain regions based on the BP neural network was 92.4%. Tables 4–6 show that the BP neural classifier achieves high classification accuracy for regions of one, two and three grains, while the accuracy decreases for regions of more than three grains. In this paper, some simple characteristics, such as area, corner points and holes, were used for classification. To improve the classification accuracy for regions of more than four grains, more characteristics, for example shape and texture, could be explored.
(3) In this study, the watershed algorithm and corner point detection were first applied to separate touching grains. The under-segmentation grain regions that were left mostly contained no more than three grains. Therefore, the high classification accuracy of the BP neural network for regions of up to three grains is suitable for counting the under-segmentation regions.
(4) The overall method proposed in this study achieves an average accuracy of 94.63% when verified against manual counting results. The approach is prone to fail in detecting corner points where the boundary curvature change is not obvious. Further research will therefore focus on developing corner point algorithms for cases where the curvature change is subtle or slow.

4. Conclusions

In this paper, an accurate segmentation and counting algorithm for touching hybrid rice grains was proposed. The algorithm separates and counts touching rice grains based on the watershed segmentation algorithm, an improved corner point algorithm and a BP neural network classification algorithm. The proposed algorithm was tested on three hybrid rice varieties with different realistic touching scenarios formed in the sowing process, and achieved an average accuracy of 94.63% compared with the manual counting results. The proposed method has the potential to improve the precise evaluation of seeding performance and to be used in actual agricultural production, once integrated into the automatic rice grain seeding and sowing pipeline developed in our research group.

Acknowledgements

The authors gratefully acknowledge the financial support from the National Key Research and Development Program of China (Grant No. 2017YFD0700802); the National Natural Science Foundation of China (Grant No. 51675188); the Earmarked Fund for Modern Agro-industry Technology Research System (Grant No. CARS-01-43); the PHD Start-up Fund of the Natural Science Foundation of Guangdong Province of China (Grant No. 2017A030310354); and the China Scholarship Council.

References
Bai, X., Sun, C., Zhou, F., 2009. Splitting touching cells based on concave points and ellipse fitting. Pattern Recognit. https://doi.org/10.1016/j.patcog.2009.04.003.
Bleau, A., Leon, L.J., 2000. Watershed-based segmentation and region merging. Comput. Vis. Image Underst. https://doi.org/10.1006/cviu.1999.0822.
Canny, J., 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. https://doi.org/10.1109/TPAMI.1986.4767851.
Chaugule, A., Mali, S.N., 2017. A new method using feature extraction for identifying paddy rice species for quality seed selection. Imaging Sci. J. https://doi.org/10.1080/13682199.2017.1317901.
Duan, L., Yang, W., Bi, K., Chen, S., Luo, Q., Liu, Q., 2011. Fast discrimination and counting of filled/unfilled rice spikelets based on bi-modal imaging. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2010.11.004.
Grbić, R., Grahovac, D., Scitovski, R., 2016. A method for solving the multiple ellipses detection problem. Pattern Recognit. https://doi.org/10.1016/j.patcog.2016.06.031.
He, H., You, C., Wu, H., Zhu, D., Yang, R., He, Q., Xu, L., Gui, W., Wu, L., 2018. Effects of nursery tray and transplanting methods on rice yield. Agron. J. https://doi.org/10.2134/agronj2017.06.0334.
Hobson, D.M., Carter, R.M., Yan, Y., 2009. Rule based concave curvature segmentation for touching rice grains in binary digital images. In: 2009 IEEE Instrumentation and Measurement Technology Conference, I2MTC 2009. https://doi.org/10.1109/IMTC.2009.5168727.
Kim, S.E., Jeon, J.J., Eom, I.K., 2016. Image contrast enhancement using entropy scaling in wavelet domain. Signal Process. https://doi.org/10.1016/j.sigpro.2016.02.016.
Kurtulmuş, F., Ünal, H., 2015. Discriminating rapeseed varieties using computer vision and machine learning. Exp. Syst. Appl. https://doi.org/10.1016/j.eswa.2014.10.003.
Lin, P., Chen, Y.M., He, Y., Hu, G.W., 2014. A novel matching algorithm for splitting touching rice kernels based on contour curvature analysis. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2014.09.015.
Liu, T., Chen, W., Wang, Y., Wu, W., Sun, C., Ding, J., Guo, W., 2017. Rice and wheat grain counting method and software development based on Android system. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2017.08.011.
Liu, T., Wu, W., Chen, W., Sun, C., Zhu, X., Guo, W., 2016a. Automated image-processing for counting seedlings in a wheat field. Precis. Agric. https://doi.org/10.1007/s11119-015-9425-6.
Liu, Z., Cheng, F., Zhang, W., 2016b. A novel segmentation algorithm for clustered flexional agricultural products based on image analysis. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2016.05.009.
Mebatsion, H.K., Paliwal, J., 2011. A Fourier analysis based algorithm to separate touching kernels in digital images. Biosyst. Eng. https://doi.org/10.1016/j.biosystemseng.2010.10.011.
Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man. Cybern. https://doi.org/10.1109/TSMC.1979.4310076.
Qin, Y., Wang, W., Liu, W., Yuan, N., 2013. Extended-maxima transform watershed segmentation algorithm for touching corn kernels. Adv. Mech. Eng. https://doi.org/10.1155/2013/268046.
Soille, P., 2003. Morphological image analysis: principles and applications. Comput. Simul. Stud. https://doi.org/10.1007/978-3-662-05088-0.
Visen, N.S., Shashidhar, N.S., Paliwal, J., Jayas, D.S., 2001. Identification and segmentation of occluding groups of grain kernels in a grain sample image. J. Agric. Eng. Res. https://doi.org/10.1006/jaer.2000.0690.
Wang, H., Zhang, H., Ray, N., 2012. Clump splitting via bottleneck detection and shape classification. Pattern Recognit. https://doi.org/10.1016/j.patcog.2011.12.020.
Wang, W., Paliwal, J., 2006. Separation and identification of touching kernels and dockage components in digital images. Can. Biosyst. Eng. / Le Genie des Biosyst. au Canada.
Wang, Y.C., Chou, J.J., 2004. Automatic segmentation of touching rice kernels with an active contour model. Trans. ASAE 47, 1803–1811.
Yao, Y., Wu, W., Yang, T., Liu, T., Chen, W., Chen, C., Li, R., Zhou, T., Sun, C., Zhou, Y., Li, X., 2017. Head rice rate measurement based on concave point matching. Sci. Rep. https://doi.org/10.1038/srep41353.
Zhan, Z., Yafang, W., Jianjun, Y., Zhong, T., 2015. Monitoring method of rice seeds mass in vibrating tray for vacuum-panel precision seeder. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2015.03.007.
Zhang, G., Jayas, D.S., White, N.D.G., 2005. Separation of touching grain kernels in an image by ellipse fitting algorithm. Biosyst. Eng. https://doi.org/10.1016/j.biosystemseng.2005.06.010.
Zhang, W., Li, H., 2017. Automated segmentation of overlapped nuclei using concave point detection and segment grouping. Pattern Recognit. https://doi.org/10.1016/j.patcog.2017.06.021.
Zhong, Q., Zhou, P., Yao, Q., Mao, K., 2009. A novel segmentation algorithm for clustered slender-particles. Comput. Electron. Agric. https://doi.org/10.1016/j.compag.2009.06.015.