Computers and Electronics in Agriculture 71S (2010) S48–S53
Combining discriminant analysis and neural networks for corn variety identification

Xiao Chen a, Yi Xun b, Wei Li a,*, Junxiong Zhang a

a College of Engineering, China Agricultural University, Beijing 100083, PR China
b The MOE Key Laboratory of Mechanical Manufacture and Automation, Zhejiang University of Technology, Zhejiang Province 310014, PR China

* Corresponding author. Tel.: +86 010 62736527; fax: +86 010 62737333. E-mail address: [email protected] (W. Li).
Article info

Article history: Received 7 September 2008; received in revised form 29 August 2009; accepted 14 September 2009.

Keywords: Machine vision; Neural networks; Discriminant analysis; Corn; Variety identification
Abstract

Variety identification is an indispensable tool to assure grain purity and quality. Based on machine vision and pattern recognition, five Chinese corn varieties were identified according to their external features. Images of non-touching corn kernels were acquired using a flatbed scanner. A total of 17 geometric, 13 shape and 28 color features were extracted from the color images of corn kernels. Two optimal feature sets were generated by stepwise discriminant analysis and used as inputs to classifiers. A two-stage classifier combining distance discriminant analysis and a back propagation neural network (BPNN) was built for identification. In the first stage, corn kernels were divided into three types (white, yellow and mixed corn) by distance discriminant analysis. Different varieties within the same type were then identified by an improved BPNN classifier. The classification accuracies of BAINUO 6, NONGDA 86, NONGDA 108, GAOYOU 115, and NONGDA 4967 were 100, 94, 92, 88 and 100%, respectively.
1. Introduction

Variety identification is indispensable in grain marketing in several situations: farmers need to know the variety of the grain seed they plant, to ensure the correct product is grown and harvested; bulk handlers need to be certain of harvested varieties to ensure correct segregation for specific markets; and marketers need to be confident that the products they sell meet varietal purity standards for target markets around the world. However, most traditional methods of variety identification are too slow and complex for routine use. To overcome these shortcomings, an approach for quickly identifying corn varieties is highly desirable and beneficial, so automatic variety identification based on computer technology and machine vision was investigated.

Machine vision can take dimensional measurements more accurately and consistently than a human inspector, and can give an objective measure of the color and morphology of an object that an inspector could only assess subjectively. In the classification of cereal grains, morphological features are the most widely used measurements, because they characterize the appearance of a kernel. Twenty-three morphological features of grain kernels were extracted and used for discriminant analysis; after evaluating these features, kernel length, maximum radius, and perimeter were found to be of utmost importance for classifying grain varieties (Majumdar and Jayas, 2000a). Over 45 different morphological features were used to distinguish 15 Indian
wheat varieties (Shouche et al., 2001). Color features have also been used to identify grain kernels. A digital image analysis algorithm was developed based on color features to classify individual kernels of grain varieties; classification accuracies between 90 and 95% were considered acceptable for that application (Majumdar and Jayas, 2000b). A method was presented for clustering pixel color information to segment features within corn kernel images, and features of various types of corn damage were identified from red, green, and blue pixel values used as inputs to a neural network (Steenhoek et al., 2001). Recently, researchers have combined various external features (morphological, color, and textural) to improve the classification accuracy of grain kernels. When features were used in combinations of two, the morphology-color combination gave the best results (Majumdar and Jayas, 2000c). Based on the analysis of morphological and textural parameters, rice grains were classified into healthy and defective grains (Satish et al., 2006). For rice quality inspection, a total of 59 features (54 color features and five morphological features) were extracted from the rice images (Kumar and Bal, 2006).

Owing to variation in morphology, color and texture, grain kernels cannot easily be identified and classified using a single mathematical function. Neural networks, however, have the potential to solve problems in which some inputs and corresponding output values are known, but the relationship between the inputs and outputs is not well understood or is difficult to translate into a mathematical function. Such situations are commonly found in the inspection of agricultural products. Neural network classifiers have been successfully implemented in grain quality inspection and variety classification. Eight morphological features of corn seed were extracted using image analysis, and a BPNN was implemented to
predict size category, but the sizing accuracy for the flatness decision was less than 80% (Steenhoek and Precetii, 2000). Yun (2004) selected area, complexity and hue as characteristic parameters of corn and presented a detection algorithm based on back propagation (BP) network classification; the average recognition accuracy for standard corn, broken corn and other kinds of kernels reached 95%. Over 80% correct identification of three randomly chosen wheat varieties with environmentally induced variation showed that an artificial neural network has excellent potential in varietal identification (Dubey et al., 2006). Wang et al. (2007) analyzed morphological characteristics and color parameters of wheat and developed a wheat quality identification model by applying a BP neural network; the average identification rate reached 93%.

To get the best performance, researchers have devoted much effort to optimal network design. Color characteristics of grain were analyzed, and different neural network calibration models were developed to classify vitreous and non-vitreous kernels (Wang et al., 2003). Neethirajan et al. (2007) extracted 16 image features, including color and morphology, from soft X-ray images and identified sprouted and healthy kernels using statistical and neural network classifiers. Experiments showed that a four-layer BPNN was best suited for grain classification applications. Jayas et al. (2000) indicated that a BP network was best suited for classifying agricultural produce, and specialist networks resulted in very high classification accuracy (over 95%) for cereal grains (Visen et al., 2002). It has been shown that a three-layer network is able to learn complex, non-linear relationships between inputs and outputs.

Although many studies have focused on wheat or rice variety identification and quality inspection, little attention has been paid to corn kernels. In this study, we propose a vision-based approach combined with pattern recognition techniques to identify corn varieties. The specific goals were (1) to develop algorithms to extract the external features of corn kernels; (2) to generate optimal feature sets for identification using stepwise discriminant analysis; and (3) to identify kernel variety using a two-stage classifier.

2. Materials and methods

2.1. Grain samples

According to their color characteristics, corn kernels are divided into three types, white, yellow and mixed corn, in the China
National Standard. We collected corn samples from five corn varieties, including white corn (BAINUO 6), yellow corn (NONGDA 86, NONGDA 108, GAOYOU 115), and mixed corn (NONGDA 4967) (Fig. 1). About 150 kernels were randomly selected from each variety. Images were captured with a flatbed scanner (EPSON PERFECTION 2480), which removed concerns about illumination conditions and external disturbances. Furthermore, the scanner-based imaging system captured high-quality images and made it easy to automate the extraction of seed characteristics. Images were acquired at a resolution of 300 dpi, each containing 12 corn seeds; contact between seeds was avoided when scanning. Images were processed to extract features using a program coded in VC++.

2.2. Feature extraction

Once an image had been cleanly segmented into discrete objects of interest, the kernels in the image were labeled and the external features of each kernel were extracted. A total of 58 features were extracted for identifying corn varieties, including 30 morphological features and 28 color features. Morphological feature analysis of corn kernels included the extraction of basic geometric features (e.g., area, perimeter) and derived shape features (Table 1). Shape features are physical dimensional measures that characterize the appearance of kernels. The special geometric features are briefly defined as follows. ith Equivalent Width: starting from the kernel tip, the major axis was divided into eight equal parts, generating seven points of intersection; the widths measured along lines drawn through those points perpendicular to the major axis were defined as the equivalent widths (Fig. 2). A sketch of this measurement is given after this paragraph.

Color features have been widely used to classify grain varieties. Unlike most grains, however, the color of a corn kernel is not uniform: both the germ and the tipcap are typically white, whether the whole kernel is white or yellow. In our experiments, corn kernels were placed on the scanner randomly, but germ-up placement (germ toward the scanner) yields a higher whiteness value than germ-down placement. To eliminate the possible color difference caused by kernel placement, the image area representing the kernel germ and tipcap was eliminated and not considered in the color measurement (Liu and Paulsen, 2000).
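As an illustration of the geometric measurements above, the following sketch (illustrative Python, not the authors' VC++ program) computes the seven equivalent widths from a binary kernel mask. The function name and the PCA-based estimate of the major axis are assumptions made for the example.

```python
# Illustrative sketch: equivalent widths of one kernel, assuming `mask` is a
# 2-D boolean NumPy array of the segmented kernel.
import numpy as np

def equivalent_widths(mask, n_parts=8):
    """Approximate the seven equivalent widths of a kernel mask.

    The major axis is estimated by PCA of the foreground pixel coordinates;
    it is divided into `n_parts` equal segments and, at each of the seven
    interior division points, the kernel extent perpendicular to the major
    axis is measured (cf. Fig. 2).
    """
    ys, xs = np.nonzero(mask)                      # foreground pixel coordinates
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    # Principal axes of the pixel cloud: largest eigenvector = major axis.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    minor = eigvecs[:, np.argmin(eigvals)]
    u = (pts - center) @ major                     # position along major axis
    v = (pts - center) @ minor                     # position along minor axis
    u_min, u_max = u.min(), u.max()
    length = u_max - u_min
    widths = []
    for i in range(1, n_parts):                    # seven interior division points
        ui = u_min + i * length / n_parts
        band = np.abs(u - ui) <= 0.5               # pixels within a 1-px band
        widths.append(v[band].max() - v[band].min() if band.any() else 0.0)
    return widths
```

Area and perimeter can be obtained from the same mask (e.g., by counting foreground pixels and tracing the boundary), so a single segmented image yields all of the geometric features in Table 1.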
Fig. 1. Five corn varieties: (a) BAINUO 6, (b) GAOYOU 115, (c) NONGDA 86, (d) NONGDA 108 and (e) NONGDA 4967.
Table 1
Extracted morphological features of individual corn kernels.

Geometric features                                  Shape features
No.  Feature                            Code        No.  Feature                      Code
1    Perimeter                          P           1    Circularity                  Cs
2    Area                               A           2    Rectangular aspect ratio     Rs
3    Length                             L           3    Aspect ratio                 WL
4    Width                              W           4    Radius ratio                 Rr
5    Major axis length                  Major axis  5    Haralick ratio               HraR
6    Minor axis length                  Minor axis  6    First Invariant moment       ϕ1
7    First Equivalent Width             W1th        7    Second Invariant moment      ϕ2
8    Second Equivalent Width            W2th        8    Third Invariant moment       ϕ3
9    Third Equivalent Width             W3th        9    Fourth Invariant moment      ϕ4
10   Fourth Equivalent Width            W4th        10   First Fourier descriptor     R1
11   Fifth Equivalent Width             W5th        11   Second Fourier descriptor    R2
12   Sixth Equivalent Width             W6th        12   Third Fourier descriptor     R3
13   Seventh Equivalent Width           W7th        13   Fourth Fourier descriptor    R4
14   Maximum radius                     MaxR
15   Minimum radius                     MinR
16   Mean of radius                     MeanR
17   Standard deviation of all radii    SDR

After acquiring a color image, corn kernels were isolated from the black background using a fixed threshold. The R (red) plane was used for segmentation: any pixel with R > 75 was considered to belong to a corn kernel; otherwise it was considered a background pixel. Once the corn kernels in each image were obtained, the B (blue) plane was used to delete the germ and tipcap areas of the kernels. Taking a yellow corn kernel as an example, the areas used for color measurement are shown in Fig. 3.

The acquired image is generally in the three-dimensional RGB color space. However, the RGB color space is not perceptually uniform, and the proximity of colors does not indicate color similarity. Color space transformations are an effective means of distinguishing color images, and classification performance can be improved by weighting each color component differently. To study the effect of color features on the identification performance for corn varieties, four transformations of the RGB (red, green, and blue) color space were evaluated: rgb, YCbCr, I1I2I3, and HSV.

Normalized RGB (rgb): to remove brightness from the RGB color space, normalized values of red, green and blue can be obtained from Eq. (1). The normalized RGB color space is denoted by "rgb":
r = R/(R + G + B),  g = G/(R + G + B),  b = B/(R + G + B)    (1)
where r, g and b are normalized values between 0 and 1, so that r + g + b = 1.

YCbCr: the Y element represents the luminance component, and the Cb and Cr elements represent two chrominance components. Eq. (2) gives the YCbCr transformation (Umbaugh, 2005):
Y = 0.299R + 0.587G + 0.114B
Cb = −0.1687R − 0.3313G + 0.500B + 128
Cr = 0.500R − 0.4187G − 0.0813B + 128    (2)
I1I2I3: the transformation from the RGB color space into the I1I2I3 color space can be achieved by Eq. (3) (Ohta, 1985):
I1 = (R + G + B)/3,  I2 = (R − B)/2,  I3 = (−R + 2G − B)/4    (3)
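The three transformations above are simple per-pixel linear maps. A minimal illustrative sketch is given below (Python, assuming a per-kernel RGB image; not the authors' VC++ implementation). The HSV conversion can be obtained from any standard routine and is omitted, and the per-component statistics described next are collected at the end.

```python
# Illustrative sketch of the color measurements, assuming `img` is an H x W x 3
# uint8 RGB image of one kernel on a black background, with germ/tipcap pixels
# already removed (the paper uses R > 75 for kernel pixels; the exact germ
# threshold on the B plane is not stated, so it is not reproduced here).
import numpy as np

def color_features(img):
    R, G, B = [img[..., i].astype(float) for i in range(3)]
    kernel = R > 75                                  # kernel vs. black background
    R, G, B = R[kernel], G[kernel], B[kernel]
    s = R + G + B + 1e-9
    r, g, b = R / s, G / s, B / s                    # Eq. (1): normalized rgb
    Y  = 0.299 * R + 0.587 * G + 0.114 * B           # Eq. (2): YCbCr
    Cb = -0.1687 * R - 0.3313 * G + 0.500 * B + 128
    Cr = 0.500 * R - 0.4187 * G - 0.0813 * B + 128
    I1 = (R + G + B) / 3.0                           # Eq. (3): I1I2I3 (Ohta)
    I2 = (R - B) / 2.0
    I3 = (-R + 2.0 * G - B) / 4.0
    feats = {}
    for name, ch in [("R", R), ("G", G), ("B", B), ("r", r), ("g", g), ("b", b),
                     ("Y", Y), ("Cb", Cb), ("Cr", Cr),
                     ("I1", I1), ("I2", I2), ("I3", I3)]:
        feats["mean_" + name] = ch.mean()            # per-kernel mean
        feats["sd_" + name] = ch.std()               # per-kernel standard deviation
    return feats                                     # H and S (HSV) handled analogously
```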
Furthermore, the mean and standard deviation of each of these color components were calculated. In total, 28 color features were extracted for identification (Table 2).

2.3. Stepwise discriminant analysis
Fig. 2. Definition of ith Equivalent Width.
Not all features contribute significantly to classification, so stepwise discriminant analysis was used to select features. Feature selection serves several important purposes: (1) it reduces the computational burden caused by redundant features, (2) it tends to improve the performance of classification algorithms, and (3) it reduces memory and storage demands. PROC STEPDISC (SAS, 2004) was used to perform the stepwise discriminant analysis. Stepwise selection begins with no variables in the classification model. At each step of the process, the variables within and outside the model are evaluated: the variable within the model that contributes least, as determined by the Wilks' Lambda method, is removed, and the variable outside the model that contributes most and passes the entry test is added. When no more steps can be taken, the set of variables in the model is in its final form. Based on these analyses, the features were ranked by their level of contribution to the classifier, determined by the Correlation Coefficient (CC) and the Average Squared Canonical Correlation (ASCC). In this method, Wilks' Lambda is the likelihood ratio statistic for testing the hypothesis that the means of the classes on the selected variables are equal in the population; Lambda is close to zero if any two groups are well separated. The ASCC indicates the separation of the groups; it is close to 1 if all groups are well separated and if all or most directions in the discriminant space show good separation for at least two groups. A simplified sketch of such a selection loop is given below.

Fig. 3. Result of color analysis: (a) original image of a corn kernel in germ-up placement; (b) image after deleting the germ and tipcap areas in germ-up placement; (c) original image of a corn kernel in germ-down placement; (d) image after deleting the tipcap areas in germ-down placement.
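The following is a simplified, forward-only sketch of Wilks' Lambda-based selection (the paper used SAS PROC STEPDISC, which additionally applies F-tests for variable entry and removal; the function names here are illustrative).

```python
# Simplified forward selection based on Wilks' Lambda (illustrative only).
import numpy as np

def wilks_lambda(X, y):
    """Wilks' Lambda = det(within-group scatter) / det(total scatter)."""
    Xc = X - X.mean(axis=0)
    T = Xc.T @ Xc                                   # total scatter
    W = np.zeros_like(T)
    for c in np.unique(y):
        Xg = X[y == c] - X[y == c].mean(axis=0)
        W += Xg.T @ Xg                              # pooled within-group scatter
    return np.linalg.det(W) / np.linalg.det(T)

def forward_select(X, y, n_keep):
    """Greedily add the feature that most decreases Wilks' Lambda."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        best = min(remaining, key=lambda j: wilks_lambda(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```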
2.4. Mahalanobis distance discriminant

One of the main reasons to use the Mahalanobis distance method is that it is very sensitive to inter-variable changes in the training data. In addition, since the Mahalanobis distance is measured in terms of standard deviations from the mean of the training samples, the reported matching values give a statistical measure
of how well the features of the unknown kernels match (or do not match) the original training kernels. The Mahalanobis distance from x to group t is calculated by Eq. (4) (Krzanowski, 1993):

d_t(x) = [(x − m_t)' S_t^{−1} (x − m_t)]^{1/2}    (4)
where x is a p-dimensional vector containing the quantitative variables of an observation (p is the number of features considered), t is a subscript distinguishing the groups, m_t is the p-dimensional vector of variable means in group t, and S_t is the covariance matrix within group t. During the testing phase, the Mahalanobis distance for a particular test vector representing a corn kernel was calculated for all corn classes, and the test sample was classified as belonging to the class for which its Mahalanobis distance was the minimum among the calculated distances. The DISCRIM procedure (SAS, 2004) was used to compute the discriminant functions for classifying observations. A minimal sketch of this decision rule is given below.
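The sketch below implements the minimum-Mahalanobis-distance rule of Eq. (4) in plain NumPy (illustrative only, not the SAS DISCRIM procedure); it assumes `train` maps each class label to an (n_samples x p) matrix of training feature vectors.

```python
# Minimal sketch of a minimum-Mahalanobis-distance classifier (Eq. (4)).
import numpy as np

def fit_mahalanobis(train):
    """Pre-compute the mean vector and inverse covariance for every class."""
    model = {}
    for label, X in train.items():
        m = X.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(X, rowvar=False))
        model[label] = (m, S_inv)
    return model

def classify(model, x):
    """Assign x to the class with the smallest Mahalanobis distance d_t(x)."""
    def dist(stats):
        m, S_inv = stats
        d = x - m
        return float(np.sqrt(d @ S_inv @ d))
    return min(model, key=lambda label: dist(model[label]))

# Usage (hypothetical data): model = fit_mahalanobis({"white": Xw, "yellow": Xy,
# "mixed": Xm}); corn_type = classify(model, feature_vector)
```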
Table 2
Extracted color features of individual corn kernels.

No.  Feature                     Code   No.  Feature                     Code
1    Mean of R                   R̄      15   Mean of Cr                  C̄r
2    Mean of G                   Ḡ      16   Standard deviation of Y     Y
3    Mean of B                   B̄      17   Standard deviation of Cb    Cb
4    Standard deviation of R     R      18   Standard deviation of Cr    Cr
5    Standard deviation of G     G      19   Mean of I1                  Ī1
6    Standard deviation of B     B      20   Mean of I2                  Ī2
7    Mean of r                   r̄      21   Mean of I3                  Ī3
8    Mean of g                   ḡ      22   Standard deviation of I1    I1
9    Mean of b                   b̄      23   Standard deviation of I2    I2
10   Standard deviation of r     r      24   Standard deviation of I3    I3
11   Standard deviation of g     g      25   Mean of H                   H̄
12   Standard deviation of b     b      26   Mean of S                   S̄
13   Mean of Y                   Ȳ      27   Standard deviation of H     H
14   Mean of Cb                  C̄b     28   Standard deviation of S     S

(Overbarred codes denote per-kernel means; unbarred codes denote standard deviations, as used in Tables 3 and 4.)
Table 3
The top 18 selected color features for recognizing the three types (white, yellow and mixed) of corn kernels using stepwise discrimination (STEPDISC procedure of SAS).

Rank  Feature  ASCC    Wilks' Lambda
1     B̄        0.2323  0.0706
2     ḡ        0.4494  0.0090
3     g        0.5697  0.0045
4     B        0.5827  0.0026
5     Ḡ        0.6414  0.0019
6     R̄        0.6501  0.0015
7     S        0.6562  0.0014
8     S̄        0.6582  0.0012
9     r        0.6656  0.0011
10    R        0.6708  0.0010
11    H        0.6720  0.0091
12    I3       0.6737  0.0087
13    Ī2       0.6771  0.0077
14    I2       0.6900  0.0068
15    G        0.6909  0.0065
16    g        0.6958  0.0063
17    Ī3       0.6997  0.0062
18    H̄        0.7038  0.0059
2.5. Back propagation neural network

In our research, a three-layer BPNN was built for corn identification. In general, a neural network with too few hidden neurons will not have sufficient capability to represent the input-output relationship accurately. Conversely, a network with too many hidden neurons may lead to overfitting of the data and affect the system's generalization capability (Duda et al., 2001). Hence, determining the optimal number of hidden neurons is a crucial step in designing classifiers. However, such a determination is not easy, since it depends largely on experience and trial-and-error. In our design approach, the number of hidden neurons was first calculated by Eq. (5) and then varied to check for any significant improvement in performance; if no improvement was observed, the number of neurons was fixed to train the network.

h = (m + n)^{1/2} + a,  a ∈ [0, 10]    (5)

where m is the number of output neurons and represents the number of corn varieties, n is the number of input neurons and equals the number of input features used, and h is the number of hidden neurons.

The activation function determines the processing inside the neurons. It can be a linear or non-linear function depending on the network topology. In our work, the log-sigmoid and hyperbolic tangent functions were used at the input and processing levels; these functions are given by Eqs. (6) and (7), respectively, where Net represents the weighted sum of the previous layer's outputs (Hornberg, 2006):

f(Net) = 1/(1 + e^{−Net})    (6)

f(Net) = (e^{Net} − e^{−Net})/(e^{Net} + e^{−Net})    (7)

To overcome the inherent disadvantages of pure gradient descent, several adaptive techniques have been developed to train networks. Considering training time and convergence speed, the resilient propagation (RPROP) algorithm was chosen for the best performance (Riedmiller and Braun, 1993). RPROP performs a local adaptation of the weight updates according to the behavior of the error function. Contrary to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but depends only on the temporal behavior of its sign. This leads to an efficient and transparent adaptation process. In our study, the neural networks were designed, trained and tested using the MATLAB neural network toolbox. An illustrative sketch of the network design and the RPROP-style update rule is given below.
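The following sketch (illustrative Python; the authors' implementation used the MATLAB neural network toolbox) shows the hidden-layer sizing of Eq. (5), the transfer functions of Eqs. (6) and (7), a forward pass of a three-layer network, and a simplified RPROP-style weight update whose step size depends only on the sign of the gradient. The function names and the iRPROP-like handling of sign changes are assumptions made for the example.

```python
# Rough sketch of the network design described above (illustrative only).
import numpy as np

def hidden_neurons(n_inputs, n_outputs, a=4):
    return int(round(np.sqrt(n_inputs + n_outputs) + a))    # Eq. (5), a in [0, 10]

def logsig(net):
    return 1.0 / (1.0 + np.exp(-net))                        # Eq. (6)

def tansig(net):
    return np.tanh(net)                                      # Eq. (7)

def forward(x, W1, b1, W2, b2):
    """Three-layer BPNN: log-sigmoid hidden layer, tanh output layer."""
    hidden = logsig(W1 @ x + b1)
    return tansig(W2 @ hidden + b2)

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One RPROP-style update: the step size is adapted only from the sign of
    the gradient, not from its magnitude (Riedmiller and Braun, 1993)."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # After a sign change, skip the update for that weight (iRPROP- variant).
    grad = np.where(sign_change < 0, 0.0, grad)
    delta_w = -np.sign(grad) * step
    return delta_w, grad, step
```

With 10 inputs, three outputs and a of about 4, Eq. (5) gives the eight hidden neurons used for the yellow-corn classifier in Section 3.2.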
3. Results and discussion

3.1. Feature selection using stepwise discriminant analysis

The three types (white, yellow and mixed corn) can be recognized by color features alone. For the three varieties of yellow corn, however, all external features should be considered for classification. The stepwise discriminant analysis was therefore applied to the 28 color features and to all 58 external features, respectively, and two optimal feature sets were generated. The features were obtained in descending order of their contribution level to the models (Tables 3 and 4).

For recognizing the three types (white, yellow and mixed) of corn kernels, all 28 color features were analyzed using stepwise selection. Table 3 lists the color features in descending order of their contribution level to the color model. The mean of blue (B̄) had the largest contribution (ASCC = 0.2323), and the top six features were from the RGB and rgb color spaces, so these two color spaces contribute more than the other color spaces to distinguishing the three types. Features that contributed little to the color model were removed; their removal did not affect the classification accuracies. Finally, the 18 most significant color features were selected for classifying corn varieties.

To classify the three varieties of yellow corn, the stepwise analysis was conducted to extract the optimal feature set from all 58 morphological and color features. Features with good contribution (ASCC values) to the model were selected, including five color features, three shape features and two geometric features. Although the most significant feature was a geometric one, the overall contribution of the color features was larger than that of the others (Table 4).

3.2. Variety identification using a two-stage classifier

Considering the characteristics of the five corn varieties, a two-stage classifier was built (Fig. 4). In the first stage, corn kernels were divided into three types (white, yellow and mixed corn) using Mahalanobis distance analysis. The three varieties of yellow corn were then identified by an improved BP classifier. A sketch of this two-stage dispatch is given below.
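The decision flow of Fig. 4 can be summarized as follows (hedged sketch; the helper names are hypothetical, `classify` and the Mahalanobis model refer to the Section 2.4 sketch, and `bpnn_predict` stands for the trained three-layer BPNN of Section 2.5).

```python
# Hedged sketch of the two-stage identification in Fig. 4 (illustrative only).
def identify_variety(kernel_image, color_features_18, morph_color_features_10,
                     mahalanobis_model, bpnn_predict):
    # Stage 1: color type (white / yellow / mixed) by minimum Mahalanobis distance.
    corn_type = classify(mahalanobis_model, color_features_18(kernel_image))
    if corn_type == "white":
        return "BAINUO 6"        # the only white variety in the sample set
    if corn_type == "mixed":
        return "NONGDA 4967"     # the only mixed variety in the sample set
    # Stage 2: the three yellow varieties are separated by the BPNN.
    return bpnn_predict(morph_color_features_10(kernel_image))
```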
Table 4
The top 10 selected morphological and color features for recognizing the three varieties of yellow corn kernels using stepwise discrimination (STEPDISC procedure of SAS).

Rank  Feature  ASCC    Wilks' Lambda
1     A        0.2984  0.4032
2     g        0.6326  0.1345
3     r̄        0.6745  0.1059
4     R3       0.6860  0.0984
5     ḡ        0.6971  0.0916
6     I1       0.7087  0.0842
7     r        0.7076  0.0849
8     ϕ3       0.7204  0.0777
9     W2th     0.7298  0.0729
10    ϕ1       0.7269  0.0745
Fig. 4. Identification of corn varieties using a two-stage classifier.

Table 5
Classification results of the improved BPNN classifier on the testing set.

                 Number of corn kernels classified into each variety
Actual variety   GAOYOU 115   NONGDA 86   NONGDA 108   Accuracy (%)
GAOYOU 115       44           4           2            88
NONGDA 86        2            47          1            94
NONGDA 108       2            2           46           92

Two-thirds of the samples (100 kernels of each corn variety) were randomly selected as the training set, while the rest of the samples were used as the validation set for classification. To identify the three types of kernels (white, yellow and mixed corn), the top 18 color features were selected as inputs to the Mahalanobis minimum-distance classifier. High classification accuracies were achieved for the three corn classes; both BAINUO 6 and NONGDA 4967 achieved a classification accuracy of 100%. As these results show, the reduced color feature set selected by stepwise discriminant analysis was sufficient for the Mahalanobis classifier, and the classifier identified the samples accurately.

After the three types of kernels were recognized by Mahalanobis discriminant analysis, identification of varieties within the same color type was addressed in the second stage. A three-layer BPNN was built and trained with the RPROP algorithm for classification. The top 10 morphological and color features were selected as inputs to the network, which consisted of 10 input neurons, eight hidden neurons and three output neurons. When BPNN training and testing were repeated, the classification accuracies obtained were comparable, reflecting the reliability of the method. The classification results for the testing set are summarized in Table 5. The best classification accuracy was 94% for NONGDA 86, followed by 92% for NONGDA 108. GAOYOU 115 was classified with the lowest accuracy (88%), because some of its kernels were misclassified as NONGDA 86 or NONGDA 108. Because both the color and the morphological features of these kernels are very similar, 100% correct identification is hard to obtain; every variety within the same color type has some chance of being misclassified as another variety.

4. Conclusion

A method for classifying five corn varieties was presented, using image processing techniques, stepwise discriminant analysis, Mahalanobis distance analysis and a back propagation neural network. All 750 corn kernels were investigated, and 58 features were extracted from each kernel using image processing techniques. After ranking the features using stepwise discriminant analysis, the optimal feature sets for the two models were created individually. Finally, a two-stage classifier combining the Mahalanobis distance and a BPNN classifier was developed for identification. Experiments showed that the average classification accuracy for the five corn varieties was above 90%. It was found that the method combining the Mahalanobis distance and the BPNN classifier may be successfully employed for corn variety identification. However, further investigation is required to study the performance of the method when testing composite samples from different growing regions and crop years.

Conflict of interest

No conflict of interest.
Acknowledgements

We thank the Doctoral Program of Higher Education (No. 20050019005) for financial assistance and the SiNong Corn Commission for preparing the grain samples.

References

Dubey, B.P., Bhagwat, S.G., Shouche, S.P., Sainis, J.K., 2006. Potential of artificial neural networks in varietal identification using morphometry of wheat grains. Biosystems Engineering 95 (1), 61-67.
Duda, R.O., Hart, P.E., Stork, D.G., 2001. Pattern Classification. John Wiley & Sons, Inc., New York, USA.
Hornberg, A., 2006. Handbook of Machine Vision. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Jayas, D.S., Paliwal, J., Visen, N.S., 2000. Multi-layer neural networks for image analysis of agricultural products. Journal of Agricultural Engineering Research 77 (2), 119-128.
Krzanowski, W.J., 1993. Principles of Multivariate Analysis: A User's Perspective. Oxford University Press, Oxford.
Kumar, P.A., Bal, S., 2006. Non-Destructive Quality Control of Rice by Image Analysis. ASABE Paper No. 066083. ASAE, St. Joseph, MI.
Liu, J., Paulsen, M.R., 2000. Corn whiteness measurement and classification using machine vision. Transactions of the ASAE 43 (3), 757-763.
Majumdar, S., Jayas, D.S., 2000a. Classification of cereal grains using machine vision: I. Morphology models. Transactions of the ASAE 43 (6), 1669-1675.
Majumdar, S., Jayas, D.S., 2000b. Classification of cereal grains using machine vision: II. Color models. Transactions of the ASAE 43 (6), 1677-1680.
Majumdar, S., Jayas, D.S., 2000c. Classification of cereal grains using machine vision: IV. Combined morphology, color, and texture models. Transactions of the ASAE 43 (6), 1689-1694.
Neethirajan, S., Jayas, D.S., White, N.D.G., 2007. Detection of sprouted wheat kernels using soft X-ray image analysis. Journal of Food Engineering 81 (3), 509-513.
Ohta, Y., 1985. Knowledge-Based Interpretation of Outdoor Natural Color Scenes. Pitman Publishing Inc., Marshfield, MA.
Riedmiller, M., Braun, H., 1993. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In: Proceedings of the IEEE International Conference on Neural Networks. IEEE, San Francisco.
SAS Institute Inc., 2004. SAS OnlineDoc 9.1.3. SAS Institute Inc., Cary, NC.
Satish Bal, Tapan Kumar Basu, Maqsood Ali, Pratyush Chandra Proddutur, 2006. The determination of morphological and textural features of rice grains in a sample using digital image processing technology and the classification of the rice grains in the sample. ASABE Paper No. 066009. ASAE, St. Joseph, MI.
Shouche, S.P., Rastogi, R., Bhagwat, S.G., Sainis, J.K., 2001. Shape analysis of grains of Indian wheat varieties. Computers and Electronics in Agriculture 33 (1), 55-76.
Steenhoek, L., Precetii, C., 2000. Vision sizing of seed corn. ASAE Paper No. 00-3095. ASAE, St. Joseph, MI.
Steenhoek, L.W., Misra, M.K., Batechelor, W.D., Davidson, J.L., 2001. Probabilistic neural networks for segmentation of features in corn kernel images. Applied Engineering in Agriculture 17 (2), 225-234.
Umbaugh, S.E., 2005. Computer Imaging: Digital Image Analysis and Processing. Taylor & Francis, New York.
Visen, N.S., Paliwal, J., Jayas, D.S., White, N.D.G., 2002. Specialist neural networks for cereal grain classifications. Biosystems Engineering 82 (2), 151-159.
Wang, N., Dowell, F.E., Zhang, N., 2003. Determining wheat vitreousness using image processing and a neural network. Transactions of the ASAE 46 (4), 1143-1150.
Wang, Z., Cong, P., Zhou, J., Zhu, Z., 2007. Method for identification of external quality of wheat grain based on image processing and artificial neural network. Transactions of the CSAE 23 (1), 158-161 (in Chinese).
Yun, L., 2004. Study on Grain Appearance Quality Inspection Using Machine Vision. China Agricultural University, Beijing (in Chinese).