Scale invariant feature approach for insect monitoring

Computers and Electronics in Agriculture 75 (2011) 92–99

Contents lists available at ScienceDirect

Computers and Electronics in Agriculture journal homepage: www.elsevier.com/locate/compag

Original paper

Scale invariant feature approach for insect monitoring

Luis O. Solis-Sánchez a,b, Rodrigo Castañeda-Miranda b,d,∗, Juan J. García-Escalante d, Irineo Torres-Pacheco c, Ramón G. Guevara-González c, Celina L. Castañeda-Miranda b, Pedro D. Alaniz-Lumbreras a

a Universidad Autónoma de Zacatecas, Laboratory of Digital Signal Processing, Unidad Academica de Ingenieria Electrica, Av. Ramón López Velarde 801, Zacatecas, Zacatecas 98067, Mexico
b Universidad Autónoma de Querétaro, Laboratory of Biotronics, Facultad de Ingeniería, Cerro de las Campanas S/N, C.P. 76010 Querétaro, Querétaro, Mexico
c Universidad Autónoma de Querétaro, Laboratory of Biosystems, Facultad de Ingeniería, Cerro de las Campanas S/N, C.P. 76010 Querétaro, Querétaro, Mexico
d Universidad Politécnica del Sur de Zacatecas, Laboratory of Biotronics, Ingeniería en Mecatrónica, Domicilio conocido S/N Juchipila, Zacatecas C.P. 99960, Mexico

Article info

Article history:
Received 24 March 2010
Received in revised form 14 September 2010
Accepted 2 October 2010

Keywords: Precision agriculture; SIFT; Machine vision; Greenhouse; IPM

Abstract

One of the main problems in greenhouse crop production is the presence of pests. In order to address this problem, the implementation of an Integrated Pest Management (IPM) system involving the detection and classification of insects (pests) is essential for intensive production systems. Traditionally, this has been done by placing hunting traps in fields or greenhouses and later manually counting and identifying the insects found. This is a very time-consuming and expensive process. To facilitate this process, it is possible to use machine vision techniques. This work describes an application of the machine vision system LOSS V2 algorithm, an expanded version of the LOSS algorithm discussed in a previous work by the same authors. This expanded version demonstrated improved potential and was used to detect and identify the following pest species: Diabrotica (Coleoptera: Chrysomelidae), Lacewings (Lacewings spp.), Aphids (Aphis gossypii Genn.), Glassy (Empoasca spp.), Thrips (Thrips tabaci L.), and Whitefly (Bemisia tabaci Genn.). The algorithm identifies pest presence in the crop and makes it possible for the greenhouse manager to take the appropriate preventive or corrective measures. The LOSS V2 involves the application of the LOSS algorithm for initial pest identification, followed by the application of the image processing technique known as the scale invariant feature transform (SIFT). This allows for more accurate pest detection because it is possible to discriminate and identify different types of insects. Therefore, when compared to manual pest counting, the newly developed LOSS V2 algorithm showed more precision in identifying different pest varieties, and also a much higher determination coefficient, R2 = 0.99. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

The supply of agricultural products and ecosystem services is essential to human existence and quality of life. However, recent agricultural practices which have greatly increased the global food supply have inadvertently had a detrimental impact on the environment and ecosystem services, highlighting the need for more sustainable agricultural methods (Tilman et al., 2002). In a general sense, Integrated Pest Management (IPM) is a pest management system which, taking into account the associated environment and population dynamics of the pest species, utilizes suitable techniques and methods in a compatible manner

∗ Corresponding author at: Universidad Autónoma de Querétaro, Laboratory of Biotronics, Facultad de Ingeniería, Cerro de las Campanas S/N, C.P. 76010 Querétaro, Querétaro, Mexico. Tel.: +52 492 1166274. E-mail addresses: [email protected] (L.O. Solis-Sánchez), [email protected] (R. Castañeda-Miranda).
0168-1699/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.compag.2010.10.001

and maintains the pest population levels low enough to avoid economic loss (Koumpouros et al., 2004; Smith et al., 1983; Smith and Reynolds, 1966). The four main steps of IPM are detection, identification, application of the correct management and registration of the management applied. The most important factor in IPM is the diagnosis of the pest presence and for this diagnosis, the most important factor is the detection procedure. For greenhouse production, IPM systems have been very successful at detecting and adequately identifying insects (Allen and Rajotte, 1990). Nevertheless, the detection and monitoring of insects can be tedious and time-consuming activities (Wise et al., 2007). Moreover, these tasks are often disregarded when other activities in the greenhouse are considered more urgent. If the demands to improve yields without compromising environmental integrity or public health are to be met, the sustainability of agriculture and ecosystem services is crucial (Tilman et al., 2002). As IPM is an approach which includes reducing excessive pesticide application, the pest detection and identification process is

L.O. Solis-Sánchez et al. / Computers and Electronics in Agriculture 75 (2011) 92–99

the cornerstone of IPM. More research on insect traps, for example on pheromone traps, would provide a means of obtaining large amounts of data which could help to improve the detection and identification process (Knight et al., 2006). A great deal of data from many different traps in many distinct locations and with various types of lures and dosage levels has been collected and analyzed for several insect species. In general, however, few scientists have seriously considered why and how these variables influence trap catch, or how traps may be improved on the basis of such an understanding. This is primarily because such work requires detailed experimentation or extensive observational studies that imply direct counting (Wise et al., 2007). This may become prohibitively time-consuming if the pests are abundant or patchily distributed (Ward et al., 1985). No matter what type of trap is used or which insect exploration techniques are applied, the time spent is excessive. One approach to overcome this situation is the use of machine vision techniques (Solis-Sánchez et al., 2009). Ridgway et al. (2002) developed a rapid machine vision method for the detection of adult beetles, rodent droppings and ergot in harvested wheat. Among machine vision tasks of object detection, the applications which require further object recognition are of particular importance (Villamizar and Sanfeliu, 2006). Computerized image analysis has become a tool for detecting and identifying various non-grain particles and insects in wheat. A machine vision system for detecting insects in grains consists of a high-speed integrated machine vision software package used with a monochrome CCD (charge coupled device) camera and a personal computer (Neethirajan et al., 2007). Boissard et al. (2008) promoted a multidisciplinary cognitive vision approach that integrates computer vision and artificial intelligence techniques from different domains.
Artificial intelligence learning may help to fine-tune vision algorithms so they can be adapted to various application needs. They proposed a cognitive vision system that combines image processing, learning and knowledge-based techniques, applied to images of rose leaves scanned in situ to detect whiteflies. Solis-Sánchez et al. (2009) proposed the application of machine vision techniques to Whitefly (Bemisia tabaci Genn.) scouting, where digital image processing was done by segmenting insects captured in the traps. It was then possible to count and classify them by identifying geometric characteristics such as the projection area, the eccentricity and the solidity of the segmented insect. Therefore, the identification and counting of insects was possible utilizing machine vision techniques that take advantage of the whiteflies' structural moments. However, it is still necessary to differentiate between pests, since two or more species normally converge in the same crop. One solution might be the integration of machine vision techniques such as the scale invariant feature transform (SIFT) into the LOSS algorithm, thereby increasing the potential of pest detection and allowing it to be used as an important tool for precision agriculture. The selection of geometric features using SIFT is governed by a set of rules that make the most of each feature. The technique called boosting, which serves to select the geometric and appearance features most relevant to sets of training samples, has been shown to be effective (Viola and Jones, 2001). Object recognition has benefited from strategies that combine local geometric features with appearance features. One of the most cited is the use of scale-invariant local area descriptors (Lowe, 2004). In computer vision, Torralba et al. (2004) proposed an extension of the boosting algorithm with the purpose of sharing features across multiple classes of objects and reducing the total number of classifiers.
Yokono and Poggio (2004) prefer the use of Harris corners at various resolution levels as points of interest, which permit selection of object characteristics more robust under rotation and scale than those selected by filters derived from the Gaussian.


Viola and Jones (2001) introduced the integral image for rapid evaluation of features. Once calculated, the integral image allows the response of an image to convolution with a Haar basis (Papageorgiou et al., 1998) to be computed at any position and scale in real time. Unfortunately, no such system is invariant to the orientation of the object or to occlusions. Other recognition systems that work well in complex scenes are those based on the computation of multi-scale local features such as SIFT (Lowe, 2004). An important idea in the SIFT descriptor is that it incorporates a local orientation for each point of interest, providing scale and orientation invariance. Furthermore, a large number of SIFT features can be calculated in real time for a single image. Alternatively, an ideal automated system for detection would carry out the described task along with recording daily information regarding each insect species (Neethirajan et al., 2007). It would be even better if the system were able to describe the development phase based on size, form and color, discriminate particles of dust or any other object adhered to the trap, and integrate the data into an automatic diagnosis system. Sequential monitoring is carried out with the objective of measuring the way in which pest numbers change over time (Knight et al., 2006; Dent, 1999). In the present work, machine vision techniques were applied to insect scouting. The digital image processing was carried out by means of the segmentation of the insects captured by the traps. Thus, it was possible to count and classify them through the identification of their geometric characteristics such as the projection area, the eccentricity and the solidity of the segmented insect (Jahne, 2005). With this information, the identification and counting of insects was possible by means of machine vision techniques utilizing the structural moments of the segmented objects.
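The integral image mentioned above admits a very short illustration. The sketch below is not from the paper; it is a minimal NumPy example showing why, once the cumulative table is computed, the sum over any axis-aligned rectangle (and hence a Haar-like box response) costs only four lookups.

```python
import numpy as np

# Illustrative sketch of the integral-image trick (Viola and Jones, 2001):
# after one pass over the image, any rectangle sum takes four table lookups.
def integral_image(img):
    # Pad with a leading zero row/column so rectangle sums need no edge cases.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    # Sum of img[top:bottom, left:right], computed from the integral image.
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

img = np.arange(16).reshape(4, 4)      # tiny synthetic image
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

A Haar-like feature is then just the difference of two or more such rectangle sums, which is why the evaluation runs in real time at any position and scale.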
Due to the introduction of SIFT techniques, it was easier to apply the LOSS algorithm for the detection and classification of pests, and to identify pest varieties. The fusion of LOSS and SIFT resulted in the algorithm named LOSS V2, which was able to detect five pests in hunting traps.

2. Materials and methods

2.1. Biological materials, hardware and software

The study was done in fields and greenhouses. Crops: saladette-type Tomato (Lycopersicum esculentum Mill), Alfalfa (Medicago sativa L.), Avocado (Persea americana Mill), Broccoli (Brassica oleracea L.) and Lettuce (Lactuca sativa L.). Pests: Diabrotica (Diabrotica spp.), Lacewings (Lacewings spp.), Aphids (Aphis gossypii Glover), Glassy (Empoasca spp.), Thrips (Thrips tabaci L.) and Whitefly (B. tabaci Genn.). Traps: two different types of sticky traps were used, HORIVER Monitoring and HORIVER Capture (Koppert Biological Systems, Koppert Mexico SA de CV). The monitoring traps were 10 cm wide by 25 cm high and the capture traps were 25 cm wide by 40 cm high. Each trap was properly labeled and classified by field and location (Vale, 1982). Hardware: a high-resolution digital camera (Panasonic Lumix, Japan); a Pentium IV processor (Intel, USA) with 1 GB RAM, a 256 kHz video card and an 80 GB HDD (Samsung, South Korea). Software: the Image Processing Toolbox (The MathWorks, Inc.); the development platform was MatLab (The MathWorks, Inc., Version 7.0). The experiment started with the placement of 12 hunting traps and 12 monitoring traps, distributed as follows: 4 hunting traps and 4 monitoring traps in each of the three different fields and greenhouses designated for the experiment. The traps were collected every week. The monitoring lasted approximately five weeks. Trap collection (removal and new trap setting) was done every 15 days


[Fig. 1 flow chart, condensed to text: START LOSS V2 → PREPROCESSING IMAGE → IDENTIFICATION OF CLUSTERS → LOSS ALGORITHM (Solis et al., 2009) → LABELING OF OBJECTS FROM CLUSTERS → INVARIANT FEATURES EXTRACTION → PEST IDENTIFICATION → END. Decision branches compare each labeled object against stored invariant feature ranges, e.g. whiteflies: 0.75 ≤ ε ≤ 0.95, solidity = 1, 75 ≤ A ≤ 175; Thrips: 0.95 ≤ ε ≤ 0.98, solidity = 1, 55 ≤ A ≤ 90. If an object is not compatible with the features of one pest, the comparison is repeated with the features of the next pest (P = NP − 1) until the object is identified or found to belong to another plague.]
Fig. 1. LOSS V2 algorithm flow chart. Condensed critical route of the machine vision techniques proposed to identify different kinds of pests. The object is compared with the database. If it is compatible with any of the insects in the database, the correct identification is made. If it is not compatible with any insects in the database, it is most likely an insect not included in the study, dust or some other type of material.

and sample acquisition (image sample for processing) was done weekly, recording temperature, humidity, number of traps and time of day. The samples removed from the fields and greenhouses were transported in cardboard boxes with Styrofoam sheets placed on the inner sides of the box and separated by 10 cm to avoid contact during transport. In the previous work by the same authors, the LOSS algorithm was applied to Whitefly (B. tabaci Genn.) identification and classification using machine vision techniques. Digital image processing was carried out by means of segmenting the insects captured by the traps. Thus, it was possible to count and to classify them through the identification of their geometric characteristics such as the projection area, the eccentricity and the solidity of the segmented insect, taking advantage of the whiteflies’ structural moments

(Solis-Sánchez et al., 2009). However, only one species could be identified; discrimination between various species was not possible. Because of this, more research was done in an attempt to achieve multiple-species discrimination. The fusion of the SIFT with the LOSS algorithm improved the performance of the LOSS algorithm and made it possible to detect, classify and identify five different species. As illustrated in Fig. 1, the LOSS V2 algorithm works using machine vision techniques and the scale invariant feature transform (SIFT). The SIFT was used to detect and classify a greater number of pests to increase the potential of the LOSS algorithm, making it more precise at finding and taking advantage of the multiple features of each insect.


[Fig. 2 block diagram, condensed to text: Image Reading → Image Creation in Dynamic Memory → Analysis and Image Enhancement → Segmentation and Targeting of Elements → Obtaining Characteristic Features → Comparison with the Image Record → Count of Items of Interest → Storage of Characteristics in Database.]

Fig. 2. Block diagram of the process of the proposed algorithm for image capturing and storage of the scale invariant features.

2.2. Preprocessing and classification

The original image of the hunting traps must be analyzed, but there are several problems as far as imaging is concerned. One of them is the assignment of objects to certain classes, since an analysis as specific as this requires more work. For classification, it is not possible to use only the area feature because the separation between its parameters is not significant, meaning that there is a high correlation between some classes (Jahne, 2005). For image taking, the methodology proposed in the previous work (Solis-Sánchez et al., 2009) was followed. To initiate the object identification, the images were subjected to a segmentation procedure using a Sobel mask (Sobel and Feldman, 1973). This step allowed discrimination of the dust particles in the background and separation of one possible object from another that could be an insect. The initial classification was made by the LOSS algorithm. For the second classification, more object characteristics are necessary in the image to be used as descriptors: area and weight, perimeter, compactness and center of gravity. The theory of moments is useful to extract the eccentricity of objects. The eccentricity (Eq. (1)) is a more appropriate characteristic than the roundness of the object because it has a clearly defined range of values: it is 0 for a perfectly round object and 1 for a line-shaped object:

ε = ((μ2,0 − μ0,2)² + 4μ²1,1) / (μ2,0 + μ0,2)²    (1)

where μ2,0 and μ0,2 are the central moments of second order of any object inside an image (Kilian, 2001).

2.3. Scale invariant feature transform (SIFT)

To increase the potential of the LOSS algorithm, the scale invariant feature transform (SIFT) was necessary to detect and classify a greater number of pests. In order to implement SIFT, more characteristics had to be obtained for each object in the preprocessed image. The scale space of an image is defined as a function (Eq. (2)), L(x, y, σ), produced from the convolution of a variable-scale Gaussian, G(x, y, σ), with an input image I(x, y):

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)    (2)

where ∗ is the convolution operation in x and y, and

G(x, y, σ) = (1 / 2πσ²) e^(−(x² + y²)/2σ²)    (3)

To efficiently detect stable keypoint locations in scale space, Lowe (2004) proposed using scale-space extrema in the difference-of-Gaussian function convolved with the image, D(x, y, σ), which can be computed from the difference of two nearby scales separated by a constant multiplicative factor k:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ)    (4)

This function was chosen because it is particularly efficient to compute: since the smoothed images L must be computed in any case for scale-space feature description, D can be obtained by simple image subtraction. The convolution of these operators at a desired orientation is performed by orienting the filter, and fast convolution on any region of the original image is efficiently obtained through the integral image (Lowe, 2004). As a result of this operation, the clusters of insects in the traps could be located and classified. For classification, the first step was to take the labeled image and obtain all properties of each object detected, such as area, center of mass, eccentricity, solidity, convex area, orientation, perimeter, number of pixels, diameters and region end points. These properties facilitate more accurate classification. Based on the structural moments, the image was filtered by area and solidity; the set of samples was processed to detect possible pests, where each object in the region of these properties was the invariant feature. The properties obtained from each detected object were stored in a database created for each image. The characteristics obtained from each object (insect) are called invariant because positioning or illumination becomes irrelevant when identifying the insect. Once the acquisition and analysis of the main characteristics of each insect was done, it was necessary to process the sample image and compare each object detected with the database of main characteristics. First, the object had to be identified: if it satisfied the characteristics of one of the insects, the object was identified; if it did not, it was compared with another insect of study. This was done until the total number of studied
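Eqs. (2)–(4) can be illustrated numerically in a few lines. This is a sketch, not the authors' MatLab implementation; it assumes NumPy and SciPy are available, and the test image and scale values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of Eqs. (2)-(4): L(x, y, sigma) = G(x, y, sigma) * I(x, y), and
# D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma). Parameters are illustrative.
rng = np.random.default_rng(0)
I = rng.random((64, 64))            # stand-in for a trap image
sigma, k = 1.6, 2 ** 0.5            # base scale and multiplicative factor k

L1 = gaussian_filter(I, sigma)      # L(x, y, sigma)
L2 = gaussian_filter(I, k * sigma)  # L(x, y, k*sigma)
D = L2 - L1                         # difference-of-Gaussian, Eq. (4)

# D is cheap: no extra convolution is needed beyond the L images that
# scale-space feature description requires anyway.
print(D.shape)
```

In a full SIFT implementation, keypoints are the local extrema of D across position and scale; here only the construction of D itself is shown.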

Fig. 3. An image of a hunting trap that is preprocessed to detect the possible insects; (a) original image of a hunting trap with whiteflies; (b) preprocessed gradient of magnitude for whiteflies from the original image.
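The Sobel-mask preprocessing shown in Fig. 3b can be sketched as follows. This is an illustrative example rather than the paper's code; the synthetic image and the threshold value are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

# Sobel-mask preprocessing sketch (cf. Fig. 3b): compute the gradient
# magnitude and threshold it to flag candidate insect pixels. The threshold
# here is an arbitrary illustration, not the paper's setting.
def gradient_magnitude(img):
    gx = sobel(img, axis=1)  # horizontal derivative
    gy = sobel(img, axis=0)  # vertical derivative
    return np.hypot(gx, gy)

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0          # synthetic bright "insect" on a dark trap
mag = gradient_magnitude(img)
mask = mag > 0.5 * mag.max()     # edges of the object light up
print(mask.sum())
```

The resulting mask separates candidate objects from the uniform background, which is the input the LOSS classification stage expects.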


Fig. 4. LOSS algorithm applied in first instance to detect the possible insects in a hunting trap; (a) preprocessed image of Thrips in a hunting trap; (b) classified image according to LOSS algorithm of the Thrips using the preprocessed image (a).

Fig. 5. (a) Original image of the pest in the hunting trap. (b) Image preprocessing. (c) Target of features invariant to scaling with the LOSS V2 algorithm. (d) Identification with the proposed algorithm. (e) Graphic of the labeling of potential elements (insects) according to their geometric characteristics with the LOSS V2 algorithm.


insects had been analyzed. Comparing the objects with the database of these insects led us to the identification and detection of the possible pests present. The invariant features obtained for each object present in the processed images were cleansed through an algorithm to ensure that the required processing was carried out as quickly and efficiently as possible. The following diagram (Fig. 2) shows the general procedure for analyzing each processed image. The characteristics in the database are area, centroid, solidity, orientation, bounding box, eccentricity, pixels per area, perimeter and diameter. When processing images, a comparison was made between the images acquired and the database of main insect characteristics. Characteristics were taken from each insect according to the properties obtained for each object. Each of the different types of pests was then identified in different types of images and, despite the distance and degree of object rotation, they could be identified by applying the techniques invariant to scaling. This resulted in a new and more powerful algorithm that can identify five different types of pests in any acquired image.
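The compare-against-the-database loop described above can be sketched as follows. The whitefly and Thrips feature ranges are reproduced from Fig. 1; the structure of the database and the matching rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the database-comparison step: each species is stored as ranges of
# invariant features, and an object is tested against each entry in turn.
# Whitefly and Thrips ranges follow Fig. 1; other species would be added alike.
SPECIES_DB = {
    "whitefly": {"eccentricity": (0.75, 0.95), "solidity": (1.0, 1.0), "area": (75, 175)},
    "thrips":   {"eccentricity": (0.95, 0.98), "solidity": (1.0, 1.0), "area": (55, 90)},
}

def identify(obj):
    # Compare the object's features with each database entry; first match wins.
    for species, ranges in SPECIES_DB.items():
        if all(lo <= obj[f] <= hi for f, (lo, hi) in ranges.items()):
            return species
    return None  # dust, debris, or an insect outside the study

print(identify({"eccentricity": 0.85, "solidity": 1.0, "area": 120}))  # -> whitefly
print(identify({"eccentricity": 0.96, "solidity": 1.0, "area": 70}))   # -> thrips
```

An object matching no entry is discarded, mirroring the paper's handling of dust and other adhered material.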


3.3. Scale invariant feature transform (SIFT)

After identifying the clusters of different types of objects or pests (Fig. 5), the scale invariant features (SIFT) need to be obtained. All features invariant to scaling were taken from a sample image of a classified object or pest and stored in a database. Thus, through a database comparison, all objects in a processed image could be identified based on their invariant features (SIFT). The classification and identification was possible due to the accuracy resulting from the LOSS/SIFT combination. For the identification and classification of the whitefly, LOSS was sufficient. However, for the simultaneous identification of diverse pests, it was necessary to implement a vision transformation invariant to scaling (SIFT). The integration of the LOSS object features with the SIFT invariant object features made it possible to classify the insects into clusters, thereby making it possible to discriminate between pests. This new process was called the LOSS V2 algorithm. Each insect was isolated to determine its geometric characteristics; this procedure was carried out for each of the images acquired from the different types of pests in order to obtain the features invariant to scaling. As an example, we show the identification of

2.4. Data adjustment

The images obtained from the 88 hunting trap samples were processed with the new algorithm developed in this work, called LOSS V2. The number of objects or pests on each trap was counted by hand, and the LOSS V2 algorithm was applied for the automatic counting. Once the data was processed, the results were presented as a table for each pest, correlating the number of pests counted by hand with the data obtained automatically via the proposed counting algorithm.

3. Results and discussion

3.1. Preprocessing

During the fieldwork, the task of quantifying and manually classifying each of the insect traps was carried out with the help of an expert entomologist, who determined the pests present in the trap set. With all the data in the register, a general block diagram of the procedure intended to continually identify pests was developed. Once the algorithm was detailed with different types of images (Fig. 3), the objects in the images were classified into different classes, and then divided into groups from which properties were obtained (according to area, solidity, eccentricity, perimeter, centroid, etc.). This allowed for the formation of possible pest clusters. An image was obtained from each trap and was identified and classified using the LOSS algorithm. The results obtained allowed for the detection of objects (possible pests) in the images.

3.2. Classification

To classify objects in processed images, it was necessary to separate them into clusters to form groups through the LOSS algorithm and discriminate between the different types of insects inside the image (Fig. 4). The experiment was done with hunting trap samples with whiteflies. In this image, the objects (whiteflies) are separated from the other objects in the background.
This algorithm, based on the LOSS algorithm (Solis-Sánchez et al., 2009), must identify features invariant to scaling, such as the projection area, the eccentricity and the solidity of the segmented insects or objects in the image.
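The projection area and eccentricity named here can be computed from a binary object mask using second-order central moments, following Eq. (1). This is an illustrative sketch (solidity, which needs a convex hull, is omitted), not the paper's code; the test shapes are synthetic.

```python
import numpy as np

# Projection area and eccentricity from central moments (cf. Eq. (1)).
# Eccentricity is 0 for a perfectly round object and 1 for a line-shaped one.
def area_and_eccentricity(mask):
    ys, xs = np.nonzero(mask)
    area = len(xs)                       # projection area in pixels
    xc, yc = xs.mean(), ys.mean()        # centroid
    mu20 = ((xs - xc) ** 2).sum()        # second-order central moments
    mu02 = ((ys - yc) ** 2).sum()
    mu11 = ((xs - xc) * (ys - yc)).sum()
    ecc = ((mu20 - mu02) ** 2 + 4 * mu11 ** 2) / (mu20 + mu02) ** 2
    return area, ecc

yy, xx = np.mgrid[:21, :21]
circle = (yy - 10) ** 2 + (xx - 10) ** 2 <= 64   # round blob -> ecc near 0
line = np.zeros((21, 21), bool)
line[10, 2:19] = True                            # line segment -> ecc = 1
print(area_and_eccentricity(circle), area_and_eccentricity(line))
```

Because this ratio of moments is unchanged by translation and scaling of the object, it can serve as one of the scale-invariant descriptors the classification step relies on.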

Fig. 6. (a) Original image of the pest of the hunting trap sample. (b) Target of features invariant to scaling with the LOSS V2 algorithm. (c) Original image of the pest with two samples of Thrips for identification; (d) result of the LOSS V2 algorithm. (e) Original image of the pest with three samples of Thrips for identification; (f) invariant feature extraction and result of the LOSS V2 algorithm. (g) Graphic from the identification of (f), where the three insects are seen isolated according to their features.


Fig. 7. Hunting trap samples collected from the greenhouses and fields with different kinds of insects.

Thrips. These sets of images (Fig. 6) show the original image from which all features, including SIFT, were taken, making possible the classification and identification of the pest. Scatter plots of area versus eccentricity, and area versus solidity, were used to visualize the properties of each one of the insects and to define the possible clusters which delimit the classes. In the sample image there are different types of pests, mostly Thrips. Using LOSS V2, the Thrips were isolated and specifically identified. The same procedure was applied to each trap to define groups and thus identify the classes. The algorithm quantified the number of objects contained in each trap.

3.4. Data adjustment

An analysis and comparison of the number of objects and pests found was undertaken between manual quantification and the LOSS V2 algorithm (machine vision). The manually quantified number of pests is very similar to that obtained by machine vision. The observed differences in the images acquired from the hunting trap samples (Fig. 7) were due to the lighting conditions at the time of day. To see these results graphically and obtain the correlation for our experiment, we developed the graph in Fig. 8. The graph shows the high correlation between manual counting and machine vision counting
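The determination coefficient reported for this comparison can be computed from paired counts as the squared Pearson correlation. The counts in this sketch are invented purely to illustrate the computation; they are not the paper's data.

```python
import numpy as np

# Determination coefficient R^2 between manual and machine-vision counts,
# computed as the squared Pearson correlation coefficient. The counts below
# are made up for illustration only.
manual  = np.array([12, 30, 7, 45, 22, 18])
machine = np.array([11, 31, 7, 44, 23, 17])

r = np.corrcoef(manual, machine)[0, 1]
r_squared = r ** 2
print(round(r_squared, 3))
```

An R² close to 1 indicates that the automatic count explains nearly all of the variation in the manual count across traps, which is the criterion used in the paper's evaluation.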

Fig. 8. Graphics showing correlations between manual count and machine vision count using LOSS V2 from hunting trap samples located in different experimental fields and greenhouses.


and the different insects are classified according to their invariant features. It is significant to note that the correlation coefficient between manual and machine vision whitefly counting was much higher with the LOSS V2 algorithm than with the LOSS algorithm alone. In addition, the LOSS V2 is faster, and the invariant features obtained with the SIFT make it possible to better discriminate between and classify pests. Of the insects identified visually in each of the samples collected in the greenhouses (Fig. 7), approximately the same number was identified with the LOSS V2 algorithm, and correct classification was achieved of the groups formed by the six types of pests that were sought. This classification of the groups was achieved by identifying the previously mentioned properties of each object present in the image, in some cases discriminating different objects in the same group. These objects were then eliminated because they did not meet the requirements of our database. This precise classification and identification could be made despite the presence of several different kinds of insects as well as dust and other materials in the hunting trap. As a result of this investigation, a new algorithm (Fig. 1) was developed that combines the LOSS algorithm and the scale invariant feature transform (SIFT). The algorithm allows us to detect and identify pest presence in the crop and its severity, thereby making it possible for the greenhouse manager to take the appropriate preventive or corrective actions. The application of the machine vision system was named LOSS V2, an expanded version of the LOSS algorithm described in a previous work.

4. Conclusions

Integrating the SIFT and LOSS algorithms enables us to discriminate between pests such as Diabrotica (Coleoptera: Chrysomelidae), Lacewings (Lacewings spp.), Aphids (A. gossypii Genn.), Glassy (Empoasca spp.), Thrips (T. tabaci L.), and Whitefly (B. tabaci Genn.).
This work gave rise to a new algorithm, named LOSS V2, with which the simultaneous classification and identification of five pests was possible at a high level of correlation with manual counts. This paper presents the results of using the new LOSS V2 algorithm to analyze the geometrical characteristics of insects that affect greenhouse and open-field production, allowing us to establish a classification of the species. The most important result is the improved accuracy and precision given by the LOSS V2, in spite of the different types of insects that exist in a crop, achieving high determination coefficients ranging from R2 = 0.96 to R2 = 0.99 for all the species under study. This could be a very valuable and time-saving tool for insect experts and entomologists.

Acknowledgements

We wish to give special acknowledgement to CONACyT for supporting the present research, and to Ms. Cindy Esparza Haro of the Translation and Edition Office of the Universidad Politécnica del Sur


de Zacatecas (UPSZ) for assisting with the English content of this document, and to Maureen Sophia Harkins for the final revision. We also wish to acknowledge Dr. Rafael Bujanos-Muñiz for his valuable support regarding entomological aspects.

References

Allen, W.A., Rajotte, E.G., 1990. The changing role of extension entomology in the IPM era. Annual Review of Entomology 35, 379–397.

Boissard, P., Martin, V., Moisan, S., 2008. A cognitive vision approach to early pest detection in greenhouse crops. Computers and Electronics in Agriculture 62 (2), 81–93.

Dent, D., 1999. Insect Pest Management, second ed. CABI Publishing, pp. 14, 30.

Jahne, B., 2005. Digital Image Processing. Springer, pp. 506–513.

Kilian, J., 2001. Simple Image Analysis by Moments (WWW document). http://vaderio.googlepages.com/SimpleImageAnalysisbyMoments.pdf.

Knight, A., Hilton, R., VanBuskirk, P., Light, D., 2006. Using Pear Ester to Monitor Codling Moth in Sex Pheromone Treated Orchards. Oregon State University Extension Service EM 890, p. 8.

Koumpouros, Y., Mahaman, B.D., Maliappis, M., Passam, H.C., Sideridis, A.B., Zorkadis, V., 2004. Image processing for distance diagnosis in pest management. Computers and Electronics in Agriculture 44, 121–131.

Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60 (2), 91–110.

Neethirajan, S., Karunakaran, C., Jayas, D.S., White, N.D., 2007. Detection techniques for stored-product insects in grain. Food Control 18, 157–162.

Papageorgiou, C.P., Oren, M., Poggio, T., 1998. A general framework for object detection. In: Proceedings of the IEEE International Conference on Computer Vision, Bombay, January 1998, p. 555.

Ridgway, C., Davies, E.R., Chambers, J., Mason, D.R., Bateman, M., 2002. Rapid machine vision method for the detection of insects and other particulate biocontaminants of bulk grain in transit. Biosystems Engineering 83 (1), 21–30.

Smith, R.H., Boutwell, J.L., Allen, J.W., 1983. Evaluating Practice Adoption: One Approach (WWW document). www.joe.org/joe/1983september/83-5-a5.pdf.

Smith, R.F., Reynolds, H.T., 1966. Principles, definitions and scope of integrated pest control. In: Proceedings of FAO (United Nations Food and Agriculture Organisation) Symposium on Integrated Pest Control 1, pp. 11–17.

Sobel, I., Feldman, G., 1973. A 3 × 3 Isotropic Gradient Operator for Image Processing. Presented in a talk at the Stanford Artificial Intelligence Project in 1968, unpublished but often cited; orig. in Duda, R., Hart, P., Pattern Classification and Scene Analysis, John Wiley and Sons, pp. 271–272.

Solis-Sánchez, L.O., García-Escalante, J.J., Castañeda-Miranda, R., Torres-Pacheco, I., Guevara-González, R.G., 2009. Machine vision algorithm for whiteflies (Bemisia tabaci Genn.) scouting under greenhouse environment. Journal of Applied Entomology 133 (7), 546–552.

Tilman, D., Cassman, K.G., Matson, P.A., Naylor, R., Polasky, S., 2002. Agricultural sustainability and intensive production practices. Nature 418, 671–677.

Torralba, A., Murphy, K.P., Freeman, W.T., 2004. Sharing features: efficient boosting procedures for multiclass object detection. In: Proceedings of the 18th IEEE Conference on Computer Vision and Pattern Recognition, Washington, July 2004, pp. 762–769.

Vale, G.A., 1982. The trap-orientated behaviour of tsetse flies (Glossinidae) and other Diptera. Bulletin of Entomological Research 72, 71–93.

Villamizar, M., Sanfeliu, A., 2006. Cómputo de Características Invariantes a la Rotación para el Reconocimiento de Distintas Clases de Objetos (WWW document). http://www.iri.upc.edu/groups/lrobots/publications/villamizar ja06.pdf. XXVII Jornadas de Automática, Almería, 2006.

Viola, P., Jones, M., 2001. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 15th IEEE Conference on Computer Vision and Pattern Recognition, Kauai, December 2001, pp. 511–518.

Ward, S.A., Rabbinge, R., Mantel, W.P., 1985. The use of incidence counts for estimation of aphid populations. 1. Minimum sample size for required accuracy. Netherlands Journal of Plant Pathology 91, 93–99.

Wise, J.C., Gut, L.J., Isaacs, R., 2007. Michigan Fruit Management Guide. Department of Plant Pathology and Department of Horticulture, Michigan State University.

Yokono, J.J., Poggio, T., 2004. Rotation invariant object recognition from one training example. Technical Report 2004-010, MIT AI Lab.