An approach based on digital image analysis to estimate the live weights of pigs in farm environments


Computers and Electronics in Agriculture 115 (2015) 26–33


Apirachai Wongsriworaphon a, Banchar Arnonkijpanich b,c, Supachai Pathumnakul a,*

a Supply Chain and Logistics Research Unit, Department of Industrial Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen 40002, Thailand
b Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
c The Centre of Excellence in Mathematics, CHE, Si Ayutthaya Rd., Bangkok 10400, Thailand

Article info

Article history: Received 28 September 2014; Received in revised form 22 March 2015; Accepted 8 May 2015.

Keywords: Locally linear embedding; Neural network; Pig weighing system; VQTAM

Abstract

In this study, an estimation system for the live weights of pigs is proposed that could be practically employed in a real farm environment without disturbing the animals. This approach is based on computer-assisted visual image capture and a supervised learning algorithm known as vector-quantized temporal associative memory (VQTAM). The method is composed of three parts: boundary detection, feature extraction, and pattern recognition. To identify an image's edge, a method based on user interaction via mouse-clicking on the pig image is employed to avoid edge detection errors when the pig's image and its background are not in contrast. Two image features, (1) the average distance from the pig's centroid to the boundary points and (2) the pig's perimeter length, are extracted and used as the inputs of VQTAM. Next, the solutions from VQTAM are improved by an autoregressive model (AR) and locally linear embedding (LLE). This approach has been examined using a specific farm for a case study. The results indicate that the method based on VQTAM and improved by LLE provides the most accurate prediction, with an error rate of less than 3% on average.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

The live weight of an animal is an important factor for managing various stages of its supply chain. At the farm stage, live weight information can be used to estimate animal growth, uniformity, feed conversion efficiency, and disease occurrence (Menesatti et al., 2014). Animal-based food products are usually related to the sizes of the animals used in production; e.g., shrimp products are processed from different shrimp sizes, which are expressed in terms of the number of shrimp per pound or per kilogram (Pathumnakul et al., 2009), and pork-based products are processed from various pig primal cuts and their sizes (Khamjan et al., 2013). In swine supply chain management, matching suitably sized pigs to food products during the slaughtering and food processing stages could improve production efficiency by reducing raw material procurement, inventory, shortage and spoilage costs. The ability to accurately estimate pig sizes before setting up harvesting or procurement plans is therefore crucial for the industry (Apichottanakul et al., 2012).


Methods to estimate animal size, which is typically expressed as the animal's weight, have been widely studied and applied. The most traditional methods are conducted by eye and hand, based on the personal judgment of the buyer or stockman (Wu et al., 2004), or via direct weighing of the animal. One of the most widely studied methods has been the body measurement approach. An animal's body measurements have been used to distinguish variations in animal sizes and shapes (Lanari et al., 2003; Salako, 2006) and also to estimate live weights (Schofield et al., 1999; Slippers et al., 2000; Pope and Moore, 2002; Topal and Macit, 2004; Mollah et al., 2010; Tasdemir et al., 2011; Menesatti et al., 2014). Although direct weighing provides the most accurate result, it is a cumbersome and time-consuming task (Brandl and Jorgensen, 1996) and can cause injury and stress to animals and stockmen when the animal is forced onto the scale. Not only direct weighing on a ground scale but also various body measurement methods that require direct contact with an animal's body, such as measuring a pig's girth behind the front legs (Pope and Moore, 2002), can be dangerous because the animal becomes stressed while being forced into position for an accurate measurement (Tasdemir et al., 2011). To avoid direct contact with animals' bodies, body measurement methods based on computer-assisted visual images and


digital images have been studied and proposed in the literature, as in the review of Frost et al. (1997). Digital image-based methods have been applied to determine the live weights and growth of various animals, such as pigs (Brandl and Jorgensen, 1996; Pastorelli et al., 2006; Parsons et al., 2007; Wang et al., 2008), chickens (Mollah et al., 2010) and cattle (Tasdemir et al., 2011; Menesatti et al., 2014). Top- and side-view images of animals were mostly used to extract features, such as the projected area and perimeter (Wang et al., 2008), and to find the correlation between an image's features and a pig's live weight.

Most of the digital image-based methods for estimating a pig's live weight require the pig to be in an appropriate position and quite stationary (Wang et al., 2008). This requirement is not practical on a farm because it is difficult to place an animal in a required position and keep it stationary. Some previous studies sought to develop methods for measuring a pig's live weight while it was moving normally, avoiding the need for a fixed, motionless position, such as the studies of Schofield et al. (1999) and Wang et al. (2008). Schofield et al. (1999) used pig images taken while the pigs were in the feeding station area; images of good quality were obtained under additional low-level illumination in that area. A method that allows for pig movement during imaging was proposed by Wang et al. (2008). In their work, pigs were allowed to walk through a passage from one pen to another, and images of the walking pigs were taken by a video camera. Complete images were selected and processed to extract significant features, which were then used to estimate the pig's live weight with an artificial neural network. Although the approach provided accurate estimation, fast operation and no stress to the pigs, the pig image must contain high contrast between the pig and the background, as required in the other methods. To obtain high-contrast pig images, the floor of the passage was covered by a black sheet carpet when whitish pigs were imaged.

It is clear that the accuracy of the estimation of a pig's live weight depends on the quality of the digital images and the efficiency of the image segmentation process. Many studies have introduced a variety of techniques for image segmentation, but it remains difficult to obtain good solutions (Ilea and Whelan, 2011; Minagawa et al., 2011). Most of the image-processing techniques used to estimate a pig's live weight have two main practical limitations. First, images should be taken of individual pigs, and second, it is necessary to provide a suitable environment, such as a dark background, to distinguish the pig body from the surroundings. These two constraints appear to be impractical in a farm operation.

To address these problems, in this study we design and develop a model for the approximation of pig live weights using machine vision, which can estimate pig live weights on a farm without interfering with the pigs. The approach consists of two main steps. The first step is to process digital images of the pigs to obtain various physical features. The images of the pigs are captured by a fixed overhead digital camera, and there could be more than one pig in a photograph. To reduce the errors that arise in pig body edge detection due to unclear circumstances or backgrounds, the edge of a pig image is detected manually by the user instead of using computerized edge detection techniques. Then, the features of the image are extracted. Afterward, an alternative supervised learning algorithm, vector-quantized temporal associative memory (VQTAM) (Barreto and Araujo, 2004), is used to approximate input–output mappings between the feature space and the live weight space. This technique is developed for pattern recognition and to estimate the pigs' live weights from the extracted features. Moreover, the accuracy of the weight estimation is improved by an autoregressive model (AR) and locally linear embedding


(LLE) (Roweis and Saul, 2000). The innovative parts of this study are that the developed method can be efficiently applied in a real farm environment without any special setup, and that the proposed animal weight prediction algorithm based on VQTAM is well suited to the addressed problem.

2. Materials and methods

2.1. Materials

The data used in this study were obtained from a case study farm that belongs to one of the largest pork processors in Thailand. The breed is called B91, a crossbreed of Large White, Landrace and Duroc with a whitish color. The pigs' ages were between 22 and 27 weeks. Hundreds of top-view images of the pigs were taken. For this study, 456 good-quality images of pigs with weights ranging from 88 to 132 kg were selected. The live weight of each individual pig was measured on a weighing scale after the pig was imaged. The weight range of 88–132 kg is the range of commercial pig live weights in Thailand. From the 456 images, 406 were chosen to form the training dataset, whereas the others were used for testing.

2.2. Image acquisition and selection

The pigs were imaged by a "Sony DSC-HX5" digital camera at a 640 × 480 pixel resolution while they were standing in the setup area (90 × 160 m2), which was perpendicular to the camera. The camera was mounted in a fixed position 2.80 m above the farm floor (see Fig. 1). The images were manually selected for processing under the criteria that the entire body of the pig should be inside the rectangular setup area and that the image should be of good quality, as shown in Fig. 2.

2.3. Boundary detection

To obtain the feature data for the prediction model, the boundary data of a pig's body image are required. Because a pixel coordinate is only a location in an image, it cannot be used directly for numerical computation; therefore, in practice, it is necessary to convert the pixel locations to 2D vectors (i.e., x, y coordinates) in the Cartesian coordinate system. The vector-valued quantities can then be computed using standard mathematical techniques. In our work, the pixel locations of the image's boundary are identified manually. A graphical user interface (GUI) program is developed

Fig. 1. The installation of a digital camera to acquire a top-view picture.


to assist the operator with identifying the boundary pixel locations by clicking on the image with the mouse. As suggested by Wang et al. (2008), to obtain a more accurate live weight approximation, the head and tail of the pig should be removed from the image, as shown in Fig. 3(a). The pixel locations are then converted to the Cartesian coordinate system by an algorithm developed around the GINPUT function of the commercial software MATLAB, running on a laptop with a 1.7 GHz Intel Core i5-3317U processor (see Fig. 3(b)).

Our proposed boundary detection process differs from most of those in the literature, in which image segmentation is the most commonly employed method (Brandl and Jorgensen, 1996; Wang et al., 2008). The main disadvantage of the segmentation method is that the final boundary is sensitive to the intensity of light. Another weakness is that the difference between the pig color and the background color must be clear (Frost et al., 1997). The image segmentation technique works well in a controllable environment, such as with a dark-colored background. In practice, however, the level of illumination in animal housing is quite low, and the actual environment does not yield high-quality image segmentation. For example, a pig image taken on a farm (Fig. 4(a)) was processed by the Sobel, Canny, and Prewitt algorithms, which are embedded in the edge function of MATLAB. The edges detected in the pig image are shown in Fig. 4(b)–(d), respectively; the quality of these segmentations was not good enough for further processing. Our proposed method, which relies on the operator's naked-eye vision to detect the image boundary manually, is more flexible and practical, and the environment has little effect on the quality of the boundary solution.
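For illustration, the manual boundary-picking step described above could be reproduced in Python; the sketch below is a minimal example under our own assumptions rather than the authors' implementation. It uses matplotlib's ginput, which plays the role of MATLAB's GINPUT, and the image file name is a placeholder.

```python
# Minimal sketch of the manual boundary-picking step (assumes a matplotlib GUI backend).
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

def pick_boundary_points(image_path):
    """Display a top-view pig image and let the operator click along the outline
    (head and tail excluded, as suggested by Wang et al., 2008).
    Returns the clicked points as an (n, 2) array of Cartesian (x, y) coordinates."""
    img = mpimg.imread(image_path)
    fig, ax = plt.subplots()
    ax.imshow(img)
    ax.set_title("Click along the pig's outline; press Enter when done")
    # n=-1: unlimited clicks until Enter; timeout=0: wait indefinitely
    clicks = plt.ginput(n=-1, timeout=0)
    plt.close(fig)
    pts = np.asarray(clicks, dtype=float)       # pixel (column, row) pairs
    # Flip the vertical axis so y increases upwards, giving ordinary Cartesian coordinates
    pts[:, 1] = img.shape[0] - pts[:, 1]
    return pts

# Example usage (hypothetical file name):
# boundary_vectors = pick_boundary_points("pig_topview_001.jpg")
```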

Fig. 2. An example of a good quality top-view image of a pig.

2.4. Feature extraction

After the boundary vectors of a pig's body are obtained, two major features are extracted: the average distance from the pig's centroid to the boundary points and the pig's perimeter length. These two features will be employed in our pig weight estimation model. The notation and the feature extraction algorithms are described as follows.

Notation
pt — the vector of boundary point t
boundary_vectors — the set of all boundary vectors (i.e., pt ∈ boundary_vectors, ∀t)
dt — the Euclidean distance between the pig's centroid and the boundary vector pt
avg_b — the average distance from the pig's centroid to the boundary vectors
length_p — the length of the pig's perimeter

Algorithm AVG_BOUNDARY_DIST(boundary_vectors)
1. begin
2. for each pt in boundary_vectors do
3.   let dt be the Euclidean distance between pt and the mean value of boundary_vectors.
4. compute the average distance

   $avg_b = \frac{1}{|boundary\_vectors|} \sum_{t=1}^{|boundary\_vectors|} d_t$

5. end

Algorithm LENGTH_OF_PERIMETER(boundary_vectors)
1. begin
2. for each pt in boundary_vectors do
3.   let dt be the Euclidean distance between pt and the previous vector p_{t-1}.
4. compute the length of the perimeter of the pig

   $length_p = \sum_{t=1}^{|boundary\_vectors|} d_t$

5. end
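As an illustration, the two algorithms above translate directly into a few lines of NumPy. The sketch below is a minimal re-implementation; treating the outline as a closed curve in LENGTH_OF_PERIMETER is our own assumption, since the pseudocode does not state whether the last point is joined back to the first.

```python
import numpy as np

def avg_boundary_dist(boundary_vectors):
    """AVG_BOUNDARY_DIST: average Euclidean distance from the centroid
    (mean of all boundary vectors) to each boundary point (avg_b)."""
    pts = np.asarray(boundary_vectors, dtype=float)   # shape (n, 2)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)        # d_t for every boundary point
    return d.mean()

def length_of_perimeter(boundary_vectors):
    """LENGTH_OF_PERIMETER: sum of Euclidean distances between consecutive
    boundary points (length_p); the distance from the last point back to the
    first is included (closed-outline assumption)."""
    pts = np.asarray(boundary_vectors, dtype=float)
    diffs = np.diff(np.vstack([pts, pts[:1]]), axis=0)  # p_t - p_(t-1), wrapped around
    return np.linalg.norm(diffs, axis=1).sum()

# Example: the two features fed to VQTAM for one pig image
# x_in = np.array([avg_boundary_dist(boundary_vectors),
#                  length_of_perimeter(boundary_vectors)])
```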

The algorithms for extracting the essential features of the pigs, i.e., the average boundary distance and the perimeter length, are also described as flow charts in Figs. 5 and 6, respectively.

2.5. Pattern recognition

An alternative supervised learning algorithm for defining the relationship between the features and the weight levels is developed in this section. In the task of system identification, the aim of supervised learning is to approximate continuous forward and inverse mappings between the input and output data spaces. By using the forward mapping, the feature set obtained in the previous step is converted proportionally into weight values. In the research literature, artificial neural network (ANN) models have been successfully applied to identify such mappings (Narendra and Parthasarathy, 1990; Barreto and Araujo, 2004). Some of the most popular supervised learning methods are the multilayer perceptron (MLP) and radial basis function (RBF) networks. Barreto and Araujo (2004) proposed a system identification technique based on the self-organizing map (SOM) for function approximation, instead of MLP and RBF. In their work, the SOM is used to approximate input–output mappings connecting the input and output data spaces. This technique is called vector-quantized temporal associative memory (VQTAM).

2.5.1. VQTAM for pig weight estimation

The VQTAM scheme is constructed on the SOM architecture. Originally, the standard SOM is categorized as an unsupervised learning algorithm designed to find the topological structure embedded within a multidimensional data space. A SOM consists of two major modules: the input space and the lateral lattice space.


Fig. 3. An example of the pixel boundary and Cartesian coordinates of a pig's image: (a) the pixel boundary detected manually; (b) the Cartesian coordinates.

Fig. 4. Edge detection of a pig's image under real farm conditions by the Sobel, Canny and Prewitt algorithms in MATLAB's edge function: (a) the original image; (b) edge detection by the Sobel method; (c) edge detection by the Canny method; (d) edge detection by the Prewitt method.

In the input space, the weight vectors $w_i^{in} \in \mathbb{R}^n$ are used to approximate the distribution of the input vectors by means of a clustering process on the training data $x_j^{in} \in \mathbb{R}^n$, $j = 1, \ldots, m$. In the 2D-lattice structure, a connectivity between the neurons is arranged in which each neuron $i$ corresponds to a weight vector $w_i^{in}$; this feature connects the two modules together. The main goal of the classical SOM is to transform n-dimensional patterns in the input space into a two-dimensional array of neurons, such that topological ordering and neighborhood preservation occur.

VQTAM was proposed to increase the efficiency of SOM at mapping patterns from the input space into an output space. The space of the outputs is included as a third module in the structure of the classical SOM. By combining the data in the output space, $x_j^{out} \in \mathbb{R}^z$, $j = 1, \ldots, m$, with $x_j^{in}$, a weight vector $w_i^{out} \in \mathbb{R}^z$ is also attached to the existing $w_i^{in}$, which belongs to neuron $i$. During training on both spaces, the sets of weight vectors $w_i^{in}$ and $w_i^{out}$ gradually converge to the centers of the clusters in the input space and output space, respectively. Note that, for a given finite set of training data, the training induced by VQTAM adapts all of the weight vectors iteratively according to the minimization of the quantization error in the input space. Then, a function relating $w_i^{in}$ and $w_i^{out}$ provides an explicit forward mapping from the input space to the output space.

In this work, we choose the feature set and the live weight of the pigs as the spaces of inputs and outputs, respectively. To enable the VQTAM to convert the feature set consisting of $avg_b$ and $length_p$ into the levels of pig weights, each input vector $x_j$ that is fed to the training algorithm is determined as follows.

$x_j = \left[ x_j^{in} \;\; x_j^{out} \right]^T$   (1)

$w_i = \left[ w_i^{in} \;\; w_i^{out} \right]^T$   (2)

$x_j^{in} = \left[ avg_b^{\,j} \;\; length_p^{\,j} \right]^T$   (3)

$x_j^{out} = \left[ W_j \right]$   (4)

where $x_j^{in}$ consists of the two features, i.e., the average distance from the boundary and the length of the perimeter of the j-th pig, and the vector $x_j^{out}$ contains the weight $W_j$ of the j-th pig. During the training stage, the neuron $i^*$ whose weight vector is closest to the input vector $x_j^{in}$ is selected as the winning neuron, such that $i^* = \arg\min_{\forall i} \{\| x_j^{in} - w_i^{in} \|\}$. The recursive formulas for updating each neural weight on both the input and output spaces are as follows:


$w_i^{in} \leftarrow w_i^{in} + \alpha(t)\, h(i^*, i; t)\, \left[ x_j^{in} - w_i^{in} \right]$   (5)

$w_i^{out} \leftarrow w_i^{out} + \alpha(t)\, h(i^*, i; t)\, \left[ x_j^{out} - w_i^{out} \right]$   (6)

The learning rate $\alpha(t)$ is a decreasing function of time, and $h(i^*, i; t)$ is a Gaussian neighborhood function that is defined by

$h(i^*, i; t) = \exp\!\left( - \frac{\| r_i - r_{i^*} \|^2}{2 \sigma^2(t)} \right)$   (7)

where $r_i$ and $r_{i^*}$ are the locations of the neurons $i$ and $i^*$ in the lateral lattice space, respectively, and $\sigma(t)$ is an exponentially decreasing function. A trained VQTAM network can then be used in the testing stage, such that each vector of the test set, $x_{test}^{in} = \left[ avg_b^{\,test} \;\; length_p^{\,test} \right]^T$, is used for the winner assignment, i.e., $i^* = \arg\min_{\forall i} \{\| x_{test}^{in} - w_i^{in} \|\}$. Afterward, the weight vector $w_{i^*}^{out}$ that corresponds to the winning neuron $w_{i^*}^{in}$ can be used as an estimate of the output. This arrangement means that an approximate weight of a test pig is obtained by using the following equation:

$W_{test} \approx w_{i^*}^{out}$   (8)

Such an extension of SOM to VQTAM requires an adequate number of weight vectors, but the appropriate number of neurons is difficult to predetermine. If the accuracy of the prediction must be preserved while the number of weight neurons decreases, the output value $w_{i^*}^{out}$ should be improved, e.g., by combining it with linear interpolation algorithms. In this paper, we propose to use two interpolation techniques to improve the accuracy of the weight prediction derived from VQTAM: the autoregressive model and the locally linear embedding model.

Fig. 5. Flow chart of the average boundary distance algorithm.

Fig. 6. Flow chart of the perimeter length algorithm.

2.5.2. Autoregressive (AR) model

A linear AR model of the k nearest neighbors of $w_{i^*}^{in}$ can be described as follows. Set a number k as the number of nearest neighbors, $w_{i_1}^{in}, \ldots, w_{i_k}^{in}$. The system of linear equations is then given by

$A V = B$   (9)

where $A$ is the regression matrix, which consists of $w_{i_1}^{in}, \ldots, w_{i_k}^{in}$, as follows:

$A = \begin{bmatrix} \left[ w_{i_1}^{in} \right]^T \\ \vdots \\ \left[ w_{i_k}^{in} \right]^T \end{bmatrix}$   (10)

The data matrix $B$ consists of $w_{i_1}^{out}, \ldots, w_{i_k}^{out}$ and can be written as

$B = \begin{bmatrix} \left[ w_{i_1}^{out} \right]^T \\ \vdots \\ \left[ w_{i_k}^{out} \right]^T \end{bmatrix}$   (11)

By using the concept of the least squares problem, Eq. (9) can be solved by the pseudo-inverse matrix

$V = (A^T A)^{-1} A^T B$   (12)

Therefore, after the VQTAM training process is finished, an estimate of a test pig weight is improved by the AR model using the following equation:

$x_{test}^{out} \approx V^T x_{test}^{in} \approx W_{test}$   (13)
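To make Eqs. (1)–(13) concrete, the following sketch implements a small VQTAM (joint input/output codebooks trained with the SOM update rules) together with the AR refinement. It is an illustrative re-implementation under our own assumptions; the grid size, learning-rate and neighborhood schedules, and the choice of k are placeholders rather than the settings used in the paper.

```python
import numpy as np

class VQTAM:
    """Minimal VQTAM: a SOM whose neurons carry an input codebook (features)
    and an output codebook (live weight), trained jointly (Eqs. (5)-(7))."""

    def __init__(self, grid=(7, 7), n_in=2, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        n = grid[0] * grid[1]
        self.w_in = rng.normal(size=(n, n_in))
        self.w_out = rng.normal(size=(n, n_out))
        # lattice coordinates r_i of every neuron
        gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
        self.r = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)

    def _winner(self, x_in):
        return np.argmin(np.linalg.norm(self.w_in - x_in, axis=1))

    def fit(self, X_in, X_out, epochs=200, alpha0=0.5, sigma0=3.0):
        T = epochs * len(X_in)
        t = 0
        for _ in range(epochs):
            for x_in, x_out in zip(X_in, X_out):
                alpha = alpha0 * (0.01 / alpha0) ** (t / T)   # decreasing learning rate
                sigma = sigma0 * (0.5 / sigma0) ** (t / T)    # shrinking neighborhood width
                win = self._winner(x_in)
                h = np.exp(-np.sum((self.r - self.r[win]) ** 2, axis=1)
                           / (2.0 * sigma ** 2))                        # Eq. (7)
                self.w_in += (alpha * h)[:, None] * (x_in - self.w_in)   # Eq. (5)
                self.w_out += (alpha * h)[:, None] * (x_out - self.w_out)  # Eq. (6)
                t += 1
        return self

    def predict(self, x_in):
        """Standard VQTAM estimate: output codebook of the winning neuron (Eq. (8))."""
        return self.w_out[self._winner(x_in)]

    def predict_ar(self, x_in, k=5):
        """AR refinement (Eqs. (9)-(13)): least-squares mapping fitted on the
        k prototypes nearest to the test feature vector."""
        idx = np.argsort(np.linalg.norm(self.w_in - x_in, axis=1))[:k]
        A = self.w_in[idx]                            # Eq. (10)
        B = self.w_out[idx]                           # Eq. (11)
        V, *_ = np.linalg.lstsq(A, B, rcond=None)     # Eq. (12), pseudo-inverse solution
        return x_in @ V                               # Eq. (13)

# Hypothetical usage with the extracted features (avg_b, length_p) and measured weights:
# model = VQTAM(grid=(7, 7)).fit(X_train_features, y_train_weights.reshape(-1, 1))
# w_hat = model.predict_ar(x_test_features, k=5)
```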

2.5.3. Locally linear embedding (LLE) model

To obtain a high-accuracy prediction, the number of neurons must be sufficiently large; however, this leads to a time-consuming training process. To overcome this restriction, in addition to the AR model, a locally linear embedding (LLE) (Roweis and Saul, 2000) technique is employed in this paper to improve the quality of the estimations. The LLE algorithm is one of the most widely used nonlinear dimensionality reduction techniques; it reduces the dimensionality of high-dimensional data while preserving local geometries in a low-dimensional representation of the original data. The application of LLE to our work is composed of three steps. In the first step, a predetermined number k of nearest neighbors is assigned. Thus, each vector $x_q^{in}$ in the test set can be written as a linear combination of its k nearest neural weights:

$x_q^{in} = c_{q1} w_{i_1}^{in} + c_{q2} w_{i_2}^{in} + \ldots + c_{qk} w_{i_k}^{in}$   (14)

In the second step, for each vector $x_q^{in}$, the coefficients of the linear combination are calculated to obtain the best representation of the local geometries of the input space. In this step, the LLE attempts to reconstruct the k coefficients, which corresponds to minimizing the reconstruction error that appears in the cost function

$\phi(C) = \sum_{\forall q} \left\| x_q^{in} - \sum_{l=1}^{k} c_{ql} w_{i_l}^{in} \right\|^2$   (15)

where $c_{ql}$ is the unknown coefficient matrix. The cost function is minimized under two constraints. First, each vector $x_q^{in}$ is reconstructed only from its nearest neural weights, i.e., $c_{ql} = 0$ if $w_{i_l}^{in}$ is not a neighbor of $x_q^{in}$. Second, the coefficients sum to one, i.e., $\sum_{l=1}^{k} c_{ql} = 1, \; \forall q$. The optimal matrix $c_{ql}$ can be efficiently computed by solving a constrained least squares problem. In the third step, in the output space, the estimates of the test pig weights are improved using the coefficients from the previous step:

$x_q^{out} \approx W^{q} \approx c_{q1} w_{i_1}^{out} + c_{q2} w_{i_2}^{out} + \ldots + c_{qk} w_{i_k}^{out}$   (16)
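A compact sketch of the three LLE steps of Eqs. (14)–(16) applied to the trained prototypes is shown below. It follows the standard constrained least-squares solution of Roweis and Saul (2000); the small regularization term is our own addition for numerical stability and is not part of the formulation above.

```python
import numpy as np

def lle_refine(x_in, w_in, w_out, k=5, reg=1e-3):
    """Refine a VQTAM estimate with locally linear embedding (Eqs. (14)-(16)).

    x_in  : (d,) test feature vector (avg_b, length_p)
    w_in  : (n, d) input-space prototypes of the trained VQTAM
    w_out : (n, 1) output-space prototypes (weight levels)
    """
    # Step 1: find the k prototypes nearest to the test vector.
    idx = np.argsort(np.linalg.norm(w_in - x_in, axis=1))[:k]
    neighbors = w_in[idx]                     # (k, d)

    # Step 2: reconstruction coefficients c minimizing ||x_in - sum_l c_l w_l||^2
    # subject to sum_l c_l = 1 (Eq. (15)), solved via the local Gram matrix.
    Z = neighbors - x_in                      # shift the neighbors to the test point
    G = Z @ Z.T                               # (k, k) local Gram matrix
    G += reg * np.trace(G) * np.eye(k)        # regularization for numerical stability
    c = np.linalg.solve(G, np.ones(k))
    c /= c.sum()                              # enforce the sum-to-one constraint

    # Step 3: combine the corresponding output prototypes with the same coefficients (Eq. (16)).
    return c @ w_out[idx]

# Hypothetical usage with the VQTAM sketch above:
# w_hat = lle_refine(x_test_features, model.w_in, model.w_out, k=5)
```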

2.5.4. VQTAM-based pattern recognition

For the pattern recognition process, we propose to use an alternative supervised learning algorithm, VQTAM, instead of the popular MLP neural network; this choice is the most prominent feature of the present paper. A key reason for using VQTAM is that MLP is very sensitive to the initialization of the parameters contained in feedforward neural networks. In addition, MLP performance depends heavily on the values of these parameters, including, for example, the initial weight and bias values, the number of nodes in the hidden layer, and the compressed outputs of the neurons obtained through the sigmoid activation function. MLP uses the backpropagation (BP) learning procedure, which is based on a gradient descent algorithm for training the network. Note that gradient descent-based optimization depends directly on the shape of the error surface; if the error surface is multimodal with a large number of local minima, MLP can easily become stuck in local optima. Thus, MLP training suffers from sensitivity to the parameter settings and initial conditions of the adjustable parameters. VQTAM, derived from SOM, can overcome these limitations. SOM can be categorized as a neural map-based method, like k-means. Among these techniques, k-means is very sensitive to the initialization of the prototype distribution, which can lead to convergence in local minima; therefore, k-means should be improved by combining it with a topology-preserving algorithm such as SOM. The neighborhood-based topology preservation in SOM reduces the influence of the initialization and also prevents the cost function from converging toward a poor local optimum (Arnonkijpanich et al., 2011). Furthermore, the flexible

architecture of VQTAM has a positive effect on the classification performance. In this work, we use a prototype-based classification algorithm, i.e., VQTAM, instead of classification models that are derived from the connection weights between neurons, as in MLP. During training on both spaces, the prototypes, i.e., the weight vectors $w_i^{in}$ and $w_i^{out}$, gradually converge to the centers of the clusters in the input space and output space, respectively. If the given training data and the learned prototypes are sufficiently dense, then a set of prototypes can be fitted to the data manifold in both spaces, and a function of the relationship between $w_i^{in}$ and $w_i^{out}$ provides forward and inverse mappings between the input and output spaces. In this way, reasonably good classification performance is obtained. In addition, if the testing set resembles the training data in such a way that the SOM neurons, i.e., the prototypes, can be used for data interpolation, then VQTAM-based models offer the possibility of achieving better classification compared to MLP.

3. Results and discussion

In this section, the performance of the proposed pig weight estimation method is assessed. Three algorithms, (1) the standard VQTAM, (2) the AR-based VQTAM, and (3) the LLE-based VQTAM, were evaluated and compared. In the AR-based VQTAM and the LLE-based VQTAM, the solutions obtained by the standard VQTAM were improved by the autoregressive method (AR) and the locally linear embedding method, respectively. We trained the standard VQTAM, AR-based VQTAM, and LLE-based VQTAM with a two-dimensional rectangular neighborhood structure in which the neurons are arranged on a square grid, and compared the algorithms for various numbers of neurons: 25, 49, 100, and 169. From the set of 456 images, 406 were chosen as the training dataset, and the other 50 images composed the testing set. Each testing image was examined with 10 replications. The comparisons are shown in Fig. 7. The relative error is the percent deviation of the predicted weight from the actual weight:

$\text{Relative error} = \frac{\left| Weight_{Actual} - Weight_{predicted} \right|}{Weight_{Actual}} \times 100\%$

Fig. 7. The performance comparisons (relative error, %) of VQTAM, VQTAM + AR and VQTAM + LLE for 25, 49, 100 and 169 neurons.

The results show that the proposed algorithms can predict pig live weights with a deviation of less than 5%, on average, from the actual weights in the testing set. The algorithm that combines VQTAM and LLE outperformed the others; its solution was clearly the most accurate in all of the scenarios. In the case of 169 neurons (i.e., a 13 × 13 square grid), the deviation of the predicted pig weights from the actual weights was 2.94% on average when the algorithm combining VQTAM and LLE was employed. The classical VQTAM-based algorithm had the lowest performance. This finding could imply that the solution



obtained from VQTAM alone must be adjusted or fine-tuned by another algorithm, such as AR or LLE. The results also indicated that higher numbers of neurons offered more precise solutions for all of the proposed algorithms. This trend follows the work of Barreto and Araujo (2004), which suggested that the number of neurons should be sufficiently large to obtain a small approximation error. Although a large number of neurons could provide better solutions, there was a trade-off in terms of a longer computation time. The relation between the number of neurons and the computation time is presented in Fig. 8: the higher the number of neurons, the longer the computation time. The accuracy of our method is also close to that of the backpropagation-based method proposed by Wang et al. (2008). The correlation between the predicted live weights obtained by VQTAM + LLE and the actual weights of the 50 pigs in the test set is characterized by R² = 0.81, as shown in Fig. 9.
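The relative error used above and the per-pig summaries reported in Table 1 can be reproduced with a few lines of NumPy; the function and variable names below are illustrative placeholders, not part of the original implementation.

```python
import numpy as np

def relative_error(actual, predicted):
    """Percent deviation of the predicted weight from the actual weight."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.abs(actual - predicted) / actual * 100.0

def summarize(actual_weight, predicted_weights):
    """Per-pig repeatability summary as in Table 1: one actual weight and the
    predicted weights of the repeated images of that pig."""
    err = relative_error(actual_weight, predicted_weights)
    return {
        "range_kg": float(np.ptp(predicted_weights)),
        "min_error_pct": float(err.min()),
        "max_error_pct": float(err.max()),
        "avg_error_pct": float(err.mean()),
    }

# Example with pig no. 1 from Table 1:
# summarize(86.0, [82.02, 88.04, 85.74, 82.02, 88.03, 85.74, 82.03, 88.03, 85.74, 88.02])
# -> range ≈ 6.02 kg, min ≈ 0.30 %, max ≈ 4.63 %, avg ≈ 2.42 %
```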

Moreover, to examine the repeatability of the model under various environments, ten pigs with weights ranging from 86 to 119 kg were selected for testing. Each pig was imaged 10 times, at different times and in various positions, during its movement in the setup area. The results are shown in Table 1. For some of the sample pigs (numbers 3, 8, and 9), the errors of the predicted weights varied widely between the minimum and maximum; for example, for pig 8 the minimum error was only 0.07%, while the maximum error was approximately 8.04%. However, the predicted weight did not deviate much from the actual weight in any of the cases: the highest and average errors were less than 9% and 4%, respectively. This finding indicates that the method is somewhat sensitive to the environment and the pig's position, but it still yields a very accurate weight prediction.

4. Conclusions

Fig. 8. The relation between the number of neurons and the computation time (approximately 7.56, 12.94, 32.93 and 43.13 s for 5 × 5, 7 × 7, 10 × 10 and 13 × 13 neurons, respectively).

Fig. 9. Plot of the predicted weight (kg) based on VQTAM + LLE vs the actual weight (kg); the fitted line is y = 0.8522x + 15.778 with R² = 0.8198.

The objective of this study was to develop an estimation system for pig live weights that can be practically employed on a real farm without disturbing the animals. We propose a new approach based on computer-assisted visual images and artificial neural networks. The procedure consists of three modules: boundary detection, feature extraction and pattern recognition. To identify an image's edge, we suggest a method based on user interaction via mouse-clicking on the pig image instead of an auto-segmentation method, to avoid data distortion when the pig's image and its background do not have a high contrast. Each pig image is used to obtain the pixel locations of the pig's outline, which are converted from pixel coordinates to 2D vectors. The resulting boundary vectors yield the two essential features used in this work, i.e., the average distance from the pig's centroid to the boundary points and the pig's perimeter length. An artificial neural network method based on VQTAM has been developed to predict the pig live weights. The approach produces accurate results within a reasonable computation time; however, the quality of the prediction is somewhat sensitive to the environment and the pig's position.

Even though the proposed approach is practical, there are two main limitations that should be addressed in future studies. First, the efficiency of the method depends on the quality of the top-view images of the pigs, including the position and posture of the pigs. This requires the pigs to walk naturally or stand in the setup area, which occurs only by chance, so few good-quality images can be obtained. Second, in real situations, it might be possible to image only some of the pigs; not all of the pigs on the farm can be imaged and measured.

Table 1. The repeatability of the model under various environments.

| Pig No. | Actual weight (kg) | Image 1 (kg) | Image 2 (kg) | Image 3 (kg) | Image 4 (kg) | Image 5 (kg) | Image 6 (kg) | Image 7 (kg) | Image 8 (kg) | Image 9 (kg) | Image 10 (kg) | Range (kg) | Min error (%) | Max error (%) | Avg. error (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 86 | 82.02 | 88.04 | 85.74 | 82.02 | 88.03 | 85.74 | 82.03 | 88.03 | 85.74 | 88.02 | 6.02 | 0.30 | 4.63 | 2.42 |
| 2 | 92 | 91.02 | 90.55 | 87.05 | 91 | 90.51 | 87.04 | 91 | 91.01 | 90.57 | 87.04 | 3.98 | 1.07 | 5.39 | 2.52 |
| 3 | 96 | 101.59 | 100.63 | 90.24 | 91.61 | 99.25 | 97.39 | 96.9 | 96.9 | 96.9 | 90.58 | 11.35 | 0.94 | 6.00 | 3.45 |
| 4 | 101 | 100.21 | 99.12 | 100.21 | 100.21 | 100.21 | 99.34 | 100.21 | 100.21 | 100.21 | 100.21 | 1.09 | 0.78 | 1.86 | 0.98 |
| 5 | 107 | 108.05 | 108 | 108 | 108 | 108 | 108 | 108 | 108.19 | 108 | 108.12 | 0.19 | 0.93 | 1.11 | 0.97 |
| 6 | 108 | 106.37 | 106.99 | 105.31 | 104.89 | 103.81 | 104.56 | 104.56 | 103.23 | 104.89 | 106.99 | 3.76 | 0.94 | 4.42 | 2.63 |
| 7 | 110 | 108.42 | 109.82 | 106.43 | 113.36 | 108.24 | 112.08 | 112.71 | 111.26 | 115.14 | 113.5 | 8.71 | 0.16 | 4.67 | 2.29 |
| 8 | 115 | 110.34 | 114.56 | 115.08 | 105.75 | 109.86 | 114.01 | 110.82 | 111.49 | 106.29 | 110.36 | 9.33 | 0.07 | 8.04 | 3.62 |
| 9 | 118 | 115.48 | 114.98 | 115.48 | 111.99 | 111.88 | 108.08 | 117.11 | 109.99 | 115.48 | 115.48 | 9.03 | 0.75 | 8.41 | 3.73 |
| 10 | 119 | 121.51 | 121.51 | 120.52 | 121.51 | 116.94 | 121.51 | 121.51 | 118.63 | 121.51 | 120.52 | 4.57 | 0.31 | 2.11 | 1.73 |


Therefore, a method that can accurately predict the weight from other postures and over a larger area of the pig pen is an important topic for future research.

Acknowledgments

This work was supported by the Higher Education Research Promotion and National Research University Project of Thailand and the Office of the Higher Education Commission, through the Food and Functional Food Research Cluster of Khon Kaen University. The authors also acknowledge the Betagro Science Centre Company's assistance with collecting data, providing invaluable guidance and making this research possible. The second author was supported by the National Research Council of Thailand and Khon Kaen University, Thailand (Grant numbers: 560027 and 570016). This research is partially supported by the Centre of Excellence in Mathematics, the Commission on Higher Education, Thailand.

References

Apichottanakul, A., Pathumnakul, S., Piewthongngam, K., 2012. The role of pig size prediction in supply chain planning. Biosyst. Eng. 113, 298–307.
Arnonkijpanich, B., Hasenfuss, A., Hammer, B., 2011. Local matrix adaptation in topographic neural maps. Neurocomputing 74 (4), 522–539.
Barreto, G., Araujo, A., 2004. Identification and control of dynamical systems using the self-organizing map. IEEE Trans. Neural Networks 15 (5), 1244–1259.
Brandl, N., Jorgensen, E., 1996. Determination of live weight of pigs from dimensions measured using image analysis. Comput. Electron. Agr. 15, 57–72.
Frost, A.R., Schofield, C.P., Beaulah, S.A., Mottram, T.T., Lines, J.A., Wathes, C.M., 1997. A review of livestock monitoring and the need for integrated systems. Comput. Electron. Agr. 17, 139–159.
Ilea, D.E., Whelan, P.F., 2011. Image segmentation based on the integration of colour–texture descriptors—A review. Pattern Recogn. 44, 2479–2501.
Khamjan, S., Piewthongngam, K., Pathumnakul, S., 2013. Pig procurement plan considering pig growth and size distribution. Comput. Ind. Eng. 64 (4), 886–894.
Lanari, M.R., Taddeo, H., Domingo, E., Pérez Centeno, M., Gallo, L., 2003. Phenotypic differentiation of exterior traits in local Criollo goat population in Patagonia (Argentina). Arch. Tierz Dummerstorf 46 (4), 347–356.
Minagawa, H., Saito, S., Ichikawa, T., 2011. Image segmentation and techniques: a review. Int. J. Adv. Res. Technol. 1 (2), 118–127.
Menesatti, P., Costa, C., Antinucci, F., Steri, R., Pallottino, F., 2014. A low-cost stereovision system to estimate size and weight of live sheep. Comput. Electron. Agr. 103, 33–38.
Mollah, R., Hasan, A., Salam, A., Ali, A., 2010. Digital image analysis to estimate the live weight of broiler. Comput. Electron. Agr. 72 (1), 48–52.
Narendra, K.S., Parthasarathy, K., 1990. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks 1 (1), 4–27.
Parsons, D.J., Green, D.M., Schofield, C.P., Whittemore, C.T., 2007. Real-time control of pig growth through an integrated management system. Biosyst. Eng. 96 (2), 257–266.
Pastorelli, G., Musella, M., Zaninelli, M., Tangorra, F., Corino, C., 2006. Static spatial requirements of growing-finishing and heavy pigs. Livestock Sci. 105 (1–3), 260–264.
Pathumnakul, S., Piewthongngam, K., Khamjan, S., 2009. Integrating a shrimp-growth function, farming skills information, and a supply allocation algorithm to manage the shrimp supply chain. Comput. Electron. Agr. 66 (1), 93–105.
Pope, G., Moore, M., 2002. DPI Pig Tech Notes: Estimating Sow Live Weights Without Scales. Department of Primary Industries, Queensland, Australia.
Roweis, S.T., Saul, L.K., 2000. Nonlinear dimensionality reduction by locally linear embedding. Science 290 (5500), 2323–2326.
Schofield, C.P., Marchant, J.A., White, R.P., Brandle, N., Wilson, M., 1999. Monitoring pig growth using a prototype imaging system. J. Agric. Eng. Res. 72, 205–210.
Salako, A.E., 2006. Application of morphological indices in the assessment of type and function in sheep. Int. J. Morphol. 24 (1), 13–18.
Slippers, S.C., Letty, B.A., De Villiers, J.F., 2000. Prediction of body weight of Nguni goats. S. Afr. J. Anim. Sci. 30 (1), 127–128.
Tasdemir, S., Urkmez, A., Inal, 2011. Determination of body measurements on the Holstein cows using digital image analysis and estimation of live weight with regression analysis. Comput. Electron. Agr. 76 (2), 189–197.
Topal, M., Macit, M., 2004. Prediction of body weight from body measurements in Morkaraman sheep. J. Appl. Anim. Res. 25, 97–100.
Wang, Y., Yang, W., Winter, P., Walker, L., 2008. Walk-through weighing of pigs using machine vision and an artificial neural network. Biosyst. Eng. 100, 117–125.
Wu, J., Tillett, R., McFarlane, N., Ju, X., Siebert, J.P., Schofield, P., 2004. Extracting the three-dimensional shape of live pigs using stereo photogrammetry. Comput. Electron. Agr. 44 (3), 203–222.