Pattern Recognition Letters 17 (1996) 345-355
Visual guidance of a pig evisceration robot using neural networks

S.S. Christensen *, A.W. Andersen, T.M. Jørgensen, C. Liisberg

Risø National Laboratory, DK-4000 Roskilde, Denmark

Received 20 December 1995
Abstract

The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordinates of these points to the robot. An active vision strategy taking advantage of the generalisation capabilities of neural networks is used to locate the control points. A neural network PC-expansion board that provides a new classification every 180 μs is used to speed up the neural network processing.

Keywords: Active Vision; RAM neural networks; WISARD
1. Introduction

Consumers have in recent years become increasingly concerned about the risk of contamination of meat by pathogenic micro-organisms. In response to these concerns a project was initiated to develop better methods for separating and removing the internal organs of slaughtered pigs. The aim of the project was to reduce the risk of contaminating the pig meat by changing the current process used to remove the intestines, stomach and pluck set. The common practice in Europe is an evisceration technique where the organs and the digestive tract are removed separately from the carcass. First the rectum is taken out using a mechanical rectum loosener; the operator ties a knot on the rectum in order to limit the risk of contamination with intestinal contents. The gut set and the stomach
* Corresponding author. Email: [email protected].
are taken out after cutting the connective tissue. Subsequently the plucks, consisting of heart, lungs, liver, trachea, oesophagus, tongue and diaphragm, are removed. The head remains with the rest of the carcass. This process involves two workers on the slaughter line. Studies of the different sources of contamination were performed. Based on these studies a new automated process was developed where the intestines, the stomach and the pluck set are removed as a single entity. The automation of the process has several advantages:
• The tools can be disinfected between each pig. It is not feasible to get workers to disinfect their hands between each pig.
• The workers are relieved of a tiresome and hard operation (Van der Sluis, 1994).
• It is possible to obtain a consistent quality.
Pigs do however exhibit variations in size, appearance and exact placement of different organs. This complicates the task of the robotics path planning,
and a vision system was introduced to provide the robotics system with the coordinates of important control points. Based on the information about these predefined points, the robotics system is capable of calculating a cutting path (Wadie et al., 1994). An example of a control point is the point that determines the start of the cutting path. The pig hangs with the head downwards and the first cut is performed along the spine of the pig from the tail end towards the head. The cutting stops after cutting the kidney tab. There is a sharp bend of the spine at the starting position and it is usually possible to see part of the bone through the surrounding tissue.

As the vision system must supply 3D coordinates to the robot, a method of obtaining 3D coordinates is required. SIEMENS developed for this purpose a 3D system based upon the projection of a number of structured light patterns onto the pig carcass. These 3D data together with normal 8-bit grey-value 2D images form the input data for the image processing system. Fig. 1 illustrates schematically the different components of the proposed automatic pig evisceration system.
Fig. 1. The basic components of the automated evisceration system. A sensor system recording 2D grey-level data and 3D information. These image data are preprocessed and fed into an artificial neural network. The neural network calculates a focus control signal that defines the region sampled by the preprocessor. The focus control signal will move the focus point of the preprocessor until a control point is located. The coordinates of the control points are transmitted to the robot controller. The robot uses its tools to remove the intestines, the stomach and the pluck set.
It was the task of Risø to develop the image processing system for this project, and in this paper we present the algorithms we used to solve this task.

The timing requirements for the image processing were dictated by the line speed at the abattoir. Approximately 360-400 pigs are processed per hour. No individual process on the slaughter line is thus allowed to last for more than 10 seconds. The automated system consists of two processing stations. The pig arrives at the first station where the 3D and 2D image data are obtained. The pig is then moved to the next station where the robot removes the intestines, the stomach and the pluck set. This gives a maximum of 10 seconds to record the 3D and 2D data and to locate the control points. In order to calculate the cutting path the robot controller requires the data values before the pig arrives. When the time needed for the pig to pass between the two stations is also taken into account, the actual amount of time available at the first station is reduced to 6 seconds. As the recording of 3D data involves the projection of several structured light patterns, the amount of time left for the subsequent analysis of the image data is 3 seconds. Within these 3 seconds a total of 12 points must be located.

During the development of the system it was anticipated that several prototypes of tools and fixtures would have to be tested. The environment in which the image data were recorded was thus expected to change several times during the project, together with the set of control points that the image processor should locate. A concept that would simplify the process of adapting to these changes was needed. We selected a method based on the principle of learning by example. The basic idea is that the system should learn to locate a specific control point from a series of images on which an operator has manually marked the correct location of the control point. Adaptation to changes in the environment would thus only require the recording of a new set of images and manual marking of the control point on each of these images. The success of such a concept is of course very dependent upon the quality of the preprocessing of the image data and the learning algorithm. We chose an artificial neural network as our
adaptive system. A set of simple preprocessors was developed to prepare the image data for training the artificial neural network. The remainder of this paper is structured as follows: first we introduce the artificial neural network architecture used. Next, an image processing strategy is presented in which the artificial neural network is used to perform a guided search for control points on the pig images. Finally, we present results obtained with this system at the abattoir.
2. RAM-based neural networks
The use of artificial neural networks has been a topic of increasing interest during the last decade (Aleksander, 1990; McClelland and Rumelhart, 1986; Moody and Darken, 1989; Wong et al., 1995). In order for such an approach to be applicable to real-world problems, a fast implementation and a training procedure that provides the desired generalisation are required.
We have found that the RAM-based type of neural networks fulfils these requirements. One of the main advantages of using RAM neural networks is the fast training and the high classification speed. A number of RAM neural networks have been considered in the literature (Austin and Stonham, 1987; Albus, 1975; Gorse and Taylor, 1991; Jørgensen and Christensen, 1995; Miller, 1990). The type of RAM neural networks we are considering corresponds to the so-called WISARD architecture (Aleksander and Stonham, 1979).

The RAM neural networks consist of a number of Look Up Tables (LUTs) which act as classifiers. Each LUT probes a subset of the binary input data. The sampled bit sequence is used to construct an address. This address corresponds to a specific entry in the LUT. The number of output values from a given LUT entry is equal to the number of possible classes. For each class the output can take the value 0 or 1. A value of 1 corresponds to a vote for that specific class. The output vectors from all LUTs are added, and a winner-takes-all decision is made to perform the classification. Fig. 2 shows the overall structure of this system.
Fig. 2. The structure of a RAM-based neural network. The input vector is a binary pattern. A number of the input bits are used to construct an address (column entry) for look-up table 0, LUT 0. This address is used to access a specific column in the look-up table. The content of this column is read out as a binary vector. Different bit patterns are used to create addresses for the other look-up tables. The output vectors generated by the look-up tables are added together to produce votes for the different classifications. The classification obtaining most votes is selected.
In order to perform a simple training of the network the output values are initially set to 0. For each example in the training set the following steps are then carried out:
• Present the input and the target class to the network.
• For all LUTs find the addressed entry.
• Set the output value of the target class to 1 in all the addressed entries.
By use of this training strategy it is guaranteed that each training pattern always obtains the maximum number of votes. As a result the network makes no misclassifications on the training set (although ambiguous decisions might occur). One of the advantages of this architecture is that it is only necessary to present each training example once. Unlike most other architectures there is no need for a number of iterations through the training examples before they are learned. This property makes the architecture well suited for large training sets. For further details on this architecture, refer to Jørgensen et al. (1994).
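To make the LUT mechanism concrete, the following Python sketch implements a WISARD-style RAM network of the kind described above. The class name, the random input connectivity, the number of LUTs and the tuple size are illustrative assumptions, not the parameters used in the project; the input x is assumed to be a 1D numpy array of 0/1 bits.

import numpy as np

class RAMNet:
    """Minimal WISARD-style RAM network: each look-up table (LUT) samples a
    fixed random tuple of input bits, interprets them as an address, and
    stores one vote bit per class at that address."""

    def __init__(self, n_inputs, n_classes, n_luts=64, tuple_size=8, seed=0):
        rng = np.random.default_rng(seed)
        # Each LUT observes its own random subset of the binary input vector.
        self.tuples = [rng.choice(n_inputs, size=tuple_size, replace=False)
                       for _ in range(n_luts)]
        # One vote bit per (LUT, address, class), all initialised to 0.
        self.tables = np.zeros((n_luts, 2 ** tuple_size, n_classes), dtype=np.uint8)

    def _addresses(self, x):
        # Read the sampled bits of each tuple as a binary number (the column entry).
        weights = 1 << np.arange(len(self.tuples[0]))
        return [int(np.dot(x[t], weights)) for t in self.tuples]

    def train(self, x, target):
        # One-shot training: set the target-class bit in every addressed entry.
        for lut, addr in enumerate(self._addresses(x)):
            self.tables[lut, addr, target] = 1

    def classify(self, x):
        # Sum the output vectors of all LUTs and take the class with most votes.
        votes = np.zeros(self.tables.shape[2], dtype=int)
        for lut, addr in enumerate(self._addresses(x)):
            votes += self.tables[lut, addr]
        return int(np.argmax(votes)), votes

Because training only sets bits in the addressed entries, a single pass over the training set is sufficient, which is the property exploited for the large training sets described in Section 4.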
3. The image processing strategy

The 2D image data obtained from the sensor system are preprocessed and fed into the RAM-based neural network system. The neural network makes a classification of the preprocessed data. This classification is used to locate specific feature points on the pig carcass. The 3D data obtained from the 3D sensor system are not used to locate the control points initially, due to the time required to calculate the 3D coordinates. Instead, the 3D data are used only for a small number of points around the located control point. These 3D coordinates are then used to perform a final verification of the located control point. In the following we will focus on the processing of the 2D grey-level image data.

3.1. Search process
The basic idea is that the user records a number of representative images of pigs and marks a control
point on each of these images. An artificial neural network is then trained on these data to provide the coordinates of the control point. The processing time of the image processing system is proportional to the number of pixels that are processed. The size of the image is defined by the size of the search area and the required precision. We have a possible search area of approximately 50 cm by 75 cm and a required precision of 1 mm. An image size of 768 × 512 pixels was therefore selected. In order to speed up the processing of the images, a scheme that reduces the number of pixels sampled by the system is needed.

One might consider different strategies for training the artificial neural network. One approach would be to feed all image data into the neural network and train it to generate the x, y coordinates directly as its output. This direct approach is however not feasible when the image data consist of 768 × 512 pixels; this number is far beyond the number of input parameters such networks can handle in practice. One way to reduce the amount of input data processed by the artificial neural network would be to process data only within a small region. The artificial neural network could then be trained to determine whether the control point is located in the centre of the region. It would then be possible to locate the control point by scanning the image. The disadvantage of this method is that it requires a large number of classification steps; on average half the regions are processed before the control point is located.

Another approach, based on visual search and multiresolution, is used instead (Burt and van der Wal, 1990; Blake and Yuille, 1992; Gerrissen, 1991). The pigs are all oriented with the head pointing downwards. If we are looking for the tail of the pig and the first region contains the head of the pig, we know that the tail is positioned above the current region. If we take advantage of this kind of directional knowledge it is possible to reduce the number of regions investigated. The artificial neural network is accordingly trained to generate a direction vector pointing from the centre of the current region towards the control point. Fig. 3 illustrates how the position of the search region is guided by direction vectors.
Fig. 3. An illustration of the basic guided search concept. We want to locate the tail of the pig. In the first region we see the head. Even though we do not know the length of the pig, we know we must search above the current position and go to the right. In region 2 we have enough information to determine to go to the right. The tail is within region 3 and a smaller search region is required to make a more precise positioning.
The RAM-based neural network architecture requires that the output direction vectors are quantized into a set of discrete directions. A set of 8 direction vectors was selected. The algorithm is as follows: initially, pixels within a restricted search region are processed. The centre of the search region is denoted the focus point (X_c, Y_c). If the focus point does not coincide with the control point, a direction vector from the focus point towards the control point is calculated by the neural network. The focus point is subsequently moved in the calculated direction. This process is repeated until the focus point coincides with the control point. The next focus point is thus calculated as

X_{c,t+1} = X_{c,t} + l_t r_{x,t},   Y_{c,t+1} = Y_{c,t} + l_t r_{y,t},

where l_t is the step length at time t and (r_{x,t}, r_{y,t}) is the direction vector towards the target point calculated by the artificial neural network at time t. Fig. 4 illustrates how the tracking algorithm utilises the direction vectors calculated by the neural network. The centre of the first search region is located at position S. The control point is located at position T. Although the second direction vector along the tracking path does not point directly towards T, the correct position is still found as the error is corrected by the subsequent direction vectors. It is one of the advantages of this method that even if a wrong direction vector is calculated, the system may still converge onto the correct control point. This property contributes to the robustness of the algorithm.

A stop criterion to end the search process is needed. Experiments showed that a stop criterion based upon a zero direction vector calculated by the neural network is sensitive to noise. It was therefore decided to let the neural network calculate 4 direction vectors for each iteration. The 4 vectors were calculated for 4 search regions placed around the current focus point. The average value of these 4 direction vectors is then used to calculate the next focus point. Once the target point is located, the 4 direction vectors will point towards the current focus point and an average value of 0 is obtained. The robustness of the system is greatly enhanced by calculating 4 direction vectors. If one of the direction vectors is wrong it is now possible to detect and discard the erroneous vector.
Fig. 4. T denotes a control point that should be detected. The grey arrows indicate the direction vectors towards the control point as estimated by the neural network. The black arrows illustrate how the focus point is moved from the start position S towards T.
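As an illustration of the search loop described above (the focus-point update, the averaging of four direction vectors, the null-vector stop criterion and the coarse-to-fine reduction of region size and step length), the following is a schematic Python sketch. The helper estimate_direction stands in for the preprocessing plus the trained network and is assumed to return a direction vector with components in the range [-1, 1]; the initial step length, region sizes, stop threshold and iteration limit are invented for the example, not values from the paper.

def guided_search(image, start, estimate_direction,
                  step=64, region=128, min_region=16, max_iter=16):
    """Move the focus point towards the control point using direction vectors
    estimated by the trained network; shrink region and step near the target."""
    x, y = start
    for _ in range(max_iter):
        # Estimate a direction vector for four regions placed around the
        # current focus point and average them (corner placement as in Fig. 5).
        corners = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
        vecs = [estimate_direction(image, (x + cx * region // 2, y + cy * region // 2), region)
                for cx, cy in corners]
        rx = sum(v[0] for v in vecs) / 4.0
        ry = sum(v[1] for v in vecs) / 4.0
        # A near-zero average is the stop criterion: the corner vectors all
        # point back towards the current focus point.
        if abs(rx) < 0.25 and abs(ry) < 0.25:
            if region <= min_region:
                return (x, y)            # control point located at full resolution
            region //= 2                 # switch to a smaller, higher-resolution region
            step = max(step // 2, 1)     # and take smaller, more accurate steps
            continue
        # Focus-point update: X_{t+1} = X_t + l_t r_{x,t}, Y_{t+1} = Y_t + l_t r_{y,t}
        x += int(round(step * rx))
        y += int(round(step * ry))
    return (x, y)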
In difficult situations the system provides better estimates using these 4 vectors. The disadvantage is of course that the processing time is increased by a factor of 4. Figs. 5(a) and 5(b) illustrate how the 4 direction vectors are used to calculate an average direction.

The above-mentioned search scheme effectively reduces the number of regions that are examined. The size of the search regions still remains to be determined. If too small a region is used, one risks that similar local features on different parts of the image cannot be distinguished. When the system only receives image data from a small patch of the pig skin, it is impossible to calculate an appropriate direction vector. If we are close to the control point, however, it is still possible to calculate the correct direction vectors even with a small search region. Large search regions have the advantage that sufficiently many structures are visible to calculate the direction vectors, but a large number of pixels must be processed. We can reduce the number of pixels within the large regions by sampling only a subset of the pixels. This increases the processing speed considerably. The disadvantage is that smaller structures disappear, and the precision of the located control points is correspondingly low.

A multiresolution system is used to combine the advantages of large subsampled regions and small high-resolution regions. The size of the search region is changed during the search process while the number of pixels sampled within each search region is kept constant.
Fig. 5. (a) Instead of calculating the direction vector from the centre of the search region directly it is calculated as the average of the direction vectors estimated by the neural network for the four corners of the search region. (b) When the control point is close to the centre of the search region all 4 corner direction vectors will point towards the centre. A null vector is thus obtained as the resulting direction vector. This provides a convenient stop criterion.
Fig. 6. This figure illustrates how the search region is moved towards a control point T. When the search region is close to T, a smaller search region is used in order to obtain a more precise position of T. The step length is decreased as T is approached for the same reason.
This has the effect that the pixels initially are sampled sparsely over a large area. As the region size becomes smaller, the pixels are sampled at a higher resolution, providing information about finer details. Once the search region is close to the control point, a small search region with high-resolution data is used. It is still possible to calculate direction vectors towards the control point in this situation, since we have already ensured that the system is operating in a known context close to the control point.

The step length l_t is changed during the search process in order to increase the convergence speed. Initially the step length l_t is assigned a large value in order to ensure fast convergence towards the control point. As the focus point approaches the control point, l_t is decreased in order to allow for smaller and more accurate steps. The above-mentioned stop criterion also provides the information required to change resolution. Fig. 6 shows how the size of the search region and the step length are reduced close to the control point.

3.2. Preprocessing
The RAM-based neural network architecture requires binary input values. It is thus necessary to
convert the grey-scale image data into a suitable binary representation. The most straightforward way of converting the grey-level image data to binary data would be to threshold the image data directly. The problem is however that no global threshold value exists that preserves important structures in both dark and bright regions. It is however possible to take advantage of the fact that the system at each iteration step only processes image data sampled within a restricted region. It is only necessary to preserve information about the dominating structures within this region in order to calculate a direction vector. These structures are emphasised if the pixels within the current search region are thresholded using their average intensity as the threshold value. This rather simple mechanism ensures that a low threshold value is used in dark areas to pick out structures from the shadows, while a high threshold value is used in bright areas. One of the advantages of this procedure is that it is robust to changes in the intensity of the illumination. Furthermore, it requires only a minimum of computing power. A disadvantage of this scheme is that a structure may look different in different overlapping search regions, as these might have different threshold values. If the search system is trying to centre on a structure, this structure might change shape during the centring process. The artificial neural network is expected to learn to handle this situation correctly.

Another method to convert grey-level information into a binary representation while still maintaining information about structures in both dark and bright regions would be to use more than one globally defined threshold level. For each threshold value a binary pixel image is created. These binary images are then treated as one long binary vector and sent to the artificial neural network. The disadvantage of this method is that the number of input bits for the artificial neural network is larger than with the adaptive threshold method. The advantage is that structures are stable throughout the search process.

During the last iterations of the search algorithm, high-resolution data from a small region are used. It might however be beneficial to use additional data obtained outside the current small search region.
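A minimal sketch of the region-local mean thresholding described above is given below. The region size and the random subsampling pattern are placeholder choices for the example; only the fixed count of 400 sampled pixels per region is taken from the text.

import numpy as np

def binarise_region(image, centre, size=64, n_samples=400, seed=0):
    """Sample pixels from a square search region around `centre` and threshold
    them at the region's own mean grey level, giving a binary input vector."""
    cx, cy = centre
    half = size // 2
    region = image[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    # Keep the number of sampled pixels constant regardless of region size:
    # sparse sampling in large regions, dense sampling in small ones.
    rng = np.random.default_rng(seed)
    flat = region.astype(float).ravel()
    idx = rng.choice(flat.size, size=min(n_samples, flat.size), replace=False)
    samples = flat[idx]
    # Using the local mean as threshold picks out structure in both dark
    # (low threshold) and bright (high threshold) parts of the image.
    return (samples >= samples.mean()).astype(np.uint8)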
A sampling scheme that provides information about the average grey-level intensities along a given direction is used. A line is drawn from the centre of the current search region. The line is subdivided into a number of segments. The average intensity of the pixels within each segment is calculated. Average pixel intensities are calculated in this way for 8 directions. The above-mentioned binarisation method using more than one threshold value is used to convert the average intensities into binary values.

If another artificial neural network architecture supporting real-valued inputs were used, there would of course not be any need to binarise the input pattern. It might however still be necessary to apply some sort of preprocessing to compensate for differences in illumination. In the next section the results of applying the backpropagation algorithm (McClelland and Rumelhart, 1986) to these datasets are presented together with the results obtained using the RAM-based neural networks.
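The radial sampling scheme just described could look roughly as follows; the segment length, the number of segments per line and the three global threshold values are invented for the example.

import numpy as np

def radial_features(image, centre, n_dirs=8, n_segments=10, seg_len=20,
                    thresholds=(64, 128, 192)):
    """Average grey levels over consecutive line segments in n_dirs directions
    from the focus point and binarise each average against global thresholds."""
    cx, cy = centre
    h, w = image.shape
    bits = []
    for k in range(n_dirs):
        angle = 2.0 * np.pi * k / n_dirs
        dx, dy = np.cos(angle), np.sin(angle)
        for s in range(n_segments):
            # Pixel positions of segment s along this direction, clipped to the image.
            t = np.arange(s * seg_len, (s + 1) * seg_len)
            xs = np.clip(np.round(cx + t * dx).astype(int), 0, w - 1)
            ys = np.clip(np.round(cy + t * dy).astype(int), 0, h - 1)
            mean = float(image[ys, xs].mean())
            # One bit per (segment, threshold) pair, as in the multi-threshold scheme.
            bits.extend(int(mean >= th) for th in thresholds)
    return np.array(bits, dtype=np.uint8)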
4. Results

At the end of the project period an evisceration system was installed at an abattoir in Roskilde, Denmark. The system consisted of a robot with a set of specially designed tools, a 3D and 2D recording system, a fixture frame to hold the pigs during operation, a robotics control system, and the image processing system. The image processor was implemented on a 486-66 MHz PC running MS-Windows 3.1.

Image data were obtained during 3 measurement sessions at different stages of the project. The first set of 67 images was obtained at an early stage of the project, before any modifications were made at the abattoir. The next set of 37 images was recorded after a fixture frame had been developed for the pigs. The last set of 72 recordings was performed during test operations of the complete system. The first data set was used to perform initial tests of the overall concept and to select an artificial neural network architecture.

The training examples were created by centring a search window on a pixel position and calculating the resulting preprocessed binary input image for the artificial neural network together with the desired direction vector. One image may thus create one training example for each pixel. In order to reduce the number of training examples,
only every 10th position was used to create a training example. Close to the control point, however, all positions were used for training. This scheme produces approximately 4000 training examples for one 512 × 768 pixel pig image. A training set of 30 pigs was used, giving a total of 120 000 training examples. The number of pixels sampled within a search region is 400.
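As an illustration of how such a training set could be generated from a manually marked image, consider the hedged sketch below. The preprocessing function is passed in (for instance a region binarisation such as the one sketched in Section 3.2), and the stride, the size of the dense window around the control point, the 8-way quantisation of the target direction and the extra "on target" class are assumptions for the example.

import numpy as np

def direction_class(focus, target):
    """Quantise the vector from the focus point to the marked control point
    into one of 8 compass directions; class 8 ('on target') is an assumption."""
    dx, dy = target[0] - focus[0], target[1] - focus[1]
    if dx == 0 and dy == 0:
        return 8
    return int(np.round(np.arctan2(dy, dx) / (np.pi / 4))) % 8

def training_examples(image, control_point, preprocess, stride=10, near=15):
    """Yield (binary input, direction class) pairs: every `stride`-th pixel
    far from the control point, every pixel in a small window around it."""
    h, w = image.shape
    tx, ty = control_point
    for y in range(h):
        for x in range(w):
            close = abs(x - tx) <= near and abs(y - ty) <= near
            if close or (x % stride == 0 and y % stride == 0):
                yield preprocess(image, (x, y)), direction_class((x, y), control_point)

Training the RAM network then amounts to a single pass over these pairs, calling the one-shot training routine once per example.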
The popular backpropagation algorithm was tested on this dataset. This algorithm does unfortunately scale poorly with the number of training examples. This turned out to be a major problem, as it proved infeasible to complete the training of a backpropagation system with this amount of data. The described RAM-based neural network architecture completed training in less than 2 hours. Subsequent tests on the remaining 37 pig images demonstrated a performance equivalent to that of humans tested on the same dataset. Fig. 7 shows how the system locates the correct control point on a test image.
Fig. 7. This image shows a pig in a fixture frame with the guts hanging out. The white circles show how the image processor managed to move from the start position to the control point on the pig.
The second data set differed from the first data set in a number of aspects: it was recorded using a different camera, the field of view was changed, a new background was used, and the pigs were mounted in a fixture frame. Even though there were major differences between the two datasets, it was nevertheless possible to use the same program to locate the control points after retraining on the new image data. The third data set differed from the second in some minor aspects, as some additional objects had been introduced into the scene. The last two data sets were recorded under operational conditions and the coordinates of the located control points were transmitted to the robot controller. Tests on these data confirmed that the system located the control points with a precision equivalent to that of humans. The system was able to locate the control points with a precision of ±3 pixels, corresponding to ±3 mm. This precision fulfilled the requirements of the evisceration robot.
The tests demonstrated that the system was capable of removing the intestines, the stomach and the pluck set in one piece within a 10-second time frame. Fig. 8 shows how the control point is located on a typical pig. Notice how the step length decreases as the control point is approached.

The distribution of the processing time used by the image processor was examined. The preprocessing step for one search region took 600 μs. The RAM-based neural network used 7 ms to calculate one direction vector. The total time used to calculate a direction vector for one region is thus approximately 8 ms. For each movement of the focus point 4 direction vectors are calculated, giving a processing time of 32 ms per movement. A maximum of 16 search steps is needed to locate the control point. The maximum search time for one control point is thus 512 ms. This is however not acceptable, as 12 control points must be located in less than 3 seconds.
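For reference, the timing budget above can be restated as a small back-of-the-envelope calculation (using the rounded value of 8 ms per direction vector quoted in the text):

# Software-only timing budget; the numbers are taken from the text above.
per_vector_ms = 0.6 + 7.0            # preprocessing + one classification, approx. 8 ms
per_move_ms = 4 * 8.0                # 4 direction vectors per focus-point move -> 32 ms
per_point_ms = 16 * per_move_ms      # up to 16 search steps per control point -> 512 ms
total_s = 12 * per_point_ms / 1000   # 12 control points per pig -> about 6.1 s, over the 3 s budget
print(per_vector_ms, per_move_ms, per_point_ms, total_s)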
Fig. 8. Illustration of the tracking path on a typical pig image. The start position is the upper left circle. Notice how the step length decreases as the target point is approached.
In order to reduce the processing time, a hardware implementation of the RAM-based neural network was developed as a PC expansion board. This board performs one classification in less than 180 μs. The overhead of the Windows system gives a worst-case classification time for the board of 300 μs. This reduces the processing time for one direction vector to 1 ms and gives a maximum processing time of 64 ms per control point. This system has no problems locating all 12 control points in less than 3 seconds, leaving sufficient time for data acquisition and movement of the pig. We have recently converted the software version of the RAM-based neural network to 32 bit, running under Windows NT. Benchmark results gave a classification time of 500 μs on a Pentium 90 MHz PC. After completion of the test period, work has now been initiated to design a commercially viable version of the automatic pig evisceration system.
5. Summary
An image processing system for the location of control points on pig carcasses has been presented. The image processor uses a guided search algorithm to locate control points on the pig carcasses. The search process is controlled by an artificial neural network that calculates direction vectors towards the control point. The advantage of this approach is that changes in the image data or in the definition of the control point are handled by retraining the system. The experimental results confirmed that it was possible to adapt the system to changes in the surroundings by recording a new set of training images and retraining the system. A hardware PC expansion board was used to accelerate the neural network classifications on a 486-66 MHz PC. With this setup it was possible to locate one control point in 64 ms. A 32-bit software implementation of the RAM-based neural network architecture running on a Pentium 90 MHz PC was demonstrated to perform classifications within 500 μs. The image processing system was integrated with the other components of the automatic evisceration system and tested in an abattoir at Roskilde. The image processing system was capable of locating the control points with the required precision (±3 mm), and the robot and the tools functioned satisfactorily. The combination of a guided search algorithm and a RAM-based neural network system has proved to be an attractive alternative to more traditional image processing methods.
Acknowledgement

The project was funded by the EU BRITE programme (BE4152) and the partners were: Danish Meat Research Institute, Roskilde (DK), Ricardo Hitec, Preston (UK), AMARC, University of Bristol (UK), Siemens AG, Munich (D), Risø National Laboratory (DK). We would like to thank Carsten Roogaard for helpful suggestions and comments.
References

Albus, J.S. (1975). A new approach to manipulator control: The
Cerebellar Model Articulation Controller (CMAC). J. Dynamic Systems, Measurement, and Control, Trans. ASME 97, 220-227.
Aleksander, I. (1990). An Introduction to Neural Computing. Chapman and Hall, London.
Aleksander, I. and T.J. Stonham (1979). Guide to pattern recognition using random-access memories. Computers and Digital Techniques 2, 29-40.
Austin, J. and T.J. Stonham (1987). Distributed associative memory for use in scene analysis. Image and Vision Computing 5 (4), 251-260.
Blake, A. and A. Yuille, Eds. (1992). Active Vision. MIT Press, Cambridge, MA.
Burt, P.J. and G. van der Wal (1990). An architecture for multiresolution, focal, image analysis. In: Proc. 10th Internat. Conf. on Pattern Recognition, Vol. II, 305-311.
Gerrissen, J.F. (1991). On the network-based emulation of human visual search. Neural Networks 4, 543-564.
Gorse, D. and J.G. Taylor (1991). A continuous input RAM-based stochastic neural model. Neural Networks 4, 657-665.
Jørgensen, T.M., S.S. Christensen, A.W. Andersen and C. Liisberg (1994). Training and optimization of a RAM-based neural network system for machine vision tasks. In: Proc. SPIE Internat. Symposium on Photonics, Sensors & Controls for Commercial Applications.
Jørgensen, T.M. and S.S. Christensen (1995). Optimisation of RAM nets using inhibition between classes. In: Proc. Weightless Neural Network Workshop.
McClelland, J.L. and D.E. Rumelhart (1986). Parallel Distributed Processing, Vols. 1 and 2. MIT Press, Cambridge, MA.
Miller, W.T. (1990). CMAC: An associative neural network alternative to backpropagation. Proc. IEEE 78 (10), 1561-1567.
Moody, J. and C.J. Darken (1989). Fast learning in networks of locally-tuned processing units. Neural Computation 1, 281-294.
Van der Sluis, W. (1994). Reducing carpal tunnel syndrome. Meat International 4 (2). Misset International, Doetinchem, The Netherlands.
Wadie, I.H.C., G.L. Purnell and K. Khodabandehloo (1994). Two dimensional modelling of pig carcass spines for robotic evisceration. In: Proc. Euriscon '94, Vol. 2, 729-738.
Wong, B.K., T.A. Bodnovich and Y. Selvi (1995). A bibliography of neural network business applications research: 1988-September 1994. Expert Systems 12 (3), 253-262.