Pattern Recognition Letters 16 (1995) 687-697

Identification and inspection of 2-D objects using new moment-based shape descriptors

Andrzej Sluzek *
School of Applied Science, Nanyang Technological University, Nanyang Avenue, Singapore 2263, Singapore

Received 1 March 1994; revised 9 January 1995
Abstract
A new method of using moment invariants for the identification and inspection of 2-D shapes is proposed. The prototype object is described by a family of shapes. These shapes are created by occluding the object with circles (of different radii) located at the object's centre of area. The moment invariants of such shapes are functions of a parameter describing the size of the circles. Using these functions, many shape descriptors can be created from a single moment invariant, so there is no need to use moments of higher order. The application of the method to the quality inspection of industrial parts is presented.

Keywords: Moment invariants; Shape descriptors; 2-D object identification; Digital circles; Automated quality inspection
1. Introduction
Analysis of images containing isolated 2-D objects remains an important issue in industrial applications of computer vision. One of the most typical problems is the identification and inspection of objects randomly located on a moving conveyor belt. The most common solution to this task is based on geometric moments of shapes. This is due to the low computational complexity of moment-based algorithms, which is a very important factor for vision systems working in a real-time environment. Although the first papers on prospective applications of geometric moments to the position-invariant identification of 2-D shapes were published thirty years ago (Hu, 1962; Alt, 1962), moment-based techniques still attract the attention of researchers.
* Email: [email protected]
Many papers have been published on different types of 2-D moments (e.g. Teague, 1980; Reddi, 1981; Cyganski et al., 1983; Abu-Mostafa and Psaltis, 1984; Teh and Chin, 1988), on moments of 3-D objects (e.g. Sadjadi and Hall, 1980; Reeves et al., 1984), on computational aspects of moments (e.g. Casasent and Psaltis, 1980; Zakaria et al., 1987; Chen, 1990; Li and Shen, 1991; Dai et al., 1992), on moment invariants of higher order (Li, 1992), on parameter estimation using moments (e.g. Pinjo et al., 1985; Friedberg, 1986; Safaee-Rad et al., 1992), on moments of contours (e.g. Sluzek, 1988; Safaee-Rad et al., 1992), and on similar topics. The number of papers describing applications of moment invariants to the identification of objects is very limited (e.g. Smith and Wright, 1971; Dudani et al., 1977; Cash and Hatamian, 1987). This lack of publications suggests that the existing applications are still based on the method described by Hu (1962), and no significant improvements have been proposed
(although people doing applications in industry might not be interested in publishing their results). There are, however, two important disadvantages of the moment invariants.

First, only the moment invariants based on the moments of order 2 are actually almost invariant to rotation, translation and scaling of digital 2-D objects. The published results (e.g. Teh and Chin, 1986; Pilch, 1991; Dai et al., 1992) show directly or indirectly that the moments of higher order are very sensitive to digitalization errors, minor shape deformations, camera non-linearity, and non-ideal positioning of a camera. Because of that, the corresponding moment invariants can hardly be used for reliable shape identification (some examples are given in Sections 3 and 4 of this paper).

Second, there are only two geometric moment invariants of order 2 (Hu, 1962). They can be used to differentiate between images of real objects only if their shapes are significantly different. Otherwise, even for objects of different classes these moment invariants can be so similar that more features are necessary to identify the objects. For the reasons given above, these new features usually should not be the moment invariants of higher order.

The problem is even more difficult if the quality of objects has to be verified. For many industrial objects the most endangered fragments are small protruding details. The 2nd-order moment invariants of objects with such details missing are only insignificantly different from the moment invariants of undamaged objects. Usually the values are within the acceptable error margin, and therefore it is impossible to detect damaged objects. If moment invariants of higher order are applied, the differences between damaged and undamaged objects are obviously greater (because of the sensitivity of higher-order moments to minor shape deformations). Unfortunately, the overall effects of digitalization, camera non-linearity, and non-ideal positioning of a camera also increase proportionally, so that it is again impossible to identify damaged objects.

This paper presents a method of using moment invariants of order 2 to create more position-invariant descriptors and to improve the resolution of these descriptors. The basic ideas are explained in Fig. 1.
Fig. 1. Unoccluded objects A and B, and the objects occluded by circles of different radii located at the object's centre of gravity. The area of the circles is 25% of the object's area (C and D) or 100% of the object's area (E and F).

The object of interest is partially occluded by circles located at its centre. The area of a circle can be between 0% and some maximum percentage (e.g. 100% or 300%) of the object's area. Using this approach, an object is represented by a family of different shapes, and the moment invariants of all these shapes can serve as shape descriptors (features) of the object. Of course, the same method can be applied to moment invariants of higher order, but (as explained above) the usefulness of those descriptors would be rather limited.

The organization of this paper is as follows. Section 2 contains a short theoretical introduction to the proposed method. Selected implementation problems are mentioned in Section 3, and an example of the industrial application of the proposed method is described in Section 4. Some general remarks (regarding computational aspects of the method and its prospective modifications) are contained in the final Section 5.

2. Moment invariants and m-invariant functions
Let R be a bounded region in the OXY plane. The moments of order p + q of R are defined as

    m_{pq} = \iint_R x^p y^q \, dx \, dy,                                   (2.1)

which for digital images becomes

    m_{pq} = \sum_{(i,j) \in R} i^p j^q.                                    (2.2)

The central moments are computed with the co-ordinate system translated to the centre of R, i.e.

    M_{pq} = \iint_R (x - a)^p (y - b)^q \, dx \, dy                        (2.3)

or

    M_{pq} = \sum_{(i,j) \in R} (i - a)^p (j - b)^q,                        (2.4)

where a = m_{10}/m_{00} and b = m_{01}/m_{00} are the co-ordinates of the centre of area of R.

It has been shown (Hu, 1962) that some moment expressions are invariant to translation, rotation and scaling of shapes. Among them are the invariants containing the moments of order 2:

    I_1 = \frac{M_{20} + M_{02}}{(m_{00})^2},                               (2.5)

    I_2 = \frac{(M_{20} - M_{02})^2 + 4 M_{11}^2}{(m_{00})^4},              (2.6)

from which another moment invariant can be created:

    I_8 = \frac{1}{4}\left[(I_1)^2 - I_2\right] = \frac{M_{20} M_{02} - M_{11}^2}{(m_{00})^4}.   (2.7)

(Numbers 3-7 are usually assigned to the moment invariants of order 3 proposed by Hu (1962).) I_8 is invariant to any non-singular linear mapping of R (Cyganski et al., 1983; Sluzek, 1990) and it also has a slightly smaller computational complexity.

For digital images (especially those captured by a camera) the moment invariants are only approximately invariant to translation, rotation and scaling of the corresponding 2-D objects. Because of digitalization errors, camera non-linearity, etc., the actual values deviate from the theoretical ones. For a given shape R and a selected invariant I_x, the magnitude of the deviation V(R, I_x) can be estimated from a sufficient number of images of R, captured under diversified conditions.

It is possible to evaluate whether the invariant I_x can differentiate between two shapes R_1 and R_2 using the following discrimination parameter:

    Q(I_x, R_1, R_2) = \frac{V(R_1, I_x) + V(R_2, I_x)}{|I_x(R_1) - I_x(R_2)|},   (2.8)

where I_x(R_1) and I_x(R_2) are the theoretical values of the invariant for R_1 and R_2. The best resolution can be achieved for the invariants having the smallest value of Q(I_x, R_1, R_2). If the value of Q(I_x, R_1, R_2) exceeds 1, the invariant I_x cannot differentiate between R_1 and R_2. Unfortunately, this happens quite often (especially for invariants of higher order) even for significantly different shapes, which appears to be the major disadvantage of the moment invariants. Several examples of this problem are given in Sections 3 and 4.
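For a digital image the quantities defined above are straightforward to compute. The following Python sketch (an illustration only, not code from the paper; it assumes the region is available as a NumPy boolean mask and, as in Section 4 of the paper, approximates the reference value and the deviation V from repeated measurements) implements Eqs. (2.2), (2.4) and (2.5)-(2.8):

import numpy as np

def order2_invariants(mask):
    """Compute I1, I2, I8 (Eqs. (2.5)-(2.7)) for a binary image.

    `mask` is a 2-D boolean array; True pixels belong to the region R.
    Moments follow the discrete definitions (2.2) and (2.4).
    """
    ii, jj = np.nonzero(mask)                     # pixel co-ordinates (i, j) of R
    m00 = float(ii.size)                          # area of R
    a, b = ii.mean(), jj.mean()                   # centre of area (m10/m00, m01/m00)
    di, dj = ii - a, jj - b
    M20, M02, M11 = (di * di).sum(), (dj * dj).sum(), (di * dj).sum()
    I1 = (M20 + M02) / m00 ** 2
    I2 = ((M20 - M02) ** 2 + 4.0 * M11 ** 2) / m00 ** 4
    I8 = (M20 * M02 - M11 ** 2) / m00 ** 4
    return I1, I2, I8

def discrimination_Q(samples1, samples2):
    """Discrimination parameter Q of Eq. (2.8) for one invariant.

    `samples1`, `samples2` hold the invariant measured on many images of R1
    and R2; the deviation V is taken here as the maximum distance of the
    samples from their mean, and the sample means stand in for the
    theoretical reference values (an assumption of this sketch).
    """
    v1 = np.asarray(samples1, dtype=float)
    v2 = np.asarray(samples2, dtype=float)
    V1 = np.abs(v1 - v1.mean()).max()
    V2 = np.abs(v2 - v2.mean()).max()
    return (V1 + V2) / abs(v1.mean() - v2.mean())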
In order to optimize the resolution of the moment-based descriptors, the following method is proposed, in which objects are represented by a family of shapes. Let C(α) be a family of circles defined by the equation

    (x - a)^2 + (y - b)^2 = \alpha m_{00} / \pi,                            (2.9)

where 0 ≤ α ≤ α_max, i.e. the centre of the circle is located at the centre of the region R, and the area of the circle is the area of R multiplied by α. The value α_max is selected so that the circles are not too large and at least a part of R always remains visible. It can easily be found that for object A (see Fig. 1) α_max = 1.5708, while for object B (see Fig. 1) α_max = 3.4907.

For the part of R which is not occluded by a given circle from C(α), one can compute the moments and the moment invariants, which depend both on the corresponding value of α and on the shape of R. Thus, for a given shape R it is possible to create functions I_1(α), I_2(α), I_8(α), etc., which are obviously invariant to rotation, translation and scaling of R. These functions will be called m-invariant functions of R. The analytic description of m-invariant functions is usually quite complex even for very simple shapes (an example can be found in (Sluzek, 1994)). For more sophisticated shapes, it is almost impossible to find the analytic description of m-invariant functions.
However, a numerical approximation of the m-invariant functions can be found quite easily using digital representations of R and C(α). In order to do that, the original shape R is obscured by computer-generated circles of different sizes (with α ranging from 0 to α_max, sampled with an adequate step) and the moment invariants of the visible parts of R are calculated. Examples of such m-invariant functions are shown in Figs. 2 and 3 (logarithmic scaling is used because of the range of the m-invariant functions).
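A minimal sketch of this sampling procedure (again an assumption-laden illustration rather than the paper's implementation, reusing the order2_invariants() helper from the previous sketch) discards the pixels of R that fall inside C(α) and recomputes the order-2 invariants of the remaining pixels:

def m_invariant_function(mask, alphas, invariant="I1"):
    """Numerically sample an m-invariant function I_x(alpha).

    For every alpha, the pixels of R lying inside the occluding circle
    C(alpha) of Eq. (2.9) are discarded and the chosen order-2 invariant of
    the remaining pixels is computed.
    """
    ii, jj = np.nonzero(mask)
    m00 = float(ii.size)
    a, b = ii.mean(), jj.mean()                   # centre of area of R
    r2 = (ii - a) ** 2 + (jj - b) ** 2            # squared distance of each pixel from the centre
    idx = {"I1": 0, "I2": 1, "I8": 2}[invariant]
    samples = []
    for alpha in alphas:
        keep = r2 > alpha * m00 / np.pi           # pixels outside C(alpha) remain visible
        visible = np.zeros_like(mask, dtype=bool)
        visible[ii[keep], jj[keep]] = True
        samples.append(order2_invariants(visible)[idx])
    return np.array(samples)

# Example: sample I1(alpha) for alpha = 0 ... 1.5 (i.e. 0% ... 150% of the area of R)
# alphas = np.linspace(0.0, 1.5, 31)
# curve = m_invariant_function(shape_mask, alphas, "I1")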
3. Shape descriptors from m-invariant functions

Prospective applications of m-invariant functions are very similar to those of moment invariants. However, the m-invariant functions are a more flexible tool. The most important advantage is that while a single moment invariant I_x creates only one descriptor of a shape, the corresponding m-invariant function I_x(α) can be used to define any number of features describing the same shape. This can be done by specifying several values of α (e.g. {α_1, α_2, ..., α_n}). The created features are

    \{ I_x(\alpha_1), I_x(\alpha_2), \ldots, I_x(\alpha_n) \}               (3.1)

(note that I_x(0) = I_x). They will be called m-invariant descriptors.

However, not every value of α is equally suitable for creating these new shape descriptors. For a given value α_i, if the whole circle is inside R (e.g. Figs. 1C and 1D) the value of any m-invariant function I_x(α_i) can be directly computed from the corresponding invariant I_x of the region R, no matter what the shape of R is. For example:

    I_1(\alpha_i) = \frac{I_1 - (\alpha_i)^2 / (2\pi)}{(1 - \alpha_i)^2}.   (3.2)
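For completeness, here is a brief derivation sketch (not spelled out in the original text) for the case when the disk C(α_i) lies entirely inside R. Removing a disk centred at the centre of area leaves the centre of area unchanged, the remaining area is (1 - α_i) m_{00}, and the second central moments of a disk of radius r about its own centre are \pi r^4 / 4 each, with \pi r^2 = \alpha_i m_{00}. Hence

    m_{00}' = (1 - \alpha_i)\, m_{00}, \qquad
    M_{20}' + M_{02}' = M_{20} + M_{02} - 2 \cdot \frac{\pi r^4}{4}
                      = M_{20} + M_{02} - \frac{(\alpha_i m_{00})^2}{2\pi},

so that

    I_1(\alpha_i) = \frac{M_{20}' + M_{02}'}{(m_{00}')^2}
                  = \frac{I_1 - \alpha_i^2/(2\pi)}{(1 - \alpha_i)^2},

which is Eq. (3.2).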
For such values of α, m-invariant functions and the corresponding moment invariants provide exactly the same information about R. Because of that, m-invariant descriptors should usually be created using circles which are partially outside the region R (e.g. Figs. 1E and 1F).

Fig. 2. M-invariant function I_1(α) of object A (the square from Fig. 1), indicated by "×", and of object B (the triangle from Fig. 1), indicated by "+". The range of α is (0%, 150%).
For example, for object B (the triangle) the recommended values of α are above 0.3491, and for object A (the square) α should be greater than 0.7854. A discussion of this problem can be found in (Sluzek, 1994).

The most important factor in selecting {α_1, α_2, ..., α_n} is the discriminant power of the corresponding m-invariant descriptors {I_x(α_1), I_x(α_2), ..., I_x(α_n)}, i.e. how different these values are for different objects. For example, from Figs. 2 and 3 one can easily see that if α is close to 150%, both m-invariant functions I_1(α) and I_8(α) are significantly different for objects A and B. In that case, even if the captured silhouettes of the objects are significantly distorted (e.g. because of camera non-linearity, poor illumination, etc.) there should be no problem identifying them correctly. Ordinary moment invariants are usually less reliable shape descriptors.

In general, for a given set of shapes {R_1, R_2, ..., R_n} and for a given m-invariant function I_x(α), the optimum value α_opt (in order to distinguish between R_1 and the shapes R_2, ..., R_n) can be found using the following criterion:

    \min_{0 \le \alpha \le \alpha_{\max}} \Big( \max_{2 \le i \le n} Q(I_x(\alpha), R_1, R_i) \Big).   (3.3)

It is also possible to create m-invariant descriptors which have approximately the same values for different objects. I_1(0.8) and I_8(0.63) are examples of such features for objects A and B (see Figs. 2 and 3). For that purpose, the most appropriate values of α can be found using the criterion

    \max_{0 \le \alpha \le \alpha_{\max}} \Big( \min_{2 \le i \le n} Q(I_x(\alpha), R_1, R_i) \Big)   (3.4)

instead of Eq. (3.3).

The next important question is whether the sensitivity (to digitalization errors, camera non-linearity, etc.) of m-invariant functions is similar to that of the corresponding moment invariants. Since the magnitude of m-invariant functions tends to increase rapidly for higher values of α (see Figs. 2 and 3), the absolute errors would obviously be larger. However, because the selected m-invariant descriptors should satisfy the criterion (3.3), the relative distance between classes of objects is maximized, and the actual significance of errors is reduced as much as possible. It should also be mentioned that since the occluding circles are computer-generated, the algorithm does not introduce any extra distortion of the input images.
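To make the search for α_opt concrete, a small sketch follows (an illustration under assumptions, not the paper's procedure: it presumes that the chosen m-invariant function has been sampled on a common grid of α values for many images of each shape, and evaluates Q as in Eq. (2.8) with V taken as the maximum deviation from the sample mean):

import numpy as np

def select_alpha(alphas, curves_by_shape):
    """Pick alpha_opt according to criterion (3.3).

    `alphas` is the grid of sampled alpha values; `curves_by_shape` is a list
    whose k-th element is an array of shape (n_images_k, len(alphas)) holding
    I_x(alpha) measured on repeated images of shape R_{k+1}.  The first shape
    is the one to be distinguished from the remaining ones.
    """
    def Q(s1, s2):
        V1 = np.abs(s1 - s1.mean()).max()
        V2 = np.abs(s2 - s2.mean()).max()
        return (V1 + V2) / abs(s1.mean() - s2.mean())

    ref = curves_by_shape[0]
    worst_Q = [max(Q(ref[:, j], other[:, j]) for other in curves_by_shape[1:])
               for j in range(len(alphas))]        # inner max of (3.3)
    best = int(np.argmin(worst_Q))                 # outer min of (3.3)
    return alphas[best], worst_Q[best]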
Fig. 3. M-invariant function I_8(α) of object A (the square from Fig. 1), indicated by "×", and of object B (the triangle from Fig. 1), indicated by "+". The range of α is (0%, 150%).
Advantages of m-invariant descriptors can be illustrated by the following example. Fig. 4 shows three simple objects S1, S2 and S3 of relatively similar shapes. Selected moment invariants and m-invariant descriptors have been computed for a large number of digital images of these objects. I_1, I_2 and I_8 are invariants of order 2 (see Eqs. (2.5)-(2.7)), I_3 and I_4 are the simplest invariants of order 3 proposed by Hu (1962), while I_9 and I_10 are selected moment invariants (of order 4 and 6, respectively) taken from (Li, 1992). There are also four m-invariant descriptors created from the m-invariant functions I_1(α), I_2(α) and I_8(α), approximately satisfying the criterion (3.3). Table 1A and Table 1B contain the results, which include reference values (analytic values or values calculated from noise-free, high-resolution digital images of the objects), maximum deviations from the reference values, and the discrimination parameters Q.

Table 1A
Selected moment invariants and m-invariant descriptors for the objects S1, S2 and S3 (see Fig. 4)

Descriptor   S1                             S2                                    S3
I_1          0.1631 ± 0.0033                0.1647 ± 0.0034                       0.1699 ± 0.0036
I_2          0.0 ± 0.192 × 10⁻⁴             1.083 × 10⁻⁴ ± 0.895 × 10⁻⁴           3.226 × 10⁻⁴ ± 0.851 × 10⁻⁴
I_3          0.0 ± 0.612 × 10⁻⁵             5.071 × 10⁻⁵ ± 4.553 × 10⁻⁵           13.492 × 10⁻⁵ ± 13.349 × 10⁻⁵
I_4          0.0 ± 0.152 × 10⁻⁵             5.031 × 10⁻⁵ ± 4.937 × 10⁻⁵           6.187 × 10⁻⁵ ± 6.045 × 10⁻⁵
I_8          6.650 × 10⁻³ ± 0.225 × 10⁻³    6.752 × 10⁻³ ± 0.267 × 10⁻³           7.136 × 10⁻³ ± 0.266 × 10⁻³
I_9          3.635 × 10⁻² ± 0.245 × 10⁻²    3.735 × 10⁻² ± 0.223 × 10⁻²           4.010 × 10⁻² ± 0.210 × 10⁻²
I_10         9.334 × 10⁻³ ± 0.846 × 10⁻³    9.822 × 10⁻³ ± 1.107 × 10⁻³           10.983 × 10⁻³ ± 1.004 × 10⁻³
I_1(1.18)    -                              19.883 ± 2.980                        12.483 ± 1.520
I_2(0.31)    0.0 ± 0.015 × 10⁻³             0.568 × 10⁻³ ± 0.185 × 10⁻³           1.398 × 10⁻³ ± 0.195 × 10⁻³
I_2(1.18)    0.0 ± 0.116                    13.176 ± 8.012                        -
I_8(1.18)    -                              97.347 ± 18.400                       36.296 ± 12.310

Table 1B
Discriminant parameters Q for the descriptors and objects of Table 1A

Descriptor   Q(I_x, S1, S2)   Q(I_x, S1, S3)   Q(I_x, S2, S3)
I_1          4.188            1.015            1.346
I_2          1.004            0.323            0.815
I_3          1.019            1.035            2.126
I_4          1.012            1.001            9.499
I_8          4.823            1.010            1.388
I_9          4.680            1.213            1.575
I_10         4.002            1.122            1.818
I_1(1.18)    -                -                0.608
I_2(0.31)    0.352            0.150            0.457
I_2(1.18)    0.616            -                -
I_8(1.18)    -                -                0.503

From Table 1B it is obvious that ordinary moment invariants can hardly be used to distinguish between the objects of interest. The only exception is I_2, which can distinguish between S1 and S3, and (with a low level of confidence, since the parameter Q is close to 1) between S2 and S3. The selected m-invariant descriptors provide significantly higher discriminant power. For example, it is possible to distinguish between the objects of interest using only one descriptor, I_2(0.31). Thus, we can conclude that if shape descriptors created from m-invariant functions are used, the identification of 2-D shapes can be more reliable and more flexible, and it does not require invariants of higher order, which are more sensitive to digitalization errors.
Fig. 4. Three objects (S1, S2 and S3) of similar shape used to compare moment invariants and descriptors created from m-invariant functions.
4. An example of application

The shape descriptors based on m-invariant functions can be applied to shape identification, and the expected results should be (for the reasons explained in Section 3) much more reliable than those obtained using ordinary moment invariants. However, this approach is particularly useful for the quality inspection of objects represented by 2-D shapes. In many industrial objects, the most endangered fragments are small protruding details. For such objects, it is impossible to detect damaged ones using ordinary moment invariants. This is because the overall errors (due to camera non-linearity and the variable orientation and scale of the objects) for invariants of undamaged objects are larger than the errors introduced by the actual shape deformation.

This section describes some experimental results. The purpose of these experiments was to verify the applicability of m-invariant functions to the quality inspection of industrial parts (and to the identification of these parts) in real-time robotic systems. In the performed experiments, the selected objects of interest were two types of cassette holders (denoted X and Y) from cassette recorders. Fig. 5 shows examples of these objects, and Fig. 6 contains typical shapes of damaged objects. The images have been captured using a WATEC902 b/w CCD camera and an Overlay Frame Grabber (OFG) PC peripheral card (resolution 768 × 512). In order to simplify the image analysis, a favourable environment has been created (i.e., diffused light and a sharp contrast with the background).
Fig. 6. Some typical damaged objects.
The experiments have shown that using ordinary moment invariants it is impossible (because of the overall errors) either to distinguish between X and Y or to detect damaged objects (denoted XD and YD). The range of some moment invariants is given in Table 2, and the values of the parameter Q (see Eq. (2.8)) for selected invariants of order 2 and 3 are given in Table 3. Since only distorted (captured by a camera) images of the objects were available, the reference values of the invariants and of the m-invariant functions have been approximated using statistical methods.

Much better results have been achieved using m-invariant descriptors. Figs. 7 and 8 show selected m-invariant functions for the undamaged objects X and Y. It can easily be noticed that using I_1(2.2) or I_8(2.15) it is possible to distinguish between undamaged objects X and Y with a high level of confidence (e.g. Q(I_1(2.2), X, Y) = 0.229). The results are given in Table 4.

Fig. 5. Examples of two classes of objects (X and Y) used for the experiments.

Table 2
Moment invariants of the objects X and Y (see Fig. 5)

Objects          I_1              I_2               I_8
X                0.242-0.250      0.0037-0.0041     0.0141-0.0145
damaged X (XD)   0.249-0.256      0.0025-0.0059     0.0137-0.0145
Y                0.242-0.247      0.0051-0.0071     0.0143-0.0146
damaged Y (YD)   0.240-0.258      0.0048-0.0079     0.0126-0.0148
Table 3
Parameter Q for the selected pairs of objects and selected invariants

Invariants   Q(I_x, X, Y)   Q(I_x, X, XD)   Q(I_x, Y, YD)
I_1          4.4            1.7             2.4
I_2          1.1            4.9             4.3
I_3          5.8            6.7             5.7
I_4          5.9            7.7             5.8
I_8          3.9            3.1             2.8

Table 4
M-invariant descriptors used to identify objects X and Y, and the corresponding values of the parameter Q

Objects              I_8(2.15)     I_1(2.2)
X                    7.35-8.61     40.21-52.66
Y                    1.43-1.49     13.05-14.60
Q(I_x(α), X, Y)      0.105         0.229
Fig. 7. M-invariant function I_1(α) of the template object X (indicated by "+") and of the template object Y (indicated by "×"). Functions obtained for other examples of the objects are superimposed (indicated by "."). The range of α is (0%, 230%).
Fig. 8. M-invariant function I_8(α) of the template object X (indicated by "+") and of the template object Y (indicated by "×"). Functions obtained for other examples of the objects are superimposed (indicated by "."). The range of α is (0%, 230%).
A similar approach can be used to detect damaged objects. Figs. 9 and 10 show selected m-invariant functions for the template objects X and Y, with the corresponding functions obtained for typical damaged objects superimposed (twelve damaged objects have been used for the experiments). The best results have been achieved when the damaged objects were divided into two classes (XD1, XD2 and YD1, YD2, respectively).
Fig. 9. M-invariant function I_8(α) of the template object X (indicated by "+"). Functions obtained for typical damaged objects X are superimposed (indicated by "-"). The range of α is (0%, 230%).
Fig. 10. M-invariant function I_8(α) of the template object Y (indicated by "×"). Functions obtained for typical damaged objects Y are superimposed (indicated by "."). The range of α is (0%, 230%).
For ordinary moment invariants, there was no significant improvement no matter how many classes of damaged objects had been created.

It is possible to identify damaged objects X using, for example, I_8(1.5) to detect damaged objects of class XD1, and I_8(2.0) to detect objects of class XD2. For any damaged object, at least one of these features is significantly different from the values for undamaged ones. The same features can be used to detect damaged objects Y, although a higher level of confidence can be expected if I_8(1.5) is substituted by I_8(1.7) (in order to detect damaged objects YD1). The experimental results are contained in Table 5.

Table 5
M-invariant descriptors used to detect damaged objects X and Y

Objects                 I_8(1.5)     I_8(1.7)      I_8(2.0)
X                       3.81-3.93    -             8.81-10.24
damaged X (class XD1)   1.34-1.42    -             10.94-11.35
damaged X (class XD2)   3.76-6.05    -             1.28-3.87
Y                       4.30-4.43    8.15-8.68     3.90-4.28
damaged Y (class YD1)   1.66-1.75    2.94-3.06     3.90-3.98
damaged Y (class YD2)   5.10-7.64    7.63-27.29    0.91-1.49

The experiments have shown that very high reliability can be achieved using the above-mentioned shape descriptors to identify undamaged objects and/or to detect damaged ones. The experiments have been conducted using about 50 captured images of undamaged objects X and Y, and more than 80 images of twelve damaged objects. The orientation and position of the objects were random, and the distance between the camera and the objects was not fixed. A hundred percent accuracy has been achieved both in the identification of objects and in the quality inspection. Although the quality of the input images was very good, even in a real industrial environment it should not be very difficult to create conditions providing a similar quality of images.

Currently the algorithm is being combined with an existing robot system capable of picking objects from a conveyor belt (Tunali et al., 1994). In that system, the algorithm determining the object's location and orientation is based on the same principles (i.e., moments of the second order). Therefore, the identification and quality inspection module only insignificantly increases the total time of image processing (see Section 5), and the preliminary tests show that the system is able to run within the existing time constraints.
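One possible reading of the accept/reject rule described above, using the intervals of Table 5 purely for illustration (the rule below and its threshold intervals are an interpretation learned from training images, not a procedure prescribed by the paper):

# Hypothetical decision rule: an object is accepted as an undamaged X only if
# every selected m-invariant descriptor falls inside the interval observed for
# undamaged X objects; any out-of-range feature signals a damaged part.
GOOD_X_RANGES = {
    ("I8", 1.5): (3.81, 3.93),     # separates X from class XD1
    ("I8", 2.0): (8.81, 10.24),    # separates X from class XD2
}

def is_undamaged_x(descriptors):
    """`descriptors` maps (invariant name, alpha) to the measured value."""
    return all(lo <= descriptors[key] <= hi
               for key, (lo, hi) in GOOD_X_RANGES.items())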
5. Final remarks

Since m-invariant functions are a generalization of ordinary moment invariants, the basic properties of moment invariants are preserved. For example, if a moment invariant is invariant to mirror reflections, the same is true for the corresponding m-invariant function.
However, there are some implementation differences between moment invariants and m-invariant descriptors. For images containing several isolated objects, the connected regions and their moment invariants can be found in one pass through the image. M-invariant descriptors require two passes: in the first pass the centres of the regions are calculated, and only then are the locations of the occluding circles known. In the majority of real-time applications (e.g. in robotics) this should not be a serious problem, since it is still only a minor fraction of the total processing cycle.

It should be mentioned that the occlusions do not create any connectivity problems (even if a region were no longer connected after the occlusion) because the images are not actually modified. The radius of an "occluding" circle is only used to decide which pixels should not contribute to the new shape descriptors. Therefore, it is possible to apply simultaneously any number of "occluding" circles of different sizes, i.e. to calculate any number of m-invariant descriptors within two passes through the image.

Recently, another modification of m-invariant functions has been considered. Instead of occluding circles, a combination of occluding rings is used. Such m-invariant functions are functions of several variables. This approach can be particularly useful when the objects of interest have very sophisticated shapes and/or contain holes. These issues will be addressed in a future paper.
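Returning to the two-pass scheme described above, the following sketch (illustrative only; it assumes the connected regions have already been labelled by some standard labelling routine) accumulates the raw moments of the visible pixels for a set of α values and converts them algebraically into central moments, so that the image itself is never altered:

import numpy as np

def two_pass_descriptors(labels, region_id, alphas):
    """Illustrative sketch of the two-pass scheme described above.

    Pass 1 yields the centre of area of the selected region.  Pass 2 visits
    the region's pixels and, for each alpha whose circle does not cover a
    pixel, accumulates the raw moments of the visible part; the central
    moments (and hence I1, I2, I8) follow algebraically.
    """
    ii, jj = np.nonzero(labels == region_id)
    area = float(ii.size)
    a, b = ii.mean(), jj.mean()                    # pass 1: centre of area of R
    r2 = (ii - a) ** 2 + (jj - b) ** 2

    out = {}
    for alpha in alphas:                           # pass 2 (all alphas can share one image pass)
        keep = r2 > alpha * area / np.pi           # pixels outside C(alpha), Eq. (2.9)
        x, y = ii[keep].astype(float), jj[keep].astype(float)
        m00, m10, m01 = float(x.size), x.sum(), y.sum()
        m20, m02, m11 = (x * x).sum(), (y * y).sum(), (x * y).sum()
        M20 = m20 - m10 ** 2 / m00                 # central moments of the visible part
        M02 = m02 - m01 ** 2 / m00
        M11 = m11 - m10 * m01 / m00
        out[alpha] = {"I1": (M20 + M02) / m00 ** 2,
                      "I2": ((M20 - M02) ** 2 + 4 * M11 ** 2) / m00 ** 4,
                      "I8": (M20 * M02 - M11 ** 2) / m00 ** 4}
    return out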
Acknowledgements

The author would like to thank Mr Amzar Yee Chee Seng from CENTRAL-MIDORI (S) Pte Ltd for his assistance in obtaining the samples of objects used for the experiments. The author thanks the reviewers for their constructive comments.
References

Abu-Mostafa, Y.S. and D. Psaltis (1984). Recognitive aspects of moment invariants. IEEE Trans. Pattern Anal. Mach. Intell. 6, 698-706.
Alt, F.L. (1962). Digital pattern recognition by moment invariants. J. Assoc. Comput. Mach. 9, 240-258.
Casasent, D. and D. Psaltis (1980). Hybrid processor to compute invariant moments for pattern recognition. Opt. Lett. 5, 395-397.
Cash, G.L. and M. Hatamian (1987). Optical character recognition by the method of moments. Computer Vision, Graphics, and Image Processing 39, 291-310.
Chen, K. (1990). Efficient parallel algorithms for the computation of two-dimensional image moments. Pattern Recognition 23, 109-119.
Cyganski, D., J.A. Orr and Z. Pinjo (1983). A tensor operator method for identifying the affine transformation relating image pairs. Proc. IEEE Conf. Computer Vision and Pattern Recognition, 351-363.
Dai, M., P. Baylou and M. Najim (1992). An efficient algorithm for computation of shape moments from run-length codes or chain codes. Pattern Recognition 25, 1119-1128.
Dudani, S., K. Breeding and R. McGhee (1977). Aircraft identification by moment invariants. IEEE Trans. Comput. 26, 39-45.
Friedberg, S.A. (1986). Finding axes of skewed symmetry. Computer Vision, Graphics, and Image Processing 34, 138-155.
Hu, M.K. (1962). Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory 8, 179-187.
Li, B.C. and J. Shen (1991). Fast computation of moment invariants. Pattern Recognition 24, 807-813.
Li, Y. (1992). Reforming the theory of invariant moments for pattern recognition. Pattern Recognition 25, 723-730.
Pilch, L. (1991). Analiza wybranych wspolczynnikow ksztaltu (in Polish). PhD Thesis, Academy of Mining and Metallurgy, Krakow.
Pinjo, Z., D. Cyganski and J.A. Orr (1985). Determination of 3-D object orientation from projections. Pattern Recognition Lett. 3, 351-356.
Reddi, S.S. (1981). Radial and angular moment invariants for image identification. IEEE Trans. Pattern Anal. Mach. Intell. 3, 240-242.
Reeves, A.P., R.J. Prokop, S.E. Andrews and F. Kuhl (1984). Three-dimensional shape analysis using moments and Fourier description. Proc. 7th Internat. Conf. Pattern Recognition, Montreal, 447-449.
Sadjadi, F.A. and E.L. Hall (1980). Three-dimensional moment invariants. IEEE Trans. Pattern Anal. Mach. Intell. 2, 127-136.
Safaee-Rad, R., K.C. Smith, B. Benhabib and I. Tchoukanov (1992). Application of moment and Fourier descriptors to the accurate estimation of elliptical shape parameters. Pattern Recognition Lett. 13, 497-508.
Sluzek, A. (1988). Using moment invariants to recognize and locate partially occluded 2D objects. Pattern Recognition Lett. 7, 253-257.
Sluzek, A. (1990). Zastosowanie metod momentowych do identyfikacji obiektow w cyfrowych systemach wizyjnych (in Polish). WPW, Warszawa.
Sluzek, A. (1994). Shape identification using new moment-based invariants. Proc. IEEE 9th Internat. Conf. TENCON'94, Singapore, 314-318.
Smith, F.W. and M.H. Wright (1971). Automatic ship photo interpretation by the method of moments. IEEE Trans. Comput. 20, 1089-1094.
Teague, M.R. (1980). Image analysis via the general theory of moments. J. Opt. Soc. Am. 70, 920-930.
Teh, Ch.-H. and R.T. Chin (1986). On digital approximation of moment invariants. Computer Vision, Graphics, and Image Processing 33, 318-326.
Teh, Ch.-H. and R.T. Chin (1988). On image analysis by the methods of moments. IEEE Trans. Pattern Anal. Mach. Intell. 10, 496-513.
Tunali, E.T., K. Tan and A. Sluzek (1994). A vision guided robot system for picking objects from a moving conveyor belt. Proc. Internat. Conf. on Data and Knowledge Systems for Manufacturing and Engineering, Hong Kong, 210-215.
Zakaria, M.F., L.J. Vroomen, P.J.A. Zsombor-Murray and J.M.H.M. van Kessel (1987). Fast algorithm for the computation of moment invariants. Pattern Recognition 20, 639-643.