Image and Vision Computing 21 (2003) 1027–1036 www.elsevier.com/locate/imavis
A method of optimum transformation of 3D objects used as a measure of shape dissimilarity

Hermilo Sánchez-Cruz*,1, Ernesto Bribiesca

Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Apdo. 20-726, México, D.F. 01000, Mexico

Received 4 June 2001; received in revised form 2 June 2003; accepted 12 June 2003

* Corresponding author. Fax: +52-5-622-3620. E-mail addresses: [email protected] (H. Sánchez-Cruz), [email protected] (E. Bribiesca).
1 Part of this work is a section of the Doctoral Dissertation of the first author presented to the Universidad Nacional Autónoma de México.
doi:10.1016/S0262-8856(03)00119-7
Abstract

In this work, we present a method that transforms one object into another; the computation of this transformation is used as a measure of shape-of-object dissimilarity. The considered objects are composed of voxels, so the shape difference between two objects can be ascertained by counting how many voxels we have to move, and how far, to change one object into the other. This work is based on the method presented in [Pattern Recognition 29 (1996) 1117], and our contributions to that work are a method of optimum transformation of objects and a proposed method of principal axes, which is used to orientate objects. The proposed method is applied to global data. Finally, we present some results using objects of the real world. © 2003 Elsevier B.V. All rights reserved.

Keywords: Object transformation; Principal axes; Shape-of-object dissimilarity
1. Introduction

One of the most important challenges in computer vision is shape-of-object recognition [2]. This paper proposes a measure of shape-of-object dissimilarity based on object transformation. Several authors have used different techniques for 3D object recognition. Besl and Jain [3] describe techniques for obtaining, processing and characterising range data in order to recognise 3D objects. Jain and Hoffman [4] define a similarity measure between the set of observed features and the set of evidence conditions for a given object in a database. Cohen-Or and Levin [5] construct 3D intermediate objects by a distance field metamorphosis; the advantage of this approach is the capability of morphing between objects of different topological genus. Lohmann [6] considers a similarity measure based on the quotients of volumes of the studied 3D objects over well-known geometrical objects. Adan et al. [7] propose a method
which uses a new set of global features as discriminatory parameters; these parameters are invariant under rotation, translation and scaling. Holden et al. [8] have evaluated eight different similarity measures applied to 3D serial magnetic resonance images. Mokhtarian et al. [9] describe a method to recognise free-form 3D objects by means of 3D models under different viewing conditions, based on the geometric hashing algorithm and global verification. Other methods focus on 3D shape recognition; for instance, Dickinson et al. [10] present techniques for recognising 3D objects from a single 2D image, in which a set of features or primitives is extracted from an input image. Zhang et al. [11] define an automatic construction of a view-independent relational model for 3D object recognition.

To prove our proposed method, we used digital elevation model (DEM) data from the valley of Mexico and other free-form objects. DEMs are digital representations of the Earth's surface. Generally speaking, a DEM is generated as a uniform rectangular grid organised in profiles. The digitisation of these models is based on 1:250,000 scale contours. Fig. 1 shows a DEM of the valley of Mexico. This model is a 3D mesh of 250 × 250
Fig. 1. A digital elevation model of the valley of Mexico. This model shows three important volcanoes: La Malinche, Popocatépetl and Iztaccíhuatl.
elements and displays three different volcanoes. On the upper left-hand side of the figure there is a volcano called La Malinche, and on the upper right-hand side there are two other volcanoes: Popocatépetl above and, slightly below, Iztaccíhuatl. In this work, DEMs and the other above-mentioned objects are represented as binary solids composed of voxels. To preserve the essential shape characteristics of the volcanoes, we use morphological operators to erode the DEMs [12]. Fig. 2 illustrates the volcanoes, now in isolation and decomposed into voxels: Fig. 2(a) shows La Malinche, composed of 19,154 voxels; Fig. 2(b) presents Popocatépetl, which has 20,769 voxels; and Fig. 2(c) illustrates Iztaccíhuatl, here represented by 19,776 voxels. Note that the volcanoes are not oriented and each consists of a different number of voxels.

Fig. 2. Three different volcanoes in the valley of Mexico represented by voxels: (a) the volcano La Malinche composed of 19,154 voxels; (b) Popocatépetl composed of 20,769 voxels; (c) Iztaccíhuatl, represented by 19,776 voxels.

This paper is organised as follows. In Section 2 we present concepts and definitions. In Section 3 we describe the proposed method of principal axes, which makes objects invariant under rotation. In Section 4 we present the method of optimum transformation of objects and show some progressive transformations of different volcanoes. Section 5 gives the measure of shape-of-object dissimilarity between any two objects based on the transformation of one into another. Finally, in Section 6 we give some conclusions.
2. Concepts and definitions

In order to introduce the proposed method of object transformations and the measure of shape-of-object dissimilarity, a number of definitions are presented in this section. An important simplification in this work is the assumption that 'objects' have been isolated from images of the real world and are defined as the result of previous processing. Objects are composed of voxels. The following definitions are based on Ref. [13]:

1. A voxel, $(row, column, slice)$, is a volume element represented geometrically by a cube located at spatial coordinates $v(r, c, s)$, which may be marked as filled with matter or empty.

2. A 3D binary image is a spatial representation of a solid or a 3D scene, in which each voxel takes either the value zero or the value one. The value of the image located at spatial coordinates $(r, c, s)$ is denoted by $I(r, c, s)$.
3. The centroid of a volume $V$ (of constant density), $(\bar{r}, \bar{c}, \bar{s})$, is the center of mass of the volume. It is the mean position (row, column, slice) over all voxels in the volume and is given by

$$\bar{r} = \frac{1}{\#V} \sum_{(r,c,s) \in V} r, \qquad \bar{c} = \frac{1}{\#V} \sum_{(r,c,s) \in V} c, \qquad \bar{s} = \frac{1}{\#V} \sum_{(r,c,s) \in V} s.$$
4. An object is a solid (of constant density) represented by triplets $(x, y, z)$ measured from the origin. Each triplet represents a voxel with matter; an object is the set of voxels $(x, y, z)$ for which $I(x, y, z) = 1$.

5. The Euclidean distance between two voxels $a = (a_1, a_2, a_3)$ and $b = (b_1, b_2, b_3)$ is defined by

$$d(a, b) = \sqrt{(b_1 - a_1)^2 + (b_2 - a_2)^2 + (b_3 - a_3)^2}.$$
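As a concrete illustration of these definitions, the following is a minimal sketch in Python with NumPy, assuming objects are stored as binary 3D arrays (1 = voxel with matter, 0 = empty), as in Definition 2. The function names are ours, chosen for illustration.

```python
import numpy as np

def voxels(image):
    """Return the (r, c, s) coordinates of all filled voxels (Definition 4)."""
    return np.argwhere(image == 1)

def centroid(image):
    """Mean (row, column, slice) position over all filled voxels (Definition 3)."""
    return voxels(image).mean(axis=0)

def euclidean(a, b):
    """Euclidean distance between two voxels (Definition 5)."""
    return np.sqrt(np.sum((np.asarray(b, float) - np.asarray(a, float)) ** 2))

# Example: a 2x2x2 block of matter inside a 4x4x4 binary image.
I = np.zeros((4, 4, 4), dtype=np.uint8)
I[1:3, 1:3, 1:3] = 1
print(centroid(I))                       # [1.5 1.5 1.5]
print(euclidean((0, 0, 0), (1, 2, 2)))   # 3.0
```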
3. Invariance of objects under translation, rotation, and volume

In order to transform one object into another, it is necessary to make the objects invariant under translation, rotation, and volume (i.e. they should be composed of the same number of voxels). Several authors have used different methods to make object representations invariant under translation, rotation, and scaling. Hu [14] describes visual pattern recognition by moment invariants. Bracewell [15] and Brigham [16] are part of the standard literature on the Fourier Transform, which is used to produce invariance under rotation and scaling; an interesting review of Fourier Transform techniques for invariant pattern recognition appears in Ref. [17]. Lin [18] defines the 'universal principal axes' for shapes. Galvez and Canton [19] present an approach for shape reconstruction, normalisation and recognition of 3D objects using principal axes. Recently, Bullow et al. [20] used principal axes for 3D object analysis without considering large differences in shape. To make objects invariant under rotation, we use the method of principal axes.

3.1. Invariance of objects under rotation

A rotation of an object is a rigid transformation of $\mathbb{R}^3$ performed about an axis of rotation $E$, which moves each point in the same sense by the same amount [21]. The angular momentum $\mathbf{L}$ of $n$ particles with respect to the origin of a given coordinate system is given by

$$\mathbf{L} = \sum_{j=1}^{n} m_j (\mathbf{r}_j \times \mathbf{v}_j), \qquad (1)$$
where $m_j$ is the mass of the $j$th voxel, $\mathbf{r}_j$ is its radius vector and $\mathbf{v}_j$ its velocity. In this case, objects are considered rigid solids, so the velocity can be written as $\mathbf{v}_j = \boldsymbol{\omega} \times \mathbf{r}_j$, where $\boldsymbol{\omega}$ is the angular velocity. Considering $\omega_i = \delta_{ik}\,\omega_k$, where $\delta_{ik}$ is the Kronecker delta ($\delta_{ik} = 1$ if $i = k$; $\delta_{ik} = 0$ if $i \neq k$), the components of the angular momentum $\mathbf{L} = (L_1, L_2, L_3)$ are

$$L_i = \omega_k \sum_{j=1}^{n} m_j \left( \delta_{ik}\, x_l^{(j)} x_l^{(j)} - x_i^{(j)} x_k^{(j)} \right), \qquad (2)$$
where $i, k \in \{1, 2, 3\}$, there is an implicit summation over $l$, and $j$ denotes the $j$th particle (in the context of this work, particles are voxels). The summation in Eq. (2) defines the moment of inertia tensor $T_{ik}$:

$$T_{ik} = \sum_{j=1}^{n} m_j \left( \delta_{ik}\, x_l^{(j)} x_l^{(j)} - x_i^{(j)} x_k^{(j)} \right). \qquad (3)$$
The nine combinations of this quantity form a second-order tensor [22]. Using the well-known expression for regular moments,

$$M_{pqr} = \sum x_1^p\, x_2^q\, x_3^r\, f(x, y, z), \qquad (4)$$

and considering $m_j = 1$ and $f(x, y, z) = 1$ (because the considered objects are solids of constant density), we obtain the inertia matrix

$$T = \begin{pmatrix} M_{020} + M_{002} & -M_{110} & -M_{101} \\ -M_{110} & M_{002} + M_{200} & -M_{011} \\ -M_{101} & -M_{011} & M_{200} + M_{020} \end{pmatrix}. \qquad (5)$$
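As an illustration, here is a minimal sketch in Python with NumPy of how the inertia matrix of Eq. (5) can be built from the moments of Eq. (4). The function names are ours; unit-mass voxels ($m_j = 1$, $f(x, y, z) = 1$) are assumed, as in the text, and we take the moments about the centroid so that the resulting axes describe orientation rather than position.

```python
import numpy as np

def moment(coords, p, q, r):
    """Regular moment M_pqr of Eq. (4), with f(x, y, z) = 1."""
    x1, x2, x3 = coords[:, 0], coords[:, 1], coords[:, 2]
    return np.sum(x1 ** p * x2 ** q * x3 ** r)

def inertia_matrix(image):
    """Inertia matrix T of Eq. (5) for a binary 3D image (unit-mass voxels)."""
    coords = np.argwhere(image == 1).astype(float)
    coords -= coords.mean(axis=0)          # take moments about the centroid
    M = lambda p, q, r: moment(coords, p, q, r)
    return np.array([
        [M(0, 2, 0) + M(0, 0, 2), -M(1, 1, 0),             -M(1, 0, 1)],
        [-M(1, 1, 0),             M(0, 0, 2) + M(2, 0, 0), -M(0, 1, 1)],
        [-M(1, 0, 1),             -M(0, 1, 1),             M(2, 0, 0) + M(0, 2, 0)],
    ])
```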
Now, let us form the inner product of $T_{ik}$ with an arbitrary vector: $T_{ik} A_k = B_i$. The new vector $\mathbf{B}$, with components $B_i$, generally differs from $\mathbf{A}$: the operation $T_{ik} A_k$ both rotates the vector and changes its magnitude. If we want to find all vectors that are not rotated by this inner product, we have to solve the equation $T_{ik} A_k = \lambda A_i$, where $\lambda$ is a scalar. Such vectors are called eigenvectors, or principal axes, of the tensor $T_{ik}$. To make objects invariant under rotation, we use the principal axes. The last equality can be written as $T_{ik} A_k - \lambda A_i = (T_{ik} - \lambda \delta_{ik}) A_k = 0$, or

$$(T_{11} - \lambda) A_1 + T_{12} A_2 + T_{13} A_3 = 0$$
$$T_{21} A_1 + (T_{22} - \lambda) A_2 + T_{23} A_3 = 0 \qquad (6)$$
$$T_{31} A_1 + T_{32} A_2 + (T_{33} - \lambda) A_3 = 0.$$

This system has a nontrivial solution if and only if its determinant vanishes; solving it yields a third-order polynomial with three roots: $\lambda_1$, $\lambda_2$ and $\lambda_3$. We solved the last equation by finding
eigenvectors as in Ref. [22] and by using Mathematica [23]. For irregular objects (and a fixed normalisation of the eigenvectors), the three $\lambda$'s are in general different. Generally speaking, the smallest $\lambda$ corresponds to the most elongated direction of the object, and the largest $\lambda$ to the direction in which the object is least dispersed; the remaining value of $\lambda$ corresponds to an intermediate situation. Thus, the $\lambda$'s give a certain degree of information about the shape of the object. A numerical sketch of this orientation step is given at the end of this section.

3.2. Invariance of objects under volume and translation

To describe most objects of the real world, we require different amounts of information (different numbers of voxels). When we want to transform one object into another, it is necessary that both objects carry the same amount of information (the same number of voxels in each one). To make objects invariant under volume, we use morphological operators to erode DEMs [12] and a method of volume normalisation which was presented in Ref. [1].

A translation of an object is a rigid transformation of $\mathbb{R}^3$ that moves each voxel of the object in the same direction by the same amount [21], leaving the distances between any two voxels unchanged. Thus, we translate the objects so that their centroids coincide, in order to superimpose them. At this point the objects are already oriented by means of the method of principal axes and are composed of the same number of voxels; the normalisation-of-objects step is concluded, and we are ready to transform one object into another. Fig. 3 displays the volcanoes from Fig. 2, already invariant under translation, rotation and volume. Here, each volcano is composed of 16,228 voxels. Fig. 3(a) illustrates the normalised volcano La Malinche, Fig. 3(b) presents Popocatépetl, and Fig. 3(c) shows Iztaccíhuatl.
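The following sketch (Python with NumPy) illustrates the orientation step of Section 3.1. It is our illustrative reading of the method, not the authors' code: instead of solving the third-order characteristic polynomial symbolically (the paper uses Mathematica [23]), it solves the symmetric eigenproblem of Eq. (6) numerically with `numpy.linalg.eigh`, reusing `inertia_matrix` from the previous sketch.

```python
import numpy as np

def orient_by_principal_axes(image):
    """Project the voxel coordinates of a binary image onto its principal axes."""
    coords = np.argwhere(image == 1).astype(float)
    coords -= coords.mean(axis=0)       # invariance under translation
    T = inertia_matrix(image)           # from the sketch after Eq. (5)
    lambdas, axes = np.linalg.eigh(T)   # eigenvalues ascending, axes in columns
    # Smallest lambda <-> most elongated direction (see the text above).
    return coords @ axes, lambdas       # rotated coordinates, the three lambdas
```

In practice, the rotated coordinates must be re-discretised into a voxel grid, and the sign ambiguity of each eigenvector resolved by some convention, before two objects can be overlapped.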
4. How to transform an object into another

When two objects are invariant under translation, rotation, and volume, the method proposed here transforms the first object into the second. This is an advantage, because when this transformation is performed, the distribution of shape-of-object difference between the objects is computed. When this distribution is more concentrated, the objects appear less similar; when the distribution is more uniform, the objects appear more similar. This concept coincides with the intuitive psychological notion of 'shape-of-object comparison'. Fig. 4 illustrates how to transform one object into another: Fig. 4(a) shows the object O1 and Fig. 4(b) presents
Fig. 3. The volcanoes already normalised, each composed of 16,228 voxels: (a) La Malinche; (b) Popocatépetl; (c) Iztaccíhuatl.
the object O2. The object O1 corresponds to the volcano La Malinche, and O2 corresponds to Popocatépetl. The objects O1 and O2 have the same number of voxels: 16,228. The object O1 will be transformed into the object O2. The steps to transform object O1 into O2 are as follows:

1. Find common voxels and leave them unchanged. Once the objects O1 and O2 are invariant under translation, rotation, and volume, by means of the proposed method of principal axes and the method of volume normalisation, the two objects are overlapped, which defines the common voxels. Thus, let the 3D binary image of O1 be $I_{O1}$ and the 3D binary image of O2 be $I_{O2}$. Then $I_C$ is defined by

$$I_C = I_{O1} \cap I_{O2}. \qquad (7)$$

Clearly, $I_C$ corresponds to the common voxels of the objects O1 and O2. Fig. 4(a) shows $I_{O1}$, Fig. 4(b) $I_{O2}$, Fig. 4(c) illustrates the overlapping of the objects, and Fig. 4(d) $I_C$, respectively.
Fig. 4. Transformation of objects: (a) the object O1; (b) the object O2; (c) the overlapping of the object O1 on O2; (d) common voxels between O1 and O2; (e) voxels to move (positive voxels); (f) places (negative voxels) where positive voxels will be positioned.
In this case, the number of common voxels is 10,266, as shown in Fig. 4(d).

2. Find positive voxels. The positive voxels correspond to the voxels to be moved, which are represented by the 3D binary image $I_P$, i.e.

$$I_P = I_{O1} \setminus I_{O2}. \qquad (8)$$

Fig. 4(e) illustrates all positive voxels. The number of positive voxels is 5962.

3. Find negative voxels. The negative voxels correspond to the 3D binary image $I_N$, which is defined by

$$I_N = I_{O2} \setminus I_{O1}, \qquad (9)$$

i.e. the 3D binary image $I_N$ represents the negative voxels, the places where the positive voxels will be put. Fig. 4(f) displays the negative voxels. The number of negative voxels is also 5962.

4. Move the voxels. There are many ways of moving voxels: if $k$ is the number of voxels to be moved, then $k!$ is the number of different ways of moving them from $I_P$ to $I_N$. The 3D binary images and the distances between their voxels may be considered as a weighted complete bipartite graph [24] with bipartition $(I_P, I_N)$, where

$$I_P = \{p_i : i \le k\}, \qquad (10)$$

$$I_N = \{n_j : j \le k\}, \qquad (11)$$
and the edge $p_i n_j$ has weight $w_{ij}$ (each weight $w_{ij}$ corresponds to the Euclidean distance between the voxels $p_i$ and $n_j$). The optimal assignment problem is then to find the minimum-weight perfect matching in this weighted graph (equivalently, a maximum-weight matching once the weights are negated), and such a matching is termed an optimal matching. Fig. 5 illustrates the weighted complete bipartite graph with bipartition $(I_P, I_N)$; the bold lines in Fig. 5 represent the optimal matching. A method for finding an optimal matching in a weighted complete bipartite graph is the Kuhn–Munkres algorithm [24]. Using the Kuhn–Munkres algorithm, the distances covered by the voxels to be moved (the positive voxels) are minimised, and this produces an optimum transformation of objects; a minimal sketch of this step appears after the caption of Fig. 5.
Fig. 5. The weighted complete bipartite graph with bipartition $(I_P, I_N)$ (positive voxels, negative voxels).
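The following is a minimal sketch of steps 1–4 in Python, assuming the two binary images are already normalised (same grid, same number of voxels). The paper uses the Kuhn–Munkres algorithm [24]; here SciPy's `linear_sum_assignment`, a modern solver for the same optimal assignment problem, stands in for it, minimising the total Euclidean distance directly. The function name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def optimal_transformation(I_O1, I_O2):
    """Steps 1-4: common/positive/negative voxels, then an optimal matching."""
    A, B = I_O1.astype(bool), I_O2.astype(bool)
    common   = A & B      # Eq. (7): I_C, the voxels left unchanged
    positive = A & ~B     # Eq. (8): I_P, the voxels to move
    negative = B & ~A     # Eq. (9): I_N, the places to fill

    P = np.argwhere(positive)   # the p_i of Eq. (10)
    N = np.argwhere(negative)   # the n_j of Eq. (11)

    W = cdist(P, N)             # weights w_ij: Euclidean distances p_i -> n_j
    rows, cols = linear_sum_assignment(W)   # minimum-total-weight perfect matching
    return P[rows], N[cols], W[rows, cols]  # matched pairs and their distances
```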
4.1. Examples of transformations between objects

In this subsection, we present two examples of transformations of objects. Fig. 6 shows the different stages of the transformation of the volcano Iztaccíhuatl (see Fig. 6(a)) into La Malinche (see Fig. 6(v)) in steps of 25 voxels. The objects shown in Fig. 6(a) and (v) are already invariant under translation, rotation and volume; they are represented using the same amount of information. To simplify the progressive transformations, the volcanoes were normalised to 2122 voxels each. Fig. 6(a) and (v) have 1609 voxels in common, and 513 voxels must be moved. Fig. 6(b) presents the first 25 moved voxels, Fig. 6(c) shows 50 moved voxels, and so on. Notice that this is an optimum transformation of objects.

Fig. 7 presents the different stages of the transformation of the volcano Popocatépetl (see Fig. 7(a)) into the volcano La Malinche (see Fig. 7(aa)). Both volcanoes are composed of 2122 voxels, and they are already invariant under translation, rotation and volume. Fig. 7 displays the progressive transformations in steps of 29 voxels: Fig. 7(a) and (aa) have 1361 common voxels and 761 voxels to move; Fig. 7(b) displays the first 29 moved voxels, Fig. 7(c) illustrates 58 moved voxels, and so on. Note that the views of Figs. 6 and 7 are presented in perspective; in order to improve the presentation of the volcano transformation, the point of view of Fig. 7 was changed with respect to Fig. 6.

4.2. Results of transformations between objects

To complete our results, we have included two additional objects, which correspond to images of cars. Fig. 8 shows the mentioned cars, which are composed of 16,228 voxels each; at this stage, these models are only invariant under rotation. Table 1 summarises the common voxels of the five objects, which are invariant under scale and are composed of 2122 voxels each. Notice that the maximum number of common voxels over the different pairs of objects is equal
Fig. 6. The different stages of the transformation of the volcano Iztaccíhuatl into the volcano La Malinche in steps of 25 voxels: (a) the volcano Iztaccíhuatl; (b) 25 voxels are moved; (c) 50 voxels are moved; and so on.
Fig. 7. The different stages of the transformation of Popocatépetl into La Malinche in steps of 29 voxels: (a) Popocatépetl; (b) 29 voxels are moved; (c) 58 voxels are moved; and so on.
Fig. 8. Examples of cars composed of 16,228 voxels each: (a) a Porsche; (b) a Camaro.
to 1765. Table 2 summarises the positive voxels of the normalised objects. Notice that the pair Iztaccíhuatl–Camaro has a larger number of positive voxels than the other pairs.

Table 1
Common voxels

| Objects O1–O2 | Iztaccíhuatl | La Malinche | Popocatépetl | Camaro | Porsche |
|---|---|---|---|---|---|
| Iztaccíhuatl | 2122 | 1609 | 1388 | 1217 | 1237 |
| La Malinche | 1609 | 2122 | 1361 | 1311 | 1305 |
| Popocatépetl | 1388 | 1361 | 2122 | 1293 | 1348 |
| Camaro | 1217 | 1311 | 1293 | 2122 | 1765 |
| Porsche | 1237 | 1305 | 1348 | 1765 | 2122 |
Table 2
Positive voxels

| Objects O1–O2 | Iztaccíhuatl | La Malinche | Popocatépetl | Camaro | Porsche |
|---|---|---|---|---|---|
| Iztaccíhuatl | 0 | 513 | 734 | 905 | 885 |
| La Malinche | 513 | 0 | 761 | 811 | 817 |
| Popocatépetl | 734 | 761 | 0 | 829 | 774 |
| Camaro | 905 | 811 | 829 | 0 | 357 |
| Porsche | 885 | 817 | 774 | 357 | 0 |
Table 3
Measure of shape dissimilarity among the five normalised objects, $D(O1, O2)$

| | La Malinche | Popocatépetl | Iztaccíhuatl | Camaro | Porsche |
|---|---|---|---|---|---|
| La Malinche | 0 | 4367.86 | 3829.56 | 6475.66 | 6341.65 |
| Popocatépetl | 4367.86 | 0 | 4755.78 | 7163.24 | 5723.60 |
| Iztaccíhuatl | 3829.56 | 4755.78 | 0 | 7908.27 | 7271.72 |
| Camaro | 6475.66 | 7163.24 | 7908.27 | 0 | 2671.79 |
| Porsche | 6341.65 | 5723.60 | 7271.72 | 2671.79 | 0 |
5. Transformation of objects as a measure of shape-of-object dissimilarity

This section gives a procedure to measure the shape-of-object dissimilarity between any two objects based on the transformation of one into the other. Dissimilar objects require a large number of moved voxels to transform one into the other, while similar objects require fewer; when two objects are identical, the number of moved voxels is zero. Thus, the distance $D$, or measure of shape-of-object dissimilarity, between two objects is obtained by counting how many voxels we have to move, and how far, to transform one object into the other. The shape-of-object dissimilarity between the objects O1 and O2 is defined by

$$D(O1, O2) = \sum_{i,j}^{k} d(O1_i, O2_j), \qquad (12)$$
where $d(O1_i, O2_j)$ is the Euclidean distance between the voxels $O1_i$ and $O2_j$; $O1_i$ corresponds to the $i$th voxel of the object O1 (a positive voxel), and $O2_j$ corresponds to the $j$th voxel of the object O2 (a negative voxel), the pairs $(i, j)$ being given by the optimal matching. Table 3 displays the distance $D$, or measure of shape-of-object dissimilarity, for the five objects. In conclusion, the two most similar of the five objects studied above are the Camaro and the Porsche; on the other hand, the most dissimilar objects are the Camaro and Iztaccíhuatl.
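Given the optimal matching of Section 4, Eq. (12) reduces to summing the matched distances. A minimal sketch, reusing the hypothetical `optimal_transformation` from the Section 4 sketch:

```python
def dissimilarity(I_O1, I_O2):
    """D(O1, O2) of Eq. (12): total distance covered by the moved voxels."""
    _, _, distances = optimal_transformation(I_O1, I_O2)
    return distances.sum()   # zero when the two objects are identical
```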
6. Conclusions

When objects are irregular, the extraction of a set of features or primitives may be difficult. The method proposed here is robust in measuring shape-of-object
dissimilarity among irregular objects, as shown with the volcanoes and cars considered in this paper. The shape-of-object measurement method proposed here depends on two factors: first, the method of object normalisation, which makes objects invariant under translation, rotation and volume; and second, the method for placing voxels, which minimises the distance covered by the moved voxels and so produces an optimum transformation of objects. In special cases, the first method may affect the second, and additional criteria must be added. Further work may consider how to minimise the processing time of the proposed method of object transformation.
Acknowledgements

This work was supported in part by CONACyT, UNAM and REDII-CONACyT. We wish to express our gratitude to Ricardo Berlanga, David A. Rosenblueth and Julio Peralta for their help in reviewing this work. The DEM data used in this study were provided by INEGI.
References

[1] E. Bribiesca, Measuring 3-D shape similarity using progressive transformations, Pattern Recognition 29 (1996) 1117–1129.
[2] D.H. Ballard, C. Brown, Computer Vision, Prentice-Hall, Englewood Cliffs, NJ, 1982.
[3] P.J. Besl, R.C. Jain, Three-dimensional object recognition, ACM Computing Surveys 17 (1985) 75–139.
[4] A.K. Jain, R. Hoffman, Evidence-based recognition of 3-D objects, IEEE Transactions on Pattern Analysis and Machine Intelligence 10 (1988) 783–802.
[5] D. Cohen-Or, D. Levin, Three-dimensional distance field metamorphosis, ACM Transactions on Graphics 17 (1998) 116–141.
[6] G. Lohmann, Volumetric Image Analysis, Wiley and Sons/B.G. Teubner Publishers, New York, NY, 1998.
[7] A. Adan, C. Cerrada, V. Feliu, Global shape invariants: a solution for 3D free-form objects discrimination/identification problem, Pattern Recognition 34 (2001) 1331–1348.
[8] M. Holden, D.L.G. Hill, E.R.E. Denton, J.M. Jarosz, T.C.S. Cox, T. Rohlfing, J. Goodey, D.J. Hawkes, Voxel similarity measures for 3-D serial MR brain image registration, IEEE Transactions on Medical Imaging 19 (2000) 94–102.
[9] F. Mokhtarian, N. Khalili, P. Yuen, Multi-scale free-form 3D object recognition using 3D models, Image and Vision Computing 19 (2001) 271–281.
[10] S.J. Dickinson, A.P. Pentland, A. Rosenfeld, From volumes to views: an approach to 3-D object recognition, CVGIP: Image Understanding 55 (1992) 130–154.
[11] S. Zhang, G. Sullivan, K. Baker, The automatic construction of a view-independent relational model for 3D object recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (6) (1993) 778–786.
[12] E. Bribiesca, Digital elevation model data analysis using the contact surface area, Graphical Models and Image Processing 60 (1998) 166–172.
[13] R.M. Haralick, L.G. Shapiro, Glossary of computer vision terms, Pattern Recognition 24 (1991) 69–93.
[14] M.K. Hu, Visual pattern recognition by moment invariants, IEEE Transactions on Information Theory 8 (1962) 179–187.
[15] R.N. Bracewell, The Fourier Transform and Its Applications, Electrical and Electronics Engineering Series, second ed., McGraw-Hill, New York, 1978.
[16] E. Brigham, The Fast Fourier Transform and Its Applications, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[17] H. Wechsler, Invariance in pattern recognition, in: P.W. Hawkes (Ed.), Advances in Electronics and Electron Physics, vol. 69, Academic Press, New York, 1987, pp. 262–322.
[18] J. Lin, Universal principal axes: an easy-to-construct tool useful in defining shape orientations for almost every kind of shape, Pattern Recognition 26 (1993) 485–493.
[19] J.M. Galvez, M. Canton, Normalization and shape recognition of three-dimensional objects by 3D moments, Pattern Recognition 26 (1993) 667–681.
[20] H. Bullow, L. Dooley, D. Wermser, Application of principal axes for registration of NMR image sequences, Pattern Recognition Letters 21 (2000) 329–336.
[21] W. Karush, Webster's New World Dictionary of Mathematics, Simon and Schuster, New York, 1989.
[22] A.I. Borisenko, I.E. Tarapov, Vector and Tensor Analysis, Dover Publications, New York, 1979.
[23] S. Wolfram, Mathematica: A System for Doing Mathematics by Computer, Addison-Wesley, Redwood City, CA, 1991.
[24] J.A. Bondy, U.S.R. Murty, Graph Theory with Applications, Macmillan Press, London, 1976.