A corner orientation detector


Image and Vision Computing 17 (1999) 761–769

F. Chabat*, G.Z. Yang, D.M. Hansell

Dept. of Imaging, Royal Brompton Hospital, Sydney Street, London SW3 6NP, UK

Received 3 November 1997; received in revised form 25 June 1998; accepted 6 July 1998

* Corresponding author.

Abstract

This paper introduces an operator for the detection of the true location and orientation of corners. Its strength in dealing with junctions as well as corners is demonstrated. With this method, corner points are detected as intensity patterns that are anisotropic along several directions. Pixels belonging to the arms of the detected corners are analysed, and a histogram search provides a measure of their dominant orientations. Based on a single-derivative scheme proposed by Yang et al. [Structure adaptive anisotropic image filtering, Image and Vision Computing 14 (1996) 135–145], the approach has proved to be insensitive to noise and has been applied to both synthetic and real-life images. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Corner detection; Corner orientation; Measure of anisotropism; Measure of uni-directionality

1. List of symbols

x: position vector of a point in space
I(·): input image
c(·): cornerness
Ω: local neighbourhood of x
g(·): local measure of uni-directionality of a pattern
θ(·): local measure of orientation of a pattern
ψ(g(x)): a decreasing function with respect to g(x), used to compose c(·)
Cj: corner point
s_c(j)(·): measure of the likelihood of a pixel being part of the edges associated with a particular corner Cj
Γ: local neighbourhood of a corner point
v: the vector CjMi, Mi being a pixel in the neighbourhood of Cj
α: measure of the difference between the direction of an edge at pixel Mi and the direction of CjMi
n: parameter controlling how strictly direction compatibility is enforced in the expression of s_c(j)(·)
F_c(j)(·): a function giving less weight to pixels far away from Cj
H(·): histogram of s_c(j)(·)
m_c: measure of confidence of the identification of a corner
∇: gradient operator
‖·‖: magnitude of a vector
∂: partial derivative operator

2. Introduction

Corners and junctions are essential 2D image features. The reliable identification of these points provides important information in numerous computer vision applications such as stereo vision, motion detection, and scene analysis. They indicate the presence and location of objects, thus narrowing down the search problem and making high-level interpretation of images easier. To this end, it is also important to evaluate the orientation of the arms of the detected corners, i.e. the edges which intersect as corners or junctions.

Thus far, most corner detectors have been based on second-order derivative schemes. Kitchen and Rosenfeld [2] compute the strength of a corner c(x) as the product of the rate of change of gradient direction along an edge and the gradient magnitude:


c(x) = \frac{I_{xx} I_y^2 - 2 I_{xy} I_x I_y + I_{yy} I_x^2}{I_x^2 + I_y^2}    (1)

where x = (x, y), I(x) is the original image, and

I_x = \partial I/\partial x, \; I_y = \partial I/\partial y, \; I_{xx} = \partial^2 I/\partial x^2, \; I_{xy} = \partial^2 I/\partial x \partial y, \; I_{yy} = \partial^2 I/\partial y^2.
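To make Eq. (1) concrete, here is a minimal sketch (ours, not the authors' code) using finite differences via numpy; the small constant eps is an assumption added to guard the division in flat regions:

```python
import numpy as np

def kitchen_rosenfeld(image, eps=1e-12):
    """Sketch of the Kitchen-Rosenfeld cornerness of Eq. (1),
    using finite differences for the derivatives."""
    I = image.astype(float)
    Iy, Ix = np.gradient(I)        # first derivatives (axis 0 ~ y, axis 1 ~ x)
    Ixy, Ixx = np.gradient(Ix)     # second derivatives of Ix
    Iyy, _ = np.gradient(Iy)       # second derivative of Iy along y
    num = Ixx * Iy**2 - 2.0 * Ixy * Ix * Iy + Iyy * Ix**2
    return num / (Ix**2 + Iy**2 + eps)   # eps avoids division by zero
```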

Dreschler and Nagel [3] and Zuniga and Haralick [4] adopt nearly equivalent approaches, all of which are sensitive to noise because derivatives amplify noise.


It has been suggested that median corner detectors, which rely on the difference between an image and its median-filtered version, give better results than the Kitchen-Rosenfeld technique [5]. Other methods for corner detection include template matching (a large set of templates is often necessary, making the search practically infeasible), the generalised Hough transform (adapted to curved or chipped corners, and offering the possibility of optimising sensitivity and accuracy) [6], and dissimilarity corner detectors (a fast and robust method which does not require image derivatives and therefore cannot provide edge information) [7].

Based on minimisation techniques, another class of detectors relying on the mathematical modelling of junctions has also been proposed [8,9]. Parida et al., for instance, define a junction as a piecewise constant function of θ in local polar coordinates centred on a corner point. By minimising the L2 norm of the difference between this model and the original image, the parameters of the best-fit ideal junction are found [9]. These methods can provide good estimates of the position and orientation of junctions, but often involve a large computational cost.

One major drawback of traditional edge-based corner detectors is that they do not tackle the problem of corner orientation. A common solution is to take the mean gradient orientation over a small neighbourhood of each estimated corner. Though effective, this approach provides only the averaged orientation of the arms of the corner, with an accuracy no better than 20 degrees [5]. Furthermore, it is not well adapted to handling junctions. In this paper we introduce a corner detector based on the analysis of local anisotropism [1] and identify corners as points with a strong gradient that is not oriented in a single dominant direction. We then calculate the orientation of the arms of the corner or junction by measuring the likelihood of surrounding pixels being part of the corner structure.

3. Measure of uni-directionality

For the purpose of defining an adaptive filter, Yang et al. [1] proposed a technique for defining and computing the measure of anisotropism at each point within an image. The issue of corner identification was addressed in that paper; here we demonstrate how the technique can be extended to corner orientation detection.

For an intensity pattern strongly orientated along one direction, the power spectrum clusters along a line through the origin in the Fourier domain. By determining this line, and how closely it approximates the Fourier transform of the image, the orientation θ and the strength g of the uni-directionality of the pattern can be derived. It has been demonstrated that g and θ do not require the actual computation of the Fourier transform and can be obtained through the following analytical expressions:

\theta(x) = \frac{1}{2} \tan^{-1} \left( \frac{\iint_{\Omega} 2 I_x I_y \, dx \, dy}{\iint_{\Omega} (I_x^2 - I_y^2) \, dx \, dy} \right)    (2)

g(x) = \frac{\left( \iint_{\Omega} (I_x^2 - I_y^2) \, dx \, dy \right)^2 + \left( \iint_{\Omega} 2 I_x I_y \, dx \, dy \right)^2}{\left( \iint_{\Omega} (I_x^2 + I_y^2) \, dx \, dy \right)^2}    (3)

In Eqs. (2) and (3), Ω is a small neighbourhood of x. This scheme uses integrated single derivatives and is thus less sensitive to noise than other techniques. The value of g(x) defined by Eq. (3) is close to 1 for a strongly orientated pattern, and is 0 for isotropic regions. When dealing with images with low signal-to-noise ratios, this method proves to be more robust than the mere estimation of the gradient direction.
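As an illustrative sketch (our own, not the paper's implementation), the integrals of Eqs. (2) and (3) can be approximated by box sums of three derivative products; the window size and the use of scipy's uniform_filter are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unidirectionality(image, window=7):
    """Sketch of Eqs. (2)-(3): orientation theta(x) and uni-directionality
    g(x) from single derivatives integrated over a window Omega."""
    I = image.astype(float)
    Iy, Ix = np.gradient(I)
    a = uniform_filter(Ix**2 - Iy**2, size=window)   # ~ integral of (Ix^2 - Iy^2)
    b = uniform_filter(2.0 * Ix * Iy, size=window)   # ~ integral of 2 Ix Iy
    c = uniform_filter(Ix**2 + Iy**2, size=window)   # ~ integral of (Ix^2 + Iy^2)
    theta = 0.5 * np.arctan2(b, a)                   # Eq. (2), defined modulo pi
    g = (a**2 + b**2) / (c**2 + 1e-12)               # Eq. (3), in [0, 1]
    return g, theta
```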

Fig. 1. (a) Original image I(x). (b) Gradient magnitude ‖∇I(x)‖. (c) Uni-directionality g(x). Straight edges are strongly anisotropic along one direction whereas corner points are not. (d) Cornerness c(x). (e) Cornerness c(x) with different window settings: precise corner pixels can be identified. (f) Direction of anisotropism θ(x) (on a gray-scale from 0, black, to π, white). It gives the orientation of the edges, but cannot be used within the immediate neighbourhood of corners.


In order to apply the above technique to corner recognition, we introduce the following two assumptions about the characteristic features of corners:

• like all edge points, corners and junctions have a strong intensity gradient;
• unlike straight edges, they do not have a single dominant orientation.

These properties lead to an analytical expression of cornerness:

c(x) = \psi(g(x)) \, \|\nabla I(x)\|    (4)

with ψ(t) being a monotonic decreasing function from 1 to 0 on [0, 1]. In our implementation, we used:

\psi(t) = (1 - t)^m    (5)

with m = 1/2 typically. Although alternative functions can be adopted, we chose this expression because of its simplicity and effectiveness in implementation.

Fig. 1 shows an example of the process required for estimating cornerness. Fig. 1a is the original synthetic image. Whether or not they belong to a corner, all edge points in this image have a high gradient magnitude, as shown in Fig. 1b. The uni-directionality derived from Eq. (3), however, is high along edges but not at corner points, as shown in Fig. 1c. Cornerness, obtained by combining the gradient magnitude and uni-directionality, is shown in Fig. 1d. Fig. 1e is the same as Fig. 1d but with different window settings, and demonstrates that high values of cornerness are reached in a very small area only; it is thus possible to determine the actual location of the corner. The orientation θ(x), which is required at a later stage of the algorithm, is shown in Fig. 1f.

Extracting the actual corner points is achieved by analysing the histogram of the cornerness image. A small proportion ε of pixels with sufficiently high values can be regarded as part of corners (typically, ε = 1%). Each cluster of such points is labelled as one corner, and the centre pixel Cj that has the highest cornerness is regarded as the exact location of the corner. Experiments show that some simple edge points may also be classified as corners in this way, especially if ε is too high or if the image is too noisy. This, however, has little detrimental effect on the algorithm, since at a later stage these pixels can be easily identified and discarded.
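A minimal sketch of Eqs. (4) and (5) together with the extraction step just described; it reuses the unidirectionality() sketch above, and the connected-component clustering is our reading of the histogram search (the function names and the quantile threshold are ours):

```python
import numpy as np
from scipy.ndimage import label, maximum_position

def cornerness(image, m=0.5, window=7):
    """Sketch of Eqs. (4)-(5): c(x) = (1 - g(x))**m * ||grad I(x)||."""
    g, _ = unidirectionality(image, window)          # sketch from Section 3
    Iy, Ix = np.gradient(image.astype(float))
    return (1.0 - np.clip(g, 0.0, 1.0)) ** m * np.hypot(Ix, Iy)

def extract_corners(c, eps=0.01):
    """Keep the top fraction eps of cornerness values, then take the pixel
    of highest cornerness in each connected cluster as the corner Cj."""
    labels, n = label(c >= np.quantile(c, 1.0 - eps))
    return [maximum_position(c, labels, i) for i in range(1, n + 1)]
```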

4. Corner orientation

The process of evaluating a corner's orientation in our approach mainly involves finding the orientation of the surrounding edges which intersect at Cj. Extracting such information is therefore essentially a local process. We define a function s_c(j)(x) that signifies the likelihood of a pixel being part of the edges associated with a particular corner Cj. The value of s_c(j)(x) is computed at each point Mi in a small neighbourhood Γ around what has been labelled as a corner point Cj. The following assumptions about the configuration within corners are used:

• the pixels of the arms of a corner are edge points strongly orientated along one direction;
• this direction should be compatible with the direction of v = CjMi.

To satisfy the first assumption, we use the product g(x)‖∇I(x)‖ to detect the arms of the corner. The mere gradient would be inconsistently strong at noise spikes, and the mere uni-directionality would be misleadingly high on strongly orientated patterns such as gradient intensity ramps. The combination of the two therefore identifies the arms of a corner properly. The direction of uni-directionality θ(x) will be regarded as the direction of the edge points found.

The second assumption is necessary to deal with the problem of corner adjacency. Several edges can be found in the neighbourhood of a corner, and they may not all belong to the same structure; indeed, they can be associated with another corner nearby. If the direction of an edge pixel Mi does not point toward the corner point Cj, then its value of s_c(j)(x) should be low. To express this condition analytically, we have introduced the angle α, a measure of the difference between the direction of an edge at a pixel Mi and the direction of CjMi. α is defined by:

\alpha = \angle(u, v)    (6)

with u being the direction of uni-directionality at Mi, derived from θ in Eq. (2), and v = CjMi, as shown in Fig. 2.

Fig. 2. Definition of α. Cj is a corner pixel, Mi is a point located within its neighbourhood. α is the angular difference between the direction θ of anisotropism at Mi and the direction of CjMi.

Fig. 3. Adjacent corners: when studying corner C0, Mi is discarded, since at Mi cos α = 0.


Fig. 4. (a) Example image I(x) of a corner. An unrelated edge is located within its neighbourhood. (b) Uni-directionality g(x): the edges are strongly orientated along one direction whereas the corner point is not. (c) Cornerness c(x) = (1 − g(x))^m ‖∇I(x)‖. (d) s_c(j)(x): identification of the side points associated with the corner. The edge that is not associated with the corner is discarded.

It has been found that the cosine of α provides a good discriminant to establish which neighbouring corner an edge point belongs to. Fig. 3 illustrates this property. An edge point Mi is close to two different corner points (C0 and C1). When analysed within the neighbourhood of C0, Mi is discarded, because the direction of the edge it belongs to is not compatible with the direction of C0Mi. Based on the remarks made above, we subsequently obtain an analytical expression of s_c(j)(x):

s_{c(j)}(x) = g(x) \, \|\nabla I(x)\| \cos^n \alpha    (7)

with n being a parameter controlling how strictly directional compatibility should be enforced. The results of applying Eq. (7) to synthetic images (as in Fig. 9a and Fig. 11) showed that choosing a high value of n (greater than two) did not modify the output of the algorithm for noise-free images with large corners distant from each other. Nevertheless, it improved the results when dealing with noisy images of narrow corners close to each other, with several edges present in the neighbourhood of corner pixels. Therefore, in our application, we used the value n = 3.

To assist the understanding of Eq. (7), Fig. 4 gives a schematic summary of the fields computed at each step of the algorithm. Fig. 4a represents the original image of a corner. Uni-directionality g(x) is high along the edges and low at the corner point, as shown in Fig. 4b. Conversely, cornerness c(x) is maximal at the corner, as shown in Fig. 4c. Fig. 4d illustrates the value of s_c(j)(x) in a small window enclosing the corner point: only the pixels belonging to relevant edges are retained.

For each corner point Cj, the value of s_c(j)(x) is computed within a small neighbouring window Γ and a histogram H is constructed:

\forall \beta \in [0, 2\pi): \quad H(\beta) = \sum_{x \in \Gamma,\; \theta(x) = \beta} s_{c(j)}(x) \, F_{c(j)}(x)    (8)
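The following sketch (ours) accumulates Eq. (8) directly over the window Γ. The linear distance taper stands in for the weighting factor F_c(j)(x) described below, and resolving the mod-π ambiguity of θ(x) with the outward direction of CjMi is our assumption:

```python
import numpy as np

def arm_histogram(corner, g, theta, grad_mag, radius=10, n=3, nbins=256):
    """Sketch of Eqs. (6)-(8): accumulate s_cj = g * |grad I| * cos^n(alpha)
    into an orientation histogram H(beta) over [0, 2*pi)."""
    cy, cx = corner
    H = np.zeros(nbins)
    for y in range(max(0, cy - radius), min(g.shape[0], cy + radius + 1)):
        for x in range(max(0, cx - radius), min(g.shape[1], cx + radius + 1)):
            r = np.hypot(y - cy, x - cx)
            if r == 0 or r > radius:
                continue
            phi = np.arctan2(y - cy, x - cx)        # direction of the vector CjMi
            alpha = theta[y, x] - phi               # Eq. (6)
            cos_a = abs(np.cos(alpha))              # theta is only defined modulo pi
            s = g[y, x] * grad_mag[y, x] * cos_a ** n   # Eq. (7)
            F = 1.0 - r / radius                    # assumed affine weighting F_cj(x)
            # resolve the mod-pi ambiguity of theta with the outward direction phi
            d = theta[y, x] if np.cos(alpha) > 0 else theta[y, x] + np.pi
            H[int((d % (2 * np.pi)) * nbins / (2 * np.pi)) % nbins] += s * F  # Eq. (8)
    return H
```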

F_c(j)(x) is a factor which gives less weight to the pixels far away from the corner point Cj. It is a positive function which reaches a maximum at Cj and is nil outside the window Γ. Typically, it can be an affine monotonic decreasing function of the distance to the corner point.

One may notice that pixels located at the centre of the corner have not been taken into account. Indeed, they cannot provide reliable orientation information due to noise and quantisation errors. From Eq. (7), since the value of uni-directionality g(x) for these pixels is close to zero, the corresponding value of s_c(j)(x) is small. This automatically removes these pixels from the histogram. Fig. 5 illustrates this observation in the case of a junction. The value of s_c(j)(x) is minimal at the centre of the junction whereas the three edges strongly influence the histogram H. This property allows the use of a simple continuous function for the weighting factor F_c(j)(x); there is no need to explicitly remove pixels that are too close to the corner point [9].

The analysis of the histogram gives the dominant directions of the edges associated with the corner. The orientations of the arms of the corners are estimated by locating the local maxima of the histogram; each maximum is found in the neighbourhood of a peak corresponding to one arm of the corner. An algorithm has been designed to determine the number of peaks, i.e. the number of edges defining the corner: two for a simple corner, three or more for a junction. The histogram is smoothed and then thresholded with its mean value for the identification of local maxima. An example is shown for a noisy Y-junction in Fig. 6: Fig. 6a shows the original histogram H(β), whereas Fig. 6b illustrates the histogram after smoothing and thresholding. Three peaks can now clearly be separated, and the three corresponding maxima are recovered from the original histogram.

In terms of computational cost, the complexity of the above operation is proportional to the number of angular discretisation units and to the size of the one-dimensional Gaussian smoothing mask. It is therefore much lower than that of energy minimisation techniques, such as that of Parida et al. [9], in which the number of arms is determined by thresholding the rate of increase of energy E^(n+1)/E^(n), with E^(n) measuring the distance between the original image and a junction template with n arms.

Fig. 5. (a) Image of a junction. (b) Value of s_c(j)(x) in a neighbourhood Γ. No orientation information is extracted from the junction point. The arms are clearly identified.


Fig. 6. Extracting the peaks of the histogram. (a) Original histogram H(β); (b) histogram after smoothing and thresholding. Three peaks appear distinctly; three local maxima can then be found in the original histogram.

We chose to use 256 discrete angular values with a smoothing mask of 16 angular units, which gave optimal accuracy in most applications. When deriving the histogram H, θ(x) is regarded as the direction of the arms. This ensures that the histogram is not too sensitive to the positional error of the corner Cj, which might be a couple of pixels away from its ideal location due to noise.

It is worth noting that care must be taken when defining the size of Γ. Making it too small may not provide enough information for accurately defining the orientation of the corners. The charts in Fig. 7 measure the performance of the algorithm for different sizes of Γ: with a radius smaller than ten pixels the results are not optimal and the precision of the orientation measurements is poor, as a small neighbourhood does not contain sufficient information for accurate orientation estimates. Conversely, making Γ too big would include irrelevant information in the search process if the direction of the edges is rapidly changing. From our experience, optimal results were obtained with the radius of Γ set to ten pixels. This value is typical of, or slightly smaller than, that of most corner detectors [8,10]. Unlike other algorithms [9], the window size is not determined dynamically at each corner point; in this way, the measure of confidence, defined below, can be consistently compared between different junctions in an image. The computational cost is of the order of O(r²), where r is the radius of Γ. This cost is acceptable because the subset of corner points in an image is small. The execution time for the construction and analysis of the histogram, as a function of the size of Γ, is shown in Fig. 7. The implementation is written in C, runs on a Sun SPARCstation 5 (Mountain View, CA) and processes 512 × 512 floating-point images.

Finally, it is possible to extract a measure of confidence m_c for each corner point Cj by computing the area of the histogram:

m_c = \sum_{\beta \in [0, 2\pi)} H(\beta)    (9)

It measures how clearly the arms of the corner are defined. Points that have been wrongly identified as corners because of noise within Ω can now be recognised by their low values of m_c and subsequently discarded. The definition of m_c in Eq. (9) uses the value of the histogram for all values of β in order to minimise the influence of noise. However, other schemes for estimating the measure of confidence may also be implemented. We have evaluated two other variations of m_c, termed m′c and m″c respectively. The first, m′c, is computed as the maximum of the histogram, whereas m″c is the area of the histogram computed after smoothing and eliminating irrelevant noise spikes. When applied to the synthetic image shown in Fig. 11a with added Gaussian noise (SNR = 8, 14 and 20 dB respectively), similar results were obtained with all definitions of m_c, as shown in Fig. 8.
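A sketch of the peak analysis and of Eq. (9) under the stated settings (256 bins, a smoothing mask of about 16 angular units, thresholding at the mean); the Gaussian smoothing and the run labelling are our choices, not the paper's exact procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, label

def analyse_histogram(H, sigma=4.0):
    """Sketch: smooth H circularly, threshold at the mean, then recover one
    maximum of the original histogram per peak; m_c follows Eq. (9)."""
    nbins = H.size
    Hs = gaussian_filter1d(H, sigma, mode='wrap')   # ~16-bin circular smoothing
    runs, n_arms = label(Hs > Hs.mean())            # one run of bins per arm
    peaks = []                                      # (runs wrapping past bin 0 split)
    for i in range(1, n_arms + 1):
        bins = np.flatnonzero(runs == i)
        peaks.append(2 * np.pi * bins[np.argmax(H[bins])] / nbins)
    m_c = H.sum()                                   # Eq. (9): area of the histogram
    return peaks, m_c
```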

5. Results

The proposed algorithm was tested on synthetic and real-life images. The synthetic images were designed to evaluate the detector's ability to deal with different angles, corner adjacency, gray-level distribution, and noise levels [11]. Fig. 9a is the original synthetic image, and Fig. 9b shows its cornerness, demonstrating the extent to which corners can be identified even in areas of low contrast. Fig. 10a illustrates the result of using g(x)‖∇I(x)‖ as an edge detector, and Fig. 10b shows the orientations of the corners found. The synthetic image in Fig. 11a was designed to estimate the accuracy of the detector. The corners found in Fig. 11b were, on average, less than a quarter of a pixel away from their theoretical location, and the orientation error was, on average, 0.4 degree.


Fig. 7. Influence of the size of the neighbouring window Γ, as tested on the noisy synthetic image shown in Fig. 11c. (a) Each time one arm of a corner or junction is missed, or is found where it should not be, one error is counted. The best results are found for a radius of Γ equal to or greater than ten pixels. (b) For arms correctly identified, the average orientation error is computed. The accuracy of the measures increases with the size of Γ. For a radius equal to or greater than ten pixels, the error is on average less than 1.3 degrees. On both charts, the execution time of the algorithm is shown, as measured on a SunSPARC 5 (ε = 1%, which means that 230 corners are analysed in this particular image).

The detector's ability to cope with noise is shown in Fig. 11c, where Gaussian noise was added to the image before processing (SNR = 20 dB). The accuracy of the results remains satisfactory: the location of the corners is within a 0.4 pixel error range on average, and their orientation is within a 1.3 degree error range.

Fig. 8. Results of testing three different definitions of the measure of confidence m_c on noisy synthetic images (signal-to-noise ratio SNR = 8, 14 and 20 dB). Corners with a value of m_c lower than the mean value were discarded. No corner was ever wrongly removed this way, but the algorithm failed to eliminate some artifacts due to noise; each of these is counted as one error. The three different definitions of m_c give very similar results.


Fig. 9. (a) Original synthetic image. The gradient ramp is linear at the top of the image and quadratic at the bottom. (b) Cornerness c(x).

Fig. 10. (a) Edge detector g(x)‖∇I(x)‖. (b) Orientations of the corners found. When the angle is narrow, only one direction is given. Note that junctions are also identified.

To demonstrate the results on real-life images, the proposed algorithm was applied to a picture of the Royal Festival Hall (London), as shown in Fig. 12a. The corners found are shown in Figs. 12b and 13a. The potential of the method for edge grouping is illustrated in Fig. 13b, where arms of corners with compatible directions are linked together to form the contours of the objects. Fig. 14a is a photograph of a more complex scene, which depicts a large number of corners and junctions with different orientations, contrasts, and scales. We used a value of ε = 5% to detect a large set of corners. The result of applying the algorithm is shown in Fig. 14b, in which most corners and junctions are found, thus making higher-level processing (like motion tracking) possible.

Fig. 11. (a) Original synthetic image. (b) Orientation of the corners detected. The positions found are on average less than a quarter of a pixel away from the theoretical location, and the orientation error is on average less than 0.4 degree. (c) Orientation of the corners found on the same image with added Gaussian noise (SNR = 20 dB). The accuracy of the measures remains good (location error less than 0.4 pixel; orientation error less than 1.3 degrees).

6. Discussion

In this paper, corners and junctions are detected as edge points that are anisotropic along several directions. Each point in the neighbourhood of a corner is given a value of s_c(j)(x) according to its position and measure of uni-directionality; this value measures the likelihood of the point belonging to a structure defining the corner. The values of s_c(j)(x) are summed over all neighbouring pixels with a given direction and stored in a histogram. Extracting the peaks of the histogram gives the orientations of the edges intersecting at a corner or junction.


Fig. 12. (a) Original image of a part of the Royal Festival Hall, London. (b) Cornerness c(x) (high values in dark).

Fig. 13. (a) Orientations of the detected corners. (b) Joining corners with compatible directions. According to the principle of least commitment, more corners and orientations were taken into account. The knowledge of the orientations of the corners makes grouping edges easy.

In the actual implementation of the algorithm, some parameters require tuning according to the type of image processed and the purpose of the corner detection. In most applications, such as contour grouping, it is preferable to set the parameters to be highly sensitive so that, according to the principle of least commitment [12], even corners with a low measure of confidence are retained. Although this produces false positives, higher-level processing should be able to identify these mistakes. Since there are no universal settings for the parameters, prior knowledge of the type of images to be analysed can be helpful. Estimating the proportion of corner pixels helps set an optimal value of ε. Quantifying the noise in an image helps set the size of the mask that smooths the histogram H(β) prior to peak identification (the noisier the image, the larger the mask).

Fig. 14. (a) Original image. (b) Orientations of the detected corners and junctions. Higher-level applications like motion tracking can be implemented.


Although the computational cost of the algorithm is slightly higher than that of other algorithms (e.g. the dissimilarity corner detector), the measures of uni-directionality g(x) and orientation θ(x) can be useful for other stages of image processing, such as edge detection and grouping. It has been found that the accuracy of the orientation measures outperforms simpler schemes, like Zernike moments [13], as assessed by Rosin [10]. Furthermore, unlike most other methods, the technique also handles junctions effectively.

7. Conclusion

In this paper, we have presented a novel method for the detection of corners. Unlike most corner detectors, it also estimates the orientation of the structures associated with each corner, and the same method works well with junctions. The accuracy of the measurement results is promising, even with noisy images. Determining the orientation of corners and junctions as well as their position facilitates the higher-level processes of image understanding: this information provides important clues for the effective interpretation and grouping of low-level image features.

Acknowledgements

This research is supported by Imatron, Inc., California.

References

[1] G.Z. Yang, P. Burger, D.N. Firmin, S.R. Underwood, Structure adaptive anisotropic image filtering, Image and Vision Computing 14 (1996) 135–145.
[2] L. Kitchen, A. Rosenfeld, Gray-level corner detection, Pattern Recognition Letters, December 1982, pp. 95–102.
[3] L. Dreschler, H.H. Nagel, On the selection of critical points and local curvature extrema of region boundaries for interframe matching, in: International Conference on Pattern Recognition, 1982, pp. 542–544.
[4] O.A. Zuniga, R.M. Haralick, Corner detection using the facet model, in: Proceedings of the Conference on Pattern Recognition and Image Processing, 1983, pp. 30–37.
[5] E.R. Davies, Machine Vision: Theory, Algorithms, Practicalities, Academic Press, New York, 1997.
[6] E.R. Davies, Application of the generalised Hough transform to corner detection, IEE Proceedings 135 (1988) 49–54.
[7] J. Cooper, S. Venkatesh, L. Kitchen, The dissimilarity corner detector, in: IEEE Proceedings on Image Processing, 1991, pp. 1377–1382.
[8] K. Rohr, Recognizing corners by fitting parametric models, International Journal of Computer Vision 9 (3) (1992) 213–230.
[9] L. Parida, D. Geiger, B. Hummel, Kona: a multi-junction detector using minimum description length principle, in: M. Pellilo, E. Hancock (Eds.), Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR'97), Vol. 1223 of LNCS, 1997, pp. 51–65.
[10] P.L. Rosin, Measuring corner properties, in: British Machine Vision Conference, 1997, pp. 100–109.
[11] P.K. Rajan, J.M. Davidson, Evaluation of corner detection algorithms, in: IEEE Proceedings on Image Processing, 1989, pp. 29–33.
[12] D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, W.H. Freeman, San Francisco, CA, 1982.
[13] S. Ghosal, R. Mehrotra, Zernike moment-based feature detectors, in: International Conference on Image Processing, 1994, pp. 934–938.