Hough Array Processing via Fast Multi-Scale Clustering

Real-Time Imaging 6, 129–141 (2000) doi:10.1006/rtim.1999.0181, available online at http://www.idealibrary.com

The major emphasis in fast Hough transform algorithms has been placed on the transformation involved. Little attention has been paid to fast processing of a Hough array without requiring one to specify a threshold value to determine candidate parameters in the Hough array. This paper gives a comprehensive discussion of Hough array processing as a part of the Hough transform, and presents a time-efficient clustering algorithm, called Fast Multi-Scale Clustering, to obtain the number of, and hence to select the locations of, candidate parameters in a Hough array in a threshold-independent manner. It is shown that the complexity of this algorithm is O(nd^r), where n is the number of non-zero cells in the Hough array, d is the number of cells used in the discretization of the corresponding parameter space, and r is the dimensionality of the Hough array. Two examples of line and circle detection are provided to illustrate the steps involved in deploying this Hough array processing approach. © 2000 Academic Press

Babak Nadjar Araabi and Nasser Kehtarnavaz¹, Department of Electrical Engineering, Texas A&M University, College Station, TX 77843, USA

Introduction

Hough transform (HT) as a method for curve detection in binary images was introduced in 1962 [1]. It was later extended to gray-level images [2,3]. In this transform, an appropriate transformation from an image plane to a parameter space is performed by using an array called the Hough or accumulator array. A survey of different Hough transform techniques can be found in [4]. Although each point in a Hough array theoretically corresponds to an object (curve or shape) in the image

Corresponding author: Zachry Engineering Building, Department of Electrical Engineering, Texas A&M University, College Station, TX 77843-3128, USA; Tel: (409) 845 8371; Fax: (409) 845 6259; E-mail: [email protected]


plane, in practice the process of extracting useful information from a Hough array is not so straightforward. This is due to the complexity of the transformation that may be involved and also because of the sensitivity of the Hough array to image noise [5]. Hence, attempts have been made to address these problems by introducing various methods, including Probabilistic HT [6], Randomized HT [7,8] and Hierarchical HT [9]. Meanwhile, a number of Hough-like transformations from point(s) to curve have been developed to improve the performance in specific applications, e.g. circle detection [10], ellipse detection [11], and mixed-pixel classification [12], amongst others. Algorithm complexity is an important issue, especially if one desires to use HT in real-time applications. A number of approaches have been proposed to speed up


the computation of HT, for example Fast Adaptive HT for line detection [13], Fast Linear HT [14], and Fast HT for segment detection [15]. There have also been attempts at hardware implementation, e.g. [16]. A survey on hardware architectures for HT can be found in [17].

In almost all of the above approaches, the major emphasis has been placed on optimizing the complexity and memory requirements of the transformation involved in HT. However, little attention has been paid to Hough array processing, i.e. the process of extracting the number and locations of the most prominent parameters in the Hough array. Normally, a user-defined threshold is used for this purpose. This paper presents a time-efficient approach to obtain the number, and hence locations, of the most prominent parameters in a Hough array without requiring the user to specify a threshold value. This is done by introducing a fast clustering algorithm named Fast Multi-Scale Clustering (Fast MSC). It is shown that the incorporation of this algorithm into HT provides a time-efficient mechanism for identifying the most prominent parameters in a threshold-independent manner.

The next section gives an overview of multi-scale clustering and a description of its extended version to make it time efficient. In the following section, a Hough array processing approach based on fast multi-scale clustering is discussed. In addition, the complexity and memory usage of this processing approach are presented. Two examples are then provided and the conclusions are stated in the final section.

Fast Multi-Scale Clustering (MSC)

Overview of Multi-Scale Clustering

Multi-Scale Clustering (MSC) [18,19] is a newly developed clustering algorithm based on scale-space theory. MSC determines an appropriate number of clusters in a threshold-independent manner. The approach adopted in MSC is to perform a series of lowpass filterings of a set of r-dimensional data points {x_1, x_2, …, x_n}, x_i ∈ R^r, i = 1, 2, …, n, via convolving them with a Gaussian kernel

φ_σ(x) = (1/(√(2π) σ)^r) e^{−‖x‖²/(2σ²)}    (1)

to produce a potential field function

ψ_σ(x) = f(x) * φ_σ(x) = −Σ_{i=1}^{n} δ(x − x_i) * φ_σ(x) = −Σ_{i=1}^{n} φ_σ(x − x_i)    (2)

where the function f(·) represents the data points as a sum of Dirac delta functions,

f(x) = −Σ_{i=1}^{n} δ(x − x_i)    (3)

and σ denotes a specific scale size. As the scale size σ of the Gaussian kernel φ_σ(x) increases, the data is smoothed from fine to coarse levels. This is analogous to Parzen-window non-parametric estimation of a probability density function [20], where the scale size acts as the width of the Parzen window. The minima of the above potential field function are natural candidates for cluster prototypes or representatives. A cluster is then taken to be the area around a prototype defined by the maxima along all outgoing directions from the prototype. Here the interesting phenomenon is the change in the number of minima as the scale size changes. It has been proved that the Gaussian kernel preserves the causality property [21], meaning that the number of minima N(ψ_σ) decreases as σ increases, i.e.

σ_1 ≤ σ_2  ⇒  N(ψ_{σ_1}) ≥ N(ψ_{σ_2})    (4)
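The construction in Eqns (1)–(4) can be sketched numerically. The following Python fragment (an illustrative sketch, not the paper's implementation; the seven sample points and the grid are made up) builds a discretized one-dimensional potential field for several scale sizes and counts its minima:

```python
import numpy as np

def potential_field(points, grid, sigma):
    # psi_sigma(x) = -sum_i phi_sigma(x - x_i), Eqn (2); the constant
    # normalization factor is irrelevant to the locations of the minima.
    return -np.exp(-(grid[:, None] - points[None, :]) ** 2
                   / (2 * sigma ** 2)).sum(axis=1)

def count_minima(psi):
    # strict interior local minima of the discretized field
    return int(np.sum((psi[1:-1] < psi[:-2]) & (psi[1:-1] < psi[2:])))

points = np.array([1.0, 1.5, 2.0, 5.0, 5.5, 8.0, 8.4])  # seven 1-D data points
grid = np.linspace(0.0, 10.0, 1001)

# causality, Eqn (4): the minima count never increases with sigma
counts = [count_minima(potential_field(points, grid, s))
          for s in (0.1, 0.5, 1.0, 3.0)]
assert all(a >= b for a, b in zip(counts, counts[1:]))
```

At the smallest scale every point is its own minimum; at the largest scale all minima have merged into one, illustrating the fine-to-coarse smoothing described above.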

For visualization purposes, Figure 1 illustrates a typical potential field function ψ_σ(x) and the corresponding plot of N(ψ_σ) versus σ for seven one-dimensional data points. The duration of scales for which the number of minima remains constant is referred to as lifetime. The longer the lifetime, the more persistent that grouping of points is. The number of clusters with the longest lifetime is then chosen for further processing.

Fast version extension

Although MSC removes the dependency on any user-defined threshold for the selection of the number of clusters or prototypes, it is computationally intensive. To determine the number of prototypes, MSC must find every local minimum at every scale size. This is done by using Newton's method, which requires the computation of the gradient and Hessian matrix at each iteration. There exists no upper bound on the number of required iterations. In order to decrease the computation time for real-time applications, a fast version of MSC has been

developed here. This new algorithm, named Fast MSC, reduces the computational complexity by (a) discretizing the potential field function, (b) confining the search domain, and (c) performing a local search with no gradient computation.

Discretization of potential field function

Although theoretically σ can vary from zero to infinity, in practice σ ∈ [σ_min, σ_max]. For very small σ values, there is no filtering effect from a practical standpoint, since the effective domain of the filter is so small that it contains only one data point. For the Gaussian filter it can be shown that only the points within a 2σ neighborhood of the mean can see the filter, due to the fact that [22]

Pr(‖X − μ_x‖ ≥ 2σ) ≤ 0.04,  with  φ_σ(x) = (1/(√(2π) σ)^r) e^{−‖x − μ_x‖²/(2σ²)}    (5)

Hence, given the smallest distance between any two data points to be T, for all practical purposes there is no filtering effect for 2σ < T. Therefore

σ_min = T/2    (6)

The above analysis offers T = 2σ_min as an adequate step size for the discretization of ψ_σ(x). Furthermore, consider the Fourier transform of φ_σ(x) in Eqn (5)

FT{φ_σ(x)} = e^{−jw·μ_x} e^{−σ²‖w‖²/2}    (7)

It can be shown that at most 0.5% of the φ_σ(x) energy is lost as a result of the truncation of w in Eqn (7) for w_max = 2π/T (see Appendix). On the other hand, for very large σ values, ψ_σ(x) becomes so smooth that all minima merge together and become one. This gives a σ_max as defined by the boundaries of the data space. Hence, a reasonable value for σ_max is one fourth of the size of the data space.

Figure 1. (a) A typical potential field function ψ_σ(x). (b) Corresponding plot of the number of clusters N(ψ_σ) vs. scale size σ.

Confining search domain

The processing can be sped up by confining the search to a subset of the discretized domain of ψ_σ(x), considering that almost always there is a data point near each minimum. This subset includes the points having a ψ_σ(x) magnitude equal to or higher than one. This means that, in addition to all data points, the points whose total contribution is made by more than one data point are included. It is possible to make this contribution more restricted for further speed-up, depending on the redundancy and distribution of the data points.

Local search with no gradient computation

In Fast MSC, there is no need to perform the search within a neighborhood as determined by the scale size σ. Merely a local search among the 3^r − 1 adjacent neighbors is adequate, where r denotes the dimensionality of the data space. As long as one is concerned with the search in the confined domain, there is no need for the computation of a local gradient, due to the fact that the search is confined to adjacent neighbors. Hence, a series of at most 3^r − 1 comparisons indicates whether the center point is a minimum.
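This confined local search can be sketched as follows (illustrative Python with hypothetical array names; here r = 2, so each candidate cell is compared against its 3² − 1 = 8 neighbors):

```python
import numpy as np

def find_prototypes(psi, confine=1.0):
    """Return (i, j) cells that are local minima of the discretized field psi.

    Only cells whose field magnitude reaches `confine` are examined, and each
    candidate needs at most 3**2 - 1 = 8 comparisons; no gradient is computed.
    """
    d = psi.shape[0]
    prototypes = []
    for i in range(1, d - 1):
        for j in range(1, d - 1):
            if abs(psi[i, j]) < confine:      # confined search domain
                continue
            window = psi[i - 1:i + 2, j - 1:j + 2]
            # minimum test: no adjacent neighbor may lie below the center
            if psi[i, j] <= window.min():
                prototypes.append((i, j))
    return prototypes

# toy field: two negative wells on a flat background
psi = np.zeros((8, 8))
psi[2, 2] = -3.0
psi[5, 6] = -2.0
assert find_prototypes(psi) == [(2, 2), (5, 6)]
```

The `confine` check plays the role described above: flat background cells are never examined, so the neighbor comparisons run only near actual data.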


Fast MSC Algorithm

Figure 2 shows the flowchart of the Fast MSC algorithm. Tables 1, 2 and 3 provide the pseudo-codes of the three main components, namely: computation of φ_σ(x) for a specific σ, computation of ψ_σ(x) via scale-space filtering, and finding prototypes via a local search. The scale size σ is expressed per unit of T, the discretization step size. The effective length of the filter, l, in Table 1 is found based on the truncation stated in Eqn (5). To calculate φ_σ(x) for x ∈ {0, ±1, ±2, …, ±l}, first e^{−1/(2σ²)} is approximated using a Taylor series expansion. Thereafter, e^{−i²/(2σ²)} for i = 0, 1, 2, …, l is calculated using the following recursions

a_i = a_{i−1} b_{i−1},    b_i = b_0² b_{i−1}    (8)

where a_0 = 1 and b_0 = e^{−1/(2σ²)}. It is easy to show that a_i = e^{−i²/(2σ²)}. The recursion in Eqn (8) is implemented

Figure 2. Flowchart of Fast MSC algorithm.

Table 1. Computation of φ_σ(x) for a specific σ

σ: scale factor of the Gaussian filter (input)
d: dimension of the discretized data space (input)
b: order of the Taylor series expansion (input)
l: unilateral effective length of the (2l+1) × (2l+1) Gaussian filter
Gaussian: (2l+1) × (2l+1), at most d × d, float array
Exponent: d × 2 float auxiliary array

1- l := min(int(2σ) + 1, int(0.5 × d) − 1)
2- arg := −1/(2σ²)
3- aux1 := 1
4- aux2 := 1
5- for i from 1 to b
6-   aux2 := aux2 × arg/i
7-   aux1 := aux1 + aux2
8- end for   {aux1 = Σ_{j=0}^{b} (arg)^j/j! ≈ e^{arg} = e^{−1/(2σ²)}}

9-  exponent[l+1, 1] := 1
10- exponent[l+1, 2] := aux1
11- aux1 := aux1 × aux1
12- for i from 1 to l
13-   exponent[l+i+1, 1] := exponent[l+i, 1] × exponent[l+i, 2]
14-   exponent[l−i+1, 1] := exponent[l+i+1, 1]
15-   exponent[l+i+1, 2] := exponent[l+i, 2] × aux1
16- end for   {exponent[l±i+1, 1] = e^{−i²/(2σ²)} for i = 0, 1, …, l}

17- for i from 1 to (2l+1)
18-   for j from 1 to (2l+1)
19-     Gaussian[i, j] := exponent[i, 1] × exponent[j, 1]
20-   end for
21- end for   {Gaussian[i, j] = e^{−((i−l−1)² + (j−l−1)²)/(2σ²)} for i, j = 1, 2, …, 2l+1}

Output: the φ_σ(x) function, a (2l+1) × (2l+1) array, is placed in Gaussian.
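The multiplication-only recursion of Eqn (8), used in lines 9–16 of Table 1, can be checked in a few lines of Python (an illustrative sketch, not the paper's DSP code; here the seed b_0 is computed directly rather than via the Taylor expansion):

```python
import math

def gaussian_samples(sigma, l):
    # a_i = exp(-i**2 / (2*sigma**2)) via Eqn (8): a_i = a_{i-1} * b_{i-1},
    # b_i = b_0**2 * b_{i-1}, with a_0 = 1 and b_0 = exp(-1/(2*sigma**2)).
    b0 = math.exp(-1.0 / (2.0 * sigma * sigma))
    a, b = 1.0, b0
    samples = [a]
    for _ in range(l):
        a, b = a * b, b0 * b0 * b   # one multiply each per output sample
        samples.append(a)
    return samples

sigma, l = 2.0, 6
samples = gaussian_samples(sigma, l)
for i, a in enumerate(samples):
    assert abs(a - math.exp(-i * i / (2 * sigma * sigma))) < 1e-12
```

Since b_i = b_0^{2i+1}, the product telescopes to a_i = b_0^{i²}, which is exactly the sampled Gaussian; no exponential evaluations are needed inside the loop.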

Table 2. Computation of ψ_σ(x) via scale-space filtering

Gaussian: (2l+1) × (2l+1), at most d × d, float array (input from Table 1)
Data_Points: d × d float array, data points (input)
Pxay: d × d float array, ψ_σ(x) function, initially set to zero

1- for ip from 1 to d
2-   for jp from 1 to d
3-     if Data_Points[ip, jp] > 0 then
4-       up_i := min(l+1, −ip+d)
5-       low_i := max(1−l, 1−ip)
6-       up_j := min(l+1, −jp+d)
7-       low_j := max(1−l, 1−jp)
8-       for ig from low_i to up_i
9-         for jg from low_j to up_j
10-          Pxay[ip+ig, jp+jg] := Pxay[ip+ig, jp+jg] − Data_Points[ip, jp] × Gaussian[l+ig, l+jg]
11-        end for
12-      end for
13-    end if
14-  end for
15- end for

Output: the ψ_σ(x) function, a d × d array, is placed in Pxay.

in lines 9 through 16 of Table 1. The remaining part of Table 1 shows the calculation of φ_σ(x) based on the outcome of this recursion. Notice that a multiplier of 1/(2πσ²) in φ_σ(x) is ignored, since it has no effect on the prototype calculation. Table 2 includes the calculation of ψ_σ(x) by passing the Gaussian filter φ_σ(x) over the data points (placed in

Data_Points). For Hough array processing, the data points constitute the nonzero elements of a Hough array. Table 3 shows the search algorithm over the discretized domain of ψ_σ(x). The minima of ψ_σ(x) denote the prototypes. Among many possible search techniques, a local search without the need to compute the gradient is utilized here.

Table 3. Finding prototypes via local search

Pxay: d × d float array, ψ_σ(x) function (input from Table 2)
confine: confines the domain of search in ψ_σ(x); set to 1 if not specified (input)
Prototypes: m × 2 integer array, for prototype vectors
count: number of prototypes

1- count := 0
2- for i from 1 to d
3-   for j from 1 to d
4-     if |Pxay[i, j]| ≥ confine then
5-       for ii from −1 to 1
6-         for ji from −1 to 1
7-           if (ii, ji) ≠ (0, 0) & Pxay[i, j] > Pxay[i+ii, j+ji] then
8-             goto :next_point
9-           end if
10-        end for
11-      end for
12-      count := count + 1
13-      Prototypes[count, 1] := i
14-      Prototypes[count, 2] := j
15-    end if
16-    :next_point
17-  end for
18- end for

Output: the prototypes are placed in Prototypes.


Table 4. Hough Transform via Fast MSC

d0: dimension of the discretized original image (input)
m: number of edge points in the original image (input)
d: dimension of the Hough array, the discretized parameter space (input)
edge_point: m × 2 integer array, input edge points in the d0 × d0 original image (input)
cos_sin: d × 2 float array, sine and cosine lookup table (input)
counter: d × d integer array, initially set to zero
variance: d × d float array, initially set to zero
mean: d × d float array, initially set to zero
Hough_array: d × d float array, main output of the HT algorithm

1- aux1 := 1/(√2 × d0)
2- aux2 := aux1 × (d0 + 1) × 0.5
3- for i from 1 to m
4-   edge_point[i,1] := edge_point[i,1] × aux1 − aux2
5-   edge_point[i,2] := edge_point[i,2] × aux1 − aux2
6-   for j from 1 to d
7-     rou := edge_point[i,1] × cos_sin[j,1] + edge_point[i,2] × cos_sin[j,2] + 0.5
8-     irou := int(rou × d)
9-     counter[irou, j] := counter[irou, j] + 1
10-    if counter[irou, j] ≥ 2 then
11-      variance[irou, j] := variance[irou, j] × (1 − 1/(counter[irou, j] − 1)) + (rou − mean[irou, j])²/counter[irou, j]
12-    end if
13-    mean[irou, j] := mean[irou, j] + (rou − mean[irou, j])/counter[irou, j]
14-  end for
15- end for

16- aux1 := 0
17- aux2 := 0
18- Max := 0
19- for i from 1 to d
20-   for j from 1 to d
21-     if counter[i, j] > 0 then
22-       if counter[i, j] > Max then
23-         Max := counter[i, j]
24-       end if
25-       aux2 := aux2 + 1
26-       aux1 := aux1 + counter[i, j]
27-     end if
28-   end for
29- end for
30- Mean := aux1/aux2
31- confine := 0.5 × (Max + Mean)

32- aux1 := 0
33- aux2 := 0
34- for i from 1 to d
35-   for j from 1 to d
36-     if counter[i, j] > confine then
37-       if aux1 > variance[i, j] then
38-         aux1 := variance[i, j]
39-       else if aux2 < variance[i, j] then
40-         aux2 := variance[i, j]
41-       end if
42-     end if
43-   end for
44- end for


Table 4. Continued

45- aux2 := 1/(aux2 − aux1)
46- for i from 1 to d
47-   for j from 1 to d
48-     if counter[i, j] > confine then
49-       variance[i, j] := (variance[i, j] − aux1) × aux2
50-       Hough_array[i, j] := counter[i, j] × (1 − variance[i, j])
51-     else
52-       Hough_array[i, j] := 0
53-     end if
54-   end for
55- end for

Output: the Hough array is placed in Hough_array.

The pseudo-codes appearing in the above tables provide an implementation of the algorithm using only additions (+) and multiplications (×) of float numbers. No library function is utilized. As a result, these codes can easily be implemented on modern DSP processors.

Hough array processing

Hough array processing constitutes the processing needed to obtain the number and locations of candidate parameters in a Hough array. In this section, first we discuss how to set up a Hough array, and then proceed with the processing of this array using Fast MSC. Table 4 shows the pseudo-code of the Hough mapping to set up a Hough array. This array is then supplied to the Fast MSC algorithm for Hough array processing. The given pseudo-code works for any dimension r; r is set to 2 in the pseudo-code appearing

Table 5. Complexity of Fast MSC algorithm

Fast MSC algorithm                 Complexity for one σ    Worst-case complexity (large σ values)
Computation of φ_σ(x) (Table 1)    O(l^r)                  O(d^r)
Computation of ψ_σ(x) (Table 2)    O(n l^r)                O(n d^r)
Finding prototypes (Table 3)       O(n 3^r)                O(n 3^r)
Overall                            O(n l^r)                O(n d^r)

n = number of non-zero elements of the Hough array, r = dimension of the parameter space, d = number of discretization cells in the parameter space, l = effective length of the Gaussian filter (2l < d).

in Tables 1 through 4, noting that the version for r > 2 is similar. In a Hough transformation, each point in an image plane gets mapped into a curve in a corresponding parameter space. A Hough array is formed by discretizing the parameter space. To this end, d discretization levels are assigned to each dimension. Of course, it is possible to use different numbers of discretization levels for different dimensions. Such a discretization process divides the parameter space into cells or bins, forming a Hough array. This array is then used to count the number of intersections generated by the curves in the parameter space. The cells containing high numbers of intersections are selected and mapped back to the image plane. As stated earlier, the selection process requires one to specify a threshold value indicating a high number of intersections. By using Fast MSC it is no longer required to specify such a threshold. The number of intersections for a cell can be considered to be the number of times all curves pass through it. However, this leads to poor outcomes when using coarse discretization or small values of d. As shown in Table 4, we have used the variance of curve distances from the center of a cell to obtain the number of intersections Hough_array[i, j] passing through it. To achieve fast processing, the variance is computed by using the following easily proven recursion

σ_k² = (1 − 1/(k−1)) σ_{k−1}² + (1/k)(x_k − μ_{k−1})²,    σ_1² = 0    (9)

where μ_k = μ_{k−1} + (1/k)(x_k − μ_{k−1}), μ_0 = 0. The number of intersections is then found as follows

Hough_array[i, j] := counter[i, j] × (1 − variance[i, j])    (10)

where variance is normalized between 0 and 1. Relation (10) gives a more precise and meaningful way


Figure 3. (a),(b) Image plane, 60 × 60. (c),(d) Parameter space consisting of sinusoidal curves generated by the point-to-curve mapping. (e),(f) Hough array, 64 × 64.

to compute the number of intersections for coarse discretization situations. Lines 1 through 15 and 32 through 55 in Table 4 correspond to this variance computation. In addition, as shown in lines 16 through 31, the mean and max of the non-zero cells can be used to define confine during the search process. These are two suitable and easy-to-calculate statistical indicators of the distribution of parameter vectors in a Hough array.
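The one-pass recursion of Eqn (9) can be verified directly (illustrative Python; numpy is used here only as a reference for the batch result, and the sample values are made up):

```python
import numpy as np

def online_mean_var(samples):
    # mu_k  = mu_{k-1} + (x_k - mu_{k-1}) / k
    # var_k = (1 - 1/(k-1)) * var_{k-1} + (x_k - mu_{k-1})**2 / k,  var_1 = 0
    # i.e. Eqn (9); each new distance updates the cell statistics in O(1).
    mean, var = 0.0, 0.0
    for k, x in enumerate(samples, start=1):
        if k >= 2:
            var = var * (1.0 - 1.0 / (k - 1)) + (x - mean) ** 2 / k
        mean += (x - mean) / k
    return mean, var

xs = [0.31, 0.28, 0.35, 0.30, 0.29]
mean, var = online_mean_var(xs)
assert abs(mean - np.mean(xs)) < 1e-12
assert abs(var - np.var(xs, ddof=1)) < 1e-12   # Eqn (9) yields the unbiased variance
```

Because each curve distance is folded in as it arrives, no per-cell list of distances ever needs to be stored, which is what makes the variance weighting in Eqn (10) affordable inside the voting loop.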

Using mean and max, confine can be assigned to be 0.5 × (Max + Mean). In the absence of any information regarding the selection of confine, confine := 1 can be assumed, which effectively indicates the presence of at least one parameter vector. Basically, the outcome of any Hough transformation can be used as the input to the Fast MSC algorithm.

This is simply done by

Data_Points[i, j] := Hough_array[i, j]    (11)

Therefore, the entire HT procedure consists of: (a) setting up a Hough array using a Hough transformation, and (b) finding candidate cells in the Hough array using fast multi-scale clustering.

Figure 4. Hough array processing. (a) N(ψ_σ) vs. σ plot, σ_min = T/2 = 0.5, σ_max = dT/4 = 16. (b) Typical ψ_σ(x), 64 × 64 array. (c) Detected lines in the image plane, 60 × 60 array.

Figure 5. Effect of discretization level. (a) Typical ψ_σ(x), 32 × 32 array. (b) Detected lines in the image plane, 60 × 60 array.


Figure 6. Circle detection example. (a) Image (256 × 256) with salt & pepper and speckle noise. (b) Image after edge detection using a Sobel mask.

Complexity and memory usage

The complexities of the different parts of the proposed algorithm are given in Table 5. Here, the same instruction cycle time is considered for multiplications, additions, subtractions, and comparisons. All entries in this table are calculated for one σ value. MSC needs to be applied for a few values of σ, typically 10 to 20 values. The worst-case overall numerical complexity for r dimensions is O(nd^r), where n is the number of non-zero elements of the Hough array and d is the number of cells used in the discretization of the parameter space. Obviously, n and d should be kept as small as possible to speed up the processing of the Hough array. In other words, the Hough array should be set up such that a suitable input for Fast MSC is provided. Even for large values of d and n, Fast MSC is quite efficient for small values of r, such as two or three, for example for recognizing polygonal and circular objects. As far as memory usage is concerned, Fast MSC needs three d × d arrays and one 2 × d array for r = 2. For an arbitrary r, three d^r arrays as well as one r × d array are needed.

Line and circle detection examples

This section provides two examples of HT based on Fast MSC processing of the Hough array. A hexagonal object in a 60 × 60 image plane was considered for line detection (2D Hough array), and a 256 × 256 image containing five circles for circle detection (3D Hough array).

Line detection

To illustrate the performance in the presence of noise, three types of noise with relatively high levels were considered: (a) salt & pepper noise with a bit error rate of 0.05 over the edge points, (b) salt & pepper noise with a bit error rate of 0.001 over the entire image, and (c) random shifting to the left or right with a probability of 0.2. Figure 3(a,b) shows the original and a noisy version of the image plane, respectively. The noisy image shown contains all three of the above noises. Figure 3(c,d) illustrates the corresponding curves in the parameter space. Each point in the image plane gets mapped into a sinusoidal curve in the parameter space. Figure 3(e,f) shows the Hough arrays. The gray levels in Figure 3(e,f) indicate the numerical content of the cells in the Hough array. As can be seen, while the parameter space is dense, the Hough array contains few nonzero cells, leading to an efficient processing of the Hough array. Figure 4(a) illustrates the plot of N(ψ_σ) vs. σ, and Figure 4(b) a typical ψ_σ(x). The most persistent grouping of points corresponds to six, as expected, without the need to specify a threshold value. The gray levels indicate the strength of the potential field function. The detected lines are shown in Figure 4(c).
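The point-to-sinusoid voting used in this example can be sketched as follows (a minimal ρ–θ accumulation loop in Python; the coordinate normalization only loosely mirrors lines 1–8 of Table 4, and the image size, bin counts, and test line are illustrative assumptions):

```python
import numpy as np

def line_hough(edge_points, d0=60, d=64):
    # scale coordinates so that rho = x*cos(theta) + y*sin(theta) + 0.5
    # lies in (0, 1), which maps cleanly onto d rho-bins
    scale = 1.0 / (np.sqrt(2.0) * d0)
    offset = scale * (d0 + 1) * 0.5
    thetas = np.pi * np.arange(d) / d
    counter = np.zeros((d, d), dtype=int)
    for (px, py) in edge_points:
        x, y = px * scale - offset, py * scale - offset
        rho = x * np.cos(thetas) + y * np.sin(thetas) + 0.5  # one sinusoid per point
        irho = (rho * d).astype(int)
        counter[irho, np.arange(d)] += 1   # one vote per theta-bin
    return counter

# edge points from a horizontal line y = 20 all vote into one (rho, theta) cell
pts = [(x, 20) for x in range(10, 50)]
counter = line_hough(pts)
assert counter.max() == len(pts)
```

All forty collinear points intersect in a single accumulator cell, while every other cell collects only a fraction of the votes, which is exactly the structure Fast MSC is then asked to detect.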


Figure 8. N(ψ_σ) vs. σ plot: threshold-independent Hough array processing indicating 5 circles.

The detected lines in the presence of the above noise sources were obtained to be close to the ideal lines. The difference between the detected lines and the ideal ones is primarily due to the discretization level of the parameter space. This is illustrated in Figure 5, where the discretization level is lowered from 64 × 64 to 32 × 32.

Circle detection

Two types of noise were added over the image [Figure 6(a)]: (a) salt & pepper noise with a bit error rate of 0.001, and (b) speckle noise with a variance of 0.01. A Sobel mask was employed to detect the edges, as shown in Figure 6(b). The standard Hough transform for circle detection was utilized as follows

(x − A)² + (y − B)² = R²    (12)

where (x, y) denotes an edge point in the image plane, (A, B) the location of the center, and R the radius. The parameter space, and hence the corresponding Hough array, is three dimensional in this case. Each (x, y) point gets mapped into a cone in the parameter space. Thirty-two discretization levels were considered along each dimension. Hence d = 32 and r = 3 for this case. Figure 7 illustrates a cross-sectional representation of the 32 × 32 × 32 Hough array. The N(ψ_σ) vs. σ plot is shown in Figure 8. As can be seen from this figure, the number of most prominent peaks, i.e. the longest lifetime, is found to be five. As expected, smaller d's resulted in more discretization error.

Figure 7. Circle detection: Hough array 32 × 32 × 32 represented as 32 cross sections for 32 discretized levels of the radius (R), from left to right and from top to bottom R = 1, 2, 3, …, 32; (A, B) denotes the center, with A on the horizontal axis and B on the vertical axis. The gray levels indicate the numerical content of the cells in the Hough array.

At this point, it is worth emphasizing that the algorithm discussed here is only meant to provide the number, and hence the locations, of the most prominent peaks (normally obtained via a threshold) in a Hough


array without the need to specify a threshold value. Furthermore, since robustness to noise and distortion is a property of MSC, owing to its multi-scale filtering nature, the algorithm is capable of coping with false peaks which occur as a result of noise or distortion.
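The cone voting of Eqn (12) can be sketched as follows (illustrative Python with d = 32 bins per axis; the bin layout and the synthetic test circle are assumptions, and a real implementation would normalize coordinates as in Table 4):

```python
import numpy as np

def circle_hough(edge_points, img_size=256, d=32):
    # each edge point (x, y) votes for every (A, B, R) with
    # (x - A)**2 + (y - B)**2 = R**2, i.e. a cone in (A, B, R) space -- Eqn (12)
    hough = np.zeros((d, d, d), dtype=int)
    cell = img_size / d
    centers = (np.arange(d) + 0.5) * cell        # bin centers for A and B
    A, B = np.meshgrid(centers, centers, indexing="ij")
    for (x, y) in edge_points:
        R = np.sqrt((x - A) ** 2 + (y - B) ** 2)  # radius implied by each (A, B)
        iR = (R / cell).astype(int)
        valid = iR < d
        ii, jj = np.nonzero(valid)
        np.add.at(hough, (ii, jj, iR[valid]), 1)  # unbuffered accumulation
    return hough

# points on a circle of radius 36 centered at (124, 124), which is a bin center
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = list(zip(124 + 36 * np.cos(angles), 124 + 36 * np.sin(angles)))
hough = circle_hough(pts)
peak = np.unravel_index(hough.argmax(), hough.shape)
assert hough.max() == len(pts)   # the true (A, B, R) cell collects every vote
```

Every edge point contributes one vote per candidate center, so only the true (A, B, R) cell accumulates all sixty votes; its dominance over its neighbors is what the subsequent Fast MSC step picks out without a threshold.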

Conclusions

This paper has presented a Hough array processing technique that eases the need to specify a threshold value for determining candidate parameters in a Hough array. A fast version of multi-scale clustering, called Fast MSC, was introduced and incorporated into the HT procedure. This has led to a fast, threshold-independent approach for processing a Hough array to extract candidate parameters. The proposed method is a Hough array processing technique which can be used along with virtually any Hough-like point(s)-to-curve transformation. Two such transformations were exemplified here. There are three factors that influence the computational complexity, and hence the processing speed, of the developed algorithm: (a) the number of non-zero cells in the Hough array, as related to the number and distribution of parameter vectors, (b) the number of cells used in the discretization of the parameter space, and (c) the dimensionality of the Hough array.

Acknowledgement

This research was supported by Texas Instruments as part of the DSP Program at Texas A&M University.

References

1. Hough, P.V.C. (1962) Methods and means for recognizing complex patterns. U.S. Patent 3,069,654.
2. O'Gorman, F. & Clowes, M.B. (1973) Finding picture edges through co-linearity of feature points. Proc. 3rd Int. Joint Conf. Artificial Intelligence, pp. 543–555.
3. Ballard, D.H. (1981) Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13: 111–122.
4. Illingworth, J. & Kittler, J. (1988) A survey of the Hough transform. Computer Vision, Graphics, and Image Processing, 44: 87–116.
5. Grimson, W.E.L. & Huttenlocher, D.P. (1990) On the sensitivity of the Hough transform for object recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, 12: 255–274.

6. Kiryati, N., Eldar, Y. & Bruckstein, A.M. (1991) A probabilistic Hough transform. Pattern Recognition, 24: 303–316.
7. Xu, L., Oja, E. & Kultanen, P. (1990) A new curve detection method: Randomized Hough transform (RHT). Pattern Recognition Letters, 11: 331–338.
8. Xu, L. & Oja, E. (1993) Randomized Hough Transform (RHT): Basic mechanisms, algorithms, and computational complexities. CVGIP: Image Understanding, 57: 131–154.
9. Princen, J., Illingworth, J. & Kittler, J. (1990) A hierarchical approach to line extraction based on the Hough transform. Computer Vision, Graphics, and Image Processing, 52: 57–77.
10. Lam, W.C.Y. & Yuen, S.Y. (1996) Effective technique for circle detection using hypothesis filtering and Hough transform. IEE Proc. Vis. Image Signal Process., 143: 292–300.
11. Sheu, H.T., Chen, H.Y. & Hu, W.C. (1997) Consistent symmetric axis method for robust detection of ellipses. IEE Proc. Vis. Image Signal Process., 144: 332–338.
12. Bosdogianni, P., Petrou, M. & Kittler, J. (1998) Classification of sets of mixed pixels with the hypothesis-testing Hough transform. IEE Proc. Vis. Image Signal Process., 145: 57–64.
13. Haule, D.D. & Malowany, A.S. (1989) Object recognition using fast adaptive Hough transform. Proc. IEEE Pacific Rim Conf. on Comm., Comp. and Signal Proc., pp. 91–94.
14. Vuillemin, J.E. (1994) Fast linear Hough transform. IEEE Int. Conf. on Appl. Specific Array Processors, pp. 1–9.
15. Guil, N., Villalba, J. & Zapata, E.L. (1995) A fast Hough transform for segment detection. IEEE Trans. Image Processing, 4: 1541–1548.
16. Mahmoud, M., Nakanishi, M. & Ogura, T. (1997) Hough transform implementation on a reconfigurable highly parallel architecture. IEEE Int. Workshop on Comp. Arch. for Mach. Perception, pp. 186–194.
17. Ferretti, M. & Albanesi, M.G. (1996) Architectures for the Hough transform: a survey. Proc. IAPR Workshop on Machine Vision Applications, Tokyo, Japan, pp. 542–551.
18. Nakamura, E. & Kehtarnavaz, N. (1998) Determining number of clusters and prototype locations via multi-scale clustering. Pattern Recognition Letters, 19: 1265–1283.
19. Kehtarnavaz, N. & Nakamura, E. (1998) Generalization of the EM algorithm for mixture density estimation. Pattern Recognition Letters, 19: 133–140.
20. Parzen, E. (1962) On estimation of a probability density function and mode. Ann. Math. Stat., 33: 1065–1076.
21. Lindeberg, T. (1994) Scale-Space Theory in Computer Vision. Boston: Kluwer Academic Publishers.
22. Papoulis, A. (1991) Probability, Random Variables, and Stochastic Processes, 3rd Edition. New York: McGraw-Hill.

Appendix

Based on Eqns (2) and (7), we can write

FT{ψ_σ(x)} = −Σ_{i=1}^{n} FT{φ_σ(x − x_i)} = −Σ_{i=1}^{n} e^{−jw·x_i} e^{−σ²‖w‖²/2} = −(Σ_{i=1}^{n} e^{−jw·x_i}) e^{−σ²‖w‖²/2}

and the relations

|Σ_{i=1}^{n} e^{−jw·x_i}| ≤ Σ_{i=1}^{n} |e^{−jw·x_i}| = n

‖w‖² > (w_max)² = (2π/T)²  ⇒  σ²‖w‖²/2 ≥ σ_min²‖w‖²/2 > (σ_min²/2)(2π/T)² = π²/2

Hence, for ‖w‖² > (w_max)² = (2π/T)²,

(1/n)|FT{ψ_σ(x)}| = (1/n)|Σ_{i=1}^{n} e^{−jw·x_i}| e^{−σ²‖w‖²/2} ≤ e^{−σ²‖w‖²/2} ≤ e^{−σ_min²‖w‖²/2}

As a result,

∫_{‖w‖² > (w_max)²} (1/n)|FT{ψ_σ(x)}| dw ≤ ∫_{‖w‖² > (w_max)²} e^{−σ_min²‖w‖²/2} dw ≤ Π_{i=1}^{r} ∫_{w_i² > (w_max)²} e^{−σ_min² w_i²/2} dw_i

Substituting u_i = σ_min w_i, and noting that σ_min w_max = (T/2)(2π/T) = π,

Π_{i=1}^{r} ∫_{w_i² > (w_max)²} e^{−σ_min² w_i²/2} dw_i = ((1/σ_min) ∫_{u² > π²} e^{−u²/2} du)^r ≤ (0.001√(2π)/σ_min)^r ≈ (1/(400 σ_min))^r = (1/(200 T))^r

T = 1 indicates the per-unit step size. Therefore, at most 0.5% of the signal energy is lost.