Signal Processing 164 (2019) 30–40

Contents lists available at ScienceDirect

Signal Processing

journal homepage: www.elsevier.com/locate/sigpro
Robust ellipse fitting via alternating direction method of multipliers

Junli Liang a,∗, Pengliang Li a, Deyun Zhou a, H.C. So b,1, Ding Liu c, Chi-Sing Leung b, Liansheng Sui c

a School of Electronics and Information, Northwestern Polytechnical University of China, China
b Department of Electronic Engineering, City University of Hong Kong, China
c Xi'an University of Technology, China

Article info

Article history: Received 3 May 2018; Revised 24 April 2019; Accepted 28 May 2019; Available online 29 May 2019.

Keywords: Ellipse fitting; Outlier; Alternating direction method of multipliers (ADMM); Nonlinear optimization; Nonconvex optimization; Ellipse parameter vector.

Abstract: The edge point errors, especially outliers, introduced in the edge detection step cause severe performance degradation in ellipse fitting. To address this problem, we adopt the ℓp-norm with p < 2 in the direct least square fitting method to achieve outlier resistance, and develop a robust ellipse fitting approach using the alternating direction method of multipliers (ADMM). In particular, to solve the formulated nonconvex and nonlinear problem, we decouple the ellipse parameter vector in the nonlinear ℓp-norm objective function from the nonconvex quadratic constraint by introducing auxiliary variables, and estimate the ellipse parameter vector and auxiliary variables alternately via the derived numerical methods. Simulation and experimental examples are presented to demonstrate the robustness of the proposed approach.

1. Introduction

The aim of ellipse fitting is to map a set of coplanar points of an image onto an ellipse circumference. It plays an important role in the fields of computer vision, automatic manufacture, observational astronomy, structural geology, industry inspection, medical diagnosis, and military and security [1–27], and thus has received much attention. For example, in the silicon single crystal production industry [18,19], the diameter estimate obtained via the ellipse fitting technique is employed to trigger the corresponding control measures accurately so that the produced cylinder-type silicon single crystal can be used as high-quality source material for semiconductor devices. Additionally, in the spacecraft control system [18,20], the ellipse fitting technique is commonly utilized to determine the pose of a spacecraft from a single image, and control measures are then taken to ensure that the spacecraft navigates with a prespecified state. In the biomedical field, ellipse fitting has been utilized for automatic cell [21,22] and epicardial fat segmentation [23] in diagnosis and treatment. It is also employed to assist vision based landing systems in controlling the motions of unmanned aerial vehicles [24].



∗ Corresponding author. E-mail address: [email protected] (J. Liang).
1 Fellow, IEEE.

https://doi.org/10.1016/j.sigpro.2019.05.032
0165-1684/© 2019 Elsevier B.V. All rights reserved.

Generally speaking, ellipse fitting algorithms can be classified into two categories, namely, clustering techniques [10] and least-squares methods [14–16]. The former include the Hough transform (HT) [10] and its variants, which involve a heavy computational load because an ellipse is characterized by five parameters, and thus fine quantization and peak search in the 5-dimensional (5-D) parameter space are required to achieve a satisfactory resolution. The latter can be further divided into geometric and algebraic methods. Geometric methods are based on the orthogonal distances between the data points and the estimated ellipse [11,15]. For example, in [15], Barwick proposed a geometric approach called the parallel chord method, but it fails when parallel chords perpendicular to a common axis are not available. Algebraic methods, on the other hand, are widely used due to their simplicity and computational efficiency [12–14,16]. For example, in [14], Fitzgibbon et al. developed a direct least square fitting (DLSF) method by solving a generalized eigenvalue problem, and the constrained least squares (CLS) method [16] solves a least-squares problem subject to a unit-norm constraint on the ellipse parameters. Besides, based on probability theory, the Bayesian method [19] constructs a Markov chain, samples the ellipse parameters from the joint posterior distribution, and attains the state of equilibrium after a sufficient number of iterations. Liang et al. [25] recover a low-rank generalized multidimensional scaling matrix, which is a function of the ellipse parameters to be determined.


It is worth mentioning that in practical applications, ellipse fitting often follows an edge detection step, and thus the effect of edge point errors, especially outliers, on the fitting performance cannot be ignored. Although some existing ellipse fitting methods work well in normal cases, their performance is sensitive to outliers. Therefore, it is necessary to develop robust ellipse fitting methods to achieve outlier resistance. Recently proposed robust ellipse fitting methods include [17,18,28,29]. The sparsity based method (SBM) [17] selects sparse samples to represent the ellipse coefficient vector in order to alleviate the influence of the outliers on the ellipse fitting. The maximum correntropy criterion (MCC) based method [18] determines the ellipse parameters with the large-weight samples via half-quadratic and semidefinite relaxation optimization. The method of [28] utilizes the sparsity of outliers and the Huber fitting measure to improve the robustness of ellipse fitting [47]. Furthermore, Shao et al. [29] remove the outliers by taking advantage of the phenomenon that outliers generated by an ellipse edge point detector are likely to appear in groups. Recently, Lin et al. proposed an alternating direction method of multipliers (ADMM) for multidimensional ellipsoid-specific fitting, where the objective function is separated from the nonconvex positive semidefinite constraint via the introduction of an auxiliary vector [30]. Unlike [30], this paper develops an ℓp-norm based ellipse fitting approach [31–33] with p < 2 to tackle outlier observations. It is worth highlighting three aspects of the proposed algorithm here:

1. We replace the ℓ2-norm by the ℓp-norm in the DLSF method to enhance robustness against outliers, which results in a nonconvex and nonlinear optimization problem. In addition, auxiliary variables are introduced to decouple the parameters of the ellipse in the nonlinear ℓp-norm objective function from the nonconvex quadratic constraint. Then we compute the ellipse parameter vector and auxiliary variables alternately via ADMM [31–33];

2. For the ellipse parameter vector determination step, the Lagrange multiplier method is utilized to transform the vector estimation problem into finding a single Lagrange multiplier. In particular, we derive the feasible region of the Lagrange multiplier, analyze the monotonically decreasing property of the resultant problem, and efficiently find the multiplier by applying the bisection method [34];

3. For the auxiliary variable determination step, we separate the corresponding nonlinear objective function into multiple subfunctions, each of which involves only a single auxiliary variable, so that the auxiliary variables can be determined in parallel. Then, we analyze the convexity or concavity of each subfunction to devise a simple but efficient method to calculate the corresponding auxiliary variable.

The rest of this paper is organized as follows. The ellipse fitting problem is described in Section 2. The proposed approach is developed in Section 3. Simulation and experimental results are presented in Section 4, and conclusions are drawn in Section 5.

Notation: Vectors and matrices are denoted by boldface lowercase and uppercase letters, respectively. | · | and ∥ · ∥ denote the absolute value and the ℓ2-norm, respectively, and A ⪰ 0 indicates that A ∈ R^{n×n} is positive semidefinite. [A, B], (A, B], [A, B), and (A, B) are the closed, right half-closed, left half-closed, and open intervals, respectively. Also, {z_i, λ_i} denotes a set with elements z_i and λ_i, and adiag{[a_1, a_2, ..., a_n]} represents the anti-diagonal matrix with elements {a_1, a_2, ..., a_n} on the anti-diagonal positions and zeros otherwise.


2. Problem formulation

A general elliptic equation with center (h, k), counter-clockwise rotation angle θ, and semi-axes (a, b) can be written as:

$$\frac{\left((x-h)\cos\theta + (y-k)\sin\theta\right)^2}{a^2} + \frac{\left(-(x-h)\sin\theta + (y-k)\cos\theta\right)^2}{b^2} = 1, \tag{1}$$

where (x, y) denotes any point on the ellipse. The task of ellipse fitting is to estimate the ellipse parameters {a, b, h, k, θ} from the obtained edge points (x_i, y_i), i = 1, ..., I.

3. Proposed algorithm

In this section, we first review the DLSF method, and then the robust ADMM based ellipse fitting method is devised.

3.1. Review of DLSF method

As one of the most popular ellipse fitting methods, the DLSF approach [14] considers the implicit second-order polynomial model:

$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, \tag{2}$$

where

$$A = \frac{\cos^2\theta}{a^2} + \frac{\sin^2\theta}{b^2}, \tag{3}$$

$$B = 2\sin\theta\cos\theta\left(\frac{1}{a^2} - \frac{1}{b^2}\right), \tag{4}$$

$$C = \frac{\sin^2\theta}{a^2} + \frac{\cos^2\theta}{b^2}, \tag{5}$$

$$D = \frac{-2h\cos^2\theta - 2k\sin\theta\cos\theta}{a^2} + \frac{-2h\sin^2\theta + 2k\sin\theta\cos\theta}{b^2}, \tag{6}$$

$$E = \frac{-2h\sin\theta\cos\theta - 2k\sin^2\theta}{a^2} + \frac{2h\sin\theta\cos\theta - 2k\cos^2\theta}{b^2}, \tag{7}$$

and

$$F = \left(\frac{h\cos\theta + k\sin\theta}{a}\right)^2 + \left(\frac{h\sin\theta - k\cos\theta}{b}\right)^2 - 1. \tag{8}$$

With this model, the DLSF approach seeks the ellipse parameter vector

$$\mathbf{a} = [A\;\, B\;\, C\;\, D\;\, E\;\, F]^T, \tag{9}$$

by minimizing the sum of squared algebraic distances:

$$f(\mathbf{a}) = \sum_{i=1}^{I} (\mathbf{a}^T \mathbf{x}_i)^2 = \sum_{i=1}^{I} \mathbf{a}^T \mathbf{x}_i \mathbf{x}_i^T \mathbf{a}, \tag{10}$$

where the data point vector

$$\mathbf{x}_i = \left[x_i^2 \;\; x_i y_i \;\; y_i^2 \;\; x_i \;\; y_i \;\; 1\right]^T, \tag{11}$$

is formed from the edge points (x_i, y_i) for i = 1, 2, ..., I. In order to ensure that a corresponds to an ellipse and to avoid the trivial solution, the DLSF method [14] enforces a quadratic constraint on the ellipse parameter vector a, i.e., a^T C a = 1, where

$$\mathbf{C} = \begin{bmatrix} \mathrm{adiag}\{[2, -1, 2]\} & \mathbf{0}_{3\times 3} \\ \mathbf{0}_{3\times 3} & \mathbf{0}_{3\times 3} \end{bmatrix}, \tag{12}$$

whose upper-left corner is the anti-diagonal submatrix adiag{[2, −1, 2]}. Then, the DLSF method computes a from a


generalized eigenvalue problem:

$$\min_{\mathbf{a}} \; \sum_{i=1}^{I} \mathbf{a}^T \mathbf{x}_i \mathbf{x}_i^T \mathbf{a} \quad \text{s.t.} \;\; \mathbf{a}^T \mathbf{C} \mathbf{a} = 1. \tag{13}$$

From (13), it is seen that when the errors contained in the algebraic distances follow a Gaussian distribution, the optimal solution can be obtained by minimizing the sum of these squared distances. However, in practical applications, the procedure of ellipse fitting often follows an edge detection step. When non-edge points are misclassified as edge points, their effect on the fitting performance cannot be ignored. In particular, in the presence of outliers, i.e., several random errors with very large magnitudes, the performance of the DLSF method degrades severely.
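For concreteness, the following is a minimal numerical sketch of the DLSF step (10)–(13): it builds the data point vectors of (11) and the constraint matrix of (12), and then solves the generalized eigenvalue problem arising from (13). It is not the authors' code; the selection of the eigenvector satisfying a^T C a > 0 follows standard practice for this constraint, and the edge points are assumed to be supplied as an array.

```python
# Sketch (not the authors' implementation): direct least square fitting (DLSF)
# by solving the generalized eigenvalue problem of (13).
import numpy as np
from scipy.linalg import eig

def design_matrix(points):
    """Stack the data point vectors x_i of (11) into an I x 6 matrix."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])

def constraint_matrix():
    """Constraint matrix C of (12), i.e., a^T C a = 4AC - B^2 = 1."""
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    return C

def dlsf(points):
    """Minimize sum_i (a^T x_i)^2 subject to a^T C a = 1, cf. (13)."""
    X = design_matrix(np.asarray(points, dtype=float))
    S = X.T @ X                      # scatter matrix sum_i x_i x_i^T
    C = constraint_matrix()
    w, V = eig(S, C)                 # generalized eigenpairs S a = w C a
    a_best, f_best = None, np.inf
    for k in range(6):
        a = np.real(V[:, k])
        q = a @ C @ a                # only eigenvectors with a^T C a > 0 are feasible
        if q <= 0 or not np.isfinite(w[k].real):
            continue
        a = a / np.sqrt(q)           # enforce a^T C a = 1
        f = a @ S @ a
        if f < f_best:
            a_best, f_best = a, f
    return a_best                    # ellipse parameter vector [A B C D E F]^T
```

With clean or lightly corrupted edge points this estimator behaves well; with outliers it exhibits exactly the sensitivity discussed above, which motivates the ℓp-norm formulation developed next.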

3.2. Development of proposed algorithm

Conventional techniques often rely on the Gaussian distribution assumption of the measurements and their derivation is based on the ℓ2-norm. As a result, they may fail to work properly when the observations contain outliers. Compared with the ℓ2-norm, the choice of 0 < p < 2 is more robust to outliers because a cost function that grows more slowly than the square is employed. In fact, the ℓp-norm minimizer belongs to the class of M-estimators [35] in robust statistics. Moreover, when the noise is an α-stable process whose impulsiveness decreases with α, we should use p < α in the ℓp-norm minimizer [36]. This indicates that a smaller value of p can deal with more impulsive noise. Note that the ℓp-norm cost function has been widely used for developing robust algorithms in the presence of impulsive noise in various signal processing applications, including beamforming [37,38], matrix completion [39], direction-of-arrival estimation [41], time delay estimation [42], and spectrum sensing [43]. Exploiting the ℓp-norm with 0 < p < 2, we generalize the objective function in (13) as:

$$\min_{\mathbf{a}} \; \sum_{i=1}^{I} \left|\mathbf{a}^T \mathbf{x}_i\right|^p \quad \text{s.t.} \;\; \mathbf{a}^T \mathbf{C} \mathbf{a} = 1. \tag{14}$$

From (14), it can be seen that: i) $\sum_{i=1}^{I} |\mathbf{a}^T \mathbf{x}_i|^p$ is a highly nonlinear objective function of a when 0 < p < 2; and ii) a is subject to the nonconvex constraint a^T C a = 1. As a result, it is difficult to determine a from this nonlinear and nonconvex optimization problem. In this work, ADMM is applied to solve (14). It has two important features, namely, decomposition-coordination and superior convergence [31–33]. First, it decomposes a constrained convex optimization problem into multiple smaller subproblems whose solutions are coordinated to find the global optimum. Second, in spite of employing iterations in the parameter updating process, it provides superior convergence. ADMM has also been applied to solve nonconvex problems [44].

First, we define I auxiliary variables:

$$z_i = \mathbf{a}^T \mathbf{x}_i, \quad i = 1, \cdots, I, \tag{15}$$

and rewrite the problem in (14) as:

$$\min_{\mathbf{a}, z_i} \; \alpha \sum_{i=1}^{I} |z_i|^p \quad \text{s.t.} \;\; \mathbf{a}^T \mathbf{C} \mathbf{a} = 1, \;\; z_i = \mathbf{a}^T \mathbf{x}_i, \;\; i = 1, \cdots, I, \tag{16}$$

where the objective function is scaled with a user-defined parameter α > 0. Obviously, the minimization of $\alpha \sum_{i=1}^{I} |z_i|^p$ is equivalent to minimizing $\sum_{i=1}^{I} |z_i|^p$. Note that the constraint a^T C a = 1 is imposed on a and is not related to z_i; hence it plays its role only in the step of finding a. For this reason, we devise the augmented Lagrangian:

$$L_\rho(\mathbf{a}, z_i, \lambda_i) = \alpha \sum_{i=1}^{I} |z_i|^p + \sum_{i=1}^{I} \lambda_i \left(z_i - \mathbf{a}^T \mathbf{x}_i\right) + \frac{\rho}{2} \sum_{i=1}^{I} \left(z_i - \mathbf{a}^T \mathbf{x}_i\right)^2 \quad \text{s.t.} \;\; \mathbf{a}^T \mathbf{C} \mathbf{a} = 1, \tag{17}$$

where ρ > 0 and λ_i ∈ R are the step size and Lagrange multipliers [31–33], respectively. Obviously, one can utilize α to flexibly control the tradeoff between $\sum_{i=1}^{I} |z_i|^p$ and $\frac{\rho}{2}\sum_{i=1}^{I} (z_i - \mathbf{a}^T \mathbf{x}_i)^2$. Then, based on ADMM, we determine {a, z_i, λ_i} from (17) via the following iterative procedure:

1. With given set {z_i, λ_i}, we compute a from:

$$\min_{\mathbf{a}} \; \mathbf{a}^T \mathbf{R} \mathbf{a} + 2\mathbf{d}^T \mathbf{a} \quad \text{s.t.} \;\; \mathbf{a}^T \mathbf{C} \mathbf{a} = 1, \tag{18}$$

where

$$\mathbf{R} = \frac{\rho}{2} \sum_{i=1}^{I} \mathbf{x}_i \mathbf{x}_i^T, \tag{19}$$

and

$$\mathbf{d} = -\frac{1}{2} \sum_{i=1}^{I} (\lambda_i + \rho z_i)\, \mathbf{x}_i. \tag{20}$$

We define the Lagrangian associated with (18) as:

$$\mathbf{a}^T \mathbf{R} \mathbf{a} + 2\mathbf{d}^T \mathbf{a} + \lambda\left(\mathbf{a}^T \mathbf{C} \mathbf{a} - 1\right), \tag{21}$$

where λ is the corresponding Lagrange multiplier [34]. Differentiating (21) with respect to a and λ and then setting the resultant expressions to zero yields:

$$2\mathbf{R}\mathbf{a} + 2\mathbf{d} + 2\lambda\mathbf{C}\mathbf{a} = \mathbf{0}, \tag{22}$$

and

$$\mathbf{a}^T \mathbf{C} \mathbf{a} - 1 = 0, \tag{23}$$

which are known as the Lagrange equations. Combining (22)–(23) with

$$\mathbf{R} + \lambda\mathbf{C} \succeq \mathbf{0}, \tag{24}$$

results in the necessary and sufficient optimality conditions of the generalized trust region subproblem [45,46]. The analytical solution to (22) is given by

$$\mathbf{a} = -(\mathbf{R} + \lambda\mathbf{C})^{-1}\mathbf{d}. \tag{25}$$

Inserting (25) into (23) yields:

$$g(\lambda) = \mathbf{d}^T (\mathbf{R} + \lambda\mathbf{C})^{-1} \mathbf{C} (\mathbf{R} + \lambda\mathbf{C})^{-1} \mathbf{d} - 1, \tag{26}$$

which shows that the optimal value of the Lagrange multiplier λ is the root of g(λ) = 0. Thus, the ellipse fitting problem becomes a problem of finding the optimal Lagrange multiplier. After λ is obtained via the numerical approach in Appendix B, we substitute it into (25) to obtain a.

2. With given set {λ_1, ..., λ_I, a}, we determine {z_1, ..., z_I} from:

$$\min_{z_i} \; \sum_{i=1}^{I} \left[\alpha |z_i|^p + \frac{\rho}{2}\left(z_i - \tilde{z}_i\right)^2\right], \tag{27}$$

where

$$\tilde{z}_i = \mathbf{a}^T \mathbf{x}_i - \frac{\lambda_i}{\rho}. \tag{28}$$
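As a small illustration of how the quantities in (19), (20), and (28) are assembled from the current iterates, the following sketch (not the authors' code) computes R, d, and z̃_i with NumPy, assuming the data point vectors of (11) are stacked in an I × 6 array X, and that z, lam, and rho hold the current auxiliary variables, multipliers, and step size.

```python
# Sketch: assemble R (19), d (20), and z_tilde (28) for the two ADMM subproblems,
# assuming X is the I x 6 matrix of data point vectors x_i from (11).
import numpy as np

def subproblem_terms(X, z, lam, rho):
    R = 0.5 * rho * (X.T @ X)            # R = (rho/2) * sum_i x_i x_i^T
    d = -0.5 * X.T @ (lam + rho * z)     # d = -(1/2) * sum_i (lambda_i + rho z_i) x_i
    return R, d

def z_targets(X, a, lam, rho):
    return X @ a - lam / rho             # z_tilde_i = a^T x_i - lambda_i / rho
```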


With the decomposition-coordination property of ADMM, we divide (27) into I subproblems:

$$\min_{z_i} \; h(z_i) = |z_i|^p + \frac{\bar{\rho}}{2}\left(z_i - \tilde{z}_i\right)^2, \tag{29}$$

for i = 1, ..., I, where

$$\bar{\rho} = \frac{\rho}{\alpha} \tag{30}$$

controls the weighting between |z_i|^p and (z_i − z̃_i)^2. Then, we can solve (29) to obtain z_i using the numerical approach in Appendix C.

3. λ_i and ρ [53] are updated as:

$$\lambda_i(t+1) = \lambda_i(t) + \rho\left(z_i - \mathbf{a}^T \mathbf{x}_i\right), \quad i = 1, \cdots, I, \tag{31}$$

and

$$\rho = \min(1.5\rho, \rho_{\max}), \tag{32}$$

respectively, where ρ_max is a given upper bound of ρ. Steps 1) to 3) are repeated until a stopping criterion is reached. In this study, the algorithm is terminated when a maximum iteration number T is attained [31–33]. Note that ρ plays two roles in the iterative steps: i) similar to other applications of ADMM, ρ is the so-called step size that controls the convergence of ADMM [31–33]; and ii) together with α, it controls the tradeoff between |z_i|^p and (z_i − z̃_i)^2.

Finally, the standard ellipse parameters {a, b, h, k, θ} are computed from the estimate â as [14]:

$$\hat{\theta} = \frac{1}{2}\tan^{-1}\left(\frac{\hat{a}(2)}{\hat{a}(1) - \hat{a}(3)}\right), \tag{33}$$

$$\begin{bmatrix} \hat{h} \\ \hat{k} \end{bmatrix} = \begin{bmatrix} -2\hat{a}(1) & -\hat{a}(2) \\ -\hat{a}(2) & -2\hat{a}(3) \end{bmatrix}^{-1} \begin{bmatrix} \hat{a}(4) \\ \hat{a}(5) \end{bmatrix}, \tag{34}$$

$$\hat{a} = \sqrt{\frac{\begin{bmatrix} \hat{h} & \hat{k} \end{bmatrix} \begin{bmatrix} \hat{a}(1) & \hat{a}(2)/2 \\ \hat{a}(2)/2 & \hat{a}(3) \end{bmatrix} \begin{bmatrix} \hat{h} \\ \hat{k} \end{bmatrix} + 1}{\hat{a}(1)\cos^2\hat{\theta} + \hat{a}(2)\sin\hat{\theta}\cos\hat{\theta} + \hat{a}(3)\sin^2\hat{\theta}}}, \tag{35}$$

and

$$\hat{b} = \sqrt{\frac{\begin{bmatrix} \hat{h} & \hat{k} \end{bmatrix} \begin{bmatrix} \hat{a}(1) & \hat{a}(2)/2 \\ \hat{a}(2)/2 & \hat{a}(3) \end{bmatrix} \begin{bmatrix} \hat{h} \\ \hat{k} \end{bmatrix} + 1}{\hat{a}(1)\sin^2\hat{\theta} - \hat{a}(2)\sin\hat{\theta}\cos\hat{\theta} + \hat{a}(3)\cos^2\hat{\theta}}}, \tag{36}$$

where â(i) denotes the ith entry of the vector â. The proposed algorithm is summarized as follows:

Robust ADMM based Ellipse Fitting Algorithm
  Initialize z_i and λ_i; set T, α, ρ_max, and iteration number t = 1;
  While t < T
    Determine a using (18)–(26);
    Calculate z_i using (27)–(30);
    Update λ_i and ρ(t+1) using (31) and (32), respectively;
  end While
  Determine the ellipse parameters {a, b, h, k, θ} using (33)–(36).
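To make the overall procedure concrete, the following is a self-contained numerical sketch of the loop above. It is not the authors' implementation: the a-update solves (18) by root-finding on g(λ) of (26) over the feasible interval derived in Appendix B (using SciPy's brentq instead of plain bisection), the z-update replaces the exact case analysis of Appendix C with a bounded one-dimensional search, and the initialization, tolerances, and the omission of the conversion step (33)–(36) are choices of this sketch.

```python
# Minimal sketch of the robust ADMM ellipse-fitting loop (illustrative, not the
# authors' code). Assumes edge points are given as an (I, 2) array.
import numpy as np
from scipy.linalg import eigh, solve
from scipy.optimize import brentq, minimize_scalar

def design_matrix(points):
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])  # (11)

def constraint_matrix():
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0                                   # (12)
    return C

def update_a(R, d, C):
    """Step 1: solve min a^T R a + 2 d^T a s.t. a^T C a = 1, cf. (18)-(26)."""
    w, V = eigh(R)
    R_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    sigma = eigh(R_inv_half @ C @ R_inv_half, eigvals_only=True)  # ascending
    lo = -1.0 / sigma[-1] + 1e-6                     # sigma_1 > 0: lambda > -1/sigma_1
    hi = -1.0 / sigma[0] - 1e-6                      # sigma_6 < 0: lambda < -1/sigma_6

    def g(lam):                                      # g(lambda) of (26)
        a = -solve(R + lam * C, d)
        return a @ C @ a - 1.0

    lam = brentq(g, lo, hi)                          # unique root in the interval (Appendix B)
    return -solve(R + lam * C, d)                    # (25)

def update_z(z_tilde, p, alpha, rho):
    """Step 2: minimize alpha|z|^p + (rho/2)(z - z_tilde)^2 elementwise, cf. (27)."""
    z = np.empty_like(z_tilde)
    for i, zt in enumerate(z_tilde):
        f = lambda t: alpha * abs(t) ** p + 0.5 * rho * (t - zt) ** 2
        lo, hi = min(0.0, zt), max(0.0, zt)          # the minimizer lies between 0 and z_tilde
        if hi - lo < 1e-12:
            z[i] = 0.0
            continue
        res = minimize_scalar(f, bounds=(lo, hi), method="bounded")
        z[i] = min((res.x, lo, hi), key=f)           # guard against nonconvexity for p < 1
    return z

def fit_ellipse_admm(points, p=0.1, alpha=10.0, rho=1.0, rho_max=1e3, T=500):
    X = design_matrix(np.asarray(points, dtype=float))
    C = constraint_matrix()
    I = X.shape[0]
    z, lam = np.ones(I), np.zeros(I)                 # illustrative initialization
    a = None
    for _ in range(T):
        R = 0.5 * rho * (X.T @ X)                    # (19)
        d = -0.5 * X.T @ (lam + rho * z)             # (20)
        a = update_a(R, d, C)                        # step 1
        z = update_z(X @ a - lam / rho, p, alpha, rho)   # step 2 with z_tilde of (28)
        lam = lam + rho * (z - X @ a)                # (31)
        rho = min(1.5 * rho, rho_max)                # (32)
    return a   # convert to {a, b, h, k, theta} via (33)-(36) if needed
```

In a quick synthetic test one would call fit_ellipse_admm on noisy edge points containing a small fraction of outliers; the settings p = 0.1, α = 10, ρ_max = 1000, and T = 500 mirror the values used in Section 4, while the initial ρ and z are assumptions of this sketch.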

4. Simulation and experimental results

In this section, simulations and experiments are conducted to assess the proposed method. For comparison purposes, we implement some representative methods, including the CLS method [16], the DLSF method [14], the Lin method [30], the Bayesian method [19], the HT method [10], the SBM method [17], and the MCC method [18]. Among them, the former four methods do not consider the outlier case, whereas the last three methods adopt some robustness schemes to improve the fitting robustness. A PC with an Intel i5 CPU at 3.30 GHz and 16 GB memory is used. In our algorithm, we set α = 10 and ρ_max = 1000, and the maximum iteration number T = 500 is employed as the stopping criterion.

4.1. Experiment 1: effect of p and outlier level on proposed algorithm

In this experiment, we first evaluate the effect of p on the performance of the proposed algorithm. The parameters of the synthetic ellipses are {h = 0, k = 0, a = 2, b = 1, θ = 30°}, and the total number of data points, including outliers, is 100. Here we consider 4 different levels of outliers, i.e., 10%, 20%, 30% and 40% of the total data points are outliers, and the rest are data points corrupted by additive zero-mean Gaussian noise with variance 0.005. To simulate random outliers, we generate them randomly in a square region of side length 8, which has the same center as the synthetic ellipses and whose sides are aligned with the coordinate axes (i.e., the angle between the square and the coordinate axis is 0° or 90°). For each outlier level, we implement 1000 Monte Carlo runs. For each run, the proposed method is implemented 20 times separately with different p values, which vary from 0.1 to 2 with a uniform interval of 0.1. To assess the effect of p on the proposed method, we employ the root mean square error

$$E_1 = \frac{1}{1000}\sum_{i=1}^{1000} \left\|\hat{\mathbf{a}}_i - \mathbf{a}\right\|^2, \tag{37}$$

as a measurement standard of the fitting results, where â_i and a represent the estimated value in the ith Monte Carlo run and the true value, respectively. For comparison, we plot the obtained E1 values for the different p cases in Fig. 1, and find that: i) generally, the fitting performance degrades (i.e., E1 becomes large) with the increase of p for all 4 outlier levels; and ii) the more outliers there are in the complete data points, the larger the corresponding E1 is. According to this simulation result, we set p to 0.1 in the rest of Section 4.

Fig. 1. Influence of p in Exp. 1.

Then, we investigate the effect of the outlier level on the success rate of the proposed method. For each Monte Carlo run, we compute the corresponding E2 value according to the following error definition:

$$E_2 = \left\|\hat{\mathbf{a}}_i - \mathbf{a}\right\|, \quad i = 1, \cdots, 1000, \tag{38}$$

and sort the values in decreasing order, as shown in Fig. 2. It can be seen that: i) when the outlier level is not larger than 20%, all the E2 values from the proposed method are less than 0.12, which implies that the proposed method works well and achieves high accuracy; ii) when an E2 value is larger than 1.0, the fitting result is no

longer satisfactory. Therefore, we set 1.0 as the threshold of estimation failure. When the outlier level reaches 30% and 40% of the total data points, there are 4 and 19 failures in the 1000 runs, respectively. Therefore, the proposed method is not suitable for the case where 30% or more of the data points are outliers.

Fig. 2. Limitation of proposed algorithm in Exp. 1.

4.2. Experiment 2: random parameter configuration and random number of outliers

In this experiment, we evaluate the ability of the proposed algorithm to fit synthetic ellipses with random parameter configurations and random numbers of outliers over 5000 Monte Carlo runs. In each run, the ellipse parameters {h, k, b, a, θ} are generated from the uniform distributions h ∈ [0, 20], k ∈ [0, 20], b ∈ [10, 50], a ∈ [b, 50], and θ ∈ [0°, 180°]. The number I_i of ellipse data points and the noise variance v_i in the ith Monte Carlo run follow the uniform distributions I_i ∈ [30, 100] and v_i ∈ [0.0005, 0.05], respectively, for i = 1, ..., 5000. The corresponding number of outliers is set as integer(r·I_i), generated from the number of data points I_i and a random outlier level r ∈ [0, 0.2], where the function integer(·) denotes the rounding operation; this implies that there are I_i − integer(r·I_i) noisy data points with variance v_i among the I_i data points. In addition, the integer(r·I_i) outliers are randomly generated in the square region which has the same center as the corresponding ellipse and whose edge length is 1.1·a·max(cos(θ), sin(θ)). Moreover, the angle between the square and the coordinate axis is 0° (or 90°). For the 5000 Monte Carlo runs, we compute the corresponding C_RMSE values according to

$$C\_RMSE = \frac{1}{5000}\sum_{i=1}^{5000} \left|\frac{\left(\hat{x}_i - x_i\right)^2}{x_i}\right|, \tag{39}$$

to evaluate the estimation accuracy of the parameters h_i, k_i, a_i, b_i, θ_i separately, where x̂_i and x_i represent the estimated value and the true value of {h_i, k_i, a_i, b_i, θ_i} in the ith Monte Carlo run. In addition, we compute the estimation error between two adjacent iterations, as shown in Fig. 3. Table 1 lists the corresponding C_RMSE values and average running times (ART) for all the methods. From the results, we can see that: i) the HT method, the benchmark, has the highest accuracy at the cost of a high ART, and the accuracy of the proposed method is close to that of HT; and ii) the proposed method basically converges within 300 iterations.

Fig. 3. Convergence of a in Exp. 2.

Table 1. C_RMSE values of all the methods in Exp. 2.

| Method   | a      | b      | h      | k      | θ      | ART(s)  |
|----------|--------|--------|--------|--------|--------|---------|
| HT       | 0.0045 | 0.0067 | 0.0161 | 0.0154 | 0.0152 | 45.3251 |
| SBM      | 0.0205 | 0.0299 | 0.0763 | 0.0769 | 0.0355 | 0.4021  |
| Bayesian | 0.0203 | 0.0455 | 0.0734 | 0.0726 | 0.0507 | 0.5984  |
| Lin      | 0.0207 | 0.0328 | 0.1446 | 0.1103 | 0.0558 | 0.0157  |
| CLS      | 0.0190 | 0.0395 | 0.1325 | 0.0987 | 0.0601 | 0.0254  |
| MCC      | 0.0196 | 0.0207 | 0.0702 | 0.0741 | 0.0444 | 0.1686  |
| DLSF     | 0.0190 | 0.0395 | 0.1325 | 0.0987 | 0.0601 | 0.0401  |
| Proposed | 0.0198 | 0.0204 | 0.0360 | 0.0375 | 0.0290 | 1.1251  |

4.3. Experiment 3: real images

In this experiment, we apply the proposed method to fit the ellipses in the 20 real images shown in Fig. 4, including 9 iris images, 6 silicon single crystal images, 4 traffic sign images, and 1 Voyager image [51,52]. Via some preprocessing steps [50], including image segmentation and edge detection (as well as color analysis for color images), the obtained edge points are given in Fig. 5. We can see that after the image preprocessing steps: i) there exist outliers for the crystal images resulting from the unclear viewing window; ii) the bright regions inside the pupil region resulting from specular reflection yield outliers; iii) other marks (e.g., an arrow) inside the ellipse of the traffic sign images generate outliers; and iv) there exist outliers for the Voyager image (resulting from the special spacecraft structure) both outside and inside the ellipse.

Fig. 4. Test images in Exp. 3.
Fig. 5. Edges of test images in Exp. 3.

Note that unlike the simulations used in Exps. 1 and 2, one cannot obtain the real parameter values for real images. To assess the fitting performance of the methods, we use the results from HT as reference (true) values and compute the corresponding virtual C_RMSE values of {a, b, h, k, θ}; the smaller the virtual C_RMSE values are, the higher the accuracy of the corresponding method is. In Figs. 6–10, we plot the virtual C_RMSE values of {a, b, h, k, θ} of these methods, respectively. It is easily found that the proposed method has the lowest virtual C_RMSE values, i.e., its fitting results are the most accurate, being closest to those of HT.

Fig. 6. C_RMSE of a in Exp. 3.
Fig. 7. C_RMSE of b in Exp. 3.
Fig. 8. C_RMSE of h in Exp. 3.
Fig. 9. C_RMSE of k in Exp. 3.
Fig. 10. C_RMSE of θ in Exp. 3.

Fig. 11. Ellipse fitting results for Crystal1 image.
Fig. 12. Ellipse fitting results for Eye1 image.
Fig. 13. Ellipse fitting results for Traffic1 image.

Furthermore, to assess the fitting performance of the methods visually, we plot the fitting results for the Crystal1, Eye1, Traffic1, and Voyager images as representative examples, as shown in Figs. 11–14. From these figures, we can see that: i) the proposed algorithm can fit both the inner and outer ellipses accurately and thus can provide accurate diameter measurements for the diameter control system, as shown in Fig. 11; ii) although there

are the shield and the outliers resulting from specular reflections, the proposed algorithm can fit the pupil boundary accurately, which is helpful for determining the iris region [48,49], as shown in Fig. 12; iii) the proposed algorithm can robustly fit an ellipse in the traffic image with many outliers, as shown in Fig. 13; and iv) as shown in Fig. 14, the proposed algorithm can fit an ellipse well even if there exist outliers both outside and inside the ellipse, which is important for enhancing the pose estimation accuracy of the spacecraft.

Fig. 14. Ellipse fitting results for Voyager image.

5. Conclusion

This paper has focused on the robust ellipse fitting problem. By introducing the ℓp-norm into the DLSF method, we construct a new nonconvex and nonlinear optimization formulation to improve the outlier-rejection ability. To solve it using ADMM, we introduce auxiliary variables to decouple the ellipse parameters in the nonlinear objective function from the nonconvex quadratic constraint. To determine the parameters, we derive a proper feasible region for the corresponding Lagrange multiplier and solve the resulting problem efficiently. To determine the auxiliary variables, we analyze the convexity or concavity of the corresponding objective function using its derivatives and develop a simple but efficient method. The robustness of the proposed ellipse fitting approach is demonstrated via a number of simulation and experimental examples. As future work, we will focus on the multiple-ellipse fitting problem. One possible solution is to combine a clustering method with the proposed robust ellipse fitting method.

Declaration of Competing Interest

None.

Acknowledgement

This work was supported in part by National Natural Science Foundation of China (NSFC 61533014, 61471295) and 973 Project (Grant 2014CB360508).

Appendix A. ADMM

ADMM is applicable to problems which can be formulated as the following constrained optimization:

$$\min_{\mathbf{x}, \mathbf{z}} \; f(\mathbf{x}) + g(\mathbf{z}) \quad \text{s.t.} \;\; \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} = \mathbf{c}. \tag{40}$$

The augmented Lagrangian constructed from (40) is then:

$$L\left(\mathbf{x}, \mathbf{z}, \boldsymbol{\lambda}\right) = f(\mathbf{x}) + g(\mathbf{z}) + \boldsymbol{\lambda}^T\left(\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} - \mathbf{c}\right) + \frac{\rho}{2}\left\|\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} - \mathbf{c}\right\|^2, \tag{41}$$

where ρ > 0 and λ are the step size and Lagrange multiplier vector, respectively. ADMM solves (41) via the decomposition-coordination procedure, and the resultant iterative updating steps are:

$$\mathbf{x}(t+1) = \arg\min_{\mathbf{x}} L\left(\mathbf{x}, \mathbf{z}(t), \boldsymbol{\lambda}(t)\right), \tag{42}$$

$$\mathbf{z}(t+1) = \arg\min_{\mathbf{z}} L\left(\mathbf{x}(t+1), \mathbf{z}, \boldsymbol{\lambda}(t)\right), \tag{43}$$

$$\boldsymbol{\lambda}(t+1) = \boldsymbol{\lambda}(t) + \rho\left(\mathbf{A}\mathbf{x}(t+1) + \mathbf{B}\mathbf{z}(t+1) - \mathbf{c}\right), \tag{44}$$

where t denotes the iteration number. The interested reader is referred to [31] for details.
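As a concrete instance of the generic template (40)–(44) — chosen for this note and not taken from the paper — the following sketch applies the updates to the familiar ℓ1-regularized least-squares problem min_x (1/2)∥Ax − b∥² + γ∥z∥₁ subject to x − z = 0, for which both subproblems (42)–(43) have closed forms.

```python
# Sketch: the generic ADMM updates (42)-(44) instantiated for the l1-regularized
# least-squares problem  min_x 0.5*||A x - b||^2 + gamma*||z||_1  s.t.  x - z = 0.
# Illustrative example only; it does not appear in the paper.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_l1_ls(A, b, gamma, rho=1.0, T=200):
    n = A.shape[1]
    x, z, lam = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(T):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb - lam + rho * z)   # (42)
        z = soft_threshold(x + lam / rho, gamma / rho)                    # (43)
        lam = lam + rho * (x - z)                                         # (44), A=I, B=-I, c=0
    return x
```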

Appendix B. Solution to g(λ) = 0

In this appendix, we derive the solution to g(λ) = 0. To determine the optimal value of the Lagrange multiplier, we first derive the feasible region for the multiplier such that R + λC ⪰ 0 holds in this region [45,46]. The matrix R is positive definite because I ≫ 6 in general cases. Additionally, C has two negative eigenvalues, −2 and −1, one positive eigenvalue, 2, and three zero eigenvalues. Therefore, among all 6 generalized eigenvalues of the matrix pencil (C, R), two are negative, one is positive, and three are zero. Since R is positive definite, we rewrite R + λC as:

$$\mathbf{R} + \lambda\mathbf{C} = \mathbf{R}^{0.5}\left(\mathbf{I} + \lambda\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5}\right)\mathbf{R}^{0.5}, \tag{45}$$

which implies that the numbers of positive, negative and zero eigenvalues of R + λC are the same as those of I + λR^{−0.5}CR^{−0.5}. Note that the eigenvalues of R^{−0.5}CR^{−0.5} are the same as the generalized eigenvalues of the matrix pencil (C, R). To facilitate our analysis, we define the eigenvalue decomposition of R^{−0.5}CR^{−0.5} as:

$$\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5} = \sum_{k=1}^{6} \sigma_k \mathbf{u}_k \mathbf{u}_k^T = \mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^T, \tag{46}$$

where U = [u_1, ..., u_6] and Σ = diag{σ_1, ..., σ_6}. Additionally, σ_1 > 0 > σ_5 ≥ σ_6 and σ_2 = σ_3 = σ_4 = 0 are the eigenvalues, and u_k is the eigenvector associated with σ_k for k = 1, ..., 6. Then, the eigenvalues of I + λR^{−0.5}CR^{−0.5} can be represented as:

$$1 + \lambda\sigma_k, \quad k = 1, \cdots, 6. \tag{47}$$

According to the nonzero eigenvalues {σ_1, σ_5, σ_6}, we divide the possible region of λ into four subregions:

$$\left(-\infty, -\frac{1}{\sigma_1}\right],\;\; \left(-\frac{1}{\sigma_1}, -\frac{1}{\sigma_6}\right),\;\; \left[-\frac{1}{\sigma_6}, -\frac{1}{\sigma_5}\right),\;\; \left[-\frac{1}{\sigma_5}, +\infty\right). \tag{48}$$

In the subregion λ ∈ (−1/σ_1, −1/σ_6) (i.e., λ > −1/σ_1 and λ < −1/σ_6), we derive the upper and lower bounds of λσ_1, λσ_2, λσ_3, λσ_4, λσ_5, and λσ_6 using the subregion bounds −1/σ_1 and −1/σ_6, σ_1 > 0 > σ_5 ≥ σ_6, and σ_2 = σ_3 = σ_4 = 0:

$$1 + \frac{-1}{\sigma_6}\sigma_1 > 1 + \lambda\sigma_1 > 1 + \frac{-1}{\sigma_1}\sigma_1 = 0, \tag{49}$$

$$1 + \lambda\sigma_2 = 1 + \lambda\times 0 = 1, \tag{50}$$

$$1 + \lambda\sigma_3 = 1 + \lambda\times 0 = 1, \tag{51}$$

$$1 + \lambda\sigma_4 = 1 + \lambda\times 0 = 1, \tag{52}$$

$$0 < 1 + \frac{-1}{\sigma_6}\sigma_5 < 1 + \lambda\sigma_5 < 1 + \frac{-1}{\sigma_1}\sigma_5, \tag{53}$$

and

$$0 = 1 + \frac{-1}{\sigma_6}\sigma_6 < 1 + \lambda\sigma_6 < 1 + \frac{-1}{\sigma_1}\sigma_6, \tag{54}$$

which imply that in the range λ ∈ (−1/σ_1, −1/σ_6), all the eigenvalues of I + λR^{−0.5}CR^{−0.5}, i.e., 1 + λσ_k, are positive, and thus all the eigenvalues of R + λC are positive. Therefore, when λ ∈ (−1/σ_1, −1/σ_6), R + λC is positive definite.

Then, we discuss how to find the optimal Lagrange multiplier in this range, denoted by λ°, which satisfies g(λ°) = 0. We rewrite g(λ) as:

$$\begin{aligned} g(\lambda) &= \mathbf{d}^T \left[\mathbf{R}^{0.5}\left(\mathbf{I} + \lambda\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5}\right)\mathbf{R}^{0.5}\right]^{-1} \mathbf{C} \left[\mathbf{R}^{0.5}\left(\mathbf{I} + \lambda\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5}\right)\mathbf{R}^{0.5}\right]^{-1} \mathbf{d} - 1 \\ &= \mathbf{d}^T \mathbf{R}^{-0.5}\left(\mathbf{I} + \lambda\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5}\right)^{-1} \mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5} \left(\mathbf{I} + \lambda\mathbf{R}^{-0.5}\mathbf{C}\mathbf{R}^{-0.5}\right)^{-1} \mathbf{R}^{-0.5}\mathbf{d} - 1 \\ &= \mathbf{d}^T \mathbf{R}^{-0.5}\left(\mathbf{U}\mathbf{U}^T + \lambda\mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^T\right)^{-1} \mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^T \left(\mathbf{U}\mathbf{U}^T + \lambda\mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^T\right)^{-1} \mathbf{R}^{-0.5}\mathbf{d} - 1 \\ &= \mathbf{d}^T \mathbf{R}^{-0.5}\mathbf{U}\left(\mathbf{I} + \lambda\boldsymbol{\Sigma}\right)^{-1}\boldsymbol{\Sigma}\left(\mathbf{I} + \lambda\boldsymbol{\Sigma}\right)^{-1}\mathbf{U}^T\mathbf{R}^{-0.5}\mathbf{d} - 1 \\ &= \sum_{k=1}^{6} \frac{\gamma_k^2 \sigma_k}{(1 + \lambda\sigma_k)^2} - 1 = \frac{\gamma_1^2\sigma_1}{(1 + \lambda\sigma_1)^2} + \frac{\gamma_5^2\sigma_5}{(1 + \lambda\sigma_5)^2} + \frac{\gamma_6^2\sigma_6}{(1 + \lambda\sigma_6)^2} - 1, \end{aligned} \tag{55}$$

where

$$\gamma_k = \mathbf{d}^T \mathbf{R}^{-0.5}\mathbf{u}_k, \quad k = 1, \cdots, 6, \tag{56}$$

U^T U = I is applied, and the last equality follows from σ_2 = σ_3 = σ_4 = 0. Since

$$\lim_{\lambda\to\left(-1/\sigma_1\right)^+} \frac{\gamma_1^2\sigma_1}{(1 + \lambda\sigma_1)^2} = +\infty, \tag{57}$$

$$\lim_{\lambda\to\left(-1/\sigma_1\right)^+} \frac{\gamma_5^2\sigma_5}{(1 + \lambda\sigma_5)^2} = \frac{\gamma_5^2\sigma_5\sigma_1^2}{(\sigma_1 - \sigma_5)^2}, \tag{58}$$

and

$$\lim_{\lambda\to\left(-1/\sigma_1\right)^+} \frac{\gamma_6^2\sigma_6}{(1 + \lambda\sigma_6)^2} = \frac{\gamma_6^2\sigma_6\sigma_1^2}{(\sigma_1 - \sigma_6)^2}, \tag{59}$$

we have

$$\lim_{\lambda\to\left(-1/\sigma_1\right)^+} g(\lambda) = +\infty, \tag{60}$$

where λ → (−1/σ_1)^+ means that λ approaches −1/σ_1 from the right side. Moreover, since the limits

$$\lim_{\lambda\to\left(-1/\sigma_6\right)^-} \frac{\gamma_1^2\sigma_1}{(1 + \lambda\sigma_1)^2} = \frac{\gamma_1^2\sigma_1\sigma_6^2}{(\sigma_6 - \sigma_1)^2}, \tag{61}$$

$$\lim_{\lambda\to\left(-1/\sigma_6\right)^-} \frac{\gamma_5^2\sigma_5}{(1 + \lambda\sigma_5)^2} = \frac{\gamma_5^2\sigma_5\sigma_6^2}{(\sigma_6 - \sigma_5)^2}, \tag{62}$$

and

$$\lim_{\lambda\to\left(-1/\sigma_6\right)^-} \frac{\gamma_6^2\sigma_6}{(1 + \lambda\sigma_6)^2} = -\infty \tag{63}$$

hold, we have

$$\lim_{\lambda\to\left(-1/\sigma_6\right)^-} g(\lambda) = -\infty, \tag{64}$$

where λ → (−1/σ_6)^− means that λ approaches −1/σ_6 from the left side. Furthermore, in the subregion λ ∈ (−1/σ_1, −1/σ_6),

$$\frac{\partial g(\lambda)}{\partial \lambda} = -2\sum_{k=1}^{6} \frac{\gamma_k^2\sigma_k^2}{(1 + \lambda\sigma_k)^3} < 0. \tag{65}$$

Based on (60), (64), and (65), it is easily found that g(λ) is a monotonically decreasing function of λ ∈ (−1/σ_1, −1/σ_6), and thus there is a unique solution λ° to g(λ°) = 0 in the subregion (−1/σ_1, −1/σ_6). Then, we utilize the bisection method to obtain λ° in this range.
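A minimal numerical sketch of this procedure (not the authors' code) is given below: it forms R^{−0.5}, obtains the σ_k and γ_k of (46) and (56), and then bisects g(λ) of (55) on the open interval (−1/σ_1, −1/σ_6). The tolerance, the iteration cap, and the small interval margin are assumptions of the sketch.

```python
# Sketch: find the root of g(lambda) = 0 in (-1/sigma_1, -1/sigma_6) by bisection,
# following Appendix B. Illustrative only; not the authors' implementation.
import numpy as np
from scipy.linalg import eigh

def solve_multiplier(R, C, d, tol=1e-10, max_iter=200):
    w, V = eigh(R)                                  # R is positive definite
    R_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    sigma, U = eigh(R_inv_half @ C @ R_inv_half)    # eigenpairs of (46), ascending order
    gamma = U.T @ (R_inv_half @ d)                  # gamma_k = d^T R^{-0.5} u_k, cf. (56)

    def g(lam):                                     # scalar form of g(lambda), cf. (55)
        return np.sum(gamma**2 * sigma / (1.0 + lam * sigma) ** 2) - 1.0

    eps = 1e-8
    lo = -1.0 / sigma[-1] + eps                     # sigma_1 = largest (positive) eigenvalue
    hi = -1.0 / sigma[0] - eps                      # sigma_6 = smallest (negative) eigenvalue
    for _ in range(max_iter):                       # g is decreasing: g(lo) > 0 > g(hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    a = -np.linalg.solve(R + lam * C, d)            # back-substitute into (25)
    return lam, a
```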

Appendix C. Solution to (29)

Let us first discuss the case of z̃_i ≥ 0: i) |z_i|^p is an even function whose symmetry center is z_i = 0, and it increases with |z_i|; ii) (z_i − z̃_i)^2 is also an even function, whose symmetry center is z_i = z̃_i; in particular, (z_i − z̃_i)^2 increases with z_i in the region [z̃_i, +∞) but decreases with increasing z_i in the region (−∞, 0]; and iii) consequently, h(z_i) is monotonically increasing in the region [z̃_i, +∞) and monotonically decreasing in the region (−∞, 0] [34]. In particular, h(z̃_i) and h(0) are the local minima over the regions [z̃_i, +∞) and (−∞, 0], respectively. Therefore, the global minimum of h(z_i) appears in the region [0, z̃_i], and thus (29) can be simplified into:

$$\min_{z_i} \; h(z_i) = z_i^p + \frac{\bar{\rho}}{2}\left(z_i - \tilde{z}_i\right)^2, \quad \text{s.t.} \;\; z_i \in [0, \tilde{z}_i]. \tag{66}$$

On the other hand, when z̃_i < 0, we define z̄_i = −z̃_i and z̆_i = −z_i. Then |z_i|^p + (ρ̄/2)(z_i − z̃_i)^2 becomes |z̆_i|^p + (ρ̄/2)(z̆_i − z̄_i)^2, where z̄_i > 0, which implies that the problem can be transformed into the same form as (66). Therefore, we discuss only the case of z̃_i ≥ 0 and focus on the optimization problem in (66).

Now we discuss the convexity or concavity of z_i^p + (ρ̄/2)(z_i − z̃_i)^2 in the region z_i ∈ [0, z̃_i]. Its first-, second-, and third-order derivatives with respect to z_i are given by [34,39,40]:

$$\frac{\partial h(z_i)}{\partial z_i} = p z_i^{p-1} + \bar{\rho}\left(z_i - \tilde{z}_i\right), \tag{67}$$

$$\frac{\partial^2 h(z_i)}{\partial z_i^2} = p(p-1) z_i^{p-2} + \bar{\rho}, \tag{68}$$

and

$$\frac{\partial^3 h(z_i)}{\partial z_i^3} = p(p-1)(p-2) z_i^{p-3}. \tag{69}$$

Then, we discuss (66) in three cases according to the value of p:


Case 1 (1 < p ≤ 2): Since ∂²h(z_i)/∂z_i² > 0 holds for all z_i ∈ [0, z̃_i], the objective function h(z_i) is convex for z_i ∈ [0, z̃_i]. Note that ∂h(z_i)/∂z_i |_{z_i=0} < 0 and ∂h(z_i)/∂z_i |_{z_i=z̃_i} > 0, and thus h(z_i) has a unique minimum in the region z_i ∈ [0, z̃_i]. Therefore, we can obtain the minimum by applying a simple bisection method to find the root of the nonlinear equation ∂h(z_i)/∂z_i = p z_i^{p−1} + ρ̄(z_i − z̃_i) = 0 in the region [0, z̃_i] [34].

Case 2 (p = 1): When p = 1, (66) is equivalent to

$$\min_{z_i} \; \frac{\bar{\rho}}{2}\left(z_i - \left(\tilde{z}_i - \frac{1}{\bar{\rho}}\right)\right)^2, \quad \text{s.t.} \;\; z_i \in [0, \tilde{z}_i]. \tag{70}$$

The solution is given by:

$$z_i = \begin{cases} \tilde{z}_i - \dfrac{1}{\bar{\rho}}, & \text{if } \tilde{z}_i - \dfrac{1}{\bar{\rho}} > 0, \\ 0, & \text{if } \tilde{z}_i - \dfrac{1}{\bar{\rho}} \le 0. \end{cases} \tag{71}$$

Case 3 (0 < p < 1): In this case, ∂³h(z_i)/∂z_i³ > 0 in the region [0, z̃_i], and thus ∂²h(z_i)/∂z_i² is a monotonically increasing function. Since the convexity or concavity of h(z_i) depends on the sign of ∂²h(z_i)/∂z_i², we need to determine this sign. To facilitate the analysis, we derive the root of ∂²h(z_i)/∂z_i² = 0 as:

$$\hat{z}_i = \left(\frac{\bar{\rho}}{p(1-p)}\right)^{\frac{1}{p-2}} > 0. \tag{72}$$

Obviously, when z_i > ẑ_i, ∂²h(z_i)/∂z_i² > 0 due to the monotonically increasing property of ∂²h(z_i)/∂z_i², and thus h(z_i) is convex in the region [ẑ_i, +∞). For the same reason, ∂²h(z_i)/∂z_i² < 0 in the region [0, ẑ_i], and thus h(z_i) is concave there. Since the focus here is on the minimum of h(z_i) in [0, z̃_i], we divide Case 3 into two subcases according to the value of ẑ_i:

Case 3-A (ẑ_i ≥ z̃_i): Since ∂²h(z_i)/∂z_i² ≤ 0, h(z_i) is concave. Thus, we can determine the minimum from the function values at the boundary points z_i = 0 and z_i = z̃_i.

Case 3-B (ẑ_i < z̃_i): In this case, ∂²h(z_i)/∂z_i² may be positive or negative. Therefore, according to the sign of ∂²h(z_i)/∂z_i², we divide the region [0, z̃_i] into two parts, i.e., [0, ẑ_i] and (ẑ_i, z̃_i].

Case 3-B-i: In the former region [0, ẑ_i], ∂²h(z_i)/∂z_i² < 0 and thus h(z_i) is concave. We can determine the two function values at the boundary points z_i = 0 and z_i = ẑ_i of the region [0, ẑ_i].

Case 3-B-ii: In the latter region (ẑ_i, z̃_i], ∂²h(z_i)/∂z_i² > 0 and thus h(z_i) is convex. Since ∂h(z_i)/∂z_i |_{z_i=z̃_i} > 0, we can adopt different strategies to obtain the minimum: if ∂h(z_i)/∂z_i |_{z_i=ẑ_i} > 0, h(z_i) is a monotonically increasing function of z_i in the region [ẑ_i, z̃_i], and therefore the minimum is attained at z_i = ẑ_i; otherwise, if ∂h(z_i)/∂z_i |_{z_i=ẑ_i} ≤ 0, we apply a simple bisection method to obtain the minimum in [ẑ_i, z̃_i].

Finally, the global minimum of h(z_i) for Case 3-B is obtained by comparing the local minima found in the regions [0, ẑ_i] and (ẑ_i, z̃_i].
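The case analysis above translates directly into a short scalar solver. The sketch below (illustrative, not the authors' code) returns the minimizer of (29)/(66) for a given z̃_i, handling z̃_i < 0 by the sign flip described above; the bisection tolerance is an assumption of the sketch.

```python
# Sketch: solver for the scalar subproblem (29)/(66), following the case
# analysis of Appendix C. Illustrative only.
import numpy as np

def solve_z_scalar(z_tilde, p, rho_bar, tol=1e-10):
    zt = abs(z_tilde)                       # work with z_tilde >= 0, flip sign at the end
    h = lambda z: abs(z) ** p + 0.5 * rho_bar * (z - zt) ** 2
    dh = lambda z: p * z ** (p - 1) + rho_bar * (z - zt)   # (67), evaluated for z > 0
    if zt == 0.0:
        z = 0.0
    elif p == 1.0:                          # Case 2: soft threshold, cf. (70)-(71)
        z = max(zt - 1.0 / rho_bar, 0.0)
    elif p > 1.0:                           # Case 1: convex, root of (67) in [0, zt]
        z = _bisect_root(dh, 0.0, zt, tol)
    else:                                   # Case 3: 0 < p < 1
        z_hat = (rho_bar / (p * (1.0 - p))) ** (1.0 / (p - 2.0))   # (72)
        if z_hat >= zt:                     # Case 3-A: concave, compare boundary points
            z = min((0.0, zt), key=h)
        else:                               # Case 3-B
            cands = [0.0, z_hat]            # Case 3-B-i: boundary points of [0, z_hat]
            if dh(z_hat) <= 0.0:            # Case 3-B-ii: interior root in [z_hat, zt]
                cands.append(_bisect_root(dh, z_hat, zt, tol))
            z = min(cands, key=h)
    return np.sign(z_tilde) * z

def _bisect_root(f, lo, hi, tol):
    """Bisection for the sign-change root of an increasing f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Calling solve_z_scalar(z_tilde_i, p, rho_bar) for each i, with ρ̄ = ρ/α as in (30), gives the exact z-update that the bounded search in the earlier loop sketch only approximates.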

References

[1] L. Luo, Z. Lin, X. Lai, New optimal method for complicated assembly curves fitting, Int. J. Adv. Manuf. Technol. 21 (10–11) (2003) 896–901.
[2] H.Y. Tseng, C.C. Lin, A simulated annealing approach for curve fitting in automated manufacturing systems, J. Manuf. Technol. 18 (2) (2007) 202–216.


[3] Y.T. Chan, Y.Z. Elhalwagy, S.M. Thomas, Estimation of circle parameters by centroiding, J. Optim. Theory Appl. 114 (2) (2002) 363–371.
[4] H. Junrn, W. Aaiyun, X. Liu, Shape reconstruction by genetic algorithms and artificial neural networks, Eng. Comput. 20 (2) (2003) 129–151.
[5] M.C. Chen, D.M. Tsai, H.Y. Tseng, A stochastic optimization approach for roundness measurement, Pattern Recognit. Lett. 20 (7) (1999) 707–719.
[6] H.Y. Tseng, Welding parameters optimization for economic design using neural approximation and genetic algorithm, Int. J. Adv. Manuf. Technol. 27 (9) (2006) 897–901.
[7] T. Ellis, A. Abbood, B. Brillault, Ellipse detection and matching with uncertainty, Image Vis. Comput. 10 (5) (1992) 271–276.
[8] R. Halir, J. Flusser, Numerically stable least squares fitting of ellipses, in: Proc. Sixth Int'l Conf. in Central Europe on Computer Graphics, Visualization, and Interactive Digital Media, 1, 1998, pp. 125–132.
[9] J. Porrill, Fitting ellipses and predicting confidence envelopes using a bias corrected Kalman filter, Image Vis. Comput. 8 (1) (1990) 37–41.
[10] V.F. Leavers, Shape Detection in Computer Vision Using the Hough Transform, Springer-Verlag, New York, 1992.
[11] S.J. Ahn, W. Rauh, H.J. Warnecke, Least-squares orthogonal distances fitting of circle, sphere, ellipse, hyperbola, and parabola, Pattern Recognit. 34 (12) (2001) 2283–2303.
[12] P.L. Rosin, A note on the least squares fitting of ellipses, Pattern Recognit. Lett. 14 (10) (1993) 799–808.
[13] E.S. Maini, Enhanced direct least squares fitting of ellipses, Int. J. Pattern Recognit. Artif. Intell. 20 (6) (2012) 939–954.
[14] A. Fitzgibbon, M. Pilu, R. Fisher, Direct least square fitting of ellipses, IEEE Trans. Pattern Anal. Mach. Intell. 21 (5) (1999) 476–480.
[15] D.S. Barwick, Very fast best-fit circular and elliptical boundaries by chord data, IEEE Trans. Pattern Anal. Mach. Intell. 31 (6) (2009) 1147–1152.
[16] W. Gander, R. Strebel, G.H. Golub, Least-squares fitting of circles and ellipses, BIT 34 (4) (1994) 558–578.
[17] J. Liang, M. Zhang, D. Liu, Robust ellipse fitting based on sparse combination of data points, IEEE Trans. Image Process. 22 (6) (2013) 2207–2218.
[18] J. Liang, Y. Wang, X. Zeng, Robust ellipse fitting via half-quadratic and semidefinite relaxation optimization, IEEE Trans. Image Process. 24 (11) (2015) 4276–4286.
[19] D. Liu, J. Liang, A Bayesian approach to diameter estimation in the diameter control system of silicon single crystal growth, IEEE Trans. Instrum. Meas. 60 (4) (2011) 1307–1315.
[20] C. Liu, W. Hu, Relative pose estimation for cylinder-shaped spacecrafts using single image, IEEE Trans. Aerosp. Electron. Syst. 50 (4) (2014) 3036–3056.
[21] C. Panagiotakis, A.A. Argyros, Cell segmentation via region-based ellipse fitting, in: 2018 25th IEEE International Conference on Image Processing (ICIP), 2018, pp. 2426–2430.
[22] M. Liao, Y.Q. Zhao, X.H. Li, et al., Automatic segmentation for cell images based on bottleneck detection and ellipse fitting, Neurocomputing 173 (2016) 615–622.
[23] V. Zlokolica, L. Krstanovic, L. Velicki, et al., Semiautomatic epicardial fat segmentation based on fuzzy c-means clustering and geometric ellipse fitting, J. Healthcare Eng. (2017), doi:10.1155/2017/5817970.
[24] http://weibo.com/p/1005051921858037/photos, accessed 16 Feb. 2018.
[25] J. Liang, G. Yu, P. Li, Ellipse fitting via low-rank generalized multidimensional scaling matrix recovery, Multidimension. Syst. Signal Process. 29 (1) (2018) 49–75.
[26] J. Augustyn, M. Kampik, Application of ellipse fitting algorithm in incoherent sampling measurement of complex ratio of AC voltages, IEEE Trans. Instrum. Meas. 66 (6) (2017) 1117–1123.
[27] J. Li, Y. Wang, B. Lei, et al., Automatic fetal head circumference measurement in ultrasound using random forest and fast ellipse fitting, IEEE J. Biomed. Health Inf. 22 (1) (2017) 215–223.
[28] E. Sobhani, M. Sadeghi, M. Babaie-zadeh, et al., A robust ellipse fitting algorithm based on sparsity of outliers, in: 25th European Signal Processing Conference, 2017, pp. 1195–1199.
[29] M. Shao, Y. Ijiri, K. Hattori, Grouped outlier removal for robust ellipse fitting, in: 14th IAPR International Conference on Machine Vision Applications, 2015, pp. 138–141.
[30] Z. Lin, Y. Huang, Fast multidimensional ellipsoid-specific fitting by alternating direction method of multipliers, IEEE Trans. Pattern Anal. Mach. Intell. 38 (5) (2016) 1021–1026.
[31] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (1) (2011) 1–122.
[32] D. Gabay, Applications of the method of multipliers to variational inequalities, in: Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems, North-Holland, Amsterdam, 1983.
[33] J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program. 55 (3) (1992) 293–318.
[34] S.P. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[35] P.J. Huber, E.M. Ronchetti, Robust Statistics, Wiley, Hoboken, NJ, 2009.
[36] C.L. Nikias, X. Ma, Signal Processing with Alpha-Stable Distributions and Applications, John Wiley & Sons, 1995.
[37] P. Tsakalides, C.L. Nikias, Robust adaptive beamforming in alpha-stable noise environments, in: Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., May 1996, pp. 2884–2887.


[38] X. Jiang, W.J. Zeng, A. Yasotharan, H.C. So, T. Kirubarajan, Minimum dispersion beamforming for non-Gaussian signals, IEEE Trans. Signal Process. 62 (7) (2014) 1879–1893.
[39] F. Nie, H. Wang, H. Huang, C. Ding, Joint Schatten p-norm and ℓp-norm robust matrix completion for missing value recovery, Knowl. Inf. Syst. 42 (3) (2015) 525–544.
[40] J. Liang, X. Zhang, H.C. So, D. Zhou, Sparse array beampattern synthesis via alternating direction method of multipliers, IEEE Trans. Antennas Propag. 66 (5) (2018) 2333–2345.
[41] W.J. Zeng, H.C. So, L. Huang, ℓp-MUSIC: robust direction-of-arrival estimator for impulsive noise environments, IEEE Trans. Signal Process. 61 (17) (2013) 4296–4308.
[42] W.J. Zeng, H.C. So, A.M. Zoubir, An ℓp-norm minimization approach to time delay estimation in impulsive noise, Digital Signal Process. 23 (4) (2013) 1247–1254.
[43] F. Moghimi, A. Nasri, R. Schober, Lp-norm spectrum sensing for cognitive radio networks impaired by non-Gaussian noise, in: Proc. GLOBECOM, Honolulu, HI, USA, Nov. 30–Dec. 4, 2009, pp. 1–6.
[44] M. Hong, Z.Q. Luo, M. Razaviyayn, Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems, in: Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, Apr. 2015, pp. 3836–3840.
[45] J.J. More, Generalizations of the trust region subproblem, Optim. Methods Softw. 2 (1993) 189–209.
[46] A. Beck, P. Stoica, J. Li, Exact and approximate solution of source localization problems, IEEE Trans. Signal Process. 56 (5) (2008) 1770–1778.
[47] B.W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and Hall, London, U.K., 1986.
[48] J. Daugman, How iris recognition works, IEEE Trans. Circuits Syst. Video Technol. 14 (1) (2004) 21–30.
[49] http://www.cs.princeton.edu/~andyz/irisrecognition, accessed 5 June 2012.
[50] R.C. Gonzalez, R.E. Woods, Digital Image Processing, second ed., Prentice Hall, Upper Saddle River, NJ, 2002.
[51] http://en.wikipedia.org/wiki/Voyager_1, accessed 5 June 2012.
[52] http://www.mobilemag.com/tag/voyager-2/, accessed 5 June 2012.
[53] Y. Wang, J. Yang, W. Yin, et al., A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci. 1 (3) (2008) 248–272.