
Available online at www.sciencedirect.com

Precision Engineering 32 (2008) 232–237

Technical note

Geometric tolerance evaluation: A discussion on minimum zone fitting algorithms

Giovanni Moroni, Stefano Petrò ∗

Sezione di Tecnologie Meccaniche e Sistemi di Produzione, Dipartimento di Meccanica, Politecnico di Milano, Via La Masa 34, 20156 Milano, Italy

Received 22 May 2006; received in revised form 24 July 2007; accepted 16 August 2007. Available online 21 November 2007.

Abstract

Estimating any geometric tolerance requires sampling a cloud of points on the feature to be checked, and then fitting an ideal substitute geometry with respect to which errors are evaluated. Usually, points are sampled by means of a CMM, and a suitable software algorithm is then applied to them. Algorithms found in the literature show some limitations: they may be very slow, they may yield only an approximate solution, or they may fail to manage some degenerate situations. We propose a categorization of the algorithms found in the literature, highlighting their merits and defects. Moreover, a new algorithm is proposed and tested against classical algorithms to show its advantages.
© 2007 Elsevier Inc. All rights reserved.

Keywords: CMM; Estimation; Geometric tolerance; Fitting; Roundness; Straightness; Combinatorial algorithm; Essential subset; Minimum zone

1. Introduction

According to ISO 1101 [1], “A geometrical tolerance applied to a feature defines the tolerance zone within which that feature shall be contained”. To evaluate whether a real feature complies with the tolerance zone or not, the geometric error has to be estimated. This estimation involves the sampling of a cloud of points on the surface, followed by the fitting of the cloud: a “substitute geometry” is identified for it, according to some criterion. In this work, the problem of quickly fitting the substitute geometry is addressed; the problem of choosing the correct sampling strategy is not. It is quite unusual to take high-density samples, and standard algorithms can efficiently manage up to some hundreds of points. However, in our experience, fitting a cloud of 300,000 points can take up to fifteen minutes, which is unacceptable in a production environment. Even if a directly sampled cloud of 300,000 points may never have to be faced in practice, reconstruction algorithms [2–4] will produce this situation. This new approach to geometric error estimation consists in defining a model for the



∗ Corresponding author. Tel.: +39 02 23998530; fax: +39 02 23998585. E-mail address: [email protected] (S. Petrò).

0141-6359/$ – see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.precisioneng.2007.08.007

typical behavior of the surface, i.e. the “manufacturing signature”, by densely sampling a few parts; subsequently produced parts are then sampled at a reduced number of sampling locations, and their surfaces are “reconstructed”. Even if the reconstruction may be analytical, a direct evaluation of the tolerance from this analytical reconstruction is impossible, so a very large number of fitted sampling points is calculated. Therefore, new fitting algorithms are required. In this article we focus on roundness profile fitting and roundness tolerance estimation, using, as required by the ISO/TS 12181-2 [5] standard, the Minimum Zone (MZ) criterion, the application of which requires the solution of a strongly non-linear problem. Because examples concerning circularity are hard to illustrate clearly, we take some instances from straightness when explaining the “essential subset”. However, the algorithm we propose can easily be extended to any sort of geometry to reduce the computational effort, provided an exact but slow algorithm is known for that geometry.

2. State of the art

ISO/TS 12181-2 suggests that the Minimum Zone criterion should be applied to evaluate roundness; it consists in finding two concentric circles containing the sampled profile having

G. Moroni, S. Petrò / Precision Engineering 32 (2008) 232–237

the minimum separation: these circles define the “tolerance zone”. The MZ criterion involves the solution of a strongly non-linear problem; solving it with standard optimization techniques usually leads to local minima or non-convergence problems. Several alternative algorithms have been proposed, which can be split into four categories: computational geometry based approaches, brute force algorithms, optimization techniques, and approximated approaches. Several algorithms based on computational geometry have been proposed, e.g. [6–10]. These algorithms usually require the nearest and the farthest Voronoi diagrams to be calculated for the point cloud; the MZ center is then found at the intersection of the two diagrams. The drawback of methods based on Voronoi diagrams, which ensure the exact solution, is the Voronoi diagrams themselves, which take a long time to calculate. Another possible approach is “brute force”: if an exact solution of the MZ problem is available for a subset of the sampled points, the tolerance zone is calculated for every possible subset; then, among the “feasible” zones containing the whole cloud of points, the one having the minimum separation is chosen. An algorithm of this kind is developed in Appendix A. However, if the sample size is greater than one hundred points, the required computational effort becomes excessive. Therefore, we look for a reduction of the number of subsets to be considered. Some algorithms have been proposed in [11,12], which highlight the relation of this approach to linear programming; the proposed solution is based on the use of “limaçons”, so the solution is approximate. A reference work in the field of fitting algorithms is [13]: the MZ problem and local and global optimality criteria are exactly stated, and a classification of fitting algorithms is proposed.
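The MZ roundness criterion discussed above can be made concrete with a short sketch. The following is written in Python rather than the Matlab used later in the paper, and the test profile is hypothetical, not data from the paper: for a candidate center, the objective is simply the width of the annular zone, i.e. the maximum minus the minimum point-to-center distance.

```python
import math
import random

def mz_objective(points, cx, cy):
    """Width of the annular zone centred at (cx, cy): max radius minus min radius."""
    radii = [math.hypot(x - cx, y - cy) for (x, y) in points]
    return max(radii) - min(radii)

# Hypothetical test profile in the spirit of Eq. (2) below: nominal radius r0
# plus small uniform noise, sampled at N equally spaced angles.
random.seed(0)
r0, N, noise = 10.0, 360, 0.01
points = []
for i in range(N):
    theta = 2 * math.pi * i / N
    r = r0 + random.uniform(-noise, noise)
    points.append((r * math.cos(theta), r * math.sin(theta)))

print(mz_objective(points, 0.0, 0.0))  # zone width at the nominal centre, < 0.02
```

Minimizing this function over (cx, cy) is the strongly non-linear problem the section describes: the objective is non-smooth (it involves max and min), which is why standard descent techniques can stall in local minima.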
This article is strictly linked to [14], which develops the optimality theory and proposes the algorithm we have developed and improved; this algorithm will be briefly explained later. Some more “combinatorial” approaches have been proposed in the literature: instances may be found in [15–17], but these algorithms do not seem to guarantee convergence, and they are outclassed by the “steepest descent” algorithm proposed in [18]. This algorithm is the quickest we tested, but it has two defects: it cannot manage some degenerate situations (a deficiency admitted by the authors themselves), and the solution is guaranteed to be optimal only if the initial center position is not far from the real MZ center. Because MZ fitting is essentially an optimization problem, linear programming and other optimization techniques have been applied to solve it. An instance of these methods is [19], which applies a modification of the Nelder and Mead simplex method to estimate the roundness tolerance. However, the MZ problem is not linear, so the solution is approximate. Optimization methods can also be modified to generate approximated methods, in which a sequence of linear problems is solved so that the solutions converge to the exact one. The most famous algorithm of this kind is the Carr–Ferreira algorithm [20,21], originally developed for flatness, straightness, and cylindricity; in [22,23] it has been shown that this algorithm can be generalized to circularity. Even if the found solution is


a very good approximation of the exact solution, this algorithm requires a long time if there are more than 1000 points.

3. Reducing the sample size: convergence problems

As already stated, the application of brute force algorithms (BFA) leads to long calculation times: if m points are required to define the tolerance, and the sample size is n, then

N = n! / (m! (n − m)!)    (1)
subsets have to be considered. Criteria for choosing which subsets to consider, while ensuring the found solution is optimal, therefore need to be found. A theorem has been demonstrated [14] stating that, if a particular tolerance zone is minimal for a subset of the point cloud, and the whole set belongs to that tolerance zone, then this tolerance zone is the solution of the MZ problem. Moreover, let us define the essential subset: consider a sample set S and a subset Sr of this sample, and suppose the optimal solution for Sr is optimal for S, too. Now discard a point from Sr: if the optimal solution for the resulting set Sr−1 changes, regardless of which point is discarded, then Sr is an “Essential Subset” (ES) of S. This definition tells us that, in a cloud of points, only a few points carry the complete information about the MZ tolerance, i.e. only a few points are needed to calculate the exact MZ solution even for a large sample size. It might be thought that only contacting points (those lying on the border of the tolerance zone) constitute the essential subset, so that every essential subset for a particular geometry should consist of the same number of points. Even if this is usually true, it is not always the case. Anomalous situations are rare for roundness, but appear quite frequently in straightness evaluation, so a straightness example is proposed. Consider the 4-point set shown in Fig. 1; as is evident, the tolerance zone is the one indicated by continuous lines, and point A does not seem to affect the zone itself: it should be discarded from the essential subset. But if point A is eliminated, the new tolerance zone is the one indicated by dashed lines. Then, no point

Fig. 1. Essential subset.
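To make the straightness discussion concrete: the minimum-zone straightness of a small point set can be computed by brute force, since the narrowest enclosing band is parallel to a line through two of the points. The sketch below is an illustrative Python implementation (the coordinates are hypothetical, not those of Fig. 1): it evaluates the band width for every pair direction and keeps the minimum.

```python
import itertools
import math

def band_width(points, p, q):
    """Width of the narrowest band parallel to segment p-q that contains all points."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    norm = math.hypot(dx, dy)
    # Signed distance of each point from the line through p and q.
    d = [((y - p[1]) * dx - (x - p[0]) * dy) / norm for (x, y) in points]
    return max(d) - min(d)

def mz_straightness(points):
    """Minimum-zone straightness by brute force over all point pairs as band directions."""
    return min(band_width(points, p, q)
               for p, q in itertools.combinations(points, 2))

# Hypothetical 4-point zig-zag set (not the actual coordinates of Fig. 1).
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0)]
print(mz_straightness(pts))  # 1.0 for this set
```

This pairwise enumeration already costs O(n³) and illustrates why brute force becomes impractical as the sample grows.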



Fig. 2. Degenerate points distribution.
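The degenerate 6-point configuration of Fig. 2 is easy to construct: three points on the inner circle and three on the outer, at the same angles, forming two matched equilateral triangles. A minimal Python sketch (the radii are hypothetical) builds it and checks the stated property that corresponding vertices are separated by exactly the zone amplitude t:

```python
import math

def degenerate_roundness_points(r_inner=10.0, t=0.1):
    """Two matched equilateral triangles: three points on the inner circle and
    three on the outer circle, at the same angles, as in Fig. 2."""
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    inner = [(r_inner * math.cos(a), r_inner * math.sin(a)) for a in angles]
    outer = [((r_inner + t) * math.cos(a), (r_inner + t) * math.sin(a)) for a in angles]
    return inner, outer

inner, outer = degenerate_roundness_points()
# Each corresponding vertex pair is separated by exactly the zone amplitude t.
gaps = [math.hypot(xo - xi, yo - yi) for (xi, yi), (xo, yo) in zip(inner, outer)]
print(gaps)
```

Because all six points are contacting points, such a set is a useful stress test for fitting algorithms, as the text notes.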

can be eliminated: the essential subset is constituted by these 4 points.
A “speed-up” algorithm, improving BFA performance, can then be introduced: first, take a subset of the sampling points (usually a single point in the beginning) and, by means of the BFA, estimate an MZ tolerance for it. If the related tolerance zone covers the whole point set, the MZ solution is found; otherwise, the subset is checked for “essentiality”, redundant points, if present, are removed, a point external to the tolerance zone is added, and the algorithm is iterated. A degenerate case is well known for roundness: an essential subset constituted by 6 points is possible if there are 3 points on the inner circumference and three on the outer, each triple forming an equilateral triangle, with the triangle vertices corresponding to each other (Fig. 2). Because the reciprocal distance of corresponding vertices is equal to the amplitude of the tolerance zone, this point distribution may be used to test the effectiveness of fitting algorithms; obviously, however, this point distribution is not possible in any real case. Intuitively, the speed-up algorithm cannot solve this case: a modification of the algorithm is therefore required.

4. Loop control algorithm

The algorithm we propose is a modification of the speed-up algorithm that is quicker than the original and able to solve degenerate cases. The essentiality check is often more computationally intensive than the zone fitting, but situations wherein a non-contacting point of the set belongs to the essential subset seldom appear. Therefore, we propose not to check subset essentiality, but instead to directly discard the non-contacting point(s) of the subset. Removing this step reduces computation time, but convergence of the algorithm is no longer ensured: the algorithm could enter an endless loop. A loop situation, however, can easily be detected: it suffices to compare the current subset with those previously considered; if a loop happens, applying the BFA to the union of all the subsets considered since the first appearance of the “loop subset” ensures that the loop breaks off.
A further improvement of the algorithm consists in choosing, as the next point to include in the subset, the point lying farthest outside the tolerance zone, as suggested in [11,12]. We call the resulting algorithm the “Loop Control Algorithm” (LCA). No crash situation has been found for this algorithm, unlike the steepest descent [18] and speed-up [13] algorithms. It is fast (perhaps the fastest, at least compared to our software implementations of the other algorithms). We think further improvements should be sought in a better implementation of the BFA: as shown in Appendix A, the version used here is very “rough”. However, when considering large point sets, the bottleneck of the algorithm is not the BFA application, but the constraint verification (the algorithm termination condition), which is hard to improve.

5. Algorithm comparison

The loop control algorithm has been implemented for roundness and compared with several algorithms for MZ roundness found in the literature, all implemented in the programming language “Matlab 7.1”. The same computer (a laptop based on a 1.6 GHz Intel Pentium Mobile IV) has been used for every algorithm. Profiles used for testing were randomly generated, in polar coordinates, according to

r(θi) = r0 + wi,  θi = 2π (i − 1) / N,  i ∈ [1, N]    (2)

where r0 is the “nominal” radius and the wi are uniformly and independently distributed random variables. One hundred profiles were used for every combination of algorithm and sample size (N). In the following graphs, averages and 95% confidence intervals of the computational time are plotted on the y-axis, while the x-axis indicates the sample size. Before doing so, the implemented algorithms were tested, where possible, by applying them to the degenerate case described in Section 3; they all proved to converge to the real value of the roundness error. Algorithms not able to deal with degenerate situations were instead compared to the modified Carr–Ferreira algorithm [23], yielding the same results.

5.1. Initialization of loop control algorithm

First of all, we wish to investigate how the loop control algorithm (LCA) behaves as the starting conditions vary. MZ algorithms are usually initialized using a Least Squares (LS) solution, but we do not know whether this reduces the computational time of LCA or not. Fig. 3 shows how LCA behaves when initialized at the nominal center, the LS center, or a completely randomly chosen center. As can be seen, if the initialization is not random, the algorithm converges quite quickly; LS initialization increases the required time with respect to nominal initialization, showing that fitting the LS center to obtain a better starting condition does not justify the time required to calculate


Fig. 3. Initialization of loop control algorithm.

it. Because it seems to be the best initialization, the nominal center is used as initialization in the following comparisons.

5.2. Loop control algorithm against Carr–Ferreira algorithm

The Carr–Ferreira algorithm (CFA) is a reference algorithm in the field of approximated algorithms. Like every algorithm of this kind, it generates a series of linear problems whose solutions converge to the real value of the roundness error, but the solution of these problems requires a long time if the sample size is large. Let us compare CFA with LCA. As Fig. 4 shows, LCA is always quicker than CFA; remembering that they share the same accuracy and convergence properties, there is no reason to use CFA instead of LCA.

5.3. Loop control algorithm against Voronoi algorithm

Voronoi based algorithms (VA) are quite hard to implement, but we suspect that even the simple computation of the Voronoi diagrams


Fig. 5. Loop control algorithm against Voronoi algorithm.

would take longer than the complete application of LCA. Therefore, we have not implemented a VA; instead, LCA was compared with the time required by the computation of a Voronoi diagram, which is already implemented in standard versions of Matlab. It can be seen (Fig. 5) that, for small samples (up to 1000 points), the computation of the Voronoi diagram is quicker than LCA, but its required time then grows rapidly, surpassing LCA. Moreover, this is the computation time for a single Voronoi diagram: the complete application of a VA requires two diagrams, followed by the best center search, so the time may become longer even for small clouds of points. Therefore, LCA seems to be better.

5.4. Loop control algorithm against speed-up algorithm

The speed-up algorithm (SUA) is not able to manage the degenerate cases described in Section 3. However, because it is the algorithm from which LCA derives, the two have to be compared. We also compared a “modified SUA” (MSUA), in which the point added to the subset is chosen as the farthest from the MZ circumference. Fig. 6 shows this comparison. LCA is much faster than MSUA, as expected. Notice that MSUA is also much better than SUA; therefore, most of the improvement found in LCA is due to the point selection criterion. However, SUA and MSUA are not able to solve degenerate sets, while LCA is, thus ensuring a superior completeness.

5.5. Loop control algorithm against steepest descent algorithm

Fig. 4. Loop control algorithm against Carr–Ferreira algorithm.

The steepest descent algorithm (SDA) is the most complex and promising method we found in the literature. It does not require many calculations, so it is probably the fastest algorithm. Indeed (Fig. 7), SDA is faster than LCA for samples of up to 2000 points. However, in judging these algorithms, robustness has to be considered: as said in Section 2, if it is not correctly



if degenerate situations should never appear in real inspections, this gives LCA a greater completeness. Finally, LCA is a very fast algorithm, requiring little computational time: this is relevant if the sample size is greater than 10,000 points. Even if this seldom happens in usual measurements, where the sample size is typically less than 1000 points, the literature is now proposing reconstruction algorithms (see, for instance, [2–4]), which will create this need.

Appendix A. BFA implementation

Fig. 6. Loop control algorithm against modified speed-up algorithm.

A BFA is required for LCA; this is a possible implementation. Solutions for clouds constituted by up to 3 points are straightforward. The 4-point cloud is the basic situation that must be solved. Two situations are possible: 2 points on the inner circumference and two on the outer, or 3 points on one circumference and the remaining one on the other. Let us concentrate on the first situation: the points can be grouped as follows:

{1, 2} {3, 4}  [j = 1]
{1, 3} {2, 4}  [j = 2]
{1, 4} {2, 3}  [j = 3]    (A.1)
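The three 2 + 2 splits above, together with the four 3 + 1 splits of (A.3) below, can be enumerated programmatically. This hypothetical little helper (Python, names ours) just reproduces the seven groupings:

```python
import itertools

def four_point_groupings():
    """The seven candidate groupings of 4 points used by the BFA:
    three 2+2 splits (two points on each circle) and four 3+1 splits."""
    pts = (1, 2, 3, 4)
    splits = []
    # 2+2 splits: fix point 1 in the first pair to avoid counting duplicates.
    for other in (2, 3, 4):
        pair = (1, other)
        rest = tuple(p for p in pts if p not in pair)
        splits.append((pair, rest))
    # 3+1 splits: each triple with the remaining single point.
    for triple in itertools.combinations(pts, 3):
        single = next(p for p in pts if p not in triple)
        splits.append((triple, (single,)))
    return splits

print(len(four_point_groupings()))  # 7
```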

For each grouping, a possible solution can be calculated by solving the following linear system:

2(x2 − x1) aj + 2(y2 − y1) bj = x2² + y2² − (x1² + y1²)
2(x4 − x3) aj + 2(y4 − y3) bj = x4² + y4² − (x3² + y3²)    (A.2)
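System (A.2) is a plain 2 × 2 linear system in the center coordinates; the sketch below (Python, names ours) solves it by Cramer's rule, returning None when the determinant vanishes, i.e. when the two chord bisectors are parallel:

```python
def circle_center_from_pairs(p1, p2, p3, p4):
    """Solve system (A.2): the centre (a, b) equidistant from p1, p2 and from p3, p4.
    Returns None when the two chord bisectors are parallel (degenerate input)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    # 2x2 system  A [a, b]^T = c, solved by Cramer's rule.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x4 - x3), 2 * (y4 - y3)
    c1 = x2**2 + y2**2 - (x1**2 + y1**2)
    c2 = x4**2 + y4**2 - (x3**2 + y3**2)
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# Three points on the unit circle, paired as (p1, p2) and (p3, p4):
# the recovered centre is the origin.
print(circle_center_from_pairs((1, 0), (0, 1), (-1, 0), (0, 1)))
```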

The first equation refers to the first couple of points and the second to the second couple; aj and bj are the center coordinates for the proposed grouping. In the second situation, the following groupings have to be considered:

Fig. 7. Loop control algorithm against steepest descent algorithm.

initialized, SDA can get stuck in local minima, while LCA always converges to the global optimum. Moreover, the SDA authors themselves admit that their method is not able to manage degenerate situations, so we think LCA is slightly better than SDA.

6. Conclusions

It has been shown that, with just a few modifications, a BFA for fitting the MZ circumference, usually considered slow, may become faster than classical approximated algorithms. Notice that the found solution is a global optimum, too. The algorithm can give, as an additional output, the contacting points delimiting the tolerance zone and, with a small modification, the essential subset; this information allows the construction of empirical distributions for the contacting points, which is useful when the best sampling point pattern has to be found [24]. Differently from other quick combinatorial approaches, such as SDA, LCA can manage degenerate situations; even

{1, 2, 3}  4  [j = 4]
{1, 2, 4}  3  [j = 5]
{1, 3, 4}  2  [j = 6]
{2, 3, 4}  1  [j = 7]    (A.3)

And then we solve:

2(x2 − x1) aj + 2(y2 − y1) bj = x2² + y2² − (x1² + y1²)
2(x1 − x3) aj + 2(y1 − y3) bj = x1² + y1² − (x3² + y3²)    (A.4)

As can be seen, only the points in the triple are used. After calculating the seven possible solutions, and defining rij as the distance from the ith point to the jth candidate center, the MZ solution can be identified by selecting

t = min_j ( max_i rij − min_i rij )    (A.5)
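Putting (A.1)–(A.5) together for the 4-point case gives the following Python sketch (our own illustrative implementation, not the authors' Matlab code):

```python
import itertools
import math

def bfa_four_points(points):
    """MZ roundness of exactly 4 points: build the seven candidate centres of
    (A.1)-(A.4) and keep the one minimising max(r_i) - min(r_i), as in (A.5)."""
    assert len(points) == 4

    def centre(p1, p2, p3, p4):
        # Centre equidistant from p1, p2 and from p3, p4 (systems (A.2)/(A.4)),
        # solved by Cramer's rule; None if the chord bisectors are parallel.
        a11, a12 = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
        a21, a22 = 2 * (p4[0] - p3[0]), 2 * (p4[1] - p3[1])
        c1 = p2[0] ** 2 + p2[1] ** 2 - (p1[0] ** 2 + p1[1] ** 2)
        c2 = p4[0] ** 2 + p4[1] ** 2 - (p3[0] ** 2 + p3[1] ** 2)
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            return None
        return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

    candidates = []
    for other in (1, 2, 3):                      # 2+2 groupings, (A.1)/(A.2)
        rest = [i for i in (1, 2, 3) if i != other]
        candidates.append(centre(points[0], points[other],
                                 points[rest[0]], points[rest[1]]))
    for i, j, k in itertools.combinations(range(4), 3):   # 3+1 groupings, (A.3)/(A.4)
        candidates.append(centre(points[i], points[j], points[k], points[i]))

    widths = []
    for c in candidates:
        if c is None:
            continue
        radii = [math.hypot(x - c[0], y - c[1]) for (x, y) in points]
        widths.append(max(radii) - min(radii))
    return min(widths)

# Two points on a circle of radius 1 and two on a concentric circle of radius 1.2:
# the MZ width is the radial gap, ~0.2.
print(bfa_four_points([(1, 0), (-1, 0), (0, 1.2), (0, -1.2)]))
```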

When considering clouds constituted by n points, we can extract a subset of 4 points in

k = n! / (4! (n − 4)!)    (A.6)

different ways, and then we have 7k possible centers. Applying (A.5) to these 7k centers, the MZ solution is easily found.
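The growth of (A.6) is easy to check numerically; this small Python sketch counts subsets and the resulting candidate centers (seven per subset):

```python
import math

def bfa_cost(n):
    """Number of 4-point subsets of an n-point cloud, Eq. (A.6),
    and the resulting number of candidate centres."""
    k = math.comb(n, 4)          # n! / (4! (n-4)!)
    return k, 7 * k

print(bfa_cost(100))  # (3921225, 27448575)
```

Already at n = 100 there are almost four million subsets, which is why the plain BFA is usable only as the inner solver of LCA, on small subsets.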


References

[1] International Organization for Standardization, Geneva, Switzerland. ISO 1101: Geometrical Product Specifications (GPS)—tolerances of form, orientation, location and run-out, 2nd ed.; December 2004.
[2] Henke RP, Summerhays KD, Baldwin JM, Cassou RM, Brown CW. Methods for evaluation of systematic geometric deviations in machined parts and their relationships to process variables. Precis Eng 1999;23(4):273–92.
[3] Summerhays K, Henke R, Baldwin J, Cassou R, Brown C. Optimizing discrete point sample patterns and measurement data analysis on internal cylindrical surfaces with systematic form deviations. Precis Eng 2002;26(1):105–21.
[4] Colosimo BM, Moroni G, Petrò S, Semeraro Q. Manufacturing signature of turned circular profiles. In: IFAC-MIM ’04 Conference on Manufacturing, Modelling, Management and Control. 2004.
[5] International Organization for Standardization, Geneva, Switzerland. ISO/TS 12181-2: Geometrical Product Specifications (GPS)—Roundness—Part 2: Specification operators, 1st ed.; December 2001.
[6] Roy U, Zhang X. Establishment of a pair of concentric circles with the minimum radial separation for assessing roundness error. Comput Aided Des 1992;24(3):161–8.
[7] Novaski O, Barczak ALC. Utilization of Voronoi diagrams for circularity algorithms. Precis Eng 1997;20(3):188–95.
[8] Samuel GL, Shunmugam MS. Evaluation of straightness and flatness error using computational geometric techniques. Comput Aided Des 1999;31(13):829–43.
[9] Samuel GL, Shunmugam MS. Evaluation of circularity from coordinate and form data using computational geometric techniques. Precis Eng 2000;24(3).
[10] Huang J, Lehtihet EA. Contribution to the minimax evaluation of circularity error. Int J Product Res 2001;39(16):3813–26.
[11] Chetwynd DG, Phillipson PH. An investigation of reference criteria used in roundness measurement. J Phys E: Sci Instrum 1980;13:530–8.


[12] Chetwynd DG. Applications of linear programming to engineering metrology. Proc Inst Mech Eng Pt B: Management and Engineering Manufacture 1985;199:93–100.
[13] Anthony GT, Anthony HM, Bittner B, Butler BP, Cox MG, Drieschner R, et al. Reference software for finding Chebyshev best-fit geometric elements. Precis Eng 1996;19(1):28–36.
[14] Drieschner R. Chebyshev approximation to data by geometric elements. Numer Algorithms 1993;5(10):509–22.
[15] Rajagopal K, Anand S. Assessment of circularity error using a selective data partition approach. Int J Product Res 1999;37(17):3959–79.
[16] Jywe WY, Liu CH, Chen CK. The min-max problem for evaluating the form error of a circle. Measurement 1999;26(4):273–82.
[17] Dhanish PB. A simple algorithm for evaluation of minimum zone circularity error from coordinate data. Int J Mach Tools Manuf 2002;42(14):1589–94.
[18] Zhu LM, Ding H, Xiong YL. A steepest descent algorithm for circularity evaluation. Comput Aided Des 2003;35(3):255–65.
[19] Tsukada T, Kanada T, Okuda K. An evaluation of roundness from minimum zone center by means of an optimization technique. Bull Jpn Soc Precis Eng 1984;18(4):317–22.
[20] Carr K, Ferreira P. Verification of form tolerances Part I: Basic issues, flatness, and straightness. Precis Eng 1995;17(2):131–43.
[21] Carr K, Ferreira P. Verification of form tolerances Part II: Cylindricity and straightness of a median line. Precis Eng 1995;17(2):144–56.
[22] Gass SI, Witzgall C, Harary HH. Fitting circles and spheres to coordinate measuring machine data. Int J Flexible Manuf Syst 1998;10(1):5–25.
[23] Pacella M, Elia A, Anglani A. Evaluation of workpiece roundness error in a turning operation: assessment of computational algorithms by using simulated data. In: 2nd International Simulation Conference, ISC04. 2004.
[24] Colosimo BM, Gutierrez Moya E, Moroni G, Petrò S. Measuring strategy definition for flatness: a case study. In: Proceedings of the 10th CIRP International Conference on Computer Aided Tolerancing. 2007.