Optimization for Statistical Tolerance Allocation

Hang Li^a,∗, Songgang Xu^b, John Keyser^a

^a Texas A&M University, College Station TX 77843, USA
^b Intel Corporation, Folsom CA 95630, USA

∗ Corresponding author. Email addresses: [email protected] (Hang Li), [email protected] (Songgang Xu), [email protected] (John Keyser)

Abstract

We present a scheme for tolerance allocation in any dimension based on a statistical tolerance model. Tolerances are commonly used to account for uncertainty and errors in the modeling and manufacturing process. Tolerance allocation refers to the process of arranging tolerance values across an object in order to meet some overall requirements. Statistical tolerance models treat variations as some form of statistical distribution. We treat allocation as an optimization problem in which both constraints and an objective function must be defined. With the statistical tolerance model, the calculation of the tolerance stack-up through the cascading of tolerance zones is straightforward, and constraints can be defined on the cascaded result. We present a general approach to optimization that allows different objective functions to be used to achieve different goals, and describe specific objective functions as examples. Finally, we present an effective method for solving the allocation problems in our scheme with these different functions, and demonstrate how the scheme works, and its advantages, on a small set of examples.

Keywords: Tolerance Allocation, Statistical Tolerance Model, Tolerance Optimization
1. Introduction
Errors are a common problem in computer aided design and manufacturing. People commonly use tolerances to account for uncertainty and errors in the modeling and manufacturing process [Standard, 1994]. Careful modeling and propagation of tolerances allows a designer to understand the range of possible outcomes in a manufacturing process. This can allow the designer to modify the design or the manufacturing process to avoid major problems in the resulting product. Tolerance allocation refers to the process of arranging tolerance values across an object in order to meet some overall requirements. As an example, a several-step manufacturing process will have some tolerance associated with each step. In order to create a final part that meets specific tolerance requirements, the tolerances in each of the individual steps must be controlled. At the same time, there is typically a trade off between cost and precision, so generally the larger the tolerances that can be allowed, the better. The tolerance allocation problem is thus an optimization problem that seeks to arrange tolerances for each geometric feature or manufacturing step to improve an optimization function while meeting some set of constraints. The goal of this paper is to provide a scheme for the 3D tolerance allocation problem that works for a particular category of tolerance models.
1.1. Prior Work

Our work focuses on a statistical model of tolerances. Tolerance models typically fall into one of two types. The traditional model is a worst-case tolerance model [Fischer, 2011]. In this model, a tolerance zone is defined that encompasses all of the possible values so that the worst-case situations are guaranteed to fall
within the zone. A typical representation of tolerances will model the tolerance region as a range in each of the global axes, giving tolerance zones that are axis-aligned rectangular boxes. Another alternative, used for points, is to model the tolerance zone as an undirected amount, thus yielding spherical tolerance zones. By their very nature, worst-case tolerance models overestimate the actual range of variation in a manufactured part. This can lead to overly conservative, and thus more expensive, manufacturing processes.

An alternative to worst-case tolerance modeling is to use a statistical tolerance method. Statistical tolerance models treat variations as some form of statistical distribution [Barkallah et al., 2012; Beaucaire et al., 2013; Qureshi et al., 2012; Ryan, 2011; Tsai and Kuo, 2012; Xu and Keyser, 2016]. This can allow a more realistic evaluation of likely variations, and allow one to use looser tolerances that are cheaper to use and still very likely (though not guaranteed) to yield an acceptable result. Our paper is based on the statistical tolerance model proposed by Xu and Keyser [Xu and Keyser, 2016]. This model represents the tolerance zone as a confidence region of a multivariate normal distribution (MND). The model can represent tolerance regions geometrically and is relatively simple to calculate with. While it has several nice and useful properties, its application to tolerance allocation has been very limited.

For a given tolerance model, when one geometric feature is determined relative to another, and each feature has a tolerance zone, the combined tolerance zone can be computed by combining the information from the two original zones. This computation, determining a global tolerance region given multiple local tolerance regions, is called tolerance stacking, tolerance propagation, tolerance chaining, or tolerance cascading (we will use the term cascading). Tolerance cascading is a key part, but only a part, of performing tolerance allocation.

Many previous researchers have investigated the tolerance cascading problem. Tolerance charting is a long-standing approach for helping people understand cascading and then (manually, or with assistance) allocate tolerances. Traditional methods have long existed, but have been improved with computer-aided tolerance charting [Lehtihet et al., 2000]. Charting can only handle low-dimensional problems, so other methods have been developed to deal with cascading in higher dimensions. These include small displacement torsors (SDT) [Clément and Bourdet, 1988; Villeneuve et al., 2001], technologically and topologically related surfaces (TTRS) [Desrochers, 2003] and Monte Carlo methods [Huang et al., 2004; Barkallah et al., 2012].

There are a large number of papers that deal with tolerance allocation. Most of these focus on the relation between a tolerance (T) and manufacturing cost (c) [Edel and Auer, 1964; Chase and Greenwood, 1988; Spotts, 1973; Sutherland and Roth, 1975; Lee and Woo, 1987; Speckhart, 1972; Michael and Siddall, 1982]. These methods first define a function relating tolerance and cost, c = f(T), and then formulate the allocation problem as an optimization problem. Example cost functions are of the form c = a − b·T [Edel and Auer, 1964], c = a + b/T^k [Sutherland and Roth, 1975] and c = b·e^{−mT}/T^k [Michael and Siddall, 1982], where a, b, k, and m are constants for a specific application. Chase's work [Chase, 1999] uses an alternative based on the root sum of squares (RSS) method. Choi et al. [Choi et al., 2000] present a tolerance allocation method using a statistical tolerance model.

Our approach is meant to make the tolerance allocation problem feasible for a statistical model based on the MND. This will allow the use of the MND in a wider range of applications. Xu and Keyser [Xu and Keyser, 2016] provide a method to allocate tolerances in 2D with an MND-based statistical model based on an ellipse's area. This method can obtain an exact solution, but it does not work directly in 3D, because the calculation of the ellipsoid's volume makes the optimization formulation they proposed fail. This paper's goal is to overcome this shortcoming by using an approximate volume that is computationally tractable.

1.2. Our Work

In this paper, we present a scheme for tolerance allocation based on a statistical tolerance model. We formulate the allocation as an optimization problem. We provide several simplifications based on some reasonable assumptions that make the optimization more feasible. The statistical model for tolerances represents the tolerance zone using an ellipsoid that represents a confidence region (we typically use 95% in our examples). To simplify our optimization problem, we first approximate an ellipsoid's volume by using the areas of its projections in various directions. We can then use one of several tractable objective functions in the optimization. We demonstrate several of these objective functions as examples to show our scheme. The results show that our scheme is efficient for this problem.
Figure 1: Tolerance zones and cascading.
The specific characteristics of our scheme are:

• We present a method with clear geometric meaning and bridge the geometric and analytical computations in a statistical tolerance model.
• We provide a good ellipsoidal fit for a confidence region with a statistical tolerance model, showing that it works well for cascading.
• We describe ways to approximate the ellipsoid's volume so that it can be used in objective functions.
• We provide various objective functions, demonstrating their use in optimization.
• We achieve a relatively low cost of computation.

With our scheme, a large number of tolerance allocation problems can be solved easily.
2. Tolerance Cascading
In CAD, positions are often specified relative to the positions of other objects. For example, in Figure 1-a, the position of B is specified relative to A. However, because manufacturing processes cannot meet designs exactly, positions in the real world are not at exactly the position specified. Tolerance zones are used to represent the area of possible positions. Tolerance zones can be specified locally relative to some other position. The tolerance zone of B in Figure 1-a is a local tolerance zone because we assume that A is at an exact position. If we take the tolerance zone of A into consideration, we can get a global tolerance zone for B (Figure 1-b). We use TL (·) to denote a local tolerance zone while TG (·) denotes a global tolerance zone. Traditionally, worst-case analysis is used to define the tolerance zone. However, a statistical tolerance model is also a powerful tool and may be better in some cases.
2.1. Statistical Tolerance Model
We should first notice that the dimension of a tolerance zone is not always the same as the spatial dimension of the object it belongs to. An object might be designed in 1D, 2D or 3D, referring to spatial dimensions. The dimension of the tolerance zone, on the other hand, is determined by the degrees of freedom (DOFs) used to define it. For example, the DOF of a point in R^3 is 3; for a line segment, which is defined by two points, the DOF is 6. In most cases, objects can be decomposed and represented by related points, whose individual DOFs are no more than 3. One way to describe the distribution of the possible positions is as a multivariate normal distribution (MND) [Xu and Keyser, 2016]. We describe an MND as N(μ, Σ), for a mean μ and a covariance matrix Σ; Σ must be positive definite. We can compute the probability density function of a multivariate normal distribution of an n-dimensional random vector (X ∼ N(μ, Σ)) as
f(X) = \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}} \exp\left\{-\frac{1}{2}(X - \mu)^T \Sigma^{-1} (X - \mu)\right\}.   (1)
For a covariance matrix Σ, we can define a tolerance zone for a given confidence probability p. This tolerance zone E is defined as follows:

E(\Sigma) = \{X \in \mathbb{R}^n : (X - \mu)^T \Sigma^{-1} (X - \mu) \le R_n(p)\}.   (2)

R_n(p) is the inverse cumulative distribution function (CDF) of a χ²-distribution with n degrees of freedom at confidence p. For example, R_3(99%) = 11.34. We also call Σ the tolerance matrix. Any element a_{ij} of Σ has the form

a_{ij} = \begin{cases} \sigma_i^2 & i = j \\ \sigma_i \sigma_j \rho_{ij} & i \ne j, \end{cases}   (3)

where σ_i² is the variance in the x_i direction and ρ_{ij} is the correlation coefficient between the x_i and x_j directions. Notice that since Σ is symmetric positive definite, so is Σ^{-1}, and for an n-dimensional position vector X the boundary of E(Σ) is an ellipsoid:

X^T \Sigma^{-1} X = R_n(p).   (4)

We will typically think of this ellipsoid as being the tolerance region, but it should be noted that it represents the boundary for some particular confidence level p ∈ [0, 1]. Furthermore, because we can divide both sides by the scalar R_n(p), we can without loss of generality consider the ellipsoid to be defined as

X^T \Sigma^{-1} X = 1   (5)
for some Σ that represents the tolerance zone boundary at a particular confidence level.
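As a concrete illustration, the sketch below (ours, not part of the paper; it assumes NumPy and SciPy, and the helper names r_n and in_tolerance_zone are invented) evaluates R_n(p) as a χ²-quantile and tests whether a point lies inside the tolerance zone E(Σ) of Equation (2).

```python
# Minimal sketch (not from the paper): evaluating the MND tolerance zone of Eq. (2).
import numpy as np
from scipy.stats import chi2

def r_n(p, n):
    """R_n(p): chi-square quantile with n degrees of freedom, e.g. R_3(0.99) ~= 11.34."""
    return chi2.ppf(p, df=n)

def in_tolerance_zone(x, mu, Sigma, p=0.95):
    """True if point x lies inside the confidence-region tolerance zone E(Sigma)."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    m2 = d @ np.linalg.solve(Sigma, d)        # squared Mahalanobis distance
    return m2 <= r_n(p, len(d))

print(round(r_n(0.99, 3), 2))                 # 11.34, the value quoted in the text
```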
2.2. Calculation of Cascading
Cascading operations have been discussed extensively in Xu and Keyser's paper [Xu and Keyser, 2016]. Here is a simple example of cascading. Assume that the position of B depends only on A (Figure 1-b), and that the global tolerance matrix of A and the local tolerance matrix of B are Σ^(1) and Σ^(2). The sum of two independent normal random vectors is again normally distributed, and it can be proved that its covariance matrix, which is the global tolerance matrix of B, is Σ^(1) + Σ^(2) [Xu and Keyser, 2016]. So

T_L(A) = T_G(A) = E(\Sigma^{(1)}), \quad T_L(B) = E(\Sigma^{(2)}), \quad T_G(B) = E(\Sigma^{(1)} + \Sigma^{(2)}).   (6)
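A quick Monte Carlo sanity check of Equation (6) (our sketch; the matrices below are illustrative and are not taken from the paper): sampling the two deviations independently and summing them reproduces the covariance Σ^(1) + Σ^(2).

```python
# Sketch: Monte Carlo check that cascading adds covariance matrices (Eq. (6)).
import numpy as np

rng = np.random.default_rng(0)
Sigma1 = np.array([[0.40, 0.10, 0.00],      # illustrative T_G(A)
                   [0.10, 0.30, 0.05],
                   [0.00, 0.05, 0.20]])
Sigma2 = np.array([[0.20, 0.00, 0.05],      # illustrative T_L(B)
                   [0.00, 0.25, 0.10],
                   [0.05, 0.10, 0.30]])

a = rng.multivariate_normal(np.zeros(3), Sigma1, size=200_000)   # deviation of A
b = rng.multivariate_normal(np.zeros(3), Sigma2, size=200_000)   # B relative to A
print(np.max(np.abs(np.cov((a + b).T) - (Sigma1 + Sigma2))))     # close to zero
```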
3. Projection and Volume Approximation
In our tolerance allocation scheme, the volume of the tolerance zone is an important part of computing the objective function. The exact volume of the ellipsoid, denoted as S_0, is \frac{\pi^{n/2}}{\Gamma(n/2+1)} \sqrt{\det(\Sigma)} in n-D. However, using S_0 is not a good choice for this problem, for reasons we will discuss in the next section. Our solution simplifies the objective function computation by using an approximate volume based on projections of the ellipsoid onto some planes (Figure 2). This section will show how to calculate the projection and how to estimate volume with these projections. We will illustrate with 3D examples, though note that the definitions extend to higher dimensions directly.
Figure 2: Projections of an Ellipsoid onto planes
3.1. Projection of Tolerance Zone

We earlier showed how the tolerance zone boundary could be defined as an ellipsoid Σ. If we want to find the 3D ellipsoid's projection in the x_1–x_2 plane, we can prove that the area of the projection is

\pi R_3(p)\, \sigma_1 \sigma_2 \sqrt{1 - \rho_{12}^2},   (7)

where (for the 3D case)

\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_1\sigma_2\rho_{12} & \sigma_1\sigma_3\rho_{13} \\ \sigma_1\sigma_2\rho_{12} & \sigma_2^2 & \sigma_2\sigma_3\rho_{23} \\ \sigma_1\sigma_3\rho_{13} & \sigma_2\sigma_3\rho_{23} & \sigma_3^2 \end{pmatrix}.   (8)

In 3D, the three projections (Figure 2) are

S_{xy}^2 = \pi^2 R_3(p)^2\, \sigma_1^2 \sigma_2^2 (1 - \rho_{12}^2),   (9)
S_{yz}^2 = \pi^2 R_3(p)^2\, \sigma_2^2 \sigma_3^2 (1 - \rho_{23}^2),   (10)
S_{zx}^2 = \pi^2 R_3(p)^2\, \sigma_3^2 \sigma_1^2 (1 - \rho_{13}^2).   (11)

The proof of this part is shown in Appendix A.
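The sketch below (ours; the function name projection_areas is not from the paper) computes the three projection areas of Equations (9)–(11) directly from a 3×3 tolerance matrix.

```python
# Sketch: projection areas of the 3D tolerance ellipsoid E(Sigma), Eqs. (9)-(11).
import numpy as np
from scipy.stats import chi2

def projection_areas(Sigma, p=0.95):
    """Return (S_xy, S_yz, S_zx) for the confidence-region ellipsoid of Sigma."""
    R = chi2.ppf(p, df=3)
    sig = np.sqrt(np.diag(Sigma))
    rho = Sigma / np.outer(sig, sig)
    pairs = [(0, 1), (1, 2), (2, 0)]                 # xy, yz, zx planes
    return tuple(np.pi * R * sig[i] * sig[j] * np.sqrt(1.0 - rho[i, j] ** 2)
                 for i, j in pairs)
```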
3.2. Volume Approximation

The exact volume of the ellipsoid is complicated to compute within the optimization process. However, the expressions for the projections of the ellipsoid are very simple. So, we hope to use these projections in the optimization process as a proxy to represent the volume of the ellipsoid. There are several ways we investigated for doing this:

S_1 = \frac{4}{3\sqrt{\pi}}\, (S_{xy} S_{yz} S_{zx})^{1/2},   (12)
S_2 = \frac{4}{3^{7/4}\sqrt{\pi}}\, (S_{xy}S_{yz} + S_{yz}S_{zx} + S_{zx}S_{xy})^{3/4},   (13)
S_3 = \frac{4}{3^{5/2}\sqrt{\pi}}\, (S_{xy} + S_{yz} + S_{zx})^{3/2},   (14)
S_4 = \frac{4}{3^{7/4}\sqrt{\pi}}\, (S_{xy}^2 + S_{yz}^2 + S_{zx}^2)^{3/4}.   (15)
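The following sketch (ours) evaluates the exact volume S_0 and the four approximations of Equations (12)–(15), reusing projection_areas from the previous sketch; for a random positive-definite Σ the printed values should follow the ordering proved in Appendix B.

```python
# Sketch: exact ellipsoid volume S0 and the approximations S1-S4 of Eqs. (12)-(15).
import numpy as np
from scipy.stats import chi2

def exact_volume(Sigma, p=0.95):
    """S0 = (4/3) * pi * sqrt(det(R * Sigma)) for the 3D confidence ellipsoid."""
    R = chi2.ppf(p, df=3)
    return 4.0 / 3.0 * np.pi * np.sqrt(np.linalg.det(R * Sigma))

def approximate_volumes(Sigma, p=0.95):
    sxy, syz, szx = projection_areas(Sigma, p)
    c = 4.0 / np.sqrt(np.pi)
    s1 = c / 3.0 ** 1.0  * (sxy * syz * szx) ** 0.5
    s2 = c / 3.0 ** 1.75 * (sxy * syz + syz * szx + szx * sxy) ** 0.75
    s3 = c / 3.0 ** 2.5  * (sxy + syz + szx) ** 1.5
    s4 = c / 3.0 ** 1.75 * (sxy ** 2 + syz ** 2 + szx ** 2) ** 0.75
    return s1, s2, s3, s4

A = np.random.default_rng(1).normal(size=(3, 3))
Sigma = A @ A.T + 0.1 * np.eye(3)                    # random positive-definite matrix
print(exact_volume(Sigma), *approximate_volumes(Sigma))   # S0 <= S1 <= S2 <= S3 <= S4
```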
Figure 3: Approximation of Volume (blue: S_1, orange: S_2, green: S_3, red: S_4).

Figure 4: CDF of Approximate Results with S_4.
Figure 3 shows the experimental results using the different methods. We first generated 1000 ellipsoids randomly. Then we calculated the exact volume of each ellipsoid (S_0) and the approximate volumes given by S_1, S_2, S_3 and S_4. Figure 3 shows the relative errors of the approximate volumes for each ellipsoid (without taking the absolute value). Each integer on the x axis represents an ellipsoid and has four corresponding data points, showing the relative error using S_1, S_2, S_3 and S_4. The y axis is the relative error, so lower values are better. We sorted the ellipsoids by the relative error of the third method (S_3), so in the figure the green points (S_3) are always increasing. We can observe that

S_0 \le S_1 \le S_2 \le S_3 \le S_4   (16)

appears true according to the figure. We can also prove (16) in theory (see Appendix B). We will use S_4 to represent the volume of the ellipsoid in this paper. These simplifications all overestimate the volume, and thus provide conservative bounds while keeping the overall estimation of volume reasonably close to S_0. Generally, engineers prefer to overestimate the errors in manufacturing so that they can ensure that all parts can be assembled or work correctly. Since any difference is an overestimate, we could use any of the four estimation methods in our optimization problem. In the next section, we will show that S_4 is the best choice among the four approximation methods, because it gives the cost function the simplest formula for optimization, significantly improving efficiency: no square root operations are needed if S_4 is plugged into the cost function a + b/T^k with k = 4/3. Although any of these four approximations could be used, using S_4 leads to the simplest, and thus fastest, optimization. Figure 4 shows the cumulative distribution function (CDF) of the 1000 ellipsoids' errors using S_4. When using S_4 to approximate volume, 68.6% of the ellipsoids had error less than 10% and 89.5% had errors less than 20%. S_4 is thus an effective proxy for the volume of the ellipsoid.
4. Optimization Problems
In CAD, we want to set tolerance zones to ensure that the global error is not too large. Obviously, the probability of having a small error is higher if the tolerance zone is smaller. Assume the tolerance zone of a cascaded result is E(Σ') = E(Σ^(1) + Σ^(2) + · · · + Σ^(k)). Our goal is to represent this region by a single tolerance region, E(Σ). In other words, the {Σ^(i)} are the local tolerance zones we will optimize. We hope to bound the cascaded result Σ', the global tolerance zone, with a goal tolerance zone Σ. We want the tolerance zone determined by E(Σ) to cover (close to) the same tolerance zone as that determined by E(Σ'). This means that Σ'^{-1} − Σ^{-1} should ideally be a positive semi-definite matrix; Appendix C gives a brief proof of this positive semi-definite property. However, the mathematical expression of a positive semi-definite matrix in this form is very complex and cannot easily be used in the optimization. An alternative solution is to control the variances in the x, y, and z directions.

Figure 5: Bounding Box of Tolerance Zone. (a) Three-dimensional bounding box. (b) Control of the bounding box. By controlling the variance, we can ensure the bounding box of the approximation encompasses the original bounding box, but without the correlation coefficients, the shape does not match.
The bounding box of a tolerance zone is determined by the three variances in the three directions. Assume

\Sigma = \begin{pmatrix} \sigma_1^2 & \mathrm{cov}_{12} & \mathrm{cov}_{13} \\ \mathrm{cov}_{21} & \sigma_2^2 & \mathrm{cov}_{23} \\ \mathrm{cov}_{31} & \mathrm{cov}_{32} & \sigma_3^2 \end{pmatrix}, \qquad \Sigma' = \begin{pmatrix} \sigma_1'^2 & \mathrm{cov}_{12}' & \mathrm{cov}_{13}' \\ \mathrm{cov}_{21}' & \sigma_2'^2 & \mathrm{cov}_{23}' \\ \mathrm{cov}_{31}' & \mathrm{cov}_{32}' & \sigma_3'^2 \end{pmatrix},   (17)

and

\mathrm{cov}_{ij} = \mathrm{cov}_{ji} = \sigma_i \sigma_j \rho_{ij} = \sigma_i \sigma_j \rho_{ji}, \quad \mathrm{cov}_{ij}' = \mathrm{cov}_{ji}' = \sigma_i' \sigma_j' \rho_{ij}' = \sigma_i' \sigma_j' \rho_{ji}', \quad \sigma_i, \sigma_i' \ge 0, \quad -1 \le \rho_{ij}, \rho_{ij}' \le 1, \quad i \ne j.   (18)

Here cov denotes a covariance and ρ a correlation coefficient. We assume that the constraint matrix Σ is always positive definite, which means σ_i > 0 and −1 < ρ_{ij} < 1. Then if we have

\sigma_i \ge \sigma_i' > 0 \quad (i = 1, 2, 3),   (19)

the bounding box of Σ's tolerance zone will cover the bounding box of Σ''s. If we only constrain the variances, there may be a huge error in the resulting tolerance zone: the correlation coefficients {ρ'_{ij}} also need to be controlled. Correlation coefficients have a strong influence on the shape of the tolerance zone; Figure 5-b shows an example. Having {ρ'_{ij}} be the same as {ρ_{ij}} would be ideal: intuitively, if {ρ_{ij}} and {ρ'_{ij}} are the same (or similar), then E(Σ) and E(Σ') will have the same "orientation". However, the correlation coefficients are influenced by the work environment; for example, if manufacturing is done on a slope, the correlation coefficients between z and x or y may change. Typically, however, the differences are not too large in most situations. So, we keep the correlation coefficients {ρ'_{ij}} within a small interval near the {ρ_{ij}}:

|\rho_{ij} - \rho_{ij}'| \le \varepsilon_{ij}, \quad -1 < \rho_{ij}' < 1,   (20)

where ε_{ij} is the limit of ρ'_{ij}'s error. Although this constraint on the covariances and correlation coefficients is not strictly equivalent to requiring Σ'^{-1} − Σ^{-1} to be positive semi-definite, it works well for our scheme. The formulation is justified by the geometric meaning of these terms, and the experimental results (Section 5) also demonstrate that the approach works in practice. Assume the objective function (usually a cost function) is C({Σ^(i)}). Then, we can formulate our overall
optimization problem as

\min_{\{\Sigma^{(i)}\}} \; C(\{\Sigma^{(i)}\}) \quad \text{s.t.} \quad \sigma_i \ge \sigma_i', \;\; |\rho_{ij} - \rho_{ij}'| \le \varepsilon_{ij}.   (21)
That is, we want to find the set of local tolerances Σ^(i) that minimize the objective function C, while ensuring that the goal tolerance region bounds the cascaded tolerance zone and that the correlation coefficients have only small changes. For convenience, we omit the positive definite conditions here. In the implementation, we include σ_j^{(i)} > 0 and −1 < ρ_{jk}^{(i)} < 1 in the constraints to make sure each Σ^(i) is positive definite, where {σ_j^{(i)2}} are the variances and {ρ_{jk}^{(i)}} the correlation coefficients corresponding to the covariance matrix Σ^(i).

4.1. Single-step Cascading Optimization

Assume that we plan to cascade two points' tolerance zones in one step. We can describe the zones by covariance matrices Σ^(1) and Σ^(2), so the cascaded zone can be described as Σ' = Σ^(1) + Σ^(2).
The allocation problem is how to choose Σ^(1) and Σ^(2) when the cascaded result Σ' must be bounded by the specified Σ. One way to do this is to minimize the cost of the tolerance zones. Smaller tolerance zones always require higher machining accuracy and thus increased cost. As mentioned in the introduction, there are many kinds of tolerance-cost functions of the form C = G(T), where T is the volume of a tolerance zone and C is the cost. Assume that F(·) is the volume of the tolerance region. Then the optimization problem becomes

\min_{\Sigma^{(1)},\Sigma^{(2)}} \; G^{(1)}(F(\Sigma^{(1)})) + G^{(2)}(F(\Sigma^{(2)})) \quad \text{s.t.} \quad \Sigma' = \Sigma^{(1)} + \Sigma^{(2)}, \;\; \sigma_i \ge \sigma_i', \;\; |\rho_{ij} - \rho_{ij}'| \le \varepsilon_{ij}.   (22)

G^(i) is the cost function for the i-th step. While several functions are possible, we elected to use c = a + b/T^k [Sutherland and Roth, 1975] as the tolerance-cost model in our examples, where a, b and k are parameters. The constant a can be ignored, as it has no real contribution to the optimization problem. If we use the approximate volume S_4 to replace T and take k = 4/3 (only for 3D), the cost function takes on a simple form for optimization:

G^{(i)}(F(\Sigma^{(i)})) = \frac{b^{(i)}}{\sum_{j,k} \sigma_j^{(i)2} \sigma_k^{(i)2} \left(1 - \rho_{jk}^{(i)2}\right)}.   (23)
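For completeness, here is a short derivation (ours, following Section 3) of why the choice k = 4/3 together with S_4 removes all radicals; the sum runs over the index pairs of the three projection planes, and the constant factors are absorbed into the coefficient b^{(i)}:

\frac{b}{S_4^{4/3}} = \frac{b}{\left(\tfrac{4}{3^{7/4}\sqrt{\pi}}\right)^{4/3}\left(S_{xy}^2 + S_{yz}^2 + S_{zx}^2\right)} = \frac{b}{\left(\tfrac{4}{3^{7/4}\sqrt{\pi}}\right)^{4/3} \pi^2 R_3(p)^2 \sum_{j,k} \sigma_j^2 \sigma_k^2 (1 - \rho_{jk}^2)} \;\propto\; \frac{1}{\sum_{j,k} \sigma_j^2 \sigma_k^2 (1 - \rho_{jk}^2)}.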
This is just the inverse of a polynomial. Another option is to make all the tolerance zones as similar as possible, so that none of them is too small. In this case, our objective function should try to keep the volumes similar to each other; the shape of the tolerance zones can still be controlled by the {ρ_{ij}}. However, with this objective alone, the resulting matrices Σ^(1) and Σ^(2) may converge to zero, which would mean the tolerance zones do not exist and all the points must be in their exact positions. So, the constraints need to be stricter: the variances of Σ' must match those of Σ, and three inequalities become equalities in the constraints:

\min_{\Sigma^{(1)},\Sigma^{(2)}} \; \left(F(\Sigma^{(1)}) - F(\Sigma^{(2)})\right)^2 \quad \text{s.t.} \quad \Sigma' = \Sigma^{(1)} + \Sigma^{(2)}, \;\; \sigma_i = \sigma_i', \;\; |\rho_{ij} - \rho_{ij}'| \le \varepsilon_{ij}.
The volume of the tolerance region, F(·), is most accurately determined by the exact calculation S_0. Since S_0 ∝ \sqrt{|\Sigma|}, and the determinant |Σ| is a polynomial of degree n in the entries of Σ, where n is the number of dimensions, there are several reasons why calculating the volume with this determinant of the covariance matrix is problematic: (1) For n ≥ 3, the cubic or higher-degree polynomial cannot easily be used in optimization. Our projection method provides a fixed-degree formula, and as we go to higher dimensions it only increases the number of terms, not the degree. (2) The number of terms also grows much faster for S_0. In an n-D tolerance space, our formula (S_4) has C_n^2 = O(n^2) terms while S_0 has O(n!) terms. 6D tolerance spaces are widely used in manufacturing; for a 6D case, S_4 has C_6^2 = 15 terms while S_0 has 6! = 720 terms, which is hard for coding and calculation. Given both (1) and (2), optimization using the S_0 metric is unacceptably slow for all but the simplest cases; we will show a 6D example in the next section to demonstrate this. Thus, we use F(·) = S_4 in our computations, which as noted in Section 3.2 provides a conservative bound on volume.

Our projection scheme can also be used even when the objective is not based on volume. Similar tolerance zones must have similar projections. If the projections of E(Σ) are {S_{xy}, S_{yz}, S_{zx}} and the projections of E(Σ') are {S'_{xy}, S'_{yz}, S'_{zx}}, another objective function could be the difference between the two groups of projections:

\min_{\Sigma^{(1)},\Sigma^{(2)}} \; \sum_{ij = xy, yz, zx} \left(S_{ij} - S_{ij}'\right)^2 \quad \text{s.t.} \quad \Sigma' = \Sigma^{(1)} + \Sigma^{(2)}, \;\; \sigma_i \ge \sigma_i', \;\; |\rho_{ij} - \rho_{ij}'| \le \varepsilon_{ij}.
4.2. Multi-step Cascading Optimization

Given the results of single-step optimization, we can easily extend this into multi-step optimization. The objective function C has the following choices, corresponding to the cases just described:

C(\{\Sigma^{(i)}\}) = \sum_i G^{(i)}(F(\Sigma^{(i)})),   (24)
C(\{\Sigma^{(i)}\}) = \sum_{i<j} \left(F(\Sigma^{(i)}) - F(\Sigma^{(j)})\right)^2,   (25)
C(\{\Sigma^{(i)}\}) = \sum_{i<j} \sum_{ab} \left(S_{ab}^{(i)} - S_{ab}^{(j)}\right)^2.   (26)
Solving the cascaded problem is very similar to solving the single-step problem. Some examples will be shown in the application section.
4.3. Solving
The optimization problem cannot be solved analytically in most cases. Instead, we give an initial value for the problem and use an iterative search to find a local solution. Some cost–tolerance functions behave better in this search than others; for example, if we choose G(T) = a − b·T [Edel and Auer, 1964], then for some initial values the problem may not converge to a solution or may converge to a local maximum. To get closer to the global solution, we solve the problem several times with different initial values, choosing the best answer among all of the results. With more attempts, the probability of getting a better solution increases.
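As an illustration of Sections 4.1 and 4.3, the sketch below (ours, not the authors' Mathematica implementation; the helper names cost_poly and allocate_min_cost and the Cholesky parameterization are our own choices) solves the single-step minimum-cost allocation numerically with SciPy, using multiple random starts as described above.

```python
# Sketch: single-step minimum-cost allocation (Eqs. (22)-(23)) with random restarts.
import numpy as np
from scipy.optimize import minimize

def cost_poly(S):
    """Denominator of Eq. (23): sum over plane pairs of sigma_j^2 sigma_k^2 (1 - rho_jk^2)."""
    return sum(S[j, j] * S[k, k] - S[j, k] ** 2
               for j in range(3) for k in range(j + 1, 3))

def sig_rho(S):
    sig = np.sqrt(np.diag(S))
    return sig, S / np.outer(sig, sig)

def allocate_min_cost(Sigma, b=(1.0, 1.0), eps=0.01, restarts=20, seed=0):
    """Choose Sigma1, Sigma2 so that Sigma' = Sigma1 + Sigma2 stays inside the goal Sigma."""
    rng = np.random.default_rng(seed)
    sig, rho = sig_rho(Sigma)
    tril = np.tril_indices(3)

    def unpack(x):
        # Cholesky-style parameterization keeps each Sigma^(i) positive definite.
        L1, L2 = np.zeros((3, 3)), np.zeros((3, 3))
        L1[tril], L2[tril] = x[:6], x[6:]
        return L1 @ L1.T, L2 @ L2.T

    def objective(x):
        S1, S2 = unpack(x)
        return b[0] / cost_poly(S1) + b[1] / cost_poly(S2)

    def constraints(x):
        S1, S2 = unpack(x)
        sigp, rhop = sig_rho(S1 + S2)                 # cascaded Sigma'
        g = list(sig - sigp)                          # sigma_i >= sigma'_i
        g += [eps - abs(rho[j, k] - rhop[j, k])       # |rho_ij - rho'_ij| <= eps_ij
              for j in range(3) for k in range(j + 1, 3)]
        return np.array(g)

    best = None
    for _ in range(restarts):                         # multiple initial values (Section 4.3)
        L0 = np.linalg.cholesky(rng.uniform(0.3, 0.5) * Sigma)
        x0 = np.concatenate([L0[tril], L0[tril]])
        res = minimize(objective, x0, method="SLSQP",
                       constraints={"type": "ineq", "fun": constraints})
        if res.success and (best is None or res.fun < best.fun):
            best = res
    if best is None:
        raise RuntimeError("no restart converged")
    return unpack(best.x)
```

With equal cost weights b^(1) = b^(2) and a symmetric formulation, this kind of search should split the goal roughly evenly between the two steps, which is the structure of the solution reported in Equation (28) of Section 5.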
5. Results
We give the following example to illustrate how our scheme works. Assume that B depends on A, with T_G(A) = T_L(A) = E(Σ^(1)) and T_L(B) = E(Σ^(2)). The global tolerance zone of B, T_G(B), is constrained by Σ. For this single-step cascading, Σ' = Σ^(1) + Σ^(2). We assume the input values are as follows (these were generated randomly):

\Sigma = \begin{pmatrix} 0.635 & -0.091 & 0.151 \\ -0.091 & 0.365 & 0.041 \\ 0.151 & 0.041 & 0.305 \end{pmatrix}, \quad \varepsilon_{ij} = 0.01, \quad p = 95\%.   (27)

If we use objective function (24) and assume the cost function is the same for both steps, that is b^(1) = b^(2) in Equation (23), the result of the minimum-cost optimization is

\Sigma^{(1)} = \Sigma^{(2)} = \begin{pmatrix} 0.317 & -0.043 & 0.073 \\ -0.043 & 0.182 & 0.019 \\ 0.073 & 0.019 & 0.153 \end{pmatrix}.   (28)

The other two methods (objective functions (25) and (26)) give nearly the same result:

\Sigma^{(1)} = \Sigma^{(2)} = \begin{pmatrix} 0.317 & -0.046 & 0.075 \\ -0.046 & 0.182 & 0.020 \\ 0.075 & 0.020 & 0.153 \end{pmatrix}.   (29)

The exact solver generates the following result:

\Sigma^{(1)} = \Sigma^{(2)} = \begin{pmatrix} 0.318 & -0.043 & 0.073 \\ -0.043 & 0.183 & 0.019 \\ 0.073 & 0.019 & 0.153 \end{pmatrix},   (30)

which shows that our method can give a result similar to the one from the exact solver. If we set different b^(1) and b^(2), such as b^(1) = 1.3 b^(2), then the result of the minimum-cost optimization is

\Sigma^{(1)} = \begin{pmatrix} 0.331 & -0.045 & 0.076 \\ -0.045 & 0.190 & 0.019 \\ 0.076 & 0.019 & 0.159 \end{pmatrix},   (31)

\Sigma^{(2)} = \begin{pmatrix} 0.304 & -0.041 & 0.070 \\ -0.041 & 0.174 & 0.018 \\ 0.070 & 0.018 & 0.146 \end{pmatrix}.   (32)
Next we will show that our projection approximation is necessary, compared with exact volume computation, using a 6D example. Similar to the previous problem, assume that B depends on A, T_G(A) = T_L(A) = E(Σ^(1)), T_L(B) = E(Σ^(2)), where Σ^(1) and Σ^(2) are 6 × 6 tolerance matrices. For this single-step cascading, Σ' = Σ^(1) + Σ^(2). We show only the minimum-cost optimization (objective function (24)). We assume the input values are as follows (these were also generated randomly):

\Sigma = \begin{pmatrix} 0.494 & -0.050 & 0.112 & 0.036 & -0.102 & 0.101 \\ -0.050 & 0.447 & -0.010 & -0.042 & 0.056 & -0.104 \\ 0.112 & -0.010 & 0.598 & 0.018 & -0.173 & 0.091 \\ 0.036 & -0.042 & 0.018 & 0.428 & -0.064 & 0.058 \\ -0.102 & 0.056 & -0.173 & -0.064 & 0.583 & -0.155 \\ 0.101 & -0.104 & 0.091 & 0.058 & -0.155 & 0.621 \end{pmatrix}, \quad b^{(2)} = 1.6\, b^{(1)}, \quad \varepsilon_{ij} = 0.01, \quad p = 99\%.   (33)

The result of the approximate-volume optimization is

\Sigma_A^{(1)} = \begin{pmatrix} 0.228 & -0.021 & 0.049 & 0.015 & -0.044 & 0.044 \\ -0.021 & 0.206 & -0.002 & -0.017 & 0.023 & -0.046 \\ 0.049 & -0.002 & 0.276 & 0.006 & -0.077 & 0.039 \\ 0.015 & -0.017 & 0.006 & 0.197 & -0.027 & 0.024 \\ -0.044 & 0.023 & -0.077 & -0.027 & 0.269 & -0.069 \\ 0.044 & -0.046 & 0.039 & 0.024 & -0.069 & 0.286 \end{pmatrix},

\Sigma_A^{(2)} = \begin{pmatrix} 0.266 & -0.024 & 0.058 & 0.017 & -0.052 & 0.052 \\ -0.024 & 0.241 & -0.003 & -0.020 & 0.027 & -0.053 \\ 0.058 & -0.003 & 0.323 & 0.007 & -0.090 & 0.046 \\ 0.017 & -0.020 & 0.007 & 0.231 & -0.032 & 0.028 \\ -0.052 & 0.027 & -0.090 & -0.032 & 0.314 & -0.080 \\ 0.052 & -0.053 & 0.046 & 0.028 & -0.080 & 0.335 \end{pmatrix},

calculation time = 0.405 s,   (34)

while the result of the exact-volume optimization is

\Sigma_E^{(1)} = \begin{pmatrix} 0.239 & -0.022 & 0.052 & 0.015 & -0.047 & 0.046 \\ -0.022 & 0.216 & -0.008 & -0.018 & 0.024 & -0.048 \\ 0.052 & -0.008 & 0.289 & 0.011 & -0.081 & 0.041 \\ 0.015 & -0.018 & 0.011 & 0.207 & -0.029 & 0.025 \\ -0.047 & 0.024 & -0.081 & -0.029 & 0.282 & -0.072 \\ 0.046 & -0.048 & 0.041 & 0.025 & -0.072 & 0.300 \end{pmatrix},

\Sigma_E^{(2)} = \begin{pmatrix} 0.255 & -0.023 & 0.055 & 0.016 & -0.050 & 0.049 \\ -0.023 & 0.231 & -0.008 & -0.019 & 0.026 & -0.051 \\ 0.055 & -0.008 & 0.309 & 0.011 & -0.087 & 0.044 \\ 0.016 & -0.019 & 0.011 & 0.221 & -0.031 & 0.027 \\ -0.050 & 0.026 & -0.087 & -0.031 & 0.301 & -0.077 \\ 0.049 & -0.051 & 0.044 & 0.027 & -0.077 & 0.321 \end{pmatrix},

calculation time = 5.257 s.   (35)

The optimization problem was implemented in Mathematica 10.2, on a computer with an Intel Core i7 960 @ 3.20 GHz. Because a Mathematica implementation may not be efficient, the absolute running times are likely to be artificially high; however, the relative performance between the different approaches is accurate. It is clear that the results are almost the same. The differences are bounded by

\max_{i,j} \left|\sigma_{A,j}^{(i)} - \sigma_{E,j}^{(i)}\right| = 0.013,   (36)
\max_{i,j,k} \left|\rho_{A,jk}^{(i)} - \rho_{E,jk}^{(i)}\right| = 0.020,   (37)

where σ_{A,j}^{(i)} and ρ_{A,jk}^{(i)} are the standard deviations and correlation coefficients of Σ_A^{(i)}, and σ_{E,j}^{(i)} and ρ_{E,jk}^{(i)} are the standard deviations and correlation coefficients of Σ_E^{(i)}. However, our method is more than an order of magnitude faster while still generating similar results. Figure 6 gives more timing results. As the results above demonstrate, our approach provides a result very close to the optimum found using an exact volume computation, at significantly faster rates. Particularly as the number of dimensions, and thus the number of variables to optimize, increases, an optimization based on a direct exact computation becomes infeasible. Thus, our approach provides a significant improvement in efficiency while maintaining the accuracy of the optimization.

Figure 6: Time Cost. This is a log plot of the calculation time vs. the dimension. In higher-dimensional (≥ 8-D) cases, the exact method exceeded the 16 GB of RAM available before generating a result, while the approximate method works normally.

Figure 7: A Bent Plate Example.

Figure 8: Optimization Result.
6. Application
We will demonstrate the use of our approach on some full object models. Figures 7 and 8 show the first example. The part (imagine a metal plate) will be folded along AB and CD at a 90 degree angle so that O1 and O3 , as well as O2 and O4 are aligned in space (such that a cylinder can extend through both holes). AB is chosen as a reference object. l1 , l2 , and θ are chosen as axes in tolerance space. CD depends on AB and O3 O4 depends on CD. The DOF of CD is 3: the length of AC, the length of BD and the folding angle of section β relative to section α. Considering O3 O4 as one object, the DOF of {O3 , O4 } is also 3: the distance from O3 to CD, the distance from O4 to CD and the folding angle of section γ relative to section β. (We do not care about other dimensions.) Then we define the 3 dimensional tolerance space. The first dimension is l1 , which corresponds to AC and the distance from O3 to CD. The second dimension is l2 , which corresponds to BD and the distance from O4 to CD. The third dimension is θ, which corresponds to the folding angles of β and γ.
Figure 9: Comparison of tolerance zones in the l_1–l_2 plane. The green ellipse is the tolerance zone of {O_1, O_2}. The blue ellipse is the projection of {O_3, O_4}'s tolerance zone onto the l_1–l_2 plane.
We represent the overall constraints as

\Sigma = \begin{pmatrix} 0.609 & 0.216 & 0.175 \\ 0.216 & 0.329 & 0.274 \\ 0.175 & 0.274 & 0.514 \end{pmatrix}, \quad b^{(2)} = 1.2\, b^{(1)}, \quad \varepsilon_{ij} = 0.1, \quad p = 99\%.   (38)

Note that Σ describes the global tolerance we will allow in {O_3, O_4}. The b^(i) are parameters in the cost function (corresponding to Equation (23)), where we are using the cost-based optimization (of the form of Equation (24)). The ε limits the variation in the correlation coefficients (so that the shape is maintained), and the p value states that we want bounds that are sufficient for 99% of cases. Solving this optimization, we get the solution below. Recall that Σ^(1) is the tolerance region for line CD relative to line AB and Σ^(2) is the tolerance region for {O_3, O_4} relative to line CD:

\Sigma^{(1)} = \begin{pmatrix} 0.295 & 0.083 & 0.058 \\ 0.083 & 0.159 & 0.113 \\ 0.058 & 0.113 & 0.249 \end{pmatrix}, \quad \Sigma^{(2)} = \begin{pmatrix} 0.314 & 0.088 & 0.062 \\ 0.088 & 0.169 & 0.120 \\ 0.062 & 0.120 & 0.265 \end{pmatrix}.   (39)

The tolerance zone of {O_1, O_2} is T_G({O_1, O_2}) = E(Σ_0) (two-dimensional):

\Sigma_0 = \begin{pmatrix} 0.508 & -0.328 \\ -0.328 & 0.454 \end{pmatrix}.   (40)
Projecting the optimized tolerance zone E(Σ^(1) + Σ^(2)) onto the l_1–l_2 plane, we can compare the two tolerance zones of {O_1, O_2} and {O_3, O_4}, as shown in Figure 9. Because plane γ is folded 180 degrees compared with plane α, the projection should be flipped along both the l_1 and l_2 directions before comparing.

A second example is a bracket model, shown in Figure 10. B depends on A, D depends on C, and E is the midpoint between B and D, thus depending on both. Finally, F depends on E. Then we can easily get the cascading formula for the global tolerance matrix of F. We can set some of the tolerances symmetrically, and thus have

\Sigma' = \frac{1}{2}(\Sigma_A + \Sigma_B + \Sigma_C + \Sigma_D) + \Sigma_F, \quad \Sigma_A = \Sigma_C = \Sigma^{(1)}, \quad \Sigma_B = \Sigma_D = \Sigma^{(2)}, \quad \Sigma_F = \Sigma^{(3)}.   (41)

Figure 10: Bracket Example.

Figure 11: Optimization Result. Each ellipsoid represents the optimized local tolerance zone for its center point. All tolerance zones are uniformly scaled. Arrows show the dependency among the tolerance zones.
If we have the following constraints:

\Sigma = \begin{pmatrix} 0.378 & 0.088 & -0.260 \\ 0.088 & 0.645 & 0.127 \\ -0.260 & 0.127 & 0.444 \end{pmatrix}, \quad b^{(2)} = 1.2\, b^{(1)}, \quad b^{(3)} = 1.6\, b^{(1)}, \quad \varepsilon_{ij} = 0.1, \quad p = 95\%,   (42)

then the optimization gives the solution

\Sigma^{(1)} = \begin{pmatrix} 0.117 & 0.012 & -0.068 \\ 0.012 & 0.199 & 0.023 \\ -0.068 & 0.023 & 0.137 \end{pmatrix}, \quad \Sigma^{(2)} = \begin{pmatrix} 0.124 & 0.013 & -0.072 \\ 0.013 & 0.212 & 0.024 \\ -0.072 & 0.024 & 0.146 \end{pmatrix}, \quad \Sigma^{(3)} = \begin{pmatrix} 0.137 & 0.014 & -0.079 \\ 0.014 & 0.233 & 0.027 \\ -0.079 & 0.027 & 0.161 \end{pmatrix}.   (43)
The resulting tolerance zones are shown visually in Figure 11.
7. Conclusion and Future Work
In this paper, we present a scheme for the 3D tolerance allocation problem based on a statistical tolerance model. To simplify the problem, we first provide an efficient method to estimate an ellipsoid’s volume. The problem is formulated as an optimization problem. The constraints come from the cascading of tolerance zones, which contain the geometric information of the object and the tolerance zones. The objective function can be chosen to achieve a wide variety of purposes. As examples, we demonstrate optimizations:
• where the objective function is a cost function that reflects manufacturing costs. Many options are possible: we provide several illustrations with one such function, and discuss implementation with others.
• where the objective function is based on the difference in volumes of the tolerance zones.
• where the objective function is based on the differences of the projected areas of the tolerance zones.

Although different objective and cost functions can lead to different overall performance, the projection-based approximations provide a much faster method, while maintaining a conservative estimate of volume, in computations involving volume calculations. As we showed in our examples, the result of the optimization is very similar, while the performance is greatly improved. We have also demonstrated that by limiting the change in the covariance terms, we can ensure that the optimized tolerance allocation follows not just the overall volume, but also the shape of the tolerance region. Finally, we have shown that the method is applicable across high-dimensional tolerance spaces, with asymptotic performance that is a significant improvement on more exact methods. Together, these results demonstrate that a statistical tolerance model based on the MND is feasible for tolerance allocation. This should make statistical tolerance models more feasible for widespread use.

One area of future work would be to further characterize the effect of different objective and cost functions on the overall performance of the algorithm. We noted that different objective functions might require additional constraints to avoid convergence to unrealistic values. Other cost or objective functions should vary in both efficiency and convergence. Obtaining a more comprehensive classification of cost functions and their effect on tolerance allocation optimization would thus provide a good complement to this work. One particular direction that could be explored is to use the matrix difference Σ'^{-1} − Σ^{-1} instead of a bound on the difference in correlation coefficients to enforce similar orientations; doing so would require reformulating the optimization and enforcing additional constraints, such as positive semi-definiteness.
References
Barkallah, M., Louati, J., Haddar, M., 2012. Evaluation of manufacturing tolerance using a statistical method and experimentation. International Journal of Simulation Modelling 11, 5–16.
Beaucaire, P., Gayton, N., Duc, E., Dantan, J.Y., 2013. Statistical tolerance analysis of over-constrained mechanisms with gaps using system reliability methods. Computer-Aided Design 45, 1547–1555.
Chase, K.W., 1999. Tolerance allocation methods for designers. ADCATS Report 99, 1–28.
Chase, K.W., Greenwood, W.H., 1988. Design issues in mechanical tolerance analysis. Manufacturing Review 1, 50–59.
Choi, H.G.R., Park, M.H., Salisbury, E., 2000. Optimal tolerance allocation with loss functions. Journal of Manufacturing Science and Engineering 122, 529–535.
Clément, A., Bourdet, P., 1988. A study of optimal-criteria identification based on the small-displacement screw model. CIRP Annals-Manufacturing Technology 37, 503–506.
Desrochers, A., 2003. A CAD/CAM representation model applied to tolerance transfer methods. Journal of Mechanical Design 125, 14–22.
Edel, D.H., Auer, T.B., 1964. Determine the least cost combination for tolerance accumulations in a drive shaft seal assembly. Gen. Mot. Eng. J. 4, 37–38.
Fischer, B.R., 2011. Mechanical Tolerance Stackup and Analysis. CRC Press.
Huang, S.H., Liu, Q., Musa, R., 2004. Tolerance-based process plan evaluation using Monte Carlo simulation. International Journal of Production Research 42, 4871–4891.
Lee, W.J., Woo, A.C., 1987. Tolerancing: its distribution, analysis, and synthesis.
Lehtihet, E.A., Ranade, S., Dewan, P., 2000. Comparative evaluation of tolerance control chart models. International Journal of Production Research 38, 1539–1556.
Michael, W., Siddall, J.N., 1982. The optimal tolerance assignment with less than full acceptance. Journal of Mechanical Design 104, 855–860.
Qureshi, A.J., Dantan, J.Y., Sabri, V., Beaucaire, P., Gayton, N., 2012. A statistical tolerance analysis approach for overconstrained mechanism based on optimization and Monte Carlo simulation. Computer-Aided Design 44, 132–142.
Ryan, T.P., 2011. Statistical Methods for Quality Improvement. John Wiley & Sons.
Speckhart, F.H., 1972. Calculation of tolerance based on a minimum cost approach. Journal of Engineering for Industry 94, 447–453.
Spotts, M.F., 1973. Allocation of tolerances to minimize cost of assembly. Journal of Engineering for Industry 95, 762–764.
Standard, A., 1994. Y14.5M, Dimensioning and Tolerancing. American Society of Mechanical Engineers, New York.
Sutherland, G.H., Roth, B., 1975. Mechanism design: accounting for manufacturing tolerances and costs in function generating problems. Journal of Engineering for Industry 97, 283–286.
Tsai, J.C., Kuo, C.H., 2012. A novel statistical tolerance analysis method for assembled parts. International Journal of Production Research 50, 3498–3513.
Villeneuve, F., Legoff, O., Landon, Y., 2001. Tolerancing for manufacturing: a three-dimensional model. International Journal of Production Research 39, 1625–1648.
Xu, S., Keyser, J., 2016. Statistical geometric computation on tolerances for dimensioning. Computer-Aided Design 70, 193–201.
Appendix A. Area of Projection

We need to find E(Σ)'s projections, S_{xy}, S_{yz} and S_{zx}, to approximate an ellipsoid's volume. In the n-D case, the covariance matrix is Σ. For each element a_{ij} in Σ,

a_{ij} = \begin{cases} \sigma_i^2 & i = j \\ \sigma_i \sigma_j \rho_{ij} & i \ne j. \end{cases}   (A.1)

Without loss of generality, assume that we want to find the ellipsoid's projection in the x_1–x_2 plane. Then we need to decompose X and Σ^{-1} as

X = \begin{pmatrix} X'_{2\times 1} \\ X''_{(n-2)\times 1} \end{pmatrix}, \qquad \Sigma^{-1} = \begin{pmatrix} A_{2\times 2} & B^T_{2\times(n-2)} \\ B_{(n-2)\times 2} & C_{(n-2)\times(n-2)} \end{pmatrix}.   (A.2)

For each point X' on the boundary of the projected ellipse, a line from X' perpendicular to the x_1–x_2 plane will intersect the ellipsoid (tangentially) at exactly one point. So we need to find the X' such that X'' has only one solution in

X''^T C X'' + 2 X''^T B X' + X'^T A X' = R_n(p).   (A.3)

Because C is a positive definite matrix, the minimum value of the left-hand side (LHS) occurs at the point

X'' = -C^{-1} B X'.   (A.4)

If the value of the LHS is R_n(p) at this point, then the equation has a unique solution (similar to a simple quadratic equation):

X'^T (A - B^T C^{-1} B) X' = R_n(p).   (A.5)

This is the equation for the boundary of the projection, an ellipse in R^2, and its area is

\frac{\pi R_n(p)}{\sqrt{\det(A - B^T C^{-1} B)}}.   (A.6)

After simplification, the area is

\pi R_n(p)\, \sigma_1 \sigma_2 \sqrt{1 - \rho_{12}^2}.   (A.7)

When n = 3, there are three projections (Figure 2). The areas of the three projections are

S_{xy} = \pi R_3(p)\, \sigma_1 \sigma_2 \sqrt{1 - \rho_{12}^2},   (A.8)
S_{yz} = \pi R_3(p)\, \sigma_2 \sigma_3 \sqrt{1 - \rho_{23}^2},   (A.9)
S_{zx} = \pi R_3(p)\, \sigma_3 \sigma_1 \sqrt{1 - \rho_{13}^2}.   (A.10)

Or, in another form which can be used in optimization, with

\Sigma = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix},   (A.11)

S_{xy}^2 = \pi^2 (a_{11} a_{22} - a_{12}^2),   (A.12)
S_{yz}^2 = \pi^2 (a_{22} a_{33} - a_{23}^2),   (A.13)
S_{zx}^2 = \pi^2 (a_{33} a_{11} - a_{13}^2).   (A.14)
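As a numerical cross-check (ours, not part of the paper), the Schur-complement form of Equation (A.6) and the closed form of Equation (A.7) agree for a random positive-definite Σ:

```python
# Sketch: numeric check that Eq. (A.6) equals Eq. (A.7) for a random 3D tolerance matrix.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
Sigma = M @ M.T + 0.2 * np.eye(3)            # random positive-definite Sigma
R = chi2.ppf(0.95, df=3)

P = np.linalg.inv(Sigma)                     # Sigma^{-1}, partitioned as in Eq. (A.2)
A, B, C = P[:2, :2], P[2:, :2], P[2:, 2:]
area_schur = np.pi * R / np.sqrt(np.linalg.det(A - B.T @ np.linalg.inv(C) @ B))

sig = np.sqrt(np.diag(Sigma))
rho12 = Sigma[0, 1] / (sig[0] * sig[1])
area_closed = np.pi * R * sig[0] * sig[1] * np.sqrt(1.0 - rho12 ** 2)
print(abs(area_schur - area_closed))         # ~ 0
```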
Appendix B. Different Approximate Methods

There are four methods to approximate the exact volume of the ellipsoid (S_0) mentioned in the paper:

S_1 = \frac{4}{3\sqrt{\pi}}\, (S_{xy} S_{yz} S_{zx})^{1/2},   (B.1)
S_2 = \frac{4}{3^{7/4}\sqrt{\pi}}\, (S_{xy}S_{yz} + S_{yz}S_{zx} + S_{zx}S_{xy})^{3/4},   (B.2)
S_3 = \frac{4}{3^{5/2}\sqrt{\pi}}\, (S_{xy} + S_{yz} + S_{zx})^{3/2},   (B.3)
S_4 = \frac{4}{3^{7/4}\sqrt{\pi}}\, (S_{xy}^2 + S_{yz}^2 + S_{zx}^2)^{3/4}.   (B.4)

We will prove that

S_0 \le S_1 \le S_2 \le S_3 \le S_4   (B.5)

in theory. The inverse of the covariance matrix has the form

\Sigma^{-1} = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{12} & b_{22} & b_{23} \\ b_{13} & b_{23} & b_{33} \end{pmatrix},   (B.6)

and the equation of the ellipsoid E(Σ) is

X^T \Sigma^{-1} X = 1.   (B.7)

The exact volume is

S_0 = \frac{4\pi}{3\sqrt{\det(\Sigma^{-1})}}.   (B.8)

According to Appendix A,

S_{xy} = \frac{\pi}{\sqrt{\det(A - B^T C^{-1} B)}},   (B.9)

where A, B and C are as described in Appendix A:

A = \begin{pmatrix} b_{11} & b_{12} \\ b_{12} & b_{22} \end{pmatrix}, \quad B = \begin{pmatrix} b_{13} & b_{23} \end{pmatrix}, \quad C = \begin{pmatrix} b_{33} \end{pmatrix}.   (B.10)

Plugging them into S_{xy}, we have

S_{xy} = \pi \sqrt{b_{33}\,\left(b_{11}b_{22}b_{33} - b_{11}b_{23}^2 - b_{22}b_{13}^2 - b_{33}b_{12}^2 + 2 b_{12}b_{13}b_{23}\right)^{-1}} = \pi \sqrt{\frac{b_{33}}{\det(\Sigma^{-1})}}.   (B.11)

Similarly, we can also get S_{yz} and S_{zx}. So

S_1 = \frac{4\pi}{3} \cdot \frac{(b_{11}b_{22}b_{33})^{1/4}}{\det(\Sigma^{-1})^{3/4}}.   (B.12)

It is obvious that S_0 \le S_1 is equivalent to \det(\Sigma^{-1}) \le b_{11}b_{22}b_{33}. The difference between them is

b_{11}b_{22}b_{33} - \det(\Sigma^{-1}) = b_{11}b_{23}^2 + b_{22}b_{13}^2 + b_{33}b_{12}^2 - 2 b_{12}b_{13}b_{23}.   (B.13)

By the inequality of arithmetic and geometric means (the AM-GM inequality), we get

\tfrac{1}{2} b_{11}b_{23}^2 + \tfrac{1}{2} b_{22}b_{13}^2 \ge |b_{13}b_{23}| \sqrt{b_{11}b_{22}}.   (B.14)

According to the definition of b_{ij}, we can estimate b_{11}b_{22} by

b_{11}b_{22} = \sigma_3^2 \det(\Sigma^{-1}) + b_{12}^2 \ge b_{12}^2.   (B.15)

Thus

\tfrac{1}{2} b_{11}b_{23}^2 + \tfrac{1}{2} b_{22}b_{13}^2 \ge |b_{13}b_{23}b_{12}|.   (B.16)

The same argument shows

\tfrac{1}{2} b_{22}b_{13}^2 + \tfrac{1}{2} b_{33}b_{12}^2 \ge |b_{13}b_{23}b_{12}|,   (B.17)
\tfrac{1}{2} b_{33}b_{12}^2 + \tfrac{1}{2} b_{11}b_{23}^2 \ge |b_{13}b_{23}b_{12}|.   (B.18)

So the difference becomes

b_{11}b_{22}b_{33} - \det(\Sigma^{-1}) \ge 3 |b_{13}b_{23}b_{12}| - 2 b_{12}b_{13}b_{23} \ge 0,   (B.19)

and S_0 \le S_1 is proved. By using the AM-GM inequality again, we can show

S_{xy}S_{yz} + S_{yz}S_{zx} + S_{zx}S_{xy} \ge 3 (S_{xy}S_{yz}S_{zx})^{2/3}.   (B.20)

This shows that S_1 \le S_2. We can then see that

(S_{xy} + S_{yz} + S_{zx})^2 - 3(S_{xy}S_{yz} + S_{yz}S_{zx} + S_{zx}S_{xy}) = \tfrac{1}{2}(S_{xy} - S_{yz})^2 + \tfrac{1}{2}(S_{yz} - S_{zx})^2 + \tfrac{1}{2}(S_{zx} - S_{xy})^2 \ge 0,   (B.21)

so S_2 \le S_3. And, finally,

3(S_{xy}^2 + S_{yz}^2 + S_{zx}^2) - (S_{xy} + S_{yz} + S_{zx})^2 = (S_{xy} - S_{yz})^2 + (S_{yz} - S_{zx})^2 + (S_{zx} - S_{xy})^2 \ge 0,   (B.22)

so S_3 \le S_4.
Appendix C. Positive Semi-definite Property

Ideally, the cascaded tolerance zone E(Σ') should be controlled by the target tolerance zone E(Σ). For any point in E(Σ'), the point must also be inside E(Σ):

X^T \Sigma'^{-1} X \le R_n(p) \;\Rightarrow\; X^T \Sigma^{-1} X \le R_n(p).   (C.1)

Suppose X can be any non-zero point in R^n. As Σ' and Σ'^{-1} are positive definite, we can write

X^T \Sigma'^{-1} X = R > 0,   (C.2)

and then we have

\left(\sqrt{\tfrac{R_n(p)}{R}}\, X\right)^T \Sigma'^{-1} \left(\sqrt{\tfrac{R_n(p)}{R}}\, X\right) = R_n(p).   (C.3)

According to equation (C.1),

\left(\sqrt{\tfrac{R_n(p)}{R}}\, X\right)^T \Sigma^{-1} \left(\sqrt{\tfrac{R_n(p)}{R}}\, X\right) \le R_n(p).   (C.4)

Subtracting equation (C.4) from equation (C.3), it follows that

X^T \left(\Sigma'^{-1} - \Sigma^{-1}\right) X \ge 0,   (C.5)

which means that Σ'^{-1} − Σ^{-1} is positive semi-definite.
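A small numerical illustration (ours, not part of the paper) of this property: when the goal zone contains the cascaded zone, i.e. Σ − Σ' is positive semi-definite, the difference Σ'^{-1} − Σ^{-1} has no negative eigenvalues.

```python
# Sketch: numeric illustration of Appendix C for random matrices.
import numpy as np

rng = np.random.default_rng(3)
M1, M2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
Sigma_p = M1 @ M1.T + 0.1 * np.eye(3)        # cascaded Sigma' (positive definite)
Sigma = Sigma_p + M2 @ M2.T                  # goal Sigma, so Sigma - Sigma' is PSD

diff = np.linalg.inv(Sigma_p) - np.linalg.inv(Sigma)
print(np.linalg.eigvalsh(diff).min() >= -1e-12)   # True: Sigma'^{-1} - Sigma^{-1} is PSD
```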