Finite Elements in Analysis and Design 131 (2017) 25–42
A Bi-directional Evolutionary Structural Optimisation algorithm with an added connectivity constraint
David J. Munk*, Gareth A. Vio, Grant P. Steven
AMME, The University of Sydney, NSW 2006, Australia

* Corresponding author. E-mail address: [email protected] (D.J. Munk).

http://dx.doi.org/10.1016/j.finel.2017.03.005
Received 28 November 2016; Accepted 28 March 2017; Available online 21 April 2017
0168-874X/ © 2017 Elsevier B.V. All rights reserved.

Keywords: Evolutionary; Zhou-Rozvany problem; Connectivity; Optimisation; Fully-stressed design; Compliant design

ABSTRACT
This paper proposes the introduction of a connectivity constraint in the Bi-directional Evolutionary Structural Optimisation (BESO) method, which avoids the possibility of arriving at highly non-optimal local optima. By developing a constraint that looks at the usefulness of complete members, rather than just elements, local optima are shown to be avoided. This problem, which affects both evolutionary and discrete optimisation techniques, has divided the optimisation community and resulted in significant discussion. This discussion has led to the development of what is now known in the literature as the Zhou-Rozvany (Z-R) problem. After analysing previous attempts at solving this problem, an updated formulation for the convergence criteria of the proposed BESO algorithm is presented. The convergence of the sequence is calculated by the structure's ability to safely carry the applied loads without breaking the constraints. The Z-R problem is solved for both stress minimisation and minimum compliance, further highlighting the flexibility of the proposed formulation. Finally, this paper aims to give some new insights into the uniqueness of the Z-R problem and to discuss the reasons for which discrete methods struggle to find suitable global optima.
1. Introduction

Evolutionary algorithms are characterised by ease of application, greater robustness and lower complexity compared with most of the classical methods used for optimisation [10]. Despite these benefits, evolutionary methods have been criticised for their lack of algorithmic convergence and for the difficulty of selecting appropriate stopping criteria [29]. One such criticism of the ESO/BESO method showed that hard-kill methods can arrive at highly non-optimal designs when applied to a beam-tie problem [38]. This test case became known as the Zhou-Rozvany problem, and it is the aim of this article to develop a BESO algorithm that avoids these convergence issues and is capable of producing optimal designs for this problem. Zhou and Rozvany show that ESO's rejection scheme can result in highly non-optimal designs and that the readmission scheme used in the BESO algorithm is unable to correct this failure [38]. They attribute this shortfall of the algorithm to the inability of the sensitivity numbers to give a good estimate of the change in the objective function. This is due to the heuristic nature of the algorithm, i.e. the element thicknesses are discrete, taking values in {0, 1} with no intermediate values. Failure of the sensitivity number occurs when its value varies significantly with respect to its normalised density [38]. Since evolutionary methods are discrete, this change is not identified and the
element is never readmitted. Nevertheless, the merits of evolutionary algorithms have been recognised by the gradient-based optimisation community and improvements have been suggested. Rozvany and Querin suggested that the sensitivity numbers given in the prior iteration should be compared with the actual finite change of the objective function [23]. If a large discrepancy is obtained, the element is kept or banned from being removed. Zhou pointed out at the WCSMO-7 congress that this proposed improvement may become uneconomical for the very large systems used in practice [21]. Rozvany gives an example of 100 rejected elements out of 100,000 total elements, resulting in 100 checks that have to be made in each iteration [21]. This would significantly increase the computational cost of the algorithm, nullifying one of the benefits of evolutionary methods. From this early criticism a new method, termed Sequential Element Rejections and Admissions (SERA), was developed by Rozvany and Querin [24]. The sensitivity numbers are calculated from the Lagrange multipliers, as in the case of gradient-based methods [39,40]. SERA also introduces the concept of virtual material, where instead of completely removing an element an infinitesimal density/thickness is assigned. The success of this method was demonstrated at the WCSMO-4 congress [22], but with the implementation of virtual material the approach is no longer a hard-kill topology optimisation method. Furthermore, the algorithm still suffers from an inaccurate
sensitivity analysis and is unable to solve the Z-R problem. Nevertheless, such improvements have been applied successfully to multi-constraint problems [25] and flexible structures [1]. Another solution is to use a finer finite element mesh in the analysis [5,6,8,19,41,15]. This goes against some of the algorithm's merits, such as low Finite Element Analysis (FEA) time [34]; however, it was shown to lead to optimal solutions for the Z-R problem in its original formulation [9]. In spite of this success, Rozvany showed in a recent critical review of established topology optimisation methods that, for any mesh refinement, the ESO algorithm would still produce non-optimal results for the Z-R problem if the magnitude of the loads were increased [21]. Huang and Xie show that the solution to the original Z-R problem obtained by the ESO/BESO method is a highly inefficient local optimum, rather than a non-optimum [10]. They demonstrated that, if the penalty number is increased to p ≥ 3.1, the SIMP algorithm arrives at the same local optimum as the ESO/BESO methods. Therefore one cannot dismiss an optimisation method that arrives at a local optimum for a non-convex problem. To avoid the possibility of arriving at local optima, Huang and Xie suggest checking the boundary and loading conditions after every design iteration [11]. If after a certain design iteration the boundary and/or loading conditions have changed, the sequence cannot continue until corrective measures have been taken. However, Rozvany showed that this procedure can cause computational complications [21]. Furthermore, an initial problem may be given with many possible supports, some of which are uneconomical to use. One of the aims of topology optimisation for multiple support problems, such as the Z-R problem, is to select the supports that should be used. This would not be possible if all boundary conditions were frozen. Recently, Stolpe and Bendsoe considered the minimum compliance variation of the Z-R problem and developed a nonlinear branch and cut algorithm to solve for global optima [33]. They showed several optimal designs for varying volume constraints; this is the only example of this kind of algorithm solving this class of problem. Ghabraie examined the evolutionary algorithm and its application to the Z-R problem [7]. He proposed a problem statement for ESO and performed an accurate sensitivity analysis, mathematically justifying the algorithm. He showed that, by including the higher order terms of the Taylor expansion in the sensitivity analysis, the ESO method is not prone to the problems proposed by Zhou and Rozvany [38]. However, even the revised ESO algorithm is unable to solve the Z-R problem, suggesting that the reason for this is ESO's inherent unidirectional approach [7]. He concluded that the ESO method should only be used on a very specific class of problems, where the constraints demand a unidirectional approach to final solutions. He also suggested that the BESO method is not prone to this shortcoming, but did not apply this approach to the Z-R problem.
2. Research gaps

It has been demonstrated in the literature review (Section 1) that the ESO/BESO algorithm can suffer from convergence issues and an inaccurate sensitivity analysis. This has mainly been shown through the Z-R problem, where current ESO/BESO algorithms struggle to find a suitable solution. However:

• While it is known that the sensitivity analysis of ESO/BESO algorithms can become inaccurate, it is not known when. This article shows that by removing one element from the domain the nature of the problem is changed. The BESO algorithm tries to solve this new problem and never returns to the old problem. However, if this change in nature is detected then the problem is never changed and the BESO algorithm is able to solve the original problem.
• Prior research has shown that refining the mesh can provide more suitable topologies for the problem. However, this comes at a great expense in computational time. In this paper refined meshes are analysed to show that the updated algorithm can find the same final topologies at a much lower computational expense, thus maintaining the original benefits of ESO/BESO algorithms.
• One prior method for avoiding highly inefficient local optima is the so-called "freezing" approach suggested by [9]. While this approach is suitable for solving the Zhou-Rozvany problem with BESO methods, [21] points out that it would fail to determine which supports would be economical to use for multiple-support problems. Furthermore, [7] noted that one of the shortcomings of this method is that it can also fail on statically determinate problems, where there is no change in boundary or loading conditions. Therefore, in this paper, a statically determinate problem suggested by [7] is solved to show the improvements of the presented approach over the freezing approach.
• Finally, a recent study [33] has provided a range of different optima for varying volume fractions of the Z-R problem, providing a benchmark for new or modified algorithms. In this study the modified BESO algorithm is verified against some of these optima.

The aim of this work is to develop a connectivity constraint to embed inside the BESO method so that the topology can arrive at optimal solutions. This updated technique is used to solve the Z-R problem, which is special in its formulation in that the solution can become two completely different problems through the removal of a single element. The Z-R problem will be solved in its different conceptions, i.e. stress and compliance, as well as for the different volume fractions and grid resolutions that have been developed over the years by the topology optimisation community. The updated technique uses a similar approach to that suggested by Rozvany and Querin, where the sensitivity numbers given in the prior iteration are compared with the actual finite change of the objective function [23], but the algorithm presented in this article only performs this check when the connectivity of a member is broken; it assesses whether or not the member is economical, thus improving computational efficiency. The solution suggested by Rozvany and Querin [23] was never implemented due to its excessive computational burden [21]. The new connectivity constraint works by checking whether the objective function of the structure increases above a predefined tolerance, only for the elements that do not satisfy connectivity. If this change is above the tolerance then the elements are readmitted to reconnect the structure. Otherwise, the complete structural member, or the elements which do not satisfy connectivity, are removed.

3. Theoretical analysis

The algorithm used in this article is a hard-kill BESO method with a novel connectivity criterion. The aim of this work is to fundamentally understand why ESO/BESO algorithms suffer from convergence problems and how this can be avoided in a manner that does not add a significant computational burden, using the Z-R problem as a test case. Furthermore, elements with very small positive densities, such as those found in soft-kill BESO (a discrete method with a non-zero density defined for void elements) and SIMP algorithms (a continuous method with densities varying between a very small positive number and 1), may cause the global stiffness matrix to become ill-conditioned, particularly for a nonlinear structure [17]. For these reasons the hard-kill BESO method may be preferable, though it still suffers from computational difficulties due to its discrete nature, as outlined in Section 1.
3.1. Hard-Kill Bi-directional Evolutionary Structural Optimisation (BESO)

The BESO method, in its simplest form, determines the optimal topology of a structure according to the relative ranking of its sensitivity numbers. The BESO method, unlike the ESO method, allows
material to be removed and added simultaneously. BESO was introduced for stiffness optimisation by Yang et al. [37] and is here applied to the Z-R problem, for minimum compliance and stress minimisation. An updated formulation for the sensitivity analysis checks the usefulness of complete members rather than just elements. The current hard-kill BESO algorithm determines the elemental sensitivities, which are then ranked (see Section 3.3). A small proportion of the active elements with low sensitivity numbers are then removed. Furthermore, the removed elements are ranked in ascending order; if void elements with high sensitivity numbers are considered beneficial, they are readmitted into the design domain. However, the Z-R problem has shown that this sensitivity analysis can become inaccurate, resulting in poor final topologies. The present work aims to suggest a modified sensitivity analysis to identify this issue and circumvent the problem. The modification is effectively a connectivity constraint, which is outlined in more detail in the following section. It assesses the usefulness of structural members/boundaries as a whole, instead of single elements as is the case for current BESO algorithms. The evolutionary procedure for the updated BESO method is as follows:

1. Discretise the design domain using a finite element mesh and assign the design variables x for the initial design.
2. Initialise the problem by defining the volume constraint (V), removal ratio (RR) and inclusion ratio (IR).
3. Perform a finite element analysis on the structure and determine the elemental sensitivity numbers (Section 3.3).
4. Calculate the nodal sensitivity numbers and pass them through the mesh independency filter (Section 3.3).
5. Average the sensitivity numbers with the sensitivities of the previous iteration.
6. Add and delete elements to construct the new design (Section 3.3).
7. Check whether the resulting design is coherent; if not, pass it through the connectivity filter (Section 3.2), otherwise move on to the next step.
8. Repeat steps 3-7 until all constraints are met and the sequence has converged.
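As a concrete illustration of steps 1-8, the following Python sketch shows one possible organisation of the main loop. It is a schematic outline only, not the authors' implementation: the callables fe_solve, element_sensitivities, smooth_sensitivities, update_design, is_coherent, connectivity_filter and converged are hypothetical placeholders for the operations described in Sections 3.2 and 3.3 and must be supplied by the user.

```python
import numpy as np

def beso_loop(n_elements, fe_solve, element_sensitivities, smooth_sensitivities,
              update_design, is_coherent, connectivity_filter, converged,
              max_iter=500):
    """Schematic hard-kill BESO loop following steps 1-8 above.

    All callables are user-supplied placeholders:
      fe_solve(x)                  -> nodal displacement vector u
      element_sensitivities(x, u)  -> raw elemental sensitivity numbers
      smooth_sensitivities(alpha)  -> filtered sensitivities (Section 3.3)
      update_design(x, alpha)      -> new 0/1 design (respecting RR_max, IR_max)
      is_coherent(x)               -> True if the structure is fully connected
      connectivity_filter(x_old, x_new, alpha) -> corrected design (Section 3.2)
      converged(x_old, x_new)      -> True when the constraints are met and the
                                      design no longer changes
    """
    # Steps 1-2: start from a full design (all elements solid).
    x = np.ones(n_elements, dtype=int)
    alpha_prev = None

    for _ in range(max_iter):
        # Step 3: finite element analysis and elemental sensitivities.
        u = fe_solve(x)
        alpha = element_sensitivities(x, u)

        # Step 4: nodal sensitivities and mesh-independency filter.
        alpha = smooth_sensitivities(alpha)

        # Step 5: average with the previous iteration to stabilise the history.
        if alpha_prev is not None:
            alpha = 0.5 * (alpha + alpha_prev)
        alpha_prev = alpha.copy()

        # Step 6: add and delete elements to construct the new design.
        x_new = update_design(x, alpha)

        # Step 7: if coherence is lost, pass the design through the
        # connectivity filter, which may readmit elements.
        if not is_coherent(x_new):
            x_new = connectivity_filter(x, x_new, alpha)

        # Step 8: stop when all constraints are met and the design is stable.
        if converged(x, x_new):
            return x_new
        x = x_new
    return x
```

Note that, as described above, the connectivity filter is invoked only when coherence is lost, which is what keeps the additional cost of the check small.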
An overview of the algorithm is given in Fig. 1, showing the extensions made (inside the dotted rectangle) such that the algorithm avoids changing the topology based on inaccurate sensitivity calculations. In Fig. 1, ΔO(x) is the change in the objective function and ζ is a predetermined threshold value, set in this analysis to a change of an order of magnitude in the objective, i.e. a difference of greater than or equal to 1 in the base-10 logarithm (log10 ΔO(x) ≥ 1). This is one point of weakness of this modification, at least in terms of rigour, since an extra variable must be defined whose value has an impact on the convergence of the algorithm. Ghabraie [7] pointed out that this type of approach is "overall reasonable"; however, he argues that it would be difficult to ensure that the threshold is set large enough to consider the sensitivity calculation as being inaccurate. The authors of this work agree that it is not foreseeable for a threshold to be set that ensures, for all problems, detection of the structural mechanics being changed. However, from their experience it is suitable to calculate the threshold based on the changes in the objective that occur when connectivity is not broken: the change in objective from removing one element should not be noticeably larger than the previous changes in the objective. Nevertheless, this is analogous to any variable in an optimisation algorithm; a common example is different penalty exponents leading to different solutions. To be sure the variables used are suitable, one must change them and rerun the optimisation to assess their effect. The following sections detail the connectivity filter and convergence criteria for the modified BESO method of this work.

3.2. Connectivity filter

It has been noted that, due to the discrete nature of the hard-kill ESO/BESO algorithms, an inaccurate sensitivity analysis can lead to non-optimal structures. This has been attributed to a breakdown in the boundary support during the optimisation process. For example, Huang and Xie suggest freezing the elements at the boundary and loaded regions of the structure [9,11]. Once the pre-defined loading or boundary condition is altered, the optimisation process should not proceed until corrective measures have been put into place. They also suggest the use of a finer mesh to discretise the initial design. For certain cases it has been shown that the current ESO/BESO techniques cannot recover and tend towards a highly inefficient local optimum [38,10]. To counteract this problem a connectivity filter is developed to ensure that a coherent structure is maintained throughout the optimisation process. Therefore, no structural islands, i.e. parts of the structure that are not connected to the main structure, will be present in the final topology. The connectivity filter guarantees that all the active elements are connected. This is achieved by checking that all the elements kept in the design domain are connected to each other. An ill-defined connectivity filter may result in element pairs that are not connected to the rest of the structure. To avoid this, a check is made to ensure that at least two opposite edges of an element are connected to the structure. In the case of the breakdown of a member, the accuracy of the previous sensitivity analysis is checked by calculating the change in the objective function (Fig. 1). If the change in the objective is greater than the threshold, the element is readmitted to enforce connectivity. This guarantees that members are not completely removed from the design if elements are prematurely disconnected (Fig. 2). For the connectivity filter to detect a change in the connectivity of the design, an associativity matrix, which describes the number of active elements connected to the structure [32], is implemented. To illustrate how this works, two test examples are given, showing the removal of element(s) resulting in a disconnected and in a connected final structure. The first example is illustrated in Fig. 2, which shows an element that has been removed, with the result that the two bar structures are no longer connected. Furthermore, the associativity matrix before and after the element has been removed is given. The rows of the matrix are the active elements in the design and the columns are the initial elements in the design space. Zero columns indicate that the original element is no longer connected to any remaining elements. The lines in the associativity matrices indicate how connectivity between all active elements in the design is achieved. If the structure is coherent, then all active elements (indicated by a 1 in A) can have a line drawn to them (as in the top half of Fig. 2). Otherwise, if the connectivity is broken, so too is the line joining the active elements in A (as in the bottom half of Fig. 2). Thus, Fig. 2 shows how the algorithm picks up a disconnection in the structure. A check is made to see whether removing the element results in a large change in the objective function, as suggested by Rozvany and Querin [23]. If the sensitivity analysis is deemed correct, the element is removed from the design space. However, if a large change is observed in the objective function, the sensitivity analysis is deemed inaccurate and the element is readmitted to the design space. The second example is given in Fig. 3. In this case there is no disconnection; therefore, the structure remains coherent. Two elements are removed from the design domain; the corresponding A matrix before and after they are removed is shown in Fig. 3. As illustrated, although the connectivity lines change, all active elements can still have a line drawn to them. Therefore, the filter is able to determine that the structure remains coherent and no check on the sensitivity analysis is performed. The problem statement for the minimum compliance problem is therefore modelled as the mixed 0-1 non-convex problem,
Fig. 1. Flow chart of updated BESO method.
Fig. 2. Connectivity filter showing disconnected members.
Fig. 3. Connectivity filter showing connected members.
minimize    (1/2) f^T u
subject to  K(x) u = f
            A(x) ≥ b
            ∑_{j=1}^{n} x_j ≤ (V/V_0) n
            x ∈ {0, 1}^n                                                    (1)
where x is the vector of design variables, V is a predefined structural volume, V_0 is the total design space volume, K(x) is the global stiffness matrix and u and f are the nodal displacement and force vectors, respectively. Furthermore, A(x) ≥ b are general linear inequality constraints, where A ∈ R^{n×m} is a given associativity matrix that describes the number of active elements connected to the structure (see Figs. 2 and 3), i.e. each row represents an active element of the design space and each column gives the state of the connected element, where a 1 indicates that the corresponding element connection is active and a 0 that it is not. Finally, b ∈ R^n is a given vector that defines the minimum number of elements that must be connected to satisfy the connectivity constraint; in this case, either of the two opposite edges of an active element must have at least one element connected to it. Alternatively, the force vector of each element can be checked: if active elements are not taking any of the applied load, this indicates that these elements are not connected to the structure. However, for the purpose of this study, the method outlined in Figs. 2 and 3 is used. Each row in b represents an active element in the design space, with the corresponding number of elements required to satisfy connectivity given for that element. Therefore, taking as an example the problem illustrated in Fig. 2, the b vector would be:
b = [1, 2, 2, 3, 1, 4, 1, 2, 2, 3, 1]^T                                     (2)
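To make the check concrete, the following minimal Python sketch shows one way the coherence test and the accompanying objective-change test could be implemented on a regular grid of four-node elements. It is an illustrative assumption, not the authors' code: instead of bookkeeping the full associativity matrix A and vector b, it uses an equivalent flood fill over edge-connected solid elements to detect structural islands, and the helper names (connected_components, sensitivity_inaccurate) are invented for this sketch.

```python
import numpy as np

def connected_components(active):
    """Label edge-connected groups of solid elements on a regular grid.

    `active` is a 2D 0/1 array (1 = solid element, 0 = void).  Two solid
    elements are considered connected if they share an edge, mirroring the
    associativity bookkeeping of Figs. 2 and 3.  Returns the number of
    separate groups; a coherent structure has exactly one.
    """
    labels = np.zeros(active.shape, dtype=int)
    current = 0
    for idx in zip(*np.nonzero(active)):
        if labels[idx]:
            continue
        current += 1
        labels[idx] = current
        stack = [idx]
        while stack:                               # flood fill over edge neighbours
            i, j = stack.pop()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < active.shape[0] and 0 <= nj < active.shape[1]
                        and active[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = current
                    stack.append((ni, nj))
    return current

def sensitivity_inaccurate(obj_before, obj_after, zeta=1.0):
    """Order-of-magnitude test on the objective change (Section 3.1).

    Following the paper's stated criterion, the previous sensitivity
    analysis is flagged as inaccurate when log10(dO) >= zeta (zeta = 1);
    the element(s) breaking connectivity are then readmitted.
    """
    delta = abs(obj_after - obj_before)
    return delta > 0.0 and np.log10(delta) >= zeta

# Example: removing the single element joining two bars (cf. Fig. 2)
design = np.array([[1, 1, 1, 0, 1, 1, 1],
                   [1, 1, 1, 1, 1, 1, 1]])
print(connected_components(design))                # 1 -> still coherent
design[1, 3] = 0                                   # remove the joining element
print(connected_components(design))                # 2 -> disconnected members
```

In a full implementation this test would be applied to the candidate design produced in step 6 of the procedure in Section 3.1, with the disconnecting elements readmitted whenever the objective change exceeds the threshold.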
These inequality constraints are used to model the conditions on the connectivity of the structure. For the stress minimisation criterion, the other objective outlined in the original problem formulation given in [38], a constraint on the stress is added to ensure that the material limits are not exceeded. Therefore, the problem statement for the minimum stress criterion is modelled as the mixed 0-1 non-convex problem [10],

minimize    max(σ_vm^elem)
subject to  K(x) u = f
            σ = D B u
            A(x) ≥ b
            σ_vm^elem ≤ σ_vm^y
            ∑_{j=1}^{n} x_j ≤ (V/V_0) n
            x ∈ {0, 1}^n                                                    (3)

where D and B are the conventional elasticity and strain-displacement matrices respectively, σ_vm^elem is the elemental von Mises stress and σ_vm^y is the yield stress of the material. The problem formulation presented here is a discrete version of that given by Lee et al. [12].

3.3. Sensitivity number and convergence criteria

The current convergence criteria of ESO/BESO algorithms with von Mises stress objectives are determined by predefined volume fractions. This is not a physical limitation of the structure and, depending on the volume fraction set, can result in significantly different topologies [29]. For these reasons the convergence criteria used in this work for the von Mises stress objective problem are based on the stress limitations of the defined structure. Before applying the filter scheme and constraints, the element sensitivity numbers must first be defined. Studies by McKeown [16] and Li et al. [14] have shown that fully stressed designs are equivalent to the stiffest design for a single load and freedom case. Since it is the goal of the designer to achieve a balance between the stress and stiffness that suits the design at hand [31], sensitivity numbers for both stiffness and stress are defined. In finite element analysis the change in the stiffness of the structure due to the removal of an element is equal to the element strain energy [4]. This change can be defined as the element sensitivity number for the compliance minimisation problem,
α_c^elem = (1/2) u^T K^elem u                                               (4)
where u is the nodal displacement vector of the considered element and K^elem is the element stiffness matrix. For the stress minimisation
problem [13], the von Mises stress can be used to define the element sensitivity,
α_s^elem = σ_vm^elem                                                        (5)

where σ_vm^elem is the von Mises stress of the element under consideration. If a non-uniform mesh is used, the element sensitivity number should be scaled by the element volume, since the element sensitivity number is a measure of the element's contribution to the objective of the optimisation problem. The scaled sensitivity number can be defined as,

α^elem = σ_vm^elem V^elem                                                   (6)

where V^elem is the volume of the element being considered; this equates the units of the stress and strain energy sensitivity numbers (Nm). To smooth the element sensitivity numbers across the entire domain, a filter scheme is used that alleviates the problems of checkerboarding and mesh dependency. This filter scheme is similar to the mesh independency filters used for the SIMP algorithm [28,30]; however, nodal sensitivity numbers are used when calculating the updated element sensitivity numbers based on the surrounding structure, as suggested by [8]. The nodal sensitivity numbers are defined as the average of the element sensitivity numbers connected to the node,

α_j^n = ∑_{i=1}^{M} w_i α_i^elem                                            (7)

where M is the number of elements connected to the jth node being considered and α_i^elem is the ith element sensitivity number (Eqs. (4), (5) and (6)). The weighting factor of the ith element, w_i, is defined with respect to its distance from the jth node:

w_i = (1 / (M − 1)) (1 − r_ij / ∑_{i=1}^{M} r_ij)                           (8)

where r_ij is the distance between the centre of the ith element and the jth node being considered. The nodal sensitivity numbers are then converted into smoothed element sensitivity numbers through a filter scheme. The filter radius, r_min, is set to identify the nodes that will have an effect on the element sensitivity. The value of r_min must be large enough that the sub-domain Ω, which is the domain defined by a circle of radius r_min centred at the centre of the element in consideration, covers at least one element. Nodes located inside Ω contribute to the smoothing of the element sensitivity,

α_i = ∑_{j=1}^{N} ω(r_ij) α_j^n / ∑_{j=1}^{N} ω(r_ij)                       (9)

where N is the total number of nodes in the sub-domain Ω and ω(r_ij) is the linear weighting factor defined as,

ω(r_ij) = r_min − r_ij,   j = 1, 2, …, N                                    (10)

where r_ij is the distance between the jth node inside Ω and the centre of the ith element under consideration. The filter scheme smooths the nodal sensitivity numbers over the entire design domain. Consequently, sensitivity numbers for void elements can be obtained, as they are determined by the nodal sensitivities in the sub-domain. It must be noted that the SIMP method incorporates the density of the element into the filter; this would result in an infinite sensitivity number for void elements if the design variable were set to zero, as is done for hard-kill techniques. However, the SIMP method and all other soft-kill algorithms define a minimum design variable, e.g. x_min = 0.001, to avoid this discontinuity [27]. Thus, this filter scheme has been adopted for hard-kill algorithms. The solid and void elements are then ranked. The solid elements with the lowest sensitivity numbers are removed and the void elements with the highest sensitivity numbers are added. Unlike the current convergence criteria for ESO/BESO techniques with stress objectives, where the number of removed and included elements is determined based on a goal volume fraction for each design iteration, here elements are only included if the structure satisfies,

σ_vm^max > σ_vm^allowed                                                     (11)

where σ_vm^max and σ_vm^allowed are the maximum stress in the structure and the maximum allowed stress of the material, respectively. Therefore material is only added if the structure is under-designed. The amount of material added and removed is a function of the number of solid and void elements. The number of elements removed is defined by,

RR = n_remove / n_e ≤ RR_max                                                (12)

where RR is the ratio of removed elements, n_remove is the number of elements removed from the structure and n_e is the number of solid elements remaining in the structure at the current design iteration. Similarly, the number of elements added is defined by,

IR = n_include / (n_e^0 − n_e) ≤ IR_max                                     (13)

where IR is the ratio of included elements, n_include is the number of elements included and n_e^0 is the number of solid elements in the initial design. Hence, this BESO algorithm must start with a full design, i.e. n_e^0 > n_e, like the algorithm presented by Huang and Xie [8]. It is noted that RR_max and IR_max are introduced to ensure that the topology of the structure is not significantly changed in a single iteration. This is to guarantee structural integrity and convergence. Convergence is assumed when no material can be removed and/or added while keeping the stress below the maximum allowable stress. Therefore, the convergence of the sequence is a function of the structure's ability to safely carry the applied loads.
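For illustration, the sketch below implements the nodal averaging of Eqs. (7)-(8), the smoothing of Eqs. (9)-(10) and the per-iteration limits of Eqs. (12)-(13) in Python, under the simplifying assumption of a regular grid of unit square elements (element centres at (i + 0.5, j + 0.5), nodes on the integer grid). The function names are invented for this sketch and the default RR_max and IR_max values are simply those used later in Section 4.1; none of this is taken from a published implementation.

```python
import numpy as np

def nodal_sensitivities(alpha_elem):
    """Eqs. (7)-(8): average the sensitivities of the elements meeting at each
    node, assuming unit square elements with nodes on the integer grid."""
    ny, nx = alpha_elem.shape                       # number of elements in y, x
    alpha_node = np.zeros((ny + 1, nx + 1))
    for J in range(ny + 1):
        for I in range(nx + 1):
            vals, dists = [], []
            # the (up to four) elements sharing node (I, J)
            for ej, ei in ((J - 1, I - 1), (J - 1, I), (J, I - 1), (J, I)):
                if 0 <= ej < ny and 0 <= ei < nx:
                    vals.append(alpha_elem[ej, ei])
                    dists.append(np.hypot(ei + 0.5 - I, ej + 0.5 - J))
            M = len(vals)
            if M == 1:                              # corner node: single element
                alpha_node[J, I] = vals[0]
            else:
                r = np.asarray(dists)
                w = (1.0 - r / r.sum()) / (M - 1)   # Eq. (8)
                alpha_node[J, I] = np.dot(w, vals)  # Eq. (7)
    return alpha_node

def smoothed_element_sensitivities(alpha_node, r_min=2.0):
    """Eqs. (9)-(10): filter the nodal values back onto the elements with the
    linear weight w(r) = r_min - r over the sub-domain of radius r_min
    (r_min must be large enough for the sub-domain to contain a node)."""
    ny, nx = alpha_node.shape[0] - 1, alpha_node.shape[1] - 1
    alpha = np.zeros((ny, nx))
    for ej in range(ny):
        for ei in range(nx):
            cx, cy = ei + 0.5, ej + 0.5             # element centre
            num = den = 0.0
            for J in range(max(0, int(cy - r_min)), min(ny, int(cy + r_min)) + 1):
                for I in range(max(0, int(cx - r_min)), min(nx, int(cx + r_min)) + 1):
                    r = np.hypot(I - cx, J - cy)
                    if r < r_min:                   # node lies inside the sub-domain
                        num += (r_min - r) * alpha_node[J, I]
                        den += (r_min - r)
            alpha[ej, ei] = num / den
    return alpha

def update_limits(n_solid, n_total, RR_max=0.02, IR_max=0.05):
    """Eqs. (12)-(13): caps on the number of elements that may be removed or
    added in a single design iteration (defaults follow Section 4.1)."""
    return int(RR_max * n_solid), int(IR_max * (n_total - n_solid))
```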
4. Results and discussion

This section outlines and discusses the results obtained from the application of the updated formulation. The Z-R problem for the stress criteria is given first, followed by the definition of the minimum compliance problem. The algorithm is then tested for other volume fractions and compared with the global optima found by Stolpe and Bendsoe [33]. Next, the fine mesh version of the problem, presented by Huang and Xie [9], is analysed. Finally, a statically determinate problem proposed by Ghabraie [7], referred to as the Ghabraie problem in this work, is solved. This is done to demonstrate the improvements of the approach proposed in this work over the freezing approach proposed by Huang and Xie [9]. As noted by Ghabraie [7], one of the shortcomings of the freezing approach is that it cannot be used to solve statically determinate problems.

4.1. The Zhou-Rozvany problem – stress criteria
Zhou and Rozvany introduced a challenging tie-beam problem in their critical review of ESO methods [38]. They showed that ESO's rejection criteria could result in highly non-optimal designs, as outlined in Section 1. This example has been continuously used to illustrate that heuristic algorithms such as ESO [36,35] and variations thereof [20] can fail to find good feasible designs. Therefore, Stolpe and Bendsoe suggest that this problem be introduced into the standard repertoire of benchmark examples when assessing new topology methods [33]. Stolpe and Bendsoe [33] point out that the unique problem provided by Zhou and Rozvany is a "very hard nut to crack". Thus, for some values of the volume constraint they added a symmetry constraint to achieve global optimality within a reasonable tolerance. Therefore, it is appropriate to demonstrate that the BESO algorithm with a connectivity criterion introduced in this paper is able to produce a reasonable solution to the Z-R problem.
Fig. 4. The Zhou-Rozvany problem.
The Z-R problem is an L-shaped clamped beam with a roller support at the top of the vertical-tie, Fig. 4. There is a single load case consisting of a vertical tensile load of intensity wy = 1 and a horizontal compressive load of intensity wx = 2, as seen in Fig. 4. The problem is modelled using a coarse mesh of only 100 four-node plane stress elements. The literature in support of the ESO methods identifies the coarseness of the mesh as the reason for ESO's shortcomings [5,6,10]. Edwards et al. suggest using an initial design that contains 3,600 elements [5], whereas Huang and Xie suggest an initial design of 10,000 elements [10]. However, Rozvany [21] points out that for either of the suggested mesh refinements, ESO produces non-optimal results if the intensity of the horizontal load is increased. Rozvany gives the example of the 10,000 element case after increasing the horizontal load intensity from 2 to 12 [21]. This shows that even when the vertical-tie is only one element thick (instead of ten), the stress in these elements will be approximately 10 units, whereas in the horizontal beam the stress will be about 12 units. Therefore, the ESO method will remove the element next to the top supports, and the structure again becomes highly non-optimal. The BESO technique with a novel connectivity criterion is applied to the coarse Z-R problem. A filter radius, r_min = 2, is used to ensure no checkerboarding or mesh dependency is present. It must be pointed out that Stolpe and Bendsoe also enforce a connectivity constraint when solving the Z-R problem using a local branching method [33]. Experience has shown that for difficult problems a small rejection ratio is required to ensure convergence; therefore RR_max = 0.02 is used along with IR_max = 0.05. Using these optimisation parameters an optimal result is reached after 76 iterations. The material design, x, of the optimal result is illustrated in Fig. 5 and has a volume V(x) = 40. The original initial design domain of Fig. 4 has a volume of V(x) = 100. The objective function, defined here as the maximum von Mises stress of the structure,

S(x) = max(σ_vm)                                                            (14)

is recorded at each iteration. The goal of the optimiser is to minimise the objective function of Eq. (14), since the smaller the objective function the higher the load the structure can withstand. This objective function represents the amount of stress resisted by the structure. It has been chosen as it assesses the effectiveness of the structure in taking the applied load, i.e. lighter structures with the same maximum stress level are more efficient; therefore, the efficiency of the structure can be measured by max(σ_vm) × V. However, it can easily be shown that, for this particular problem, the maximum stress will go up with the removal of material (see Appendix). Hence, the aim is to minimise this increase in maximum stress. The objective function for the optimal design, Fig. 5, is S(x) = 6.064, which is approximately 2.5 times the initial value. However, the weight has been reduced by 60% compared to the initial structure; therefore, max(σ_vm) × V = 2.4376, which is less than the initial value of 2.4612. Thus it can be seen that the final structure is more efficient than the initial one. The final design shown in Fig. 5 differs from the intuitive design given by Zhou and Rozvany [38]. This is because no symmetry condition has been added to the problem; thus the lower part of the horizontal beam is under a higher stress level due to the vertical load, resulting in increased bending stresses in the bottom section. As the loaded end of the beam is approached and the bending stresses are reduced, the centre line of the beam becomes more efficient because it is more in line with the horizontal load. Consequently, a combination of the middle and bottom rows of the horizontal beam is used to diffuse the bending stresses and normal stresses. However, if a symmetry constraint is added, the present solution becomes identical to the design by Zhou and Rozvany. Furthermore, the result (Fig. 5) appears to have a connectivity issue, with finite elements connected at one corner. However, this is a common occurrence in element removal methods; the stiffness matrix is not singular and force can be transmitted through such connections, and a finer mesh will remove this (see Section 4.4). This type of connection was also present in the structures found by Stolpe and Bendsoe [33] with a volume fraction of V ≤ 70. Further, Edwards et al. [6] achieved the same topology as shown in Fig. 5 using the SIMP method with higher penalisation factors. Thus, such a design can be seen as suitable for the coarse mesh problem. Fig. 6 shows the history of the efficiency measure, S(x) × V(x), of the structure. Fig. 6 demonstrates that the efficiency of the final structure is higher than the initial one, since it drops below the dotted line. However, it is clear that large jumps in the efficiency result from structure being added and removed. This is an indicator that the mesh is too coarse for element removal methods like BESO, as removing one element should not result in such sharp changes in the efficiency of the structure. The final volume of V(x) = 40 is the lowest achievable volume for this problem if the non-designable domain, shown in Fig. 4, is upheld.
Fig. 5. Optimal design of the Zhou-Rozvany problem, V (x) = 40 and S (x) = 6.064 .
Fig. 6. Iteration history for the efficiency S (x) × V (x) and volume V (x).
Fig. 7. Optimal design of the Zhou-Rozvany problem, V (x) = 39 and S (x) = 6.126 .
However, if we choose to ignore this constraint, the volume can be reduced to V(x) = 39 and the efficiency measure further decreased to S(x) × V = 2.389. This design is shown in Fig. 7. It is interesting to note that the element removed from the non-designable domain is the same one removed by the local branching method employed by Stolpe and Bendsoe [33]. This is because the elements in the vertical-tie receive the least diffusion from the horizontal load applied to the beam. As expected, the stress level in the horizontal beam is approximately double the stress level in the vertical-tie. Thus, the initial instinct of evolutionary methods is to remove the elements in the vertical-tie; however, with the connectivity constraint in place this is not achievable, since it would result in a non-coherent structure, a large jump in the stress levels of the horizontal beam and a stress decrease to approximately zero in the vertical-tie. Therefore the sequence iterates while elements are slowly removed and re-included in the vertical-tie, until the element in the non-designable region is removed, solving the problem of the inefficient use of the vertical-tie material. Here is where the problem lies for conventional BESO/ESO algorithms: the structural mechanics of the problem changes completely with the removal of one element. The initial problem is not driven by bending stresses, as the presence of the vertical-tie alleviates the large bending forces in the horizontal beam, which has a maximum stress of 2 units due to the horizontal load. However, once the algorithm removes an element from the vertical-tie, the mechanics of the problem change from being driven by normal stresses to being dominated by bending stresses, with a maximum value of 14.57 units [38]. This has now become a completely different problem; the horizontal beam essentially becomes a cantilever. The conventional BESO/ESO algorithms try to solve this new problem by adding material around the high stress region, which occurs in the horizontal beam, specifically at the constrained end where large bending stresses are now present due to the removal of the vertical-tie. This also explains why using a fine mesh can solve the problem [9], since the transition is slower and hence can be picked up by the conventional ESO/BESO algorithm before the structural mechanics of the problem changes. Gradient methods, such as SIMP, do not suffer from this problem since they slowly reduce the density of the elements instead of switching them on and off, as is the case for discrete methods. The branch and cut method proposed by Stolpe and Bendsoe [33] is able to avoid the switching; the variables are made continuous to relax the problem, hence rendering this method semi-discrete. Therefore, like these other techniques, the proposed algorithm implements a method for checking whether the nature of the problem has changed during the optimisation procedure. Finally, the authors are not aware of any other structure whereby the removal of one element switches the nature of the problem, making this test case special. Fig. 8 shows the iteration history of the objective function S(x) and the volume V(x) for the V(x) = 40 solution. The optimal result is achieved on the 76th iteration, since the sequence can no longer remove or include any more material without breaking the connectivity constraint or decreasing the objective function S(x). Here is where another pitfall of the original ESO type methods lies: their lack of convergence drives the sequence to non-optimal designs [29]. Edwards et al. [6] show that the ESO method reaches a minimum compliance before the sequence is converged; however, the volume continues to decrease due to the lack of a stopping criterion for the ESO method.
Fig. 8. Iteration history for the proposed BESO objective function S (x) and volume V (x).
Thus the vertical-tie becomes detached and the objective function decreases significantly [6]. The convergence history (Fig. 8) further demonstrates the rapid change in the structural mechanics of the problem, which is the cause of previous discrete methods failing to converge to a viable structure. There are two distinctive jumps in the objective: the first when one element is removed from the horizontal beam and the volume is reduced to 0.99, and the second when the volume is reduced from 0.7 to 0.69, deviating from a three-beam pin-jointed frame (see Section 4.3 and the Appendix for a further discussion of the significance of this volume fraction). Such rapid shifts in stress and the dramatic change in the nature of the structure further indicate that the mesh is too coarse, since removing one element should not change the structural mechanics of the problem completely. Of course, there exist structures where removing any element will change the connectivity of the design. This class of structure, denoted by S to follow the convention of Ghabraie [7], has two subclasses. The first, denoted S0 [7], comprises structures where the removal of any element results in instability of the system. An example of this class of structure is Fig. 5, assuming the non-design region is upheld. For these types of problems no BESO method, or any element removal method, can determine an optimum design with a lower volume fraction. The second subclass, denoted S1 [7], contains all the other structures in S, which do not belong to S0. It is for these types of structures, where the system remains stable even though connectivity is altered, that the BESO method should be able to obtain reasonable structures with lower volume fractions. The 'jumps' in the objective function in Fig. 8 are due to the optimiser removing an element from the horizontal beam and the vertical-tie, breaking connectivity. Thus, the sequence starts to move towards an inefficient local optimum before it is put back on the correct path by the connectivity filter. Here we see one advantage of non-discrete methods such as SIMP [2,26,27,3]: their convergence histories do not contain this oscillatory behaviour, as they do not completely remove elements over a single iteration. However, improvements in convergence have been demonstrated for ESO/BESO algorithms, showing that these types of algorithms are more suited to physical sensitivity functions [18]. The proposed BESO optimisation method has been shown to solve the Z-R problem for stress minimisation. No expensive variable fixing procedures or reduction of the size of the problem by adding symmetry conditions are necessary to solve the problem. However, a slight reduction in computational efficiency is experienced due to the added connectivity constraint in the optimisation problem.

4.2. The Zhou-Rozvany problem – compliance minimisation

The ESO/BESO method has also been shown to produce highly
inefficient local optimal solutions to the Z-R problem for compliance minimisation. The problem formulation is identical to the one given in the previous section; however, the objective function is now the total strain energy,

S(x) = ∑_{j=1}^{Ne} (1/2) (u_j^elem)^T K_j^elem u_j^elem                    (15)

Fig. 9. Solution to the Zhou-Rozvany problem for minimum compliance.
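As a small illustration of Eq. (15), the snippet below evaluates the total strain energy from element-level displacement vectors and stiffness matrices; the two-element data are arbitrary placeholders used only to show the summation, not values from the Z-R problem.

```python
import numpy as np

def total_strain_energy(elem_disps, elem_stiffs):
    """Eq. (15): S(x) = sum over the Ne solid elements of 1/2 u_j^T K_j u_j."""
    return sum(0.5 * u @ K @ u for u, K in zip(elem_disps, elem_stiffs))

# Toy example with two elements (arbitrary symmetric stiffness matrices):
K1 = np.array([[4.0, -1.0], [-1.0, 3.0]])
K2 = np.array([[2.0, 0.5], [0.5, 2.0]])
u1 = np.array([0.1, -0.2])
u2 = np.array([0.05, 0.0])
print(total_strain_energy([u1, u2], [K1, K2]))   # objective S(x) for this design
```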
Since this is the same rejection criterion as the original ESO method, the algorithm still suffers from an inaccurate sensitivity calculation if the sensitivity of the rejected element increases significantly with respect to its normalised density [38]. However, the connectivity constraint that has been added into the formulation checks for these inaccuracies when the connectivity of the structure is broken. This allows an extra check to analyse the efficiency of complete members and possible supports. Therefore the computational expense of the algorithm is not increased significantly, as the filter is only applied if connectivity is broken, and multiple support problems can be solved without retaining the uneconomical supports. The final topology for the Z-R problem for minimum compliance is shown in Fig. 9. The solution is identical to the intuitive design given by Zhou and Rozvany [38]. As expected, the optimiser tries to remove the elements from the vertical-tie; however, the connectivity filter readmits the elements, as the change in the objective function exceeds the predefined tolerance ζ. Once the optimiser has removed several elements from the horizontal beam, elements from the vertical-tie are no longer removed, as the sequence is directed towards the global optimum. Fig. 10 shows the iteration history of the objective function for the compliance minimisation problem. The optimal result is achieved in 79 iterations. Initially the optimiser removes the elements of the vertical-tie; however, the connectivity filter stops these from being removed due to the large change in the objective function. After the 40th iteration the connectivity of the horizontal beam is broken, resulting in a jump in the objective function. The connectivity filter rectifies this by adding back the elements with the highest sensitivity numbers, resulting in a drop in compliance. The evolution of the structure's topology shows how the sequence arrives at the optimum. Interestingly, the element that Zhou and Rozvany suggest removing first, labelled element c in [38], is the first element removed. The middle row of the horizontal beam is then progressively removed; this topology agrees well with some of the topologies seen in Stolpe and Bendsoe [33], with minimum error in compliance. As they pointed out, the sequence of optimal designs for varying volume constraints changes significantly with the available volume. This observation has significant implications for all heuristic optimisation methods: even if the algorithm is able to find the optimum for a given volume constraint, the previous topologies are not necessarily the optimal topology for that volume fraction. Therefore, when using heuristic methods one must be careful to apply an appropriate volume constraint, since the structure's topology can change after the volume constraint has been reached; this has been noted by Sigmund and Maute in their comparative review [29]. This issue is further discussed in the following section, where multiple volume fractions are solved. It must be noted that, whilst the Z-R problem was solved, care was needed in selecting appropriate rejection ratios, since if too many elements were eliminated in each design iteration the sequence could not find the global optima. Furthermore, an appropriate tolerance, ζ, for the connectivity filter must be carefully selected. However, this is like any variable used in topology optimisation techniques and should not be considered a deficiency of the algorithm. Similar to the move limits and penalty exponents required for SIMP methods and the mesh independency filter radius used to smooth the sensitivities over the design domain, the values used affect the final design.
Fig. 10. Iteration history of the Zhou-Rozvany problem for minimum compliance.
Therefore, the designer should vary these values to ensure they are not in a local optimum. Gradient-based topology optimisation algorithms should not be used as black-box systems.
4.3. Other volume fractions and global optimality

Stolpe and Bendsoe apply a local branching method to determine the global optimum, for a range of volume fractions, of a modified Z-R problem [33]. Their aim was to provide a number of benchmarks to validate other methods and heuristics. Furthermore, they conclude that the problem is very sensitive to the volume constraint [33]. As this work employs a heuristic algorithm, a number of topologies are compared to determine the merits of the BESO heuristic. A particular volume fraction that is not easily solved in [33] is V=90. Further, global optimality to the desired tolerance is not guaranteed, and the sequence takes over 5,000 iterations and 2,899.5 CPU seconds to solve. The proposed algorithm is applied to the compliance minimisation problem for this volume fraction; the final structure is given in Fig. 11. This solution is comparable to the one presented in [33] for the same volume fraction; however, the elements are removed from the opposite end of the horizontal beam (almost as though the solution is mirrored about the y-axis). This occurs because the elements are removed further from the clamped constraint, resulting in lower bending stresses in the structure. Fig. 12 shows the iteration history of the objective function for the V=90 solution. The sequence takes 25 iterations to converge, having a final objective of 224.9. This value is close to that reported in Stolpe and Bendsoe [33] (225.33); however, these results are not directly comparable due to the differences in the element formulations used.¹ Nevertheless, it is clear that a reasonable final structure has been obtained, which is consistent with the global optimum presented in Stolpe and Bendsoe [33]. Furthermore, the number of iterations required for the convergence of the sequence is significantly reduced.

¹ The discrepancies between the compliance values in this work and those in Stolpe and Bendsoe [33] are due to analytically integrated stiffness matrices being used for the finite elements in Stolpe and Bendsoe [33].
Fig. 11. Optimal design of the Zhou-Rozvany problem V (x) = 90 and S (x) = 224.9 .
Fig. 14. Iteration history of the Zhou-Rozvany problem (V (x) = 80 ).
Fig. 12. Iteration history of the Zhou-Rozvany problem (V (x) = 90 ).
Fig. 15. Optimal design of the Zhou-Rozvany problem V (x) = 70 and S (x) = 284.8.
This is one of the advantages of BESO type methods compared with other heuristics: they are able to solve topology optimisation problems efficiently [10]. The Z-R problem is now solved for a volume fraction of V=80, as it was found in [33] that the branch and cut method could not improve on the initial feasible design, having determined that the solution is at least N10-optimal, or optimal within a neighbourhood of size 10. With a symmetry constraint the local branch and cut method was found to solve the problem in 127,657 iterations and 752,694.3 CPU seconds. The problem is solved using the BESO method proposed in this article and the final topology is shown in Fig. 13. Similarly to the previous solution, the topology of Fig. 13 is comparable with the one given in [33]. However, the mirroring is again present. The objective is found to be 255, compared with 255.42 as reported in Stolpe and Bendsoe [33]. As previously mentioned, these compliance values are not directly comparable due to the numerics of the finite elements.¹ Nevertheless, it is clear that a reasonable final structure has been obtained, consistent with the global optimum of Stolpe and Bendsoe [33]. Fig. 14 shows the iteration history of the objective function for the V=80 solution. The sequence only takes 35 iterations to converge, suggesting that once the connectivity filter corrects the inaccurate sensitivity analysis the sequence is able to find at least near-optimal topologies. This further emphasises the merits of this BESO algorithm: as long as elements are removed based on an accurate sensitivity analysis the problem is solved efficiently. The problem is next solved for a volume fraction of V=70. This time the branch and cut method can easily find solutions for this problem; however, it is not clear why this is the case. The final topology found using the updated BESO method is given in Fig. 15. Again the final topology is similar to that found by [33]; however,
instead of the entire middle row of the horizontal beam being removed, one element below the vertical-tie is removed. The objective for this topology is found to be 284.8, compared to 285.28 in [33]. Thus, the design found in this article is close to the optimum and, arguably, a significant optimum for the Z-R problem with V=70 (see Appendix). Interestingly, in a recent study, [7] achieved the same topology (Fig. 15) using the ESO method with a revised sensitivity analysis. However, since the middle row of elements is removed from the horizontal beam, lower volume fractions require elements to be added back into the design. Because of ESO's monotonic approach this is not possible, and it is thus an inherent limitation of that algorithm. Conversely, this study shows that BESO is able to solve these types of problems effectively and efficiently. Fig. 16 shows the iteration history of the objective function for the V=70 solution. The sequence takes 45 iterations to converge; once the inaccurate sensitivity analysis is corrected and enough elements are removed from the correct locations, the algorithm easily converges to optimal topologies.
Fig. 13. Optimal design of the Zhou-Rozvany problem V (x) = 80 and S (x) = 255.
Fig. 16. Iteration history of the Zhou-Rozvany problem (V (x) = 70 ).
Table 1 Summary of results for the Zhou-Rozvany problem with compliance criteria.
Fig. 17. Optimal design of the Zhou-Rozvany problem V (x) = 60 and S (x) = 405.7.
In the previous section it was found that the volume constraint significantly impacted the result of the optimisation. For this particular problem, where the load intensity in the horizontal plane is double that in the vertical, the ideal answer using a pin-jointed frame is given by three beams: two in the horizontal and one in the vertical. The only solution of this kind, due to the distributed loading condition, is that of Fig. 15. Therefore, reducing the volume fraction past this point makes the solution less efficient and should only be done if a topology with a volume of V < 70 is required from a design perspective. This is the reason that, for this volume, a near optimal solution is easily achieved, further emphasised by the 'jumps' present in the V=40 solution of Figs. 8 and 10 after the volume is reduced past V=70. These two volume fractions are more closely studied in the Appendix. Finally, the problem is solved for a volume fraction of V=60. This is done to demonstrate that the approach of this work can provide reasonable solutions for a wide range of volume fractions. The final topology found using the updated BESO method is given in Fig. 17. Once more the final topology is similar to that found by [33]; however, the horizontal beam is again reflected about the vertical axis. Therefore, instead of the middle row being connected to the clamped boundary condition and the upper and lower rows being connected at the loaded end [33], the upper and lower rows of the horizontal beam are clamped and the middle row connects these rows to the vertical-tie. Furthermore, in Stolpe and Bendsoe [33] one element below the vertical-tie is removed; this is not the case here, where instead the middle row of the horizontal beam contains one less element. The final compliance is found to be 405.7, which is close to that reported in Stolpe and Bendsoe [33] (388.7). Fig. 18 shows the iteration history of the objective function for the V=60 solution. The sequence takes 62 iterations to converge (Fig. 18). For this case, because the volume fraction is V < 70, there are two breaks in connectivity: one occurs in the vertical-tie and the other results in the horizontal beam breaking when the volume is reduced past 70%. However, both result in a large jump in the objective function and hence reconnection of the structure. The topology then converges to a suitable solution, albeit not identical to that found by Stolpe and Bendsoe [33]. A summary of the results for the different volume fractions solved for the compliance minimisation problem is given in Table 1.
Volume fraction    Compliance    Iterations
40                 560.5         79
60                 405.7         62
70                 284.8         45
80                 255           35
90                 224.9         25
Finally, it is worth noting that for all the volume fractions studied in this article the computation times ranged between 1,000 and 2,000 CPU seconds, which is significantly less (approximately two orders of magnitude) than all the optimisation runs performed using the local branching method given in [33]. Further, except for the V=70 case, the computation times are shorter than those of the branch-and-cut method applied in [33]. This highlights the efficiency of BESO-type algorithms compared with other discrete algorithms.
4.4. The Zhou-Rozvany problem – fine mesh
Rozvany [21] points out that the note by Huang and Xie [9] is the most "reasonable" in its analysis of the ESO/BESO algorithm's breakdown. Huang and Xie conclude that the ESO/BESO algorithm can obtain optimal designs provided an adequate mesh is assigned, further stating that if a breakdown in the boundary support is detected before the solution is obtained, this may be an indication that the mesh is too coarse [9]. Therefore, the next analysis of this study applies the proposed optimisation scheme to the refined version of the Z-R problem presented by Huang and Xie [9] (Fig. 19). The load case and boundary conditions are identical to the original Z-R problem [38] (Fig. 4); however, 10,000 four-node elements are used in the finite element mesh to discretise the design domain, as shown in Fig. 19. The volume of the final design is specified to be 50% of the initial design, a filter radius of rmin = 0.3 is set and a rejection ratio of RR=0.002, or 20 elements per iteration, is defined. These values are chosen to mimic those used by Huang and Xie in their study [9]. The objective function is the total strain energy, therefore only the minimum compliance problem is solved. Using these optimisation parameters an optimal result is reached after 360 iterations. The optimal topology found by Huang and Xie [9] is given in Fig. 20 (left) and that found using the algorithm of this work in Fig. 20 (right). The final topologies are comparable, having a mean compliance of approximately 380. This is expected since the low rejection rate and fine mesh allow the structure to maintain coherence, i.e. the vertical-tie is never removed. Therefore, the connectivity constraint is never broken and the sensitivity analysis is always accurate, resulting in no sudden jumps in compliance, as shown in the convergence history of Fig. 21: the volume is slowly reduced, resulting in a minimal increase in compliance. It must be noted that the refined result is significantly lower than what was obtained for the coarse mesh in [33] (477.78), suggesting a finer mesh is required for this volume fraction. The difference in compliance between iterations does not change drastically, indicating an accurate sensitivity analysis. However, the sequence takes over 360 iterations to converge due to the restriction on the rejection rate to prevent the breakdown of the boundary support. Nevertheless, the BESO method proposed in this article is able to handle changes in boundary conditions, allowing a higher rejection rate to be used and hence resulting in quicker convergence without reducing the accuracy of the sequence. This is demonstrated here by solving the problem of Fig. 19 using a rejection ratio of RR=0.01, which is five times higher than in the previous analysis.
Fig. 18. Iteration history of the Zhou-Rozvany problem (V (x) = 60 ).
Fig. 19. The Zhou-Rozvany problem with a fine mesh as presented in [9].
The optimal topology found for the fine mesh with a high rejection ratio is given in Fig. 22. The final topology is comparable to that found with the low rejection ratio (Fig. 20). However, the sequence only requires 85 iterations to converge to a structure with a final compliance of 367.4 (Fig. 23). The volume is reduced at a quicker rate, such that the sequence requires fewer iterations to achieve convergence. However, this results in the connectivity of the vertical-tie being broken; therefore, conventional BESO methods are not able to solve this problem. Nevertheless, as is shown by the steady, minimal increase in compliance as the volume is reduced (Fig. 23), the BESO algorithm proposed in this work is able to find a suitable solution in only 85 iterations. This is a 76% reduction in the number of iterations required compared with conventional BESO methods. Further, the final compliance is comparable to that of the previous analysis and the conventional BESO algorithm [9]. Examining the resulting topologies of the refined case (Fig. 20), it is clear that the horizontal beam is transformed into a diagonal member, keeping the bottom section at the root attachment. This is also observed for the Z-R stress-criteria problem (Fig. 5). The optimality of this topology can be more easily understood by looking at the resulting stress distribution of the initial topology with a fine mesh, plotted in Fig. 24, which shows the presence of a stress concentration on the lower part of the horizontal beam at the constrained end. Therefore, the BESO algorithm presented in this work is shown to produce optimal results for the original Z-R problem, by correctly identifying inaccurate sensitivity calculations, as well as for the refined problem. This confirms the criticism of the original ESO/BESO algorithms, i.e. that an inaccurate sensitivity analysis can be obtained if the mesh assigned is too coarse. However, if an adequate mesh is assigned, evolutionary methods can produce optimal designs. Finally, the results of the fine mesh case also indicate that the global optimum of the Z-R problem is not symmetric for volume fractions below V=70. Therefore, the addition of a symmetry constraint does change the nature of the problem, at least for the stress criteria.
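As a quick sanity check on the iteration counts quoted above, the number of elements rejected per iteration is simply the rejection ratio times the element count, so raising RR from 0.002 to 0.01 removes roughly five times as many elements per step. The short calculation below reproduces the quoted figures; it assumes a fixed 10,000-element mesh and a per-iteration removal of RR times the full mesh size, which is only an approximation of the actual volume-reduction schedule.

n_elements = 10_000          # fine-mesh discretisation of the design domain
target_volume = 0.5          # final volume as a fraction of the initial design

for rr, observed_iters in [(0.002, 360), (0.01, 85)]:
    removed_per_iter = rr * n_elements            # 20 or 100 elements per step
    iters_to_target = (1 - target_volume) * n_elements / removed_per_iter
    print(f"RR={rr}: ~{removed_per_iter:.0f} elements/iteration, "
          f"~{iters_to_target:.0f} iterations to reach 50% volume "
          f"(observed: {observed_iters})")

# Reduction in iterations when moving from RR=0.002 to RR=0.01:
print(f"iteration reduction: {(1 - 85 / 360):.0%}")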
Fig. 21. Iteration history of the Zhou-Rozvany refined problem introduced in [9].
Fig. 22. Optimal topology of the Zhou-Rozvany problem with a fine mesh and RR=0.01.
4.5. The Ghabraie problem
Huang and Xie [9] showed that failure of the BESO method when applied to statically indeterminate problems, such as the Z-R problem, can occur when a prescribed boundary condition is broken. They go on to suggest that it is imperative for the prescribed boundary conditions of the structure to be maintained at each iteration, avoiding this failure. However, Ghabraie [7] pointed out that the failure of the BESO method is not restricted to statically indeterminate problems, but can also occur in statically determinate problems. To illustrate this, an example is given where the Z-R problem can be considered to be only half the domain of a symmetric statically determinate problem. The resulting structure, termed here the Ghabraie problem, is shown in Fig. 25. Hence, it is important to demonstrate that the BESO method proposed in this study is able to produce a reasonable solution to the Ghabraie problem. The load case consists of two vertical tensile loads of intensity wy=1 and two horizontal compressive loads of intensity wx=2, as seen in Fig. 25. The problem is modelled with the same coarse mesh as the Z-R problem, having 200 four-node plane stress elements. Clearly, it would not be possible for the method suggested in Huang and Xie [9] to solve this problem (Fig. 25), because that method cannot detect the failure of the BESO method when applied to this statically determinate problem. However, since the method proposed in this work detects changes in connectivity and then reassesses the sensitivity calculation, it is able to detect this failure.
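The ability to detect a change in connectivity is central to the argument above. The paper does not give an implementation, but on a regular grid of 0/1 element densities such a check can be sketched as a flood fill over the 4-connected solid elements: if the number of connected components grows after a removal step, the structure (or a member) has been severed. The function names and the 4-connectivity assumption below are illustrative only.

import numpy as np
from collections import deque

def count_components(density):
    """Count 4-connected groups of solid elements (density == 1) on a 2D grid."""
    solid = density == 1
    seen = np.zeros_like(solid, dtype=bool)
    n_rows, n_cols = solid.shape
    components = 0
    for i in range(n_rows):
        for j in range(n_cols):
            if solid[i, j] and not seen[i, j]:
                components += 1
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < n_rows and 0 <= cc < n_cols \
                                and solid[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            queue.append((rr, cc))
    return components

def connectivity_broken(before, after):
    """Flag a removal step that splits the structure into more pieces."""
    return count_components(after) > count_components(before)

# Toy example: a 2 x 5 strip loses its middle column and splits in two.
before = np.ones((2, 5), dtype=int)
after = before.copy()
after[:, 2] = 0
print(connectivity_broken(before, after))   # True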
Fig. 20. Optimal topology of the Zhou-Rozvany problem with a fine mesh (left) presented in [9] (right) from this study.
Fig. 26. Optimal design for the Ghabraie problem with V (x) = 40.
Fig. 23. Iteration history of the Zhou-Rozvany refined problem with RR=0.01.
Fig. 24. Stress distribution of the Zhou-Rozvany problem with a fine mesh density.
The BESO method with the novel connectivity criterion is applied to the Ghabraie problem. Ghabraie [7] does not report a volume fraction for this problem; however, the limiting volume fraction is the same as for the Z-R problem, i.e. V=40. Therefore, this value is used here. A filter radius of rmin = 2, a rejection ratio of RRmax = 0.005 and an inclusion ratio of IRmax = 0.05 are set, and the optimisation problem is solved. The final structure found is given in Fig. 26. The final design, shown in Fig. 26, would be the full domain of the symmetric statically determinate structure that would result from the intuitive design suggested by Zhou and Rozvany [38]. This is what one would expect, since the Z-R problem is a symmetric half-domain of the Ghabraie problem. Therefore, as predicted, the optimiser first removes the two centre elements in the vertical-tie connecting the two horizontal beams (Fig. 25), resulting in a change in connectivity. Consequently, the sensitivity analysis is checked and determined to be inaccurate. Therefore, the elements are put back in the design domain, resulting in the removal of the next two elements with the lowest sensitivities.
Fig. 27. Iteration history of the Ghabraie problem with V (x) = 40.
Eventually the optimiser removes enough elements in the horizontal beams that elements are no longer removed from the vertical connector, and the sequence converges to a reasonable solution. Fig. 27 gives the iteration history of the objective function for V (x) = 40. The solution takes 147 iterations to converge, having a final objective of 1152. It is clear that the optimiser takes an equivalent path to that for the Z-R problem with the same volume fraction (Figs. 10 and 27). Finally, to demonstrate the equivalence of the Z-R and Ghabraie problems, the Ghabraie problem is solved for a volume fraction of V (x) = 70. The final structure found is given in Fig. 28. Once again, the final design shown in Fig. 28 is the equivalent full domain of the symmetric statically determinate structure that would result from the solution of the Z-R problem, with the same volume fraction, using the method of this work (Fig. 15). Fig. 29 gives the iteration history of the objective function for V (x) = 70. The solution takes 88 iterations to converge, having a final objective of 573. Furthermore, it is again noticed that the optimiser takes an equivalent path to that for the Z-R problem with the same volume fraction (Figs. 16 and 29). Therefore, it has been demonstrated that the approach proposed in this work can solve statically determinate problems, which previous suggestions fail to do, as noted in Ghabraie [7]. Hence, for this reason, the approach is an improvement on the freezing method proposed by Huang and Xie [9].
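The behaviour described above, tentatively remove the lowest-sensitivity elements, check whether the objective jumps, and readmit them if it does before moving on to the next candidates, can be summarised in a short sketch. This is a paraphrase of the logic reported in the text, not the authors' code: `solve_objective`, the jump tolerance and the candidate-selection details are placeholders, and for brevity the check is applied to every tentative removal rather than only after a connectivity break.

import numpy as np

def remove_with_check(design, sensitivity, solve_objective, n_remove=2, jump_tol=0.5):
    """Tentatively remove the n_remove lowest-sensitivity solid elements.
    If the objective rises by more than jump_tol (relative), the sensitivity
    estimate is deemed inaccurate and the elements are readmitted; the next
    candidates are tried instead. Returns the updated design."""
    base = solve_objective(design)
    solid = np.flatnonzero(design == 1)
    ranked = solid[np.argsort(sensitivity[solid])]       # least useful first
    for start in range(0, ranked.size - n_remove + 1, n_remove):
        candidates = ranked[start:start + n_remove]
        trial = design.copy()
        trial[candidates] = 0
        if (solve_objective(trial) - base) / base <= jump_tol:
            return trial                                  # accept the removal
        # Large jump: readmit (keep `design` unchanged) and try the next pair.
    return design

# Toy usage with a dummy objective that penalises removing element 0.
def dummy_objective(x):
    return 1.0 + (9.0 if x[0] == 0 else 0.0)

x = np.ones(10, dtype=int)
sens = np.arange(10, dtype=float)        # element 0 looks least useful...
x = remove_with_check(x, sens, dummy_objective)
print(np.flatnonzero(x == 0))            # ...but elements 2 and 3 are removed instead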
Fig. 25. The Ghabraie problem.
5. Conclusion
An updated formulation for a BESO method has been presented with an added connectivity constraint that looks at the usefulness of complete members, rather than just elements. Previous studies pointed out that discrete methods suffer from an inaccurate sensitivity analysis, identified by large changes in the objective function, which occur when the sensitivity number varies with its normalised density. The algorithm presented in this paper performs an extra check when the connectivity of the structure is broken by the removal of an element. If removing the element causes large changes in the objective, the element is readmitted, as the sensitivity analysis is found to be inaccurate; otherwise the entire member is removed to enforce a coherent structure. The proposed algorithm is applied to the Z-R problem for a stress criterion and compliance minimisation. The algorithm was shown to avoid arriving at highly non-optimal local optima without significantly increasing the computational cost of the method. It was shown that for stress minimisation the final result deviated slightly from the optimum determined by Zhou and Rozvany, because no symmetry constraint was enforced and the stress did not exceed the material's allowable stress limit. A grid refinement study confirmed that this result is optimal, or near optimal. The problem was solved for several volume fractions and the results were compared with the list of topologies given in [33]. It was found that the topologies are comparable and, therefore, it has been demonstrated that the approach can provide reasonable results for a wide range of volume fractions. Furthermore, it was shown that for the V=70 case the resulting topology is identical to what would be expected for a pin-jointed frame design. This fact gives confidence in the validity of the algorithm. The algorithm was also applied to the refined Z-R problem [9], giving results consistent with those found using a conventional BESO algorithm. Therefore, the BESO method with the novel connectivity criterion does not alter the final topologies unless the coherence of the structure is broken and an inaccurate sensitivity analysis is detected. Finally, the algorithm was applied to a statically determinate problem suggested by Ghabraie [7]. This problem was introduced to show that current methods, namely the freezing approach, could still result in failure due to an inaccurate sensitivity analysis going unnoticed. It was shown that the method of this work was able to solve this problem, resulting in reasonable final designs. The nature of the Z-R problem can be completely altered by the removal of a single element, so that the problem changes from being driven by normal stress to being dominated by bending stresses at the root. The new additional constraint was able to detect this dramatic change in the nature of the load path and readmit the element in order to preserve the original problem configuration. Therefore, when solving minimum compliance problems with a discrete method, care must be taken to ensure large changes in the objective function do not occur.
Fig. 28. Optimal design for the Ghabraie problem with V (x) = 70.
Fig. 29. Iteration history of the Ghabraie problem with V (x) = 70.
Fig. 30. Simple beam-tie model for analytical verification (left) V=70 (right) V=40.
Fig. 31. Equilibrium of horizontal member (left) in the vertical plane (right) in the horizontal plane.
Table 2
Summary of strain energies for ideal structures (compliance).

Vf     No Cut: A    No Cut: B       Cut: A    Cut: B
1      182/Et       0.0002/Et       180/Et    2000/Et
0.7    272/Et       0.000192/Et     270/Et    2077/Et
0.4    542/Et       0.00074/Et      540/Et    54000/Et
Acknowledgment The authors would like to thank Prof. Mike Xie for permission to use Fig. 20 taken from his paper in collaboration with Dr. Huang [9].
Appendix. On the significance of the V=70 and V=40 test cases
If we assume the vertical member is connected by a pin joint to the horizontal member, and we ignore the non-design material, then we can simplify the analysis to the structure shown in Fig. 30. Hence, by analysing the vertical member we can set up the equilibrium condition as shown in Fig. 31, where, to satisfy equilibrium,
\sum F_x = F_1 + F_2 = 1    (16)
Compatibility of vertical deflection states,
\frac{F_1 (0.4L)}{AE} = \frac{F_2 (3L)^3}{3EI}    (17)

Rearranging and setting the cross-sectional area to A = \frac{1}{10} L t gives,

F_1 = \frac{\left(\frac{1}{10} L t\right) E \, F_2 \, 27 L^3}{3EI (0.4L)}    (18)

where t is the thickness of the elements in the horizontal and vertical members, which simplifies to,

F_1 = \frac{9}{4} \frac{L^3 t}{I} F_2    (19)

Therefore, by substituting Eq. (19) back into the equilibrium equation, Eq. (16), we get,

\frac{9}{4} \frac{L^3 t}{I} F_2 + F_2 = 1 \;\Longrightarrow\; F_2 = \frac{1}{\frac{9}{4} \frac{L^3 t}{I} + 1}    (20)
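A quick symbolic check of the rearrangement from Eq. (17) to Eq. (19) can be made with sympy; the snippet below is only a verification sketch and assumes the tie cross-sectional area A = Lt/10 stated above.

import sympy as sp

F1, F2, L, t, E, I = sp.symbols("F_1 F_2 L t E I", positive=True)
A = L * t / 10                                   # cross-sectional area of the tie

# Compatibility, Eq. (17): tie extension equals beam tip deflection.
compat = sp.Eq(F1 * (0.4 * L) / (A * E), F2 * (3 * L) ** 3 / (3 * E * I))
sol = sp.solve(compat, F1)[0]
print(sp.simplify(sol))          # 2.25*F_2*L**3*t/I, i.e. F_1 = (9/4)(L^3 t / I) F_2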
The moment of inertia, I, of the horizontal beam is given by,
I = \frac{b d^3}{12}    (21)
hence, for Va=1, Vb=0.7 and Vc=0.4,
I_{V_a} = \frac{(0.3L)^3 t}{12} = \frac{0.027 L^3 t}{12}
I_{V_b} = \frac{(0.3L)^3 t}{12} - \frac{(0.1L)^3 t}{12} = \frac{0.026 L^3 t}{12}
I_{V_c} = \frac{(0.1L)^3 t}{12} = \frac{0.001 L^3 t}{12}    (22)
Therefore, substituting the moments of inertia, Eq. (22), into Eq. (20) gives the vertical force that the horizontal beam can resist. Hence, for Va=1, Vb=0.7 and Vc=0.4 respectively,
F_{2a} = \frac{1}{\frac{9}{4} \frac{L^3 t}{0.027 L^3 t / 12} + 1} = 0.000999

F_{2b} = \frac{1}{\frac{9}{4} \frac{L^3 t}{0.026 L^3 t / 12} + 1} = 0.000962

F_{2c} = \frac{1}{\frac{9}{4} \frac{L^3 t}{0.001 L^3 t / 12} + 1} = 0.000037    (23)
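The force split of Eq. (23) is easy to verify numerically; the short script below recomputes the moments of inertia of Eq. (22) and the resulting tip forces from Eq. (20). It is only a check of the arithmetic above (L and t cancel in Eq. (20), so they are set to one).

# Reproduce Eqs. (20), (22) and (23): vertical force carried by the horizontal beam.
L = t = 1.0                                    # L and t cancel in Eq. (20)

inertias = {
    "Va=1.0": 0.3**3 * L**3 * t / 12,          # (0.3L)^3 t / 12
    "Vb=0.7": (0.3**3 - 0.1**3) * L**3 * t / 12,
    "Vc=0.4": 0.1**3 * L**3 * t / 12,
}

for case, inertia in inertias.items():
    f2 = 1.0 / (9.0 / 4.0 * L**3 * t / inertia + 1.0)   # Eq. (20)
    print(f"{case}: I = {inertia:.6f} L^3 t, F2 = {f2:.6f}")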
Therefore, the less stiff the structure is in bending the smaller the vertical load it can carry. Eq. (23) shows that for the Va=1 and Vb=0.7 structures the resistance to the vertical load is similar, due to the small difference in the moments of inertia of the horizontal beams. However, for the Vc=0.4 case the vertical load resistance is reduced by over an order of magnitude due to the drop in inertia of the horizontal beam. Looking at the strain energy of the structure, the strain energy due to axial loads is given by,
U_m = \int_0^L \frac{N^2}{2AE} \, dx    (24)

Thus, for a constant cross-sectional area this becomes,

U_m = \frac{N^2 L}{2AE}    (25)
Hence, for the Va=1 case,
U_m = \frac{1}{2} \frac{F_1^2 (0.4L)}{E (0.1 L t)} + \frac{1}{2} \frac{F_3^2 (3L)}{E (0.3 L t)} = \frac{2}{Et} + \frac{180}{Et} = \frac{182}{Et}    (26)
Similarly, for the V_b=0.7 case,

U_m = \frac{1}{2} \frac{F_1^2 (0.4L)}{E (0.1 L t)} + \frac{1}{2} \frac{F_3^2 (3L)}{E (0.2 L t)} = \frac{2}{Et} + \frac{270}{Et} = \frac{272}{Et}    (27)
Finally, for the Vc=0.4 case,
U_m = \frac{1}{2} \frac{F_1^2 (0.4L)}{E (0.1 L t)} + \frac{1}{2} \frac{F_3^2 (3L)}{E (0.1 L t)} = \frac{2}{Et} + \frac{540}{Et} = \frac{542}{Et}    (28)
Therefore, as the volume fraction is reduced, the strain energy due to axial loads increases significantly. If we now consider bending, the strain energy due to the bending moment is found from,
U_b = \int_0^L \frac{M^2}{2EI} \, dx    (29)

Thus, for a cantilever of length L carrying a tip load F, so that M(x) = Fx, and for a constant cross-section this becomes,

U_b = \frac{F^2 L^3}{6EI}    (30)
Hence, for the Va=1 case,
U_b = \frac{1}{2} \frac{F_2^2 (3L)^3}{3EI} = \frac{1}{2} \frac{(0.000999)^2 \, 27 L^3}{3E \left(\frac{0.027 L^3 t}{12}\right)} = \frac{0.002}{Et}    (31)
Next, for the Vb=0.7 case,
U_b = \frac{1}{2} \frac{F_2^2 (3L)^3}{3EI} = \frac{1}{2} \frac{(0.000962)^2 \, 27 L^3}{3E \left(\frac{0.026 L^3 t}{12}\right)} = \frac{0.00192}{Et}    (32)
Finally, for the Vc=0.4 case,
U_b = \frac{1}{2} \frac{F_2^2 (3L)^3}{3EI} = \frac{1}{2} \frac{(0.000037)^2 \, 27 L^3}{3E \left(\frac{0.001 L^3 t}{12}\right)} = \frac{0.00074}{Et}    (33)
As the vertical force in the horizontal beam, F_2, goes down, so does the moment of inertia, I; this corresponds to a drop in the strain energy due to bending as the volume fraction is reduced. However, the bending strain energies for the V_a=1 and V_b=0.7 structures are very similar, because they can resist similar vertical loads and have similar moments of inertia. The bending strain energy for the V_c=0.4 structure is an order of magnitude less, suggesting it is inferior in bending when compared with the other structures. Therefore, the total strain energy for V_a=1, V_b=0.7 and V_c=0.4 can be found from,

U = U_m + U_b    (34)

hence, the total strain energies are,

U_{V_a} = \frac{182}{Et} + \frac{0.002}{Et}
U_{V_b} = \frac{272}{Et} + \frac{0.00192}{Et}
U_{V_c} = \frac{542}{Et} + \frac{0.00074}{Et}    (35)
If we now assume the vertical member has been cut (Fig. 31), the bending strain energies for the V_a=1, V_b=0.7 and V_c=0.4 cases are as follows,
U_{b_a} = \frac{1}{2} \frac{(1)^2 \, 27 L^3}{3E \left(\frac{0.027 L^3 t}{12}\right)} = \frac{2000}{Et}

U_{b_b} = \frac{1}{2} \frac{(1)^2 \, 27 L^3}{3E \left(\frac{0.026 L^3 t}{12}\right)} = \frac{2077}{Et}

U_{b_c} = \frac{1}{2} \frac{(1)^2 \, 27 L^3}{3E \left(\frac{0.001 L^3 t}{12}\right)} = \frac{54000}{Et}    (36)
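Again, the numbers in Eq. (36) can be reproduced directly; the script below evaluates U_b = (1/2)(1)^2(3L)^3/(3EI) for the three moments of inertia, with E, L and t set to one so that the printed results are the coefficients of 1/(Et).

# Reproduce Eq. (36): bending strain energy of the beam once the vertical tie is cut.
E = L = t = 1.0

inertias = {
    "Va=1.0": 0.027 * L**3 * t / 12,
    "Vb=0.7": 0.026 * L**3 * t / 12,
    "Vc=0.4": 0.001 * L**3 * t / 12,
}

for case, inertia in inertias.items():
    ub = 0.5 * 1.0**2 * (3 * L) ** 3 / (3 * E * inertia)
    print(f"{case}: Ub = {ub:.0f} / (Et)")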
Therefore, when the vertical member is cut, the strain energy due to bending is significantly increased, especially for the V_c=0.4 case, which becomes very inefficient in bending. The results of this appendix are summarised in Table 2. The results show that cutting the vertical-tie radically changes the problem, since the bending strain energy increases by orders of magnitude, becoming significant for the different volume fractions. Secondly, a smaller change in the problem is seen when the volume fraction is reduced past V_b=0.7. There is little difference between the V_a=1 and V_b=0.7 structures in bending; however, once the volume is reduced past this point the horizontal beam can no longer resist the vertical load, due to the reduction in inertia, and hence the vertical-tie starts to take most of the vertical force. This is the reason for the inclination of the horizontal beam in the fine mesh structure: it alleviates bending. This result is very sensitive to the material properties of the problem. For example, if a very stiff material is used then this advantage in bending is not significant, provided the vertical-tie remains, since the vertical-tie will not stretch, resulting in no bending. However, if a flexible material is used, bending becomes more significant to the topological design. For this test case a Young's modulus of E=1 is employed, therefore bending exists, as seen in Fig. 24. Further, this small change in the problem is why there is a small jump in compliance when the volume goes below V_b=0.7 (Fig. 10); likewise, in the maximum stress for the same volume change (Fig. 8).
References
[1] C. Alonso, O.M. Querin, R. Ansola, A sequential element rejection and admission (SERA) method for compliant mechanisms design, Struct. Multidiscip. Optim. 47 (2013) 795–807.
[2] M.P. Bendsoe, Optimal shape design as a material distribution problem, Struct. Optim. 1 (1989) 193–202.
[3] M.P. Bendsoe, O. Sigmund, Topology Optimization: Theory, Method and Application, 1st ed., Springer, Berlin, Germany, 2003.
[4] D.N. Chu, Y.M. Xie, A. Hira, G.P. Steven, Evolutionary structural optimization for problems with stiffness constraints, Finite Elem. Anal. Des. 21 (1996) 239–251.
[5] C.S. Edwards, H.A. Kim, C.J. Budd, Investigations on the validity of topology optimization methods, in: Proceedings of 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics Conference (AIAA 2006–1984), 2006, pp. 1–15.
[6] C.S. Edwards, H.A. Kim, C.J. Budd, An evaluative study on ESO and SIMP for optimizing a cantilever beam tie, Struct. Multidiscip. Optim. 34 (2007) 403–414.
[7] K. Ghabraie, The ESO method revisited, Struct. Multidiscip. Optim. 51 (2015) 1211–1222.
[8] X. Huang, Y.M. Xie, Convergent and mesh-independent solutions for the bidirectional evolutionary structural optimization method, Finite Elem. Anal. Des. 43 (2007) 1039–1049.
[9] X. Huang, Y.M. Xie, A new look at ESO and BESO optimization methods, Struct. Multidiscip. Optim. 35 (2008) 89–92.
[10] X. Huang, Y.M. Xie, Evolutionary Topology Optimization of Continuum Structures, 1st ed., Wiley, Chichester, UK, 2010.
[11] X. Huang, Y.M. Xie, A further review of ESO type methods for topology optimization, Struct. Multidiscip. Optim. 41 (2010) 671–683.
[12] E. Lee, K.A. James, J.R.R.A. Martins, Stress-constrained topology optimization with design-dependent loading, Struct. Multidiscip. Optim. 46 (2012) 647–661.
[13] Q. Li, G.P. Steven, O.M. Querin, Y.M. Xie, Evolutionary shape optimization for stress minimization, Mech. Res. Commun. 26 (1999) 657–664.
[14] Q. Li, G.P. Steven, Y.M. Xie, On equivalence between stress criterion and stiffness criterion in evolutionary structural optimization, Struct. Optim. 18 (1999) 67–73.
[15] X. Liu, W.J. Yi, Q.S. Li, P.S. Shen, Genetic evolutionary structural optimization, J. Constr. Steel Res. 64 (2008) 305–311.
[16] J.J. McKeown, A note on the equivalence between maximum stiffness and maximum strength truss, Eng. Optim. 29 (1997) 443–456.
[17] D.J. Munk, G.A. Vio, G.P. Steven, Topology and shape optimization methods using evolutionary algorithms: a review, Struct. Multidiscip. Optim. 52 (2015) 613–631.
[18] D.J. Munk, G.A. Vio, G.P. Steven, A simple alternative formulation for structural optimisation with dynamic and buckling objectives, Struct. Multidiscip. Optim. 55 (2017) 969–986.
[19] J.A. Norato, M.P. Bendsoe, R.B. Harber, D.A. Tortorelli, A topological derivative method for topology optimization, Struct. Multidiscip. Optim. 33 (2007) 375–386.
[20] O.M. Querin, G.P. Steven, Y.M. Xie, Evolutionary structural optimization (ESO) using a bi-directional algorithm, Eng. Comput. 15 (1998) 1034–1048.
[21] G.I.N. Rozvany, A critical review of established methods of structural topology optimization, Struct. Multidiscip. Optim. 37 (2009) 217–237.
[22] G.I.N. Rozvany, O.M. Querin, Present limitations and possible improvements of SERA (sequential element rejections and admissions) methods in topology optimization, in: Proceedings of WCSMO 4, Dalian, China, 2001.
[23] G.I.N. Rozvany, O.M. Querin, Combining ESO with rigorous optimality criteria, Int. J. Veh. Des. 28 (2002) 294–299.
[24] G.I.N. Rozvany, O.M. Querin, Theoretical foundations of sequential element rejections and admissions (SERA) methods and their computational implementations in topology optimization, in: Proceedings of the 9th AIAA/ISSMO Symposium on Multidiscip. Anal. and Optim., 2002.
[25] G.I.N. Rozvany, O.M. Querin, Sequential element rejections and admissions (SERA) method: applications to multiconstraint problems, in: Proceedings of the 10th AIAA/ISSMO Symposium on Multidiscip. Anal. and Optim., 2004.
[26] G.I.N. Rozvany, M. Zhou, Applications of the COC method in layout optimization, in: Proceedings of the International Conference on Engineering Optimization in Design Processes, Karlsruhe, 1990, pp. 59–70.
[27] G.I.N. Rozvany, M. Zhou, T. Birker, Generalized shape optimization without homogenization, Struct. Optim. 4 (1992) 250–254.
[28] O. Sigmund, On the design of compliant mechanisms using topology optimization, Mech. Struct. Mach. 25 (1997) 495–526.
[29] O. Sigmund, K. Maute, Topology optimization approaches, Struct. Multidiscip. Optim. 48 (2013) 1031–1055.
[30] O. Sigmund, J. Petersson, Numerical instabilities in topology optimization: a survey on procedures dealing with checkerboards, mesh-dependencies and local minima, Struct. Optim. 16 (1998) 68–75.
[31] G.P. Steven, Q. Li, Y.M. Xie, Multicriteria optimization that minimizes maximum stress and maximizes stiffness, Comput. Struct. 80 (2002) 2433–2448.
[32] D. Stojanov, B.G. Falzon, X. Wu, W. Yan, Implementing a structural continuity constraint and a halting method for the topology optimization of energy absorbers, Struct. Multidiscip. Optim. 54 (2016) 429–448.
[33] M. Stolpe, M.P. Bendsoe, Global optima for the Zhou-Rozvany problem, Struct. Multidiscip. Optim. 43 (2011) 151–164.
[34] P. Tanskanen, The evolutionary structural optimization method: theoretical aspects, Comput. Methods Appl. Mech. Eng. 191 (2002) 5485–5498.
[35] Y.M. Xie, G.P. Steven, Evolutionary Structural Optimization, 1st ed., Springer, London, UK, 1997.
[36] Y.M. Xie, G.P. Steven, A simple evolutionary procedure for structural optimization, Comput. Struct. 49 (1993) 885–896.
[37] X.Y. Yang, Y.M. Xie, G.P. Steven, O.M. Querin, Bidirectional evolutionary method for stiffness optimization, AIAA J. 37 (1999) 1483–1488.
[38] M. Zhou, G.I.N. Rozvany, On the validity of ESO type methods in topology optimization, Struct. Multidiscip. Optim. 21 (2001) 80–83.
[39] M. Zhou, G.I.N. Rozvany, COC: an optimality criteria method for large systems. Part I: theory, Struct. Optim. 5 (1992) 12–15.
[40] M. Zhou, G.I.N. Rozvany, COC: an optimality criteria method for large systems. Part II: algorithm, Struct. Optim. 6 (1993) 250–262.
[41] J.H. Zhu, W.H. Zhang, K.P. Qiu, Bi-directional evolutionary topology optimization using element replaceable methods, Comput. Methods 40 (2007) 97–109.