Information Sciences 153 (2003) 1–36 www.elsevier.com/locate/ins

Experimental complexity analysis of continuous constraint satisfaction problems

Yi Shang, Markus P.J. Fromherz

Palo Alto Research Center (PARC), 3333 Coyote Hill Road, Palo Alto, CA 94304, USA

Received 10 March 2002; received in revised form 27 September 2002; accepted 8 October 2002

Abstract

Continuous constraint satisfaction problems are at the core of many real-world applications, including planning, scheduling, control, and diagnosis of physical systems such as cars, planes, and factories. Yet in order to be effective in such resource-constrained environments, constraint-based techniques must take into account the complexity of continuous constrained problems. In this paper, we study complexity phase transition phenomena of continuous constraint satisfaction problems (CSPs). First, we analyze three continuous constraint satisfaction formulations based on (discrete) 3-SAT problems, which have a strong relation between structure and search cost. Then, we propose a generic benchmarking model for comparing continuous CSPs and algorithms, and present two example problems based on sine functions. In our experiments, these problems are solved using both local and global search methods. Besides comparing the complexities of different continuous CSPs and search algorithms, we also connect these back to results from similar studies on SAT problems. In solving continuous 3-SAT and sine-based CSPs, we find that the median search cost is characterized by simple parameters such as the constraint-to-variable ratio and constraint tightness, and that discrete search algorithms such as GSAT have continuous counterparts with similar behavior. Regarding local versus global search techniques for constraint solving, our results show that local search methods are more efficient for weakly constrained problems, whereas global search methods work better on highly constrained problems.

© 2003 Elsevier Science Inc. All rights reserved.

This work has been supported in part by DARPA grant F33615-01-C-1904.
Corresponding author: Y. Shang. E-mail addresses: [email protected] (Y. Shang), [email protected] (M.P.J. Fromherz).
doi:10.1016/S0020-0255(03)00065-3


1. Introduction

Many science and engineering applications require finding solutions that satisfy a set of non-linear constraints over real numbers, or optimizing a non-linear function subject to non-linear constraints [2]. For example, in modern robotic systems, which increasingly have many more degrees of freedom than required for typical tasks (e.g., modular robots), casting the control problem as a constraint satisfaction or constrained optimization problem is a promising approach for robustly handling a variety of non-standard constraints found in such systems [11,21,29,38]. Non-linear programming problems can be very hard. For example, deciding if a set of polynomial constraints has a solution is NP-hard. On the other hand, NP-hard problems may have polynomial complexity in the average case, and sub-classes of these problems may be solvable efficiently. Discovering problem features that identify the hardness of problem instances would lead to the development of effective constraint-based techniques that adapt to problem structure. Toward this end, this paper extends the progress made in understanding complexity results for discrete constraint problems to continuous problems.

In the past decade, significant progress has been made in understanding the problem complexity of discrete constraint satisfaction problems (CSPs) such as the satisfiability (SAT) problem [15]. The SAT problem belongs to an important class of discrete CSPs. Many problems in artificial intelligence, logic, computer-aided design, database query, and planning can be formulated as SAT problems [13]. Generally, a SAT problem is defined as follows. Given a set of $m$ clauses $\{C_1, C_2, \ldots, C_m\}$ on $n$ Boolean variables $x = (x_1, x_2, \ldots, x_n)$, $x_i \in \{0, 1\}$, and a Boolean formula in conjunctive normal form

$$C_1 \wedge C_2 \wedge \cdots \wedge C_m, \qquad (1)$$

the task is to find an assignment of values to all the variables so that (1) evaluates to true, or to derive its infeasibility if (1) is infeasible. In the well-studied 3-SAT problem, a clause $C_i$, e.g., $C_i = x_j \vee x_k \vee \neg x_l$ where $j, k, l \in \{1, \ldots, n\}$, is a disjunction of exactly three variables that may appear either positively or negatively. SAT problems can be formulated as discrete or continuous, constrained or unconstrained, optimization problems [33]. In some previous work, using a constrained optimization approach, people have formulated SAT problems as integer programming problems [20] and applied various methods such as cutting plane [19] and interior point methods [20]. Previous results show that integer programming methods may be faster than the resolution algorithm for certain classes of problems, although they often do not work well on hard SAT instances [13].


SAT problems can also be solved as continuous problems. In formulating a continuous SAT problem, discrete variables in the original problem are transformed into continuous variables in such a way that solutions to the continuous problem are solutions to the original problem. The continuous formulation is potentially beneficial because the continuous function can provide a continuous path from a given infeasible point to a feasible one without using discontinuous jumps. Another motivation for using a continuous formulation is that it allows the application to employ continuous reasoning similar to fuzzy logic. A disadvantage is that continuous formulations often require computationally expensive algorithms. Experimental results for the continuous approach have been quite negative, showing that it is slower to solve SAT problems as continuous problems than as discrete ones [13].

Many CSPs, including the SAT problem, are NP-hard and intractable in the worst case. However, sub-classes of these problems are not necessarily hard. One particularly well-studied phenomenon for SAT formulations is the complexity phase transition [16]. For a variety of complete search algorithms, the average time to find a solution or determine infeasibility is short when the ratio of clauses to variables is small (i.e., the problem is weakly constrained) as well as when this ratio is large (where the problem is highly or over-constrained), while solving time is largest in the intermediate case. Fig. 1 shows the easy-hard-easy phase transition of random 3-SAT formulas solved by the widely used Davis–Putnam procedure [31]. For satisfiable problems, incomplete algorithms such as GSAT exhibit similar complexity curves [30,39]. The complexity of 3-SAT appears to be a function of a single problem parameter, namely the clause-to-variable ratio.

Fig. 1. Median number of recursive Davis–Putnam (DP) calls for random 3-SAT formulas, as a function of the ratio of clauses to variables. This figure is taken from the original paper [31].


Based on this work and related experience, in this paper we study complexity phase transition phenomena of continuous CSPs with the goal of understanding the relevant characteristics for predicting problem complexity.

The rest of this paper is structured as follows. In Section 2, we introduce the continuous CSPs used in our study. First, we present three continuous constraint satisfaction formulations based on (discrete) 3-SAT problems, which have a strong relation between structure and search cost. The CSPs coming from SAT are a special case of non-linear CSPs. To study characteristics of general non-linear CSPs, we propose a generic benchmarking model for continuous CSPs and discuss two example problems. They are based on trigonometric functions and generate constraint problems similar to those appearing in robot control applications, which are of special interest to us. In Section 3, we present the two search algorithms used in our experiments: a sequential quadratic programming (SQP) method [28] used as local search, and a Nelder–Mead (NM) direct search method [27] for global search. In Section 4, the CSPs are solved and analyzed using both search methods. As in discrete 3-SAT studies, measures of problem-solving difficulty are examined as a function of the ratio of the number of constraints to the number of variables. While continuous SAT-like problems exhibit complexity behavior similar to that of discrete problems, more general continuous problem formulations seem to require additional parameters, such as ruggedness and the sizes of attraction basins, to characterize search behavior. Section 5 summarizes the paper with conclusions and future work.

2. Continuous constraint satisfaction problems

Real-world constraint problems consist of a variety of complex constraints with various parameters that might influence the time/structural complexity of the constraint problems. One example is modular robot control. Modular robots consist of many interchangeable robotic modules connected into a mechanical and functional assembly. Interest in modular robots has increased recently due to their potential for robustness, reduced costs, and wide range of applicability, particularly in highly constrained environments such as search and rescue in damaged buildings [21,29,38]. Typically, there are only a small number of module types, each module with limited motion capability. The flexibility of modular robots is achieved through the large number of modules, expected to range from dozens to thousands, and their many possible configurations. Because of the large number of degrees of freedom, such modular robots are also called hyper-redundant. Hyper-redundant manipulators are a sub-class of such robots that consist of a chain of modules with rotational joints [38] (Fig. 2).


Fig. 2. Modular robot prototype PolyBot. The pictured snake configuration consists of 12 modules, each with a rotational joint. Each module contains processor, motor, sensors, and networking capability. Alternative configurations are possible. Control of the modules is constrained by joint angle limits, motor torque limits, obstacles in the environment, etc.

Realizing the potential of such modular reconfigurable systems requires robust, versatile, and scalable control algorithms satisfying many changing constraints. This suggests a model-based control approach using constrained optimization techniques: the goal of the controller is to find actuation values (e.g., joint angles for the robot) such that the objective (e.g., reaching a position) is achieved while the constraints (e.g., joint angle and motor torque limits, collision avoidance, etc.) are satisfied. In order to be able to deploy such embedded solvers, we need to come to an understanding of the relevant characteristics for analyzing and predicting the complexity of such continuous non-linear constraint problems. We have therefore started to model and study a series of problems of increasing complexity. This section starts with SAT-like continuous CSPs as our simplest problems and later introduces more generic and complex problem sets with constraints based on trigonometric functions. These problems are a step closer to the real problems appearing in robot control applications. In real applications, it is also common to have constrained optimization problems (COPs), such as reaching a position with minimal movement. Our test problem sets can be easily extended to COPs by introducing an objective function. The study of the solving complexity of COPs is left for future research.

2.1. SAT-like continuous CSPs

We present three alternative formulations for continuous SAT-like CSPs. Based on the concept of clauses in 3-SAT, constraints in the three formulations


are represented by exponential, sigmoid, or quadratic functions. The common characteristics of these continuous formulations are as follows.
• Each positive literal $x$ in a 3-SAT clause is converted to a continuous, positive function $f(x')$, and each negative literal $\neg x$ is converted to $f(1 - x')$, where $f: \mathbb{R} \to \mathbb{R}$ depends on the formulation.
• Disjunction (e.g., $x_1 \vee x_2$) is represented as addition in the constraint (e.g., $f(x_1') + f(x_2')$).
• Each clause (e.g., $x_1 \vee x_2 \vee \neg x_3$) is transformed into an inequality constraint (e.g., $f(x_1') + f(x_2') + f(1 - x_3') \ge c$, where the constant $c$ depends on the formulation).

A feasible solution to the continuous CSP can be mapped to a feasible solution to the original discrete SAT problem through the following thresholding scheme: if a value $x_i'$ of the continuous solution is more than 0.5, then set $x_i$ to 1 in the solution to the SAT problem; if $x_i'$ is less than 0.5, then set $x_i$ to 0; if $x_i'$ is 0.5, then $x_i$ is unknown. The threshold value 0.5 is derived from our transformation framework, in which a positive literal $x$ corresponds to $x'$, whereas a negative literal $\neg x$ corresponds to $1 - x'$. The three formulations are chosen because the solutions of the continuous CSPs have a direct correspondence to the solutions of the SAT problems. They have simple forms and are intuitive. Here, these SAT-like continuous CSPs are mainly used as test problems to study the performance of various continuous optimization methods, not as a way to solve discrete SAT problems faster. In the rest of this section, we illustrate the continuous formulations using a simple example, the 3-SAT clause $x_1 \vee x_2 \vee \neg x_3$.

2.1.1. Exponential formulation

The basic transformation function of the exponential formulation is

$$f(x) = e^{b(x-1)} \qquad (2)$$

where $b$ is a real number. The constant limit for the inequality is $c = 1$. For instance, based on this function, clause $x_1 \vee x_2 \vee \neg x_3$ is transformed into the non-linear constraint

$$e^{b(x_1'-1)} + e^{b(x_2'-1)} + e^{-b x_3'} \ge 1. \qquad (3)$$

This formulation has been chosen because the resulting constraint makes a similar ratio of the search space infeasible as SAT clauses (i.e., 1/2 per variable, or 1/8 for 3-SAT clauses). The constant $b$ is chosen to make sure that a solution to the continuous constraint can be mapped to a feasible solution to the original clause through thresholding. In other words, the center of the space between 0 and 1, $(0.5, 0.5, \ldots, 0.5)$, should not satisfy the continuous constraint. For 3-SAT problems, we have

$$3e^{-0.5b} < 1 \;\Rightarrow\; b > 2\ln 3 \approx 2.1972.$$
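To make the transformation concrete, the following MATLAB sketch (ours, not from the original paper) builds the left-hand side of Eq. (3) for one clause and maps a continuous point back to a Boolean assignment by thresholding; the encoding of a clause as signed variable indices is a hypothetical choice for illustration.

    b = 2.2;                          % steepness constant, b > 2*ln(3)
    clause = [1 2 -3];                % hypothetical encoding of x1 v x2 v ~x3
    % value of a literal at point x: x_i for positive, 1 - x_i for negative
    litval = @(x, lit) (lit > 0) * x(abs(lit)) + (lit < 0) * (1 - x(abs(lit)));
    % left-hand side of Eq. (3); the continuous constraint is g(x) >= 1
    g = @(x) sum(arrayfun(@(lit) exp(b * (litval(x, lit) - 1)), clause));
    x = [0.9 0.2 0.1];                % a continuous point
    feasible = g(x) >= 1;             % true here: g(x) is about 1.78
    boolAssign = x > 0.5;             % thresholding gives [1 0 0], which satisfies the clause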


Fig. 3. Feasible spaces (white regions, including boundary) of the exponential formulation for two 2-SAT problems with variables $x_1$ and $x_2$. Valid solutions to the discrete SAT problems are indicated by dots.

We arbitrarily set $b = 2.2$ in our experiments. (The problem formulation is not sensitive to small variations of $b$. Larger values would make the function steep at the inflection point.) Using this transformation, a 3-SAT problem with $n$ Boolean variables and $m$ clauses is converted into a CSP with $n$ continuous variables and $m$ continuous constraints. As an illustration, Fig. 3 shows 2-D feasible spaces of continuous CSPs created from two 2-SAT problems. The left diagram is for the SAT problem $(x_1 \vee x_2) \wedge (x_1 \vee \neg x_2)$, and the right diagram is for the SAT problem $(x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$. The interiors of the dark areas are infeasible regions, whereas the white areas, including points on the boundary, are feasible regions. Using the thresholding scheme, a feasible solution to the continuous problems can be mapped to a feasible solution to the original (discrete) SAT problems. Furthermore, a feasible solution to a SAT problem, such as $x_1 = 1$, $x_2 = 1$, and $x_3 = 1$ for the sample SAT clause $x_1 \vee x_2 \vee \neg x_3$, is always a feasible solution to the continuous CSP.

2.1.2. Sigmoid formulation

The basic transformation function of the sigmoid formulation is

$$f(x) = \frac{1}{1 + e^{-a(x-1)}} \qquad (4)$$

where $a$ is a real constant. The constant limit for the inequality is $c = 1/2$. Based on this function, clause $x_1 \vee x_2 \vee \neg x_3$ is transformed into the non-linear constraint

$$\frac{1}{1 + e^{-a(x_1'-1)}} + \frac{1}{1 + e^{-a(x_2'-1)}} + \frac{1}{1 + e^{a x_3'}} \ge \frac{1}{2}. \qquad (5)$$

Again, the resulting constraint makes a similar ratio of the search space infeasible as SAT clauses (i.e., 1/2 per variable, or 1/8 for 3-SAT clauses). The constant $a$ is chosen to make sure that a solution to the continuous constraint can be mapped to a feasible solution to the original problem through thresholding. In other words, the center of the space between 0 and 1, the vector $(0.5, 0.5, \ldots, 0.5)$, should not satisfy the continuous constraint. For 3-SAT problems, we have

$$\frac{3}{1 + e^{0.5a}} < \frac{1}{2} \;\Rightarrow\; a > 2\ln 5 \approx 3.2189.$$

We arbitrarily set $a = 3.3$ in our experiments. (Again, the problem formulation is not sensitive to small variations of $a$, and larger values would make the function steep at the inflection point.) Similar to the exponential formulation, a solution to the continuous CSP can be mapped to a feasible solution to the original SAT problem through the thresholding scheme. Fig. 4 shows 2-D feasible spaces of continuous CSPs created from the same two 2-SAT problems as before, using the sigmoid formulation. The left diagram is for the SAT problem $(x_1 \vee x_2) \wedge (x_1 \vee \neg x_2)$, and the right diagram is for the SAT problem $(x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$. The interiors of the dark areas are infeasible regions, whereas the white areas, including points on the boundary, are feasible regions.
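In code, the sigmoid form differs from the exponential sketch above only in the transformation function; a minimal variant, assuming the clause encoding and litval helper from the earlier sketch:

    a = 3.3;                                  % steepness constant, a > 2*ln(5)
    f = @(v) 1 ./ (1 + exp(-a * (v - 1)));    % Eq. (4)
    % left-hand side of Eq. (5); the constraint is g(x) >= 1/2
    g = @(x) sum(arrayfun(@(lit) f(litval(x, lit)), clause));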

Fig. 4. Feasible spaces (white regions, including boundary) of the sigmoid formulation for two 2-SAT problems with variables $x_1$ and $x_2$. Valid solutions to the discrete SAT problems are indicated by dots.


Fig. 4 looks almost identical to Fig. 3 because the constraints in the exponential formulation have boundaries similar to those in the sigmoid formulation.

2.1.3. Quadratic formulation

The basic transformation function of the quadratic formulation is

$$f(x) = x^2 \qquad (6)$$

and the constant limit for the inequality is $c = 1$. Based on this function, clause $x_1 \vee x_2 \vee \neg x_3$ is transformed into the quadratic constraint

$$x_1'^2 + x_2'^2 + (1 - x_3')^2 \ge 1. \qquad (7)$$

This formulation has been chosen because it is one of the simplest forms of encoding Booleans in continuous form. It is also presumed to be a good formulation for solvers using a quadratic approximation of the constraint function (cf. Section 3.1). Fig. 5 shows 2-D feasible spaces of continuous CSPs created from the same two 2-SAT problems as before, using the quadratic formulation. The left diagram is for the SAT problem $(x_1 \vee x_2) \wedge (x_1 \vee \neg x_2)$, and the right diagram is for the SAT problem $(x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$. The interiors of the dark areas are infeasible regions, whereas the white areas, including points on the boundary, are feasible regions. Unlike in the exponential and sigmoid formulations, when we use the thresholding scheme to map a solution back to the discrete SAT problem, a feasible solution to the continuous problem may not always be mapped to a feasible solution to the original SAT problem.

Fig. 5. Feasible spaces (white regions, including boundary) of the quadratic formulation for two 2-SAT problems with variables $x_1$ and $x_2$. Valid solutions to the discrete SAT problems are indicated by dots. The boundaries of the additional constraints are depicted by the outer box.


For example, considering the right diagram in Fig. 5, when both $x_1$ and $x_2$ have large positive or large negative values at the same time, they are feasible solutions to the continuous CSP. However, after thresholding, they are not feasible solutions to the original SAT problem. This issue can be addressed by introducing simple bounds on the variables in the continuous formulation. For example, by adding the constraints

$$-0.5 \le x_i' \le 1.5 \quad \text{for } i = 1, \ldots, n, \qquad (8)$$

a solution to the continuous problem is always mapped to a feasible solution to the SAT problem using the previous thresholding scheme.
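As an illustration of how such a formulation is handed to a solver, the sketch below (our own; only fmincon's documented interface is assumed) expresses the quadratic constraint (7) in fmincon's convention of non-linear inequalities c(x) <= 0, together with the bounds (8):

    % fmincon expects non-linear inequalities as c(x) <= 0, so the constraint
    % x1^2 + x2^2 + (1 - x3)^2 >= 1 of Eq. (7) becomes 1 - (...) <= 0.
    nonlcon = @(x) deal(1 - (x(1)^2 + x(2)^2 + (1 - x(3))^2), []);
    obj = @(x) 0;                     % pure satisfaction: constant objective
    lb = -0.5 * ones(3, 1);           % bounds of Eq. (8)
    ub =  1.5 * ones(3, 1);
    x0 = rand(3, 1);                  % random starting point in [0, 1]^3
    x  = fmincon(obj, x0, [], [], [], [], lb, ub, nonlcon);
    boolAssign = x > 0.5;             % threshold back to a Boolean assignment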

2.2. A generic problem generator

The SAT-like continuous constraint problem formulations allow us to connect our work to the studies of (discrete) SAT problems, but they are far removed from real applications such as the robot control problem. The choice of problem ensembles has been recognized as an important part of the analysis task [16], and it has been shown that random problems do not always exhibit the relevant characteristics of real-world problems [12]. In particular, we need flexible test-case generators that can generate non-linear continuous CSPs similar to the robot control problems. In various disciplines, test problems have been used to evaluate the performance of a large variety of non-linear constrained optimization/satisfaction methods, including mathematical programming methods [9], evolutionary methods [26], and simulated annealing methods [36]. Their performance often varies across different application domains and even from one problem instance to another. For example, in the mathematical programming community, several test problem sets have been proposed, such as the constrained and unconstrained testing environment (CUTE) [6] and constrained optimization problems (COPs) [2], which are collections of non-linear constraint problems from real applications. Unfortunately, while the number of variables may be a parameter in these problems, the ratio of variables to constraints cannot be changed in either of these problem sets. Since previous phase transition work on random SAT problems concluded that the clause-to-variable ratio is an important parameter in determining the difficulty of SAT problems, changing that ratio (and other measures of feasibility) is an important capability for our purpose. In the context of evolutionary computing, a test-case generator capable of creating various test problems with different characteristics has been proposed for analyzing and comparing constraint-handling techniques [25]. The generator can generate problems with different relative sizes of feasible regions, different numbers and types of constraints, convex or non-convex objective functions, and highly non-convex constraints consisting of (possibly) disjoint


regions. Its major limitation is that it defines a landscape that consists of a collection of optimization functions that are only individually (piece-wise) continuous and are defined on different sub-spaces of equal size. Because derivatives do not exist on the boundary of two sub-spaces, optimization methods requiring continuous derivatives cannot be applied. Furthermore, in this generator, the number of peaks in the objective function is defined by $w^n$, where $n$ is the number of variables and $w$ is a positive integer. Thus there is only one peak when $w = 1$ and $2^n$ peaks when $w = 2$. The number of peaks explodes even for small values of $w$ and $n$.

Test problems should allow one to easily compare different optimization techniques developed in various scientific communities on a level ground. They should be useful in investigating the various characteristics (e.g., constraint-to-variable ratio) and complexity of constraint problems, as well as the computational complexity of various solving techniques. Our general framework for generating test problems borrows the idea of randomly generated SAT problems [31]. It has a fixed-constraint-length model and a constant-density model. In the satisfiability community, formulas generated using the fixed-clause-length model are called random K-SAT. A K-SAT instance is characterized by three parameters: the number of variables, the number of clauses, and the number of variables per clause. Formulas generated using the constant-density model are called random P-SAT. A P-SAT instance is also characterized by three parameters: the number of variables, the number of clauses, and the probability that a variable is in a clause. Similarly, the parameters of our generator $G(n, m, k, p, t)$ are
• $n$, the number of variables,
• $m$, the number of constraints,
• $k$, the maximum number of variables in a constraint,
• $p$, the probability that a variable is in a constraint, and
• $t \in [0, 1]$, the average tightness of the constraints.
In selecting the variables appearing in a random constraint, the generator first randomly picks $k$ variables and then decides with probability $p$ whether each is selected. For instance, $G(n, m, 3, 1, t)$ generates 3-SAT-like formulas, whereas $G(n, m, n, p, t)$ generates P-SAT-like formulas. By leaving the form of the constraints open, the generator represents a family of CSPs. For example, the 3-SAT-like continuous problems presented in the previous section can be generated by the generator with their particular constraint formulations. We present two more implementations of the generator, based on sine functions, later in this section.

We define the tightness $t$ of a (continuous) constraint as the ratio of the space made infeasible by the constraint to the total problem space. For example, $t = 0$ means that the constraint leaves the entire problem space feasible, $t = 1/2$ means that it makes half of the problem space infeasible, and $t = 1$ means that it makes the whole problem space infeasible. Assuming that all constraints are


independent, we can compare constraint problems with different numbers of constraints and tightnesses. For example, when two constraints of tightness $t$ are independent, their combined tightness is $1 - (1-t)^2 = 2t - t^2$. When $t$ is much smaller than 1, the combined tightness is approximately $2t$, i.e., equivalent to the tightness of a single constraint of tightness $2t$. More generally, assuming that $m$ constraints of tightness $t$ are independent, their combined tightness $t'$ is

$$t' = 1 - (1-t)^m = 1 - \sum_{i=0}^{m} \binom{m}{i} (-t)^i = mt - \frac{m(m-1)}{2} t^2 + \cdots - (-t)^m. \qquad (9)$$

Now, consider another set of $2m$ independent constraints of tightness $t/2$. Their combined tightness $t''$ is

$$t'' = 1 - \left(1 - \frac{t}{2}\right)^{2m} = 1 - \sum_{i=0}^{2m} \binom{2m}{i} \left(-\frac{t}{2}\right)^i = mt - \frac{m(m-1/2)}{2} t^2 + \cdots - \left(-\frac{t}{2}\right)^{2m}. \qquad (10)$$

When $t$ is small and $m$ is large, the first term ($mt$) dominates and the two combined tightnesses $t'$ and $t''$ are approximately the same. As an example, a problem with $m$ constraints and average tightness $t$ has about the same combined tightness as a problem with $2m$ constraints and average tightness $t/2$. All the parameters of the generator can be changed independently. For this paper, we only look at K-SAT-like problems, and $p$ is always set to 1.

2.2.1. Sine-based sum-form continuous CSPs

The robot control problem mentioned before is representative of problems with spatially defined constraints using trigonometric functions. In part for that reason, we propose a test-case generator for sine-based continuous CSPs. In the following, we present two implementations of the generator, in which all constraints are generated based on the same base function with randomly generated coefficients. In the first implementation, the base form of $G(n, m, k, 1, t)$ is

$$\frac{1}{k} \sum_{i=1}^{k} a_i \sin(2\pi \alpha b_i x_{j_i} + 2\pi c_i) \le \theta \, \frac{1}{k} \sum_{i=1}^{k} a_i \qquad (11)$$

where $\{j_1, \ldots, j_k\} \subseteq \{1, \ldots, n\}$ are the indices of $k$ randomly chosen variables, and $0 \le a_i, c_i \le 1$ and $1 \le b_i \le 2$ are random coefficients: $a_i$ specifies the magnitude of the sine function, $b_i$ its frequency, and $c_i$ its phase. $\alpha$ controls the maximum frequency. A larger value of $\alpha$ usually makes the feasible region specified by the


constraint more disjoint. $\alpha = 1$ is used in our experiments. $-1 \le \theta \le 1$ specifies the threshold for satisfying the constraint, which is related to $t$ as described below. The larger its value, the easier it is to satisfy the constraint. For $\theta = 0$, on average half of the search space is feasible for the constraint. Both the left- and right-hand side of Eq. (11) are normalized to have values between $-1$ and 1. Constraints generated based on Eq. (11) may not be satisfiable. In our experiments, to guarantee that a randomly generated CSP is satisfiable, we always check the feasibility of the origin for each constraint. If the origin does not satisfy a constraint, we simply discard the constraint and generate another one.

Fig. 6 shows the function surfaces of four random constraints generated by $G(2, 1, 2, 1, 1/4)$ in the form of Eq. (11). Fig. 7 shows the 2-D feasible spaces formed by the combination of the same four random constraints (left diagram). For comparison, Fig. 7 also shows the 2-D feasible spaces formed when $t = 1/2$ with the otherwise same four constraints (right diagram). As can be seen, the feasible space (corresponding to the regions inside the contour lines in Fig. 7) formed by the combination of the four constraints is smaller for the tighter constraints. In discrete 3-SAT problems, each clause makes 1/8 of the search space infeasible, i.e., the tightness is exactly 1/8 for each 3-SAT constraint. By setting $k = 3$, the generator $G(n, m, 3, 1, t)$ generates problems with each constraint containing 3 randomly selected variables. By changing $t$, we can control the difficulty of the constraints, which in turn affects the complexity of the whole problem. For the base function in Eq. (11), although it is hard to find the exact $\theta$ value for a given $t$ so that each constraint makes a fixed fraction of the search space infeasible, we can find an approximate $\theta$ value that achieves this for the average case.
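A possible implementation of the sum-form generator is sketched below in MATLAB; the function names and the struct representation of a constraint are ours, and the origin-feasibility check follows the procedure just described.

    % Generate one sum-form constraint of Eq. (11) for G(n, m, k, 1, t),
    % discarding candidates that the origin does not satisfy.
    function con = gen_sum_constraint(n, k, alpha, theta)
      while true
        con.idx = randperm(n, k);       % indices of k randomly chosen variables
        con.a = rand(1, k);             % magnitudes,  0 <= a_i <= 1
        con.b = 1 + rand(1, k);         % frequencies, 1 <= b_i <= 2
        con.c = rand(1, k);             % phases,      0 <= c_i <= 1
        con.alpha = alpha; con.theta = theta;
        if sum_lhs(con, zeros(1, n)) <= theta * mean(con.a)   % origin feasible?
          return
        end
      end
    end

    function v = sum_lhs(con, x)        % left-hand side of Eq. (11)
      v = mean(con.a .* sin(2*pi*con.alpha*con.b .* x(con.idx) + 2*pi*con.c));
    end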

Fig. 6. 3-D plot of four random constraints generated by $G(2, 1, 2, 1, 1/4)$ in the form of Eq. (11). The right four figures are views from the top.

Fig. 7. 2-D feasible spaces (dark regions inside the contour lines) formed by the combination of four random constraints generated by $G(2, 1, 2, 1, t)$ in the form of Eq. (11) with $t = 1/4$ (left) and $t = 1/2$ (right).

Fig. 8. Average and standard deviation of the percentage of feasibility of randomly generated constraints with varying $\theta$. The left figure is for constraints generated by $G(3, 1, 3, 1, t)$ in the form of Eq. (11), sine-based sum-form constraints. The right one is for constraints generated by $G(3, 1, 3, 1, t)$ in the form of Eq. (12), sine-based product-form constraints.

Fig. 8 shows the average and standard deviation of the percentage of feasibility (i.e., 1 − tightness) of random constraints generated by $G(3, 1, 3, 1, t)$ with different $\theta$ values. The figure on the left-hand side is for sine-based sum-form constraints in the form of Eq. (11). The data is obtained from 1000 randomly generated constraints, with each constraint sampled randomly at 1000 points in the region $[-1, 1]$ on each dimension. The figure shows that $\theta = 0.57$ approximately gives an average tightness $t = 1/8$. Thus, $\theta = 0.57$ is used in our experiments to compare with 3-SAT-based continuous CSPs. For comparison purposes, we also try $\theta = 0.35$ in the experiments, in which case the average tightness of the constraints is approximately $t = 1/4$, corresponding to 2-SAT-like problems.
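The estimation behind Fig. 8 can be reproduced by straightforward Monte Carlo sampling; the sketch below reuses the hypothetical gen_sum_constraint and sum_lhs from the earlier sketch and mirrors the 1000-constraint, 1000-point setup (note that discarding origin-infeasible constraints biases the estimate slightly).

    % Estimate the average tightness of random constraints for a given theta.
    nCon = 1000; nPts = 1000; n = 3; k = 3; alpha = 1; theta = 0.57;
    tightness = zeros(nCon, 1);
    for i = 1:nCon
      con = gen_sum_constraint(n, k, alpha, theta);
      X = 2 * rand(nPts, n) - 1;                    % uniform samples in [-1, 1]^n
      viol = 0;
      for j = 1:nPts
        viol = viol + (sum_lhs(con, X(j, :)) > theta * mean(con.a));
      end
      tightness(i) = viol / nPts;                   % fraction made infeasible
    end
    fprintf('average tightness %.3f (std %.3f)\n', mean(tightness), std(tightness));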


When $\theta = 0.35$, each constraint is, on average, almost twice as hard to satisfy by random sampling as when $\theta = 0.57$. Finally, when $\theta = 0$, the average tightness is about 1/2.

2.2.2. Sine-based product-form continuous CSPs

The generator $G(n, m, k, p, t)$ can use other base functions besides Eq. (11) to generate constraints with different characteristics. Notice that the formulation in Eq. (11) allows a linear decomposition of the function into the sine terms involved. Other problems, such as the modular robot control problem, involve products of sines and cosines in the constraints' components. Thus, in the second implementation of the generator, we use a base function of $G(n, m, k, 1, t)$ to simulate such constraints as follows:

$$\prod_{i=1}^{k} \sin(2\pi \alpha b_i x_{j_i} + 2\pi c_i) \le \theta \qquad (12)$$

with $\{j_1, \ldots, j_k\}$, $b_i$, and $c_i$ chosen as in Eq. (11). Of course, summation and multiplication of variable components can also be combined. We plan to investigate these variations and their effects on problem complexity in future work.

Fig. 9 shows the function surfaces of four random constraints generated by $G(2, 1, 2, 1, 1/4)$ in the form of Eq. (12). Fig. 10 shows the 2-D feasible spaces formed by the combination of the same four random constraints (left diagram). For comparison purposes, Fig. 10 also shows the 2-D feasible spaces formed by the combination of four other random constraints with $t = 1/2$ (right diagram). The feasible space (corresponding to the regions inside the contour lines in Fig. 10) formed by the combination of the four constraints is again smaller for the tighter constraints. Comparing Figs. 10 and 7, the feasible regions in Fig. 10 are more connected.
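In code, the product form changes only the left-hand side; assuming the constraint struct of the earlier generator sketch:

    % Left-hand side of Eq. (12); the constraint is prod_lhs(con, x) <= con.theta
    prod_lhs = @(con, x) prod(sin(2*pi*con.alpha*con.b .* x(con.idx) + 2*pi*con.c));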

Fig. 9. 3-D plot of four random constraints generated by $G(2, 1, 2, 1, 1/4)$ in the form of Eq. (12). The right four figures are views from the top.

Fig. 10. 2-D feasible spaces (dark regions inside the contour lines) formed by the combination of four random constraints generated by $G(2, 1, 2, 1, t)$ in the form of Eq. (12) with $t = 1/4$ (left) and $t = 1/2$ (right).

In Fig. 8, the figure on the right-hand side shows the average and standard deviation of the tightness of random constraints generated by $G(3, 1, 3, 1, t)$ in the form of Eq. (12), the sine-based product-form constraints. The data is obtained from 1000 randomly generated constraints, with each constraint sampled randomly at 1000 points in the region $[-1, 1]^3$. Fig. 8 shows that the tightness of product-form constraints increases faster than that of sum-form constraints as $\theta$ decreases, while the standard deviations are smaller. The figure shows that $\theta = 0.40$ approximately gives an average tightness $t = 1/8$. When $\theta = 0.18$, the average tightness of the constraints is approximately $t = 1/4$. Finally, when $\theta = 0$, the average tightness is about 1/2.

3. Search algorithms

Different search algorithms have different complexity behaviors in solving continuous or combinatorial optimization problems. In this paper, we study the computational complexity of both local search and global search algorithms in solving continuous CSPs. In this section, we first present the local search method used in our experiments, the MATLAB implementation of the widely used and very efficient SQP method [10]. Then we present the NM direct search method [27], which is capable of performing global search.

3.1. Sequential quadratic programming and its MATLAB implementation

The SQP optimization method represents the state of the art in non-linear programming methods. Based on the work of Biggs [1], Han [14], and Powell [28], the method mimics Newton's method for constrained optimization. It is an iterative method, starting from some initial point and converging to a constrained local minimum. At each iteration of an SQP method, one solves a


quadratic program (QP) that models the original non-linear constraint problem at the current point. The solution to the QP is used as a search direction to find an improving point, which is used in the next iteration. The iteration is repeated until an optimal or feasible solution is found (for optimization or satisfaction problems, respectively).

The function fmincon is a MATLAB implementation of the SQP method. fmincon is a local search algorithm in the continuous search space, just as GSAT [30] is a local search algorithm in the discrete search space. It starts from an initial point and usually converges to a constrained local optimum close to the initial point. fmincon may be trapped by local optima and may not be able to find a feasible solution when constraints are non-linear. fmincon cannot prove infeasibility when no feasible solution exists. One issue in using fmincon to solve continuous CSPs is that one run of fmincon from an infeasible starting point may not converge to a feasible solution. When that happens, we restart fmincon from another point. In our experiments, starting points are randomly generated within a certain range for each variable in the following two ways.

Restart method 1 (R1): global random restart. This method mimics GSAT by combining fmincon with random global restart. fmincon is started from a random initial point in a certain fixed region. If it stops without finding a feasible solution, fmincon is restarted from a new random point in the fixed region. This procedure is repeated until a feasible solution is found. For example, in solving the continuous sine-based problems, variable values of the initial and restart points are generated randomly in the interval $[-1, 1]$.

Restart method 2 (R2): global random restart with pre-sampling. The goal of our second restart method is to generate better starting points. Instead of generating a single random point, the algorithm randomly samples $k$ points in a fixed region and selects the best one as the initial or restart point for fmincon. In our experiments, we set $k$ to $n$, the number of variables. The best sample point is the one with the smallest constraint violation, which is usually close to or even in a feasible region. As in R1, the variable values $x_i$ of the sample points are generated in the same fixed region, e.g., $[-1, 1]$ for the sine-based problems. Compared to R1, each run of R2 has an additional cost of $k$ function evaluations for generating the starting point.

Although we present the two restart methods in the context of fmincon, they are also used to generate starting points for the NM algorithm (discussed next) in our experiments. They are general strategies that can be used with other search methods as well.
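A minimal sketch of restart method R2 wrapped around fmincon follows; the constraint-violation measure viol and the nonlcon handle are assumed to be supplied by the problem, and the restart budget is our own addition to keep the loop finite.

    % Restart method R2: sample n points, start fmincon from the best,
    % and repeat until a feasible point is found or the budget is exhausted.
    function [x, found] = solve_R2(viol, nonlcon, n, lo, hi, maxRestarts)
      found = false; x = [];
      for r = 1:maxRestarts
        S = lo + (hi - lo) * rand(n, n);                 % n sample points (rows)
        [~, best] = min(arrayfun(@(i) viol(S(i, :)), 1:n));
        x = fmincon(@(z) 0, S(best, :), [], [], [], [], [], [], nonlcon);
        c = nonlcon(x);                                  % inequality values
        if all(c <= 1e-8), found = true; return; end     % feasible: done
      end
    end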


3.2. Nelder–Mead algorithm and its MATLAB implementation

The NM algorithm [27], first published in 1965, is a very effective method for multidimensional unconstrained optimization. The NM method falls in the class of direct search methods, which minimize a scalar-valued non-linear function of real variables using only function values, without any derivative information (explicit or implicit). The NM algorithm is an iterative method. To minimize an $n$-dimensional objective function, it starts with a simplex in $n$-D space, a polyhedron with $n + 1$ vertices. At each iteration, the method evaluates the objective function at one or more trial points whose locations depend on the relative values of the function at the vertices and at earlier trial points (Fig. 11). The simplex is altered by reflection, expansion, or contraction in order to find a new point improving on the worst vertex, i.e., the vertex with the largest objective value. At the end of each iteration, a new simplex is created in which the worst vertex has been replaced. (The NM algorithm should not be confused with the simplex algorithm of Dantzig [7] for linear programming. Both algorithms employ a sequence of simplices, but are otherwise completely different and unrelated.)

Direct search methods work much better on noisy, discontinuous, and non-differentiable functions than derivative-based methods, such as quasi-Newton methods [4] and conjugate gradient methods [23]. Besides the NM algorithm, other direct search methods include the multidirectional search method [8,35] and the Hooke–Jeeves algorithm [18]. Among the various direct search methods, the NM algorithm is the most popular, despite the fact that it has been shown that the NM algorithm does not always converge to a global minimum for general non-linear functions [24]. The reasons for its popularity are suggested in [22] as follows: (1) the NM algorithm typically produces significant improvement in the first few iterations, which is important for many applications that want improvement fast; (2) for applications where a function evaluation is very expensive and derivatives cannot be calculated, the NM method tends to require substantially fewer function evaluations than the alternatives, and its relative efficiency for large classes of problems often outweighs the lack of convergence theory; (3) the NM algorithm is easy to explain and simple to program.

Fig. 11. Nelder–Mead simplex with 3 vertices for a 2-D problem. In one iteration, the worst point x3 is replaced by either point r (reflection), e (expansion), oc (outside contraction), or ic (inside contraction), or the entire simplex is contracted towards the best point if none of these points improve on x3. All points are generated along the vector from the worst point to the centroid of the remaining points, and the actually selected point depends on the amount of improvement in the objective function.


Simplex-based direct search methods have previously been extended to solving constrained optimization problems. In 1965, Box published a direct search method called Complex ("constrained simplex") [3]. The method starts by constructing a "complex" of at least $n + 1$ vertices ($n$ is the number of variables) in the feasible region, meaning all vertices are feasible. Box's method deals with infeasible points in a straightforward way by treating them as points with very bad objective function values. Because Box's Complex method relies on feasible points to start with and requires feasible points during the search, it does not work well on highly constrained problems where the search has to start from an infeasible region and feasible points are hard to find. A more effective approach for extending direct search methods to highly constrained problems is to use penalty functions. An example is the sequential weight increasing factor technique (SWIFT) [34], which was proposed in 1975 for non-linear constrained optimization. SWIFT converts a constrained problem into an unconstrained problem with a penalty function and minimizes the penalty function using the NM algorithm.

In this paper, we apply the NM method to minimize static penalty functions of a constrained problem, in a way similar to SWIFT. A penalty function is a sum of the objective function and a penalty term that corresponds to constraint violations. The penalty term is zero if all constraints are satisfied, and is positive otherwise. The penalty value may be determined simply based on whether a constraint is satisfied or not, or based on the distance of a point from the feasible region. Consider a general non-linear constrained problem defined as follows:

$$\begin{array}{lll} \text{Minimize}_X & f(X) & \quad X = (x_1, x_2, \ldots, x_n) \\ \text{subject to} & g(X) \le 0 & \\ & h(X) = 0 & \end{array} \qquad (13)$$

where $X = (x_1, x_2, \ldots, x_n)$ is a vector of real numbers, $f(X)$ is an objective function, and $g(X) = [g_1(X), \ldots, g_{m_1}(X)]^T$ and $h(X) = [h_1(X), \ldots, h_{m_2}(X)]^T$ are constraint functions. $f(X)$, $g(X)$, and $h(X)$ can be either continuous or discontinuous, and linear or non-linear. The penalty function we use is

$$L(X, \lambda) = f(X) + \sum_{i=1}^{m_1} \lambda_i \max\{0, g_i(X)\} + \sum_{j=1}^{m_2} \lambda_{j+m_1} |h_j(X)| \qquad (14)$$

where $\lambda_1, \ldots, \lambda_{m_1+m_2}$ are penalty coefficients. In this paper, we simply set all $\lambda$ values to 1 in our experiments.
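In MATLAB, the penalty function (14) with all $\lambda$ set to 1 is a one-liner; g and h are assumed to be function handles returning vectors of inequality and equality constraint values as in Eq. (13):

    % Penalty function L(X) of Eq. (14) with all penalty coefficients set to 1.
    penalty = @(X, f, g, h) f(X) + sum(max(0, g(X))) + sum(abs(h(X)));
    % For a pure satisfaction problem the objective is constant, and the
    % penalty is zero exactly at feasible points; NM then minimizes it, e.g.:
    %   X = fminsearch(@(X) penalty(X, @(X) 0, g, h), X0);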


The NM algorithm can perform minimization based on either local or more global information. When the initial simplex is large and covers a large region of the search space, the NM algorithm performs descent based on more global information. On the other hand, when the initial simplex is small, it performs local descent. When doing local descent, the NM algorithm usually converges slowly compared to derivative-based methods, such as SQP, on smooth functions. Therefore, when faster local optimization methods are available, they can be combined with NM in such a way that NM finds the global structure of the search space and moves close to a good solution of an optimization problem, or a feasible region of a CSP; a more efficient local search method can then be used to actually find the global (or near-global) solution, or the feasible region.

In minimizing a penalty function, starting points of the NM algorithm can be generated either randomly or heuristically. In our experiments, they are generated in the same way as for the multistart of SQP (discussed in the previous section): restart method 1 (R1), a random point, or restart method 2 (R2), the best of multiple random points. The initial simplex of the NM algorithm is constructed around the starting point. The starting point is used as one vertex. The size of the initial simplex is determined by a parameter $R_{\text{range}}$. Each of the other $n$ vertices of the initial simplex is created by perturbing the value of one variable of the starting point randomly within range $R_{\text{range}}$. Let $X^1 = (X_1^1, X_2^1, \ldots, X_n^1)$ be the starting point of NM, also the first vertex of the simplex. Then the $j$th vertex $X^j = (X_1^j, X_2^j, \ldots, X_n^j)$, $j = 2, \ldots, n+1$, is

$$X_i^j = \begin{cases} X_i^1 & i + 1 \ne j \\ X_i^1 + R_{\text{range}} \, (r - 0.5) & i + 1 = j \end{cases} \quad \text{for } i = 1, \ldots, n \qquad (15)$$

where $0 \le r \le 1$ is a random number and $R_{\text{range}} > 0$. $R_{\text{range}}$ is the same for all vertices. A larger $R_{\text{range}}$ leads to a larger initial simplex, enabling the NM algorithm to search more globally. The NM algorithm we used in our experiments is the fminsearch program provided with MATLAB 6.0. The only change we made is in how the initial simplex is constructed.
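The construction of Eq. (15) can be written as follows (our sketch; stock fminsearch does not accept a user-supplied simplex, which is why the authors modified it, so only the construction step is shown):

    % Build the initial simplex of Eq. (15) around the starting point x1.
    function V = init_simplex(x1, Rrange)
      n = numel(x1);
      V = repmat(x1(:)', n + 1, 1);       % each row is one vertex; row 1 is x1
      for j = 2:n + 1                     % vertex j perturbs coordinate j - 1
        V(j, j - 1) = V(j, j - 1) + Rrange * (rand - 0.5);
      end
    end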

4. Experimental results

In this section, we present experimental results of solving random 3-SAT-like and sine-based continuous CSPs using the SQP and NM methods. Comparing the complexity of the different continuous CSPs solved by different search algorithms allows us not only to compare different problems and solving approaches, but also to connect back to results from similar studies on SAT problems.


4.1. Results for SAT-like continuous CSPs

4.1.1. Local search

We have studied the three continuous formulations (exponential, sigmoid, and quadratic) of 3-SAT problems with fmincon using restart methods 1 (R1) and 2 (R2) and run experiments on randomly generated 3-SAT problems with $n$ ranging from 10 to 60 [32]. Fig. 12 shows representative results for 25- and 50-variable instances with the constraint ratio (the ratio of constraints to variables) varying from 1 through 3.4. The results are based on 100 random problem instances for each constraint ratio, solved by fmincon/R1. The starting points were generated in the range $[0, 1]$ for each variable. Each random instance is guaranteed to have at least one feasible solution. The number of fmincon iterations, summed over all restarts, is used as a measure of the computational cost of solving the problems. We plot the median number of iterations of fmincon to find one feasible solution against the constraint ratio. Results for problems with other numbers of variables are similar. Note that, as an example, given 50 variables and a constraint ratio of 2.6, 10 iterations correspond to about 530 function evaluations.

We use medians instead of means because the means of the number of iterations are heavily influenced by a small number of extremely large values. Our interest is in what the majority of instances from the distribution are like, not the unusual or extreme cases. As the median is more robust in the presence of such outliers, it appears to be a more informative statistic for our purpose.

In Fig. 12, we see the following pattern: for formulas with relatively few clauses (under-constrained), fmincon finishes quickly, usually in one iteration.

Fig. 12. Complexity of continuous 3-SAT problems in three formulations solved by multistart of fmincon (restart method 1), the MATLAB implementation of SQP.



For formulas with many clauses (highly constrained), e.g., a constraint ratio larger than 2, fmincon requires exponentially more iterations to find a feasible solution. This is an easy-hard phase transition. The data also show that there is little difference between the different continuous formulations, a result validated with other values of $n$.

Fig. 13 compares the results of fmincon with two different starting ranges, $[0, 1]$ and $[-1, 2]$, within which the initial and restart points of fmincon are randomly generated. 3-SAT instances were transformed into the exponential formulation. Remember that the only difference between restart methods 1 and 2 is that in the latter, the best of $n$ randomly generated points is used as the starting point of the initial run and of every restart of fmincon. Because R2 has more sampling overhead than R1 in each restart, it is not fair to compare their numbers of iterations. A better metric is the number of function evaluations, including both the objective and constraint evaluations. fmincon can solve a constrained problem solely based on function values. Empirically, we find that although fmincon usually runs faster when derivatives are provided, there is no significant difference in the number of iterations needed to solve a problem. To compare the numbers of function evaluations used by the R1 and R2 methods, in this experiment we only provide function values to fmincon, not derivatives. We expect the physical time to be proportional to the number of function evaluations.

Fig. 13 shows the median numbers of function evaluations in solving 100 random instances. The same random instances were solved by the R1 and R2 methods. The data show that for high constraint ratios, (a) R2 improves on R1 and (b) the larger starting range $[-1, 2]$ is better than the smaller $[0, 1]$. In the easy phase, fmincon finds a feasible solution in one iteration, typically using $n + 3$ function evaluations, which translates into 28 function evaluations for the 25-variable instances and 53 function evaluations for the 50-variable ones.

Fig. 13. Complexity of continuous 3-SAT problems of the exponential formulation solved by multistart of fmincon. The left figure is for 25 variable instances and the right one is for 50 variable instances. R1 and R2 stand for restart methods 1 and 2, respectively.


R2 uses more function evaluations in the easy phase due to the overhead of evaluating the $n$ sample points. R2 with a starting range of $[-1, 2]$ has the best results for the highly constrained instances. No significant improvement was found when we further increased the starting range. In fact, when the starting range is too large, it is hard for fmincon to find feasible solutions. One reason is that the amount of constraint violation of unsatisfied constraints approaches 1 for large variable values, providing little guidance toward good search directions.

Fig. 14 shows results of similar experiments on the sigmoid formulation. Random 3-SAT instances in the sigmoid form were solved by fmincon using R1 and R2, starting in two different ranges, $[0, 1]$ and $[-1, 2]$. Again, the median numbers of function evaluations in solving 100 random instances are shown. Similar to the results on the exponential form, an easy-hard phase transition is observed for both restart methods and starting ranges. R2, together with the larger starting range $[-1, 2]$, is able to push the transition point slightly higher and has the best results on highly constrained instances.

One important observation from the results shown in Figs. 12–14 is that the performance curves exhibit similarity to each other and to discrete 3-SAT results. Slightly above a constraint ratio of 2, both the 25- and 50-variable problems become exponentially more difficult, while below this ratio the problems are relatively easy to solve. Thus, the ratio of the number of constraints to the number of variables seems to be an important parameter of the problem. This transition from polynomial to exponential cost is analogous to a transition in discrete 3-SAT [5,17]. In particular, an approximate theory predicts that a transition from polynomial to exponential scaling of costs occurs just when there are enough clauses to make the expected number of consistent partial assignments no longer monotonic as a function of the number of assigned variables.

Fig. 14. Complexity of continuous 3-SAT problems of the sigmoid formulation solved by multistart of fmincon. The left figure is for 25 variable instances and the right one is for 50 variable instances. R1 and R2 stand for restart methods 1 and 2, respectively.


For random $k$-SAT, this occurs when the clause-to-variable ratio $\mu$ satisfies $k\mu = (2^k - 1)\ln 2$ (Eq. (8) of [37]). For 3-SAT, this gives $\mu \approx 1.62$. Our results are qualitatively quite similar to those found for discrete 3-SAT problems [31]. The discrete case undergoes a transition from weak dependence to strong dependence at a constraint-to-variable ratio similar to the continuous case. In both the continuous and discrete cases, the exponential rate of increase is approximately independent of the number of variables and depends only on the ratio of constraints to variables. On the other hand, phase transition points and the slopes of the complexity curves in the exponential phase do appear to depend on the algorithm and therefore cannot be considered solely a characteristic of the problem. The median number of iterations increases slightly more slowly for R2 than for R1. The different cost slopes represent different solution times for problems above the critical constraint ratio. The solution time can vary by as much as a factor of ten between the various algorithms, which matters greatly for real-time applications.

4.1.2. Global search

fmincon is an example of a local optimization method for solving CSPs. These methods try to find a good solution near their starting points quickly. Global optimization methods take a different approach by exploring the search space more widely to discover global structures. Although they are usually slower than local search methods, it has been shown that they can find better solutions for hard problems. It is therefore interesting to study the complexity of global search methods in solving our CSPs.

The global search method used in our experiments is the NM algorithm. Starting with a large simplex, the NM algorithm is capable of searching along a descent direction of a global structure in the search space. It tolerates small fluctuations or noise in the search terrain better than local descent methods. On the other hand, NM has difficulties (1) when there are local minima with basins whose depth is comparable to the overall global trend of the objective function, (2) when there are isolated maxima, each surrounded by a ring minimum, such that the simplex may fall on a maximum and stall, and (3) when the global minimum is not located in a region where the average global structure has a minimum. In addition, the NM method is often slow in converging to the exact local minimum.

To take advantage of the strengths of NM and avoid its weaknesses, we combine NM with fmincon, effectively creating a new solver with both global and local search components. The idea is that we first run NM for a limited number of iterations. Then, the best solution is used as the starting point of one run of fmincon. If fmincon does not find a feasible solution, we restart NM from another random starting point, which is again followed by fmincon, and so on. This method works well if NM can find a good solution that is close to a


This is the NM + fmincon method we refer to in the rest of this paper.

Figs. 15 and 16 compare fmincon and NM + fmincon in solving 25- and 50-variable 3-SAT instances in the exponential continuous formulation. For a fair comparison of computational effort, we measure the number of function evaluations needed to find a feasible solution. All our experiments were run in MATLAB 6.0 on a 400 MHz Sun Enterprise 450 computer with Solaris version 2.6. The initial and restart points of both methods are randomly generated within the range [−1, 2] for each variable. In R2, the best of n (the number of variables) random points, selected by penalty-function value, is used as the initial or restart point. We tried several values for Rrange and for the maximum number of NM iterations, and settled on Rrange = 10 with the iteration limit set to 200.
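For reference, the two restart strategies can be sketched as follows (same SciPy stand-in assumptions as above; R2's "best of n" selection follows the description in the text):

import numpy as np

def restart_r1(penalty, n, lo=-1.0, hi=2.0, rng=None):
    """R1: a single uniform random point."""
    rng = rng or np.random.default_rng()
    return rng.uniform(lo, hi, n)

def restart_r2(penalty, n, lo=-1.0, hi=2.0, rng=None):
    """R2: draw n candidates, keep the one with the smallest penalty value."""
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(lo, hi, (n, n))         # n random points in R^n
    values = [penalty(c) for c in candidates]
    return candidates[int(np.argmin(values))]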

[Fig. 15 plots (two panels): median number of function evaluations (log scale) vs. ratio of constraints to variables; curves for fmincon and NM + fmincon, each with R1 and R2.]

Fig. 15. The median numbers of function evaluations used by fmincon and NM + fmincon in solving continuous 3-SAT problems in the exponential formulation. The left diagram is for 25 variable instances and the right one is for 50 variable ones.

[Fig. 16 plots (two panels): median CPU time in seconds (log scale) vs. ratio of constraints to variables; curves for fmincon and NM + fmincon, each with R1 and R2.]

Fig. 16. The median CPU time in seconds used by fmincon and NM + fmincon, respectively, in solving continuous 3-SAT problems in the exponential formulation. The left diagram is for 25 variable instances and the right one is for 50 variable ones.


The two diagrams in Fig. 15 plot the median number of function evaluations versus the ratio of constraints to variables (the constraint ratio). For 25-variable instances, the cost transition of fmincon from constant to exponential happens at constraint ratio 2.2 for R1 and at 2.6 for R2. The cost of NM + fmincon increases at a slower rate than that of fmincon in the exponential phase, and its phase transition point is not clearly visible within the constraint-ratio range of the diagram. For 50-variable instances, the cost transition of fmincon from constant to exponential happens at constraint ratio 2.0 for R1 and at 2.4 for R2. Phase transitions of NM + fmincon are clearly observed for both restart methods: the cost transition from constant to exponential occurs at constraint ratio 2.6 for R1 and at 2.8 for R2.

An important observation is that fmincon is faster on weakly constrained instances, whereas NM + fmincon is much better on highly constrained instances. Using R1 on the 25-variable instances, NM + fmincon requires fewer function evaluations once the constraint ratio exceeds 3.2; using R2, the switch happens at a constraint ratio of around 4.0. R2 works better than R1 at high constraint ratios with both solvers. The data show that the global search performed by NM is especially useful for solving highly constrained CSPs. The effectiveness of NM + fmincon on highly constrained instances shows that NM can indeed find very good solutions close to feasible regions even when the problem is hard. In this case, although the search terrain is very rugged, there are global structures that NM can effectively discover and exploit; in contrast, fmincon increasingly behaves like random probing.

Fig. 16 shows the median CPU times in seconds used by fmincon and NM + fmincon in solving the continuous 25- and 50-variable 3-SAT instances. Compared to Fig. 15, the algorithmic overhead of fmincon apparently increases faster than that of NM as the constraint ratio increases. Thus, measured by CPU time, NM + fmincon is even more attractive than fmincon for highly constrained instances.

4.2. Results for sine-based continuous CSPs

For the sine-based continuous CSPs, we ran experiments on problems generated randomly using G(n, m, 3, 1, t) in the form of Eq. (11) or (12), with n = 25 or 50. All random instances are guaranteed to have at least one feasible solution, namely the origin, i.e., all variables set to 0.
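To make the generation process concrete, the sketch below builds a random instance in the spirit of G(n, m, 3, 1, t). The actual constraint forms are given by Eqs. (11) and (12) earlier in the paper; here we assume, purely for illustration, a representative sum-form constraint |a1 sin(pi x_i1) + a2 sin(pi x_i2) + a3 sin(pi x_i3)| <= b. Since sin(0) = 0, the origin satisfies every such constraint, which is one simple way of guaranteeing feasibility:

import numpy as np

def generate_sum_form(n, m, k=3, t=0.25, rng=None):
    """Hypothetical sum-form instance generator; returns a penalty function."""
    rng = rng or np.random.default_rng()
    constraints = []
    for _ in range(m):
        idx = rng.choice(n, size=k, replace=False)   # k of the n variables
        a = rng.uniform(-1.0, 1.0, k)                # random coefficients
        b = t * np.sum(np.abs(a))                    # smaller t => tighter
        constraints.append((idx, a, b))
    def penalty(x):
        x = np.asarray(x)
        total = 0.0                                  # zero iff x is feasible
        for idx, a, b in constraints:
            g = abs(np.dot(a, np.sin(np.pi * x[idx])))
            total += max(0.0, g - b)
        return total
    return penalty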

[Fig. 17 plots (two panels): median numbers of iterations and restarts (log scale) vs. ratio of constraints to variables; curves (1a)-(3b) as keyed in the caption.]

Fig. 17. Comparing median numbers of iterations and median numbers of restarts of fmincon with R1 in solving sine-based sum-form CSPs with 25 (left diagram) and 50 variables (right diagram). (1a) median no. of iterations, t = 1/8; (1b) median no. of restarts, t = 1/8; (2a) median no. of iterations, t = 1/4; (2b) median no. of restarts, t = 1/4; (3a) median no. of iterations, t = 1/2; (3b) median no. of restarts, t = 1/2.

4.2.1. Sum-form CSPs

Fig. 17 shows the results of fmincon using R1 in solving 25- and 50-variable sine-based sum-form CSPs with t = 1/2, 1/4, and 1/8, respectively. For each problem set, the graphs show both the number of iterations and the number of restarts.

In our experiments solving the SAT-like CSPs with fmincon and R1, fmincon could often run for only a single iteration, in which it either found a feasible solution or aborted. Thus, at high constraint ratios, fmincon often ran only one iteration between restarts. For the sine-based sum-form CSPs, Fig. 17 shows that each restart can result in multiple iterations at small constraint ratios, but that the average number of iterations per restart also falls to 1 as the constraint ratio increases. (This trend is evident in the converging iteration and restart curves for the same problems.) In other words, for highly constrained problem sets, solver iterations are often wasted until a restart point happens to be sufficiently close to a feasible region. This was part of the motivation for R2.

Given these observations, our hypothesis is the following. In the exponential formulation of the SAT-like continuous CSPs, feasible regions are large in the weakly constrained cases, and fmincon mostly needs only one iteration to find them. In the highly constrained cases, the search space is either flat or rugged, and fmincon cannot find a direction towards a feasible region from local information such as derivatives, so it usually gives up after one iteration. For the sine-based problems, function values change more gradually, and fmincon can continue for more iterations in each run. As the constraint ratio increases, feasible regions become smaller and most restart points are far from them; in addition, the search space becomes increasingly rugged, and the quadratic programming sub-problem no longer approximates the actual function well. Again, fmincon usually gives up after one iteration, and most of its runs are wasted. A starting point closer to a feasible region alleviates this problem, which is why R2 is better than R1.
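The iterations-per-restart statistic underlying this discussion can be collected with a thin wrapper around the local solver. A sketch under the same SciPy stand-in assumptions as before (SciPy reports the iteration count of a run as result.nit):

import numpy as np
from scipy.optimize import minimize

def iteration_profile(penalty, n, max_restarts=100, rng=None):
    """Restart until feasible; report median iterations per restart."""
    rng = rng or np.random.default_rng()
    iters = []
    for _ in range(max_restarts):
        x0 = rng.uniform(-1.0, 1.0, n)
        res = minimize(penalty, x0, method='SLSQP')
        iters.append(res.nit)                        # iterations in this run
        if res.fun < 1e-8:                           # feasible: stop
            break
    return np.median(iters), len(iters)              # (iters/restart, restarts)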

[Fig. 18 plots (two panels): median number of function evaluations (log scale) vs. ratio of constraints to variables; curves for t = 1/8, 1/4, 1/2, each with R1 and R2.]

Fig. 18. Complexity of random continuous sine-based sum-form CSPs of 25 (left diagram) and 50 (right diagram) variables solved by fmincon with R1 or R2.

Fig. 18 compares the results of fmincon with R1 and R2 in solving 25- and 50-variable sine-based sum-form CSPs with t = 1/2, 1/4, and 1/8, respectively; the median numbers of iterations over 100 random runs are shown. Phase transitions from flat to exponential are clearly observed for t = 1/4 and 1/8. The transition point is around 2 for the 25-variable problems with t = 1/8 and varies for the other cases; it is smaller for the 50-variable problems and for larger t. In Fig. 18, R2 exhibits a transition to exponential behavior at slightly higher ratios than R1, and R2 outperforms R1 by a large margin on the harder, highly constrained instances. This confirms the results from the SAT-like continuous CSPs, where the solution time above the critical constraint ratio could vary by more than a factor of ten between the two restart methods.

The difficulty of solving a problem is related to the combined tightness of its constraints. For example, the problems with t = 1/4 seem slightly harder (in terms of number of function evaluations) than the ones with t = 1/8 and twice as many constraints. Consistent with this, Eqs. (9) and (10) say that the combined tightness of m random constraints with t = 1/4 is approximately equal to that of 2m random constraints with t = 1/8. The same observation can be made for the problems with t = 1/2 compared to those with t = 1/4.

As Figs. 17 and 18 illustrate, the constraint ratio alone is insufficient for predicting problem complexity. Other important parameters include the number of variables, the tightness of the constraints, and the form of the constraint functions. Characterizing the important features of constraints in real problems, such as the robot control problem, is an important research direction.
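One plausible way to see the tightness relation above (an illustrative reading; the precise definitions are Eqs. (9) and (10) in an earlier section) is to assume that each random constraint independently excludes a fraction t of the search space. The expected feasible fraction remaining after m constraints is then

\[ (1 - t)^m = e^{\,m \ln(1 - t)} \approx e^{-mt} \qquad (t \ll 1), \]

so the combined tightness is governed approximately by the product mt, which is unchanged when t is halved and m is doubled: m(1/4) = 2m(1/8).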

[Fig. 19 plots (two panels): median number of function evaluations (log scale) vs. ratio of constraints to variables; curves for fmincon and NM + fmincon at t = 1/8, 1/4, 1/2.]

Fig. 19. The median numbers of function evaluations used by fmincon and NM + fmincon (both using R2) in solving continuous sine-based sum-form CSPs (Eq. (11)) of 25 (left diagram) and 50 (right diagram) variables.

[Fig. 20 plots (two panels): median CPU time in seconds (log scale) vs. ratio of constraints to variables; curves for fmincon and NM + fmincon at t = 1/8, 1/4, 1/2.]

Fig. 20. Comparison of the median CPU time in seconds used by fmincon and NM + fmincon (both using R2) in solving sine-based sum-form CSPs of 25 (left diagram) and 50 (right diagram) variables.

Next, we compare the performance of fmincon and NM + fmincon on continuous sine-based sum-form CSPs, using the number of function evaluations as the cost measure. We tested both algorithms with both R1 and R2 on random instances; R2 is better than R1 for highly constrained problems with both algorithms. Figs. 19 and 20 show the results of fmincon and NM + fmincon using R2 on 25- and 50-variable instances with different tightness values. The initial and restart points of both methods are randomly generated within [−1, 1] for each variable. We did not try to find the best parameter values for NM for these problems; instead, we set Rrange = 1 and the maximum number of iterations to 200 to illustrate the complexity behavior of NM, which is fundamentally different from that of fmincon. Compared with the NM parameters used for the continuous 3-SAT problems, a smaller value of Rrange is used, because the feasible regions of the sine-based problems can have any size and shape and the search terrain is more rugged. In addition, Rrange is set to 1 because we are mostly interested in the search space [−1, 1]^n, which represents the complexity of the problems.

Fig. 19 compares the median numbers of function evaluations used by fmincon and NM + fmincon in solving 25- and 50-variable instances with t = 1/2, 1/4, and 1/8. For NM + fmincon, phase transitions from near constant to exponential are clearly visible in all cases. For example, for instances with 25 variables and t = 1/4, the cost transition of NM + fmincon happens around 4.0


using R2. For 50-variable instances, the cost transition occurs at constraint ratio 2.8. Although NM + fmincon is more expensive than fmincon at small constraint ratios, its cost increases at a slower rate and eventually becomes lower at large constraint ratios. The crossover point varies with tightness, problem size, and solver. Generally, the cost curves of fmincon and NM + fmincon together form a "fish" shape.

The distinctive curve for NM + fmincon has three phases (easy-flat-hard) and can be explained by NM's overhead. Recall that in NM + fmincon, NM is run for up to a maximum of 200 iterations. In the first phase, where the constraint ratio is small (e.g., for 25 variables, up to 1 for t = 1/2, 2 for t = 1/4, and 4 for t = 1/8), the problems are easy: NM usually finds a solution within 200 iterations, although it needs more iterations as the constraint ratio increases. In the second phase, where the constraint ratio is moderate (e.g., for 25 variables, up to 2 for t = 1/2, 4 for t = 1/4, and 8 for t = 1/8), NM by itself cannot find a solution within 200 iterations, but a single NM run usually suffices to give fmincon a starting point from which it finds a solution; this is why the curve is almost flat in this phase. Finally, in the third phase, where the constraint ratio is large, multiple restarts of NM are required.

Fig. 20 compares the median CPU times in seconds used by fmincon and NM + fmincon. The results are consistent with those for the continuous 3-SAT problems: (1) the phase transitions in the CPU-time costs of fmincon are less obvious than those based on the number of function evaluations; and (2) the algorithmic overhead of fmincon, beyond the cost of function evaluations, increases quickly with the constraint ratio, whereas NM has very little algorithmic overhead, and that overhead grows only about linearly with the number of constraints. The NM + fmincon complexity curves based on CPU time and on the number of function evaluations are very similar. In terms of CPU time, NM + fmincon is thus even more attractive than fmincon for highly constrained problems.

4.2.2. Product-form CSPs

Finally, we compare the complexity phase transitions of fmincon and NM + fmincon in solving sum-form (Eq. (11)) and product-form (Eq. (12)) CSPs. Fig. 21 shows the results on 25-variable problems and Fig. 22 shows the results on 50-variable problems. Let us first compare the complexity curves of fmincon in solving sum-form and product-form CSPs with average tightness t = 1/8 (h = 0.57 and 0.40, respectively): they are almost identical for both the 25- and 50-variable instances (shown in the top two diagrams). The complexity curves of NM + fmincon on these two types of problems have similar phase transition points and are almost identical before those points; after the transition points, the product-form problems become more difficult than the sum-form ones, as shown by the steeper slopes of the complexity curves.

[Fig. 21 plots (two panels): median number of function evaluations (log scale) vs. ratio of constraints to variables; sum-form and product-form curves at t = 1/8, 1/4, 1/2.]

Fig. 21. Comparison of the complexity of fmincon (left diagram) and NM + fmincon (right diagram), both using R2, in solving 25-variable sum-form (Eq. (11)) and product-form (Eq. (12)) CSPs.

[Fig. 22 plots (two panels): median number of function evaluations (log scale) vs. ratio of constraints to variables; sum-form and product-form curves at t = 1/8, 1/4, 1/2.]

Fig. 22. Comparison of the complexity of fmincon (left diagram) and NM + fmincon (right diagram), both using R2, in solving 50-variable sum-form (Eq. (11)) and product-form (Eq. (12)) CSPs.

The striking similarity of the complexity of the two methods in solving the two types of problems may be explained by the fact that the average tightness of the sum-form constraints is the same as that of the product-form constraints.

5. Conclusion

In this paper, we study complexity phase transition phenomena of continuous CSPs based on two types of continuous problems: continuous 3-SAT-like problems and continuous sine-based problems. We present three alternative formulations of continuous 3-SAT-like problems based on discrete 3-SAT problems, in which each clause is converted into a continuous exponential, sigmoid, or quadratic constraint. The continuous sine-based problems, in contrast, are generated according to a generic benchmarking model for continuous CSPs. The model we propose in the paper aims at investigating the


characteristics and complexity of continuous constraint problems as well as the computational complexity of various solving techniques, and at facilitating the comparison of optimization techniques developed in different scientific communities. We present two implementations of the test-case generator that are capable of generating continuous random CSPs with different characteristics, in a form similar to randomly generated SAT problems. They are based on sine functions, since constraints using trigonometric functions are common in many real applications. The two implementations generate sine-based sum-form CSPs and sine-based product-form CSPs, respectively.

Two optimization methods are used to solve the various continuous CSPs. One is fmincon, an SQP method that performs local search in the continuous space. The other is NM + fmincon, which represents the strategy of global search followed by local search: it uses the NM simplex algorithm for global search, followed by fmincon for local search. The complexities of the two methods in solving the various continuous CSPs are compared and analyzed in detail.

One important result is that complexity phase transition phenomena are observed, in various forms, in all cases. In solving the 3-SAT-like problems, the complexities of solving continuous and discrete versions of SAT problems exhibit highly similar behavior as a function of problem characteristics such as the ratio of constraints to variables. In both the continuous and discrete cases, the complexity goes through an easy-hard phase transition at a constraint-to-variable ratio between 2 and 3. The similarity between the continuous and discrete SAT problem complexities provides further evidence that the complexity transitions are characteristic of the problem. On the other hand, different optimization methods exhibit different phase transition points and different complexity behaviors in the easy and hard phases. In particular, the local search method fmincon is faster on weakly constrained instances, whereas the method with global search, NM + fmincon, is much better on highly constrained instances. fmincon has earlier phase transition points than NM + fmincon on the same instances, and steeper complexity curves in the hard phases. In terms of physical time, the different complexities translate into significantly different solution times for instances above the critical constraint ratios; the solution time can vary by several orders of magnitude between algorithms on hard, large instances, which matters greatly in practical applications.

Continuous SAT-like problems are harder to solve than discrete SAT problems, whether measured in function evaluations or physical time. Furthermore, for the discrete problems, complete algorithms such as the Davis-Putnam method exist that can eliminate infeasible regions, and it is possible to prove infeasibility when there is no solution. This causes over-constrained discrete problems to become easier again after a point of maximum difficulty, where neither the search for possible solutions nor the elimination of infeasible regions works


well. Thus, the computational cost of solving discrete SAT problems goes through an easy-hard-easy phase transition, which is found not only for complete algorithms but also for local search methods such as GSAT. In contrast, we have so far not observed this phenomenon in our experiments with the two optimization methods. The phase transition we observed is of the easy-hard kind, which resembles previous phase transition results on combinatorial optimization problems such as the asymmetric traveling salesman problem [40]. We plan to investigate more optimization methods with respect to this issue.

The experiments with the generalized, sine-based CSPs show that various problem properties need to be taken into account to characterize problem complexity. In particular, varying the constraint tightness seems to be important when generating and analyzing continuous CSPs. In our experiments with the sine-based sum-form and product-form problems, we find that the complexities of the two optimization methods depend not only on the ratio of constraints to variables, but also strongly on the tightness of the constraints. For a fixed ratio of constraints to variables, the tighter the constraints, the harder the CSP. For the randomly generated test cases, it appears that two problems with the same number of variables are about equally difficult if one has constraints that are on average half as tight as those of the other, but twice as many of them. In addition, comparing the complexity of solving the sum-form and product-form sine-based CSPs, we found that if the average tightness of the constraints in a sum-form CSP is the same as that in a product-form CSP, the two problems are about equally difficult. This suggests that the complexity of a CSP may not depend critically on the form of its constraints.

In solving the sine-based CSPs, our results again show that the local search method, fmincon, works well on weakly constrained problems, whereas the global search performed by NM helps on highly constrained problems. This also means that the NM algorithm is effective at discovering global structure in a noisy search terrain and is able to perform global descent to find promising local regions. For a given CSP instance, the crossover point where the complexity curve of NM + fmincon drops below that of fmincon as the constraint ratio increases varies with the number of variables, the tightness of the constraints, and the restart method.

For general non-linear continuous CSPs such as the sine-based problems, no complete algorithms exist that can prove global optimality or infeasibility without assumptions on the behavior of the constraint functions, such as smoothness. Thus, no complete algorithm exists for general continuous problems, and problems appear to become continually more difficult with increasing constraint ratio. As we observed, the phase transition is most likely of the easy-hard kind, similar to previous phase transition results on combinatorial optimization problems. More work is needed to explain the behaviors of various search algorithms in solving continuous CSPs.


For example, in Fig. 18, in contrast to the SAT-like CSPs, the curves of fmincon seem to exhibit super-exponential behavior.

The work described in this paper is the foundation for various directions of future research on the complexity of continuous constraint problems. Clearly, more exploration of real-world problem ensembles, their relevant characteristics, and complexity-predictive structural parameters is needed. Problem generators that better capture the features of real-world problems would be very useful for benchmarking existing and future solving methods. The solvers used in our experiments are also just two examples from the set of available algorithms for such problems; our work indicates that progress can be made both by improving restart strategies and by using different solvers. Furthermore, our ultimate goal is to use the experience gained from this work to develop new adaptive solvers, e.g., solvers consisting of multiple existing methods. Such solvers could change their behavior dynamically and intelligently based on problem complexity (e.g., choosing the most appropriate solution method), yielding good performance regardless of constraint ratio and tightness.

Finally, it is clear that, when using search algorithms for problems arising in real-time applications, one wants to stay to the left of the exponential part of the complexity curve, since beyond it sensible time bounds quickly become impossible to guarantee. Even though global search methods may have relatively shallow exponential cost curves on highly constrained problems, for larger problems it becomes necessary to look for very different solution methods, such as hierarchical and distributed solving approaches.

Acknowledgements The authors would like to thank Warren Jackson, Tad Hogg, Danny Bobrow, Lara Crawford, Christophe Guettier, and the anonymous reviewers for their valuable suggestions and comments.

References

[1] M.C. Biggs, Constrained minimization using recursive quadratic programming, in: L.C.W. Dixon, G.P. Szergo (Eds.), Towards Global Optimization, North-Holland, 1975, pp. 341-349.
[2] A.S. Bondarenko, D.M. Bortz, J.J. More, COPS: Large-scale nonlinearly constrained optimization problems, Technical Memorandum ANL/MCS-TM-237, Argonne National Laboratory, Argonne, Illinois, 1998.
[3] M.J. Box, A new method for constrained optimization and a comparison with other methods, Computer Journal 8 (1965) 42-52.
[4] C.G. Broyden, Quasi-Newton methods, in: W. Murray (Ed.), Numerical Methods for Unconstrained Optimization, Academic Press, New York, 1972, pp. 87-106.


[5] C. Coarfa et al., Random 3-SAT: The plot thickens, Technical report, Dept. of Computer Science, Rice Univ., 2001. Available at www.cs.rice.edu/~vardi/papers.
[6] CUTE, Constrained and unconstrained testing environment, 2001. Available at http://www.cse.clrc.ac.uk/Activity/CUTE.
[7] G.B. Dantzig, Linear Programming and Extensions, Princeton University Press, 1963.
[8] J.E. Dennis, V. Torczon, Direct search methods on parallel machines, SIAM Journal on Optimization 1 (1991) 448-474.
[9] E.D. Dolan, J.J. More, Benchmarking optimization software with performance profiles, Technical Memorandum ANL/MCS-P861-1200, Argonne National Laboratory, Argonne, Illinois, 2001.
[10] R. Fletcher, Practical Methods of Optimization, John Wiley and Sons, 1980.
[11] M.P.J. Fromherz, T. Hogg, Y. Shang, W.B. Jackson, Modular robot control and continuous constraint satisfaction, in: Proceedings of the IJCAI-01 Workshop on Modelling and Solving Problems with Constraints, Seattle, WA, 2001, pp. 47-56.
[12] I. Gent, T. Walsh, Transitions from real computational problems, in: Proceedings of the 8th International Symposium on Artificial Intelligence, 1995, pp. 356-364.
[13] J. Gu, P.W. Purdom, J. Franco, B.W. Wah, Algorithms for the satisfiability (SAT) problem: A survey, in: D.-Z. Du, J. Gu, P. Pardalos (Eds.), Satisfiability Problem: Theory and Applications, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, 1997, pp. 19-152.
[14] S.P. Han, A globally convergent method for nonlinear programming, Journal of Optimization Theory and Applications 22 (1977) 297.
[15] T. Hogg, B.A. Huberman, C.P. Williams (Eds.), Artificial Intelligence, Special Volume on Frontiers in Problem Solving: Phase Transitions and Complexity, vol. 81, Elsevier, 1996, pp. 1-2.
[16] T. Hogg, B.A. Huberman, C. Williams, Phase transitions and the search problem, Artificial Intelligence 81 (1996) 1-15.
[17] T. Hogg, C.P. Williams, The hardest constraint problems: a double phase transition, Artificial Intelligence 69 (1994) 359-377.
[18] R. Hooke, T.A. Jeeves, Direct search solution of numerical and statistical problems, Journal of the Association for Computing Machinery 8 (1961) 212-229.
[19] J.N. Hooker, Generalized resolution and cutting planes, Annals of Operations Research 12 (1988) 217-239.
[20] A.P. Kamath, N.K. Karmarkar, K.G. Ramakrishnan, M.G.C. Resende, Computational experience with an interior point algorithm on the satisfiability problem, Annals of Operations Research 25 (1999) 43-58.
[21] K. Kotay, D. Rus, M. Vona, C. McGray, The self-reconfiguring robotic molecule, in: Proceedings of the Conference on Robotics and Automation (ICRA98), IEEE, 1998, p. 424.
[22] J.C. Lagarias, J.A. Reeds, M.H. Wright, P.E. Wright, Convergence properties of the Nelder-Mead simplex algorithm in low dimensions, SIAM Journal on Optimization 9 (1998) 112-147.
[23] D.G. Luenberger, Introduction to Linear and Nonlinear Programming, second ed., Addison-Wesley, Reading, MA, 1984.
[24] K.I.M. McKinnon, Convergence of the Nelder-Mead simplex method to a nonstationary point, SIAM Journal on Optimization 9 (1) (1998) 148-158.
[25] Z. Michalewicz, K. Deb, M. Schmidt, T. Stidsen, Test-case generator for constrained parameter optimization techniques, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 197-215.
[26] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1-32.
[27] J.A. Nelder, R. Mead, A simplex method for function minimization, Computer Journal 7 (1965) 308-313.


[28] M.J.D. Powell, Variable metric methods for constrained optimization, in: A. Bachem, M. Grotschel, B. Korte (Eds.), Mathematical Programming: The State of the Art, Springer-Verlag, 1983, pp. 288-311.
[29] D. Rus, M. Vona, Self-reconfiguration planning with compressible unit modules, in: Proceedings of the Conference on Robotics and Automation (ICRA99), IEEE, 1999.
[30] B. Selman, H.J. Levesque, D.G. Mitchell, A new method for solving hard satisfiability problems, in: Proceedings of AAAI-92, San Jose, CA, 1992, pp. 440-446.
[31] B. Selman, D. Mitchell, H.J. Levesque, Generating hard satisfiability problems, Artificial Intelligence 81 (1996) 17-29.
[32] Y. Shang, M.P.J. Fromherz, T. Hogg, W.B. Jackson, Complexity of continuous, 3-SAT-like constraint satisfaction problems, in: Proceedings of the IJCAI-01 Workshop on Stochastic Search Algorithms, Seattle, WA, 2001, pp. 49-54.
[33] Y. Shang, B.W. Wah, A discrete Lagrangian-based global-search method for solving satisfiability problems, Journal of Global Optimization 12 (1) (1998) 61-99.
[34] B.V. Sheela, P. Ramamoorthy, SWIFT: a new constrained optimization technique, Computer Methods in Applied Mechanics and Engineering 6 (1975) 309-318.
[35] V. Torczon, On the convergence of multidimensional search algorithms, SIAM Journal on Optimization 1 (1991) 123-145.
[36] B.W. Wah, Y.X. Chen, Hybrid constrained simulated annealing and genetic algorithms for nonlinear constrained optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2001.
[37] C.P. Williams, T. Hogg, Exploiting the deep structure of constraint problems, Artificial Intelligence 70 (1994) 73-117.
[38] M. Yim, Locomotion with a unit-modular reconfigurable robot, Ph.D. Thesis, Dept. of Mechanical Engineering, Stanford University, 1994.
[39] M. Yokoo, Why adding more constraints makes a problem easier for hill-climbing algorithms: analyzing landscapes of CSPs, in: Proceedings of CP'97, number 1330 in LNCS, Springer-Verlag, Berlin, 1997, pp. 357-370.
[40] W. Zhang, R.E. Korf, A study of complexity transitions on the asymmetric traveling salesman problem, Artificial Intelligence 81 (1996) 223-239.