A novel algorithm for solving optimal path planning problems based on parametrization method and fuzzy aggregation

Physics Letters A 373 (2009) 3439–3449
M. Zamirian*, A.V. Kamyad, M.H. Farahi
Department of Mathematics, Ferdowsi University of Mashhad, Mashhad 91775-1159, Iran

Article history: Received 1 May 2009; Received in revised form 5 July 2009; Accepted 8 July 2009; Available online 16 July 2009. Communicated by A.R. Bishop.

MSC: 90C29; 90C30; 93C10

Keywords: Optimal path planning; Multi-objective dynamic optimization; Fuzzy aggregation; Parametrization method; Non-linear programming; Membership function

Abstract. In this Letter a new approach is presented for solving optimal path planning problems for a single rigid, free-moving object in a two- or three-dimensional space in the presence of stationary or moving obstacles. In this approach the path planning problem has several incompatible objectives, such as the length of the path, which must be minimized, and the distance between the path and the obstacles, which must be maximized; a multi-objective dynamic optimization problem (MODOP) is thus obtained. Considering the imprecise nature of the decision maker's (DM) judgment, these multiple objectives are viewed as fuzzy variables. By determining intervals for the values of these fuzzy variables, flexible monotonically decreasing or increasing membership functions are determined as the degrees of satisfaction of these fuzzy variables on their intervals. The optimal path planning policy is then sought by maximizing the aggregated fuzzy decision values, resulting in a fuzzy multi-objective dynamic optimization problem (FMODOP). Using a suitable t-norm, the FMODOP is converted into a non-linear dynamic optimization problem (NLDOP). Using the parametrization method and some calculations, the NLDOP is converted into a sequence of conventional non-linear programming problems (NLPPs). It is proved that the solutions of this sequence of NLPPs tend to a Pareto optimal solution which, among the other Pareto optimal solutions, best satisfies the DM for the MODOP. Finally, the above procedure is proposed as a novel algorithm integrating the parametrization method and fuzzy aggregation to solve the MODOP. The efficiency of our approach is confirmed by some numerical examples. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

Finding an optimal path is one of the most widely applicable problems, especially in the robot industry, in military applications and, recently, in surgery planning [1]. Latombe [2] has gathered novel methods for path planning in the presence of obstacles. Wang et al. [3] considered two novel approaches, constrained optimization and semi-infinite constrained optimization, for unmanned underwater vehicle path planning. In [1] a new approach based on measure theory for finding an approximate optimal path in the presence of obstacles is presented. In [4] an applicable method for solving shortest path problems is proposed. In all of the above references, the distance between the path and the obstacles is supposed to be a crisp value and there are no incompatible objectives in the problems considered. In this Letter the optimal path planning problem has incompatible objectives, such as the length of the path and the distance between the path and the obstacles. In this situation, the DM wants to minimize the length of the path and maximize the distance between the path and the obstacles simultaneously. Some of these objectives are contradictory, so that the optimization of one objective may imply the sacrifice of other objectives. Therefore, the DM needs a multi-objective decision-making technique to look for a satisfying solution among conflicting objectives. Balicki considered a multi-objective problem with three criteria (minimize the total length of the path, satisfy a measure of safety, and ensure smoothness of the trajectory) to find the path of an underwater vehicle. He solved this multi-objective problem by using two methods: genetic programming [5] and tabu programming [6]. In this Letter, however, path planning is considered not only for underwater vehicles but for all vehicles, and the multi-objective problem is solved by mathematical methods. Optimization of a multi-objective problem is a procedure that looks for a compromise policy; the result, called a Pareto optimal solution, consists of an infinite number of alternatives. There is a large variety of methods for treating multi-objective optimization problems. These methods are classified in many ways according to different criteria [7–9]. For example, Cohon [9] categorized the methods into two relatively distinct subsets: generating

* Corresponding author. Tel./fax: +985118828606. E-mail address: [email protected] (M. Zamirian).
doi:10.1016/j.physleta.2009.07.018


Fig. 1. Obstacle k and its boundary, which is covered with circles.

methods and preference-based methods. The generating methods produce a set of Pareto optimal solutions and the DM then selects one of them on the basis of a subjective value judgment; among them the weighting-sum method is well known. The preference-based methods incorporate the DM's preference as the solution process goes on, and the solution that best fulfills the DM's preference is selected. Thus, all these multi-objective optimization methods for finding a Pareto optimal solution are filled with subjective and fuzzy properties [8]. In this Letter the multiple objectives are considered as fuzzy variables. Intervals are then determined for the values of the fuzzy variables, and for each interval a flexible membership function is defined as the degree of satisfaction of the corresponding fuzzy variable. The optimal policy, resulting in the FMODOP, is therefore to find an optimal path which maximizes all of the membership functions simultaneously. Using a suitable t-norm, the FMODOP is converted into an NLDOP whose variable is the path x(.). By substituting polynomials for x(.) (the parametrization method), a sequence of NLDOPs is obtained whose variables are the constant coefficients of the polynomials. With some calculations, the NLDOPs are converted into conventional non-linear programming problems (NLPPs). It is proved that the sequence of the solutions of the NLPPs converges to the solution of the NLDOP, and that this solution is a Pareto optimal solution for the MODOP. Thus, a novel algorithm integrating the parametrization method and fuzzy aggregation to solve the MODOP is proposed. Finally, some numerical examples are given to show the efficiency of our approach.

2. Problem statement

A single rigid and free-moving object A in a two- or three-dimensional space in the presence of stationary or moving obstacles is considered. We suppose that object A is a circle or sphere of radius r with center x(t) = (x_1(t), x_2(t), x_3(t)), and that obstacle k is a circle or sphere of radius r_k with center α_k(t) = (α_{1k}(t), α_{2k}(t), α_{3k}(t)), k = 1, 2, ..., q, for every t ∈ [0, t_f], where x(.) is an unknown continuously differentiable real vector-valued function which is the path of the moving object A, α_k(.), k = 1, 2, ..., q, are known continuous real vector-valued functions which are the paths of the moving obstacles, and t_f is a given real number, the final time. We emphasize that all obstacles are considered as circles or spheres in the plane or in space, respectively, since, e.g. in the plane, if the kth obstacle has a non-circular geometrical shape γ_k with compact boundary ∂γ_k, one can cover ∂γ_k by a finite number of circles and substitute these circles for the obstacle γ_k (see Fig. 1). Also we suppose

x(.) ∈ X = {x(t) | x(t) ∈ C^1(0, t_f), a(t) ≤ x(t) ≤ b(t), c(t) ≤ ẋ(t) ≤ d(t), x(0) = x_0, x(t_f) = x_f, t ∈ [0, t_f]},

where a(t) = (a_1(t), a_2(t), a_3(t)), b(t) = (b_1(t), b_2(t), b_3(t)), c(t) = (c_1(t), c_2(t), c_3(t)), and d(t) = (d_1(t), d_2(t), d_3(t)) are known continuous real vector-valued functions giving the bounds on x(t) and ẋ(t) for all t ∈ [0, t_f], respectively, and x_0 and x_f are given constant vectors in ℝ^3, the initial and final points of x(.). Now, in the evaluation of a path x(.) from the initial point x_0 to the final point x_f, three main criteria can be considered: the length of the path, the distance between object A and the obstacles (called the measure of safety), and the smoothness of the path.
The first criterion in this evaluation is the length of the path, which is of primary interest because of the time and economic aspects of the motion, and is defined as follows:

I_0(x(t_f)) = ∫_0^{t_f} √(ẋ_1^2(t) + ẋ_2^2(t) + ẋ_3^2(t)) dt = ∫_0^{t_f} ‖ẋ(t)‖_2 dt.

The second criterion is the safety measure of the path x(.) with respect to each obstacle. Set

ϕ_k(x(t)) = √((x_1(t) − α_{1k}(t))^2 + (x_2(t) − α_{2k}(t))^2 + (x_3(t) − α_{3k}(t))^2) − (r + r_k) = ‖x(t) − α_k(t)‖_2 − (r + r_k),

where ϕ_k(x(t)), k = 1, 2, ..., q, is the distance between object A (or the path x(.)) and obstacle k at the moment t. To clarify the notation, we give the following definition:

Definition 1. The least distance between object A and obstacle k over all t ∈ [0, t_f] is called the distance between object A and obstacle k.

Now, the distance between object A and obstacle k is denoted by ϕ_k(x(t_{kx(.)})), which is a positive real number and is defined by

ϕ_k(x(t_{kx(.)})) = min_{t∈[0,t_f]} ϕ_k(x(t)),


where t_{kx(.)} is the time in [0, t_f] that minimizes ϕ_k(x(t)); clearly this time depends on x(.). Now, ϕ_k(x(t_{kx(.)})) should be maximized to obtain a path that is as safe as possible. The third criterion is the smoothness of the path, which is automatically satisfied because we introduce the optimal path as a polynomial function belonging to C^∞[0, t_f] (the set of infinitely differentiable functions). Now, the goal is to find an optimal path which minimizes the length of the path, I_0(x(t_f)), and maximizes the distance between object A and obstacle k, ϕ_k(x(t_{kx(.)})), k = 1, ..., q. That is





min_{x(.)∈X} I_0(x(t_f))   and   max_{x(.)∈X} ϕ_k(x(t_{kx(.)})), k = 1, 2, ..., q.   (1)

We set I_k(x(t_{kx(.)})) = −ϕ_k(x(t_{kx(.)})), k = 1, 2, ..., q, and t_{0x(.)} = t_f; then the reduced form of (1) is the following MODOP:

min_{x(.)∈X} Z(x(.)) = (I_0(x(t_{0x(.)})), I_1(x(t_{1x(.)})), ..., I_q(x(t_{qx(.)}))).   (2)
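As an aside, both criteria in (1)–(2) can be evaluated numerically for any candidate path. The following sketch (not part of the Letter; the straight-line path, the single stationary obstacle and the grid sizes are illustrative assumptions) approximates I_0 by the trapezoidal rule and ϕ_k by a minimum over a time grid.

```python
# Minimal numerical sketch of the two competing criteria: the path length I_0 and
# the clearance phi_k of a candidate path x(t) from a circular obstacle.
import numpy as np

def path_length(x_func, t_f, m=400):
    """I_0 = integral of ||x'(t)||_2 over [0, t_f], via finite differences + trapezoid rule."""
    t = np.linspace(0.0, t_f, m + 1)
    x = np.array([x_func(ti) for ti in t])                  # shape (m+1, dim)
    speeds = np.linalg.norm(np.gradient(x, t, axis=0), axis=1)
    return np.trapz(speeds, t)

def clearance(x_func, alpha_k_func, r, r_k, t_f, m=400):
    """min_t phi_k(x(t)); a negative value means the path collides with obstacle k."""
    t = np.linspace(0.0, t_f, m + 1)
    d = np.array([np.linalg.norm(np.asarray(x_func(ti)) - np.asarray(alpha_k_func(ti)))
                  for ti in t])
    return np.min(d - (r + r_k))

# Illustrative usage: a straight-line path and one stationary obstacle in the plane.
x0, xf, t_f = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 1.0
line = lambda t: x0 + (t / t_f) * (xf - x0)
obstacle = lambda t: np.array([0.5, 0.5])                    # stationary obstacle centre
print(path_length(line, t_f))                                # ~ sqrt(2) for the straight line
print(clearance(line, obstacle, r=0.01, r_k=0.125, t_f=t_f)) # negative: the line hits the obstacle
```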

These objective functions, however, conflict with each other; thus, it is impossible to attain their individual optima simultaneously. Since the optimization of one objective implies the sacrifice of another, the DM must make some compromise among these goal functions. In contrast to the optimality used in single-objective optimization problems, Pareto optimality characterizes the solutions of a multi-objective optimization problem; for more information see [7–9].

Definition 2. The path x*(.) ∈ X is said to be a Pareto optimal solution for the MODOP if and only if there exists no x(.) ∈ X such that I_k(x(t_{kx(.)})) ≤ I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}, and I_j(x(t_{jx(.)})) < I_j(x*(t_{jx*(.)})) for some j ∈ {0, 1, ..., q}.

Definition 3. The path x*(.) ∈ X is said to be a weak Pareto optimal solution for the MODOP if and only if there exists no x(.) ∈ X such that I_k(x(t_{kx(.)})) < I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}.

From the above definitions, the number of solutions satisfying Pareto optimality in the MODOP can be infinite. It is difficult for the DM to assess a set of incompatible objectives without knowledge of the possible level of attainment of those objectives. Thus, finding the Pareto optimal solution that best satisfies the DM is a fuzzy problem.

3. The identification of the fuzzy problem for the MODOP

We suppose that Ĩ_k, k = 0, 1, ..., q, are fuzzy objectives for the fuzzy variables I_k(x(t_{kx(.)})) on the intervals [I_k^l, I_k^u] (which are obtained in Section 3.2). Then, for the kth minimum objective, it is thoroughly satisfied when the objective value I_k(x(t_{kx(.)})) is less than I_k^l, it is unacceptable when I_k(x(t_{kx(.)})) is more than I_k^u, and for an I_k(x(t_{kx(.)})) between I_k^l and I_k^u the extent of satisfaction of the DM decreases as its value increases. Thus, a decreasing membership function, μ_{Ĩ_k}(I_k(x(t_{kx(.)}))), can be used to characterize such a transition from the objective value, I_k(x(t_{kx(.)})), to the degree of satisfaction.

3.1. The methodology of the membership function

In this work, we employ a logistic function for the non-linear membership function f(.) as follows:

f(x) = B / (1 + C e^{βx}),   (3)

where B and C are scalar constants and β, 0 < β < ∞, is a parameter which determines the shape of f(.). The reason we use this function is that the logistic membership function has a shape similar to that of the tangent hyperbolic function employed by Leberling [10], but it is more flexible than the tangent hyperbolic function [11]. It is also known that a trapezoidal membership function is an approximation to the logistic function. This function has been found to be very useful in making and implementing decisions by the DM and the implementer [12–14].

Theorem 1. If B and C are greater than zero then:
(1) f(.) is a monotonic decreasing function.
(2) f(.) has asymptotes at f(x) = 0 and f(x) = 1 at appropriate values of B and C.
(3) f(.) has a vertical tangent at x = x_m, where x_m is the point at which f(x_m) = 0.5, when β → ∞ and f(0) = 1.
(4) f(.) has an inflection point at x = x_m, such that f''(x_m) = ∞, when β → ∞.

Proof. See [15]. □

The above arguments on the vertical tangent, asymptotes and inflection point lead to the conclusion that the suggested function is flexible. Now, we define a modified logistic membership function as follows:

μ(x) = 1,                    x ≤ x^l,
       B / (1 + C e^{βx}),   x^l ≤ x ≤ x^u,
       0.001,                x = x^u,
       0,                    x > x^u.   (4)


Fig. 2. The variation of μ(.) with respect to β.

For simplicity, we rescale the x axis so that x^l = 0 and x^u = 1 in order to find the values of B and C as follows:

1 = μ(0) = B / (1 + C),   so B = 1 + C,

and

0.001 = μ(1) = B / (1 + C e^β),   so C = 0.999 / (0.001 e^β − 1).

Now, by choosing β, one can find C and B from the above equations, so μ(.) is identified. Fig. 2 shows the shape of μ(x) for values of β such as 7, 11, 15, 24, 48 and 200, for x ∈ [0, 1].
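A small numerical sketch of this construction (an assumption on our part; the authors' computations are done in Lingo): given β, the constants B and C follow from μ(0) = 1 and μ(1) = 0.001 exactly as derived above.

```python
# Modified logistic membership function (4) on the rescaled interval [x_l, x_u] = [0, 1].
import numpy as np

def logistic_parameters(beta):
    """B and C determined by mu(0) = 1 and mu(1) = 0.001."""
    C = 0.999 / (0.001 * np.exp(beta) - 1.0)
    B = 1.0 + C
    return B, C

def mu(x, beta, x_l=0.0, x_u=1.0):
    """Piecewise membership value of (4) after rescaling x to [0, 1]."""
    B, C = logistic_parameters(beta)
    z = (x - x_l) / (x_u - x_l)
    if z <= 0.0:
        return 1.0
    if z >= 1.0:
        return 0.001 if z == 1.0 else 0.0
    return B / (1.0 + C * np.exp(beta * z))

# The beta values used for Fig. 2; note beta = 7 reproduces B ~ 11.34, C ~ 10.34,
# the pair that also appears in Example 1.
for beta in (7, 11, 15, 24, 48, 200):
    print(beta, [round(mu(x, beta), 4) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
```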

3.2. The monotonic membership functions for fuzzy objectives

We assume that intervals [d_k^l, d_k^u], k = 1, 2, ..., q, for the values of the fuzzy variables ϕ_k(x(t_{kx(.)})) are determined, where d_k^l is the least possible distance between object A and the kth obstacle, which may be small or even zero, and d_k^u is the least distance that must exist between object A and the kth obstacle for there to be no risk.

Definition 4. A path is called a risk-taker path if it always selects the shortest path subject to the constraints ϕ_k(x(t)) ≥ d_k^l for all t ∈ [0, t_f]. In this situation the length of the risk-taker path, I_0^l, is obtained by solving the following NLDOP:

min I_0(x(t_f)) = ∫_0^{t_f} ‖ẋ(t)‖_2 dt   s.t.   ϕ_k(x(t)) ≥ d_k^l, k = 1, 2, ..., q, t ∈ [0, t_f],   x(.) ∈ X.   (5)

Definition 5. A path is called a risk-averter path if it always selects the shortest path subject to the constraints ϕ_k(x(t)) ≥ d_k^u for all t ∈ [0, t_f]. In this situation the length of the risk-averter path, I_0^u, is obtained by solving the following NLDOP:

min I_0(x(t_f)) = ∫_0^{t_f} ‖ẋ(t)‖_2 dt   s.t.   ϕ_k(x(t)) ≥ d_k^u, k = 1, 2, ..., q, t ∈ [0, t_f],   x(.) ∈ X.   (6)

Therefore, the interval [I_0^l, I_0^u] is obtained for the value of the fuzzy variable I_0(x(t_{0x(.)})). According to (4), we determine the following decreasing membership functions for the fuzzy objectives Ĩ_k, k = 0, 1, ..., q:

μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) = 1,                                                                     I_k(x(t_{kx(.)})) ≤ I_k^l,
                             B_k / (1 + C_k exp[β_k (I_k(x(t_{kx(.)})) − I_k^l)/(I_k^u − I_k^l)]),   I_k^l ≤ I_k(x(t_{kx(.)})) ≤ I_k^u,
                             0.001,                                                                 I_k(x(t_{kx(.)})) = I_k^u,
                             0,                                                                     I_k(x(t_{kx(.)})) > I_k^u,   (7)

where I_k^l = −d_k^u and I_k^u = −d_k^l, k = 1, ..., q. We emphasize that all μ_{Ĩ_k}(I_k(x(t_{kx(.)}))), k = 0, 1, ..., q, are flexible, so the DM can determine his desired membership functions by selecting suitable values of β_k. The MODOP (2) is now equivalent to looking for a suitable path planning policy that provides the maximal degree of satisfaction for the following FMODOP:

max_{x(.)∈X} μ_Ĩ(x(.)) = (μ_{Ĩ_0}(I_0(x(t_{0x(.)}))), μ_{Ĩ_1}(I_1(x(t_{1x(.)}))), ..., μ_{Ĩ_q}(I_q(x(t_{qx(.)})))).   (8)

Now, the DM must make a compromise decision that provides a maximum degree of satisfaction for all of these conflicting objectives. The new problem (8) can be interpreted as the synthetic notation of a conjunction statement. The result of this aggregation, Ĩ, can be viewed as a fuzzy intersection of all fuzzy objectives Ĩ_k, k = 0, 1, ..., q, and it is still a fuzzy set (Ĩ = Ĩ_0 ∩ Ĩ_1 ∩ ··· ∩ Ĩ_q). Then, μ_Ĩ(x(.)) can be determined by aggregating the degrees of satisfaction of all μ_{Ĩ_k}(I_k(x(t_{kx(.)}))), k = 0, 1, ..., q, via a specific t-norm, T (for details see [16]); thus, μ_Ĩ(x(.)) = T{μ_{Ĩ_0}(I_0(x(t_{0x(.)}))), μ_{Ĩ_1}(I_1(x(t_{1x(.)}))), ..., μ_{Ĩ_q}(I_q(x(t_{qx(.)})))}. So, problem (8) can be rewritten as follows:

max_{x(.)∈X} T{μ_{Ĩ_0}(I_0(x(t_{0x(.)}))), μ_{Ĩ_1}(I_1(x(t_{1x(.)}))), ..., μ_{Ĩ_q}(I_q(x(t_{qx(.)})))}.   (9)

One of the suitable t-norms for our problem is the Zadeh minimum, which converts problem (9) into the following problem:

max_{x(.)∈X} min{μ_{Ĩ_0}(I_0(x(t_{0x(.)}))), μ_{Ĩ_1}(I_1(x(t_{1x(.)}))), ..., μ_{Ĩ_q}(I_q(x(t_{qx(.)})))}.   (10)

By introducing the auxiliary variable α, problem (10) can be transformed into the following equivalent problem:

max α   s.t.   μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) ≥ α, k = 0, 1, ..., q,   x(.) ∈ X.   (11)

Since I_0^l ≤ I_0(x(t_{0x(.)})) ≤ I_0^u, we have

μ_{Ĩ_0}(I_0(x(t_{0x(.)}))) = B_0 / (1 + C_0 exp[β_0 (I_0(x(t_{0x(.)})) − I_0^l)/(I_0^u − I_0^l)]),

which is a strictly decreasing membership function. But ϕ_k(x(t_{kx(.)})) is not only greater than or equal to d_k^l, it can also be greater than or equal to d_k^u, k = 1, 2, ..., q, so that I_k(x(t_{kx(.)})) ≤ I_k^u, and hence

μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) = 1,                                                                     I_k(x(t_{kx(.)})) ≤ I_k^l,
                             B_k / (1 + C_k exp[β_k (I_k(x(t_{kx(.)})) − I_k^l)/(I_k^u − I_k^l)]),   I_k^l ≤ I_k(x(t_{kx(.)})) ≤ I_k^u,

which are decreasing membership functions. Now, if μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) = 1 then the inequality in (11) is trivially satisfied; also, in this situation B_k / (1 + C_k exp[β_k (I_k(x(t_{kx(.)})) − I_k^l)/(I_k^u − I_k^l)]) is greater than or equal to μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) = 1, so it can be substituted for μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) = 1. Therefore, one can substitute B_k / (1 + C_k exp[β_k (I_k(x(t_{kx(.)})) − I_k^l)/(I_k^u − I_k^l)]), k = 0, 1, ..., q, for μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) in (11). Then, with a simple calculation, one obtains the following NLDOP:

max α   s.t.
∫_0^{t_f} ‖ẋ(t)‖_2 dt ≤ [ln((B_0 − α)/(α C_0)) / β_0] (I_0^u − I_0^l) + I_0^l,
‖x(t) − α_k(t)‖_2 ≥ (r + r_k) + [ln((B_k − α)/(α C_k)) / β_k] (d_k^l − d_k^u) + d_k^u,   k = 1, 2, ..., q, t ∈ [0, t_f],
x(.) ∈ X.   (12)
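The right-hand sides in (12) come from inverting the logistic part of (7) at level α. A minimal sketch of these α-dependent bounds (not from the Letter; the parameter values below are placeholders loosely based on Example 1):

```python
# alpha-dependent bounds of problem (12), obtained by inverting mu >= alpha.
import math

def length_bound(alpha, B0, C0, beta0, I0_l, I0_u):
    """Upper bound on the path length implied by mu_I0 >= alpha (first constraint of (12))."""
    return (math.log((B0 - alpha) / (alpha * C0)) / beta0) * (I0_u - I0_l) + I0_l

def clearance_bound(alpha, Bk, Ck, betak, dk_l, dk_u, r, rk):
    """Lower bound on ||x(t) - alpha_k(t)||_2 implied by mu_Ik >= alpha (second constraint of (12))."""
    return (r + rk) + (math.log((Bk - alpha) / (alpha * Ck)) / betak) * (dk_l - dk_u) + dk_u

# As alpha grows, the admissible length shrinks and the required clearance grows --
# exactly the trade-off that maximizing alpha balances.
for alpha in (0.2, 0.5, 0.8):
    print(alpha,
          round(length_bound(alpha, B0=1.0475, C0=0.0475, beta0=10.0, I0_l=1.4635, I0_u=1.5320), 4),
          round(clearance_bound(alpha, Bk=1.0475, Ck=0.0475, betak=10.0,
                                dk_l=0.0, dk_u=0.1, r=0.01, rk=0.125), 4))
```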

4. Related mathematical theorems

Since the MODOP (2) has been modified to another multi-objective problem (8), and this new problem has been converted, via the t-norm, into the single-objective problem (9), it is necessary to ensure that the solution of (9) is a Pareto optimal solution of (2).

Theorem 2. If x*(.) is an optimal solution of problem (9), then x*(.) is a weak Pareto optimal solution of problem (2) if either
(1) all of the μ_{Ĩ_k}(.), k = 0, 1, ..., q, are defined strictly decreasing, or
(2) T is strictly monotonous and the μ_{Ĩ_k}(.), k = 0, 1, ..., q, are defined as in (7).

Proof. (1) If x*(.) is not a weak Pareto optimal solution of problem (2), then there exists x(.) ∈ X such that I_k(x(t_{kx(.)})) < I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}. Since all μ_{Ĩ_k}(.) are strictly decreasing functions with respect to I_k(.), we have μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) > μ_{Ĩ_k}(I_k(x*(t_{kx*(.)}))) for all k ∈ {0, 1, ..., q}; then T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, and max T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > max T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, which contradicts the optimality of x*(.).
(2) If x*(.) is not a weak Pareto optimal solution of problem (2), then there exists x(.) ∈ X such that I_k(x(t_{kx(.)})) < I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}. Since μ_{Ĩ_0}(.) is a strictly decreasing function and μ_{Ĩ_k}(.), k = 1, 2, ..., q, are decreasing functions, we have μ_{Ĩ_0}(I_0(x(t_{0x(.)}))) > μ_{Ĩ_0}(I_0(x*(t_{0x*(.)}))) and μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) ≥ μ_{Ĩ_k}(I_k(x*(t_{kx*(.)}))), k = 1, 2, ..., q. Since T is strictly monotonous, T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, and max T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > max T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, which contradicts the optimality of x*(.); so x*(.) is a weak Pareto optimal solution of the MODOP (2). □


Theorem 3. If x*(.) is an optimal solution of problem (9), then x*(.) is a Pareto optimal solution of problem (2) if either
(1) x*(.) is the unique optimal solution of (9) and the μ_{Ĩ_k}(.), k = 0, 1, ..., q, are defined as in (7), or
(2) T is strictly monotonous and the μ_{Ĩ_k}(.), k = 0, 1, ..., q, are defined strictly decreasing.

Proof. (1) If x*(.) is a unique optimal solution of problem (9) and is not a Pareto optimal solution of problem (2), then there exists x(t) ∈ X such that I_k(x(t_{kx(.)})) ≤ I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}, and I_j(x(t_{jx(.)})) < I_j(x*(t_{jx*(.)})) for some j ∈ {0, 1, ..., q}. Observing that the μ_{Ĩ_k}(.), k = 0, 1, ..., q, are strictly decreasing or decreasing functions, this implies μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) ≥ μ_{Ĩ_k}(I_k(x*(t_{kx*(.)}))). Thus, T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} ≥ T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, and max T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} ≥ max T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, which contradicts the assumption that x*(.) is the unique optimal solution of problem (9); therefore, x*(.) is a Pareto optimal solution of problem (2).
(2) If x*(.) is an optimal solution of problem (9) and is not a Pareto optimal solution of problem (2), then there exists x(t) ∈ X such that I_k(x(t_{kx(.)})) ≤ I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}, and I_j(x(t_{jx(.)})) < I_j(x*(t_{jx*(.)})) for some j ∈ {0, 1, ..., q}. Because all μ_{Ĩ_k}(.) are defined strictly decreasing, μ_{Ĩ_k}(I_k(x(t_{kx(.)}))) ≥ μ_{Ĩ_k}(I_k(x*(t_{kx*(.)}))) for all k ∈ {0, 1, ..., q}, and μ_{Ĩ_j}(I_j(x(t_{jx(.)}))) > μ_{Ĩ_j}(I_j(x*(t_{jx*(.)}))) for some j ∈ {0, 1, ..., q}. Since T is strictly monotonous, T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}, and max T{μ_{Ĩ_0}(I_0(x(.))), μ_{Ĩ_1}(I_1(x(.))), ..., μ_{Ĩ_q}(I_q(x(.)))} > max T{μ_{Ĩ_0}(I_0(x*(.))), μ_{Ĩ_1}(I_1(x*(.))), ..., μ_{Ĩ_q}(I_q(x*(.)))}. But this contradicts the assumption that x*(.) is an optimal solution of problem (9). Thus, x*(.) is a Pareto optimal solution of problem (2). □

For the static case of Theorems 2 and 3, see [8]. According to the definitions of the membership functions (7) and using the Zadeh minimum as the t-norm, which is not strictly monotonous, the key point that guarantees Pareto optimality is to establish the uniqueness of the optimal solution. If the uniqueness of the optimal solution x*(.) is not guaranteed, then to recognize Pareto optimality one must perform the Pareto optimality test for x*(.) by solving the following NLDOP:





max w(x(.)) = Σ_{k=0}^{q} c_k   s.t.
I_0(x(t_{0x(.)})) ≤ I_0(x*(t_{0x*(.)})) − c_0,
ϕ_k(x(t)) ≥ ϕ_k(x*(t_{kx*(.)})) + c_k,   k = 1, ..., q, t ∈ [0, t_f],
x(.) ∈ X,
c_k ≥ 0, k = 0, 1, ..., q,   (13)

where c_k, k = 0, 1, ..., q, are unknown constant real numbers. For the optimal solution of this problem, x̄(.), the following theorem holds:

Theorem 4. For the optimal solution x̄(.) of the Pareto optimality test problem:
(1) If w(x̄(.)) = 0, then x*(.) is a Pareto optimal solution of the MODOP.
(2) If w(x̄(.)) > 0, then not x*(.) but x̄(.) is a Pareto optimal solution of the MODOP.

Proof. (1) If x*(.) is not a Pareto optimal solution of the MODOP, then there exists an x(t) ∈ X such that I_k(x(t_{kx(.)})) ≤ I_k(x*(t_{kx*(.)})) for all k ∈ {0, 1, ..., q}, and I_j(x(t_{jx(.)})) < I_j(x*(t_{jx*(.)})) for some j ∈ {0, 1, ..., q}. We suppose j ∈ {1, 2, ..., q} (the proof is the same if j = 0); then min_{t∈[0,t_f]} ϕ_j(x(t)) > ϕ_j(x*(t_{jx*(.)})), i.e. ϕ_j(x(t)) > ϕ_j(x*(t_{jx*(.)})) for all t ∈ [0, t_f]. Thus, there exist constants c_k ≥ 0, k ∈ {0, 1, ..., q}, with c_j > 0 for some j ∈ {1, 2, ..., q}, such that I_0(x(t_{0x(.)})) ≤ I_0(x*(t_{0x*(.)})) − c_0, ϕ_k(x(t)) ≥ ϕ_k(x*(t_{kx*(.)})) + c_k and ϕ_j(x(t)) ≥ ϕ_j(x*(t_{jx*(.)})) + c_j for all t ∈ [0, t_f]. Then x(.) is a feasible solution of problem (13) with w(x(.)) > 0, so w(x(.)) > w(x̄(.)), which contradicts the assumption that x̄(.) is an optimal solution. Thus, x*(.) is a Pareto optimal solution.
(2) If x̄(.) is not a Pareto optimal solution of the MODOP, then there exists x(.) ∈ X such that I_k(x(t_{kx(.)})) ≤ I_k(x̄(t_{kx̄(.)})) for all k ∈ {0, 1, ..., q}, and I_j(x(t_{jx(.)})) < I_j(x̄(t_{jx̄(.)})) for some j ∈ {0, 1, ..., q}. We suppose j ∈ {1, 2, ..., q} (the proof is the same if j = 0); then min_{t∈[0,t_f]} ϕ_j(x(t)) > ϕ_j(x̄(t_{jx̄(.)})), i.e. ϕ_j(x(t)) > ϕ_j(x̄(t_{jx̄(.)})) for all t ∈ [0, t_f]. Thus, there exist b_k ≥ 0, k ∈ {0, 1, ..., q}, with b_j > 0 for some j ∈ {1, 2, ..., q}, such that I_0(x(t_{0x(.)})) ≤ I_0(x̄(t_{0x̄(.)})) − b_0, ϕ_k(x(t)) ≥ ϕ_k(x̄(t_{kx̄(.)})) + b_k and ϕ_j(x(t)) ≥ ϕ_j(x̄(t_{jx̄(.)})) + b_j for all t ∈ [0, t_f]. Now, suppose c_k, k = 0, 1, ..., q, are the optimal values related to x̄(.); then I_0(x̄(t_{0x̄(.)})) ≤ I_0(x*(t_{0x*(.)})) − c_0 and ϕ_k(x̄(t)) ≥ ϕ_k(x*(t_{kx*(.)})) + c_k for all t ∈ [0, t_f]. Therefore, I_0(x(t_{0x(.)})) ≤ I_0(x*(t_{0x*(.)})) − c_0 − b_0 and ϕ_k(x(t)) ≥ ϕ_k(x*(t_{kx*(.)})) + c_k + b_k for all t ∈ [0, t_f]. Then x(.) is a feasible solution of problem (13) with w(x(.)) > w(x̄(.)), which contradicts the assumption that x̄(.) is an optimal solution. So x̄(.) is a Pareto optimal solution of the MODOP. □

5. The solution of the NLDOPs

There exist many methods for solving the NLDOPs (5), (6), (12) and (13) [1,17,18]. For example, Borzabadi et al. [1] defined the artificial control function u(t) by ẋ(t) = u(t) and obtained an approximate solution by using the measure theory established by Rubio [19]. Here, however, we use a new approach for solving these NLDOPs. This approach has some advantages: the number of unknowns is lower than in other methods such as [17,18]; there is no error in the final condition x(t_f) = x_f, an error which exists in the method used in [1,19]; and, in contrast with methods such as the successive approximation approach [20] and state parametrization using Chebyshev polynomials [21], which are restricted to quadratic objective functions, our method is stated for a general objective function. The main disadvantage of our method is that the degree of the polynomial obtained as the optimal path is not known in advance. Since our approach for solving problems (5), (6), (12) and (13) is the same, it is stated only for problem (5).


Let p_n(t) = (p_{1n}(t), p_{2n}(t), p_{3n}(t)) for all t ∈ [0, t_f], where p_{in}(.), i = 1, 2, 3, are polynomials of degree at most n with unknown constant coefficients. Then, by substituting p_n(.) for x(.) in problem (5), the following sequence of NLDOPs is obtained:

min I_0(p_n(t_f)) = ∫_0^{t_f} ‖ṗ_n(t)‖_2 dt   s.t.   ϕ_k(p_n(t)) ≥ d_k^l, k = 1, 2, ..., q, t ∈ [0, t_f],   p_n(.) ∈ X,   n = 1, 2, ....   (14)
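A minimal sketch of the parametrization step (assuming NumPy; not the authors' code): each component of p_n(t) is a polynomial of degree at most n whose coefficients are the decision variables. Here the coefficients of t^0 and t^1 are eliminated so that p_n(0) = x_0 and p_n(t_f) = x_f hold exactly; this is one possible way of treating the endpoint conditions, which in the Letter are kept as explicit constraints.

```python
# Polynomial path parametrization with the endpoint conditions built in.
import numpy as np

def make_path(free_coeffs, x0, xf, t_f):
    """free_coeffs: array of shape (dim, n-1) with the coefficients of t^2, ..., t^n.
    The coefficients of t^0 and t^1 are chosen so that the endpoints are met exactly."""
    free_coeffs = np.atleast_2d(np.asarray(free_coeffs, float))
    x0, xf = np.asarray(x0, float), np.asarray(xf, float)
    degrees = range(2, free_coeffs.shape[1] + 2)

    def p(t):
        powers = np.array([t ** j for j in degrees])
        higher = free_coeffs @ powers                                  # contribution of t^2..t^n
        higher_tf = free_coeffs @ np.array([t_f ** j for j in degrees])
        c1 = (xf - x0 - higher_tf) / t_f                               # linear coefficient from p(t_f) = x_f
        return x0 + c1 * t + higher                                    # constant coefficient = x_0

    return p

# Example: a planar degree-3 path from (0,0) to (1,1) with two free coefficients per component.
p = make_path(free_coeffs=[[0.5, -0.2], [-0.3, 0.1]], x0=[0, 0], xf=[1, 1], t_f=1.0)
print(p(0.0), p(1.0))   # endpoints recovered exactly
```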

Now, suppose Q is the set of x(.) ∈ X such that problem (5) is feasible, and Q(n) is the set of p_n(.) = (p_{1n}(.), p_{2n}(.), p_{3n}(.)) such that problem (14) is feasible. We also suppose that Q and Q(n) are not empty. The following theorem then proves that the sequence of solutions of problem (14) converges to the solution of problem (5).

Theorem 5. If η = inf_Q I_0(x(t_f)) and η(n) = inf_{Q(n)} I_0(p_n(t_f)), then lim_{n→∞} η(n) = η.

Proof. It is obvious that Q(1) ⊂ Q(2) ⊂ ··· ⊂ Q, so η(1) ≥ η(2) ≥ ··· ≥ η. Hence {η(n)} is a non-increasing and bounded sequence, and it converges to a number, say ξ. Set W = ∪_{n=1}^{∞} Q(n); therefore inf_{p(.)∈W} I_0(p(t_f)) = lim_{n→∞} η(n) = ξ. Since W ⊂ Q, we have ξ ≥ η. By the properties of the infimum, for every ε > 0 there exists x(.) ∈ Q such that

η < I_0(x(t_f)) < η + ε.   (15)

From the continuity of I_0(x(t_f)), there is a δ > 0 such that

|I_0(y(t_f)) − I_0(x(t_f))| < ε   (16)

whenever y(.) and x(.) ∈ X satisfy ‖ẏ(.) − ẋ(.)‖_∞ < δ (note that I_0(x(.)) is a functional of ẋ(.)). On the other hand, since x(.) ∈ C^1(0, t_f), there exists a sequence of polynomials {p_n(.)} such that {p_n(.)} and {ṗ_n(.)} converge uniformly to x(.) and ẋ(.), respectively. Now, set δ_1 = min{‖x(.) − a(.)‖_∞, ‖x(.) − b(.)‖_∞, δ} and δ_2 = min{‖ẋ(.) − c(.)‖_∞, ‖ẋ(.) − d(.)‖_∞, δ}; therefore, there is an N belonging to ℕ, the set of positive integers, such that for any n ≥ N, ‖p_n(.) − x(.)‖_∞ < δ_1/3 and ‖ṗ_n(.) − ẋ(.)‖_∞ < δ_2 t_f/(t_f + 2). Now, we define another polynomial as follows:

q_n(t) = p_n(t) + (x_0 − p_n(0)) (t_f − t)/t_f + (x_f − p_n(t_f)) t/t_f,   so that q_n(0) = x_0, q_n(t_f) = x_f.

Also, q̇_n(t) = ṗ_n(t) + (x_0 − p_n(0))(−1/t_f) + (x_f − p_n(t_f))(1/t_f). Thus, for every t ∈ [0, t_f] and every n ≥ N,

|q_n(t) − x(t)| < δ_1 and |q̇_n(t) − ẋ(t)| < δ_2.   So, for all n ≥ N, q_n(.) ∈ X.   (17)

Also,

‖q̇_n(.) − ẋ(.)‖_∞ < δ_2 ≤ δ and ‖q_n(.) − x(.)‖_∞ < δ_1 ≤ δ.   (18)

Now, we claim that there is an N_1 ≥ N such that for every t ∈ [0, t_f],

ϕ_k(q_{N_1}(t)) ≥ d_k^l, k = 1, 2, ..., q.   (19)

Otherwise, for every n ≥ N there would be a k ∈ {1, 2, ..., q} and a t ∈ [0, t_f] such that ϕ_k(q_n(t)) < d_k^l; thus lim_{n→∞} ϕ_k(q_n(.)) < d_k^l, or ϕ_k(x(.)) < d_k^l, which contradicts the assumption that x(t) ∈ Q. Now, from (17) and (19) we have

q_{N_1}(t) ∈ Q(N_1) ⊂ W ⊂ X.   (20)

By using (15), (16) and (18), |I_0(q_{N_1}(t_f)) − I_0(x(t_f))| < ε, so I_0(q_{N_1}(t_f)) < I_0(x(t_f)) + ε < η + 2ε. Since η ≤ ξ = inf_{p(.)∈W} I_0(p(t_f)), then according to (20), η ≤ inf_{p(.)∈W} I_0(p(t_f)) ≤ I_0(q_{N_1}(t_f)) < η + 2ε, i.e. η ≤ ξ < η + 2ε; since ε is arbitrary, ξ = η, i.e. lim_{n→∞} η(n) = η. □
Now, one can show that ϕ_k(x(t)) ≥ d_k^l for each t ∈ [0, t_f] if and only if ∫_0^{t_f} |ϕ_k(x(t)) − d_k^l − |ϕ_k(x(t)) − d_k^l|| dt = 0. Therefore, problem (14) is equivalent to the following problem:

min ∫_0^{t_f} ‖ṗ_n(t)‖_2 dt   s.t.
∫_0^{t_f} |ϕ_k(p_n(t)) − d_k^l − |ϕ_k(p_n(t)) − d_k^l|| dt = 0,   k = 1, 2, ..., q,
∫_0^{t_f} |p_n(t) − a(t) − |p_n(t) − a(t)|| dt = 0,
∫_0^{t_f} |p_n(t) − b(t) + |p_n(t) − b(t)|| dt = 0,
∫_0^{t_f} |ṗ_n(t) − c(t) − |ṗ_n(t) − c(t)|| dt = 0,
∫_0^{t_f} |ṗ_n(t) − d(t) + |ṗ_n(t) − d(t)|| dt = 0,
p_n(0) = x_0, p_n(t_f) = x_f,   t ∈ [0, t_f], n = 1, 2, ....

For simplicity, we rewrite the above problem as follows:

min ∫_0^{t_f} E(t) dt   s.t.
∫_0^{t_f} F_{kn}(t) dt = 0,   k = 1, 2, ..., q,
∫_0^{t_f} G_{an}(t) dt = 0,
∫_0^{t_f} G_{bn}(t) dt = 0,
∫_0^{t_f} G_{cn}(t) dt = 0,
∫_0^{t_f} G_{dn}(t) dt = 0,
p_n(0) = x_0, p_n(t_f) = x_f,   t ∈ [0, t_f], n = 1, 2, ...,   (21)

where E(t) = ‖ṗ_n(t)‖_2, F_{kn}(t) = |ϕ_k(p_n(t)) − d_k^l − |ϕ_k(p_n(t)) − d_k^l||, G_{an}(t) = |p_n(t) − a(t) − |p_n(t) − a(t)||, G_{bn}(t) = |p_n(t) − b(t) + |p_n(t) − b(t)||, G_{cn}(t) = |ṗ_n(t) − c(t) − |ṗ_n(t) − c(t)||, and G_{dn}(t) = |ṗ_n(t) − d(t) + |ṗ_n(t) − d(t)||. Now, we partition the interval [0, t_f] into m equal parts of length h = t_f/m. Thus, by using a numerical integration method such as the trapezoidal rule, problem (21) is converted into the following problem:

min (h/2)[E(0) + 2E(h) + ··· + 2E((m − 1)h) + E(t_f)]   (22)

s.t.
(a) (h/2)[F_{kn}(0) + 2F_{kn}(h) + ··· + 2F_{kn}((m − 1)h) + F_{kn}(t_f)] = 0,   k = 1, 2, ..., q,
(b) (h/2)[G_{an}(0) + 2G_{an}(h) + ··· + 2G_{an}((m − 1)h) + G_{an}(t_f)] = 0,
(c) (h/2)[G_{bn}(0) + 2G_{bn}(h) + ··· + 2G_{bn}((m − 1)h) + G_{bn}(t_f)] = 0,
(d) (h/2)[G_{cn}(0) + 2G_{cn}(h) + ··· + 2G_{cn}((m − 1)h) + G_{cn}(t_f)] = 0,
(e) (h/2)[G_{dn}(0) + 2G_{dn}(h) + ··· + 2G_{dn}((m − 1)h) + G_{dn}(t_f)] = 0,
(f) p_n(0) = x_0, p_n(t_f) = x_f,
n = 1, 2, ....   (23)

Theorem 6. The solutions of problems (22)–(23) and (21) are the same if, in (22)–(23), m tends to infinity.

Proof. See [22]. □
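To illustrate how (22)–(23) can be set up and solved, here is a rough sketch for the planar case using SciPy's SLSQP solver instead of the Lingo software used by the authors. For numerical convenience the obstacle constraint (a) is imposed pointwise at the grid nodes, which is equivalent to the trapezoidal equality because F_kn ≥ 0; the box constraints (b)–(e) are omitted for brevity, and all problem data are illustrative.

```python
# Sketch of the discretized NLPP (22)-(23) for a planar path and one stationary obstacle.
import numpy as np
from scipy.optimize import minimize

t_f, m, n = 1.0, 40, 5                       # final time, grid intervals, polynomial degree
t = np.linspace(0.0, t_f, m + 1)
x0, xf = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obstacles = [(np.array([0.5, 0.5]), 0.125)]  # (centre, radius r_k) pairs
r, d_l = 0.01, 0.0                           # object radius and d_k^l

def unpack(z):                               # z holds the coefficients of both components
    return z.reshape(2, n + 1)

def path(z):                                 # p_n(t) on the grid, shape (m+1, 2)
    c = unpack(z)
    return np.vander(t, n + 1, increasing=True) @ c.T

def speed(z):                                # E(t) = ||p_n'(t)||_2 on the grid
    c = unpack(z)
    dc = c[:, 1:] * np.arange(1, n + 1)      # derivative coefficients
    return np.linalg.norm(np.vander(t, n, increasing=True) @ dc.T, axis=1)

def length(z):                               # objective (22): trapezoidal rule on a uniform grid
    return np.trapz(speed(z), t)

cons = [{'type': 'eq', 'fun': lambda z: path(z)[0] - x0},    # (f) p_n(0) = x_0
        {'type': 'eq', 'fun': lambda z: path(z)[-1] - xf}]   # (f) p_n(t_f) = x_f
for centre, rk in obstacles:                                 # (a) phi_k >= d_l at every node
    cons.append({'type': 'ineq',
                 'fun': lambda z, c=centre, rk=rk:
                     np.linalg.norm(path(z) - c, axis=1) - (r + rk) - d_l})

z_init = np.zeros(2 * (n + 1))
z_init[1], z_init[n + 2] = 1.0, 1.0          # start from the straight line x_1 = x_2 = t
res = minimize(length, z_init, constraints=cons, method='SLSQP',
               options={'maxiter': 300, 'ftol': 1e-8})
print(res.fun)                               # approximate risk-taker length I_0^l
```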

Problem (22)–(23) is a non-linear programming problem (NLPP) with 3n variables (the unknown constant coefficients of p_{1n}(.), p_{2n}(.) and p_{3n}(.)), which can be solved with software packages such as Lingo and Matlab. We emphasize that the sequence of solutions of problem (22)–(23) converges to the solution of problem (5). In the same way, we obtain the following NLPPs for problems (6), (12) and (13), respectively:

min (h/2)[E(0) + 2E(h) + ··· + 2E((m − 1)h) + E(t_f)]
s.t. (h/2)[H_{kn}(0) + 2H_{kn}(h) + ··· + 2H_{kn}((m − 1)h) + H_{kn}(t_f)] = 0, k = 1, 2, ..., q, and (23)(b)–(f),   n = 1, 2, ...,   (24)

where H_{kn}(t) = |ϕ_k(p_n(t)) − d_k^u − |ϕ_k(p_n(t)) − d_k^u||.

max α   s.t.
(h/2)[E(0) + 2E(h) + ··· + 2E((m − 1)h) + E(t_f)] ≤ [ln((B_0 − α)/(α C_0))/β_0](I_0^u − I_0^l) + I_0^l,
(h/2)[J_{kn}(0) + 2J_{kn}(h) + ··· + 2J_{kn}((m − 1)h) + J_{kn}(t_f)] = 0,   k = 1, 2, ..., q,
and (23)(b)–(f),   n = 1, 2, ...,   (25)

where

J_{kn}(t) = |ϕ_k(p_n(t)) − [ln((B_k − α)/(α C_k))/β_k](d_k^l − d_k^u) − d_k^u − |ϕ_k(p_n(t)) − [ln((B_k − α)/(α C_k))/β_k](d_k^l − d_k^u) − d_k^u||.

max Σ_{k=0}^{q} c_k   s.t.
(h/2)[E(0) + 2E(h) + ··· + 2E((m − 1)h) + E(t_f)] ≤ I_0(x*(t_{0x*})) − c_0,
(h/2)[K_{kn}(0) + 2K_{kn}(h) + ··· + 2K_{kn}((m − 1)h) + K_{kn}(t_f)] = 0,   k = 1, 2, ..., q,
and (23)(b)–(f),   n = 1, 2, ...,   (26)

where K_{kn}(t) = |ϕ_k(p_n(t)) − ϕ_k(x*(t_{kx*})) − c_k − |ϕ_k(p_n(t)) − ϕ_k(x*(t_{kx*})) − c_k|| (so that the trapezoidal equality enforces the constraint ϕ_k(x(t)) ≥ ϕ_k(x*(t_{kx*(.)})) + c_k of (13)). Now, we propose an algorithm based on the parametrization method and fuzzy aggregation as follows.

Subalgorithm (for solving the NLPPs). In this subalgorithm ε is a chosen positive real number, the tolerated error between two consecutive values of the goal function.
Step 1. Read ε and m, and set n = 1.
Step 2. Solve the NLPP.
Step 3. If the NLPP is infeasible, then set n = n + 1 and go to Step 2; otherwise store the value of the goal function in I_0^n.
Step 4. Set n = n + 1, solve the NLPP and store the value of the goal function in I_0^n.
Step 5. If |I_0^n − I_0^{n−1}| > ε, then go to Step 4.
Step 6. Store I_0^n as the optimal value of the goal function.
Step 7. End.

Main algorithm.
Step 1. Choose intervals [d_k^l, d_k^u], k = 1, 2, ..., q, for the values of the fuzzy variables ϕ_k(.).
Step 2. Solve NLPPs (22)–(23) and (24) using the subalgorithm, and store the optimal values of the goal functions in I_0^l and I_0^u, respectively.
Step 3. Determine a membership function for each interval by using the modified logistic function.
Step 4. Choose a suitable t-norm to perform the aggregation.
Step 5. Solve NLPP (25) using the subalgorithm, and store the optimal value of the goal function in α* and its corresponding path in p_n*(.).
Step 6. If the t-norm is strictly monotonous, then p_n*(.) is a Pareto optimal solution for the MODOP; go to Step 10.
Step 7. Solve NLPP (26) (the Pareto optimality test problem) using the subalgorithm.
Step 8. If the optimal value of the goal function is not equal to zero, then the optimal path obtained, p̄_n(.), is a Pareto optimal solution for the MODOP; go to Step 10.
Step 9. p_n*(.) obtained in Step 5 is a Pareto optimal solution for the MODOP.
Step 10. End.
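The subalgorithm is a plain loop over the polynomial degree; a sketch is given below, where solve_nlpp(n, m) is a hypothetical placeholder standing for "solve the current NLPP with degree n on an m-interval grid" (for example the SLSQP sketch above), returning the optimal goal value or None if the NLPP is infeasible.

```python
def subalgorithm(solve_nlpp, m, eps):
    # Steps 1-3: find the smallest degree n for which the NLPP is feasible.
    n = 1
    prev = solve_nlpp(n, m)
    while prev is None:
        n += 1
        prev = solve_nlpp(n, m)
    # Steps 4-5: raise the degree until two consecutive goal values agree to within eps.
    # (Feasibility persists once reached, since the feasible sets Q(n) are nested.)
    while True:
        n += 1
        cur = solve_nlpp(n, m)
        if abs(cur - prev) <= eps:
            return cur, n        # Step 6: the stored optimal goal value (and the degree used)
        prev = cur
```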

6. Numerical examples

We set u_i(t) = ẋ_i(t), i = 1, 2, 3, where the u_i(t) are interpreted as control functions which give the speed of object A in the direction of x_i(t) at the moment t ∈ [0, t_f]. In this Letter, all the computations were run on a laptop with a 2.20 GHz CPU and 0.99 GB of RAM, and all the codes were written in the Lingo software.

Example 1. Consider a free-moving rigid object A, a disk with radius r and center (x_1(t), x_2(t)), in a plane in the presence of stationary obstacles, disks with radii r_k, k = 1, 2, ..., 5, and centers (α_{1k}, α_{2k}), as follows: r = 0.01; r_k = 1/8, d_k^l = 0, k = 1, 2, ..., 5; (α_{11}, α_{21}) = (0.5, 0.5), (α_{12}, α_{22}) = (0.7, 0.85), (α_{13}, α_{23}) = (0.2, 0.2), (α_{14}, α_{24}) = (0.3, 0.8), (α_{15}, α_{25}) = (0.85, 0.2); d_1^u = 0.11, d_2^u = 0.1, d_3^u = 0.04, d_4^u = 0.1, d_5^u = 0.07; x_0 = (0, 0), x_f = (1, 1), (0, 0) ≤ (x_1(t), x_2(t)) ≤ (1, 1), (0, 0) ≤ (ẋ_1(t), ẋ_2(t)) ≤ (3.3, 8.2). We select m = 40 and ε = 10^{-3}; then by solving NLPP (22)–(23) we obtain I_0^l = 1.4635, compared with 1.4641 in [1] and 1.4701 in [3]. Then, by solving NLPP (24) we obtain I_0^u = 1.5320. Now, the flexible membership functions for each interval are determined with the following data: B_0 = 1, C_0 = 0.00000000000062988, β_0 = 35; B_1 = 1.0475, C_1 = 0.0475, β_1 = 10; B_2 = 1.0008, C_2 = 0.0008, β_2 = 14; B_3 = 1, C_3 = 0.00004136, β_3 = 17; B_4 = 1.0062, C_4 = 0.0062, β_4 = 12; B_5 = 11.3381, C_5 = 10.3381, β_5 = 7. Then, setting m = 40 and ε = 10^{-5} and solving NLPP (25), the optimal path

x_1*(t) = 4.016648t − 13.23591t^2 + 20.84788t^3 − 13.09196t^4 + 2.463306t^5,
x_2*(t) = 0.4006305t + 5.692636t^2 − 24.26205t^3 + 34.8359t^4 − 15.66711t^5,

is achieved, where I_0(x*(t_{0x*(.)})) = 1.4979, ϕ_1(x*(t_{1x*(.)})) = 0.07495507, ϕ_2(x*(t_{2x*(.)})) = 0.08549927, ϕ_3(x*(t_{3x*(.)})) = 0.01463190, ϕ_4(x*(t_{4x*(.)})) = 0.4152975, ϕ_5(x*(t_{5x*(.)})) = 0.1159192, α* = 0.4997998. Since this solution may or may not be unique, one must solve problem (26). Selecting m = 40 and ε = 10^{-7}, the optimal solution of (26), which is a Pareto optimal solution for (2), is obtained as follows:

x̄_1(t) = −0.003878861t − 0.06222087t^2 + 13.70816t^3 − 42.8483t^4 + 39.44721t^5 − 9.220959t^7,
x̄_2(t) = 0.0008178819t − 0.05898677t^2 + 2.384057t^3 − 5.473355t^4 + 1.897249t^5 + 2.250218t^6,

where I_0(x̄(t_{0x̄(.)})) = 1.4979, ϕ_1(x̄(t_{1x̄(.)})) = 0.07495507, ϕ_2(x̄(t_{2x̄(.)})) = 0.09806725, ϕ_3(x̄(t_{3x̄(.)})) = 0.01463190, ϕ_4(x̄(t_{4x̄(.)})) = 0.4172498, ϕ_5(x̄(t_{5x̄(.)})) = 0.120129, and w(x̄(.)) = 0.03631934. Figs. 3 and 4 show the approximate optimal paths and control functions, respectively.

Example 2. Consider object A in space, ℝ^3, in the presence of five stationary obstacles with the following data: r = 0.01; r_1 = 1/6, r_k = 1/5, k = 2, 3, 4, 5; d_k^l = 0, k = 1, ..., 5; d_1^u = 0.1, d_2^u = 0.15, d_3^u = 0.11, d_4^u = 0.12, d_5^u = 0.2; (α_{11}(t), α_{21}(t), α_{31}(t)) = (0.5, 0.5, 0.5), (α_{12}(t), α_{22}(t), α_{32}(t)) = (0.3, 0.3, 0.1), (α_{13}(t), α_{23}(t), α_{33}(t)) = (0.8, 0.2, 0.6), (α_{14}(t), α_{24}(t), α_{34}(t)) = (0.2, 0.2, 0.8), (α_{15}(t), α_{25}(t), α_{35}(t)) = (0.2, 0.8, 0.4); x_0 = (0, 0, 0), x_f = (1, 1, 1), (0, 0, 0) ≤ (x_1(t), x_2(t), x_3(t)) ≤ (1, 1, 1), (0, 0.2, 0.3) ≤ (ẋ_1(t), ẋ_2(t), ẋ_3(t)) ≤ (7.5, 4, 3.5). We select m and ε as in Example 1; then by solving NLPP (22)–(23) we obtain I_0^l = 1.7714, compared with 2.3012 in [1] and 2.8115 in [3]. Then, by solving NLPP (24) we obtain I_0^u = 1.9708. Now, the flexible membership functions for each interval are determined with the following data: B_0 = 1.0001, C_0 = 0.00011244, β_0 = 16; B_1 = 11.3381, C_1 = 10.3381, β_1 = 7; B_2 = 1.5043, C_2 = 0.5043, β_2 = 8; B_3 = 1.017, C_3 = 0.017, β_3 = 11; B_4 = 1.0062, C_4 = 0.0064, β_4 = 12; B_5 = 1.0475, C_5 = 0.475, β_5 = 10. Then, by solving problems (25) and (26), an approximate Pareto optimal path, called x̄(.), is obtained as follows:

x̄_1(t) = 0.4768958t − 2.578397t^2 + 9.89575t^3 − 10.33133t^4 − 2.537014t^5 + 6.0741t^6,
x̄_2(t) = 1.854938t − 10.80957t^2 + 40.00153t^3 − 67.90836t^4 + 51.83759t^5 − 13.97613t^6,
x̄_3(t) = 3.361182t − 16.14342t^2 + 46.0994t^3 − 63.78942t^4 + 40.27541t^5 − 8.803167t^6,

where I_0(x̄(t_{0x̄(.)})) = 1.844, ϕ_1(x̄(t_{1x̄(.)})) = 0.108847, ϕ_2(x̄(t_{2x̄(.)})) = 0.1371711, ϕ_3(x̄(t_{3x̄(.)})) = 0.3914343, ϕ_4(x̄(t_{4x̄(.)})) = 0.09466882, ϕ_5(x̄(t_{5x̄(.)})) = 0.2118702. Figs. 5 and 6 show the approximate optimal paths and control functions, respectively.


Fig. 3. The optimal paths of Example 1.

Fig. 4. The control functions of Example 1.

Fig. 5. The optimal paths of Example 2.

Fig. 6. The control functions of Example 2.
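As a rough cross-check of the reported figures (not part of the Letter), the length and clearances of the printed Example 1 polynomial x*(.) can be re-evaluated on a fine grid; small deviations from the quoted values are expected because the printed coefficients are rounded.

```python
# Re-evaluating the Example 1 solution of (25): path length and obstacle clearances.
import numpy as np

c1 = [0.0, 4.016648, -13.23591, 20.84788, -13.09196, 2.463306]   # x_1*(t), coefficients of t^0..t^5
c2 = [0.0, 0.4006305, 5.692636, -24.26205, 34.8359, -15.66711]   # x_2*(t), coefficients of t^0..t^5

t = np.linspace(0.0, 1.0, 2001)
x = np.stack([np.polyval(c1[::-1], t), np.polyval(c2[::-1], t)], axis=1)
v = np.stack([np.polyval(np.polyder(c1[::-1]), t), np.polyval(np.polyder(c2[::-1]), t)], axis=1)

print("I_0 approx:", np.trapz(np.linalg.norm(v, axis=1), t))     # reported value: 1.4979

r, rk = 0.01, 0.125
centres = [(0.5, 0.5), (0.7, 0.85), (0.2, 0.2), (0.3, 0.8), (0.85, 0.2)]
for k, c in enumerate(centres, start=1):
    phi = np.linalg.norm(x - np.array(c), axis=1) - (r + rk)
    print(f"phi_{k} approx:", phi.min())                         # compare with the reported phi_k
```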

7. Conclusion

In this Letter, the path planning problem with three main criteria, the length of the path, the distance between object A and the obstacles, and the smoothness of the path, is formulated as a multi-objective problem. We emphasize that if further objective functions are added, the procedure of our algorithm does not change. We define the distance between object A and the obstacles as min_{t∈[0,t_f]} ϕ_k(x(t)), which can be changed according to the DM's preference. In this Letter, the membership functions are defined as non-linear functions which are flexible and decreasing, so the DM can realize his subjective, desired membership functions by using this flexibility. Also, the t-norm used in this Letter is the Zadeh minimum, so to recognize Pareto optimality one must solve the Pareto optimality test problem. According to our procedure, one cannot use the algebraic product as the t-norm, because μ_{Ĩ_k}(.), k = 1, 2, ..., q, can take two values, 1 and B_k/(1 + C_k exp[β_k (I_k(x(t_{kx(.)})) − I_k^l)/(I_k^u − I_k^l)]), which cannot be substituted for each other. But if the flexibility of the membership functions is ignored and one defines simple, strictly decreasing membership functions, then the algebraic product is better than the Zadeh minimum, since in that situation there is no need to solve the Pareto optimality test problem (see Theorem 3) and the computational complexity becomes smaller; however, the solution achieved is less satisfactory, because the DM cannot exactly realize his desired membership functions (since they are not flexible). Therefore, in this Letter we preferred a more accurate and satisfactory solution over lower computational complexity. In future work, we will search for flexible, simple and strictly decreasing membership functions in order to decrease the computational complexity while keeping an accurate solution. Finally, we use the parametrization method to solve the NLDOPs. Some advantages of this method are that the number of variables is small, there is no error in the final condition (x(t_f) = x_f), and the path achieved is smooth.

Acknowledgement

The authors would like to thank Dr. Mahmoud Zadeh Vaziri for his help.


References

[1] A.H. Borzabadi, A.V. Kamyad, M.H. Farahi, H.H. Mehne, Appl. Math. Comput. 170 (2005) 1418.
[2] J.C. Latombe, Robot Motion Planning, Kluwer Academic Publishers, Boston, 1991.
[3] Y. Wang, D.M. Lane, G.J. Falconer, Robotica 18 (2000) 123.
[4] M. Zamirian, M.H. Farahi, A.R. Nazemi, Appl. Math. Comput. 190 (2007) 1479.
[5] J. Balicki, Int. J. Comput. Sci. Netw. Secur. 6 (12) (2006) 1.
[6] J. Balicki, Int. J. Comput. Sci. Netw. Secur. 7 (11) (2007) 32.
[7] K. Miettinen, Nonlinear Multi-Objective Optimization, Kluwer Academic, New York, 1999.
[8] M. Sakawa, Fuzzy Sets and Interactive Multi-Objective Optimization, Plenum Press, New York, 1993.
[9] J.L. Cohon, Multi-Objective Programming and Planning, Academic Press, New York, 1985.
[10] H. Leberling, Fuzzy Sets Syst. 6 (1981) 105.
[11] S. Bells, Flexible membership functions, http://www.louderthanabomb.com/Spark_Features.htm, 1999.
[12] H.J. Zimmerman, Inf. Sci. 36 (1985) 25.
[13] H.J. Zimmerman, Fuzzy Sets, Decision Making, and Expert Systems, Kluwer, Boston, 1987.
[14] F.A. Lootsma, Fuzzy Logic for Planning and Decision Making, Kluwer Academic Publishers, Dordrecht/Boston/London, 1997.
[15] P.M. Vasant, Fuzzy Optim. Decis. Mak. 3 (2003) 229.
[16] G.J. Klir, B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall, New York, 1995.
[17] A.V. Kamyad, H.H. Mehne, Int. J. Eng. Sci. 14 (2003) 143.
[18] K.L. Teo, L.S. Jennings, H.W.J. Lee, V. Rehbock, J. Austral. Math. Soc. B 40 (1999) 314.
[19] J.E. Rubio, Control and Optimization: The Linear Treatment of Nonlinear Problems, Manchester University Press, UK, 1986.
[20] G.Y. Tang, Syst. Control Lett. 54 (2005) 429.
[21] H.M. Jaddu, Numerical methods for solving optimal control problems using Chebyshev polynomials, Ph.D. thesis, School of Information Science, Japan Advanced Institute of Science and Technology, 1998.
[22] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, 1992.