Applied Soft Computing 11 (2011) 778–792
Railway freight transportation planning with mixed uncertainty of randomness and fuzziness

Lixing Yang*, Ziyou Gao, Keping Li
State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, China
Article history: Received 10 June 2008; received in revised form 30 August 2009; accepted 30 December 2009; available online 11 January 2010.

Keywords: Railway freight transportation planning; Random fuzzy variable; Pessimistic value; Optimistic value; Chance measure
Abstract: This paper investigates the railway freight transportation planning problem under a mixed uncertain environment of fuzziness and randomness, in which the optimal paths, the amount of commodities passing through each path and the frequency of services need to be determined. Based on the chance measure and critical values of random fuzzy variables, three chance-constrained programming models are constructed for the problem with respect to different decision criteria. Crisp equivalents of the objectives and constraints are also discussed in order to investigate the mathematical properties of the models. To solve the models, a potential path searching algorithm, simulation algorithms and a genetic algorithm are integrated into a hybrid algorithm that searches for an optimal solution. Finally, some numerical examples are presented to illustrate the applications of the models and the algorithm. © 2010 Elsevier B.V. All rights reserved.
* Corresponding author. E-mail address: [email protected] (L. Yang).
doi:10.1016/j.asoc.2009.12.039

1. Introduction

With economic development, freight transportation plays an increasingly important role in modern society. This tendency produces a highly competitive environment for different transportation modes, such as railway, road, air shipment and water carriage. To make a profit, firms have to design favorable transportation products and supply convenient services to attract freight. Among these modes, railway transportation has comparative advantages over the others in safety and transportation capacity, above all for long-distance or large-scale freight transportation. For railway firms, it is desirable that the transportation plan adjust to changing economic and regulatory conditions, offer reliable, high-quality, low-cost services to customers and then, obviously, make a profit. Thus, how to make an optimal transportation plan becomes an important issue for railway firms.

Actually, the transportation planning problem is a special case of the service network design problem. In recent decades, many researchers have investigated the service network design problem and presented a variety of models and algorithms. Up to now, these models have been widely used to represent planning and operation issues in real application fields such as transportation, logistics and production–distribution systems. The goal of this problem, as stated by Crainic [6], is to plan services and operations to satisfy demand and ensure the profitability of the firm. The "supply" side of this problem implies a system-wide network view of operations that simultaneously addresses the various operations performed in different facilities. On the "demand" side, the routing of freight through the network has to be planned to ensure timely and reliable delivery according to the customers' specifications and the carrier's own targets.

In the service network design problem, devising service strategies is an important and complex issue. According to the planning level, Crainic and Laporte [5] divided service strategies into three planning levels: the strategic (long-term), tactical (medium-term) and operational (short-term) planning levels. For more details of service strategies on the different planning levels, interested readers may refer to Crainic and Laporte [5]. From the viewpoint of mathematical models, Crainic [6] divided service network design problems into two classes: frequency and dynamic service network design problems. In the frequency service network design problem, service frequencies are treated as explicit integer decision variables or derived outputs in the mathematical model (for more details, please refer to Crainic and Rousseau [3] and Powell [22]). In the dynamic service network design problem, a time dimension is introduced into the model, which is generally constructed by representing the operations of the system over a certain number of time periods using a space–time network (please refer to Farvolden [7] and Gorman [9,11]).

As an application of service network design, railway freight transportation planning has been investigated by many researchers, and many useful methods have been obtained. For instance, Crainic et al. [2] designed a tactical planning model
for rail freight transportation. Crainic et al. [4] developed a general modeling and algorithmic framework based on mathematical programming techniques for the tactical planning of freight transportation; this approach has been applied to planning problems of freight transportation by rail and by truck. Keaton [13] modeled the railroad operating plan problem as a mixed-integer program whose objective is to minimize the sum of train costs, car time costs and classification yard costs while not exceeding limits on train size and yard volumes; Lagrangian relaxation and heuristic approaches were proposed to solve the problem. Gorman [9] discussed the joint train-scheduling and demand-flow problem for a major US freight railroad, and a "tabu-enhanced" genetic search algorithm was proposed to find an approximately optimal solution. Gorman [11] designed an operating-plan model to minimize the schedule-related costs of service within the rail-operating capabilities, which has been successfully used in the Santa Fe Railway; since the model is complex, a combination of genetic and tabu searches is employed to search for optimal operating plans. Keaton [14] designed a dual adjustment method for implementing Lagrangian relaxation to design optimal railroad operating plans. Newton et al. [21] modeled the railroad blocking problem as a network design problem, intended as a strategic decision-making tool, and developed a column generation, branch-and-bound algorithm to solve it.

It is easy to see from the literature that most research has been done under a deterministic environment, in which all the parameters involved are fixed quantities. Actually, since the railway transportation system is complex, decision-makers inevitably meet uncertain parameters, such as random and fuzzy parameters, in the process of making a decision. However, most studies in the literature ignore this uncertainty, which may cause potential risks in real applications. In view of this fact, we consider the problem under the random fuzzy environment in this paper, with the intention of making service strategies on the tactical planning level. For optimization methods under uncertain environments, interested readers may refer to Liu [16], Liu and Liu [15], Yang [23,27], etc.

This paper is organized as follows. Section 2 first introduces the problem in a mathematical way, and then constructs a mathematical model under the deterministic environment. In Section 3, to model the problem under the random fuzzy environment, some mathematical properties of the chance measure and critical values are discussed, including boundedness, monotonicity and continuity. Section 4 constructs three classes of mathematical models for the problem, i.e., the chance-constrained pessimistic value model, the chance-constrained optimistic value model and the chance-constrained Hurwicz criterion model. Section 5 discusses crisp equivalents of the objectives and constraints of the different models. In Section 6, a genetic algorithm, random fuzzy simulations and a potential path searching algorithm are integrated as a hybrid algorithm to search for an optimal solution. Finally, some numerical examples are presented to illustrate the applications of the models and the algorithm.
2. Description of the problem

The aim of railway freight transportation planning is to determine the optimal paths, the amount of commodity flow on each path and the frequency of services for each OD (origin–destination) pair in a given railway transportation network such that the total relevant cost is minimized. To model the problem conveniently, the railway network is represented by a graph (G, E), where G and E are the sets of nodes and arcs, respectively. A node represents an important facility in the railway network, for instance a station or yard, and an arc represents a rail line between two adjacent facilities. To construct a mathematical model for the problem, the relevant notations and parameters are listed in the following.

• i, j = 1, 2, . . ., n ∈ G: the nodes in the graph;
• (i, j): the arc from node i to node j;
• k = 1, 2, . . ., m: the indices of the different ODs in the railway network;
• $a_k$: the total amount of commodities for OD k;
• $h_k$: the fixed charge of a train for OD k;
• $p_i$: the turnover capacity at node i;
• $q_k$: the capacity of each train for OD k;
• $c_{ij}$: the passing capacity on arc (i, j);
• $d_{ij}$: the length of arc (i, j);
• $T_k$: the total number of potential paths for OD k;
• $z_{kt}$: the frequency of train service on path t for OD k, which is a decision variable;
• $x_{kt}$: the amount of commodity flow on path t for OD k, which is a decision variable;
• $e_{ij}^{k}$: the unit transportation cost of commodity flow on arc (i, j) for OD k;
• U: a penalty parameter;
• $Z^+$: the set of non-negative integers.

2.1. Objective function

To model the problem, the following decision variable $y_{kt}$ is introduced to indicate whether a path is selected to serve the transportation system:

$$ y_{kt} = \begin{cases} 1, & \text{if path } t \text{ is selected to serve OD } k, \\ 0, & \text{otherwise}. \end{cases} $$

Generally, a path can be represented by a sequence of adjacent arcs. A binary parameter $u_{ij}^{kt}$ is used to indicate whether arc (i, j) is a segment of path t of OD k:

$$ u_{ij}^{kt} = \begin{cases} 1, & \text{if arc } (i, j) \text{ is a segment of path } t \text{ for OD } k, \\ 0, & \text{otherwise}. \end{cases} $$

Then the total transportation cost can be written as

$$ f(x, y) = \sum_{k=1}^{m} \sum_{t=1}^{T_k} \sum_{(i,j) \in E} e_{ij}^{k}\, x_{kt}\, u_{ij}^{kt}\, y_{kt}. $$

Generally, railway firms need to spend some money (or time) on examining and repairing trains before the transportation activity occurs. In addition, the costs of occupying rail lines and the depreciation of trains should also be considered. These kinds of costs can be treated as the fixed charge in the mathematical model. The total fixed charge of trains in this transportation system can be computed as

$$ g(z) = \sum_{k=1}^{m} \sum_{t=1}^{T_k} h_k z_{kt}. $$
In real conditions, it is desirable that each service train be fully loaded. If this is not the case, a penalty cost should be added to the total relevant transportation cost. For this purpose, we first compute the empty ratio of trains on path t for OD k by the following function:

$$ \delta_{kt} = \begin{cases} 0, & \text{if } x_{kt} = q_k \left[ x_{kt}/q_k \right], \\[4pt] 1 - \dfrac{x_{kt}}{q_k} + \left[ \dfrac{x_{kt}}{q_k} \right], & \text{otherwise}, \end{cases} $$

where the notation $[x]$ represents the largest integer that is less than or equal to x. Thus, the total empty ratio of the involved trains is $\sum_{k=1}^{m} \sum_{t=1}^{T_k} \delta_{kt}$. The penalty cost is formulated as follows, where U is a predetermined penalty parameter:

$$ h(x) = U \sum_{k=1}^{m} \sum_{t=1}^{T_k} \delta_{kt}. $$

The total relevant cost of the railway transportation system, denoted by F(x, y, z), consists of the transportation cost, the fixed charge and the penalty cost. That is,

$$ F(x, y, z) = f(x, y) + g(z) + h(x). $$
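The cost structure above can be sketched in code. The following is a minimal illustration for a single OD pair; all function and parameter names (empty_ratio, total_cost, path_costs aggregating the per-unit arc costs along each path) are our own, not the paper's.

```python
import math

def empty_ratio(x_kt, q_k):
    """delta_kt: 0 when the flow fills trains of capacity q_k exactly;
    otherwise the unused fraction of the last, partially loaded train."""
    frac = x_kt / q_k - math.floor(x_kt / q_k)
    return 0.0 if frac == 0.0 else 1.0 - frac

def total_cost(flows, freqs, path_costs, h_k, q_k, U):
    """F = f + g + h for one OD pair with several candidate paths.
    path_costs[t] aggregates the per-unit arc costs e_ij^k along path t."""
    f = sum(c * x for c, x in zip(path_costs, flows))   # transportation cost
    g = sum(h_k * z for z in freqs)                     # fixed charge of trains
    h = U * sum(empty_ratio(x, q_k) for x in flows)     # empty-ratio penalty
    return f + g + h
```

For example, with train capacity 50, a flow of 120 leaves the third train 60% empty, so the penalty term charges 0.6 U for that path.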
2.2. System constraints

First of all, the total amount of commodities transported on each arc should not exceed its passing capacity, which produces the following constraint:

$$ \sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, u_{ij}^{kt}\, y_{kt} \le c_{ij}, \quad (i, j) \in E. $$

The turnover capacity of each station also needs to be considered: the total amount of commodities passing through each station should not exceed its capacity. To model this constraint, the following notation is introduced:

$$ r_i^{kt} = \begin{cases} 1, & \text{if node } i \text{ lies in path } t \text{ for OD } k, \\ 0, & \text{otherwise}. \end{cases} $$

The constraint can then be formulated as

$$ \sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, r_i^{kt}\, y_{kt} \le p_i, \quad i = 1, 2, \ldots, n. $$
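The two capacity constraints can be checked for a candidate plan by accumulating path flows on arcs and nodes. A minimal sketch follows, assuming each path is given by its node sequence and that a path is selected (y_kt = 1) exactly when it carries positive flow; all names are illustrative.

```python
def capacity_ok(flows, paths, cap_arc, cap_node):
    """Check the arc and node capacity constraints for one OD pair.
    flows[t] is x_kt on path t; paths[t] is the node sequence of path t."""
    load_arc, load_node = {}, {}
    for x, path in zip(flows, paths):
        if x <= 0:                       # y_kt = 0: path not selected
            continue
        for i, j in zip(path, path[1:]):
            load_arc[(i, j)] = load_arc.get((i, j), 0) + x
        for i in path:
            load_node[i] = load_node.get(i, 0) + x
    return (all(v <= cap_arc[a] for a, v in load_arc.items())
            and all(v <= cap_node[i] for i, v in load_node.items()))
```

In the full model this check is repeated over all OD pairs, with the loads accumulated across them before comparing against the capacities.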
It is stipulated that the demand of each OD be satisfied. Then

$$ \sum_{t=1}^{T_k} x_{kt} = a_k, \quad k = 1, 2, \ldots, m. $$

Additionally, the total amount of commodities transported on path t for OD k cannot exceed the total capacity of the involved trains. That is,

$$ x_{kt} \le z_{kt} q_k, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k. $$

2.3. Mathematical model

The mathematical model of the problem is constructed as follows:

$$ \begin{cases} \min F(x, y, z) = f(x, y) + g(z) + h(x) \\ \text{s.t.} \\ \quad \sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, u_{ij}^{kt}\, y_{kt} \le c_{ij}, \quad (i, j) \in E \\ \quad \sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, r_i^{kt}\, y_{kt} \le p_i, \quad i = 1, 2, \ldots, n \\ \quad \sum_{t=1}^{T_k} x_{kt} = a_k, \quad k = 1, 2, \ldots, m \\ \quad x_{kt} \le z_{kt} q_k, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad x_{kt} \ge 0, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad y_{kt} \in \{0, 1\}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad z_{kt} \in Z^{+}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k. \end{cases} \tag{1} $$

In this model, the objective is to minimize the total relevant cost of the railway transportation system. The first and second constraints ensure that the amount of commodity flow does not exceed the passing capacity of each arc and each node, respectively. The third constraint ensures that the total amount of commodity flow satisfies the demand of each OD. The fourth constraint ensures that the designed train service can carry the transportation demand on each path.

All the parameters in model (1) are supposed to be fixed quantities. In real conditions, however, a transportation plan is usually made before the transportation activity occurs, above all on the strategic or tactical planning level. Thus, the concrete values of some parameters cannot be obtained in advance. To handle this in a mathematical way, such parameters are usually treated as random variables obtained by statistical methods, or as fuzzy variables elicited from experts' experience if the sample data are not sufficient. In model (1), parameters such as $h_k$, $p_i$, $c_{ij}$, $e_{ij}^k$ and U can be treated as random variables or fuzzy variables. In that case, model (1) becomes meaningless from the mathematical viewpoint. To construct a reasonable mathematical model under the uncertain environment, the chance theory of hybrid uncertainty is introduced in the next section.

3. Chance theory of hybrid uncertainty

In probability theory, a random variable is a measurable function from a probability space $(\Omega, \mathcal{A}, \Pr)$ to the set of real numbers, used to describe random phenomena, and the probability measure Pr is the unique tool to describe the chance of a random event. In fuzzy set theory, a fuzzy variable is defined as a function from a possibility space $(\Theta, \mathcal{P}(\Theta), \mathrm{Pos})$ to the set of real numbers, used to describe fuzzy phenomena. There exist two measures to describe the chance of a fuzzy event, i.e., the possibility measure (Pos) and the necessity measure (Nec). Suppose that ξ is a fuzzy variable with membership function μ(x). Then for any subset A of ℜ, we have

$$ \mathrm{Pos}\{\xi \in A\} = \sup_{x \in A} \mu(x), \qquad \mathrm{Nec}\{\xi \in A\} = 1 - \mathrm{Pos}\{\xi \in A^c\} = 1 - \sup_{x \in A^c} \mu(x). $$
If randomness and fuzziness appear at the same time, probability theory and possibility theory should be integrated to deal with this kind of hybrid uncertainty. For this purpose, Liu and Liu [15] presented a mathematical tool, called the random fuzzy variable, to describe the hybrid uncertainty of randomness and fuzziness; it is defined as a function ξ from the possibility space $(\Theta, \mathcal{P}(\Theta), \mathrm{Pos})$ to the set of random variables. That is, ξ is a random-variable-valued fuzzy variable. For more details of random fuzzy variables, readers may refer to Liu [16,17].

Remark 3.1. Random variables and fuzzy variables are two special cases of the random fuzzy variable. This fact shows that the random fuzzy variable is a generalization of both the random variable and the fuzzy variable.
Remark 3.2. Suppose that $\xi_1, \xi_2, \ldots, \xi_k$ are random variables and $\xi_{k+1}, \xi_{k+2}, \ldots, \xi_n$ are fuzzy variables. If $f(x_1, x_2, \ldots, x_n)$ is a measurable function, then $f(\xi_1, \xi_2, \ldots, \xi_n)$ is a random fuzzy variable. This property ensures that arithmetic operations on random fuzzy variables again yield random fuzzy variables.

In addition, based on the probability measure, the possibility measure and the necessity measure, Liu and Liu [15] defined the following chance measures to describe the occurring chance of a random fuzzy event.
Definition 3.1. ([15]) Let ξ be a random fuzzy variable, and A a Borel subset of ℜ. Then the chance measure of the random fuzzy event ξ ∈ A can be defined in the following two forms:

$$ \mathrm{Ch}^{p}\{\xi \in A\} = \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha, \qquad \mathrm{Ch}^{n}\{\xi \in A\} = \int_{0}^{1} \mathrm{Nec}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha. $$

Remark 3.3. If ξ degenerates to a random variable, then Ch^p and Ch^n both degenerate to the probability measure. On the other hand, if ξ degenerates to a fuzzy variable, then Ch^p and Ch^n degenerate to the possibility measure and the necessity measure, respectively. This fact shows that the chance measures are generalizations of the probability, possibility and necessity measures.

In fuzzy set theory, the possibility measure is an optimistic tool for making a decision, since it may bring potential risks into the decision system; conversely, the necessity measure is a conservative tool for pessimistic decision-makers. These properties state that the chance measures Ch^p and Ch^n are suitable for optimistic and pessimistic decision-makers, respectively. However, not all decision-makers are purely pessimistic or optimistic. An eclectic decision-maker can use the following linear combination to describe the chance of a random fuzzy event.

Definition 3.2. Let ξ be a random fuzzy variable, and A a Borel subset of ℜ. Then for λ ∈ [0, 1] the chance measure of the random fuzzy event ξ ∈ A can be defined as

$$ \mathrm{Ch}_{\lambda}\{\xi \in A\} = \lambda \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha + (1 - \lambda) \int_{0}^{1} \mathrm{Nec}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha. $$

Remark 3.4. In this definition, the parameter λ represents the optimism degree of the decision-maker: the more optimistic the decision-maker is, the larger λ should be.

Remark 3.5. If ξ degenerates to a random variable, then Ch_λ degenerates to the probability measure. On the other hand, if ξ degenerates to a fuzzy variable, then Ch_λ degenerates to the $M_\lambda$ measure (Yang and Iwamura [24]), i.e., $M_\lambda\{\cdot\} = \lambda \mathrm{Pos}\{\cdot\} + (1 - \lambda)\mathrm{Nec}\{\cdot\}$.

In the next section, the chance measure will be employed to construct the mathematical models of the railway freight transportation planning problem. For the convenience of applications, the mathematical properties of the chance measure are investigated first.

Theorem 3.1. Let ξ be a random fuzzy variable. Then

(i) for any Borel set A ⊂ ℜ, we have 0 ≤ Ch_λ{ξ ∈ A} ≤ 1;
(ii) for any Borel set A ⊂ ℜ and λ₁ ≤ λ₂, we have Ch_{λ₁}{ξ ∈ A} ≤ Ch_{λ₂}{ξ ∈ A};
(iii) for Borel sets A, B ⊂ ℜ with A ⊂ B, we have Ch_λ{ξ ∈ A} ≤ Ch_λ{ξ ∈ B};
(iv) for any λ ∈ [0, 1] and Borel set A ⊂ ℜ, we have Ch_λ{ξ ∈ A} + Ch_{1−λ}{ξ ∈ A^c} = 1.

Proof. By the definition of the chance measure, parts (i), (ii) and (iii) can be proved easily; we only prove part (iv). By definition,

$$ \mathrm{Ch}_{\lambda}\{\xi \in A\} + \mathrm{Ch}_{1-\lambda}\{\xi \in A^c\} = \lambda \left[ \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha + \int_{0}^{1} \mathrm{Nec}\{\Pr\{\xi(\theta) \in A^c\} \ge \alpha\}\,d\alpha \right] + (1 - \lambda) \left[ \int_{0}^{1} \mathrm{Nec}\{\Pr\{\xi(\theta) \in A\} \ge \alpha\}\,d\alpha + \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A^c\} \ge \alpha\}\,d\alpha \right]. $$

Since $\Pr\{\xi(\theta) \in A^c\} = 1 - \Pr\{\xi(\theta) \in A\}$, we have

$$ \mathrm{Nec}\{\Pr\{\xi(\theta) \in A^c\} \ge \alpha\} = 1 - \mathrm{Pos}\{\Pr\{\xi(\theta) \in A^c\} < \alpha\} = 1 - \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} > 1 - \alpha\}, $$

so that

$$ \int_{0}^{1} \mathrm{Nec}\{\Pr\{\xi(\theta) \in A^c\} \ge \alpha\}\,d\alpha = 1 - \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} > \beta\}\,d\beta = 1 - \int_{0}^{1} \mathrm{Pos}\{\Pr\{\xi(\theta) \in A\} \ge \beta\}\,d\beta, $$

where the last equality holds because the two integrands coincide except at countably many β. Hence each bracket above equals 1, and the sum is λ + (1 − λ) = 1. The proof is thus completed.

Generally, for any two random fuzzy variables, it is meaningless to rank them directly. In other words, random fuzzy variables need to be ranked in the sense of some criterion. Three ranking criteria are defined as follows.

Definition 3.3. Let ξ be a random fuzzy variable. Then for any α ∈ (0, 1], the (λ; α)-optimistic value of ξ is defined as

$$ \xi_{\sup}(\lambda; \alpha) = \sup\{r \mid \mathrm{Ch}_{\lambda}\{\xi \ge r\} \ge \alpha\}; $$

the (λ; α)-pessimistic value of ξ is defined as

$$ \xi_{\inf}(\lambda; \alpha) = \inf\{r \mid \mathrm{Ch}_{\lambda}\{\xi \le r\} \ge \alpha\}; $$

and the Hurwicz critical value of ξ is defined as

$$ \xi_{\mathrm{hur}}(\lambda; \alpha) = \rho\, \xi_{\sup}(\lambda; \alpha) + (1 - \rho)\, \xi_{\inf}(\lambda; \alpha) $$

for any weight ρ ∈ (0, 1).

Let ξ and η be two random fuzzy variables. The above definitions can be employed to rank ξ and η. If the optimistic value criterion is employed, then ξ ≥ η in the sense of the (λ; α)-optimistic value if and only if $\xi_{\sup}(\lambda; \alpha) \ge \eta_{\sup}(\lambda; \alpha)$. If the pessimistic value criterion is employed, then ξ ≥ η in the sense of the (λ; α)-pessimistic value if and only if $\xi_{\inf}(\lambda; \alpha) \ge \eta_{\inf}(\lambda; \alpha)$. If the Hurwicz critical value criterion is employed, then ξ ≥ η in the sense of the (λ; α)-Hurwicz critical value if and only if $\xi_{\mathrm{hur}}(\lambda; \alpha) \ge \eta_{\mathrm{hur}}(\lambda; \alpha)$.

As for model (1), since some parameters are random fuzzy variables, the objective is also a random fuzzy variable for any feasible solution. To search for the optimal objective, objectives can be ranked by the above critical value criteria. The mathematical properties of the critical values are investigated as follows.

Theorem 3.2. Let ξ be a random fuzzy variable. Then

(i) if δ ≥ 0, then $(\delta\xi)_{\sup}(\lambda; \alpha) = \delta\, \xi_{\sup}(\lambda; \alpha)$, $(\delta\xi)_{\inf}(\lambda; \alpha) = \delta\, \xi_{\inf}(\lambda; \alpha)$ and $(\delta\xi)_{\mathrm{hur}}(\lambda; \alpha) = \delta\, \xi_{\mathrm{hur}}(\lambda; \alpha)$;
(ii) if δ < 0, then $(\delta\xi)_{\sup}(\lambda; \alpha) = \delta\, \xi_{\inf}(\lambda; \alpha)$, $(\delta\xi)_{\inf}(\lambda; \alpha) = \delta\, \xi_{\sup}(\lambda; \alpha)$, and consequently $(\delta\xi)_{\mathrm{hur}}(\lambda; \alpha) = \delta\, [(1 - \rho)\, \xi_{\sup}(\lambda; \alpha) + \rho\, \xi_{\inf}(\lambda; \alpha)]$, i.e., δ times the Hurwicz value taken with weight 1 − ρ.

Proof. (i) If δ ≥ 0, then

$$ (\delta\xi)_{\sup}(\lambda; \alpha) = \sup\{r \mid \mathrm{Ch}_{\lambda}\{\delta\xi \ge r\} \ge \alpha\} = \delta \sup\{r/\delta \mid \mathrm{Ch}_{\lambda}\{\xi \ge r/\delta\} \ge \alpha\} = \delta\, \xi_{\sup}(\lambda; \alpha). $$

The other cases can be proved similarly.

(ii) If δ < 0, then

$$ (\delta\xi)_{\sup}(\lambda; \alpha) = \sup\{r \mid \mathrm{Ch}_{\lambda}\{\delta\xi \ge r\} \ge \alpha\} = \delta \inf\{r/\delta \mid \mathrm{Ch}_{\lambda}\{\xi \le r/\delta\} \ge \alpha\} = \delta\, \xi_{\inf}(\lambda; \alpha). $$

The other cases can be proved similarly.
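The chance measure of Definition 3.2 can be approximated numerically by discretizing both the possibility space and the α-integral. The sketch below is our own illustration, not the paper's simulation algorithm: it assumes a triangular fuzzy parameter θ driving a uniform random variable ξ(θ) ~ U(θ − 1, θ + 1), and all names are ours.

```python
def tri_pos(x, lo, mode, hi):
    """Membership function of a triangular fuzzy variable (lo, mode, hi)."""
    if lo < x <= mode:
        return (x - lo) / (mode - lo)
    if mode < x < hi:
        return (hi - x) / (hi - mode)
    return 0.0

def uniform_cdf(x, lo, hi):
    """Pr{eta <= x} for eta ~ U(lo, hi)."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def chance_leq(t, thetas, lam, n_alpha=1000):
    """Ch_lam{xi <= t} per Definition 3.2, with xi(theta) ~ U(theta-1, theta+1)
    and the possibility space discretized as (theta, membership) pairs."""
    prs = [(uniform_cdf(t, th - 1.0, th + 1.0), m) for th, m in thetas]
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        pos = max((m for p, m in prs if p >= alpha), default=0.0)       # Pos{Pr >= alpha}
        nec = 1.0 - max((m for p, m in prs if p < alpha), default=0.0)  # Nec{Pr >= alpha}
        total += lam * pos + (1.0 - lam) * nec
    return total / n_alpha

# theta is the triangular fuzzy variable (0, 1, 2), discretized on a grid
thetas = [(0.02 * k, tri_pos(0.02 * k, 0.0, 1.0, 2.0)) for k in range(101)]
```

Two sanity checks connect this to the theory: the λ = 1 value dominates the λ = 0 value, in line with Theorem 3.1(ii); and for a degenerate fuzzy part (a single θ with membership 1) the value reduces to the probability Pr{ξ(θ) ≤ t}, in line with Remark 3.5.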
Theorem 3.3. Let ξ be a random fuzzy variable. Then

(i) $\xi_{\inf}(\lambda; \alpha)$ is increasing with respect to α and decreasing with respect to λ;
(ii) $\xi_{\sup}(\lambda; \alpha)$ is decreasing with respect to α and increasing with respect to λ.

Proof. For any α₁ ≤ α₂, we have

$$ \{r \mid \mathrm{Ch}_{\lambda}\{\xi \le r\} \ge \alpha_1\} \supset \{r \mid \mathrm{Ch}_{\lambda}\{\xi \le r\} \ge \alpha_2\}. $$

Thus,

$$ \inf\{r \mid \mathrm{Ch}_{\lambda}\{\xi \le r\} \ge \alpha_1\} \le \inf\{r \mid \mathrm{Ch}_{\lambda}\{\xi \le r\} \ge \alpha_2\}, $$

that is, $\xi_{\inf}(\lambda; \alpha)$ is increasing with respect to α. For any λ₁ ≤ λ₂, it follows from Theorem 3.1 that

$$ \{r \mid \mathrm{Ch}_{\lambda_1}\{\xi \le r\} \ge \alpha\} \subset \{r \mid \mathrm{Ch}_{\lambda_2}\{\xi \le r\} \ge \alpha\}. $$

Thus,

$$ \inf\{r \mid \mathrm{Ch}_{\lambda_1}\{\xi \le r\} \ge \alpha\} \ge \inf\{r \mid \mathrm{Ch}_{\lambda_2}\{\xi \le r\} \ge \alpha\}, $$

which implies that $\xi_{\inf}(\lambda; \alpha)$ is decreasing with respect to λ. Part (ii) can be proved similarly.

Theorem 3.4. Let ξ be a random fuzzy variable. Then $\xi_{\inf}(\lambda; \alpha)$, $\xi_{\sup}(\lambda; \alpha)$ and $\xi_{\mathrm{hur}}(\lambda; \alpha)$ are left-continuous with respect to α and right-continuous with respect to λ.

Proof. We only prove that $\xi_{\inf}(\lambda; \alpha)$ is left-continuous with respect to α and right-continuous with respect to λ; the other cases can be proved similarly.

Let {α_i} be an arbitrary sequence in (0, 1] such that α_i ↑ α. Since $\xi_{\inf}(\lambda; \alpha)$ is increasing with respect to α, it is obvious that $\xi_{\inf}(\lambda; \alpha_1) \le \xi_{\inf}(\lambda; \alpha_2) \le \cdots \le \xi_{\inf}(\lambda; \alpha)$, and hence $\lim_{i \to \infty} \xi_{\inf}(\lambda; \alpha_i) \le \xi_{\inf}(\lambda; \alpha)$. On the other hand, if $\lim_{i \to \infty} \xi_{\inf}(\lambda; \alpha_i) < \xi_{\inf}(\lambda; \alpha)$, then there exists a number x* such that

$$ \lim_{i \to \infty} \xi_{\inf}(\lambda; \alpha_i) < x^* < \xi_{\inf}(\lambda; \alpha). $$

Thus $\mathrm{Ch}_{\lambda}\{\xi \le x^*\} \ge \alpha_i$ for i = 1, 2, . . .. Letting i → ∞, we obtain $\mathrm{Ch}_{\lambda}\{\xi \le x^*\} \ge \alpha$, which implies $x^* \ge \xi_{\inf}(\lambda; \alpha)$, a contradiction. Hence $\xi_{\inf}(\lambda; \alpha)$ is left-continuous with respect to α.

Let {λ_i} be an arbitrary sequence in [0, 1] such that λ_i ↓ λ. Since $\xi_{\inf}(\lambda; \alpha)$ is decreasing with respect to λ, it is obvious that $\xi_{\inf}(\lambda_1; \alpha) \le \xi_{\inf}(\lambda_2; \alpha) \le \cdots \le \xi_{\inf}(\lambda; \alpha)$, and hence $\lim_{i \to \infty} \xi_{\inf}(\lambda_i; \alpha) \le \xi_{\inf}(\lambda; \alpha)$. If $\lim_{i \to \infty} \xi_{\inf}(\lambda_i; \alpha) < \xi_{\inf}(\lambda; \alpha)$, then there exists a number x* such that

$$ \lim_{i \to \infty} \xi_{\inf}(\lambda_i; \alpha) < x^* < \xi_{\inf}(\lambda; \alpha). $$

Thus $\mathrm{Ch}_{\lambda_i}\{\xi \le x^*\} \ge \alpha$ for i = 1, 2, . . .. Letting i → ∞, we obtain $\mathrm{Ch}_{\lambda}\{\xi \le x^*\} \ge \alpha$, which implies $x^* \ge \xi_{\inf}(\lambda; \alpha)$, a contradiction. The proof is thus completed.

Theorem 3.5. Let ξ be a random fuzzy variable. Then

(i) if α > 0.5, then $\xi_{\inf}(\lambda; \alpha) \ge \xi_{\sup}(1 - \lambda; \alpha)$;
(ii) if α ≤ 0.5, then $\xi_{\inf}(\lambda; \alpha) \le \xi_{\sup}(1 - \lambda; \alpha)$.

Proof. Let $x^* = (\xi_{\inf}(\lambda; \alpha) + \xi_{\sup}(1 - \lambda; \alpha))/2$.

(i) Suppose α > 0.5 and, contrary to the claim, $\xi_{\inf}(\lambda; \alpha) < \xi_{\sup}(1 - \lambda; \alpha)$. Then $\mathrm{Ch}_{\lambda}\{\xi < x^*\} \ge \alpha$ and $\mathrm{Ch}_{1-\lambda}\{\xi > x^*\} \ge \alpha$, and it follows from Theorem 3.1(iv) that

$$ 1 \ge \mathrm{Ch}_{\lambda}\{\xi < x^*\} + \mathrm{Ch}_{1-\lambda}\{\xi > x^*\} \ge \alpha + \alpha > 1. $$

This contradiction proves part (i).

(ii) Suppose α ≤ 0.5 and $\xi_{\inf}(\lambda; \alpha) > \xi_{\sup}(1 - \lambda; \alpha)$. By the definitions of the optimistic and the pessimistic value, $\mathrm{Ch}_{\lambda}\{\xi \le x^*\} < \alpha$ and $\mathrm{Ch}_{1-\lambda}\{\xi \ge x^*\} < \alpha$. Then

$$ 1 \le \mathrm{Ch}_{\lambda}\{\xi \le x^*\} + \mathrm{Ch}_{1-\lambda}\{\xi \ge x^*\} < \alpha + \alpha \le 1. $$

This contradiction proves part (ii).
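The pessimistic value of Definition 3.3 can be computed by bisection whenever the map r ↦ Ch_λ{ξ ≤ r} is available and nondecreasing. A minimal sketch with our own names follows; the check uses the degenerate case of a purely random ξ ~ U(0, 1), for which the chance measure reduces to the probability measure and the pessimistic value equals α.

```python
def pessimistic_value(ch_leq, alpha, lo=-1e6, hi=1e6, iters=200):
    """inf{r : Ch{xi <= r} >= alpha}; ch_leq must be nondecreasing in r."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ch_leq(mid) >= alpha:
            hi = mid          # mid is feasible, so the infimum is <= mid
        else:
            lo = mid
    return hi

# degenerate case: xi ~ U(0, 1) is purely random, so Ch_lam = Pr for every lam
cdf = lambda r: min(1.0, max(0.0, r))
```

Because the bracket [lo, hi] always contains the infimum and halves at each step, the returned value also inherits the monotonicity in α stated in Theorem 3.3(i).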
4. Chance-constrained programming models

On the basis of the mathematical tools defined in Section 3, model (1) can be reconstructed in the following forms under different criteria.

4.1. Pessimistic value model

By using the pessimistic value and the chance measure, model (1) can be remodeled as the following chance-constrained programming model:
$$ \begin{cases} \min\; \min_{\bar{f}} \bar{f} \\ \text{s.t.} \\ \quad \mathrm{Ch}_{\lambda}\{f(x, y) + g(z) + h(x) \le \bar{f}\} \ge \alpha \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, u_{ij}^{kt}\, y_{kt} \le c_{ij}\right\} \ge \beta_{ij}, \quad (i, j) \in E \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, r_i^{kt}\, y_{kt} \le p_i\right\} \ge \delta_i, \quad i = 1, 2, \ldots, n \\ \quad \sum_{t=1}^{T_k} x_{kt} = a_k, \quad k = 1, 2, \ldots, m \\ \quad x_{kt} \le z_{kt} q_k, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad x_{kt} \ge 0, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad y_{kt} \in \{0, 1\}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad z_{kt} \in Z^{+}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k. \end{cases} \tag{2} $$
In this model, the objective is to minimize the pessimistic value of the total relevant cost rather than the total relevant cost itself. The second constraint states that the chance that the total amount of commodities passing through each arc does not exceed its capacity should be at least the given confidence level. The third constraint states that the chance that the total amount of commodities passing through each station does not exceed its turnover capacity should be at least the given confidence level. The parameters α, β_ij and δ_i are predetermined confidence levels.
4.2. Optimistic value model

If the optimistic value and the chance measure are employed, the following optimistic value model can be constructed:

$$ \begin{cases} \min\; \max_{\bar{f}} \bar{f} \\ \text{s.t.} \\ \quad \mathrm{Ch}_{\lambda}\{f(x, y) + g(z) + h(x) \ge \bar{f}\} \ge \alpha \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, u_{ij}^{kt}\, y_{kt} \le c_{ij}\right\} \ge \beta_{ij}, \quad (i, j) \in E \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, r_i^{kt}\, y_{kt} \le p_i\right\} \ge \delta_i, \quad i = 1, 2, \ldots, n \\ \quad \sum_{t=1}^{T_k} x_{kt} = a_k, \quad k = 1, 2, \ldots, m \\ \quad x_{kt} \le z_{kt} q_k, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad x_{kt} \ge 0, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad y_{kt} \in \{0, 1\}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad z_{kt} \in Z^{+}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k. \end{cases} \tag{3} $$

Different from the pessimistic value model, the optimistic value model minimizes the optimistic value of the total relevant cost, which is generally suitable for optimistic decision-makers.

4.3. Hurwicz criterion model

Generally speaking, the pessimistic value model is suitable for pessimistic decision-makers and the optimistic value model for optimistic decision-makers. If a decision-maker is neither absolutely pessimistic nor absolutely optimistic, the Hurwicz criterion can be employed: a parameter ρ ∈ (0, 1) is introduced to produce a new objective that balances pessimism and optimism. In this model, the objective is the Hurwicz critical value of the total relevant cost rather than the optimistic or the pessimistic value. The Hurwicz criterion model is constructed as

$$ \begin{cases} \min\; \left[\rho \max_{\bar{g}} \bar{g} + (1 - \rho) \min_{\bar{f}} \bar{f}\right] \\ \text{s.t.} \\ \quad \mathrm{Ch}_{\lambda}\{f(x, y) + g(z) + h(x) \le \bar{f}\} \ge \alpha \\ \quad \mathrm{Ch}_{\lambda}\{f(x, y) + g(z) + h(x) \ge \bar{g}\} \ge \alpha \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, u_{ij}^{kt}\, y_{kt} \le c_{ij}\right\} \ge \beta_{ij}, \quad (i, j) \in E \\ \quad \mathrm{Ch}_{\lambda}\left\{\sum_{k=1}^{m} \sum_{t=1}^{T_k} x_{kt}\, r_i^{kt}\, y_{kt} \le p_i\right\} \ge \delta_i, \quad i = 1, 2, \ldots, n \\ \quad \sum_{t=1}^{T_k} x_{kt} = a_k, \quad k = 1, 2, \ldots, m \\ \quad x_{kt} \le z_{kt} q_k, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad x_{kt} \ge 0, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad y_{kt} \in \{0, 1\}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k \\ \quad z_{kt} \in Z^{+}, \quad k = 1, 2, \ldots, m, \; t = 1, 2, \ldots, T_k. \end{cases} \tag{4} $$

Remark 4.1. Models (2)–(4) are constructed under different criteria, which are suitable for pessimistic, optimistic and eclectic decision-makers, respectively. It is not clear which model is the best; the application of a particular model depends on the decision-maker's preference.

5. Crisp equivalent analysis

Random fuzzy chance functions appear in models (2)–(4). Thus, to solve the models, the chance functions need to be computed in order to check the feasibility of solutions and to compute objectives. In this section we discuss crisp equivalents of different chance functions.

Proposition 5.1. Suppose that f = ξ − η, where ξ and η are a fuzzy variable and a continuous random variable, respectively. Then Pr{f ≥ 0} ≥ α is equivalent to $\xi \ge \Phi_1^{-1}(\alpha)$, and Pr{f ≤ 0} ≥ α is equivalent to $\xi \le \Phi_2^{-1}(1 - \alpha)$, where Φ(x) is the probability distribution of η, and

$$ \Phi_1^{-1}(x) = \inf\{y \mid \Phi(y) \ge x\}, \qquad \Phi_2^{-1}(x) = \sup\{y \mid \Phi(y) \le x\}. $$

Proof. It is easy to see that Pr{f ≥ 0} ≥ α is equivalent to Φ(ξ) ≥ α, which implies $\xi \ge \Phi_1^{-1}(\alpha)$. The other case can be proved similarly. The proof is thus completed.

Suppose that ξ = (r₁, r₂, r₃, r₄) is a trapezoidal fuzzy variable; that is, the membership function of ξ is

$$ \mu(x) = \begin{cases} \dfrac{x - r_1}{r_2 - r_1}, & \text{if } r_1 \le x \le r_2, \\[4pt] 1, & \text{if } r_2 \le x \le r_3, \\[4pt] \dfrac{x - r_4}{r_3 - r_4}, & \text{if } r_3 \le x \le r_4, \\[4pt] 0, & \text{otherwise}. \end{cases} $$

If r₂ = r₃, the trapezoidal fuzzy variable degenerates to a triangular fuzzy variable, denoted by ξ = (r₁, r₂, r₄). For the trapezoidal fuzzy variable ξ, the following relationships hold:

$$ \mathrm{Pos}\{\xi \ge r\} = \begin{cases} 1, & \text{if } r \le r_3, \\[2pt] \dfrac{r - r_4}{r_3 - r_4}, & \text{if } r_3 \le r \le r_4, \\[2pt] 0, & \text{if } r \ge r_4, \end{cases} \qquad \mathrm{Nec}\{\xi \ge r\} = \begin{cases} 1, & \text{if } r \le r_1, \\[2pt] \dfrac{r_2 - r}{r_2 - r_1}, & \text{if } r_1 \le r \le r_2, \\[2pt] 0, & \text{if } r \ge r_2, \end{cases} $$

$$ \mathrm{Pos}\{\xi \le r\} = \begin{cases} 1, & \text{if } r \ge r_2, \\[2pt] \dfrac{r - r_1}{r_2 - r_1}, & \text{if } r_1 \le r \le r_2, \\[2pt] 0, & \text{if } r \le r_1, \end{cases} \qquad \mathrm{Nec}\{\xi \le r\} = \begin{cases} 1, & \text{if } r \ge r_4, \\[2pt] \dfrac{r_3 - r}{r_3 - r_4}, & \text{if } r_3 \le r \le r_4, \\[2pt] 0, & \text{if } r \le r_3. \end{cases} $$

Proposition 5.2. Suppose that f = ξ − η, where ξ = (r₁, r₂, r₃, r₄) is a trapezoidal fuzzy variable and η ∼ U(a, b) is a uniformly distributed random variable. Then

$$ \mathrm{Ch}^{p}\{f \ge r\} = \begin{cases} 1, & \text{if } a + r, b + r \le r_3, \\[2pt] \dfrac{(b + r - r_4)^2 + r_3^2 - r_4^2}{2(r_3 - r_4)(b - a)} - \dfrac{a + r}{b - a}, & \text{if } a + r \le r_3,\ r_3 \le b + r \le r_4, \\[2pt] \dfrac{r_3 + r_4}{2(b - a)} - \dfrac{a + r}{b - a}, & \text{if } a + r \le r_3,\ b + r \ge r_4, \\[2pt] \dfrac{(b + r - r_4)^2 - (a + r - r_4)^2}{2(r_3 - r_4)(b - a)}, & \text{if } a + r \ge r_3,\ b + r \le r_4, \\[2pt] \dfrac{(a + r - r_4)^2}{2(r_4 - r_3)(b - a)}, & \text{if } r_3 \le a + r \le r_4,\ b + r \ge r_4, \\[2pt] 0, & \text{if } a + r \ge r_4; \end{cases} \tag{5} $$

$$ \mathrm{Ch}^{n}\{f \ge r\} = \begin{cases} 1, & \text{if } a + r, b + r \le r_1, \\[2pt] \dfrac{r_2^2 - r_1^2 - (r_2 - b - r)^2}{2(r_2 - r_1)(b - a)} - \dfrac{a + r}{b - a}, & \text{if } a + r \le r_1,\ r_1 \le b + r \le r_2, \\[2pt] \dfrac{r_1 + r_2}{2(b - a)} - \dfrac{a + r}{b - a}, & \text{if } a + r \le r_1,\ b + r \ge r_2, \\[2pt] \dfrac{(r_2 - a - r)^2 - (r_2 - b - r)^2}{2(r_2 - r_1)(b - a)}, & \text{if } a + r \ge r_1,\ b + r \le r_2, \\[2pt] \dfrac{(r_2 - a - r)^2}{2(r_2 - r_1)(b - a)}, & \text{if } r_1 \le a + r \le r_2,\ b + r \ge r_2, \\[2pt] 0, & \text{if } a + r \ge r_2; \end{cases} \tag{6} $$

$$ \mathrm{Ch}^{p}\{f \le r\} = \begin{cases} 1, & \text{if } a + r, b + r \ge r_2, \\[2pt] \dfrac{b + r}{b - a} - \dfrac{r_2^2 - r_1^2 + (a + r - r_1)^2}{2(r_2 - r_1)(b - a)}, & \text{if } r_1 \le a + r \le r_2,\ b + r \ge r_2, \\[2pt] \dfrac{b + r}{b - a} - \dfrac{r_1 + r_2}{2(b - a)}, & \text{if } a + r \le r_1,\ b + r \ge r_2, \\[2pt] \dfrac{(b + r - r_1)^2 - (a + r - r_1)^2}{2(r_2 - r_1)(b - a)}, & \text{if } a + r \ge r_1,\ b + r \le r_2, \\[2pt] \dfrac{(b + r - r_1)^2}{2(r_2 - r_1)(b - a)}, & \text{if } a + r \le r_1,\ r_1 \le b + r \le r_2, \\[2pt] 0, & \text{if } a + r, b + r \le r_1; \end{cases} \tag{7} $$

$$ \mathrm{Ch}^{n}\{f \le r\} = \begin{cases} 1, & \text{if } a + r, b + r \ge r_4, \\[2pt] \dfrac{b + r}{b - a} - \dfrac{r_4^2 - r_3^2 + (r_3 - a - r)^2}{2(r_4 - r_3)(b - a)}, & \text{if } r_3 \le a + r \le r_4,\ b + r \ge r_4, \\[2pt] \dfrac{b + r}{b - a} - \dfrac{r_3 + r_4}{2(b - a)}, & \text{if } a + r \le r_3,\ b + r \ge r_4, \\[2pt] \dfrac{(r_3 - b - r)^2 - (r_3 - a - r)^2}{2(r_4 - r_3)(b - a)}, & \text{if } a + r \ge r_3,\ b + r \le r_4, \\[2pt] \dfrac{(r_3 - b - r)^2}{2(r_4 - r_3)(b - a)}, & \text{if } a + r \le r_3,\ r_3 \le b + r \le r_4, \\[2pt] 0, & \text{if } a + r, b + r \le r_3. \end{cases} \tag{8} $$
Proof. This part only proves Eq. (5) for the case a + r ≤ r3, b + r ≥ r4; the other cases can be proved in a similar way. Note that

\[
\mathrm{Ch}^{p}\{f \ge r\}
= \int_0^1 \mathrm{Pos}\bigl\{\Pr\{\eta \le \xi - r\} \ge \alpha\bigr\}\,d\alpha
= \int_0^1 \mathrm{Pos}\bigl\{\xi - r \ge \Phi^{-1}(\alpha)\bigr\}\,d\alpha
= \int_0^1 \mathrm{Pos}\bigl\{\xi \ge \alpha(b-a)+a+r\bigr\}\,d\alpha
= \frac{1}{b-a}\int_{a+r}^{b+r} \mathrm{Pos}\{\xi \ge x\}\,dx,
\]

where Φ denotes the distribution function of η. Since

\[
\mathrm{Pos}\{\xi \ge x\}=
\begin{cases}
1, & \text{if } x \le r_3,\\[2pt]
\dfrac{x-r_4}{r_3-r_4}, & \text{if } r_3 \le x \le r_4,\\[2pt]
0, & \text{if } x \ge r_4,
\end{cases}
\]

it follows that

\[
\mathrm{Ch}^{p}\{f \ge r\}
= \frac{1}{b-a}\left(\int_{a+r}^{r_3} dx + \int_{r_3}^{r_4}\frac{x-r_4}{r_3-r_4}\,dx + \int_{r_4}^{b+r} 0\,dx\right)
= \frac{1}{b-a}\left(r_3-a-r+\frac{r_4-r_3}{2}\right)
= \frac{r_3+r_4}{2(b-a)}-\frac{a+r}{b-a}.
\]

The proof is thus completed.

Proposition 5.3. Suppose that f = ξ − η, where ξ = (r1, r2, r3, r4) is a trapezoidal fuzzy variable and η ∼ U(a, b) is a uniformly distributed random variable. Then

(i) if b − a ≥ r2 − r1,

\[
f_{\inf}(1;\alpha)=
\begin{cases}
r_1-b+\sqrt{2\alpha(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{r_2-r_1}{2(b-a)}\right],\\[4pt]
\alpha(b-a)-b+\frac{r_1+r_2}{2}, & \text{if } \alpha \in \left[\frac{r_2-r_1}{2(b-a)},\ 1-\frac{r_2-r_1}{2(b-a)}\right],\\[4pt]
r_2-a-\sqrt{(2-2\alpha)(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left[1-\frac{r_2-r_1}{2(b-a)},\ 1\right];
\end{cases}
\]

(ii) if b − a ≤ r2 − r1,

\[
f_{\inf}(1;\alpha)=
\begin{cases}
r_1-b+\sqrt{2\alpha(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{b-a}{2(r_2-r_1)}\right],\\[4pt]
\alpha(r_2-r_1)+r_1-\frac{b+a}{2}, & \text{if } \alpha \in \left[\frac{b-a}{2(r_2-r_1)},\ 1-\frac{b-a}{2(r_2-r_1)}\right],\\[4pt]
r_2-a-\sqrt{(2-2\alpha)(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left[1-\frac{b-a}{2(r_2-r_1)},\ 1\right].
\end{cases}
\]
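As a quick numeric sanity check of Proposition 5.2 (our own illustration, not part of the paper), the closed form of Eq. (5) for the case a + r ≤ r3, b + r ≥ r4 can be compared with a direct Riemann-sum evaluation of the defining integral (1/(b − a)) ∫ Pos{ξ ≥ x} dx; all function names below are ours:

```python
def pos_geq(x, r1, r2, r3, r4):
    # Pos{xi >= x} for a trapezoidal fuzzy variable xi = (r1, r2, r3, r4)
    if x <= r3:
        return 1.0
    if x >= r4:
        return 0.0
    return (x - r4) / (r3 - r4)

def chp_closed(r, a, b, r3, r4):
    # Eq. (5), case a + r <= r3 and b + r >= r4 (the case proved in the text)
    return (r3 + r4) / (2 * (b - a)) - (a + r) / (b - a)

def chp_numeric(r, a, b, r1, r2, r3, r4, n=100_000):
    # midpoint Riemann sum of (1/(b-a)) * integral of Pos{xi >= x} over [a+r, b+r]
    h = (b - a) / n
    return sum(pos_geq(a + r + (i + 0.5) * h, r1, r2, r3, r4) for i in range(n)) / n
```

For instance, with ξ = (4, 5, 6, 7), η ∼ U(0, 10) and r = 0, both evaluations give 0.65.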
Remark 5.1. If a random fuzzy variable f satisfies the conditions of Proposition 5.2, it follows from the definition of Ch_λ that the chance measures Ch_λ{f ≤ r} and Ch_λ{f ≥ r} can also be computed by analytic methods.
Proposition 5.4. Suppose that f = ξ − η, where ξ = (r1, r2, r3, r4) is a trapezoidal fuzzy variable and η ∼ U(a, b) is a uniformly distributed random variable. Then

(i) if b − a ≥ r4 − r3,

\[
f_{\inf}(0;\alpha)=
\begin{cases}
r_3-b+\sqrt{2\alpha(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{r_4-r_3}{2(b-a)}\right],\\[4pt]
\alpha(b-a)-b+\frac{r_3+r_4}{2}, & \text{if } \alpha \in \left[\frac{r_4-r_3}{2(b-a)},\ 1-\frac{r_4-r_3}{2(b-a)}\right],\\[4pt]
r_4-a-\sqrt{(2-2\alpha)(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left[1-\frac{r_4-r_3}{2(b-a)},\ 1\right];
\end{cases}
\]

(ii) if b − a ≤ r4 − r3,

\[
f_{\inf}(0;\alpha)=
\begin{cases}
r_3-b+\sqrt{2\alpha(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{b-a}{2(r_4-r_3)}\right],\\[4pt]
\alpha(r_4-r_3)+r_3-\frac{b+a}{2}, & \text{if } \alpha \in \left[\frac{b-a}{2(r_4-r_3)},\ 1-\frac{b-a}{2(r_4-r_3)}\right],\\[4pt]
r_4-a-\sqrt{(2-2\alpha)(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left[1-\frac{b-a}{2(r_4-r_3)},\ 1\right].
\end{cases}
\]

Proposition 5.5. Suppose that f = ξ − η, where ξ = (r1, r2, r3, r4) is a trapezoidal fuzzy variable and η ∼ U(a, b) is a uniformly distributed random variable. Then

(i) if b − a ≥ r4 − r3,

\[
f_{\sup}(1;\alpha)=
\begin{cases}
r_4-a-\sqrt{2\alpha(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{r_4-r_3}{2(b-a)}\right],\\[4pt]
\frac{r_3+r_4}{2}-a-\alpha(b-a), & \text{if } \alpha \in \left[\frac{r_4-r_3}{2(b-a)},\ 1-\frac{r_4-r_3}{2(b-a)}\right],\\[4pt]
r_3-b+\sqrt{(2-2\alpha)(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left[1-\frac{r_4-r_3}{2(b-a)},\ 1\right];
\end{cases}
\]

(ii) if b − a ≤ r4 − r3,

\[
f_{\sup}(1;\alpha)=
\begin{cases}
r_4-a-\sqrt{2\alpha(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{b-a}{2(r_4-r_3)}\right],\\[4pt]
r_4-\alpha(r_4-r_3)-\frac{b+a}{2}, & \text{if } \alpha \in \left[\frac{b-a}{2(r_4-r_3)},\ 1-\frac{b-a}{2(r_4-r_3)}\right],\\[4pt]
r_3-b+\sqrt{(2-2\alpha)(r_4-r_3)(b-a)}, & \text{if } \alpha \in \left[1-\frac{b-a}{2(r_4-r_3)},\ 1\right].
\end{cases}
\]

Proposition 5.6. Suppose that f = ξ − η, where ξ = (r1, r2, r3, r4) is a trapezoidal fuzzy variable and η ∼ U(a, b) is a uniformly distributed random variable. Then

(i) if b − a ≥ r2 − r1,

\[
f_{\sup}(0;\alpha)=
\begin{cases}
r_2-a-\sqrt{2\alpha(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{r_2-r_1}{2(b-a)}\right],\\[4pt]
\frac{r_1+r_2}{2}-a-\alpha(b-a), & \text{if } \alpha \in \left[\frac{r_2-r_1}{2(b-a)},\ 1-\frac{r_2-r_1}{2(b-a)}\right],\\[4pt]
r_1-b+\sqrt{(2-2\alpha)(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left[1-\frac{r_2-r_1}{2(b-a)},\ 1\right];
\end{cases}
\]

(ii) if b − a ≤ r2 − r1,

\[
f_{\sup}(0;\alpha)=
\begin{cases}
r_2-a-\sqrt{2\alpha(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left(0,\ \frac{b-a}{2(r_2-r_1)}\right],\\[4pt]
r_2-\frac{b+a}{2}-\alpha(r_2-r_1), & \text{if } \alpha \in \left[\frac{b-a}{2(r_2-r_1)},\ 1-\frac{b-a}{2(r_2-r_1)}\right],\\[4pt]
r_1-b+\sqrt{(2-2\alpha)(r_2-r_1)(b-a)}, & \text{if } \alpha \in \left[1-\frac{b-a}{2(r_2-r_1)},\ 1\right].
\end{cases}
\]

Propositions 5.3–5.6 can be proved by using Proposition 5.2.

Theorem 5.1. In models (2)–(4), suppose that F(x, y, z) = h(x, y, z) − v(x, y, z), where h(x, y, z) = (h1(x, y, z), h2(x, y, z), h3(x, y, z), h4(x, y, z)) is a trapezoidal fuzzy variable and v(x, y, z) ∼ U(v1(x, y, z), v2(x, y, z)) is a uniformly distributed random variable. Then for any solution (x, y, z), the objectives can be computed by analytic methods according to the results of Propositions 5.2–5.6.

Theorem 5.2. Let λ1 and λ2 be two parameters, and F1 and F2 the corresponding optimal objectives of model (2). If λ1 ≤ λ2, then F1 ≥ F2.
Proof. Let D1 and D2 be the feasible domains of model (2) with respect to λ1 and λ2, respectively. Since the chance measure Ch_λ is increasing with respect to λ, it is easy to prove that D1 ⊂ D2. Note that the pessimistic value is a decreasing function with respect to λ; it then follows that

\[
F_1 = \min_{(x,y,z)\in D_1} F(x,y,z)_{\inf}(\lambda_1;\alpha)
\ \ge\ \min_{(x,y,z)\in D_2} F(x,y,z)_{\inf}(\lambda_1;\alpha)
\ \ge\ \min_{(x,y,z)\in D_2} F(x,y,z)_{\inf}(\lambda_2;\alpha) = F_2.
\]

The proof is thus completed.
Theorem 5.3. Let α1 and α2 be two parameters, and F1 and F2 the corresponding optimal objectives of model (2). If α1 ≤ α2, then F1 ≤ F2.

Proof. Let D be the feasible domain of model (2). Since the pessimistic value is an increasing function with respect to α, it follows that

\[
F_1 = \min_{(x,y,z)\in D} F(x,y,z)_{\inf}(\lambda;\alpha_1)
\ \le\ \min_{(x,y,z)\in D} F(x,y,z)_{\inf}(\lambda;\alpha_2) = F_2.
\]

The proof is thus completed.

Theorem 5.4. Let α1 and α2 be two parameters, and F1 and F2 the corresponding optimal objectives of model (3). If α1 ≤ α2, then F1 ≥ F2.

Proof. Let D be the feasible domain of model (3). Since the optimistic value is a decreasing function with respect to α, it follows that

\[
F_1 = \min_{(x,y,z)\in D} F(x,y,z)_{\sup}(\lambda;\alpha_1)
\ \ge\ \min_{(x,y,z)\in D} F(x,y,z)_{\sup}(\lambda;\alpha_2) = F_2.
\]

The proof is thus completed.
Theorem 5.5. Let βij, δi ((i, j) ∈ E, i = 1, 2, . . . , n) and β̄ij, δ̄i ((i, j) ∈ E, i = 1, 2, . . . , n) be two sequences of parameters in model (2) (or model (3), model (4)), and F1 and F2 the corresponding optimal objectives. If βij ≤ β̄ij and δi ≤ δ̄i for all i and (i, j), then F1 ≤ F2. The proof is obvious.
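As an informal numeric check of Theorem 5.3 (ours, not from the paper), the closed form of Proposition 5.3(i) can be evaluated on a grid of α and tested for monotonicity; the function name below is a placeholder:

```python
import math

def f_inf_1(alpha, a, b, r1, r2):
    # pessimistic value f_inf(1; alpha) of f = xi - eta from Proposition 5.3,
    # case (i): b - a >= r2 - r1, where eta ~ U(a, b), xi = (r1, r2, r3, r4)
    assert b - a >= r2 - r1 and 0 < alpha <= 1
    t = (r2 - r1) / (2 * (b - a))
    if alpha <= t:
        return r1 - b + math.sqrt(2 * alpha * (r2 - r1) * (b - a))
    if alpha <= 1 - t:
        return alpha * (b - a) - b + (r1 + r2) / 2
    return r2 - a - math.sqrt((2 - 2 * alpha) * (r2 - r1) * (b - a))

# the pessimistic value is nondecreasing in alpha, as Theorem 5.3 requires
vals = [f_inf_1(k / 100, 0, 10, 4, 6) for k in range(1, 101)]
assert all(x <= y for x, y in zip(vals, vals[1:]))
```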
6. Hybrid genetic algorithm

In this section, a hybrid genetic algorithm is designed to solve the models; it consists of a potential path searching algorithm, a genetic algorithm and random fuzzy simulation algorithms. A genetic algorithm is a global search heuristic used to find exact or high-quality solutions to optimization problems; it is a particular class of evolutionary algorithms (EA) based on the mechanics of natural selection and natural genetics. The genetic algorithm was first introduced by Holland [12] in 1975 and was then developed further by many researchers, such as Goldberg [10], Fogel [8] and Mitchell [19]. Up to now, genetic algorithms have been successfully used to solve optimization problems such as the transportation problem, the vehicle routing problem, network optimization, scheduling problems and optimal control problems. For applications of genetic algorithms, we refer to Chang and Sim [1], Martens [18], Gen et al. [20], Yang et al. [25,26] and Gorman [9]. To introduce the algorithm more clearly, Fig. 1 shows the procedure of the hybrid genetic algorithm.

In a real railway network, there generally exist a large number of feasible paths for any given OD pair. To save computational time when searching for the optimal solutions of the models, we generally do not consider all the feasible paths; only a subset of feasible paths, called potential paths, is taken into consideration. Thus, in the hybrid genetic algorithm, the first step is to seek the potential paths for each OD. After that, the genetic algorithm with simulation techniques is employed to seek the best commodity flow assignment on each potential path, where the simulation techniques are used to compute the objectives and check the feasibility of solutions. The detailed techniques of each part of the algorithm are introduced as follows.
Fig. 1. The procedure of the algorithm.
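At a high level, the flow in Fig. 1 can be written as the following generic loop (our paraphrase; each callable stands for one of the procedures detailed in Sections 6.1–6.2, and the toy usage is purely illustrative):

```python
import random

def hybrid_ga(init, evaluate, select, crossover, mutate, pop_size, generations):
    # Generic skeleton of the hybrid algorithm: initialize a population over
    # the potential paths, then iterate selection / crossover / mutation,
    # keeping the best solution encountered.
    population = [init() for _ in range(pop_size)]
    best = min(population, key=evaluate)
    for _ in range(generations):
        objectives = [evaluate(x) for x in population]
        population = select(population, objectives)   # Section 6.2.3
        population = crossover(population)            # Section 6.2.4
        population = mutate(population)               # Section 6.2.5
        best = min(population + [best], key=evaluate)
    return best

# toy usage: minimize the sum of a 5-dimensional integer vector
random.seed(1)
best = hybrid_ga(
    init=lambda: [random.randint(0, 9) for _ in range(5)],
    evaluate=sum,
    select=lambda pop, objs: sorted(pop, key=sum)[:len(pop) // 2] * 2,
    crossover=lambda pop: pop,
    mutate=lambda pop: [[g if random.random() > 0.3 else random.randint(0, 9)
                         for g in x] for x in pop],
    pop_size=20,
    generations=50,
)
```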
6.1. Search potential paths

All the potential paths for each OD are determined in this part. Generally, there are two methods to seek the potential paths. One is to use a K-shortest path algorithm to seek K potential paths for each OD. The other is to determine the potential paths based on distance constraints, in which the decision-makers first give an upper bound on the length of favorite paths and then choose all the paths in the railway network satisfying the length constraint as the potential paths. The upper bound can be randomly set or calculated in the following way: (i) find the shortest path for the considered OD pair, and denote its length by L; (ii) given a parameter ω, set the upper bound as ω · L. In this paper, the second method is adopted, and a branch and bound based searching algorithm is designed to seek all the potential paths for each OD.

In this procedure, a path, denoted by P, is represented by a sequence of ordered nodes. For instance, P = {2, 4, 3, 8, 5, 6} denotes the path P: 2 → 4 → 3 → 8 → 5 → 6. Let U(i) denote the set of adjacent nodes of node i in the graph, i.e., U(i) = {j | (i, j) ∈ E}, let L be the actual length of path P, and let Upper_Bound be the upper bound on the length of favorite paths for the OD currently considered. The following branch and bound based procedure seeks all the potential paths between node i0 and node j0.

Step 1. Let P = {i0}, L = 0, i = i0; determine U(i0) and set U(i0) ← U(i0)\P;
Step 2. If i = i0 and U(i0) = ∅, stop;
Step 3. If U(i) ≠ ∅, choose a node k in U(i), let U(i) ← U(i)\{k}, P ← P ∪ {k}, L = L + dik; otherwise, go to Step 7;
Step 4. Determine U(k) and set U(k) ← U(k)\P. If k = j0 or U(k) = ∅, go to Step 6;
Step 5. If L < Upper_Bound, let i = k and go to Step 2;
Step 6. If k = j0 and L ≤ Upper_Bound, then P is a feasible path;
Step 7.
Return to the previous node in path P (denote it by i), delete the last node of P and reduce L by the length of the deleted arc, then go to Step 2.

6.2. Genetic algorithm
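The search procedure of Section 6.1 can be sketched in a few lines (our illustration; the adjacency-list encoding and all identifiers are assumptions, not the paper's code):

```python
from collections import defaultdict

def potential_paths(edges, length, i0, j0, upper_bound):
    # Enumerate all simple paths from i0 to j0 whose total length does not
    # exceed upper_bound, by depth-first search with pruning.
    adj = defaultdict(list)
    for i, j in edges:                        # assume an undirected network
        adj[i].append(j)
        adj[j].append(i)
    found = []

    def extend(path, acc):
        node = path[-1]
        if node == j0:                        # Step 6: a feasible path
            if acc <= upper_bound:
                found.append(list(path))
            return
        for k in adj[node]:
            if k in path:                     # keep the path simple
                continue
            d = length[(node, k)] if (node, k) in length else length[(k, node)]
            if acc + d <= upper_bound:        # Step 5: prune by the bound
                path.append(k)
                extend(path, acc + d)
                path.pop()                    # Step 7: backtrack

    extend([i0], 0)
    return found
```

For example, on the triangle graph with edges (1,2), (2,3), (1,3) of lengths 1, 1 and 3, the call `potential_paths(..., 1, 3, 3)` finds both 1 → 2 → 3 and 1 → 3, while an upper bound of 2 keeps only 1 → 2 → 3.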
After determining all the potential paths for each OD, a genetic algorithm is designed to search for the optimal railway transportation plan.

6.2.1. Solution representation in genetic algorithm
Assume that there are Tk potential paths for OD k. Then the total number of potential paths in the railway transportation network is Σ_{k=1}^{m} Tk, and a Σ_{k=1}^{m} Tk-dimensional vector is used to denote a solution:

X = (x11, x12, . . . , x1T1; x21, x22, . . . , x2T2; . . . ; xm1, xm2, . . . , xmTm).

In this solution, xkt denotes the amount of commodities transported on path t for OD k. Note that if a train is not fully loaded, a penalty cost is added to the total transportation cost; thus, in real conditions, it is better to maximize the number of fully loaded trains. In view of this fact, the following procedure is used to initialize a solution, where Pk denotes the set of potential paths for OD k and qk represents the capacity of each train for OD k.

Step 1. Set xij = 0, j = 1, 2, . . . , Ti, i = 1, 2, . . . , m; let k = 1;
Step 2. If k > m, stop; otherwise, let vk = ak, Ok = Pk;
Step 3. Randomly choose a potential path t in Ok;
Step 4. If vk ≤ qk, let xkt = vk, k++, go to Step 2; otherwise, go to Step 5;
Step 5. Generate a random number u ∈ (0, vk), let xkt = [u/qk] × qk; if xkt > 0, let vk ← vk − xkt and Ok ← Ok\{t}. If Ok ≠ ∅, go to Step 3; otherwise, go to Step 2.

We can see that for any xkt > 0, the optimal frequency of train service on path t of OD k is [xkt/qk] + 1 if [xkt/qk] ≠ xkt/qk, or [xkt/qk] if [xkt/qk] = xkt/qk.

6.2.2. Simulation algorithm
For each solution, it is necessary to check its feasibility and then compute the corresponding objective. In some circumstances, these two tasks can be carried out by the analytic methods presented in Section 5. If the involved random fuzzy variables are complex and the analytic methods are invalid, simulation algorithms can be designed instead. These issues are discussed in the following.

(1) Check feasibility by simulation
The mathematical models contain the following chance constraints:

\[
\mathrm{Ch}\left\{\sum_{k=1}^{m}\sum_{t=1}^{T_k} x_{kt}\,u_{ij}^{kt}\,y_{kt} \le c_{ij}\right\} \ge \beta_{ij}, \quad (i, j) \in E,
\]
\[
\mathrm{Ch}\left\{\sum_{k=1}^{m}\sum_{t=1}^{T_k} x_{kt}\,r_{i}^{kt}\,y_{kt} \le p_{i}\right\} \ge \delta_{i}, \quad i = 1, 2, \ldots, n.
\]

To check the feasibility of a solution, it is enough to simulate the corresponding chance measures. Note that in the above constraints, cij and pi are random fuzzy variables. For simplicity, the above two kinds of chance functions will be abbreviated to the more general form Ch{ξ ∈ A}, where ξ is a random fuzzy variable and A is a Borel set (for instance, if ξ = cij and A = [Σ_{k=1}^{m} Σ_{t=1}^{T_k} x_{kt} u_{ij}^{kt} y_{kt}, +∞), then Ch{ξ ∈ A} is the chance function in the first constraint). To simulate the chance measure Ch{ξ ∈ A}, we first need to simulate Pos{Pr{ξ(ω) ∈ A} ≥ α}, which can be rewritten as

\[
\mathrm{Pos}\{\Pr\{\xi(\omega) \in A\} \ge \alpha\}
= \mathrm{Pos}\{\omega \in \Omega \mid \Pr\{\xi(\omega) \in A\} \ge \alpha\}
= \sup_{\Pr\{\xi(\omega) \in A\} \ge \alpha} \mathrm{Pos}\{\omega\}.
\]
The procedure of the simulation algorithm is listed as follows:

Step 1. Let p = 0;
Step 2. Randomly generate an element ω with Pos{ω} > ε, where ε is a sufficiently small positive number;
Step 3. If Pr{ξ(ω) ∈ A} ≥ α, then let p = p ∨ Pos{ω}; otherwise, go to Step 2;
Step 4. Repeat Step 2 to Step 3 N (a sufficiently large integer) times, return p.

Next, a simulation algorithm will be designed to simulate the value Chp{ξ ∈ A}. By the definition of the Riemann integral, this measure can be rewritten as

\[
\mathrm{Ch}^{p}\{\xi \in A\}
= \int_0^1 \mathrm{Pos}\{\Pr\{\xi(\omega) \in A\} \ge \alpha\}\,d\alpha
= \lim_{n \to \infty} \frac{1}{n}\sum_{i=1}^{n} \mathrm{Pos}\left\{\Pr\{\xi(\omega) \in A\} \ge \frac{i}{n}\right\}.
\]

That is, the measure can be represented by the limit of the sequence (1/n) Σ_{i=1}^{n} Pos{Pr{ξ(ω) ∈ A} ≥ i/n}. Thus, if the integer n is large enough, then (1/n) Σ_{i=1}^{n} Pos{Pr{ξ(ω) ∈ A} ≥ i/n} can
be treated as an approximate value of Chp{ξ ∈ A}. The following procedure can then be employed to simulate the chance measure Chp{ξ ∈ A}:

Step 1. Let i = 1, y = 0, and let N be a sufficiently large integer;
Step 2. If i ≤ N, compute Pos{Pr{ξ(ω) ∈ A} ≥ i/N} by the simulation algorithm above, and let y = y + Pos{Pr{ξ(ω) ∈ A} ≥ i/N}; otherwise, go to Step 4;
Step 3. Let i++, go to Step 2;
Step 4. Return y/N.

Remark 6.1. The chance Chn{ξ ∈ A} can be simulated in a similar way, and Ch_λ{ξ ∈ A} is computed as the linear combination of Chp{ξ ∈ A} and Chn{ξ ∈ A}.

(2) Compute objective by simulation
In models (2)–(4), the objectives are the critical values of the total relevant cost F(x, y, z). It is therefore necessary to simulate the optimistic value F(x, y, z)_sup(λ; α) and the pessimistic value F(x, y, z)_inf(λ; α); the Hurwicz critical value can then be represented as a linear combination of the pessimistic and optimistic values. Note that F(x, y, z) is a random fuzzy variable for any feasible solution. For simplicity, we give the simulation algorithms for ξ_inf(λ; α) and ξ_sup(λ; α), where ξ is a random fuzzy variable.

• Simulate ξ_inf(λ; α)
Note that ξ_inf(λ; α) = inf{r | Ch_λ{ξ ≤ r} ≥ α}. Since Ch_λ{ξ ≤ r} is an increasing function of r, it is reasonable to use the following bisection algorithm to simulate the pessimistic value.

Step 1. Randomly generate two numbers r1 and r2 such that Ch_λ{ξ ≤ r1} < α and Ch_λ{ξ ≤ r2} ≥ α, where the chance measures are simulated as in Remark 6.1;
Step 2. Let r = (r1 + r2)/2;
Step 3. If Ch_λ{ξ ≤ r} < α, let r1 = r; otherwise, let r2 = r;
Step 4. If |r2 − r1| < ε (a predetermined constant), then return r2; otherwise, go to Step 2.

• Simulate ξ_sup(λ; α)
Note that ξ_sup(λ; α) = sup{r | Ch_λ{ξ ≥ r} ≥ α}. Since Ch_λ{ξ ≥ r} is a decreasing function of r, it is reasonable to use the following bisection algorithm to simulate the optimistic value.

Step 1. Randomly generate two numbers r1 and r2 such that Ch_λ{ξ ≥ r1} ≥ α and Ch_λ{ξ ≥ r2} < α;
Step 2. Let r = (r1 + r2)/2;
Step 3. If Ch_λ{ξ ≥ r} < α, let r2 = r; otherwise, let r1 = r;
Step 4. If |r2 − r1| < ε (a predetermined constant), then return r1; otherwise, go to Step 2.

By using the above procedures, we can calculate the approximate objective and check the feasibility of each solution.

6.2.3. Selection operation
In the algorithm, the role of the selection operation is to produce a new population for the crossover and mutation operations. In the current population, the chromosomes are first ranked from best to worst according to their objectives. Without loss of generality, the rearranged chromosomes are denoted by X1, X2, . . . , X_pop size, where X1 and X_pop size denote the best and the worst chromosomes, respectively. The rank-based fitness of each chromosome is calculated by the formula

fitness(Xi) = α(1 − α)^{i−1}, i = 1, 2, . . . , pop size,

where α ∈ (0, 1) is a predetermined parameter. Then, the selection operation is performed by spinning the roulette wheel pop size times. The procedure is listed as follows:

Step 1. Compute a sequence {zi}_{i=0}^{pop size} according to the formulas z0 = 0, zi = z_{i−1} + fitness(Xi), i = 1, 2, . . . , pop size;
Step 2. Generate a random number t in (0, z_pop size];
Step 3. If z_{i−1} < t ≤ zi, then chromosome Xi is selected as a member of the new population;
Step 4. Repeat Step 2 and Step 3 pop size times.

6.2.4. Crossover operation
First of all, the parents for the crossover operation need to be determined. The number of parents is based on the predetermined crossover probability Pc: an expected number Pc · pop size of chromosomes will be selected for the crossover process. To this end, the following procedure is designed:

Step 1. Let i = 1;
Step 2. If i ≤ pop size, go to Step 3; otherwise, stop;
Step 3. Randomly generate a number t in the interval (0, 1);
Step 4. If t < Pc, then chromosome Xi is selected to perform the crossover operation;
Step 5. Let i++, go to Step 2.
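The rank-based fitness and roulette-wheel spin of Section 6.2.3 can be sketched as follows (our illustration; all identifiers are placeholders):

```python
import random

def rank_fitness(pop_size, a=0.3):
    # rank-based fitness a * (1 - a)^(i-1), i = 1..pop_size, best chromosome first
    return [a * (1 - a) ** i for i in range(pop_size)]

def roulette_select(population, fitness):
    # spin the roulette wheel pop_size times over z_i = z_{i-1} + fitness(X_i)
    z = [0.0]
    for f in fitness:
        z.append(z[-1] + f)
    selected = []
    for _ in range(len(population)):
        t = random.uniform(0.0, z[-1])           # t in (0, z_pop_size]
        for i in range(1, len(z)):
            if z[i - 1] < t <= z[i]:
                selected.append(population[i - 1])
                break
        else:
            selected.append(population[0])       # guard against t == 0
    return selected
```

Because the fitness sequence decreases geometrically with rank, better-ranked chromosomes occupy larger slices of the wheel and are selected more often in expectation.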
The selected parents are denoted by X_{i1}, X_{i2}, . . . , X_{iK} after the above operations. These chromosomes are then divided into pairs, denoted by (X_{i1}, X_{i2}), . . . , (X_{iK−1}, X_{iK}). For each pair, without loss of generality denoted by (X, X̄) with the forms

X = (x11, x12, . . . , x1T1; . . . ; xi1, xi2, . . . , xiTi; x(i+1)1, x(i+1)2, . . . , x(i+1)T(i+1); . . . ; xm1, xm2, . . . , xmTm),
X̄ = (x̄11, x̄12, . . . , x̄1T1; . . . ; x̄i1, x̄i2, . . . , x̄iTi; x̄(i+1)1, x̄(i+1)2, . . . , x̄(i+1)T(i+1); . . . ; x̄m1, x̄m2, . . . , x̄mTm),

the following process performs the crossover operation:

Step 1. Randomly generate an integer i in the interval [1, m − 1] as the crossover point;
Step 2. Perform the crossover operation at the crossover point i, obtaining the two children chromosomes

X′ = (x̄11, x̄12, . . . , x̄1T1; . . . ; x̄i1, x̄i2, . . . , x̄iTi; x(i+1)1, x(i+1)2, . . . , x(i+1)T(i+1); . . . ; xm1, xm2, . . . , xmTm),
X̄′ = (x11, x12, . . . , x1T1; . . . ; xi1, xi2, . . . , xiTi; x̄(i+1)1, x̄(i+1)2, . . . , x̄(i+1)T(i+1); . . . ; x̄m1, x̄m2, . . . , x̄mTm);

Step 3. If X′ (or X̄′) is feasible, replace X (or X̄) in the population by X′ (or X̄′).

6.2.5. Mutation operation
Similar to the crossover operation, the first step of this part is to determine the chromosomes for the mutation operation. Let Pm be the mutation probability; then an expected number Pm · pop size of chromosomes will be selected. The following procedure is repeated pop size times: at the ith time, randomly generate a number t in the interval (0, 1); if t < Pm, then chromosome Xi is selected to perform the mutation operation. The selected chromosomes are denoted by X_{i1}, X_{i2}, . . . , X_{iK}. For each selected chromosome X, the following procedure performs the mutation:

Step 1. Randomly generate an integer i ∈ [1, m], and then generate two random integers j, k ∈ [1, Ti] with xij > 0;
Step 2. If xij ≤ qi, let xik ← xik + xij, xij = 0, stop; otherwise, go to Step 3;
Step 3. Generate a random integer u ∈ [1, [xij/qi]], let xik ← xik + u · qi, xij ← xij − u · qi, stop.

After performing the above operations, the new chromosome is denoted by X′. If X′ is feasible, it replaces X in the population.

6.2.6. Hybrid genetic algorithm
The hybrid genetic algorithm can be summarized by the following procedure, which is the verbal description of the flow chart in Fig. 1:
Step 1. Generate all the potential paths for each OD by the branch and bound based potential path searching algorithm;
Step 2. Determine the solution representation based on the potential paths of each OD;
Step 3. Randomly initialize pop size feasible chromosomes in the population, denoted by X1, X2, . . . , X_pop size;
Step 4. Compute the objectives of the chromosomes in the population by random fuzzy simulations or analytic methods;
Step 5. Perform the selection operation;
Step 6. Perform the crossover operation, where the feasibility of new chromosomes can be checked by the simulation algorithms if the analytic methods are invalid;
Step 7. Perform the mutation operation, where the feasibility of new chromosomes can be checked by the simulation algorithms if the analytic methods are invalid;
Step 8. Repeat Step 4 to Step 7 a given number of times (denoted by Generation);
Step 9. Output the best solution encountered.

7. Numerical example

In this section, some numerical examples are performed to show the applications of the models and the algorithm.

Example 7.1. As shown in Fig. 2, suppose that the railway transportation network consists of 31 nodes and 54 arcs. The relevant experimental data are listed in Tables 1 and 2, including the arcs, the nodes, the passing capacity of each arc, the length of each arc, the unit transportation cost on each arc and the turnover capacity of each station. There exist 6 ODs in the transportation network. For each OD, the origin, the destination, the upper bound of the favorite paths' length, the amount of total commodities and the fixed charge for each train are listed in Table 3. In addition, the penalty parameter U is supposed to be a uniformly distributed random variable, U ∼ U(4000, 5000), and the capacity of each train is 10. In Tables 1–3, the notation "(a, b, c)" denotes a triangular fuzzy variable and the notation "U(a, b)" denotes a uniformly distributed random variable. Model (2) is employed as an experimental model,
Table 1 The relevant experimental data.

Arc (i, j) | Length (dij) | Capacity (cij) | Unit cost (eijk)
(1, 2) | 56 | (80, 90, 99) | (4.0, 5.2, 5.4)
(1, 3) | 78 | (90, 95, 96) | (4.4, 4.5, 5.0)
(1, 6) | 53 | (82, 86, 90) | (5.4, 6.2, 6.4)
(2, 4) | 95 | (90, 95, 98) | (5.5, 5.7, 6.5)
(3, 5) | 61 | (87, 89, 92) | (3.5, 3.8, 4.0)
(3, 6) | 55 | (85, 89, 93) | (3.5, 4.2, 4.4)
(4, 6) | 37 | (81, 84, 88) | (6.0, 6.1, 6.5)
(4, 7) | 35 | (83, 86, 89) | (7.0, 8.5, 8.7)
(5, 8) | 57 | (95, 97, 99) | (5.3, 6.0, 6.1)
(5, 10) | 72 | (81, 85, 89) | (4.9, 5.1, 6.7)
(6, 8) | 48 | (83, 86, 88) | (8.0, 8.3, 9.5)
(6, 10) | 27 | (81, 84, 88) | (6.1, 6.3, 6.9)
(6, 11) | 56 | (90, 95, 98) | (8.3, 8.9, 9.0)
(7, 11) | 36 | (81, 84, 88) | (5.1, 5.2, 6.0)
(7, 12) | 77 | (95, 96, 98) | (6.1, 6.3, 7.0)
(8, 9) | 89 | (85, 89, 93) | (7.8, 7.9, 8.5)
(9, 15) | 56 | (83, 86, 89) | (8.3, 8.7, 9.0)
(9, 16) | 48 | (97, 98, 99) | (7.6, 8.2, 8.4)
(10, 14) | 35 | (85, 89, 93) | (6.4, 7.4, 8.0)
(10, 15) | 50 | (92, 94, 97) | (5.7, 6.1, 6.3)
(10, 17) | 69 | (93, 95, 99) | (4.5, 4.7, 5.2)
(11, 14) | 69 | (81, 83, 85) | (2.5, 2.7, 3.2)
(12, 13) | 45 | (82, 86, 90) | (3.3, 3.7, 4.2)
(12, 14) | 66 | (86, 88, 90) | (3.5, 3.7, 4.0)
(13, 22) | 33 | (83, 86, 92) | (4.2, 4.4, 4.6)
(14, 15) | 44 | (81, 83, 87) | (1.5, 1.6, 2.0)
(14, 20) | 48 | (88, 90, 95) | (4.1, 4.3, 4.6)
(30, 31) | 61 | (82, 85, 90) | (4.6, 4.8, 5.0)
(14, 21) | 93 | (92, 94, 98) | (8.7, 9.2, 9.3)
(14, 22) | 55 | (95, 97, 99) | (7.6, 8.1, 8.2)
(15, 19) | 85 | (93, 96, 98) | (2.0, 2.6, 3.0)
(15, 20) | 37 | (95, 97, 98) | (6.8, 7.4, 7.6)
(15, 25) | 51 | (87, 90, 99) | (3.7, 4.2, 4.5)
(16, 18) | 85 | (81, 84, 88) | (3.6, 3.8, 4.6)
(17, 20) | 38 | (83, 86, 88) | (8.0, 8.4, 8.5)
(18, 25) | 65 | (85, 89, 93) | (5.1, 5.6, 5.8)
(19, 24) | 74 | (82, 84, 88) | (4.0, 4.1, 5.0)
(20, 24) | 68 | (90, 92, 95) | (6.0, 6.8, 7.0)
(20, 25) | 48 | (85, 88, 93) | (3.3, 3.6, 3.8)
(20, 28) | 74 | (83, 86, 88) | (3.7, 4.2, 4.4)
(21, 23) | 79 | (82, 84, 87) | (5.4, 6.1, 6.3)
(21, 27) | 73 | (83, 89, 93) | (8.5, 8.9, 9.2)
(21, 28) | 84 | (86, 90, 94) | (6.1, 6.2, 7.0)
(22, 23) | 58 | (90, 96, 99) | (7.0, 7.4, 7.5)
(23, 28) | 37 | (91, 94, 98) | (6.0, 6.2, 6.7)
(23, 29) | 63 | (82, 84, 87) | (5.1, 5.8, 6.0)
(24, 26) | 85 | (80, 84, 87) | (4.5, 4.7, 4.8)
(24, 27) | 36 | (91, 94, 98) | (5.5, 5.6, 5.7)
(25, 26) | 35 | (88, 92, 97) | (6.1, 6.5, 7.0)
(26, 31) | 44 | (86, 92, 99) | (6.3, 6.5, 6.8)
(27, 28) | 56 | (95, 97, 99) | (6.7, 6.9, 7.2)
(27, 31) | 62 | (84, 88, 90) | (6.6, 6.9, 7.0)
(28, 30) | 25 | (88, 92, 94) | (2.1, 2.5, 3.0)
(28, 31) | 35 | (82, 88, 93) | (3.4, 3.6, 4.0)
(29, 30) | 52 | (93, 96, 98) | (4.2, 4.5, 5.0)
Table 4 Comparison of the optimal objectives (α = 1.0).

pop size | Pc | Pm | Generation | Opti Obj. | Relative error | Consumed time (s)
20 | 0.4 | 0.5 | 200 | 26212.90 | 0.0821% | 2
20 | 0.6 | 0.7 | 400 | 26203.40 | 0.0458% | 3
30 | 0.3 | 0.4 | 300 | 26244.00 | 0.2008% | 3
30 | 0.6 | 0.5 | 300 | 26245.40 | 0.2062% | 3
30 | 0.6 | 0.7 | 500 | 26251.90 | 0.2310% | 5
40 | 0.3 | 0.5 | 500 | 26238.90 | 0.1814% | 5
40 | 0.4 | 0.6 | 700 | 26191.40 | 0.0000% | 6
50 | 0.5 | 0.5 | 800 | 26245.40 | 0.2062% | 9
50 | 0.4 | 0.6 | 900 | 26236.90 | 0.1737% | 10
50 | 0.6 | 0.7 | 1000 | 26212.90 | 0.0821% | 12
Fig. 2. The railway transportation network.

Table 2 Turnover capacity of each station.

Station (i) | Turnover capacity (pi)
1 | U(200, 250)
2 | U(300, 350)
3 | U(220, 240)
4 | U(300, 340)
5 | U(210, 250)
6 | U(400, 450)
7 | U(500, 520)
8 | U(200, 250)
9 | U(400, 450)
10 | U(230, 250)
11 | U(300, 350)
12 | U(230, 280)
13 | U(400, 450)
14 | U(200, 250)
15 | U(270, 300)
16 | U(200, 250)
17 | U(100, 160)
18 | U(240, 280)
19 | U(310, 350)
20 | U(600, 750)
21 | U(370, 400)
22 | U(200, 250)
23 | U(220, 270)
24 | U(200, 230)
25 | U(300, 350)
26 | U(340, 390)
27 | U(400, 450)
28 | U(200, 250)
29 | U(260, 280)
30 | U(190, 250)
31 | U(330, 350)

Table 5 Comparison of the optimal objectives.
in which the chance-constraint confidence levels are, respectively, βij = 0.9, (i, j) ∈ E, δi = 0.9, i = 1, 2, . . . , 31, and λ = 1. To solve this model, the potential path searching algorithm is used to seek all the potential paths for each OD, whose lengths should not exceed the upper bounds listed in Table 3. After this operation, we obtain five potential paths for OD 1, six for OD 2, five for OD 3, five for OD 4, five for OD 5 and five for OD 6. The hybrid genetic algorithm is implemented with Microsoft Visual C++ 6.0 and performed on a computer with an Intel(R) Pentium(R) M 1.60 GHz processor. For the case α = 1.0, the corresponding results are listed in Table 4 for different parameter groups of the algorithm. In this table, "Opti Obj." represents the optimal objective obtained by the algorithm. Although the parameter groups differ, the relative errors of the optimal objectives are no larger than 0.2310%, which shows the steadiness and robustness of the algorithm. The relative error is computed according to the following formula:

relative error = (Opti Obj. − minimal Opti Obj.) / minimal Opti Obj.
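For instance, taking the first row of Table 4 (our arithmetic):

```python
# first row of Table 4: Opti Obj. = 26212.90, minimal Opti Obj. = 26191.40
rel_error = (26212.90 - 26191.40) / 26191.40
print(f"{rel_error:.4%}")   # prints 0.0821%, matching the "Relative error" column
```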
Para. α | Opti Obj.
0.05 | 22608.21
0.10 | 22833.65
0.15 | 23050.52
0.20 | 23171.74
0.25 | 23322.05
0.30 | 23472.05
0.35 | 23622.05
0.40 | 23772.05
0.45 | 23922.05
0.50 | 24072.05
0.55 | 24222.05
0.60 | 24372.05
0.65 | 24522.05
0.70 | 24672.05
0.75 | 24822.05
0.80 | 24961.52
0.85 | 25174.38
0.90 | 25366.42
0.95 | 25591.98
1.00 | 26191.40

Table 3 The relevant data with respect to different ODs.

OD (k) | Origin | Destination | Upper Bound | Transportation amount (ak) | Fix charge (hk)
1 | 3 | 12 | 245 | 35 | (100, 120, 140)
2 | 5 | 20 | 215 | 22 | (100, 120, 140)
3 | 10 | 23 | 210 | 115 | (100, 120, 140)
4 | 18 | 31 | 280 | 30 | (100, 120, 140)
5 | 19 | 28 | 240 | 46 | (100, 120, 140)
6 | 26 | 29 | 235 | 102 | (100, 120, 140)

In this experiment, the optimal solution is obtained with the parameters pop size = 40, Pc = 0.4, Pm = 0.6 and Generation = 700, and the optimal objective of the model is 26191.40. For the different groups of parameters, the computational time is no longer than 12 s.

In addition, it is worthwhile to analyze the sensitivity of the optimal objectives with respect to the parameter α. For instance, when pop size = 40, Pc = 0.4, Pm = 0.6 and Generation = 700, the model is solved with different α, and the corresponding optimal objectives are listed in Table 5. For the convenience of understanding, the change curve of the optimal objectives is shown in Fig. 3. It is easy to see that the curve of the optimal objectives is nearly linear with respect to the parameter α. Also, the optimal objective increases as α increases, which coincides with Theorem 5.3. For each experiment, the computational time is no longer than 7 s. Additionally, if a different parameter α is used in the algorithm, the optimal solutions can differ: for instance, for α = 1.0 and α = 0.9 the corresponding optimal solutions are different (see Tables 6 and 7).

Example 7.2. In this example, the relevant data are the same as those in Example 7.1. We use the chance measure Ch_λ to construct the pessimistic chance-constrained model. In the algorithm, the simulation algorithm is used to compute the objective function (with ε = 1). For this example, the optimal objectives are considered with respect to the different parameters λ and α. For each pair (λ; α), we run the algorithm 50 times with different groups of parameters, where pop size = 30, the crossover probability Pc and the mutation probability Pm are randomly generated in the interval [0.5, 0.7],
Fig. 3. Optimal objectives with respect to α.
Table 6 The optimal solution (α = 1.0).

OD | Transportation amount | Frequency | Optimal paths
1 | 30 | 3 | 3 → 5 → 10 → 14 → 12
1 | 5 | 1 | 3 → 6 → 10 → 15 → 14 → 12
2 | 12 | 2 | 5 → 10 → 14 → 20
2 | 10 | 1 | 5 → 10 → 17 → 20
3 | 45 | 5 | 10 → 14 → 21 → 23
3 | 70 | 7 | 10 → 15 → 14 → 22 → 23
4 | 30 | 3 | 18 → 25 → 20 → 28 → 31
5 | 46 | 5 | 19 → 15 → 20 → 28
6 | 10 | 1 | 26 → 25 → 20 → 28 → 30 → 29
6 | 10 | 1 | 26 → 31 → 28 → 23 → 29
6 | 82 | 9 | 26 → 31 → 30 → 29
Fig. 4. Optimal objectives with respect to α.
Table 7 The optimal solution (α = 0.9).

OD | Transportation amount | Frequency | Optimal paths
1 | 35 | 4 | 3 → 5 → 10 → 14 → 12
2 | 2 | 1 | 5 → 10 → 14 → 20
2 | 10 | 1 | 5 → 10 → 15 → 14 → 20
2 | 10 | 1 | 5 → 10 → 17 → 20
3 | 20 | 2 | 10 → 14 → 20 → 28 → 23
3 | 25 | 3 | 10 → 14 → 21 → 23
3 | 70 | 7 | 10 → 15 → 14 → 22 → 23
4 | 30 | 3 | 18 → 25 → 20 → 28 → 31
5 | 26 | 3 | 19 → 15 → 20 → 28
5 | 20 | 2 | 19 → 24 → 27 → 28
6 | 10 | 1 | 26 → 25 → 20 → 28 → 30 → 29
6 | 10 | 1 | 26 → 31 → 28 → 23 → 29
6 | 82 | 9 | 26 → 31 → 30 → 29
and the parameter Generation is randomly generated in the interval [500, 1000]. Then, the best objective encountered will be treated as an optimal objective. The solving process can be summarized as follows: Step 1. Let i = 1, best obj = M (a sufficient large number); Step 2. If i > 50, go to Step 5; otherwise, randomly generate Pc , Pm in the interval [0.5, 0.7], and generate parameter Generation in the interval [500, 1000]; Step 3. Solve the model by the hybrid genetic algorithm, and the corresponding optimal objective is denoted by v; Step 4. Let best obj ← best obj ∧ v, i + +, go to Step 2; Step 5. best obj will be treated as the optimal objective. When using the different parameters, the corresponding optimal objectives are listed in Table 8. For the convenience of understanding, we give Figs. 4 and 5 to show the changes of the opti-
Fig. 5. Optimal objectives with respect to .
mal objectives with respect to different parameters. In Fig. 4, when the parameter is fixed, the optimal objective is increasing with the increase of parameter ˛, which also coincides with Theorem 5.3. Additionally, for different , the corresponding curves are similar to each other, and the change ratios in intervals [0.1, 0.2] and [0.8, 0.9] are larger than those in other intervals. Fig. 5 shows the changes of optimal objectives with respect to . We can see that the optimal objective is decreasing with the increase of parameter , which also coincides with Theorem 5.2. Example 7.3. To further show the performance of the hybrid genetic algorithm, more experiments are performed with the different numbers of OD pairs. In these experiments, the railway network in Fig. 2 is also employed. The OD pairs in each experiment are randomly chosen and the transportation demand for each OD is also randomly generated in a given interval. In each experiment, the upper bounds of the potential paths are randomly set, which ensures that there exist at least 3 potential paths for each OD pair. In the pessimistic value model, set = 0.7, ˛ = 0.9, ˇij = 0.9, (i, j) ∈ E, ıi = 0.9, i = 1, 2, . . . , 31, and the other parameters are the same to those in Example 7.1. In the experiment, the number of OD pairs is first predetermined, then the hybrid genetic algorithm will run 200 times (or called 200 cycles) by using different groups of parameters. In details, the relevant parameters in each cycle are set as follows: pop size = 30, Pc and pm are randomly generated in [0.5, 0.7], and the parameter generation is randomly generated in [500, 1000]. Some characteristics are recorded for the different experiments, which are listed in Table 9. In this table, “Demand interval” repre-
Table 8
Comparison of the optimal objectives.

γ\α   0.1        0.2        0.3        0.4        0.5        0.6        0.7        0.8        0.9
0.1   23953.25   24351.50   24658.20   24960.33   25262.45   25569.15   25889.59   26211.59   26583.86
0.2   23762.51   24217.22   24536.13   24838.26   25134.28   25451.66   25779.72   26133.73   26531.98
0.3   23579.41   24063.11   24412.54   24713.13   25013.73   25323.49   25660.71   26031.49   26466.37
0.4   23402.40   23907.47   24284.36   24594.12   24891.66   25193.79   25527.95   25917.05   26387.02
0.5   23255.92   23764.04   24145.51   24468.99   24772.64   25071.72   25395.20   25779.72   26280.21
0.6   23152.16   23628.23   24008.18   24343.87   24650.57   24948.12   25260.93   25633.24   26141.36
0.7   23069.76   23503.11   23876.95   24218.75   24526.98   24838.26   25128.17   25483.70   25970.46
0.8   22998.05   23406.98   23759.46   24087.52   24403.38   24702.45   25003.05   25331.12   25779.72
0.9   22848.51   23223.88   23550.42   23870.85   24177.55   24482.73   24778.75   25088.50   25492.86
Table 9
Computational results of different experiments.

OD pairs   Demand interval   Max Error   Mean Error   Consumed time (s)
6          [40, 90]          1.7583%     0.2046%      962
8          [30, 70]          0.9049%     0.1780%      917
10         [20, 60]          0.4562%     0.0450%      1000
12         [20, 40]          0.3559%     0.1302%      898
14         [20, 30]          0.1464%     0.0288%      853
16         [20, 40]          0.3726%     0.0611%      1743
sents the interval in which the transportation demand is randomly generated for each OD pair. Note that in each experiment, the hybrid genetic algorithm runs 200 times with different groups of parameters, and an optimal objective is obtained in each run; thus, a total of 200 optimal objectives are obtained in each experiment. We also use the method presented in Example 7.1 to calculate the relative error of each optimal objective. “Max Error” is the maximum relative error over the optimal objectives, and “Mean Error” is the mean of the relative errors of the optimal objectives. In addition, “Consumed time” is the total computational time in each experiment. As Table 9 shows, numerical experiments with up to 16 OD pairs were tested. Even in the worst case (6 OD pairs), the maximum relative error is only 1.7583%, and both the maximum relative error and the mean relative error for the other experiments are less than 1.0000%. This fact shows the robustness and steadiness of the hybrid genetic algorithm. As for the computational time, when the number of OD pairs is not larger than 14, the mean computational time is less than 5 s per cycle, and when the number of OD pairs is 16, the mean computational time is less than 9 s per cycle.

8. Conclusions

On the tactical planning level, a railway freight transportation planning problem was investigated under a mixed uncertain environment of randomness and fuzziness. In the problem, some parameters, such as hk, pi, cij, eijk and U, are supposed to be random variables or fuzzy variables. In this case, the traditional deterministic mathematical model becomes meaningless.
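As an aside, the multi-start scheme used throughout Section 7 (Steps 1–5 in Example 7.2) can be sketched in Python as follows. This is a minimal illustration only: `solve_by_hybrid_ga` is a hypothetical stand-in (here a toy surrogate objective) for the paper's hybrid genetic algorithm, and the parameter intervals are taken from the experiments above.

```python
import random

def solve_by_hybrid_ga(pc, pm, generations):
    """Hypothetical stand-in for the hybrid genetic algorithm of the paper.
    Returns the best objective found with crossover rate pc, mutation
    rate pm and the given generation count (toy surrogate, for
    illustration only)."""
    return 23000 + random.random() * 1000 - 500 * pc - 200 * pm

def multi_start(cycles=50):
    best_obj = float("inf")                   # Step 1: best_obj = M (sufficiently large)
    for _ in range(cycles):                   # Step 2: stop after the given number of cycles
        pc = random.uniform(0.5, 0.7)         #   Pc randomly generated in [0.5, 0.7]
        pm = random.uniform(0.5, 0.7)         #   Pm randomly generated in [0.5, 0.7]
        gens = random.randint(500, 1000)      #   Generation randomly generated in [500, 1000]
        v = solve_by_hybrid_ga(pc, pm, gens)  # Step 3: run the hybrid GA once
        best_obj = min(best_obj, v)           # Step 4: best_obj <- best_obj ∧ v
    return best_obj                           # Step 5: report the best objective found
```

Restarting the GA with randomized control parameters in this way reduces the sensitivity of the reported optimum to any single parameter setting, which is what the relative-error statistics in Tables 8 and 9 measure.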
To deal with this problem, a new chance measure was defined to describe the chance of occurrence of random fuzzy events, and the definitions of critical values of random fuzzy variables were then introduced based on the chance measure, including the optimistic value, the pessimistic value and the Hurwicz criterion value. On the basis of the chance measure and the critical values of random fuzzy variables, three kinds of mathematical models of the railway freight transportation planning problem were constructed: the pessimistic value model, the optimistic value model and the Hurwicz criterion model. These models are frequency service network design models. To facilitate solving the models, some mathematical properties of the models with respect to different parameters were discussed, and some equivalents of the objectives and constraints were also investigated. To solve the models, a hybrid genetic algorithm, which integrates a potential path searching algorithm, simulation algorithms and a genetic algorithm, was designed to seek the optimal solutions of the models. The numerical examples showed that the designed algorithm is steady and robust for problems that are not of very large scale. Additionally, it is worth pointing out that the main focus of this paper is to provide different decision-making methods for the railway freight transportation planning problem under a mixed random and fuzzy environment. Generally, it is not easy to determine which model is the best, and the applications of the models depend on
decision-makers’ preferences. As for the algorithm, we designed a genetic-algorithm-based hybrid algorithm to search for the optimal solutions. In fact, other algorithms, such as the tabu search algorithm, the simulated annealing algorithm and the ant colony algorithm, can also be designed to solve this problem. The comparison between different algorithms can be a topic for future research.

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 70901006, 60634010), the Changjiang Scholars and Innovative Research Team in University under Grant No. IRT0605, and the State Key Laboratory of Rail Traffic Control and Safety (Nos. RCS2009ZT001, RCS2008ZZ001), Beijing Jiaotong University.

References

[1] C.S. Chang, S.S. Sim, Optimising train movements through coast control using genetic algorithms, IEE Proceedings B-Electric Power Applications 144 (1) (1997) 65–73.
[2] T.G. Crainic, J.-A. Ferland, J.-M. Rousseau, A tactical planning model for rail freight transportation, Transportation Science 18 (2) (1984) 165–184.
[3] T.G. Crainic, J.-M. Rousseau, Multicommodity, multimode freight transportation: a general modelling and algorithmic framework, Transportation Research Part B: Methodological 20 (1986) 225–242.
[4] T.G. Crainic, J. Roy, O.R. tools for tactical freight transportation planning, European Journal of Operational Research 33 (3) (1988) 290–297.
[5] T.G. Crainic, G. Laporte, Planning models for freight transportation, European Journal of Operational Research 97 (3) (1997) 409–438.
[6] T.G. Crainic, Service network design in freight transportation, European Journal of Operational Research 122 (2000) 272–288.
[7] J.M. Farvolden, W.B. Powell, Subgradient methods for the service network design problem, Transportation Science 28 (3) (1994) 256–272.
[8] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, 1995.
[9] M.F.
Gorman, An application of genetic and tabu searches to the freight railroad operating plan problem, Annals of Operations Research 78 (1998) 51–69.
[10] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[11] M.F. Gorman, Santa Fe Railway uses an operating-plan model to improve its service design, Interfaces 28 (4) (1998) 1–12.
[12] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[13] M.H. Keaton, Designing optimal railroad operating plans: Lagrangian relaxation and heuristic approaches, Transportation Research Part B: Methodological 23 (6) (1989) 415–431.
[14] M.H. Keaton, Designing optimal railroad operating plans: a dual adjustment method for implementing Lagrangian relaxation, Transportation Science 26 (1992) 263–279.
[15] Y.K. Liu, B. Liu, Random fuzzy programming with chance measures defined by fuzzy integrals, Mathematical and Computer Modelling 36 (2002) 509–524.
[16] B. Liu, Theory and Practice of Uncertain Programming, Physica-Verlag, Heidelberg, 2002.
[17] B. Liu, Uncertainty Theory: An Introduction to its Axiomatic Foundations, Springer-Verlag, Berlin, 2004.
[18] J. Martens, Two genetic algorithms to solve a layout problem in the fashion industry, European Journal of Operational Research 154 (1) (2004) 304–322.
[19] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1996.
[20] M. Gen, R. Cheng, S.S. Oren, Network design techniques using adapted genetic algorithms, Advances in Engineering Software 32 (9) (2001) 731–744.
[21] H.N. Newton, C. Barnhart, P.H. Vance, Constructing railroad blocking plans to minimize handling costs, Transportation Science 32 (4) (1998) 330–345.
[22] W.B. Powell, A local improvement heuristic for the design of less-than-truckload motor carrier networks, Transportation Science 20 (4) (1986) 246–257.
[23] L. Yang, L.
Liu, Fuzzy fixed charge solid transportation problem and algorithm, Applied Soft Computing 7 (3) (2007) 879–889.
[24] L. Yang, K. Iwamura, Fuzzy chance-constrained programming with linear combination of possibility measure and necessity measure, Applied Mathematical Sciences 2 (46) (2008) 2271–2288.
[25] L. Yang, X. Ji, Z. Gao, K. Li, Logistics distribution centers location problem and algorithm under fuzzy environment, Journal of Computational and Applied Mathematics 208 (2) (2007) 303–315.
[26] L. Yang, Chance-constrained methods for optimization problems with random and fuzzy parameters, International Journal of Innovative Computing, Information and Control 5 (2) (2009) 413–422.
[27] L. Yang, K. Li, Z. Gao, Train timetable problem on a single-line railway with fuzzy passenger demand, IEEE Transactions on Fuzzy Systems 17 (3) (2009) 617–629.