Degree-constrained minimum spanning tree problem with uncertain edge weights


Xin Gao, Lifen Jia*
School of Mathematical Sciences and Physics, North China Electric Power University, Beijing 102206, China
*Corresponding author. E-mail addresses: [email protected] (X. Gao), [email protected] (L. Jia).

Article info

Article history: Received 30 September 2015; Received in revised form 22 July 2016; Accepted 29 July 2016; Available online xxx

Keywords: Degree-constrained minimum spanning tree; Uncertainty theory; Genetic algorithm

Abstract

Uncertainty theory has shown great advantages in solving many nondeterministic problems, one of which is the degree-constrained minimum spanning tree (DCMST) problem in uncertain networks. Based on different criteria for ranking uncertain variables, three types of DCMST models are proposed here: the uncertain expected value DCMST model, the uncertain α-DCMST model, and the uncertain most chance DCMST model. We give their uncertainty distributions and fully characterize the uncertain expected value DCMST and the uncertain α-DCMST in uncertain networks. We also establish an equivalence between the uncertain α-DCMST of an uncertain network and the DCMST of a corresponding deterministic network. Finally, a genetic algorithm is proposed to solve the three models, and numerical examples are provided to illustrate its effectiveness.

1. Introduction

The minimum spanning tree (MST) problem was first introduced by Borüvka [1] in 1926, and the degree-constrained minimum spanning tree (DCMST) problem was first proposed by Narula and Ho [2] in 1980 in the design of electrical circuits. Since then, the DCMST problem has been widely applied in fields such as transportation, communications, and logistics. Over the past decades it has drawn great interest from researchers all over the world, and many algorithms have been designed to solve it, such as heuristic algorithms [2-4], ant colony optimization [5,6], evolutionary algorithms [7,8], and genetic algorithms [9,10]. These classical algorithms are effective for deterministic networks, but most real-life networks are uncertain networks without historical data. Due to the lack of statistical data, probability theory does not apply here, and other theories such as fuzzy set theory [12,13] and fuzzy random theory have their own limitations. As a breakthrough in dealing with nondeterministic phenomena, especially expert data and subjective estimation, uncertainty theory was founded by Liu [14] in 2007 and has subsequently been studied by many researchers; it has become a branch of mathematics. On the theoretical side, uncertain processes [17], uncertain differential equations [15,23], and uncertain set theory [21,22] have been established. On the practical side, uncertain programming [19,11], uncertain inference [16], uncertain statistics [20], and uncertain differential games [24,25] have been developed.

In this paper, we apply uncertainty theory to the DCMST problem in uncertain networks. Based on different criteria for ranking uncertain variables, we introduce three DCMST models in uncertain networks: the uncertain expected value DCMST model, the uncertain α-DCMST model, and the uncertain most chance DCMST model. The former two models can be transformed into DCMST problems on corresponding deterministic networks, and the latter can be transformed into an α-DCMST problem. A genetic algorithm is designed for these problems, and numerical experiments show that it achieves good results.

The remainder of this paper is organized as follows. In Section 2, we introduce some basic preliminaries of uncertainty theory. In Section 3, the uncertain DCMST problem is described. In Section 4, three uncertain DCMST models are discussed. In Section 5, we introduce a modified genetic algorithm. In Section 6, numerical examples are given to illustrate the conclusions. In Section 7, we give a brief summary of the paper.

2. Preliminary

In Liu's uncertainty theory [14], an uncertain measure is defined by the following axioms:

Axiom 1 (Normality Axiom). $\mathcal{M}\{\Gamma\} = 1$ for the universal set $\Gamma$.


Axiom 2 (Duality Axiom). $\mathcal{M}\{\Lambda\} + \mathcal{M}\{\Lambda^c\} = 1$ for any event $\Lambda$.


Axiom 3 (Subadditivity Axiom). For every countable sequence of events $\Lambda_1, \Lambda_2, \ldots$, we have
$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty}\Lambda_i\right\} \le \sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}.$$

Definition 1 (Liu [14]). Let $\Gamma$ be a nonempty set, $\mathcal{L}$ a $\sigma$-algebra over $\Gamma$, and let $\mathcal{M}$ be an uncertain measure. Then the triplet $(\Gamma, \mathcal{L}, \mathcal{M})$ is called an uncertainty space.

Furthermore, Liu [18] defined a product uncertain measure by the fourth axiom:

Axiom 4 (Product Axiom). Let $(\Gamma_k, \mathcal{L}_k, \mathcal{M}_k)$ be uncertainty spaces for $k = 1, 2, \ldots$ The product uncertain measure $\mathcal{M}$ is an uncertain measure satisfying
$$\mathcal{M}\left\{\prod_{k=1}^{\infty}\Lambda_k\right\} = \bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\}$$
where $\Lambda_k$ are arbitrarily chosen events from $\mathcal{L}_k$ for $k = 1, 2, \ldots$, respectively.

Definition 2 (Liu [14]). An uncertain variable is a measurable function $\xi$ from an uncertainty space $(\Gamma, \mathcal{L}, \mathcal{M})$ to the set of real numbers, i.e., for any Borel set $B$ of real numbers, the set $\{\xi \in B\} = \{\gamma \in \Gamma \mid \xi(\gamma) \in B\}$ is an event.

Definition 3 (Liu [14]). The uncertainty distribution $\Phi$ of an uncertain variable $\xi$ is defined by $\Phi(x) = \mathcal{M}\{\xi \le x\}$ for any real number $x$.

Definition 4 (Liu [18]). The uncertain variables $\xi_1, \xi_2, \ldots, \xi_n$ are said to be independent if
$$\mathcal{M}\left\{\bigcap_{i=1}^{n}\{\xi_i \in B_i\}\right\} = \min_{1 \le i \le n}\mathcal{M}\{\xi_i \in B_i\}$$
for any Borel sets $B_1, B_2, \ldots, B_n$ of real numbers.

Definition 5 (Liu [20]). Let $\xi$ be an uncertain variable with regular uncertainty distribution $\Phi(x)$. Then the inverse function $\Phi^{-1}(\alpha)$ is called the inverse uncertainty distribution of $\xi$.

We usually assume that all uncertainty distributions in practical applications are regular; otherwise, a small perturbation can be imposed to obtain a regular one. The following Theorem 1 shows that the inverse uncertainty distribution has good operational properties, which makes it easy to obtain solutions of uncertain programming problems.

Theorem 1 (Liu [20]). Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent uncertain variables with regular uncertainty distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively. If $f(\xi_1, \xi_2, \ldots, \xi_n)$ is strictly increasing with respect to $\xi_1, \xi_2, \ldots, \xi_m$ and strictly decreasing with respect to $\xi_{m+1}, \xi_{m+2}, \ldots, \xi_n$, then $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ has an inverse uncertainty distribution
$$\Psi^{-1}(\alpha) = f(\Phi_1^{-1}(\alpha), \ldots, \Phi_m^{-1}(\alpha), \Phi_{m+1}^{-1}(1-\alpha), \ldots, \Phi_n^{-1}(1-\alpha)).$$

Example 1. Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent and positive uncertain variables with regular uncertainty distributions $\Phi_1, \Phi_2, \ldots, \Phi_n$, respectively. Then the product $\xi = \xi_1 \times \xi_2 \times \cdots \times \xi_n$ has an inverse uncertainty distribution
$$\Psi^{-1}(\alpha) = \Phi_1^{-1}(\alpha) \times \Phi_2^{-1}(\alpha) \times \cdots \times \Phi_n^{-1}(\alpha).$$

Definition 6 (Liu [14]). Let $\xi$ be an uncertain variable. Then the expected value of $\xi$ is defined by
$$E[\xi] = \int_0^{+\infty}\mathcal{M}\{\xi \ge x\}\,\mathrm{d}x - \int_{-\infty}^{0}\mathcal{M}\{\xi \le x\}\,\mathrm{d}x$$
provided that at least one of the two integrals is finite.

Theorem 2 (Liu [14]). Let $\xi$ be an uncertain variable with uncertainty distribution $\Phi$. Then
$$E[\xi] = \int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x - \int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x.$$

Theorem 3 (Liu [20]). Let $\xi$ and $\eta$ be independent uncertain variables with finite expected values. Then for any real numbers $a$ and $b$, we have
$$E[a\xi + b\eta] = aE[\xi] + bE[\eta].$$

Example 2. A linear uncertain variable $\xi_1 \sim L(a, b)$ and a zigzag uncertain variable $\xi_2 \sim Z(a, b, c)$ have the uncertainty distributions $\Phi_1$ and $\Phi_2$ shown in Figs. 1 and 2:
$$\Phi_1(x) = \begin{cases} 0, & \text{if } x < a \\ (x-a)/(b-a), & \text{if } a \le x \le b \\ 1, & \text{if } x > b, \end{cases}$$
$$\Phi_2(x) = \begin{cases} 0, & \text{if } x \le a \\ (x-a)/(2(b-a)), & \text{if } a < x \le b \\ (x+c-2b)/(2(c-b)), & \text{if } b < x \le c \\ 1, & \text{if } x > c. \end{cases}$$

Fig. 1. Linear uncertainty distribution.

Fig. 2. Zigzag uncertainty distribution.


Their inverse uncertainty distributions are
$$\Phi_1^{-1}(\alpha) = (1-\alpha)a + \alpha b,$$
$$\Phi_2^{-1}(\alpha) = \begin{cases} a + 2(b-a)\alpha, & \text{if } \alpha \le 0.5 \\ 2b - c + 2(c-b)\alpha, & \text{if } \alpha > 0.5, \end{cases}$$
and their expected values are
$$E[\xi_1] = \frac{a+b}{2}, \qquad E[\xi_2] = \frac{a+2b+c}{4}.$$
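The closed forms of Example 2 translate directly into code and are used repeatedly in the rest of the paper. The following is a minimal Python sketch (our own illustration; the class names LinearUV and ZigzagUV are not from the paper) of linear and zigzag uncertain variables with their distributions, inverse distributions, and expected values. Later sketches reuse these helpers.

```python
from dataclasses import dataclass

@dataclass
class LinearUV:
    """Linear uncertain variable L(a, b)."""
    a: float
    b: float

    def dist(self, x: float) -> float:      # Phi(x)
        if x < self.a:
            return 0.0
        if x > self.b:
            return 1.0
        return (x - self.a) / (self.b - self.a)

    def inv(self, alpha: float) -> float:   # Phi^{-1}(alpha) = (1 - alpha) a + alpha b
        return (1 - alpha) * self.a + alpha * self.b

    def expected(self) -> float:            # E[xi] = (a + b) / 2
        return (self.a + self.b) / 2

@dataclass
class ZigzagUV:
    """Zigzag uncertain variable Z(a, b, c)."""
    a: float
    b: float
    c: float

    def dist(self, x: float) -> float:
        if x <= self.a:
            return 0.0
        if x <= self.b:
            return (x - self.a) / (2 * (self.b - self.a))
        if x <= self.c:
            return (x + self.c - 2 * self.b) / (2 * (self.c - self.b))
        return 1.0

    def inv(self, alpha: float) -> float:
        if alpha <= 0.5:
            return self.a + 2 * (self.b - self.a) * alpha
        return 2 * self.b - self.c + 2 * (self.c - self.b) * alpha

    def expected(self) -> float:            # E[xi] = (a + 2b + c) / 4
        return (self.a + 2 * self.b + self.c) / 4

# Example: the edge weight xi_13 = Z(23, 30, 45) used later in Section 6
xi_13 = ZigzagUV(23, 30, 45)
print(xi_13.expected())   # 32.0
print(xi_13.inv(0.8))     # 39.0
```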

3. Uncertain degree-constrained minimum spanning tree problem

3.1. Deterministic degree-constrained minimum spanning tree problem

We consider undirected networks that are finite and loopless. A deterministic simple network is denoted by G = (V, E, ω), where V = {1, 2, ..., n} is a finite set of nodes, E = {e1, e2, ..., em} is the edge set, and ω is the weight matrix. We set ω = (ωij)n×n, where ωij denotes the weight of edge (i, j). If (i, j) ∉ E, then ωij = ∞; note that ωii = ∞ since G is loopless. In a simple network all the weights are symmetric, i.e., ωij = ωji (i, j = 1, 2, ..., n). A spanning tree is an acyclic subnetwork that contains all nodes, and an MST is a spanning tree of minimum weight, the weight of a tree being the sum of the weights of its edges. A DCMST is an MST of a given network subject to constraints on the node degrees.

Assume xij (i, j = 1, 2, ..., n) are 0-1 decision variables, where xij = 1 means that edge (i, j) is selected in the spanning tree and xij = 0 means that it is not. Suppose bi (i = 1, 2, ..., n) is the degree constraint of node i. The weight of the DCMST is a function of ω, denoted by f(ω) in this paper. Let ω¹ = {ω¹ij | (i, j) ∈ E} and ω² = {ω²ij | (i, j) ∈ E} be two collections of deterministic weights. It is easy to prove that f(ω¹) ≤ f(ω²) if ω¹ij ≤ ω²ij for each (i, j) ∈ E; that is, f(ω) is an increasing function of the weights ωij. Thus, the mathematical model of the DCMST problem is

$$\begin{aligned}
\min\ & f(\omega) = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \omega_{ij} x_{ij} \\
\text{subject to: } & \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} x_{ij} = n-1 \\
& \sum_{i \in S}\sum_{j \in S,\, j>i} x_{ij} \le |S| - 1, \quad \forall S \subset V,\ |S| \ge 2 \\
& \sum_{j=1}^{n} x_{ij} \le b_i, \quad \forall i \in V,\ i = 1, 2, \ldots, n \\
& x_{ij} \in \{0, 1\}, \quad \forall i \in V,\ j \in V,\ i, j = 1, 2, \ldots, n.
\end{aligned} \tag{1}$$

3.2. Uncertain degree-constrained minimum spanning tree problem

An uncertain network is denoted by G = (V, E, ξ), where V is the collection of nodes, E is the collection of edges, and ξ is the collection of uncertain weights. We set ξ = (ξij)n×n, where ξij denotes the uncertain weight of edge (i, j). We assume that all the uncertain weights ξij are independent and positive uncertain variables. Let V = {1, 2, ..., n} and E = {e1, e2, ..., em}. If (i, j) ∉ E, then ξij = ∞; note that ξii = ∞ since G is loopless. As G is a simple network, ξ is symmetric, i.e., ξij = ξji. Assume xij (i, j = 1, 2, ..., n) are 0-1 decision variables defined as in Section 3.1, and let bi (i = 1, 2, ..., n) be the degree constraint of node i. As in the deterministic case, the weight of the uncertain DCMST is a function of the ξij, denoted by $f_{T^0}(\xi_{ij})$ in this paper; it is obviously an uncertain variable. Thus, the uncertain programming model of the DCMST problem is

$$\begin{aligned}
& f_{T^0}(\xi_{ij}) = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \xi_{ij} x_{ij} \\
\text{subject to: } & \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} x_{ij} = n-1 \\
& \sum_{i \in S}\sum_{j \in S,\, j>i} x_{ij} \le |S| - 1, \quad \forall S \subset V,\ |S| \ge 2 \\
& \sum_{j=1}^{n} x_{ij} \le b_i, \quad \forall i \in V,\ i = 1, 2, \ldots, n \\
& x_{ij} \in \{0, 1\}, \quad \forall i \in V,\ j \in V,\ i, j = 1, 2, \ldots, n.
\end{aligned} \tag{2}$$

In uncertain networks, each degree-constrained spanning tree (DCST) is called an uncertain DCST. We write $W(T^0, \xi) = f_{T^0}(\xi_{ij})$ for the weight of the uncertain DCMST T⁰. Obviously,
$$W(T^0, \xi) = \sum_{(i,j) \in T^0} \xi_{ij}.$$

Theorem 4. Let G = (V, E, ξ) be an uncertain network with n nodes, and let independent uncertain variables ξij (i, j = 1, 2, ..., n) be the weights of the edges. Assume that T is an uncertain DCST. Then the weight of T is an uncertain variable and its inverse uncertainty distribution is
$$\Phi_T^{-1}(\alpha) = \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha).$$

Proof. Suppose Φ_T and Φij are the uncertainty distributions of W(T, ξ) and ξij (i, j = 1, 2, ..., n), respectively, and let α be a predetermined confidence level. Note that
$$W(T, \xi) = \sum_{(i,j) \in T} \xi_{ij}$$
is a strictly increasing function of the independent uncertain variables ξij. It then follows immediately from Theorem 1 that
$$\Phi_T^{-1}(\alpha) = \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha). \qquad \square$$

This result implies that the inverse uncertainty distribution of a DCMST can be found directly from the inverse uncertainty distributions of its edges. If an uncertain network G is a tree itself, then we can easily obtain the inverse uncertainty distribution of the DCMST itself.

Theorem 5. Let G = (V, E, ξ) be an uncertain network with n nodes, and let independent uncertain variables ξij (i, j = 1, 2, ..., n − 1) be the weights of the edges. Assume that T⁰ is an uncertain DCMST. Then the weight of T⁰ is an uncertain variable and its inverse uncertainty distribution is
$$\Phi_{T^0}^{-1}(\alpha) = \sum_{(i,j) \in T^0} \Phi_{ij}^{-1}(\alpha).$$
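Theorem 4 reduces the distribution of a tree's weight to a sum of edge-level inverse distributions, which is easy to compute. Below is a small Python sketch (our own illustration, building on the LinearUV/ZigzagUV helpers sketched in Section 2) that evaluates Φ_T⁻¹(α) for a candidate DCST given as an edge list, after checking the degree constraint.

```python
from collections import Counter

def is_degree_feasible(edges, degree_bound):
    """Check the degree constraint for every node touched by the tree."""
    deg = Counter()
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return all(d <= degree_bound for d in deg.values())

def tree_inverse_distribution(edges, weights, alpha):
    """Theorem 4: Phi_T^{-1}(alpha) = sum of Phi_ij^{-1}(alpha) over tree edges.

    `weights` maps an edge (i, j) with i < j to an uncertain variable object
    exposing .inv(alpha), e.g. the LinearUV / ZigzagUV classes sketched earlier."""
    return sum(weights[tuple(sorted(e))].inv(alpha) for e in edges)

# Hypothetical usage with the 0.8-DCMST reported in Example 5 of Section 6:
# T = [(1, 2), (1, 4), (3, 4), (4, 6), (5, 6)]
# assert is_degree_feasible(T, degree_bound=3)
# print(tree_inverse_distribution(T, weights, alpha=0.8))   # 186.8 for that data
```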


Theorem 6. Let G = (V, E, ξ) be an uncertain network with n nodes, and let independent uncertain variables ξij (i, j = 1, 2, ..., n) be the weights of the edges. Assume that T⁰ is an uncertain expected value DCMST. Then the weight of T⁰ is an uncertain variable with expected value
$$E[W(T^0, \xi)] = \sum_{(i,j) \in T^0} E[\xi_{ij}].$$
This result tells us that the expected weight of an uncertain DCMST can be obtained directly from the expected weights of its edges in the uncertain network.

4. Three types of uncertain degree-constrained minimum spanning trees

Depending on the criterion used to rank uncertain variables, the uncertain DCMST problem can be defined in different ways. Here we give three types of uncertain DCMSTs: the uncertain expected value DCMST, the uncertain α-DCMST, and the uncertain most chance DCMST.

Definition 7. Let G = (V, E, ξ) be an uncertain network with n nodes, and let independent uncertain variables ξij (i, j = 1, 2, ..., n) be the weights of the edges. An uncertain DCST T⁰ is called an uncertain expected value DCMST if
$$E[W(T^0, \xi)] \le E[W(T, \xi)]$$
holds for every other DCST T, where E[W(T, ξ)] stands for the expected weight of any DCST T and E[W(T⁰, ξ)] stands for the expected weight of T⁰.

Theorem 7. Given a connected uncertain network G = (V, E, ξ) with independent uncertain variables ξij (i, j = 1, 2, ..., n) as edge weights, a DCST T⁰ is an uncertain expected value DCMST if and only if
$$\sum_{(i,j) \in T^0} E[\xi_{ij}] \le \sum_{(i,j) \in T} E[\xi_{ij}]$$
holds for every other DCST T, where E[ξij] are the expected values of the uncertain weights ξij (i, j = 1, 2, ..., n).

Proof. Note that the ξij (i, j = 1, 2, ..., n) are independent uncertain variables. According to Theorem 3 and $W(T, \xi) = \sum_{(i,j) \in T} \xi_{ij}$, we get
$$E[W(T, \xi)] = E\Big[\sum_{(i,j) \in T} \xi_{ij}\Big] = \sum_{(i,j) \in T} E[\xi_{ij}],$$
and similarly
$$E[W(T^0, \xi)] = \sum_{(i,j) \in T^0} E[\xi_{ij}].$$
By Definition 7, if a DCST T⁰ satisfies
$$\sum_{(i,j) \in T^0} E[\xi_{ij}] \le \sum_{(i,j) \in T} E[\xi_{ij}],$$
then E[W(T⁰, ξ)] ≤ E[W(T, ξ)], so T⁰ is an uncertain expected value DCMST. Conversely, if T⁰ is an uncertain expected value DCMST, then it must satisfy E[W(T⁰, ξ)] ≤ E[W(T, ξ)], which can be rewritten as
$$\sum_{(i,j) \in T^0} E[\xi_{ij}] \le \sum_{(i,j) \in T} E[\xi_{ij}]. \qquad \square$$

Definition 8. Let G = (V, E, ξ) be an uncertain network with n nodes, and let independent uncertain variables ξij (i, j = 1, 2, ..., n) be the weights of the edges. An uncertain DCST T⁰ is called the uncertain α-DCMST if
$$\min\{\omega \mid \mathcal{M}\{W(T^0, \xi) \le \omega\} \ge \alpha\} \le \min\{\omega \mid \mathcal{M}\{W(T, \xi) \le \omega\} \ge \alpha\}$$
holds for every other DCST T, where α ∈ (0, 1) is a given confidence level.

Remark 1. Definition 8 can also be written as $\Phi_{T^0}^{-1}(\alpha) \le \Phi_T^{-1}(\alpha)$. Since $\min\{\omega \mid \mathcal{M}\{W(T, \xi) \le \omega\} \ge \alpha\} = \min\{\omega \mid \Phi_T(\omega) \ge \alpha\}$ and Φ_T(ω) is an increasing function, the minimum value of ω is attained when Φ_T(ω) = α, so $\min\{\omega \mid \Phi_T(\omega) \ge \alpha\} = \Phi_T^{-1}(\alpha)$. Similarly, $\min\{\omega \mid \mathcal{M}\{W(T^0, \xi) \le \omega\} \ge \alpha\} = \Phi_{T^0}^{-1}(\alpha)$. In this case, Fig. 3 describes the uncertain α-DCMST.

Fig. 3. Uncertain α-DCMST.

Theorem 8. Given a connected uncertain network G = (V, E, ξ) and a predetermined confidence level α, a DCST T⁰ is an uncertain α-DCMST if and only if
$$\sum_{(i,j) \in T^0} \Phi_{ij}^{-1}(\alpha) \le \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha)$$
holds for every other DCST T, where Φij⁻¹(α) are the inverse uncertainty distributions of the uncertain weights ξij (i, j = 1, 2, ..., n).

Proof. According to Theorems 4 and 5, we have
$$\Phi_T^{-1}(\alpha) = \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha) \quad \text{and} \quad \Phi_{T^0}^{-1}(\alpha) = \sum_{(i,j) \in T^0} \Phi_{ij}^{-1}(\alpha).$$
By Definition 8, if a DCST T⁰ satisfies $\Phi_{T^0}^{-1}(\alpha) \le \Phi_T^{-1}(\alpha)$, which is the same as
$$\sum_{(i,j) \in T^0} \Phi_{ij}^{-1}(\alpha) \le \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha),$$
then T⁰ is an uncertain α-DCMST. Conversely, if T⁰ is an uncertain α-DCMST, then it must satisfy $\Phi_{T^0}^{-1}(\alpha) \le \Phi_T^{-1}(\alpha)$, which can be transformed to
$$\sum_{(i,j) \in T^0} \Phi_{ij}^{-1}(\alpha) \le \sum_{(i,j) \in T} \Phi_{ij}^{-1}(\alpha). \qquad \square$$
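Theorems 7 and 8 say that both models reduce to a deterministic DCMST problem once each uncertain weight is replaced by a single number: its expected value for the expected value model, or its α-quantile Φij⁻¹(α) for the α-DCMST model. The sketch below is our own illustration of this reduction (not code from the paper), reusing the uncertain-variable helpers from Section 2; the resulting deterministic weight maps can be fed to any DCMST solver, such as the genetic algorithm of Section 5.

```python
def expected_value_surrogate(uncertain_weights):
    """Replace each uncertain edge weight by its expected value (Theorem 7)."""
    return {edge: w.expected() for edge, w in uncertain_weights.items()}

def alpha_quantile_surrogate(uncertain_weights, alpha):
    """Replace each uncertain edge weight by Phi_ij^{-1}(alpha) (Theorem 8)."""
    return {edge: w.inv(alpha) for edge, w in uncertain_weights.items()}

# `uncertain_weights` is assumed to map edges (i, j) to objects exposing
# .expected() and .inv(alpha), e.g. the LinearUV / ZigzagUV classes above.
# Solving the deterministic DCMST on either surrogate yields the uncertain
# expected value DCMST or the uncertain alpha-DCMST, respectively.
```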


Definition 7 means that the decision-maker wants to minimize the expected value of the uncertain weight. Definition 8 means that the decision-maker sets a confidence level α as an appropriate safety margin and hopes to minimize a critical value ω such that M{W(T⁰, ξ) ≤ ω} ≥ α.

Definition 9. Given a connected uncertain network G = (V, E, ξ) and a predetermined weight supremum x*, an uncertain DCST T⁰ is called an uncertain most chance DCMST if
$$\Phi_{T^0}(x^*) \ge \Phi_T(x^*)$$
holds for every other DCST T.

By Definition 9, for a given weight supremum x*, the uncertain most chance DCMST T⁰ is the DCST whose chance of having a weight less than or equal to x* is at least as large as that of any other DCST, as illustrated in Fig. 4.

Fig. 4. Uncertain most chance DCMST.

Theorem 9. Given a connected uncertain network G = (V, E, ξ) and a predetermined weight supremum x*, the uncertain most chance DCMST T⁰ of G is exactly the uncertain α-DCMST of G with α = Φ_{T⁰}(x*).

Proof. Assume that T⁰ is an uncertain most chance DCMST. By Definition 9, T⁰ satisfies Φ_{T⁰}(x*) ≥ Φ_T(x*) for every other DCST T. Let α = Φ_{T⁰}(x*). Then
$$\Phi_{T^0}(x^*) = \alpha = \Phi_T[\Phi_T^{-1}(\alpha)]$$
and, since x* = Φ_{T⁰}⁻¹(α),
$$\Phi_T(x^*) = \Phi_T[\Phi_{T^0}^{-1}(\alpha)].$$
It then follows from Φ_{T⁰}(x*) ≥ Φ_T(x*) that
$$\Phi_T[\Phi_T^{-1}(\alpha)] \ge \Phi_T[\Phi_{T^0}^{-1}(\alpha)].$$
Since Φ_T is an increasing function, we obtain $\Phi_{T^0}^{-1}(\alpha) \le \Phi_T^{-1}(\alpha)$, which is exactly the necessary and sufficient condition of the uncertain α-DCMST. □

5. A modified genetic algorithm

In this section, we give a practical algorithm for the uncertain DCMST problem. The main idea is to transform the uncertain DCMST problem into the DCMST problem of a corresponding deterministic network, and then solve that DCMST problem with a genetic algorithm.

5.1. Tree encoding and decoding process

We first give a brief introduction to Prüfer encoding, as it is an essential part of our algorithm. The encoding of spanning trees is derived from Prüfer's proof of Cayley's theorem, which shows that the spanning trees of a complete network on n nodes are in one-to-one correspondence with the sequences of n − 2 numbers, each an integer between 1 and n. Such a sequence, usually called a Prüfer number, uniquely represents a tree, so a Prüfer number can be used directly as a chromosome of the genetic algorithm. The Prüfer encoding of a tree is obtained as follows (a code sketch of both the encoding and decoding procedures is given after Theorem 10 below):

Step 1: Let i be the leaf of the tree with the smallest label.
Step 2: Append the label j of the unique node adjacent to i as the next number of the code.
Step 3: Delete node i and the edge (i, j), which leaves a tree with one node fewer.
Step 4: Repeat Steps 1 to 3 until only one edge is left.

This yields the Prüfer number of the tree T (namely its Prüfer encoding). The Prüfer number has n − 2 entries, each of which is a node label of T (labels may be repeated).

Fig. 5. The structure of the tree.

Example 3. Consider a complete network G with 5 nodes and the spanning tree T of G shown in Fig. 5; its Prüfer encoding is {5, 4, 4}.

The Prüfer decoding process is as follows:

Step 1: Let p be the Prüfer number of a spanning tree and let q be the set of nodes not appearing in p.
Step 2: Let i be the smallest node in q and let j be the leftmost number in p. Add edge (i, j) to the tree, remove i from q, and delete that occurrence of j from p; if j no longer appears in p, add j to q. Repeat this operation until no number is left in p.
Step 3: When p is empty, q contains exactly two nodes; connecting them adds the last edge, which completes the DCST with its n − 1 edges.

Using these steps, the Prüfer number p = {5, 4, 4} is decoded into the tree in Fig. 5. Since the nodes 1, 2, and 3 do not appear in p, q = {1, 2, 3}. Node 1 is the smallest number in q and 5 is the leftmost number in p, so edge (1, 5) is added to the tree; at the same time 1 is removed from q, and 5 is deleted from p and added to q, leaving p = {4, 4} and q = {5, 2, 3}. Repeating this process adds edges (2, 4) and (3, 4); when p becomes empty, the two remaining nodes of q are connected by the final edge, which completes the tree in Fig. 5.

5.2. The initialization of population and fitness function

In order to avoid infeasible solutions, we first give the following theorem.

Theorem 10. The degree of any node in a tree T is equal to the number of times the node appears in the Prüfer code of T plus one.

Proof. The Prüfer encoding process guarantees that once a node becomes the leaf with the minimum label and is removed, its label no longer appears in the Prüfer code. Each time a node appears in the code, one of its incident edges is removed, and its final removal as a leaf accounts for the remaining incident edge. Therefore, the number of edges incident to any node in the tree equals the number of times it appears in the Prüfer code plus one. □
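The encoding and decoding procedures above, together with the degree count of Theorem 10, are straightforward to implement. The following Python sketch is our own illustration (the function names are not from the paper); it assumes nodes are labeled 1..n.

```python
from collections import Counter

def prufer_encode(n, edges):
    """Encode a labeled tree on nodes 1..n (given as an edge list) as a Prüfer number."""
    adj = {v: set() for v in range(1, n + 1)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    code = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)  # smallest leaf
        j = adj[leaf].pop()                              # its unique neighbor
        code.append(j)
        adj[j].discard(leaf)
        del adj[leaf]
    return code

def prufer_decode(n, code):
    """Decode a Prüfer number back into the edge list of the tree."""
    degree = Counter(code)
    for v in range(1, n + 1):
        degree[v] += 1          # Theorem 10: degree = occurrences in code + 1
    edges = []
    for j in code:
        i = min(v for v in range(1, n + 1) if degree[v] == 1)  # smallest current leaf
        edges.append((i, j))
        degree[i] -= 1
        degree[j] -= 1
    u, v = [v for v in range(1, n + 1) if degree[v] == 1]      # connect the last two nodes
    edges.append((u, v))
    return edges

print(prufer_decode(5, [5, 4, 4]))                          # [(1, 5), (2, 4), (3, 4), (4, 5)], Example 3
print(prufer_encode(5, [(1, 5), (2, 4), (3, 4), (4, 5)]))   # [5, 4, 4]
```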


We now turn to population initialization. Let b = {b1, b2, ..., bn} be the degree-constraint sequence of the problem, and let s = {1, 1, ..., 1, 2, 2, ..., 2, ..., n, n, ..., n} be a multiset in which the number j occurs (bj − 1) times. We generate a Prüfer number by arbitrarily selecting n − 2 numbers from s; by Theorem 10, such a Prüfer number encodes a feasible solution that satisfies the degree constraints. Repeating this as many times as required gives an initial population consisting entirely of feasible solutions, without producing any infeasible solution.

The main task of the fitness function is to evaluate the fitness of each chromosome. As can be seen from the design of our algorithm, the encoding of solutions, the initialization, and the mutation operations introduced below apply to all three models; the only difference between the three uncertain DCMST models lies in the fitness function used. According to the equivalent deterministic model of the corresponding uncertain programming formulation, we take F(x) as our fitness function:

$$F(x) = \begin{cases} K - f(x), & f(x) < K \\ 0, & f(x) \ge K, \end{cases}$$

where K is a suitably large constant and the value of f(x) is obtained, for the given confidence level α where relevant, from the corresponding equivalent deterministic model; in a concrete implementation these evaluations can be called directly as modules. Clearly, the better the individual (spanning tree) x is, the larger its fitness.

5.3. Selection process

During selection, a few "super" individuals whose fitness values are significantly greater than those of the rest may appear in the population. Under the traditional roulette-wheel selection scheme, such individuals reproduce rapidly and after a few iterations occupy most of the population, so that population diversity decreases dramatically. Offspring that implicitly carry good genetic information may still be produced, but because they are few and their own fitness is not yet high enough, the probability that they are eliminated is large, which is not conducive to the evolution of the population. We therefore consider that a qualified offspring should have a higher selection probability than its parents, and we design a strategy that limits the number of retained parents. First, we define the judgment function of a qualified individual x as

$$V(x) = \begin{cases} 1, & \text{if } F(x) > \gamma F_{\mathrm{avg}} \\ 0, & \text{otherwise}, \end{cases}$$

where γ (≥ 0.9) is the qualification coefficient and F_avg is the average fitness of the current population. The method we adopt is to retain the best K individuals while limiting the number of retained parent individuals to n, so that qualified offspring obtain more selection opportunities.
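A minimal sketch of how these pieces fit together is given below (our own Python illustration; the parameter names pop_size, gamma, and K are assumptions, not taken from the paper, and prufer_decode refers to the sketch after Theorem 10). It draws degree-feasible Prüfer numbers for the initial population, evaluates the fitness F(x) against deterministic surrogate edge costs, and flags qualified individuals via V(x).

```python
import random

def init_population(n, degree_bounds, pop_size):
    """Sample degree-feasible Prüfer numbers: node j may occur at most b_j - 1 times.

    `degree_bounds` is a dict mapping node label -> degree constraint b_j."""
    s = [j for j in range(1, n + 1) for _ in range(degree_bounds[j] - 1)]
    return [random.sample(s, n - 2) for _ in range(pop_size)]

def fitness(prufer, edge_cost, K):
    """F(x) = K - f(x) if f(x) < K else 0, where f(x) is the deterministic tree weight."""
    edges = prufer_decode(len(prufer) + 2, prufer)        # sketch from Section 5.1/5.2
    f = sum(edge_cost[tuple(sorted(e))] for e in edges)   # surrogate weights (E or quantile)
    return K - f if f < K else 0.0

def qualified(F_x, F_avg, gamma=0.9):
    """V(x) = 1 if F(x) > gamma * F_avg else 0."""
    return 1 if F_x > gamma * F_avg else 0
```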

5.4. Genetic manipulation

Crossover and mutation are the basic genetic operations that make the population evolve continuously. Because the essence of both is to change genes, crossover applied to Prüfer numbers in the traditional way would produce many infeasible solutions. To avoid this, we design three kinds of mutation operators in place of the crossover and mutation process. Two of these operators never produce infeasible solutions, because they only permute the entries of the Prüfer number and therefore, by Theorem 10, leave all node degrees unchanged; only the point mutation may produce an infeasible solution, in which case the mutation can be discarded or performed again. This greatly reduces the chance of producing infeasible solutions. A sketch of one possible implementation of the three operators follows this list.

(1) Displacement mutation: a substring of the parent chromosome (i.e., the Prüfer number) is selected at random and moved to a random position, forming the offspring chromosome. Clearly, this operation does not produce infeasible solutions.
(2) Reverse mutation: the start and end positions of a gene segment of the Prüfer number are determined at random, and the segment is reversed. This operation does not produce infeasible solutions either. The process is shown in Fig. 6.
(3) Point mutation: a mutation position in the Prüfer number is chosen at random, and the number at that position is replaced by another random number from {1, ..., n}, forming the offspring chromosome, as shown in Fig. 7. Such a point mutation may produce an infeasible solution; if it does, the mutation is discarded or performed again.

Fig. 6. Reverse mutation process.
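The three operators act directly on the Prüfer string. A short Python sketch of one possible implementation is given below (our own illustration; the randomization details are assumptions).

```python
import random

def displacement_mutation(prufer):
    """Cut a random substring and re-insert it at a random position."""
    x = prufer[:]
    i, j = sorted(random.sample(range(len(x) + 1), 2))
    segment, rest = x[i:j], x[:i] + x[j:]
    k = random.randint(0, len(rest))
    return rest[:k] + segment + rest[k:]

def reverse_mutation(prufer):
    """Reverse a randomly chosen gene segment."""
    x = prufer[:]
    i, j = sorted(random.sample(range(len(x) + 1), 2))
    x[i:j] = reversed(x[i:j])
    return x

def point_mutation(prufer, n, degree_bounds):
    """Replace one random position by a random node label; redo if infeasible."""
    while True:
        x = prufer[:]
        pos = random.randrange(len(x))
        x[pos] = random.randint(1, n)
        # Theorem 10: node j may appear at most (b_j - 1) times in a feasible code.
        if all(x.count(j) <= degree_bounds[j] - 1 for j in range(1, n + 1)):
            return x
```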

5.5. The algorithm flow chart

The flow chart of the genetic algorithm applied to the DCMST problem is shown in Fig. 8.

6. Numerical examples

In this section, we give some examples of the uncertain DCMST problem to illustrate the conclusions above. Consider the uncertain network with 6 vertices and 15 edges shown in Fig. 9: an undirected, connected network with uncertain weights ξij (i, j = 1, 2, ..., 6) and a degree constraint of 3 on each node. All the weights ξij (i, j = 1, 2, ..., 6) are assumed to be independent linear or zigzag uncertain variables, and the weight matrix ξ is

$$\xi = \begin{pmatrix}
\infty & L(30,40) & Z(23,30,45) & L(26,42) & Z(22,34,44) & L(26,48) \\
L(30,40) & \infty & Z(28,38,46) & L(22,50) & Z(25,35,43) & L(31,47) \\
Z(23,30,45) & Z(28,38,46) & \infty & Z(23,33,41) & L(32,44) & Z(29,37,43) \\
L(26,42) & L(22,50) & Z(23,33,41) & \infty & L(21,41) & Z(28,32,40) \\
Z(22,34,44) & Z(25,35,43) & L(32,44) & L(21,41) & \infty & L(21,39) \\
L(26,48) & L(31,47) & Z(29,37,43) & Z(28,32,40) & L(21,39) & \infty
\end{pmatrix}.$$

Fig. 9. Uncertain complete network of six nodes.
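For readers who want to reproduce the numbers, the matrix can be entered as a dictionary of uncertain variables (reusing the LinearUV/ZigzagUV helpers sketched in Section 2); the expected values and 0.8-quantiles it yields are the ones gathered in Table 1 below. This transcription is our own, not code from the paper.

```python
# Upper-triangular edge weights of the 6-node example network.
weights = {
    (1, 2): LinearUV(30, 40), (1, 3): ZigzagUV(23, 30, 45), (1, 4): LinearUV(26, 42),
    (1, 5): ZigzagUV(22, 34, 44), (1, 6): LinearUV(26, 48),
    (2, 3): ZigzagUV(28, 38, 46), (2, 4): LinearUV(22, 50), (2, 5): ZigzagUV(25, 35, 43),
    (2, 6): LinearUV(31, 47),
    (3, 4): ZigzagUV(23, 33, 41), (3, 5): LinearUV(32, 44), (3, 6): ZigzagUV(29, 37, 43),
    (4, 5): LinearUV(21, 41), (4, 6): ZigzagUV(28, 32, 40),
    (5, 6): LinearUV(21, 39),
}

for edge, w in weights.items():
    print(edge, round(w.expected(), 1), round(w.inv(0.8), 1))   # reproduces Table 1
```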


Table 1
Parameter values in Fig. 9.

Edge    ξij              E[ξij]    Φij⁻¹(0.8)
ξ12     L(30, 40)        35.0      38.0
ξ13     Z(23, 30, 45)    32.0      39.0
ξ14     L(26, 42)        34.0      38.8
ξ15     Z(22, 34, 44)    33.5      40.0
ξ16     L(26, 48)        37.0      43.6
ξ23     Z(28, 38, 46)    37.5      42.8
ξ24     L(22, 50)        36.0      44.4
ξ25     Z(25, 35, 43)    34.5      39.8
ξ26     L(31, 47)        39.0      43.8
ξ34     Z(23, 33, 41)    32.5      37.8
ξ35     L(32, 44)        38.0      41.6
ξ36     Z(29, 37, 43)    36.5      40.6
ξ45     L(21, 41)        31.0      37.0
ξ46     Z(28, 32, 40)    33.0      36.8
ξ56     L(21, 39)        30.0      35.4

Fig. 7. One point mutation process.

Fig. 8. Genetic algorithm flow chart.

Fig. 10. Expected value DCMST.

Fig. 11. Convergence process of the expected value DCMST (maximum and average fitness versus generation number).

According to the operational law and the inverse uncertainty distributions of L(a, b) and Z(a, b, c), taking the confidence level α = 0.8, we can obtain Φij⁻¹(0.8) (i, j = 1, 2, ..., 6); from the expected values of L(a, b) and Z(a, b, c), we can obtain E[ξij]. The values of ξij, E[ξij], and Φij⁻¹(0.8), i, j = 1, 2, ..., 6, are listed in Table 1.

Example 4. We look for the uncertain expected value DCMST of the uncertain network G shown in Fig. 9. Using the genetic algorithm based on Prüfer coding, after many experiments the parameters were set as follows: displacement mutation rate 0.3, reverse mutation rate 0.2, point mutation rate 0.2, population size 100, and 100 generations. Fig. 11 shows the convergence process of the genetic algorithm. The Prüfer number of the expected value DCMST T⁰ obtained is {3, 5, 4, 5}. Decoding this Prüfer number gives the DCMST shown in Fig. 10; it consists of the edges ξ13, ξ25, ξ34, ξ45, ξ56, and its weight is
$$E[W(T^0, \xi)] = E[\xi_{13}] + E[\xi_{25}] + E[\xi_{34}] + E[\xi_{45}] + E[\xi_{56}] = 160.$$
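Since the example network has only 6 nodes, the result can be cross-checked by exhaustive enumeration of all 6⁴ = 1296 Prüfer numbers, keeping only the degree-feasible ones. A brute-force sketch of such a check is given below (our own illustration, reusing the weights dictionary and the prufer_decode helper sketched earlier).

```python
from itertools import product

def brute_force_expected_dcmst(n=6, degree_bound=3):
    """Enumerate every Prüfer number, filter by degree, and minimise the expected weight."""
    best = None
    for code in product(range(1, n + 1), repeat=n - 2):
        # Theorem 10: node degree = occurrences in the code + 1.
        if any(code.count(v) + 1 > degree_bound for v in range(1, n + 1)):
            continue
        edges = prufer_decode(n, list(code))
        cost = sum(weights[tuple(sorted(e))].expected() for e in edges)
        if best is None or cost < best[0]:
            best = (cost, code, edges)
    return best

# print(brute_force_expected_dcmst())   # expected weight 160.0 for the data above
```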

Fig. 12. 0.8-DCMST.


Table 2
Results with different confidence levels.

ξij           α=0.1   α=0.2   α=0.3   α=0.4   α=0.5   α=0.6   α=0.7   α=0.8   α=0.9
ξ12           31.0    32.0    33.0    34.0    35.0    36.0    37.0    38.0    39.0
ξ13           24.4    25.8    27.2    28.6    30.0    33.0    36.0    39.0    42.0
ξ14           27.6    29.2    30.8    32.4    34.0    35.6    37.2    38.8    40.4
ξ15           24.4    26.8    29.2    31.6    34.0    36.0    38.0    40.0    42.0
ξ16           28.2    30.4    32.6    34.8    37.0    39.2    41.4    43.6    45.8
ξ23           30.0    32.0    34.0    36.0    38.0    39.6    41.2    42.8    44.4
ξ24           24.8    27.6    30.4    33.2    36.0    38.8    41.6    44.4    47.2
ξ25           27.0    29.0    31.0    33.0    35.0    36.6    38.2    39.8    41.4
ξ26           32.6    34.2    35.8    37.4    39.0    40.6    42.2    43.8    45.4
ξ34           25.0    27.0    29.0    31.0    33.0    34.6    36.2    37.8    39.4
ξ35           33.2    34.4    35.6    36.8    38.0    39.2    40.4    41.6    42.8
ξ36           30.6    32.2    33.8    35.4    37.0    38.2    39.4    40.6    41.8
ξ45           23.0    25.0    27.0    29.0    31.0    33.0    35.0    37.0    39.0
ξ46           28.8    29.6    30.4    31.2    32.0    33.6    35.2    36.8    38.4
ξ56           22.8    24.6    26.4    28.2    30.0    31.8    33.6    35.4    37.2
Φ_T⁰⁻¹(α)     119.4   129.8   140.0   149.8   159.0   168.4   177.8   186.8   194.4

Example 5. Using the genetic algorithm based on Prüfer coding with the same parameter values as in Example 4, the Prüfer number of the uncertain 0.8-DCMST T⁰ is {1, 4, 4, 6}. The DCMST, shown in Fig. 12, consists of the edges ξ12, ξ14, ξ34, ξ46 and ξ56, and its weight is
$$W_{0.8}(T^0, \xi) = \Phi_{12}^{-1}(0.8) + \Phi_{14}^{-1}(0.8) + \Phi_{34}^{-1}(0.8) + \Phi_{46}^{-1}(0.8) + \Phi_{56}^{-1}(0.8) = 38 + 38.8 + 37.8 + 36.8 + 35.4 = 186.8.$$

Comparing Figs. 10 and 12, it is easy to see that the different definitions of the uncertain DCMST lead to different solutions. Finally, we discuss how different confidence levels influence the solution. Table 2 lists several confidence levels and their corresponding solutions, and Figs. 13-15 present the uncertain α-DCMST for α = 0.3, 0.6, and 0.9.

Fig. 13. 0.3-DCMST.

Fig. 14. 0.6-DCMST.

Fig. 15. 0.9-DCMST.

Example 6. Given an uncertain DCMST weight supremum x* = 186.8, from Table 2 we get Φ_{T⁰}(186.8) = 0.8, and the uncertain most chance DCMST consists of the edges ξ12, ξ14, ξ34, ξ46 and ξ56. Fig. 16 shows the uncertainty distribution of W(T⁰, ξ) based on the data in Table 2, which can be used to obtain results for any given supremum x*.

Fig. 16. Distribution of W(T⁰, ξ).
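Example 6 inverts the tree-weight distribution at x* = 186.8. More generally, the chance level α = Φ_{T⁰}(x*) can be recovered numerically by bisecting the strictly increasing map α ↦ Σ Φij⁻¹(α) over the tree edges, as in the sketch below (our own illustration, reusing the weights dictionary and the edge-quantile helpers from the earlier sketches).

```python
def tree_chance(tree_edges, x_star, tol=1e-9):
    """Find alpha such that the tree's alpha-quantile equals x*, by bisection."""
    quantile = lambda a: sum(weights[tuple(sorted(e))].inv(a) for e in tree_edges)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if quantile(mid) < x_star:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

T = [(1, 2), (1, 4), (3, 4), (4, 6), (5, 6)]   # the 0.8-DCMST of Example 5
print(round(tree_chance(T, 186.8), 3))          # 0.8
```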


7. Conclusions

In this paper, we investigated the uncertain expected value DCMST model, the uncertain α-DCMST model, and the uncertain most chance DCMST model for networks with uncertain edge weights. We proved that the former two uncertain programming models can be transformed into DCMST problems on corresponding deterministic networks and then solved by the genetic algorithm. As a dual of the α-DCMST problem, the most chance DCMST problem can be transformed into an α-DCMST problem and then solved in the same way. Finally, numerical examples were given to illustrate these conclusions using the genetic algorithm based on Prüfer encoding. Future directions may be as follows:

1. This work can be applied to the uncertain traveling salesman problem, where the traveling salesman problem is exactly the DCMST problem when all the nodes' degree constraints are 2;
2. It may be worthwhile to study the uncertain leaf-constrained MST problem;
3. The DCMST problem can also be extended to uncertain random networks, where randomness and uncertainty appear simultaneously.

Acknowledgments

This work was supported by the Beijing Higher Education Young Elite Teacher Project (No. YETP0723) and the Fundamental Research Funds for the Central Universities (No. 2016MS65).

References

[1] O. Borüvka, O jistém problému minimálním, Práce Mor. Přírodověd. Spol. v Brně 3 (1926) 37-58.
[2] S.C. Narula, C.A. Ho, Degree-constrained minimum spanning tree, Comput. Oper. Res. 7 (1980) 239-249.
[3] A. Volgenant, A Lagrangean approach to the degree-constrained minimum spanning tree problem, Eur. J. Oper. Res. 39 (3) (1989) 325-331.
[4] L. Caccetta, S.P. Hill, A branch and cut method for the degree-constrained minimum spanning tree problem, Networks 37 (2) (2001) 74-83.
[5] Y.T. Bau, C.K. Ho, H.T. Eve, Ant colony optimization approaches to the degree-constrained minimum spanning tree problem, J. Inf. Sci. Eng. 24 (4) (2008) 1081-1094.
[6] Y.N. Bui, X.H. Deng, C.M. Zrncic, An improved ant-based algorithm for the degree-constrained minimum spanning tree problem, IEEE Trans. Evol. Comput. 16 (2) (2012) 266-278.
[7] J. Knowles, D. Corne, A new evolutionary approach to the degree-constrained minimum spanning tree problem, IEEE Trans. Evol. Comput. 4 (2000) 125-134.
[8] G.R. Raidl, An efficient evolutionary algorithm for the degree-constrained minimum spanning tree problem, IEEE Trans. Evol. Comput. 1 (2000) 104-111.
[9] G.G. Zhou, M. Gen, Approach to the degree-constrained minimum spanning tree problem using genetic algorithm, Eng. Des. Autom. 3 (1997) 157-165.
[10] Y.Z. Mu, G.G. Zhou, Genetic algorithm based on Prüfer number for solving degree-constrained minimum spanning tree problem, Comput. Eng. Appl. 44 (12) (2008) 53-56.
[11] X. Li, Z.F. Qin, Interval portfolio selection models within the framework of uncertainty theory, Econ. Model. 41 (2014) 338-344.
[12] X. Li, S.N. Guo, L.A. Yu, Skewness of fuzzy numbers and its applications in portfolio selection, IEEE Trans. Fuzzy Syst. 23 (6) (2015) 2135-2143.
[13] X. Li, A numerical-integration-based simulation algorithm for expected values of strictly monotone functions of ordinary fuzzy variables, IEEE Trans. Fuzzy Syst. 23 (4) (2015) 964-972.
[14] B. Liu, Uncertainty Theory, second ed., Springer-Verlag, Berlin, 2007.
[15] X.W. Chen, B. Liu, Existence and uniqueness theorem for uncertain differential equations, Fuzzy Optim. Decis. Mak. 9 (1) (2010) 69-81.
[16] X. Gao, Y. Gao, D.A. Ralescu, On Liu's inference rule for uncertain systems, Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 18 (1) (2010) 1-11.
[17] B. Liu, Fuzzy process, hybrid process and uncertain process, J. Uncertain Syst. 2 (1) (2008) 3-16.
[18] B. Liu, Some research problems in uncertainty theory, J. Uncertain Syst. 3 (1) (2009) 3-10.
[19] B. Liu, Theory and Practice of Uncertain Programming, second ed., Springer-Verlag, Berlin, 2009.
[20] B. Liu, Uncertainty Theory: A Branch of Mathematics for Modeling Human Uncertainty, Springer-Verlag, Berlin, 2010.
[21] B. Liu, Uncertain set theory and uncertain inference rule with application to uncertain control, J. Uncertain Syst. 4 (2) (2010) 83-98.
[22] K. Yao, Inclusion relationship of uncertain sets, J. Uncertain. Anal. Appl. 3 (2015), article 13.
[23] K. Yao, Extreme values and integral of solution of uncertain differential equation, J. Uncertain. Anal. Appl. 1 (2013), article 2.
[24] X.F. Yang, J.W. Gao, Uncertain differential games with applications to capitalism, J. Uncertain. Anal. Appl. 1 (2013), article 17.
[25] X.F. Yang, J.W. Gao, Linear-quadratic uncertain differential games with applications to resource extraction problem, IEEE Trans. Fuzzy Syst. (2015), http://dx.doi.org/10.1109/TFUZZ.2015.2486809.
