Linear optimization of bipolar fuzzy relational equations with max-Łukasiewicz composition


Information Sciences 360 (2016) 149–162


Chia-Cheng Liu a, Yung-Yih Lur a, Yan-Kuen Wu b,∗

a Department of Industrial Management, Vanung University, Taoyuan, 320, Taiwan, ROC
b Department of Business Administration, Vanung University, Taoyuan, 320, Taiwan, ROC

Article history: Received 4 June 2015; Revised 22 March 2016; Accepted 28 April 2016; Available online 3 May 2016.

Keywords: Bipolar fuzzy relational equations; Max-Łukasiewicz composition; 0–1 integer linear programming problem

Abstract. According to the literature, a linear optimization problem subject to a system of bipolar fuzzy relational equations with max-Łukasiewicz composition can be translated into a 0–1 integer linear programming problem and solved using integer optimization techniques. However, integer optimization techniques may involve high computational complexity. To improve the computational efficiency of solving such an optimization problem, this paper proves that each component of an optimal solution of such a problem is either the corresponding component's lower bound or its upper bound value. Because of this characteristic, a simple value matrix with some simplified rules can be proposed to first reduce the problem size. A simple solution procedure is then presented for determining optimal solutions without translating the optimization problem into a 0–1 integer linear programming problem. Two examples are provided to illustrate the simplicity and efficiency of the proposed algorithm. © 2016 Elsevier Inc. All rights reserved.

1. Introduction

According to the literature, a system of fuzzy relational equations is usually formulated in matrix form as follows:

$$x \circ A = b,$$

where $x = (x_i)_{1\times m}$, $A = [a_{ij}]_{m\times n}$ and $b = (b_j)_{1\times n}$ are all defined over $[0, 1]$. The operator "$\circ$" represents a well-defined algebraic composition for matrix multiplication. In the generalized theory of uncertainty, determining solutions of fuzzy relational equations can be categorized according to the concept of granular precisiation proposed by Zadeh [27], which has played a major role in fuzzy modeling. Different classes of equations exist, each based on a specific composition of fuzzy relations. The first study of fuzzy relational equations was conducted by Sanchez [21], who considered max-min composition. Since then, fuzzy relational equations based on various compositions have been investigated. Many studies have reported fuzzy relational equations with max-min and max-product compositions. Both compositions are special cases of the max-triangular-norm (max-t-norm) composition. Di Nola et al. [4] demonstrated that the solution set of fuzzy relational equations with continuous max-t-norm composition can be completely determined by the maximum solution and a finite number of minimal solutions. Determining all minimal solutions of fuzzy relational equations has been observed to be closely associated with the set covering problem, which is NP-hard

Corresponding author. Tel.: +886 3 4515811x652. E-mail address: [email protected] (Y.-K. Wu).

http://dx.doi.org/10.1016/j.ins.2016.04.041 0020-0255/© 2016 Elsevier Inc. All rights reserved.


[2,3,18,20,23]. It is worth mentioning that Li and Fang [13] provided a complete survey and a detailed discussion of fuzzy relational equations. They studied the relationships among the generalized logical operators involved in the construction of fuzzy relational equations and introduced a classification of basic fuzzy relational equations. Lin et al. [19] demonstrated that all systems of max-continuous u-norm fuzzy relational equations, for example, max-product, max-continuous Archimedean t-norm and max-arithmetic mean, are essentially equivalent, because they are all equivalent to the set covering problem. Setting up the mathematical model of fuzzy relational inequalities with addition-min composition, Li and Yang [17] first considered it to model a data transmission mechanism in BitTorrent-like peer-to-peer file-sharing systems. They discussed the solution properties of fuzzy relational inequalities with addition-min composition and proposed an algorithm to search for minimal solutions.

A typical framework of linear optimization subject to a system of fuzzy relational equations with different algebraic operations has been proposed in the literature. By far the most frequently studied aspect is the determination of a minimizer of a linear objective function under the max-min composition [1,8]. Thus, it is an almost standard approach to translate this type of problem into a corresponding 0–1 integer linear programming problem, which is then solved using a branch-and-bound method [5,24]. Some studies have considered more general operators in linear optimization, replacing the max-min composition with a max-t-norm composition [10,14,22], max-average composition [11,25], or max-star composition [9,12]. Optimization problems subject to various versions of fuzzy relational inequalities can be found in the literature as well. Feng et al.
[6] investigated a class of nonlinear and non-convex optimization problems subject to a system of mixed fuzzy relational equations with max-min and max-average compositions. They presented some properties of this optimization problem and proposed a polynomial-time algorithm for finding optimal solutions. Yang [26] considered the problem of minimizing a linear objective function subject to fuzzy relational inequalities with addition-min composition. Yang discussed some properties of fuzzy relational inequalities with addition-min composition and then utilized the pseudo-minimal-index algorithm for solving this optimization system.

Recently, Freson et al. [7] first considered a linear optimization problem subject to a system of bipolar fuzzy relational equations with max-min composition, in order to capture antagonistic effects in this new optimization problem. For instance, consider suppliers who want to optimize public awareness of their products and attribute a degree of appreciation to their products. Such a degree of appreciation can be denoted by a real number $x_i$ in the unit interval $[0, 1]$, whose complement $\tilde{x}_i = 1 - x_i$ in $[0, 1]$ denotes the degree of disappreciation. Generally, when the positive effect of $x_i$ increases, the negative effect of $\tilde{x}_i = 1 - x_i$ decreases; this is called the bipolar character. The bipolar fuzzy relational equations thus contain the decision vector and its negation simultaneously. Motivated by Freson et al. [7], Li and Liu [16] considered the linear optimization problem with bipolar max-Łukasiewicz equation constraints and translated this problem into a 0–1 integer linear programming problem. The linear optimization problem subject to a system of bipolar fuzzy relational equations with max-Łukasiewicz composition proposed by Li and Liu [16] can be formulated as follows:

$$\text{Minimize } Z(x) = \sum_{i=1}^{m} c_i x_i \qquad (1)$$

$$\text{subject to } x \in X(A^+, A^-, b) := \{\, x \in [0,1]^m \mid x \circ A^+ \vee \tilde{x} \circ A^- = b \,\},$$

where $c_i \in \mathbb{R}$ is the cost coefficient associated with the variable $x_i$; $x = (x_i)_{1\times m}$ and $\tilde{x} = (\tilde{x}_i)_{1\times m}$ are variable vectors with $x_i \in [0,1]$; $\tilde{x}_i = 1 - x_i$ denotes the bipolar character; $A^+ = [a^+_{ij}]_{m\times n}$ and $A^- = [a^-_{ij}]_{m\times n}$ are $m \times n$ non-negative matrices with $a^+_{ij} \le 1$ and $a^-_{ij} \le 1$; $b = (b_j)_{1\times n}$ is an $n$-dimensional vector with $b_j \in [0,1]$; and $X(A^+, A^-, b)$ denotes the solution set of Model (1). The notation "$\vee$" denotes the max operation and "$\circ$" represents the max-Łukasiewicz composition. If either $A^+$ or $A^-$ is the zero matrix, the constraint part of Model (1) degenerates into the unipolar max-Łukasiewicz fuzzy relational equations $x \circ A^+ = b$ or $\tilde{x} \circ A^- = b$, respectively. Essentially, the constraint part of Model (1), formed by the bipolar fuzzy relational equations with max-Łukasiewicz composition, involves determining a set of solution vectors $x = (x_i)_{i\in I}$ such that

$$\max_{i\in I}\bigl\{\max\{x_i + a^+_{ij} - 1,\, 0\},\ \max\{\tilde{x}_i + a^-_{ij} - 1,\, 0\}\bigr\} = b_j, \quad j \in J, \qquad (2)$$

where the index sets are $I = \{1, 2, \ldots, m\}$ and $J = \{1, 2, \ldots, n\}$. The linear objective function $Z(x)$ in Model (1) increases or decreases in each of its arguments. Using a change of variables, Freson et al. [7] effectively transformed the objective function into an increasing function. Henceforth, $c_i \ge 0$, $\forall i \in I$, is assumed.

Li and Liu [16] determined that checking the consistency of a system of bipolar fuzzy relational equations of the form (2) is an NP-complete problem. Following the studies of Li and Jin [15] and Li and Liu [16], the optimization problem of Model (1) can be translated into a 0–1 integer linear programming problem, which is then solved using well-developed techniques. Although these integer optimization techniques can be used to solve Model (1), they may involve high computational complexity. Motivated by improving the computational efficiency of solving such an optimization problem, this paper reveals a necessary condition for optimal solutions of Model (1). This necessary condition shows that each component of an optimal solution of Model (1) is either the corresponding component's lower bound or its upper bound value. Moreover, because of this necessary condition, some rules are proposed to preassign values to some decision variables, and often Model


(1) could be reduced quickly when determining optimal solutions. Numerical examples illustrate that the proposed solution procedure can easily determine optimal solutions even without translating such an optimization problem into a 0–1 integer linear programming problem.

The rest of the paper is organized as follows. Section 2 presents some properties of bipolar fuzzy relational equations with max-Łukasiewicz composition. According to these properties, characteristics are derived for the objective function of Model (1). Based on these characteristics, Section 3 presents rules to reduce the problem during the process of finding an optimal solution, and a procedure for finding an optimal solution is summarized. Section 4 describes two numerical examples to illustrate the procedure. Conclusions are given in Section 5.

2. Preliminary properties

For a system of fuzzy relational equations with continuous max-t-norm composition, a well-known property exists according to which its solution set, if non-empty, can be completely determined by a unique maximum solution and a finite number of minimal solutions. However, this structural property cannot be extended to the solution set of bipolar fuzzy relational equations because (2) contains the decision vector and its negation simultaneously. To investigate the properties of the solution set $X(A^+, A^-, b)$ of the bipolar fuzzy relational equations with max-Łukasiewicz composition in (2), the following definition and properties are given.

Definition 1. For any solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$ of (2), $x_i$ is called a binding variable for the $j$th bipolar fuzzy relational equation if $\max\{x_i + a^+_{ij} - 1, 0\} = b_j$ or $\max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j$ holds true for some $j \in J$. The sets $J(x_i) := \{j \in J \mid \max\{x_i + a^+_{ij} - 1, 0\} = b_j\}$ and $J(\tilde{x}_i) := \{j \in J \mid \max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j\}$ denote the binding sets of the binding variable $x_i$.

A feasible solution of the bipolar fuzzy relational equations with max-Łukasiewicz composition in (2) is a vector $x = (x_i)_{i\in I}$ that satisfies all equations. According to Definition 1, determining a solution of (2) can be viewed as the selection of binding variables, via their binding sets, that satisfy all equations. Furthermore, if $x_i$ is binding in the $j$th equation, $J(x_i) \neq \emptyset$ or $J(\tilde{x}_i) \neq \emptyset$, and the following equations hold true:

$$\max\{x_i + a^+_{ij} - 1, 0\} = b_j \quad \text{or} \quad \max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j.$$

Lemma 1. If in the $j$th equation $a^+_{ij} < b_j$ and $a^-_{ij} < b_j$ for each $i \in I$ in (2), the solution set $X(A^+, A^-, b)$ is empty.

Proof. Suppose, conversely, that the solution set $X(A^+, A^-, b)$ is nonempty. Then a feasible solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b)$ exists and the following equations hold true for some $i \in I$:

$$\max\{x_i + a^+_{ij} - 1, 0\} = b_j \quad \text{or} \quad \max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j.$$

Because $a^+_{ij} < b_j$ and $a^-_{ij} < b_j$ for each $i \in I$, we discuss the following two cases to show that the assumption leads to a contradiction:

Case 1. If $b_j = 0$, then $a^+_{ij} < b_j = 0$ and $a^-_{ij} < b_j = 0$, which contradicts $a^+_{ij}, a^-_{ij} \in [0, 1]$.

Case 2. If $b_j > 0$ and $x = (x_i)_{i\in I} \in X(A^+, A^-, b)$ is a feasible solution, then

$$\max\{x_i + a^+_{ij} - 1, 0\} = b_j \quad \text{or} \quad \max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j.$$

If $\max\{x_i + a^+_{ij} - 1, 0\} = b_j > 0$, then $x_i + a^+_{ij} - 1 = b_j$, such that $x_i = 1 - a^+_{ij} + b_j > 1$, which contradicts $x_i \in [0, 1]$. Conversely, if $\max\{\tilde{x}_i + a^-_{ij} - 1, 0\} = b_j > 0$, then $\tilde{x}_i + a^-_{ij} - 1 = b_j$, such that $\tilde{x}_i = 1 - a^-_{ij} + b_j > 1$, which contradicts $\tilde{x}_i \in [0, 1]$. □

According to Lemma 1, we can conclude that if the solution set $X(A^+, A^-, b)$ of (2) is nonempty, then for each $j \in J$, $a^+_{ij} \ge b_j$ or $a^-_{ij} \ge b_j$ for some $i \in I$; that is, $b_j \le \max_{i\in I}\{a^+_{ij}, a^-_{ij}\}$ must hold true.
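The max-Łukasiewicz composition in (2) is simple to evaluate directly. The following Python sketch (the helper names are ours, not from the paper) checks whether a given vector $x$ satisfies the bipolar system:

```python
def luk(a, b):
    """Lukasiewicz t-norm: max(a + b - 1, 0)."""
    return max(a + b - 1.0, 0.0)

def is_feasible(x, A_pos, A_neg, b, tol=1e-9):
    """Check whether x in [0,1]^m satisfies the bipolar system (2):
    max_i max(luk(x_i, a+_ij), luk(1 - x_i, a-_ij)) == b_j for every j."""
    m, n = len(A_pos), len(A_pos[0])
    for j in range(n):
        lhs = max(max(luk(x[i], A_pos[i][j]), luk(1.0 - x[i], A_neg[i][j]))
                  for i in range(m))
        if abs(lhs - b[j]) > tol:
            return False
    return True
```

A small tolerance is used for the equality test because the arithmetic is carried out in floating point.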

Lemma 2. If $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$ is a feasible solution of (2), then $\max_{j\in J}\{a^-_{ij} - b_j, 0\} \le x_i \le \min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}$, $\forall i \in I$.

Proof. For any solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$,

$$\max_{i\in I}\bigl\{\max\{x_i + a^+_{ij} - 1, 0\},\ \max\{\tilde{x}_i + a^-_{ij} - 1, 0\}\bigr\} = b_j, \quad j \in J.$$

This implies that $\max\{x_i + a^+_{ij} - 1, 0\} \le b_j$ and $\max\{\tilde{x}_i + a^-_{ij} - 1, 0\} \le b_j$, for each $i \in I$ and $j \in J$.

Consider the situation where $\max\{x_i + a^+_{ij} - 1, 0\} \le b_j$ for each $j \in J$. Then $x_i + a^+_{ij} - 1 \le b_j$ and $x_i \le 1 - a^+_{ij} + b_j$, $i \in I$, for each $j \in J$, such that $x_i \le \min_{j\in J}\{1 - a^+_{ij} + b_j\}$, $\forall i \in I$.

The other situation is where $\max\{\tilde{x}_i + a^-_{ij} - 1, 0\} \le b_j$ for each $j \in J$. Then $\tilde{x}_i + a^-_{ij} - 1 = (1 - x_i) + a^-_{ij} - 1 \le b_j$ and $x_i \ge a^-_{ij} - b_j$, $i \in I$, for each $j \in J$, such that $x_i \ge \max_{j\in J}\{a^-_{ij} - b_j\}$, $\forall i \in I$.

Combining the results of these two situations with each variable $x_i \in [0, 1]$, $i \in I$,

$$\max_{j\in J}\{a^-_{ij} - b_j, 0\} \le x_i \le \min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}, \quad \forall i \in I. \qquad \square$$
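The bounds of Lemma 2 can be computed in a single pass over the matrices; a minimal sketch (the helper name `variable_bounds` is ours):

```python
def variable_bounds(A_pos, A_neg, b):
    """Per-variable bounds from Lemma 2:
    lower_i = max_j max(a-_ij - b_j, 0)
    upper_i = min_j min(1 - a+_ij + b_j, 1)."""
    m, n = len(A_pos), len(b)
    lower = [max(max(A_neg[i][j] - b[j] for j in range(n)), 0.0) for i in range(m)]
    upper = [min(min(1.0 - A_pos[i][j] + b[j] for j in range(n)), 1.0) for i in range(m)]
    return lower, upper
```

Note that any feasible solution of (2) must lie componentwise between these two vectors, although the bound vectors themselves need not be solutions.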

Lemma 2 shows that if $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$ is a feasible solution of (2), the value of each variable $x_i$ is bounded between $\max_{j\in J}\{a^-_{ij} - b_j, 0\}$ and $\min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}$. These can be called the lower and upper bounds of variable $x_i$, denoted by $\underline{x}_i$ and $\bar{x}_i$, respectively. The lower bound $\underline{x}_i = \max_{j\in J}\{a^-_{ij} - b_j, 0\}$ and the upper bound $\bar{x}_i = \min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}$ of variable $x_i$, $i \in I$, can be easily computed, but they may not be solutions of (2).

Lemma 3. If $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$ is a feasible solution of (2), then $\max_{i\in I}\{\frac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\} \le b_j \le \max_{i\in I}\{a^+_{ij}, a^-_{ij}\}$ for all $j \in J$.

Proof. According to Lemma 2, if $x = (x_i)_{i\in I} \in X(A^+, A^-, b) \neq \emptyset$ is a feasible solution of (2), then $\underline{x}_i = \max_{j\in J}\{a^-_{ij} - b_j, 0\} \le x_i \le \bar{x}_i = \min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}$, $i \in I$. This implies that for each $j \in J$ the following inequalities hold true:

$$a^-_{ij} - b_j \le \underline{x}_i \quad \text{and} \quad \bar{x}_i \le 1 - a^+_{ij} + b_j, \quad \text{for all } i \in I.$$

Because $\underline{x}_i \le \bar{x}_i$, $a^-_{ij} - b_j \le 1 - a^+_{ij} + b_j$, $i \in I$, such that $\frac{1}{2}(a^+_{ij} + a^-_{ij} - 1) \le b_j$, $i \in I$. Hence, $\max_{i\in I}\{\frac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\} \le b_j$ for all $j \in J$. In addition, $b_j \le \max_{i\in I}\{a^+_{ij}, a^-_{ij}\}$ according to Lemma 1. Hence,

$$\max_{i\in I}\left\{\frac{1}{2}\bigl(a^+_{ij} + a^-_{ij} - 1\bigr)\right\} \le b_j \le \max_{i\in I}\{a^+_{ij}, a^-_{ij}\} \quad \text{for all } j \in J. \qquad \square$$
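Lemma 3 gives a cheap necessary test for consistency that can be run before any optimization is attempted; a Python sketch (function name is our own):

```python
def consistency_necessary(A_pos, A_neg, b, tol=1e-9):
    """Necessary condition of Lemma 3 for the solution set to be nonempty:
    max_i (a+_ij + a-_ij - 1)/2 <= b_j <= max_i max(a+_ij, a-_ij), all j."""
    m, n = len(A_pos), len(b)
    for j in range(n):
        low = max((A_pos[i][j] + A_neg[i][j] - 1.0) / 2.0 for i in range(m))
        high = max(max(A_pos[i][j], A_neg[i][j]) for i in range(m))
        if not (low <= b[j] + tol and b[j] <= high + tol):
            return False
    return True
```

The test is only necessary, not sufficient: a system passing it may still be inconsistent.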

Lemma 3 shows that if the value of $b_j$ is not in the range from $\max_{i\in I}\{\frac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\}$ to $\max_{i\in I}\{a^+_{ij}, a^-_{ij}\}$ for some $j \in J$, the system (2) is inconsistent. Hence, Lemma 3 can be used to check whether the solution set of (2) is empty.

Theorem 1. Let $x = (x_i)_{i\in I}$ be a solution of (2), and let $\underline{x} = (\underline{x}_i)_{i\in I}$ and $\bar{x} = (\bar{x}_i)_{i\in I}$ represent the vectors of lower and upper bounds, respectively. If $x_i$ is binding in the $j$th equation, then $\bar{x}_i$ or $\underline{x}_i$ is also binding there. Moreover, if $\bar{x}_i$ and $\underline{x}_i$ are non-binding variables, then $x_i$ is non-binding in any solution $x$.

Proof. For any solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b)$,

$$\max_{i\in I}\bigl\{\max\{x_i + a^+_{ij} - 1, 0\},\ \max\{\tilde{x}_i + a^-_{ij} - 1, 0\}\bigr\} = b_j, \quad j \in J.$$

Because of the bipolar character $\tilde{x}_i = 1 - x_i$, the preceding system is equivalent to

$$\max_{i\in I}\bigl\{\max\{x_i + a^+_{ij} - 1, 0\},\ \max\{a^-_{ij} - x_i, 0\}\bigr\} = b_j, \quad j \in J.$$

Hence, $\max\{x_i + a^+_{ij} - 1, 0\} \le b_j$ and $\max\{a^-_{ij} - x_i, 0\} \le b_j$, $\forall j \in J$, hold true for any variable $x_i$, implying that $\max\{\bar{x}_i + a^+_{ij} - 1, 0\} \le b_j$ and $\max\{a^-_{ij} - \bar{x}_i, 0\} \le b_j$, $\forall j \in J$, for $\bar{x}_i$; and $\max\{\underline{x}_i + a^+_{ij} - 1, 0\} \le b_j$ and $\max\{a^-_{ij} - \underline{x}_i, 0\} \le b_j$, $\forall j \in J$, for $\underline{x}_i$. If $x_i$ is binding in the $j$th equation, then $\max\{x_i + a^+_{ij} - 1, 0\} = b_j$ or $\max\{a^-_{ij} - x_i, 0\} = b_j$, for all $j \in J(x_i)$, according to Definition 1. Moreover, $\underline{x}_i \le x_i \le \bar{x}_i$ for any solution $x$. Therefore, the following inequalities hold true:

$$b_j = \max\{x_i + a^+_{ij} - 1, 0\} \le \max\{\bar{x}_i + a^+_{ij} - 1, 0\} \le b_j$$

or

$$b_j = \max\{a^-_{ij} - x_i, 0\} \le \max\{a^-_{ij} - \underline{x}_i, 0\} \le b_j.$$

These results suggest that $\max\{\bar{x}_i + a^+_{ij} - 1, 0\} = b_j$ or $\max\{a^-_{ij} - \underline{x}_i, 0\} = b_j$. Hence, $\bar{x}_i$ or $\underline{x}_i$ is also binding in the $j$th equation.

Conversely, if $\bar{x}_i$ and $\underline{x}_i$ are non-binding variables, then $\max\{\bar{x}_i + a^+_{ij} - 1, 0\} < b_j$ and $\max\{a^-_{ij} - \bar{x}_i, 0\} < b_j$, $\forall j \in J$; and $\max\{\underline{x}_i + a^+_{ij} - 1, 0\} < b_j$ and $\max\{a^-_{ij} - \underline{x}_i, 0\} < b_j$, $\forall j \in J$. These results imply that the following inequalities hold true:

$$\max\{x_i + a^+_{ij} - 1, 0\} \le \max\{\bar{x}_i + a^+_{ij} - 1, 0\} < b_j, \quad \forall j \in J,$$

and

$$\max\{a^-_{ij} - x_i, 0\} \le \max\{a^-_{ij} - \underline{x}_i, 0\} < b_j, \quad \forall j \in J.$$

In other words, $x_i$ is a non-binding variable. □

The result obtained from Theorem 1 shows that if $x_i$ is a binding variable, the binding set of variable $x_i$ satisfies $J(x_i) \subseteq J(\bar{x}_i) \cup J(\underline{x}_i)$.
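The binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$ needed by Theorem 1 follow Definition 1 mechanically; a sketch (function name is ours, columns indexed from 0):

```python
def binding_sets(xi, i, A_pos, A_neg, b, tol=1e-9):
    """Binding sets of Definition 1 for variable i fixed at value xi:
    J(xi)  = {j : max(xi + a+_ij - 1, 0) == b_j}
    J(~xi) = {j : max((1 - xi) + a-_ij - 1, 0) == b_j}."""
    n = len(b)
    J_pos = {j for j in range(n)
             if abs(max(xi + A_pos[i][j] - 1.0, 0.0) - b[j]) < tol}
    J_neg = {j for j in range(n)
             if abs(max((1.0 - xi) + A_neg[i][j] - 1.0, 0.0) - b[j]) < tol}
    return J_pos, J_neg
```

Evaluating this once at $\bar{x}_i$ and once at $\underline{x}_i$ for each $i$ yields all the binding sets that the rules of Section 3 operate on.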


Theorem 2. Let $x^* = (x^*_i)_{i\in I}$ be an optimal solution of Model (1). If the lower bound $\underline{x} = (\underline{x}_i)_{i\in I} \in X(A^+, A^-, b)$ is a solution of (2), then $x^*_i = \underline{x}_i$, $i \in I$.

Proof. According to Lemma 2, for any solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b)$, $\underline{x}_i \le x_i$ for all $i \in I$. Because the lower bound $\underline{x} = (\underline{x}_i)_{i\in I} \in X(A^+, A^-, b)$ is a solution of (2) and the coefficients of the objective function satisfy $c_i \ge 0$, $\forall i \in I$, the following inequality holds true:

$$Z(\underline{x}) = \sum_{i=1}^{m} c_i \underline{x}_i \le Z(x) = \sum_{i=1}^{m} c_i x_i, \quad \text{for all } x \in X(A^+, A^-, b).$$

Hence, $\underline{x} = (\underline{x}_i)_{i\in I}$ is an optimal solution of Model (1) with $x^*_i = \underline{x}_i$, $i \in I$. □

A vector $x = (x_i)_{i\in I}$ that satisfies all equations is a solution of (2). According to Theorem 2, to minimize the objective function of Model (1), assign $x_i = \underline{x}_i$ to be a binding variable such that it satisfies as many equations as possible.

Theorem 3. For any optimal solution $x^* = (x^*_i)_{i\in I}$ of Model (1), the $i$th component of $x^*$ satisfies one of the following conditions:

(1) If $x^*_i$ is binding in the $j$th equation with $b_j > 0$, then $x^*_i = \bar{x}_i$ or $x^*_i = \underline{x}_i$.
(2) If $x^*_i$ is binding in the $j$th equation with $b_j = 0$, or $x^*_i$ is a non-binding variable: (i) $x^*_i = \underline{x}_i$ if $c_i > 0$; and (ii) $\underline{x}_i \le x^*_i \le \bar{x}_i$ if $c_i = 0$.

Proof. (1) According to Theorem 1, if $x^*_i$ is binding in the $j$th equation, then $\bar{x}_i$ or $\underline{x}_i$ is also binding there. In addition, the following equalities hold true:

$$\max\{x^*_i + a^+_{ij} - 1, 0\} = \max\{\bar{x}_i + a^+_{ij} - 1, 0\} = b_j$$

or

$$\max\{a^-_{ij} - x^*_i, 0\} = \max\{a^-_{ij} - \underline{x}_i, 0\} = b_j.$$

Because $b_j > 0$, $x^*_i + a^+_{ij} - 1 = \bar{x}_i + a^+_{ij} - 1 = b_j$ or $a^-_{ij} - x^*_i = a^-_{ij} - \underline{x}_i = b_j$. Hence, $x^*_i = \bar{x}_i$ or $x^*_i = \underline{x}_i$.

(2) According to Theorem 1, if $x^*_i$ is binding in the $j$th equation with $b_j = 0$, the following equalities hold true:

$$\max\{x^*_i + a^+_{ij} - 1, 0\} = \max\{\bar{x}_i + a^+_{ij} - 1, 0\} = b_j = 0$$

and

$$\max\{a^-_{ij} - x^*_i, 0\} = \max\{a^-_{ij} - \underline{x}_i, 0\} = b_j = 0,$$

which implies that $a^-_{ij} \le x^*_i \le 1 - a^+_{ij}$, $\bar{x}_i \le 1 - a^+_{ij}$ and $a^-_{ij} \le \underline{x}_i$. Moreover, according to Lemma 2, for any optimal solution $x^* = (x^*_i)_{i\in I} \in X(A^+, A^-, b)$, $\underline{x}_i \le x^*_i \le \bar{x}_i$ holds. Hence, the following inequality holds true:

$$\underline{x}_i = \max\{a^-_{ij}, \underline{x}_i\} \le x^*_i \le \min\{1 - a^+_{ij}, \bar{x}_i\} = \bar{x}_i.$$

In other words, if $x^*_i$ is binding in the $j$th equation with $b_j = 0$ or $x^*_i$ is a non-binding variable, then $\underline{x}_i \le x^*_i \le \bar{x}_i$. Hence, (i) $x^*_i = \underline{x}_i$ when $c_i > 0$; and (ii) $\underline{x}_i \le x^*_i \le \bar{x}_i$ when $c_i = 0$. □

Theorem 3 reveals a necessary condition for an optimal solution of the bipolar fuzzy relational equations with max-Łukasiewicz composition: for an optimal solution $x^* = (x^*_i)_{i\in I}$, if $x^*_i$ is a non-binding variable, then $x^*_i = \underline{x}_i$; conversely, if $x^*_i$ is a binding variable, then $x^*_i = \underline{x}_i$ or $x^*_i = \bar{x}_i$. Moreover, for the general situation of Model (1), except when $c_i = 0$ and $b_j = 0$ for some $i \in I$ and $j \in J$, Theorem 3 reveals that each component of an optimal solution is either the corresponding component's lower or upper bound value.

3. Rules for reducing the problem

On the basis of the preliminary properties obtained in Section 2, this section employs a simple value matrix to solve Model (1). This matrix is used to propose rules for developing an optimal solution procedure. Theorem 1 shows that for any solution $x = (x_i)_{i\in I} \in X(A^+, A^-, b)$, if $x_i$ is a binding variable, then $\bar{x}_i$ or $\underline{x}_i$ is also binding there; that is, $J(x_i) \subseteq J(\bar{x}_i) \cup J(\underline{x}_i)$. Furthermore, according to Theorem 3, each component of an optimal solution $x^* = (x^*_i)_{i\in I}$ is either $x^*_i = \bar{x}_i$ or $x^*_i = \underline{x}_i$ for all $i \in I$. Based on these properties, selecting appropriate binding variables from the binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$ can provide useful information when searching for an optimal solution of Model (1). Hence, the search is limited to $J(\bar{x}_i)$ and $J(\underline{x}_i)$ for all $i$, and a value matrix $V$ is defined as follows:

 

$$V = \begin{bmatrix} \overline{M} \\ \underline{M} \end{bmatrix}, \qquad (3)$$

where $\overline{M} = (\overline{m}_{ij})_{i\in I, j\in J}$, $\underline{M} = (\underline{m}_{ij})_{i\in I, j\in J}$,

$$\overline{m}_{ij} = \begin{cases} c_i(\bar{x}_i - \underline{x}_i) & \text{if } j \in J(\bar{x}_i), \\ \infty & \text{otherwise}, \end{cases} \qquad \underline{m}_{ij} = \begin{cases} c_i \underline{x}_i & \text{if } j \in J(\underline{x}_i), \\ \infty & \text{otherwise}. \end{cases}$$


Basically, for any solution $x = (x_i)_{i\in I}$, the contribution to the objective function by variable $x_i$ is $c_i \bar{x}_i$ when $x_i = \bar{x}_i$ is a binding variable. The numerical elements $\overline{m}_{ij} = c_i(\bar{x}_i - \underline{x}_i)$, $j \in J(\bar{x}_i)$, in the $i$th row of $\overline{M}$ correspond to the relevant contributions to the objective function from setting $x_i = \bar{x}_i$, because the value of the lower bound $\underline{x}_i$ needs to be considered simultaneously when determining the optimal value of Model (1). Conversely, the numerical elements $\underline{m}_{ij} = c_i \underline{x}_i$, $j \in J(\underline{x}_i)$, in the $i$th row of $\underline{M}$ denote the contributions to the objective function from setting $x_i = \underline{x}_i$.

Depending on the value matrix $V$, this study creates several rules to reduce the problem. The idea underlying the use of rules for reducing the problem is to fix, where possible, the $i$th component of an optimal solution at $\underline{x}_i$ or $\bar{x}_i$. To develop a procedure for determining an optimal solution, the following index sets are given for the value matrix $V$:

$$J_i(\overline{M}) := \{j \in J \mid \overline{m}_{ij} = c_i(\bar{x}_i - \underline{x}_i)\}, \quad J_i(\underline{M}) := \{j \in J \mid \underline{m}_{ij} = c_i \underline{x}_i\}, \quad i \in I;$$
$$I_j(\overline{M}) := \{i \in I \mid \overline{m}_{ij} = c_i(\bar{x}_i - \underline{x}_i)\} \quad \text{and} \quad I_j(\underline{M}) := \{i \in I \mid \underline{m}_{ij} = c_i \underline{x}_i\}, \quad j \in J. \qquad (4)$$

Essentially, the index sets $J_i(\overline{M})$ and $J_i(\underline{M})$ are equivalent to the binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$, respectively. The index sets $I_j(\overline{M})$ and $I_j(\underline{M})$, respectively, indicate which variables of $x = (x_i)_{i\in I}$ may be selected as a binding variable with $x_i = \bar{x}_i$ or $x_i = \underline{x}_i$ in the $j$th equation. By employing the index sets $J_i(\overline{M})$, $J_i(\underline{M})$, $I_j(\overline{M})$, and $I_j(\underline{M})$ of the value matrix $V$, some rules are proposed to fix the components of an optimal solution $x^* = (x^*_i)_{i\in I}$ by setting $x^*_i = \underline{x}_i$ or $x^*_i = \bar{x}_i$.

Rule 1. If $\bigcup_{i=1}^{m} J_i(\underline{M}) = J$, then $\underline{x}_i$ can be assigned to the $i$th component of an optimal solution $x^*_i$ for all $i \in I$.

Proof. $\bigcup_{i=1}^{m} J_i(\underline{M}) = J$ indicates that $\underline{x} = (\underline{x}_i)_{i\in I}$ is a solution of (2). According to Theorem 2, $\underline{x} = (\underline{x}_i)_{i\in I}$ is an optimal solution of Model (1). □

Rule 2-1. If $I_j(\overline{M}) = \emptyset$ and the singleton $I_j(\underline{M}) = \{i\}$ exist for some $j \in J$, then $\underline{x}_i$ can be assigned to the $i$th component of any optimal solution.

Proof. The index set $I_j(\underline{M})$ indicates which variables of $x = (x_i)_{i\in I}$ may be selected as a binding variable with $x_i = \underline{x}_i$ in the $j$th equation. $I_j(\overline{M}) = \emptyset$ and the singleton $I_j(\underline{M}) = \{i\}$ reveal that the $j$th equation can only be satisfied by the variable $x_i$. That is, the $i$th component of any solution must be binding in the $j$th equation with $x_i = \underline{x}_i$. According to Theorem 3, $x_i = \underline{x}_i$. □

Based on Rule 2-1, $I_j(\overline{M}) = \emptyset$ and the singleton $I_j(\underline{M}) = \{i\}$ denote that the variable $x_i$ must be binding in the $j$th equation, which allows $\underline{x}_i$ to be assigned to the $i$th component of any optimal solution. In addition, according to Theorem 1, if $x_i$ is binding in the $j$th equation, $\underline{x}_i$ is also binding there. Hence, for a binding variable $x_i = \underline{x}_i$, $J_i(\underline{M}) = J(\underline{x}_i)$, and the $j$th equation for all $j \in J_i(\underline{M})$ can be simultaneously satisfied by $x_i = \underline{x}_i$. That is, if Rule 2-1 is used for determining the optimal solution, the $j$th column of the value matrix $V$ for all $j \in J_i(\underline{M})$ can be deleted. In addition, the corresponding rows of $x_i$ in $\overline{M}$ and $\underline{M}$ can be deleted.

Rule 2-2. If the singleton $I_j(\overline{M}) = \{i\}$ and $I_j(\underline{M}) = \emptyset$ exist for some $j \in J$, then $\bar{x}_i$ can be assigned to the $i$th component of any optimal solution.

Proof. Arguments similar to those in the proof of Rule 2-1 hold true for the singleton $I_j(\overline{M}) = \{i\}$ and $I_j(\underline{M}) = \emptyset$. Hence, $\bar{x}_i$ can be assigned to the $i$th component of any optimal solution. □

If Rule 2-2 is used for determining the optimal solution, the $j$th column of the value matrix $V$ for all $j \in J_i(\overline{M})$ can be deleted. In addition, the corresponding rows of $x_i$ in $\overline{M}$ and $\underline{M}$ can be deleted.

Rule 3. If $I_j(\overline{M}) \cap I_j(\underline{M}) = \{i\}$ for some $j \in J$ in the value matrix $V$, the $j$th column of $V$ can be deleted.

Proof. $I_j(\overline{M}) \cap I_j(\underline{M}) = \{i\}$ indicates that the $j$th equation can be satisfied both by $x_i = \bar{x}_i$ and by $x_i = \underline{x}_i$. Furthermore, Theorem 3 reveals that each component of an optimal solution is either the lower bound $\underline{x}_i$ or the upper bound $\bar{x}_i$. That is, regardless of the final result yielded by the process of finding the optimal solution, the $i$th component must be either $x_i = \bar{x}_i$ or $x_i = \underline{x}_i$, and the $j$th equation must be satisfied. Hence, the $j$th column of $V$ can be deleted. □

Rule 4. If $J_i(\overline{M}) = \emptyset$ and $J_i(\underline{M}) \neq \emptyset$ for some $i \in I$ in the value matrix $V$, an optimal solution $x^* = (x^*_i)_{i\in I}$ exists with $x^*_i = \underline{x}_i$.

Proof. $J_i(\overline{M}) = \emptyset$ denotes that no equation can be satisfied by $x_i = \bar{x}_i$. Conversely, $J_i(\underline{M}) \neq \emptyset$ reveals that the $j$th equation for $j \in J_i(\underline{M})$ can be satisfied by $x_i = \underline{x}_i$. According to Theorem 2, we can assign $x^*_i = \underline{x}_i$ for any optimal solution. □

If Rule 4 is used for determining the optimal solution, the $j$th column of the value matrix $V$ for all $j \in J_i(\underline{M})$ can be deleted. In addition, the corresponding rows of $x_i$ in $\overline{M}$ and $\underline{M}$ can be deleted.

Rule 5. If $J_i(\overline{M}) = \emptyset$ and $J_i(\underline{M}) = \emptyset$ for some $i \in I$ in the value matrix $V$, an optimal solution $x^* = (x^*_i)_{i\in I}$ exists with $x^*_i = \underline{x}_i$.

Proof. $J_i(\overline{M}) = \emptyset$ and $J_i(\underline{M}) = \emptyset$ denote that no equation can be satisfied by $x_i = \bar{x}_i$ or $x_i = \underline{x}_i$. That is, $x_i$ is a non-binding variable. According to Theorem 3, $x^*_i = \underline{x}_i$ can be assigned for the optimal solution $x^*$. □

If Rule 5 is used for determining the optimal solution, the corresponding rows of $x_i$ in $\overline{M}$ and $\underline{M}$ can be deleted.
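The value matrix (3) and the singleton tests of Rules 2-1 and 2-2 are straightforward to mechanize. A Python sketch (names and data layout are ours; `INF` marks the "otherwise" entries of (3)):

```python
INF = float("inf")

def build_value_matrix(c, lower, upper, J_upper, J_lower, n):
    """Value matrix V = (Mbar; Munder) of (3). J_upper[i] / J_lower[i] are the
    binding sets J(xbar_i) / J(xunder_i), given as sets of column indices."""
    m = len(c)
    Mbar = [[c[i] * (upper[i] - lower[i]) if j in J_upper[i] else INF
             for j in range(n)] for i in range(m)]
    Munder = [[c[i] * lower[i] if j in J_lower[i] else INF
               for j in range(n)] for i in range(m)]
    return Mbar, Munder

def singleton_fixes(Mbar, Munder):
    """Rules 2-1 / 2-2: a column whose only finite entry sits in Munder
    (resp. Mbar) forces that variable to its lower (resp. upper) bound."""
    m, n = len(Mbar), len(Mbar[0])
    fixed = {}
    for j in range(n):
        ups = [i for i in range(m) if Mbar[i][j] != INF]
        lows = [i for i in range(m) if Munder[i][j] != INF]
        if not ups and len(lows) == 1:
            fixed[lows[0]] = "lower"   # Rule 2-1
        elif not lows and len(ups) == 1:
            fixed[ups[0]] = "upper"    # Rule 2-2
    return fixed
```

After each application of a rule, the corresponding rows and columns would be deleted and the scan repeated, mirroring the reduction process described above.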


Rule 6. If $J_t(\underline{M}) = \emptyset$ and $\emptyset \neq J_s(\overline{M}) \subseteq J_t(\overline{M})$ for some $s, t \in I$ with $c_t(\bar{x}_t - \underline{x}_t) < c_s(\bar{x}_s - \underline{x}_s)$ in $\overline{M}$, an optimal solution $x^* = (x^*_i)_{i\in I}$ exists with $x^*_s = \underline{x}_s$.

Proof. Theorem 3 indicates that for an optimal solution $x^* = (x^*_i)_{i\in I}$, if $x^*_i$ is a non-binding variable, then $x^*_i = \underline{x}_i$; conversely, if $x^*_i$ is a binding variable, then $x^*_i = \underline{x}_i$ or $x^*_i = \bar{x}_i$. The variable $x^*_s$ in a solution is either non-binding or binding. Hence, if $x^*_s$ is a non-binding variable, or $x^*_s$ is only binding in $\underline{M}$ with $x^*_s = \underline{x}_s$, the proof is complete. Suppose that $x^*_s$ is a binding variable with $x^*_s = \bar{x}_s$; we show that a solution with a more favorable objective value than $x^*$ can be established, so that this assumption leads to a contradiction.

Let the variable $x^*_s$ be binding with $x^*_s = \bar{x}_s$, $\forall j \in J(\bar{x}_s)$. Because $J_s(\overline{M}) \subseteq J_t(\overline{M})$ in $\overline{M}$, this is equivalent to $J(\bar{x}_s) \subseteq J(\bar{x}_t)$. Assume that the optimal solution $x^* = (\ldots, x^*_s, x^*_t, \ldots)$ contains $x^*_s = \bar{x}_s$ and $x^*_t = \underline{x}_t$. A vector $x'$ equal to $x^*$, except for $x'_s = \underline{x}_s$ and $x'_t = \bar{x}_t$, can then be established. Because $J(\bar{x}_s) \subseteq J(\bar{x}_t)$, the constraints satisfied by $x^*_s = \bar{x}_s$ can be sustained by $x'_t = \bar{x}_t$. In addition, $J_t(\underline{M}) = \emptyset$ shows that no equation can be satisfied by $x_t = \underline{x}_t$. Hence, $x'$ is a solution to the problem. Moreover,

$$Z(x^*) - Z(x') = \sum_{i=1}^{m} c_i x^*_i - \sum_{i=1}^{m} c_i x'_i = (c_s \bar{x}_s + c_t \underline{x}_t) - (c_s \underline{x}_s + c_t \bar{x}_t) = c_s(\bar{x}_s - \underline{x}_s) - c_t(\bar{x}_t - \underline{x}_t) > 0.$$

This inequality contradicts the optimality assumption on $x^*$. Therefore, if $c_t(\bar{x}_t - \underline{x}_t) < c_s(\bar{x}_s - \underline{x}_s)$, an optimal solution $x^* = (x^*_i)_{i\in I}$ exists with $\underline{x}_s$ in its $s$th component. □

Essentially, if Rule 6 is used for determining the optimal solution, the $j$th column of the value matrix $V$ for all $j \in J_s(\overline{M})$ can be deleted because $x^*_s = \underline{x}_s$. In addition, the corresponding rows of $x_s$ in $\overline{M}$ and $\underline{M}$ can be deleted.

Using these rules on the value matrix $V$, we present a procedure to determine the optimal solution of Model (1). The idea behind the procedure is to apply Rules 1–6 to fix as many variable values as possible so that some components of optimal solutions are determined; the problem size is then reduced by eliminating the corresponding rows and columns from the matrix $V$. When the problem size cannot be reduced further by any rule, the remainder of the problem can be translated into a 0–1 integer linear programming problem according to Li and Liu [16]. The procedure for finding optimal solutions of Model (1) is summarized as follows:

Step 1. Compute the lower bound $\underline{x}_i = \max_{j\in J}\{a^-_{ij} - b_j, 0\}$ and the upper bound $\bar{x}_i = \min_{j\in J}\{1 - a^+_{ij} + b_j, 1\}$ of variable $x_i$, for all $i \in I$.
Step 2. Check for an empty solution set by using Lemma 3. If the solution set of the problem is empty, stop.
Step 3. Compute the binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$ for all $i \in I$ by using Definition 1. Generate the value matrix $V$ by using (3) and produce the index sets $J_i(\overline{M})$ and $J_i(\underline{M})$ for $i \in I$, and $I_j(\overline{M})$ and $I_j(\underline{M})$ for $j \in J$, by using (4).
Step 4. Follow the sequence of rules (Rules 1–6) as far as possible to determine the values of as many decision variables as possible. Delete the corresponding rows and/or columns in $V$ (the size of the problem is thereby reduced), and denote the remaining submatrix by $V$ again. If no rows or columns remain in $V$, assign the corresponding lower bound to each undetermined variable. If all decision variables have been set, go to Step 6.
Step 5. Translate the (remaining) value matrix $V$ into a 0–1 integer linear programming problem. Employ the branch-and-bound method to determine the remaining undecided decision variables. Determine the optimal solution and stop the procedure.
Step 6. Generate optimal solutions for the problem and determine the optimal value.
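As a cross-check of the structural result underlying this procedure (and independent of it), Theorem 3 implies that when every $c_i > 0$ and every $b_j > 0$, small instances can be solved by brute force over the $2^m$ vectors whose components are lower or upper bounds. A Python sketch, with an illustrative instance of our own:

```python
from itertools import product

def enumerate_optimum(c, A_pos, A_neg, b, tol=1e-9):
    """Brute-force search over corner vectors (exponential in m; for
    illustration only). By Theorem 3, when all c_i > 0 and all b_j > 0,
    each component of an optimal solution is its lower or upper bound."""
    m, n = len(A_pos), len(b)
    lower = [max(max(A_neg[i][j] - b[j] for j in range(n)), 0.0) for i in range(m)]
    upper = [min(min(1.0 - A_pos[i][j] + b[j] for j in range(n)), 1.0) for i in range(m)]

    def feasible(x):
        for j in range(n):
            lhs = max(max(max(x[i] + A_pos[i][j] - 1.0, 0.0),
                          max(1.0 - x[i] + A_neg[i][j] - 1.0, 0.0))
                      for i in range(m))
            if abs(lhs - b[j]) > tol:
                return False
        return True

    best = None
    for x in product(*zip(lower, upper)):  # all 2^m lower/upper combinations
        if feasible(x):
            z = sum(ci * xi for ci, xi in zip(c, x))
            if best is None or z < best[0]:
                best = (z, x)
    return best  # (optimal value, solution), or None if no corner is feasible
```

For instance, with the assumed data `A_pos = [[0.9], [0.8]]`, `A_neg = [[0.6], [0.7]]`, `b = [0.5]` and `c = [1, 1]`, the search returns $Z = 0.3$ at $x = (0.1, 0.2)$; the rule-based procedure above avoids this exponential enumeration.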

4. Numerical examples In this section, two optimization examples subjected to a system of bipolar fuzzy relational equations with maxŁukasiewicz composition are provided to illustrate the proposed solution procedure. The first example is adopted from Li and Liu [16]. According to their method, a 0–1 integer linear programming problem is necessary for solving this example. By applying our procedure, the proposed rules can be used to reduce the problem and quickly determine optimal solutions. The second example demonstrates that the proposed procedure enables us to find an optimal solution without translating the problem into a 0–1 integer linear programming problem. Example 1. Consider the following optimization problem subjected to a system of bipolar fuzzy relational equations with max-Łukasiewicz composition:

Minimize $Z(x) = x_1 + x_2 + x_3 + x_4$
subject to $x \circ A^+ \vee \tilde{x} \circ A^- = b$, $0 \le x_i \le 1$, $i \in I = \{1, 2, 3, 4\}$,


where $x = (x_1, x_2, x_3, x_4)$, $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3, \tilde{x}_4)$, $\tilde{x}_i = 1 - x_i$, $i \in I$,

$$A^+ = \begin{bmatrix} 0.9 & 0.8 & 0.8 & 0.5 \\ 0.8 & 0.9 & 0.6 & 0.6 \\ 0.9 & 0.7 & 0.8 & 0.4 \\ 0.6 & 1.0 & 0.4 & 0.4 \end{bmatrix}, \qquad A^- = \begin{bmatrix} 0.9 & 0.7 & 0.6 & 0.6 \\ 0.7 & 0.9 & 0.8 & 0.9 \\ 0.8 & 0.7 & 0.9 & 0.8 \\ 0.9 & 0.8 & 0.9 & 0.4 \end{bmatrix},$$

and $b = (0.8, 0.8, 0.7, 0.6)$.
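For reference, the left-hand side $x \circ A^+ \vee \tilde{x} \circ A^-$ can be evaluated directly with the Łukasiewicz t-norm $T_L(a, c) = \max(a + c - 1, 0)$. The sketch below is our own illustration of the composition, not code from the paper:

```python
A_pos = [[0.9, 0.8, 0.8, 0.5], [0.8, 0.9, 0.6, 0.6],
         [0.9, 0.7, 0.8, 0.4], [0.6, 1.0, 0.4, 0.4]]
A_neg = [[0.9, 0.7, 0.6, 0.6], [0.7, 0.9, 0.8, 0.9],
         [0.8, 0.7, 0.9, 0.8], [0.9, 0.8, 0.9, 0.4]]
b = [0.8, 0.8, 0.7, 0.6]

def luk(a, c):
    """Lukasiewicz t-norm T_L(a, c) = max(a + c - 1, 0)."""
    return max(a + c - 1.0, 0.0)

def bipolar_compose(x, A_pos, A_neg):
    """j-th component of x o A+ v x~ o A-:
    max over i of max(T_L(x_i, a+_ij), T_L(1 - x_i, a-_ij))."""
    I, J = range(len(x)), range(len(A_pos[0]))
    return [max(max(luk(x[i], A_pos[i][j]), luk(1.0 - x[i], A_neg[i][j]))
                for i in I) for j in J]

# The midpoint vector is not a solution of this system:
print([round(v, 2) for v in bipolar_compose([0.5, 0.5, 0.5, 0.5], A_pos, A_neg)])
# -> [0.4, 0.5, 0.4, 0.4], which differs from b = [0.8, 0.8, 0.7, 0.6]
```

This evaluator is reused implicitly whenever a candidate solution is checked against the system below.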

Step 1. Compute the lower bound $\underline{x}_i = \max_{j \in J}\{a^-_{ij} - b_j, 0\}$ and the upper bound $\bar{x}_i = \min_{j \in J}\{1 - a^+_{ij} + b_j, 1\}$ of variable $x_i$ for all $i \in I$. Example 1 includes the index sets $I = \{1, 2, 3, 4\}$ and $J = \{1, 2, 3, 4\}$. The lower and upper bounds are computed in detail as follows:

$\underline{x}_1 = \max_{j \in J}\{a^-_{1j} - b_j, 0\} = \max\{0.9 - 0.8,\ 0.7 - 0.8,\ 0.6 - 0.7,\ 0.6 - 0.6,\ 0\} = 0.1$,
$\underline{x}_2 = \max_{j \in J}\{a^-_{2j} - b_j, 0\} = \max\{0.7 - 0.8,\ 0.9 - 0.8,\ 0.8 - 0.7,\ 0.9 - 0.6,\ 0\} = 0.3$,
$\underline{x}_3 = \max_{j \in J}\{a^-_{3j} - b_j, 0\} = \max\{0.8 - 0.8,\ 0.7 - 0.8,\ 0.9 - 0.7,\ 0.8 - 0.6,\ 0\} = 0.2$,
$\underline{x}_4 = \max_{j \in J}\{a^-_{4j} - b_j, 0\} = \max\{0.9 - 0.8,\ 0.8 - 0.8,\ 0.9 - 0.7,\ 0.4 - 0.6,\ 0\} = 0.2$,

and

$\bar{x}_1 = \min_{j \in J}\{1 - a^+_{1j} + b_j, 1\} = \min\{1 - 0.9 + 0.8,\ 1 - 0.8 + 0.8,\ 1 - 0.8 + 0.7,\ 1 - 0.5 + 0.6,\ 1\} = 0.9$,
$\bar{x}_2 = \min_{j \in J}\{1 - a^+_{2j} + b_j, 1\} = \min\{1 - 0.8 + 0.8,\ 1 - 0.9 + 0.8,\ 1 - 0.6 + 0.7,\ 1 - 0.6 + 0.6,\ 1\} = 0.9$,
$\bar{x}_3 = \min_{j \in J}\{1 - a^+_{3j} + b_j, 1\} = \min\{1 - 0.9 + 0.8,\ 1 - 0.7 + 0.8,\ 1 - 0.8 + 0.7,\ 1 - 0.4 + 0.6,\ 1\} = 0.9$,
$\bar{x}_4 = \min_{j \in J}\{1 - a^+_{4j} + b_j, 1\} = \min\{1 - 0.6 + 0.8,\ 1 - 1.0 + 0.8,\ 1 - 0.4 + 0.7,\ 1 - 0.4 + 0.6,\ 1\} = 0.8$.

Hence, we have

$\underline{x} = (0.1, 0.3, 0.2, 0.2)$ and $\bar{x} = (0.9, 0.9, 0.9, 0.8)$.

Step 2. Check the case of the empty solution set by using Lemma 3. If the solution set of the problem is empty, stop. According to Lemma 3, the following inequality needs to be checked for each $j \in J = \{1, 2, 3, 4\}$:

$$\max_{i \in I}\left\{\tfrac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\right\} \le b_j \le \max_{i \in I}\{a^+_{ij},\ a^-_{ij}\}.$$

First, we check the value range for $b_1$ to yield

$$\max_{i \in I}\left\{\tfrac{1}{2}(a^+_{i1} + a^-_{i1} - 1)\right\} = \max\left\{\tfrac{1}{2}(0.9 + 0.9 - 1),\ \tfrac{1}{2}(0.8 + 0.7 - 1),\ \tfrac{1}{2}(0.9 + 0.8 - 1),\ \tfrac{1}{2}(0.6 + 0.9 - 1)\right\} = 0.4$$

and

$$\max_{i \in I}\{a^+_{i1}, a^-_{i1}\} = \max\{0.9, 0.9;\ 0.8, 0.7;\ 0.9, 0.8;\ 0.6, 0.9\} = 0.9,$$

such that $0.4 \le b_1 = 0.8 \le 0.9$. Next, we check the value ranges for $b_2$, $b_3$, and $b_4$, respectively, to get

$0.4 \le b_2 = 0.8 \le 1.0$, $0.35 \le b_3 = 0.7 \le 0.9$, and $0.25 \le b_4 = 0.6 \le 0.9$. Because the value of $b_j$ lies between $\max_{i \in I}\{\tfrac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\}$ and $\max_{i \in I}\{a^+_{ij}, a^-_{ij}\}$ for all $j \in J = \{1, 2, 3, 4\}$, go to the next step.

Step 3. Compute the binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$ for all $i \in I$ by using Definition 1. Generate the value matrix V by using (3) and produce index sets $J_i(M)$ and $J_i(\underline{M})$ for $i \in I$ and $I_j(M)$ and $I_j(\underline{M})$ for $j \in J$ by using (4). The binding sets are as follows:

$J(\bar{x}_1) = \{1, 3\}$, $J(\bar{x}_2) = \{2\}$, $J(\bar{x}_3) = \{1, 3\}$, and $J(\bar{x}_4) = \{2\}$;


and

$J(\underline{x}_1) = \{1\}$, $J(\underline{x}_2) = \{4\}$, $J(\underline{x}_3) = \{3, 4\}$, and $J(\underline{x}_4) = \{3\}$.

The value matrix V is obtained by using (3) as follows:

$$V = \begin{array}{c|cccc}
 & 1 & 2 & 3 & 4 \\ \hline
(\bar{x}_1) & 0.8 & \infty & 0.8 & \infty \\
(\bar{x}_2) & \infty & 0.6 & \infty & \infty \\
(\bar{x}_3) & 0.7 & \infty & 0.7 & \infty \\
(\bar{x}_4) & \infty & 0.6 & \infty & \infty \\
(\underline{x}_1) & 0.1 & \infty & \infty & \infty \\
(\underline{x}_2) & \infty & \infty & \infty & 0.3 \\
(\underline{x}_3) & \infty & \infty & 0.2 & 0.2 \\
(\underline{x}_4) & \infty & \infty & 0.2 & \infty
\end{array}$$

with the columns indexed by the equations.

For instance, we have $m_{11} = m_{13} = c_1(\bar{x}_1 - \underline{x}_1) = 1 \times (0.9 - 0.1) = 0.8$ because $J(\bar{x}_1) = \{1, 3\}$, and $\underline{m}_{11} = c_1 \underline{x}_1 = 1 \times 0.1 = 0.1$ because $J(\underline{x}_1) = \{1\}$. The index sets for matrix V are obtained by using (4) as follows:

$J_1(M) = \{1, 3\}$, $J_2(M) = \{2\}$, $J_3(M) = \{1, 3\}$, $J_4(M) = \{2\}$; $J_1(\underline{M}) = \{1\}$, $J_2(\underline{M}) = \{4\}$, $J_3(\underline{M}) = \{3, 4\}$, $J_4(\underline{M}) = \{3\}$; and

$I_1(M) = \{1, 3\}$, $I_2(M) = \{2, 4\}$, $I_3(M) = \{1, 3\}$, $I_4(M) = \emptyset$; $I_1(\underline{M}) = \{1\}$, $I_2(\underline{M}) = \emptyset$, $I_3(\underline{M}) = \{3, 4\}$, $I_4(\underline{M}) = \{2, 3\}$.

Step 4. Follow the sequence of rules (Rules 1–6) as far as possible to determine the values of as many decision variables as possible. Delete the corresponding rows and/or columns in V. Rule 1 cannot be applied to the current value matrix V because $\bigcup_{i=1}^{4} J_i(\underline{M}) = \{1, 3, 4\} \neq J$. The index sets $I_j(M)$ and $I_j(\underline{M})$ do not yield a singleton for any $j \in J$; thus, Rule 2 also cannot be used to reduce this example. However, the index sets $I_1(M) = \{1, 3\}$ and $I_1(\underline{M}) = \{1\}$ give $I_1(M) \cap I_1(\underline{M}) = \{1\}$. This reveals that the first equation is satisfied with $x_1 = \bar{x}_1$ as well as with $x_1 = \underline{x}_1$. Hence, Column 1 of V can be deleted by Rule 3. In addition, Column 3 of V can be deleted using Rule 3 because $I_3(M) = \{1, 3\}$ and $I_3(\underline{M}) = \{3, 4\}$ give $I_3(M) \cap I_3(\underline{M}) = \{3\}$. After the deletion, the reduced matrix V becomes

$$V = \begin{array}{c|cc}
 & 2 & 4 \\ \hline
(\bar{x}_1) & \infty & \infty \\
(\bar{x}_2) & 0.6 & \infty \\
(\bar{x}_3) & \infty & \infty \\
(\bar{x}_4) & 0.6 & \infty \\
(\underline{x}_1) & \infty & \infty \\
(\underline{x}_2) & \infty & 0.3 \\
(\underline{x}_3) & \infty & 0.2 \\
(\underline{x}_4) & \infty & \infty
\end{array}$$

The reduced matrix V represents a situation with two equations in four bipolar variables. Computing the index sets for the current matrix V yields $J_3(M) = \emptyset$ and $J_3(\underline{M}) = \{4\}$. Let $x^* = (x^*_i)_{i \in I}$ be any optimal solution. Then $x^*_3 = \underline{x}_3 = 0.2$ can be assigned using Rule 4. Variable $\underline{x}_3$ is binding only in the fourth equation (Column 4 of V). Hence, Column 4 and the corresponding rows of $\bar{x}_3$ and $\underline{x}_3$ can be deleted from V. Conversely, $J_1(M) = J_1(\underline{M}) = \emptyset$, so $x^*_1 = \underline{x}_1 = 0.1$ can be assigned by using Rule 5. Hence, the corresponding rows of $\bar{x}_1$ and $\underline{x}_1$ can be deleted from V. After the deletion, the reduced matrix V becomes

$$V = \begin{array}{c|c}
 & 2 \\ \hline
(\bar{x}_2) & 0.6 \\
(\bar{x}_4) & 0.6 \\
(\underline{x}_2) & \infty \\
(\underline{x}_4) & \infty
\end{array}$$

For the current value matrix V, $J_2(\underline{M}) = J_4(\underline{M}) = \emptyset$ and $J_2(M) = J_4(M) = \{2\}$ with $c_2(\bar{x}_2 - \underline{x}_2) = c_4(\bar{x}_4 - \underline{x}_4) = 0.6$. This demonstrates that $\bar{x}_2$ and $\bar{x}_4$ are binding variables, either of which satisfies the remaining equation, by Rule 6. Applying Rule 6 to assign either $x^*_2 = \bar{x}_2 = 0.9$ or $x^*_4 = \bar{x}_4 = 0.8$ yields two optimal solutions with the same optimal value. After all decision variables have been set, go to Step 6.
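Because each component of an optimal solution is either its lower or its upper bound, the outcome of Rules 1–6 for Example 1 can be cross-checked by enumerating all $2^4$ bound combinations. This is an illustrative brute-force check of ours, not part of the proposed procedure; it assumes the Łukasiewicz t-norm $T_L(a, c) = \max(a + c - 1, 0)$:

```python
from itertools import product

A_pos = [[0.9, 0.8, 0.8, 0.5], [0.8, 0.9, 0.6, 0.6],
         [0.9, 0.7, 0.8, 0.4], [0.6, 1.0, 0.4, 0.4]]
A_neg = [[0.9, 0.7, 0.6, 0.6], [0.7, 0.9, 0.8, 0.9],
         [0.8, 0.7, 0.9, 0.8], [0.9, 0.8, 0.9, 0.4]]
b = [0.8, 0.8, 0.7, 0.6]
x_low, x_up = [0.1, 0.3, 0.2, 0.2], [0.9, 0.9, 0.9, 0.8]

def luk(a, c):
    return max(a + c - 1.0, 0.0)

def solves(x, tol=1e-9):
    """True when x o A+ v x~ o A- = b holds component-wise."""
    return all(abs(max(max(luk(x[i], A_pos[i][j]),
                           luk(1.0 - x[i], A_neg[i][j])) for i in range(4))
                   - b[j]) <= tol for j in range(4))

# Enumerate every combination of lower/upper bounds.
feasible = [x for x in product(*zip(x_low, x_up)) if solves(x)]
z_min = min(round(sum(x), 9) for x in feasible)
optima = sorted(x for x in feasible if round(sum(x), 9) == z_min)
print(z_min)   # 1.4
print(optima)  # [(0.1, 0.3, 0.2, 0.8), (0.1, 0.9, 0.2, 0.2)]
```

Both optimizers and the optimal value $Z = 1.4$ match the solutions reported in Step 6.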


Step 6. Generate optimal solutions for the problem and determine the optimal value. The proposed procedure obtains two optimal solutions for Example 1 as follows:

$x^{*1} = (x^*_1, x^*_2, x^*_3, x^*_4) = (\underline{x}_1, \bar{x}_2, \underline{x}_3, \underline{x}_4) = (0.1, 0.9, 0.2, 0.2)$, $Z(x^{*1}) = 1.4$;
$x^{*2} = (x^*_1, x^*_2, x^*_3, x^*_4) = (\underline{x}_1, \underline{x}_2, \underline{x}_3, \bar{x}_4) = (0.1, 0.3, 0.2, 0.8)$, $Z(x^{*2}) = 1.4$.

According to the method proposed by Li and Liu [16], the following 0–1 integer linear programming problem is necessary for solving this example:

Minimize $Z_u = 0.8 + 0.8u_1 + 0.6u_2 + 0.7u_3 + 0.6u_4$

subject to

$$\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & -1 \\ 0 & -1 & -1 & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} \ge
\begin{bmatrix} 0 \\ 1 \\ -1 \\ -1 \end{bmatrix},$$

$u_1, u_2, u_3, u_4 \in \{0, 1\}$.

Evidently, the rules proposed in this study reduce the problem and quickly obtain the optimal solutions.

Example 2. Consider the following optimization problem subjected to a system of bipolar fuzzy relational equations with max-Łukasiewicz composition:

Minimize $Z(x) = x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 + x_9$
subject to $x \circ A^+ \vee \tilde{x} \circ A^- = b$, $0 \le x_i \le 1$, $i \in I = \{1, 2, \ldots, 9\}$, where $x = (x_1, x_2, \ldots, x_9)$, $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_9)$, $\tilde{x}_i = 1 - x_i$, $i \in I$,



$$A^+ = \begin{bmatrix}
0.18 & 0.23 & 0.75 & 0.43 & 0.70 & 0.65 & 0.42 & 0.82 & 0.35 & 0.45 \\
0.15 & 0.56 & 0.90 & 0.56 & 0.72 & 0.82 & 0.43 & 0.61 & 0.68 & 0.46 \\
0.12 & 0.71 & 0.76 & 0.72 & 0.45 & 0.72 & 0.58 & 0.67 & 0.43 & 0.48 \\
0.25 & 0.62 & 0.32 & 0.57 & 0.54 & 0.61 & 0.70 & 0.65 & 0.76 & 0.36 \\
0.22 & 0.80 & 0.95 & 0.81 & 0.70 & 0.53 & 0.67 & 0.80 & 0.64 & 0.70 \\
0.35 & 0.93 & 0.61 & 0.19 & 0.90 & 0.78 & 0.80 & 0.63 & 0.55 & 0.45 \\
0.21 & 0.45 & 0.49 & 0.80 & 0.34 & 0.82 & 0.33 & 0.54 & 0.45 & 0.52 \\
0.12 & 0.43 & 0.64 & 0.38 & 0.46 & 0.62 & 0.45 & 0.76 & 0.25 & 0.32 \\
0.31 & 0.38 & 0.68 & 0.47 & 0.63 & 0.72 & 0.26 & 0.42 & 0.80 & 0.77
\end{bmatrix},$$

$$A^- = \begin{bmatrix}
0.23 & 0.13 & 0.85 & 0.28 & 0.80 & 0.57 & 0.54 & 0.74 & 0.41 & 0.58 \\
0.20 & 0.46 & 0.98 & 0.41 & 0.80 & 1.00 & 0.55 & 0.53 & 0.74 & 0.59 \\
0.17 & 0.61 & 0.86 & 0.57 & 0.55 & 0.64 & 0.70 & 0.59 & 0.49 & 0.61 \\
0.30 & 0.52 & 0.42 & 0.96 & 0.64 & 0.53 & 0.82 & 0.57 & 0.82 & 0.49 \\
0.27 & 0.70 & 1.00 & 0.66 & 0.80 & 0.45 & 0.79 & 0.72 & 0.70 & 0.83 \\
0.40 & 0.83 & 0.71 & 0.04 & 1.00 & 0.70 & 0.92 & 0.55 & 0.61 & 0.58 \\
0.26 & 0.35 & 0.59 & 0.65 & 0.44 & 0.74 & 0.45 & 0.46 & 0.51 & 0.65 \\
0.17 & 0.33 & 0.74 & 0.23 & 0.56 & 0.54 & 0.57 & 0.68 & 0.31 & 0.45 \\
0.36 & 0.28 & 0.78 & 0.32 & 0.73 & 0.64 & 0.38 & 0.34 & 0.86 & 0.90
\end{bmatrix},$$

and $b = (0.00, 0.55, 0.70, 0.56, 0.52, 0.72, 0.42, 0.64, 0.48, 0.45)$.

Step 1. Compute the lower bound $\underline{x}_i = \max_{j \in J}\{a^-_{ij} - b_j, 0\}$ and the upper bound $\bar{x}_i = \min_{j \in J}\{1 - a^+_{ij} + b_j, 1\}$ of variable $x_i$ for all $i \in I$. They are

$\underline{x} = (0.28, 0.28, 0.28, 0.40, 0.38, 0.50, 0.26, 0.17, 0.45)$ and $\bar{x} = (0.82, 0.80, 0.84, 0.72, 0.75, 0.62, 0.76, 0.88, 0.68)$.

Step 2. Check the case of the empty solution set by using Lemma 3. If the solution set of the problem is empty, then stop. The value of $b_j$ lies between $\max_{i \in I}\{\tfrac{1}{2}(a^+_{ij} + a^-_{ij} - 1)\}$ and $\max_{i \in I}\{a^+_{ij}, a^-_{ij}\}$ for all $j \in J = \{1, 2, \ldots, 10\}$; go to the next step.

Step 3. Compute the binding sets $J(\bar{x}_i)$ and $J(\underline{x}_i)$ for all $i \in I$ by using Definition 1. Generate the value matrix V by using (3) and produce index sets $J_i(M)$, $J_i(\underline{M})$ for $i \in I$ and $I_j(M)$, $I_j(\underline{M})$ for $j \in J$ by using (4).
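As in Example 1, the bounds stated in Step 1 can be confirmed numerically. The short sketch below (our own check, not part of the procedure) recomputes $\underline{x}$ and $\bar{x}$ from $A^+$, $A^-$, and $b$:

```python
A_pos = [
    [0.18, 0.23, 0.75, 0.43, 0.70, 0.65, 0.42, 0.82, 0.35, 0.45],
    [0.15, 0.56, 0.90, 0.56, 0.72, 0.82, 0.43, 0.61, 0.68, 0.46],
    [0.12, 0.71, 0.76, 0.72, 0.45, 0.72, 0.58, 0.67, 0.43, 0.48],
    [0.25, 0.62, 0.32, 0.57, 0.54, 0.61, 0.70, 0.65, 0.76, 0.36],
    [0.22, 0.80, 0.95, 0.81, 0.70, 0.53, 0.67, 0.80, 0.64, 0.70],
    [0.35, 0.93, 0.61, 0.19, 0.90, 0.78, 0.80, 0.63, 0.55, 0.45],
    [0.21, 0.45, 0.49, 0.80, 0.34, 0.82, 0.33, 0.54, 0.45, 0.52],
    [0.12, 0.43, 0.64, 0.38, 0.46, 0.62, 0.45, 0.76, 0.25, 0.32],
    [0.31, 0.38, 0.68, 0.47, 0.63, 0.72, 0.26, 0.42, 0.80, 0.77],
]
A_neg = [
    [0.23, 0.13, 0.85, 0.28, 0.80, 0.57, 0.54, 0.74, 0.41, 0.58],
    [0.20, 0.46, 0.98, 0.41, 0.80, 1.00, 0.55, 0.53, 0.74, 0.59],
    [0.17, 0.61, 0.86, 0.57, 0.55, 0.64, 0.70, 0.59, 0.49, 0.61],
    [0.30, 0.52, 0.42, 0.96, 0.64, 0.53, 0.82, 0.57, 0.82, 0.49],
    [0.27, 0.70, 1.00, 0.66, 0.80, 0.45, 0.79, 0.72, 0.70, 0.83],
    [0.40, 0.83, 0.71, 0.04, 1.00, 0.70, 0.92, 0.55, 0.61, 0.58],
    [0.26, 0.35, 0.59, 0.65, 0.44, 0.74, 0.45, 0.46, 0.51, 0.65],
    [0.17, 0.33, 0.74, 0.23, 0.56, 0.54, 0.57, 0.68, 0.31, 0.45],
    [0.36, 0.28, 0.78, 0.32, 0.73, 0.64, 0.38, 0.34, 0.86, 0.90],
]
b = [0.00, 0.55, 0.70, 0.56, 0.52, 0.72, 0.42, 0.64, 0.48, 0.45]

J = range(10)
x_low = [round(max(max(row[j] - b[j] for j in J), 0.0), 2) for row in A_neg]
x_up = [round(min(min(1.0 - row[j] + b[j] for j in J), 1.0), 2) for row in A_pos]
print(x_low)  # [0.28, 0.28, 0.28, 0.4, 0.38, 0.5, 0.26, 0.17, 0.45]
print(x_up)   # [0.82, 0.8, 0.84, 0.72, 0.75, 0.62, 0.76, 0.88, 0.68]
```

The computed vectors agree with the bounds stated in Step 1.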


The binding sets are as follows:

$J(\bar{x}_1) = \{1, 5, 8\}$, $J(\bar{x}_2) = \{3, 5, 9\}$, $J(\bar{x}_3) = \{2, 4, 7\}$, $J(\bar{x}_4) = \{7, 9\}$, $J(\bar{x}_5) = \{2, 3, 4, 7, 10\}$, $J(\bar{x}_6) = \{2, 5, 7\}$, $J(\bar{x}_7) = \{4\}$, $J(\bar{x}_8) = \{1, 8\}$, $J(\bar{x}_9) = \{9, 10\}$; and

$J(\underline{x}_1) = \{5\}$, $J(\underline{x}_2) = \{3, 5, 6\}$, $J(\underline{x}_3) = \{7\}$, $J(\underline{x}_4) = \{4, 7\}$, $J(\underline{x}_5) = \{10\}$, $J(\underline{x}_6) = \{7\}$, $J(\underline{x}_7) = \{1\}$, $J(\underline{x}_8) = \{1\}$, $J(\underline{x}_9) = \{10\}$.

Then the value matrix V is obtained by using (3) as follows:

$$V = \begin{array}{c|cccccccccc}
 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
(\bar{x}_1) & 0.54 & \infty & \infty & \infty & 0.54 & \infty & \infty & 0.54 & \infty & \infty \\
(\bar{x}_2) & \infty & \infty & 0.52 & \infty & 0.52 & \infty & \infty & \infty & 0.52 & \infty \\
(\bar{x}_3) & \infty & 0.56 & \infty & 0.56 & \infty & \infty & 0.56 & \infty & \infty & \infty \\
(\bar{x}_4) & \infty & \infty & \infty & \infty & \infty & \infty & 0.32 & \infty & 0.32 & \infty \\
(\bar{x}_5) & \infty & 0.37 & 0.37 & 0.37 & \infty & \infty & 0.37 & \infty & \infty & 0.37 \\
(\bar{x}_6) & \infty & 0.12 & \infty & \infty & 0.12 & \infty & 0.12 & \infty & \infty & \infty \\
(\bar{x}_7) & \infty & \infty & \infty & 0.50 & \infty & \infty & \infty & \infty & \infty & \infty \\
(\bar{x}_8) & 0.71 & \infty & \infty & \infty & \infty & \infty & \infty & 0.71 & \infty & \infty \\
(\bar{x}_9) & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & 0.23 & 0.23 \\
(\underline{x}_1) & \infty & \infty & \infty & \infty & 0.28 & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_2) & \infty & \infty & 0.28 & \infty & 0.28 & 0.28 & \infty & \infty & \infty & \infty \\
(\underline{x}_3) & \infty & \infty & \infty & \infty & \infty & \infty & 0.28 & \infty & \infty & \infty \\
(\underline{x}_4) & \infty & \infty & \infty & 0.40 & \infty & \infty & 0.40 & \infty & \infty & \infty \\
(\underline{x}_5) & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & 0.38 \\
(\underline{x}_6) & \infty & \infty & \infty & \infty & \infty & \infty & 0.50 & \infty & \infty & \infty \\
(\underline{x}_7) & 0.26 & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_8) & 0.17 & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_9) & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & \infty & 0.45
\end{array}$$

with the columns indexed by the equations $j = 1, \ldots, 10$.

For instance, we have $m_{11} = m_{15} = m_{18} = c_1(\bar{x}_1 - \underline{x}_1) = 1 \times (0.82 - 0.28) = 0.54$ because $J(\bar{x}_1) = \{1, 5, 8\}$, and $\underline{m}_{15} = c_1 \underline{x}_1 = 1 \times 0.28 = 0.28$ because $J(\underline{x}_1) = \{5\}$. The index sets for matrix V are obtained by using (4) as follows:

$J_1(M) = \{1, 5, 8\}$, $J_2(M) = \{3, 5, 9\}$, $J_3(M) = \{2, 4, 7\}$, $J_4(M) = \{7, 9\}$, $J_5(M) = \{2, 3, 4, 7, 10\}$, $J_6(M) = \{2, 5, 7\}$, $J_7(M) = \{4\}$, $J_8(M) = \{1, 8\}$, $J_9(M) = \{9, 10\}$; $J_1(\underline{M}) = \{5\}$, $J_2(\underline{M}) = \{3, 5, 6\}$, $J_3(\underline{M}) = \{7\}$, $J_4(\underline{M}) = \{4, 7\}$, $J_5(\underline{M}) = \{10\}$, $J_6(\underline{M}) = \{7\}$, $J_7(\underline{M}) = \{1\}$, $J_8(\underline{M}) = \{1\}$, $J_9(\underline{M}) = \{10\}$;

and

$I_1(M) = \{1, 8\}$, $I_2(M) = \{3, 5, 6\}$, $I_3(M) = \{2, 5\}$, $I_4(M) = \{3, 5, 7\}$, $I_5(M) = \{1, 2, 6\}$, $I_6(M) = \emptyset$, $I_7(M) = \{3, 4, 5, 6\}$, $I_8(M) = \{1, 8\}$, $I_9(M) = \{2, 4, 9\}$, $I_{10}(M) = \{5, 9\}$; $I_1(\underline{M}) = \{7, 8\}$, $I_2(\underline{M}) = \emptyset$, $I_3(\underline{M}) = \{2\}$, $I_4(\underline{M}) = \{4\}$, $I_5(\underline{M}) = \{1, 2\}$, $I_6(\underline{M}) = \{2\}$, $I_7(\underline{M}) = \{3, 4, 6\}$, $I_8(\underline{M}) = \emptyset$, $I_9(\underline{M}) = \emptyset$, $I_{10}(\underline{M}) = \{5, 9\}$.

Step 4. Follow the sequence of rules (Rules 1–6) as far as possible to determine the values of as many decision variables as possible. Delete the corresponding rows and/or columns in V. Because $\bigcup_{i=1}^{9} J_i(\underline{M}) = \{1, 3, 4, 5, 6, 7, 10\} \neq J$, Rule 1 cannot be applied to the current value matrix V. However, the index sets $I_6(M) = \emptyset$ and $I_6(\underline{M}) = \{2\}$ show that a singleton exists in V. Let $x^* = (x^*_i)_{i \in I}$ be any optimal solution. Then $x^*_2 = \underline{x}_2 = 0.28$ can be assigned using Rule 2-1. The $j$th equation for every $j \in J_2(\underline{M}) = \{3, 5, 6\}$ is satisfied by $x^*_2 = \underline{x}_2$. Hence, Columns 3, 5, and 6 and the corresponding rows of $x_2$ in $M$ and $\underline{M}$ can be deleted from V. After deletion, the reduced matrix V becomes

$$V = \begin{array}{c|ccccccc}
 & 1 & 2 & 4 & 7 & 8 & 9 & 10 \\ \hline
(\bar{x}_1) & 0.54 & \infty & \infty & \infty & 0.54 & \infty & \infty \\
(\bar{x}_3) & \infty & 0.56 & 0.56 & 0.56 & \infty & \infty & \infty \\
(\bar{x}_4) & \infty & \infty & \infty & 0.32 & \infty & 0.32 & \infty \\
(\bar{x}_5) & \infty & 0.37 & 0.37 & 0.37 & \infty & \infty & 0.37 \\
(\bar{x}_6) & \infty & 0.12 & \infty & 0.12 & \infty & \infty & \infty \\
(\bar{x}_7) & \infty & \infty & 0.50 & \infty & \infty & \infty & \infty \\
(\bar{x}_8) & 0.71 & \infty & \infty & \infty & 0.71 & \infty & \infty \\
(\bar{x}_9) & \infty & \infty & \infty & \infty & \infty & 0.23 & 0.23 \\
(\underline{x}_1) & \infty & \infty & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_3) & \infty & \infty & \infty & 0.28 & \infty & \infty & \infty \\
(\underline{x}_4) & \infty & \infty & 0.40 & 0.40 & \infty & \infty & \infty \\
(\underline{x}_5) & \infty & \infty & \infty & \infty & \infty & \infty & 0.38 \\
(\underline{x}_6) & \infty & \infty & \infty & 0.50 & \infty & \infty & \infty \\
(\underline{x}_7) & 0.26 & \infty & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_8) & 0.17 & \infty & \infty & \infty & \infty & \infty & \infty \\
(\underline{x}_9) & \infty & \infty & \infty & \infty & \infty & \infty & 0.45
\end{array}$$



For the current matrix V, the index sets yield $I_1(M) \cap I_1(\underline{M}) = \{8\}$, $I_7(M) \cap I_7(\underline{M}) = \{3, 4, 6\}$, and $I_{10}(M) \cap I_{10}(\underline{M}) = \{5, 9\}$, so Columns 1, 7, and 10 of V can be deleted using Rule 3. After deletion, the reduced matrix V becomes

$$V = \begin{array}{c|cccc}
 & 2 & 4 & 8 & 9 \\ \hline
(\bar{x}_1) & \infty & \infty & 0.54 & \infty \\
(\bar{x}_3) & 0.56 & 0.56 & \infty & \infty \\
(\bar{x}_4) & \infty & \infty & \infty & 0.32 \\
(\bar{x}_5) & 0.37 & 0.37 & \infty & \infty \\
(\bar{x}_6) & 0.12 & \infty & \infty & \infty \\
(\bar{x}_7) & \infty & 0.50 & \infty & \infty \\
(\bar{x}_8) & \infty & \infty & 0.71 & \infty \\
(\bar{x}_9) & \infty & \infty & \infty & 0.23 \\
(\underline{x}_1) & \infty & \infty & \infty & \infty \\
(\underline{x}_3) & \infty & \infty & \infty & \infty \\
(\underline{x}_4) & \infty & 0.40 & \infty & \infty \\
(\underline{x}_5) & \infty & \infty & \infty & \infty \\
(\underline{x}_6) & \infty & \infty & \infty & \infty \\
(\underline{x}_7) & \infty & \infty & \infty & \infty \\
(\underline{x}_8) & \infty & \infty & \infty & \infty \\
(\underline{x}_9) & \infty & \infty & \infty & \infty
\end{array}$$



For the current matrix V, Rule 6 can be applied to assign $x^*_3 = \underline{x}_3 = 0.28$, $x^*_4 = \underline{x}_4 = 0.40$, $x^*_7 = \underline{x}_7 = 0.26$, and $x^*_8 = \underline{x}_8 = 0.17$, because the following conditions hold:

$J_5(\underline{M}) = \emptyset$ and $J_3(M) = J_5(M) = \{2, 4\}$ with $c_5(\bar{x}_5 - \underline{x}_5) = 0.37 < c_3(\bar{x}_3 - \underline{x}_3) = 0.56$;
$J_7(M) = \{4\} \subseteq J_5(M) = \{2, 4\}$ with $c_5(\bar{x}_5 - \underline{x}_5) = 0.37 < c_7(\bar{x}_7 - \underline{x}_7) = 0.50$;
$J_9(\underline{M}) = \emptyset$ and $J_4(M) = J_9(M) = \{9\}$ with $c_9(\bar{x}_9 - \underline{x}_9) = 0.23 < c_4(\bar{x}_4 - \underline{x}_4) = 0.32$;
$J_1(\underline{M}) = \emptyset$ and $J_8(M) = J_1(M) = \{8\}$ with $c_1(\bar{x}_1 - \underline{x}_1) = 0.54 < c_8(\bar{x}_8 - \underline{x}_8) = 0.71$.

Furthermore, Column 4 of matrix V can be deleted because $x^*_4 = \underline{x}_4$ is binding in the fourth equation, that is, $J_4(\underline{M}) = \{4\}$. In addition, the corresponding rows of $x_3$, $x_4$, $x_7$, and $x_8$ in $M$ and $\underline{M}$ can be deleted. After deletion, the reduced matrix V becomes

$$V = \begin{array}{c|ccc}
 & 2 & 8 & 9 \\ \hline
(\bar{x}_1) & \infty & 0.54 & \infty \\
(\bar{x}_5) & 0.37 & \infty & \infty \\
(\bar{x}_6) & 0.12 & \infty & \infty \\
(\bar{x}_9) & \infty & \infty & 0.23 \\
(\underline{x}_1) & \infty & \infty & \infty \\
(\underline{x}_5) & \infty & \infty & \infty \\
(\underline{x}_6) & \infty & \infty & \infty \\
(\underline{x}_9) & \infty & \infty & \infty
\end{array}$$


Computing the index sets for the current matrix V yields the singletons $I_8(M) = \{1\}$, $I_8(\underline{M}) = \emptyset$ and $I_9(M) = \{9\}$, $I_9(\underline{M}) = \emptyset$. Therefore, $x^*_1 = \bar{x}_1 = 0.82$ and $x^*_9 = \bar{x}_9 = 0.68$ can be assigned using Rule 2-2. Columns 8 and 9 of V can then be deleted, as can the corresponding rows of $x_1$ and $x_9$ in $M$ and $\underline{M}$. Moreover, Rule 6 can be applied again to assign $x^*_5 = \underline{x}_5 = 0.38$, because the following condition holds: $J_6(\underline{M}) = \emptyset$ and $J_5(M) = J_6(M) = \{2\}$ with $c_6(\bar{x}_6 - \underline{x}_6) = 0.12 < c_5(\bar{x}_5 - \underline{x}_5) = 0.37$. Hence, the corresponding row of $x_5$ in $M$ and $\underline{M}$ can be deleted. After deletion, the reduced matrix V becomes

$$V = \begin{array}{c|c}
 & 2 \\ \hline
(\bar{x}_6) & 0.12 \\
(\underline{x}_6) & \infty
\end{array}$$

For the current matrix V, Rule 2-2 can be applied to assign $x^*_6 = \bar{x}_6 = 0.62$, because the singletons $I_2(M) = \{6\}$ and $I_2(\underline{M}) = \emptyset$ exist in V. After all decision variables have been set, go to Step 6.
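The assignments produced above by Rules 2, 3, and 6 can be verified against the original system before generating the solution in Step 6. The sketch below is our own check, assuming the Łukasiewicz t-norm $T_L(a, c) = \max(a + c - 1, 0)$; it confirms that the assembled vector satisfies all ten equations:

```python
A_pos = [
    [0.18, 0.23, 0.75, 0.43, 0.70, 0.65, 0.42, 0.82, 0.35, 0.45],
    [0.15, 0.56, 0.90, 0.56, 0.72, 0.82, 0.43, 0.61, 0.68, 0.46],
    [0.12, 0.71, 0.76, 0.72, 0.45, 0.72, 0.58, 0.67, 0.43, 0.48],
    [0.25, 0.62, 0.32, 0.57, 0.54, 0.61, 0.70, 0.65, 0.76, 0.36],
    [0.22, 0.80, 0.95, 0.81, 0.70, 0.53, 0.67, 0.80, 0.64, 0.70],
    [0.35, 0.93, 0.61, 0.19, 0.90, 0.78, 0.80, 0.63, 0.55, 0.45],
    [0.21, 0.45, 0.49, 0.80, 0.34, 0.82, 0.33, 0.54, 0.45, 0.52],
    [0.12, 0.43, 0.64, 0.38, 0.46, 0.62, 0.45, 0.76, 0.25, 0.32],
    [0.31, 0.38, 0.68, 0.47, 0.63, 0.72, 0.26, 0.42, 0.80, 0.77],
]
A_neg = [
    [0.23, 0.13, 0.85, 0.28, 0.80, 0.57, 0.54, 0.74, 0.41, 0.58],
    [0.20, 0.46, 0.98, 0.41, 0.80, 1.00, 0.55, 0.53, 0.74, 0.59],
    [0.17, 0.61, 0.86, 0.57, 0.55, 0.64, 0.70, 0.59, 0.49, 0.61],
    [0.30, 0.52, 0.42, 0.96, 0.64, 0.53, 0.82, 0.57, 0.82, 0.49],
    [0.27, 0.70, 1.00, 0.66, 0.80, 0.45, 0.79, 0.72, 0.70, 0.83],
    [0.40, 0.83, 0.71, 0.04, 1.00, 0.70, 0.92, 0.55, 0.61, 0.58],
    [0.26, 0.35, 0.59, 0.65, 0.44, 0.74, 0.45, 0.46, 0.51, 0.65],
    [0.17, 0.33, 0.74, 0.23, 0.56, 0.54, 0.57, 0.68, 0.31, 0.45],
    [0.36, 0.28, 0.78, 0.32, 0.73, 0.64, 0.38, 0.34, 0.86, 0.90],
]
b = [0.00, 0.55, 0.70, 0.56, 0.52, 0.72, 0.42, 0.64, 0.48, 0.45]
x = [0.82, 0.28, 0.28, 0.40, 0.38, 0.62, 0.26, 0.17, 0.68]  # values fixed by the rules

def luk(a, c):
    return max(a + c - 1.0, 0.0)

lhs = [max(max(luk(x[i], A_pos[i][j]), luk(1.0 - x[i], A_neg[i][j]))
           for i in range(9)) for j in range(10)]
assert all(abs(lhs[j] - b[j]) <= 1e-9 for j in range(10))  # all ten equations hold
print(round(sum(x), 2))  # objective value 3.89
```

All ten residuals vanish (up to floating-point noise), and the objective value 3.89 matches the optimal value reported in Step 6.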

Step 6. Generate optimal solutions for the problem and determine the optimal value. The proposed procedure obtains an optimal solution for Example 2 as follows:

$x^* = (0.82, 0.28, 0.28, 0.40, 0.38, 0.62, 0.26, 0.17, 0.68)$, and the optimal value is $Z(x^*) = 3.89$.

According to the method proposed by Li and Liu [16], the following 0–1 integer linear programming problem is necessary for solving Example 2:

Minimize $Z_u = 3.0 + 0.54u_1 + 0.52u_2 + 0.56u_3 + 0.32u_4 + 0.37u_5 + 0.12u_6 + 0.50u_7 + 0.71u_8 + 0.23u_9$

subject to

$$\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \\ u_6 \\ u_7 \\ u_8 \\ u_9 \end{bmatrix} \ge
\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \\ -1 \\ 0 \\ -2 \\ 1 \\ 1 \\ -1 \end{bmatrix},$$

$u_1, u_2, \ldots, u_9 \in \{0, 1\}$.

5. Conclusions

This study considered an optimization problem that involves the minimization of a linear objective function subjected to a system of bipolar fuzzy relational equations with max-Łukasiewicz composition. A necessary condition proposed for determining the optimal solution of such an optimization problem shows that each of its components is either the corresponding component's lower or upper bound value. Based on this necessary condition, we proposed some rules to reduce the problem size so that the optimal solution can be computed efficiently. Furthermore, Lin et al. [19] demonstrated that all systems of max-continuous u-norm fuzzy relational equations, which include the max-product, max-Łukasiewicz, max-continuous Archimedean t-norm, and max-arithmetic mean compositions, are essentially equivalent. This is also the case for bipolar fuzzy relational equations. Therefore, the solution procedure proposed in this study for the optimization problem subjected to a system of bipolar fuzzy relational equations with the max-Łukasiewicz composition can be extended to deal with max-continuous u-norm compositions.

Acknowledgments

This work was supported by the Ministry of Science and Technology under grants no. MOST 103-2410-H-238-004 and MOST 103-2115-M-238-001.

References

[1] C.-W. Chang, B.-S. Shieh, Linear optimization problem constrained by fuzzy max-min relation equations, Inf. Sci. 234 (2013) 71–79.
[2] L. Chen, P.P. Wang, Fuzzy relation equations (I): the general and specialized solving algorithms, Soft Comput. 6 (6) (2002) 428–435.
[3] L. Chen, P.P. Wang, Fuzzy relation equations (II): the branch-point-solutions and the categorized minimal solutions, Soft Comput. 11 (1) (2007) 33–40.
[4] A. Di Nola, S. Sessa, W. Pedrycz, E. Sanchez, Fuzzy Relational Equations and Their Applications in Knowledge Engineering, Kluwer Academic Press, Dordrecht, 1989.
[5] S.-C. Fang, G. Li, Solving fuzzy relation equations with a linear objective function, Fuzzy Sets Syst. 103 (1999) 107–113.


[6] S. Feng, Y. Ma, J. Li, A kind of nonlinear and non-convex optimization problems under mixed fuzzy relational equations constraints with max-min and max-average composition, in: Proceedings of the Eighth International Conference on Computational Intelligence and Security, 2012, pp. 154–158, doi:10.1109/CIS.2012.42.
[7] S. Freson, B. De Baets, H. De Meyer, Linear optimization with bipolar max-min constraints, Inf. Sci. 234 (2013) 3–15.
[8] A. Ghodousian, E. Khorram, Fuzzy linear optimization in the presence of the fuzzy relation inequality constraints with max-min composition, Inf. Sci. 178 (2008) 501–519.
[9] A. Ghodousian, E. Khorram, Solving a linear programming problem with the convex combination of the max-min and the max-average fuzzy relation equations, Appl. Math. Comput. 180 (2006) 411–418.
[10] S.-M. Guu, Y.-K. Wu, Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint, Fuzzy Sets Syst. 161 (2010) 285–297.
[11] E. Khorram, A. Ghodousian, Linear objective function optimization with fuzzy relation equation constraints regarding max-av composition, Appl. Math. Comput. 173 (2006) 872–886.
[12] E. Khorram, A. Ghodousian, A.A. Molai, Solving linear optimization problems with max-star composition equation constraints, Appl. Math. Comput. 178 (2006) 654–661.
[13] P. Li, S.-C. Fang, A survey on fuzzy relational equations, part I: classification and solvability, Fuzzy Optim. Decis. Mak. 8 (2009) 179–229.
[14] P. Li, S.-C. Fang, On the resolution and optimization of a system of fuzzy relational equations with sup-t composition, Fuzzy Optim. Decis. Mak. 7 (2008) 169–214.
[15] P. Li, Q. Jin, Fuzzy relational equations with min-biimplication composition, Fuzzy Optim. Decis. Mak. 11 (2012) 227–240.
[16] P. Li, Y. Liu, Linear optimization with bipolar fuzzy relational equation constraints using the Łukasiewicz triangular norm, Soft Comput. 18 (2014) 1399–1404.
[17] J.-X. Li, S.-J. Yang, Fuzzy relation inequalities about the data transmission mechanism in BitTorrent-like peer-to-peer file sharing systems, in: Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2012), pp. 452–456.
[18] J.-L. Lin, On the relation between fuzzy max-Archimedean t-norm relational equations and the covering problem, Fuzzy Sets Syst. 160 (2009) 2328–2344.
[19] J.-L. Lin, Y.-K. Wu, S.-M. Guu, On fuzzy relational equations and the covering problem, Inf. Sci. 181 (2011) 2951–2963.
[20] A.V. Markovskii, On the relation between equations with max-product composition and the covering problem, Fuzzy Sets Syst. 153 (2005) 261–273.
[21] E. Sanchez, Resolution of composite fuzzy relation equations, Inf. Control 30 (1976) 38–48.
[22] B.-S. Shieh, Minimizing a linear objective function under a fuzzy max-t-norm relation equation constraint, Inf. Sci. 181 (2011) 832–841.
[23] B.-S. Shieh, Solution to the covering problem, Inf. Sci. 222 (2013) 3766–3774.
[24] Y.-K. Wu, S.-M. Guu, Minimizing a linear function under a fuzzy max-min relational equation constraint, Fuzzy Sets Syst. 150 (2005) 147–162.
[25] Y.-K. Wu, Optimization of fuzzy relational equations with max-av composition, Inf. Sci. 177 (2007) 4216–4229.
[26] S.-J. Yang, An algorithm for minimizing a linear objective function subject to the fuzzy relation inequalities with addition-min composition, Fuzzy Sets Syst. 255 (2014) 41–51.
[27] L.A. Zadeh, Toward a generalized theory of uncertainty (GTU)-an outline, Inf. Sci. 172 (2005) 1–40.