A new algorithm for resolution of the quadratic programming problem with fuzzy relation inequality constraints

Computers & Industrial Engineering 72 (2014) 306–314
Ali Abbasi Molai *
School of Mathematics and Computer Sciences, Damghan University, P.O. Box 36715-364, Damghan, Iran

Article history: Received 19 June 2013; Received in revised form 28 March 2014; Accepted 29 March 2014; Available online 13 April 2014

Keywords: Fuzzy relation inequality; Quadratic programming; Separable programming; Least square technique; Minimal solution; Max-product composition

Abstract: The minimization problem of a quadratic objective function subject to max-product fuzzy relation inequality constraints is studied in this paper. The objective function of this problem is not necessarily convex, so its Hessian matrix is not necessarily positive semi-definite, and the modified simplex method cannot, in general, be applied to solve it. We first study the structure of its feasible domain. We then use some properties of n × n real symmetric indefinite matrices, Cholesky's decomposition, and the least square technique to convert the problem to a separable programming problem. Furthermore, a closed-form relation is presented to solve it. Finally, an algorithm is proposed to solve the original problem. An application example in the economic area is given to illustrate the problem; further application examples arise in the areas of digital data service and reliability engineering. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Fuzzy Relation Equations (FRE), Fuzzy Relation Inequalities (FRI), and the problems associated with them have many applications in different areas such as fuzzy control (Czogala & Predrycz, 1981), fuzzy decision-making (Di Nola, Sessa, Pedrycz, & Sanchez, 1989; Nobuhara, Pedrycz, Sessa, & Hirota, 2006b; Pedrycz, 1985; Peeva & Kyosev, 2004), fuzzy symptom diagnosis (Vasantha Kandasamy & Smarandache, 2004), fuzzy medical diagnosis (Vasantha Kandasamy & Smarandache, 2004), image processing (Di Nola & Russo, 2007; Loia & Sessa, 2005; Nobuhara, Bede, & Hirota, 2006a), and so on. These problems have been studied by many researchers in both theoretical and applied areas since the resolution of FRE was proposed by Sanchez in 1976. An interesting and extensively investigated kind of such problems is the optimization of an objective function over a region whose feasible solution set is defined by FRE (Guu, Wu, & Lee, 2011; Khorram & Zarei, 2009; Li, Feng, & Mi, 2012; Lin, Wu, & Chang, 2009; Shieh, 2011; Thapar, Pandey, & Gaur, 2009) or FRI (Abbasi Molai, 2010; Freson, De Baets, & De Meyer, 2013; Gavalec & Zimmermann, 2013; Ghodousian & Khorram, 2012; Guo, Pang, Meng, & Xia, 2011) constraints. A comprehensive review of the works up to 2007 can be found in (Li & Fang, 2008). The optimization

* Tel.: +98 9126933771. E-mail address: [email protected]
http://dx.doi.org/10.1016/j.cie.2014.03.024
0360-8352/© 2014 Elsevier Ltd. All rights reserved.

problem of a linear objective function subject to FRE with the max–min composition was first studied by Fang and Li (1999). The problem was equivalently converted to a 0–1 integer programming problem and solved by the branch-and-bound approach. Wu, Guu, and Liu (2002) and Wu and Guu (2005) accelerated Fang and Li's approach by providing upper bounds for the branch-and-bound procedure. Then the optimization problem with the max-product composition was investigated by Loetamonphong and Fang (2001), and a similar idea was applied to solve it. Also, a necessary condition for its optimal solution in terms of the maximum solution of FRE was presented by Guu and Wu (2002). Other generalizations of these kinds of problems with FRE constraints can be found, for example, in (Chang & Shieh, 2013; Guu & Wu, 2010; Hassanzadeh, Khorram, Mahdavi, & Mahdavi-Amiri, 2011; Lu & Fang, 2001; Thapar, Pandey, & Gaur, 2012; Zhou & Ahat, 2011). The linear objective function optimization problem with the max–min FRI was considered by Zhang, Dong, and Ren (2003). Guo and Xia (2006) presented an approach to solve the problem based on a necessary condition of optimality. Moreover, another condition was provided to accelerate Guo and Xia's approach by Mashayekhi and Khorram (2009). Ghodousian and Khorram (2008) studied the problem in which a fuzzy inequality replaces the ordinary inequality in the constraints and suggested a method to solve it. However, the optimization of nonlinear objective functions with FRI has been developing very slowly. The initial research on this topic can be found in Lu and Fang (2001). Since


the resolution of these kinds of problems is, in the general case, difficult using traditional nonlinear optimization methods, many researchers have focused on fuzzy relation inequality programming with nonlinear objective functions in different forms, such as the latticized linear programming problem with max–min FRI constraints (Wang, Zhang, Sanchez, & Lee, 1991), fuzzy relational geometric programming with max–min and max-product FRE (Singh, Pandey, & Thapar, 2012; Wu, 2006; Yang & Cao, 2005a, 2005b, 2007; Zhou & Ahat, 2011), the linear fractional programming problem with max-Archimedean t-norm FRE (Wu, Guu, & Liu, 2007, 2008), the monomial geometric programming problem with max-product FRI (Shivanian & Khorram, 2009), the quadratic programming problem with max-product FRI (Abbasi Molai, 2012), and so on. With regard to the importance of quadratic programming and the fuzzy relation inequality in both theory and application, Abbasi Molai (2012) proposed a fuzzy relation quadratic programming problem with the max-product composition. He showed that the minimal solutions and the maximum solution of its feasible domain cannot guarantee the resolution of the problem in the general case. In that paper, some sufficient conditions were presented to simplify the problem. However, the simplified problem has been solved only in the special case where the objective function is convex, or equivalently its Hessian matrix, i.e., the matrix Q, is positive semi-definite. Only in this case can the modified simplex method be applied to solve the simplified problem. Hence, we are motivated to propose a new algorithm to solve the problem when the objective function is not convex, i.e., the (reduced) matrix Q is not positive semi-definite. In this paper, some sufficient conditions are presented to simplify the resolution process of the fuzzy relation quadratic programming problem in this latter case.

Under the sufficient conditions, and applying Cholesky's factorization, a suitable variable change is proposed to convert the quadratic objective function to a separable quadratic objective function including only expressions of the forms x_i, x_i², and y_i². In this case, we use a linear approximation of the functions y_i² obtained by the least square technique. Then, applying the variable change between the two variable vectors x and y, we obtain a separable quadratic programming problem with respect to x. This problem is easily solved, and the optimal solutions of the original problem are found in x-space. The organization of this paper is as follows. Section 2 is formed by two subsections. The first subsection introduces the quadratic programming problem with FRI constraints, and its feasible solution set is investigated in the second subsection. Section 3 presents some sufficient conditions to convert the problem to a special form of quadratic programming including only expressions of the forms x_i, x_i², and y_i², by a suitable variable change and Cholesky's factorization. A linear approximation of the functions y_i² is then obtained by the least square technique. Moreover, with regard to the relation between the variable vectors x and y, we obtain a separable quadratic programming problem with respect to x. This problem is easily solved, and the approximate optimal value of the original problem is found in x-space. With attention to the above points, an algorithm is designed to solve the quadratic programming problem when the objective function is not convex, in the general case. In Section 4, the proposed algorithm is compared with the previous works. In Section 5, an application example is presented to illustrate the problem. Of course, we also point to other application examples that can be modeled as the quadratic programming problem with FRI constraints. Finally, conclusions are given in Section 6.

2. The quadratic programming problem with FRI constraints

This section is formed by two subsections. We firstly formulate the quadratic programming problem with FRI constraints. Then the structure of its feasible domain will be investigated.

2.1. The formulation of the problem

First of all, we present the definition of the max-product composition operator. To do this, we need the following definition.

Definition 1 (Zimmermann, 1991). Let X, Y ⊆ R be universal sets; then R̃ = {((x, y), μ_R̃(x, y)) | (x, y) ∈ X × Y} is called a fuzzy relation on X × Y.

We are now ready to present the definition of the max-product composition.

Definition 2 (Zimmermann, 1991). Let X, Y, Z ⊆ R be universal sets, and let R̃₁(x, y), (x, y) ∈ X × Y, and R̃₂(y, z), (y, z) ∈ Y × Z, be two fuzzy relations. The max-product composition R̃₁ ∘ R̃₂ is defined as follows:

(R̃₁ ∘ R̃₂)(x, z) = {((x, z), max_{y ∈ Y} {μ_R̃₁(x, y) · μ_R̃₂(y, z)}) | x ∈ X, z ∈ Z}.

We can now present the formulation structure of the quadratic programming problem with FRI constraints. Let A = [a_ij] and B = [b_ij] be m × n and l × n fuzzy matrices with 0 ≤ a_ij, b_ij ≤ 1, respectively. Also, assume that d¹ = [d¹_1, …, d¹_m]ᵀ ∈ [0, 1]^m and d² = [d²_1, …, d²_l]ᵀ ∈ [0, 1]^l. Moreover, the vector c = [c_1, …, c_n] is a vector of cost coefficients, and Q = [q_ij] is an n × n symmetric matrix. We formulate the quadratic programming problem as follows:

Min Z(x) = c·x + ½ xᵀQx,
s.t. A ∘ x ≥ d¹,   (1)
     B ∘ x ≤ d²,
     x ∈ [0, 1]ⁿ,

where x = [x_1, …, x_n]ᵀ is the vector of decision variables to be determined. The operator "∘" denotes the max-product composition operator (Zimmermann, 1991). Let M, N, and L be the index sets {1, …, m}, {1, …, n}, and {1, …, l}, respectively.

2.2. The feasible solution set of problem (1)

In this subsection, the structure of the feasible domain of problem (1) will briefly be discussed. The constraint part of model (1) is to find the set of solution vectors x ∈ [0, 1]ⁿ for the following FRI:

max_{j ∈ N} {a_ij · x_j} ≥ d¹_i,  ∀i ∈ M,   (2)
max_{j ∈ N} {b_ij · x_j} ≤ d²_i,  ∀i ∈ L.

Let x¹ = [x¹_j] and x² = [x²_j] be two n-dimensional vectors. Define x¹ ≤ x² if and only if x¹_j ≤ x²_j for all j ∈ N, where x¹, x² ∈ X(A, d¹, B, d²) = {x ∈ [0, 1]ⁿ | A ∘ x ≥ d¹ and B ∘ x ≤ d²}. A solution x̂ ∈ X(A, d¹, B, d²) is called the maximum solution if x ≤ x̂ for all x ∈ X(A, d¹, B, d²). On the other hand, x̌ ∈ X(A, d¹, B, d²) is a minimal solution if, for each x ∈ X(A, d¹, B, d²), x ≤ x̌ implies x = x̌. A solution x* ∈ X(A, d¹, B, d²) is an optimal solution for problem (1) if Z(x*) ≤ Z(x) for all x ∈ X(A, d¹, B, d²). In this paper, the notations x̂ and x̌ are specially applied to denote the maximum and the minimal solutions of the set X(A, d¹, B, d²), respectively. The solution set of an FRI problem is determined by a unique maximum solution and finitely many minimal solutions. We now briefly pay attention to finding the maximum solution and the minimal solutions of the FRI below. If the feasible domain of problem (1) is not empty, then the maximum solution can be computed by the following relation:


x̂_j = ⋀_{i=1}^{l} { d²_i / b_ij | b_ij > d²_i },  j = 1, …, n,   (3)

where the notation "⋀" denotes the minimization operator and ⋀∅ = 1 is defined. We now consider the computation method of the minimal solutions of the FRI. Suppose that x̂ is the maximum solution of the FRI B ∘ x ≤ d². The matrix V = [v_ij]_{m×n} is called an FRI characteristic matrix for the feasible domain of problem (1), where

v_ij = 1, if a_ij · x̂_j ≥ d¹_i;  v_ij = 0, otherwise.   (4)

Define a series of index sets by J_i = {j ∈ N | v_ij = 1}, i = 1, …, m.

Definition 3. A vector p = (p_1, …, p_m) is called an FRI path for the feasible domain of problem (1) if p ∈ ∏_{i=1}^{m} J_i or, equivalently, p_i ∈ J_i for each i ∈ {1, …, m}. Denote by P the set of all the FRI paths for the feasible domain of problem (1).

Then we can express the following theorems about its feasible domain.

Theorem 1. The feasible solution set of problem (1) is not empty if and only if every row of the FRI characteristic matrix V has at least one non-zero component.

Proof. The proof is similar to the proof of Theorem 2.2 in Guo and Xia (2006). □

If the feasible domain of problem (1) is not empty, then we can determine its feasible solution set by the following theorem.

Theorem 2. Suppose that the feasible solution set of problem (1) is not empty. Let p ∈ P be an FRI path of its feasible domain. Define x^p = (x^p_1, …, x^p_n) by the following relation:

x^p_j = ⋁_{i=1}^{m} { d¹_i / a_ij | p_i = j },  j = 1, …, n,   (5)

where the notation "⋁" denotes the maximization operator and ⋁∅ = 0 is defined. Then the solution set of its feasible domain is S = ⋃_{p ∈ P} {x | x^p ≤ x ≤ x̂}.

Proof. The proof is similar to the proof of Theorem 2.2 in Guo and Xia (2006). □

From Theorem 2, we obtain the following result.

Corollary 1. For any p ∈ P, x^p is a feasible solution of problem (1), called a quasi-minimal solution, and X̌ ⊆ X_P = {x^p | p ∈ P}, where X̌ denotes the set of all the minimal solutions of its feasible domain.

We are now ready to convert problem (1) to a simple form in the next section.

3. Transformation of problem (1) into a separable form

We now use some properties of linear algebra to convert the quadratic form of the objective function to a separable form. To do this, we express the following properties.

Property 1 (Lipschutz & Lipson, 1985). If the n × n real matrix Q is symmetric, then it has n real eigenvalues (with real eigenvectors).

Property 2. If the n × n real symmetric matrix Q, where n ≥ 2, is indefinite, then the matrix Q has n real eigenvalues λ_1, …, λ_n in non-decreasing order such that λ_1 < 0 and λ_n > 0.

Proof. Since the n × n real matrix Q is symmetric, according to Property 1, the matrix Q has n real eigenvalues. We sort them in non-decreasing order as λ_1 ≤ λ_2 ≤ … ≤ λ_n. Since λ = λ_i, for i = 1, …, n, are eigenvalues of the matrix Q, we can write Q·x = λ·x. Pre-multiplying by xᵀ ≠ 0⃗ᵀ, where "0⃗" denotes the zero vector, we conclude that xᵀQx = λ·xᵀx, so λ = xᵀQx / xᵀx, where xᵀx ≠ 0 due to x ≠ 0⃗. Since the matrix Q is indefinite, we can find at least two non-zero vectors x¹ and x² such that λ_1 ≤ x¹ᵀQx¹ / x¹ᵀx¹ < 0 and λ_n ≥ x²ᵀQx² / x²ᵀx² > 0. □

We can now convert the indefinite matrix Q to a positive definite matrix using Property 2. This process is illustrated in the following lemma.

Lemma 1. Assume that the real values λ_1, …, λ_n are the eigenvalues of the n × n real indefinite symmetric matrix Q, and let d > max_j {λ_j | j ∈ n̄}, where n̄ = {1, …, n}. Then the matrix d·I − Q is positive definite.

Proof. Let d > max_j {λ_j | j ∈ n̄}. Also, suppose that λ_{dI−Q} is one of the eigenvalues of the matrix d·I − Q. Then we have (d·I − Q)·x = λ_{dI−Q}·x. Hence, we can write xᵀ·(d·I − Q)·x = λ_{dI−Q}·xᵀx. Since the eigenvector is a non-zero vector, i.e., x ≠ 0⃗, we have λ_{dI−Q} = d·xᵀx/xᵀx − xᵀQx/xᵀx = d − λ_Q, where λ_Q is one of the eigenvalues of the matrix Q. On the other hand, according to the assumption, we have d > max_j {λ_j | j ∈ n̄}. Hence, we conclude that all the eigenvalues of the matrix d·I − Q are positive. Therefore, the matrix d·I − Q is positive definite. □

With regard to Lemma 1, we can write the objective function of problem (1) as follows:

Z(x) = c·x + ½ xᵀQx = c·x + ½ xᵀ[d·I − (d·I − Q)]x = c·x + ½ d·xᵀx − ½ xᵀ(d·I − Q)x.   (6)

Since the matrix d·I − Q is an n × n symmetric positive definite matrix, we have a Cholesky factorization of the matrix d·I − Q as d·I − Q = Lᵀ·L, where L is a lower triangular matrix with positive diagonal elements (Datta, 2010). Now consider the variable change y = Lᵀ·x. This converts problem (1) as follows:

Min c·x + ½ d·xᵀx − ½ yᵀy,
s.t. A ∘ x ≥ d¹,   (7)
     B ∘ x ≤ d²,
     y = Lᵀ·x,
     0⃗ ≤ x ≤ 1⃗,

where the vector 1⃗ is an n × 1 vector of ones. On the other hand, the solution set X(A, d¹, B, d²) can be completely determined by the maximum solution and finitely many minimal solutions (Guo & Xia, 2006; Shivanian & Khorram, 2009). Hence, we can write X(A, d¹, B, d²) as follows:

X(A, d¹, B, d²) = ⋃_{x̌ ∈ X̌} {x ∈ [0, 1]ⁿ | x̌ ≤ x ≤ x̂}.   (8)

With regard to the above point, we can equivalently rewrite problem (7) as follows:

Min c·x + ½ d·xᵀx − ½ yᵀy,
s.t. x̌ ≤ x ≤ x̂, for each x̌ ∈ X̌,   (9)
     y = Lᵀ·x.
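The shift-and-factor identity behind (6)–(9) can be sketched numerically. The following is a minimal NumPy illustration (not the paper's code) of Lemma 1 and the variable change: for any d larger than the largest eigenvalue, d·I − Q is positive definite, admits a Cholesky factorization, and the quadratic form separates as xᵀQx = d·xᵀx − yᵀy. Note that NumPy's convention returns a lower triangular C with d·I − Q = C·Cᵀ, so the variable change is y = Cᵀ·x.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
Q = S + S.T                      # a symmetric, generally indefinite matrix

lam = np.linalg.eigvalsh(Q)
d = lam.max() + 1.0              # any d > lambda_max works (Lemma 1)
M = d * np.eye(4) - Q            # positive definite by Lemma 1
C = np.linalg.cholesky(M)        # NumPy convention: M = C @ C.T, C lower triangular

x = rng.uniform(0.0, 1.0, 4)
y = C.T @ x                      # the variable change (the paper's y = L^T x)

# Separable identity behind (6)-(7): x^T Q x = d * x^T x - y^T y
lhs = x @ Q @ x
rhs = d * (x @ x) - y @ y
assert np.isclose(lhs, rhs)
```

The identity holds for any x, which is why the box constraints of (9) carry over to x unchanged while the y-part of the objective becomes concave and is handled by the linear approximation of Section 3.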


Problem (9) shows that the feasible domain in (9) is separable and that each feasible region is convex. These results can be adopted to separate problem (9) into several sub-problems based on the number of minimal solutions. Let S = {1, …, t} denote the index set of the minimal solutions, and let x̌^s represent the sth minimal solution for each s ∈ S. The sth sub-problem, corresponding to the sth minimal solution, can be written as follows:

Min Z_s = c·x + ½ d·xᵀx − ½ yᵀy,
s.t. x̌^s ≤ x ≤ x̂,   (10)
     y = Lᵀ·x,
     l^s ≤ y ≤ u^s,

where l^s = (l^s_1, …, l^s_n)ᵀ, u^s = (u^s_1, …, u^s_n)ᵀ, and l^s_j, u^s_j, for j = 1, …, n, are computed by the following linear programming problems:

l^s_j := Min L_j·x,  s.t. x̌^s ≤ x ≤ x̂,   (11)

and

u^s_j := Max L_j·x,  s.t. x̌^s ≤ x ≤ x̂,   (12)

where L_j is the jth column of the matrix L for j = 1, …, n. To find the optimal value of problem (1), we should solve the t sub-problems (10). If the optimal value of each sub-problem is Z̄_s, for each s ∈ S, then the optimal value of problem (1) is Z̄* = Min{Z̄_s | s ∈ S}. We now rewrite problem (10) as follows:

Min Z_s = Σ_{j=1}^{n} c_j·x_j + ½ d Σ_{j=1}^{n} x_j² − ½ Σ_{j=1}^{n} y_j²,
s.t. x̌^s_j ≤ x_j ≤ x̂_j, j = 1, …, n,   (13)
     l^s_j ≤ y_j ≤ u^s_j, j = 1, …, n,
     y = Lᵀ·x.

To facilitate the resolution process of problem (13), we apply the least square technique to approximate each function y_j², where j ∈ n̄, with a linear function L_j(y_j) = a_j·y_j + b_j, using some points (y_j, y_j²) where the y_j's are chosen in [l^s_j, u^s_j]. If t_j points (y_ij, y_ij²), with l^s_j ≤ y_ij ≤ u^s_j for i = 1, …, t_j, are chosen to approximate the function f_j(y_j) = y_j², the least square technique minimizes

G_j(a_j, b_j) = Σ_{i=1}^{t_j} (a_j·y_ij + b_j − y_ij²)².

Setting the partial derivatives of G_j(a_j, b_j) with respect to a_j and b_j to zero, i.e., ∂G_j/∂a_j = 0 and ∂G_j/∂b_j = 0, we obtain

a_j = (Σ_{i=1}^{t_j} y_ij·y_ij² − t_j·ȳ_j·z̄_j) / (Σ_{i=1}^{t_j} y_ij² − t_j·ȳ_j²),  b_j = z̄_j − a_j·ȳ_j,   (14)

where z̄_j and ȳ_j are the average of the y_ij², for i = 1, …, t_j, and the average of the y_ij, for i = 1, …, t_j, respectively. Therefore, we can use the approximation f_j(y_j) ≈ L_j(y_j) = a_j·y_j + b_j. The linear approximation is now applied in problem (13) as follows:

Min Z_s = Σ_{j=1}^{n} c_j·x_j + ½ d Σ_{j=1}^{n} x_j² − ½ Σ_{j=1}^{n} L_j(y_j),
s.t. x̌^s_j ≤ x_j ≤ x̂_j, j = 1, …, n,   (15)
     l^s_j ≤ y_j ≤ u^s_j, j = 1, …, n,
     y = Lᵀ·x.

We now substitute the vector y by y = Lᵀ·x, or y_j = L_j·x for j = 1, …, n, in problem (15) and arrive at the following problem:

Min Z_s = −½ Σ_{j=1}^{n} b_j + (c − ½ Σ_{j=1}^{n} a_j·L_j)·x + (d/2)·xᵀx,
s.t. x̌^s ≤ x ≤ x̂.   (16)

Problem (16) is separable, and it can easily be solved. Let u = −½ Σ_{j=1}^{n} b_j, w = c − ½ Σ_{j=1}^{n} a_j·L_j, and D = d/2. Then we can equivalently rewrite problem (16) as follows:

Min Z_s = Σ_{i=1}^{n} (D·x_i² + w_i·x_i) + u,
s.t. x̌^s_i ≤ x_i ≤ x̂_i, i = 1, …, n.   (17)

Since D > 0, we have

min_{x̌^s_i ≤ x_i ≤ x̂_i} (D·x_i² + w_i·x_i) =
    D·x̂_i² + w_i·x̂_i,            if p_i ≥ x̂_i;
    D·p_i² + w_i·p_i,             if x̌^s_i < p_i < x̂_i;   (18)
    D·(x̌^s_i)² + w_i·x̌^s_i,      if p_i ≤ x̌^s_i,

where p_i = −w_i/(2D). Therefore, the approximate optimal value of problem (10) is as follows:

Z̄_s = Σ_{i=1}^{n} min_{x̌^s_i ≤ x_i ≤ x̂_i} (D·x_i² + w_i·x_i) + u.   (19)

Moreover, x̄^s = Arg min_r {Z_s(r)}. Now, if we apply the above technique to each sub-problem (10) with x̌^s, s ∈ S, and create the corresponding problem (17), we obtain Z̄_s for each s ∈ S. In fact, problem (17) is an approximation of problem (10), for each s ∈ S. Hence, we can produce the approximate optimal value of problem (1) by the following relation:

Z* = Min{Z̄_s | s ∈ S},   (20)

where x* = Arg min_{x̄^s} {Z̄_s(x̄^s) | s ∈ S}. We are now ready to present an algorithm to solve problem (1) based on the above points.

Algorithm 1. Assume that problem (1) has been given.
Step 1. Compute the maximum solution x̂ of its feasible domain by relation (3).
Step 2. Check the feasibility of problem (1) using Theorem 1. If the problem is infeasible, then stop.
Step 3. If the real symmetric matrix Q is positive (semi-)definite, then use the modified simplex method of (Abbasi Molai, 2012) to compute the optimal solutions of problem (1) and stop. Otherwise, compute the largest eigenvalue of the indefinite real symmetric matrix Q and call it λ_max. Choose a value d > λ_max and create the positive definite matrix d·I − Q. Then obtain its Cholesky factorization d·I − Q = Lᵀ·L.
Step 4. Compute the minimal solutions of the feasible domain of problem (1) and collect them as X̌ = {x̌¹, …, x̌ᵗ}.
Step 5. For each minimal solution x̌^s, where s = 1, …, t, compute the values l^s_j and u^s_j, for each j = 1, …, n, using problems (11) and (12), respectively.
Step 6. Choose t_j points to approximate the function f_j(y_j) = y_j² by the linear function L_j(y_j) = a_j·y_j + b_j, for j = 1, …, n, using the least square technique, and create problem (17) for each x̌^k, where k = 1, …, t.
Step 7. Find the optimal solutions of problem (17) by relations (18) and (19).
Step 8. Find the approximate optimal value of problem (1) by relation (20).
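Steps 1, 2, and 4 of Algorithm 1 can be sketched as follows. This is a minimal NumPy illustration of relations (3)–(5) and Theorem 1 (the function names are ours, not the paper's), run on the data of Example 1 below; the quasi-minimal candidates are reduced to the minimal solutions by the pair-wise comparison of Corollary 1.

```python
import itertools
import numpy as np

# Data of Example 1 (A, d1 give the ">=" system, B, d2 the "<=" system).
A  = np.array([[0.90, 0.80, 0.75, 0.68],
               [0.75, 0.88, 0.60, 0.80],
               [0.66, 0.89, 1.00, 0.56],
               [0.70, 0.25, 0.90, 0.40]])
d1 = np.array([0.20, 0.25, 0.45, 0.40])
B  = np.array([[0.25, 0.40, 0.10, 0.40],
               [0.30, 0.60, 0.24, 0.80],
               [0.71, 0.50, 0.23, 0.30],
               [0.60, 0.80, 0.50, 0.60]])
d2 = np.array([0.141, 0.200, 0.435, 0.333])

def maximum_solution(B, d2):
    # Relation (3): xhat_j = min over {i : b_ij > d2_i} of d2_i / b_ij
    # (an empty minimum is defined as 1).
    n = B.shape[1]
    xhat = np.ones(n)
    for j in range(n):
        mask = B[:, j] > d2
        if mask.any():
            xhat[j] = (d2[mask] / B[mask, j]).min()
    return xhat

def characteristic_matrix(A, d1, xhat):
    # Relation (4): v_ij = 1 iff a_ij * xhat_j >= d1_i.
    return (A * xhat >= d1[:, None]).astype(int)

def quasi_minimal_solutions(A, d1, V):
    # Theorem 2 / relation (5): one candidate x^p per FRI path p.
    J = [np.flatnonzero(row) for row in V]    # J_i = {j : v_ij = 1}
    sols = []
    for p in itertools.product(*J):
        xp = np.zeros(A.shape[1])
        for i, j in enumerate(p):
            xp[j] = max(xp[j], d1[i] / A[i, j])
        sols.append(xp)
    return sols

xhat = maximum_solution(B, d2)
V = characteristic_matrix(A, d1, xhat)
assert V.any(axis=1).all(), "Theorem 1: the problem is infeasible"
candidates = quasi_minimal_solutions(A, d1, V)
# Corollary 1: keep only the minimal elements (pair-wise comparison).
minimal = [x for x in candidates
           if not any((y <= x).all() and (y < x).any() for y in candidates)]
```

Running this on the Example 1 data reproduces x̂ = (0.555, 0.333, 0.666, 0.25)ᵀ and the single minimal solution (0, 0, 0.45, 0)ᵀ. The enumeration over all FRI paths is exponential in m in general, consistent with the NP-hardness discussed in Remark 2.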


It is necessary to present some remarks about Algorithm 1.

Remark 1. Some sufficient conditions and some results for the reduction of the size of the original problem have been presented in Section 3 of the paper (Abbasi Molai, 2012). These results have been expressed in a general case. We can also apply these results to reduce problem (1) before its resolution.

Remark 2. The problem of finding the minimal solutions of FRE is an NP-hard problem in terms of computational complexity (Markovskii, 2005). Since FRE is a special case of FRI, the problem of finding the minimal solutions of FRI is also NP-hard. It has been shown that a polynomial time algorithm to find all minimal solutions for a general system of FRE (or FRI) simply does not exist unless P = NP (Chen & Wang, 2002). However, in many real applications, the problem of finding the minimal solutions of FRE or FRI can be simplified to become a polynomial time problem.

We now illustrate Algorithm 1 by an example.

Example 1. Consider the following problem:

Min Z(x) = c·x + ½ xᵀQx,
s.t. A ∘ x ≥ d¹,
     B ∘ x ≤ d²,
     x ∈ [0, 1]⁴,

where the matrices c, Q, A, d¹, B, and d² are as follows:

c = [2, −5, 4, −14],

Q = [ 2  −1  −2  −1
     −1   4  −3  −6
     −2  −3   1  −2
     −1  −6  −2   3 ],

A = [0.90  0.80  0.75  0.68
     0.75  0.88  0.60  0.80
     0.66  0.89  1.00  0.56
     0.70  0.25  0.90  0.40],   d¹ = [0.20, 0.25, 0.45, 0.40]ᵀ,

B = [0.25  0.40  0.10  0.40
     0.30  0.60  0.24  0.80
     0.71  0.50  0.23  0.30
     0.60  0.80  0.50  0.60],   d² = [0.141, 0.20, 0.435, 0.333]ᵀ.

Moreover, the eigenvalues of the indefinite matrix Q are {−7.0008, −1.1124, 3.5899, 8.2985}. We are now ready to solve this problem by Algorithm 1.

Step 1. The maximum solution of the set X(A, d¹, B, d²) with respect to relation (3) is x̂ = [0.555, 0.333, 0.666, 0.25]ᵀ.

Step 2. The characteristic matrix V is as follows:

V = [1 1 1 0
     1 1 1 0
     0 0 1 0
     0 0 1 0].

Since each row of the characteristic matrix V has at least one non-zero component, its feasible domain is non-empty.

Step 3. It is necessary to recall that the real symmetric matrix Q is not positive definite. Its largest eigenvalue is λ_max = 8.2985. We choose the value d = 9 and create the positive definite matrix d·I − Q as follows:

9I − Q = [7 1 2 1
          1 5 3 6
          2 3 8 2
          1 6 2 6] = Lᵀ·L,

where

L = [ 2.5443   0        0        0
      0.0155   1.6936   0        0
      0.6529  −0.6529   2.7568   0
     −0.3162   1.8974  −0.6324   3.1623 ].

Step 4. The FRI paths are as follows: P = {(1, 1, 3, 3), (1, 2, 3, 3), (1, 3, 3, 3), (2, 1, 3, 3), (2, 2, 3, 3), (2, 3, 3, 3), (3, 1, 3, 3), (3, 2, 3, 3), (3, 3, 3, 3)}. With regard to Theorem 2, the quasi-minimal solution set X_P = {x^p | p ∈ P} is {(0.3, 0, 0.45, 0)ᵀ, (0.2, 0.284091, 0.45, 0)ᵀ, (0.2, 0, 0.45, 0)ᵀ, (0.3, 0.25, 0.45, 0)ᵀ, (0, 0.284091, 0.45, 0)ᵀ, (0, 0.25, 0.45, 0)ᵀ, (0.3, 0, 0.45, 0)ᵀ, (0, 0.284091, 0.45, 0)ᵀ, (0, 0, 0.45, 0)ᵀ}. The minimal solutions X̌, obtained by pair-wise comparison of the elements of X_P, are X̌ = {x̌¹} = {(0, 0, 0.45, 0)ᵀ}.

Step 5. For the minimal solution x̌¹, we create the problems (11) and (12) as follows:

l¹₁ = min 2.5443x₁ + 0.0155x₂ + 0.6529x₃ − 0.3162x₄,  s.t. x̌¹ ≤ x ≤ x̂.

Since this problem is separable, and with regard to the special structure of its constraints, we can easily obtain its optimal solution without using the simplex method: x₁ = x₂ = 0, x₃ = 0.45, and x₄ = 0.25. Hence, l¹₁ = 0.214755. Similarly, we can easily compute the other values l¹_j as follows:

l¹₂ = min 1.6936x₂ − 0.6529x₃ + 1.8974x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x₁ = x₂ = x₄ = 0, x₃ = 0.666, and l¹₂ = −0.4348.

l¹₃ = min 2.7568x₃ − 0.6324x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x₁ = x₂ = 0, x₃ = 0.45, x₄ = 0.25, and l¹₃ = 1.08246.

l¹₄ = min 3.1623x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x₁ = x₂ = 0, x₃ = 0.45, x₄ = 0, and l¹₄ = 0.


Also, we can similarly compute the values u¹_j as follows:

u¹₁ = max 2.5443x₁ + 0.0155x₂ + 0.6529x₃ − 0.3162x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x = (0.555, 0.333, 0.666, 0)ᵀ with u¹₁ = 1.8521.

u¹₂ = max 1.6936x₂ − 0.6529x₃ + 1.8974x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x = (0, 0.333, 0.45, 0.25)ᵀ with u¹₂ = 0.744514.

u¹₃ = max 2.7568x₃ − 0.6324x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x = (0, 0, 0.666, 0)ᵀ with u¹₃ = 1.836.

u¹₄ = max 3.1623x₄, s.t. x̌¹ ≤ x ≤ x̂  ⇒  x = (0, 0, 0, 0.25)ᵀ with u¹₄ = 0.7906.

Step 6. We choose t_j = 3 points, for j = 1, …, 4, to approximate the functions f_j(y_j) = y_j², where l¹_j ≤ y_j ≤ u¹_j, for j = 1, …, 4. The chosen points and the resulting linear functions are given in Table 1. The required parameters for creating problem (17) are computed as follows:

w = c − ½ Σ_{j=1}^{4} a_j·L_j
  = [2, −5, 4, −14]ᵀ − ½ (2.0883·[2.5443, 0.0155, 0.6529, −0.3162]ᵀ + 0.2579·[0, 1.6936, −0.6529, 1.8974]ᵀ + 2.9783·[0, 0, 2.7568, −0.6324]ᵀ + 0.7899·[0, 0, 0, 3.1623]ᵀ)
  = [−0.6566, −5.2346, −0.7028, −14.2217]ᵀ,

D = d/2 = 4.5,

u = −½ Σ_{j=1}^{4} b_j = 1.2961.

With regard to the above points, we can produce problem (17) for x̌¹ as follows:

Min Z̄₁ = (4.5x₁² − 0.6566x₁) + (4.5x₂² − 5.2346x₂) + (4.5x₃² − 0.7028x₃) + (4.5x₄² − 14.2217x₄) + 1.2961,
s.t. 0 ≤ x₁ ≤ 0.555, 0 ≤ x₂ ≤ 0.333, 0.45 ≤ x₃ ≤ 0.666, 0 ≤ x₄ ≤ 0.25.

Step 7. The optimal solutions of problem (17) for this example, according to relations (18) and (19), are as follows:

p₁ = 0.6566/(2·4.5) = 0.073  ⇒  x¹_1 = 0.073,
p₂ = 5.2346/(2·4.5) = 0.5816  ⇒  x¹_2 = 0.333,
p₃ = 0.7028/(2·4.5) = 0.0781  ⇒  x¹_3 = 0.45,
p₄ = 14.2217/(2·4.5) = 1.5802  ⇒  x¹_4 = 0.25.

Hence, the optimal solution of the problem related to x̌¹ is x¹* = (0.073, 0.333, 0.45, 0.25)ᵀ with Z̄₁ = −2.6512.

Step 8. Since X̌ = {x̌¹} is a singleton, we have S = {1}. According to relation (20), the approximate optimal value of the original problem is Z* = Min{Z̄₁} = −2.6512 with x* = x¹* = (0.073, 0.333, 0.45, 0.25)ᵀ.

4. Comparison with the previous work in (Abbasi Molai, 2012)

The advantages of our proposal with respect to the work done in (Abbasi Molai, 2012) are as follows:

Table 1. The chosen points and the linear functions obtained by the least square technique.

Function        Point 1          Point 2          Point 3           Linear function (least squares)
f1(y1) = y1²    (0.22, 0.0484)   (1.85, 3.42)     (0.975, 0.951)    L1(y1) = 2.0883·y1 − 0.6465
f2(y2) = y2²    (0.43, 0.18)     (0.74, 0.5476)   (0.215, 0.046)    L2(y2) = 0.2579·y2 + 0.2063
f3(y3) = y3²    (1.1, 1.21)      (1.84, 3.386)    (1.21, 1.464)     L3(y3) = 2.9783·y3 − 2.1
f4(y4) = y4²    (0, 0)           (0.79, 0.624)    (0.395, 0.156)    L4(y4) = 0.7899·y4 − 0.052
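The coefficients in Table 1 and the closed-form minimization of Step 7 can be checked numerically. The following sketch (variable names are ours) reproduces the first row of Table 1 via a degree-1 least-squares fit, i.e., relation (14), and the value Z̄₁ via relations (18) and (19), where the box-constrained minimizer is the clipped unconstrained minimizer p = −w/(2D).

```python
import numpy as np

# First row of Table 1: degree-1 least squares through the three chosen
# points of f1(y1) = y1^2 (the tabulated y^2 values are used).
ys = np.array([0.22, 1.85, 0.975])
fs = np.array([0.0484, 3.42, 0.951])
a1, b1 = np.polyfit(ys, fs, 1)        # ≈ 2.0883 and -0.6465, as in Table 1

# Step 7: minimize D*x^2 + w*x over a box via relation (18):
# clip the unconstrained minimizer p = -w/(2D) to [lo, hi].
D  = 4.5
w  = np.array([-0.6566, -5.2346, -0.7028, -14.2217])
lo = np.array([0.0, 0.0, 0.45, 0.0])      # minimal solution (0, 0, 0.45, 0)
hi = np.array([0.555, 0.333, 0.666, 0.25])
x_opt = np.clip(-w / (2 * D), lo, hi)
Z1 = np.sum(D * x_opt**2 + w * x_opt) + 1.2961   # u = 1.2961; Z1 ≈ -2.6512
```

This confirms x¹* = (0.073, 0.333, 0.45, 0.25)ᵀ and Z̄₁ = −2.6512: relation (18) is exactly the clipping of each coordinate's unconstrained minimizer, which is why no simplex iterations are needed.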


1. As explained in Subsection 4.2 of the paper (Abbasi Molai, 2012), the procedure proposed there can only solve the fuzzy relation quadratic programming problem (1) in a special case, namely when the Hessian matrix of the objective function, i.e., Q, is positive semi-definite; only then can the modified simplex method be applied, and otherwise that method cannot be used to solve problem (1). In this paper, we have covered a more general class of problem (1): we only assume that the matrix Q is a real symmetric matrix. This class contains, and is much more extensive than, the class of positive semi-definite matrices.
2. Recognizing symmetric matrices is easier than recognizing positive semi-definite matrices.
3. In this paper, problem (1) is finally converted to some separable quadratic programming problems with box constraints. We can obtain the optimal solutions of these problems using relation (18), which provides a closed form for the approximate optimal value of the original problem; we do not even need to use the modified simplex method. This considerably reduces the computational effort.

5. An application example for model (1) and its extension to real problems

In this section, we present an application example and some explanation of the extension to real problems that can be modeled as (1). One of these problems is in the economic area. Here, we apply the application example used in (Abbasi Molai, 2010) to explain the problem that can be modeled as (1). Of course, there are other application examples, in the areas of digital data service with random costs and reliability engineering, whose mathematical models are of the form defined in (1).

Example 2. There are six factories in six cities.
Each factory produces the required foodstuff for the people of one of the cities and covers its people's alimentary requirements. A financier decides to cover the people's alimentary requirements of the six cities by enhancing the alimentary quality of factory (A) in city (A). He considers the six criteria below to convince the people in the six cities to select the productions of this factory:

(1) The quality of primary materials.
(2) The quality of packing.
(3) The rate of durability (or perdurability).
(4) The quality of the factory's laboratories.
(5) The quality of the factory's cleanliness.
(6) The quality of production machineries.

The financier has some plans for each potentially poor criterion, as follows:

(P1) If the quality level of the primary materials is poor, then he provides primary material with a high quality.
(P2) If the quality level of packing is poor, he contracts packing factories to enhance this quality.
(P3) If the durability rate of its productions is low, then he uses innocuous preserver materials to increase the durability rate.
(P4) If the laboratorial equipments are poor, then he increases them and equips the laboratories.
(P5) If the quality level of cleanliness is poor, he employs more workmen to clean factory (A).
(P6) If the quality level of production machineries is low or they are old, then he reconditions them or replaces them by new machineries.

By considering criteria (1)–(6), we can categorize the people's requirements in six classes: (1) the problem of the quality of the primary materials, (2) the problem of the quality of packing productions, (3) the rate of durability, (4) the quality level of laboratories, (5) the quality level of cleanliness, and (6) the quality level of production machineries in factory (A). Now, suppose that a_ij denotes the required quality or rate of criterion (i) from the viewpoint of the people in city (j). For the six cities and the six criteria, this is denoted by the 6 × 6 matrix A = [a_ij], where a_ij ∈ [0, 1] for i, j = 1, …, 6.

The financier estimates that if he expends cost x_j (normalized in [0, 1]) to overcome the requirement of kind (i) by doing activity (P_i), then he will obtain quality level a_ij · x_j from the viewpoint of the people in city (j) for criterion (i). Also, the financier estimates levels d^1_i, i = 1, …, 6, such that if he fulfills at least quality levels d^1_i, i = 1, …, 6, for criterion (i) for the people of at least one of the cities, then he will overcome the difficulties of kind (i) by doing activity (P_i). The vector d^1 is a 1 × 6 vector d^1 = [d^1_i], where d^1_i ∈ [0, 1] for i = 1, …, 6.

Furthermore, the financier considers the following eight criteria to prevent loss and to sell more:

(1) The rate of used chemical material.
(2) The inflation created by increasing prices.
(3) The tax based on increasing prices.
(4) The incoherence of the produced foodstuff with the native foodstuff of the city from the viewpoint of taste.
(5) The incoherence of the produced foodstuff with the native foodstuff of the city from the viewpoint of color.
(6) The incoherence of the produced foodstuff with the native foodstuff of the city from the viewpoint of smell.
(7) The quality difference of the produced foodstuff with the native foodstuff of the city from the viewpoint of energy.
(8) The quality difference of the produced foodstuff with the native foodstuff of the city from the viewpoint of vitamins.

Now, suppose that b_ij denotes the required rate of criterion (i) from the viewpoint of experts in city (j). For the six cities and the eight criteria, this is denoted by the 8 × 6 matrix B = [b_ij], where b_ij ∈ [0, 1] for i = 1, …, 8 and j = 1, …, 6.
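The requirement that a level d^1_i be reached for at least one city is exactly a max-product composition of the matrix A with the cost vector x. A minimal sketch with hypothetical 2 × 2 data (the example's actual 6 × 6 matrix A is not reproduced in the text) might look like:

```python
def max_product(M, x):
    """Max-product composition: (M o x)_i = max_j M[i][j] * x[j]."""
    return [max(m * xj for m, xj in zip(row, x)) for row in M]

def meets_thresholds(M, x, d, sense=">="):
    """Componentwise check of M o x against a threshold vector d."""
    comp = max_product(M, x)
    if sense == ">=":
        return all(v >= t for v, t in zip(comp, d))
    return all(v <= t for v, t in zip(comp, d))

# hypothetical 2-criterion, 2-city data just to exercise the check
A = [[0.8, 0.3],
     [0.4, 0.9]]
d1 = [0.4, 0.45]
x = [0.5, 0.5]
meets_lower = meets_thresholds(A, x, d1, ">=")   # is A o x >= d1 ?
```

The same `meets_thresholds` call with `sense="<="` covers an upper-bound system such as the one built from B below.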

The financier also estimates that if he expends cost x_j to control criterion (i), then he will obtain quality level or rate b_ij · x_j from his own viewpoint. Also, the financier estimates levels d^2_i, i = 1, …, 8, such that if he fulfills at most quality levels d^2_i, i = 1, …, 8, for criterion (i) for at least one of the cities, then he will prevent the bankruptcy. The vector d^2 is a 1 × 8 vector d^2 = [d^2_i], where d^2_i ∈ [0, 1] for i = 1, …, 8. The range of costs that the financier should pay to fulfill the levels d^1_i, i = 1, …, 6, and d^2_i, i = 1, …, 8, is obtained by solving the fuzzy relation inequalities A ∘ x ≥ d^1 and B ∘ x ≤ d^2, where "∘" denotes the max-product composition operator and x ∈ [0, 1]^6.

If the financier expends cost x_j for city (j), j = 1, …, 6, then the total profit is Z = Σ_{j=1}^{6} c_j x_j with the total risk R = Σ_{j=1}^{6} r_j x_j. The coefficients c_j and r_j, for j = 1, …, 6, are not fixed but are normally distributed random variables. The functions Z and R will thus be two random variables with means Z̄ = Σ_{j=1}^{6} c̄_j x_j and R̄ = Σ_{j=1}^{6} r̄_j x_j, respectively, and variances σ²_Z = xᵀV_Z x and σ²_R = xᵀV_R x, respectively, where c̄_j and r̄_j, for j = 1, …, 6, are the means of c_j and r_j, and x = (x₁, x₂, x₃, x₄, x₅, x₆)ᵀ. The matrices V_Z and V_R are the covariance matrices of the c_j and of the r_j, respectively, defined as follows:

$$V_Z=\begin{bmatrix}\mathrm{Var}(c_1)&\mathrm{Cov}(c_1,c_2)&\cdots&\mathrm{Cov}(c_1,c_6)\\\mathrm{Cov}(c_2,c_1)&\mathrm{Var}(c_2)&\cdots&\mathrm{Cov}(c_2,c_6)\\\vdots&\vdots&\ddots&\vdots\\\mathrm{Cov}(c_6,c_1)&\mathrm{Cov}(c_6,c_2)&\cdots&\mathrm{Var}(c_6)\end{bmatrix},$$

$$V_R=\begin{bmatrix}\mathrm{Var}(r_1)&\mathrm{Cov}(r_1,r_2)&\cdots&\mathrm{Cov}(r_1,r_6)\\\mathrm{Cov}(r_2,r_1)&\mathrm{Var}(r_2)&\cdots&\mathrm{Cov}(r_2,r_6)\\\vdots&\vdots&\ddots&\vdots\\\mathrm{Cov}(r_6,r_1)&\mathrm{Cov}(r_6,r_2)&\cdots&\mathrm{Var}(r_6)\end{bmatrix}.$$
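The means and variances above are plain linear-algebra evaluations of Z̄ = c̄ᵀx and σ²_Z = xᵀV_Z x. A minimal sketch, with hypothetical 2-dimensional data standing in for the example's 6-dimensional covariances, is:

```python
def linear_form_stats(c_bar, V, x):
    """Mean and variance of Z = sum_j c_j x_j when E[c] = c_bar and Cov(c) = V:
    Z_bar = c_bar^T x  and  sigma_Z^2 = x^T V x."""
    n = len(x)
    mean = sum(c_bar[j] * x[j] for j in range(n))
    var = sum(x[i] * V[i][j] * x[j] for i in range(n) for j in range(n))
    return mean, var

# hypothetical 2-dimensional data (the example itself is 6-dimensional)
c_bar = [1.0, 2.0]
V_Z = [[0.25, 0.05],
       [0.05, 0.16]]          # symmetric covariance matrix of the c_j
x = [0.5, 0.5]
z_mean, z_var = linear_form_stats(c_bar, V_Z, x)
```

Note that `var` is a quadratic form in x, which is exactly why the combined objective built later from V_Z and V_R has the quadratic structure of problem (1).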

It is convenient to normalize the values Z and R by a utility function u and a risk function ρ so that (1) u = 0 for Z = 0 and u → 1 as Z approaches +∞, and (2) ρ = 0 for R = 0 and ρ → 1 as R approaches +∞. The functions u and ρ are called the financier's utility and normalized risk functions; they are usually non-decreasing continuous functions. Different curves can be expressed mathematically for u and ρ, for example u(Z) = 1 − e^{−Z} and ρ(R) = 1 − e^{−R}. The density functions φ_Z and φ_R are given by

$$\varphi_Z(Z)=\frac{1}{\sigma_Z\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{Z-\bar{Z}}{\sigma_Z}\right)^{2}\right),\qquad \varphi_R(R)=\frac{1}{\sigma_R\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{R-\bar{R}}{\sigma_R}\right)^{2}\right).$$

The financier wishes to maximize the expected value of the utility and to minimize the expected value of the normalized risk, given by the following relations:

$$\int_{-\infty}^{+\infty}\left(1-e^{-Z}\right)\varphi_Z(Z)\,dZ=1-\exp\left(-\bar{Z}+\frac{1}{2}\sigma_Z^{2}\right)$$

and

$$\int_{-\infty}^{+\infty}\left(1-e^{-R}\right)\varphi_R(R)\,dR=1-\exp\left(-\bar{R}+\frac{1}{2}\sigma_R^{2}\right),$$

simultaneously. This is equivalent to maximizing Z̄ − ½σ²_Z and minimizing R̄ − ½σ²_R, respectively. Substituting for Z̄, R̄, σ²_Z, and σ²_R, we have:

$$\begin{aligned}\text{Max }&\bar{c}^{T}x-\tfrac{1}{2}x^{T}V_{Z}x,\\\text{Min }&\bar{r}^{T}x-\tfrac{1}{2}x^{T}V_{R}x,\\\text{s.t. }&A\circ x\ge d^{1},\\&B\circ x\le d^{2},\\&x\in[0,1]^{6},\end{aligned}$$

where c̄ = (c̄₁, c̄₂, c̄₃, c̄₄, c̄₅, c̄₆)ᵀ and r̄ = (r̄₁, r̄₂, r̄₃, r̄₄, r̄₅, r̄₆)ᵀ. This problem can equivalently be converted to the following problem:

$$\begin{aligned}\text{Min }&-\bar{c}^{T}x+\tfrac{1}{2}x^{T}V_{Z}x,\\\text{Min }&\bar{r}^{T}x-\tfrac{1}{2}x^{T}V_{R}x,\\\text{s.t. }&A\circ x\ge d^{1},\\&B\circ x\le d^{2},\\&x\in[0,1]^{6}.\end{aligned}$$
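The two closed forms above follow from the Gaussian identity E[e^{−Z}] = exp(−Z̄ + σ²_Z/2) for Z ~ N(Z̄, σ²_Z). A quick numerical sanity check of the first relation by midpoint-rule integration, with illustrative values Z̄ = 1 and σ_Z = 0.5 rather than data from the example, is:

```python
import math

def expected_utility(mean, sigma, steps=200_000, width=10.0):
    """Midpoint-rule approximation of E[1 - exp(-Z)] for Z ~ N(mean, sigma**2),
    integrating over [mean - width*sigma, mean + width*sigma]."""
    lo = mean - width * sigma
    h = 2.0 * width * sigma / steps
    total = 0.0
    for k in range(steps):
        z = lo + (k + 0.5) * h                    # midpoint of subinterval k
        phi = math.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += (1.0 - math.exp(-z)) * phi * h   # (1 - e^{-Z}) * density * dZ
    return total

z_mean, z_sigma = 1.0, 0.5
numeric = expected_utility(z_mean, z_sigma)
closed = 1.0 - math.exp(-z_mean + 0.5 * z_sigma ** 2)   # 1 - exp(-Z_bar + sigma_Z^2 / 2)
```

Since 1 − exp(−t) is increasing in t, maximizing the expected utility is indeed the same as maximizing Z̄ − ½σ²_Z, as stated above.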

A new deterministic minimization problem can be formulated as follows:

$$\begin{aligned}\text{Min }&k_{1}\left(-\bar{c}^{T}x+\tfrac{1}{2}x^{T}V_{Z}x\right)+k_{2}\left(\bar{r}^{T}x-\tfrac{1}{2}x^{T}V_{R}x\right),\\\text{s.t. }&A\circ x\ge d^{1},\\&B\circ x\le d^{2},\\&x\in[0,1]^{6}.\end{aligned}$$

Equivalently, we have:

$$\begin{aligned}\text{Min }&\left(-k_{1}\bar{c}^{T}+k_{2}\bar{r}^{T}\right)x+\tfrac{1}{2}x^{T}\left(k_{1}V_{Z}-k_{2}V_{R}\right)x,\\\text{s.t. }&A\circ x\ge d^{1},\\&B\circ x\le d^{2},\\&x\in[0,1]^{6},\end{aligned}\tag{21}$$

where k₁ and k₂ are non-negative constants whose values indicate the relative importance of the two expressions (−c̄ᵀx + ½xᵀV_Z x) and (r̄ᵀx − ½xᵀV_R x), or indirectly of the expected value of the utility and of the risk, respectively. Here, we are interested in giving equal importance to the minimization of the expression (−c̄ᵀx + ½xᵀV_Z x) and of the expression (r̄ᵀx − ½xᵀV_R x), i.e., k₁ = k₂ = 1. Since the two matrices V_Z and V_R are symmetric, the matrix k₁V_Z − k₂V_R in problem (21) is symmetric. However, the matrix k₁V_Z − k₂V_R need not be positive semi-definite or positive definite in the general case. The problem therefore has the condition and form of problem (1), and we can solve it by the approach proposed in this paper.

6. Conclusions

The minimization problem of a quadratic objective function with fuzzy relation inequality constraints was studied in this paper. Its objective function is not necessarily convex. Some sufficient conditions were presented to simplify the resolution process of the fuzzy relation quadratic programming problem. Under these sufficient conditions and applying Cholesky's factorization, a suitable variable change was proposed to convert the quadratic objective function to a separable quadratic objective function including only expressions of the form x_i, x²_i, and y²_i. In this case, we used a linear

approximation instead of the functions y²_i by the least square technique. Then, applying the variable change between the two variable vectors x and y, we obtained a separable quadratic programming problem with respect to x. This problem was easily solved and the approximate optimal value of the original problem was found. Finally, an application of the problem in the real world was presented, and some other potential applications were pointed out in the areas of digital data service and reliability engineering.

Acknowledgements

The author is very grateful to the anonymous referees for their comments and suggestions, which have been very helpful in improving the paper.

References

Abbasi Molai, A. (2010). Fuzzy linear objective function optimization with fuzzy-valued max-product fuzzy relation inequality constraints. Mathematical and Computer Modelling, 51, 1240–1250.
Abbasi Molai, A. (2012). The quadratic programming problem with fuzzy relation inequality constraints. Computers and Industrial Engineering, 62, 256–263.
Chang, C.-W., & Shieh, B.-S. (2013). Linear optimization problem constrained by fuzzy max–min relation equations. Information Sciences, 234, 71–79.
Chen, L., & Wang, P. P. (2002). Fuzzy relation equations (I): The general and specialized solving algorithms. Soft Computing, 6, 428–435.
Czogala, E., & Pedrycz, W. (1981). On identification in fuzzy systems and its applications in control problems. Fuzzy Sets and Systems, 6, 73–83.
Datta, B. N. (2010). Numerical linear algebra and applications. SIAM.
Di Nola, A., & Russo, C. (2007). Lukasiewicz transform and its application to compression and reconstruction of digital images. Information Sciences, 177, 1481–1498.
Di Nola, A., Sessa, S., Pedrycz, W., & Sanchez, E. (1989). Fuzzy relation equations and their applications to knowledge engineering. Dordrecht, Boston, London: Kluwer Academic Publishers.
Fang, S. C., & Li, G. (1999). Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems, 103, 107–113.
Freson, S., De Baets, B., & De Meyer, H. (2013). Linear optimization with bipolar max–min constraints. Information Sciences, 234, 3–15.
Gavalec, M., & Zimmermann, K. (2013). Duality of optimization problems with generalized fuzzy relation equation and inequality constraints. Information Sciences, 234, 64–70.
Ghodousian, A., & Khorram, E. (2008). Fuzzy linear optimization in the presence of the fuzzy relation inequality constraints with max–min composition. Information Sciences, 178, 501–519.
Ghodousian, A., & Khorram, E. (2012). Linear optimization with an arbitrary fuzzy relational inequality. Fuzzy Sets and Systems, 206, 89–102.
Guo, F.-F., Pang, L.-P., Meng, D., & Xia, Z.-Q. (2011). An algorithm for solving optimization problems with fuzzy relational inequality constraints. Information Sciences. http://dx.doi.org/10.1016/j.ins.2011.09.030.
Guo, F.-F., & Xia, Z. Q. (2006). An algorithm for solving optimization problems with one linear objective function and finitely many constraints of fuzzy relation inequalities. Fuzzy Optimization and Decision Making, 5, 33–47.
Guu, S.-M., & Wu, Y. K. (2002). Minimizing a linear objective function with fuzzy relation equation constraints. Fuzzy Optimization and Decision Making, 1, 347–360.
Guu, S.-M., & Wu, Y.-K. (2010). Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint. Fuzzy Sets and Systems, 161, 285–297.
Guu, S.-M., Wu, Y.-K., & Lee, E. S. (2011). Multi-objective optimization with a max-t-norm fuzzy relational equation constraint. Computers and Mathematics with Applications, 61, 1559–1566.
Hassanzadeh, R., Khorram, E., Mahdavi, I., & Mahdavi-Amiri, N. (2011). A genetic algorithm for optimization problems with fuzzy relation constraints using max-product composition. Applied Soft Computing, 11, 551–560.
Khorram, E., & Zarei, H. (2009). Multi-objective optimization problems with fuzzy relation equation constraints regarding max-average composition. Mathematical and Computer Modelling, 49, 856–867.
Li, P., & Fang, S.-C. (2008). On the resolution and optimization of a system of fuzzy relational equations with sup-T composition. Fuzzy Optimization and Decision Making, 7, 169–214.

Li, J., Feng, S., & Mi, H. (2012). A kind of nonlinear programming problem based on mixed fuzzy relation equations constraints. Physics Procedia, 33, 1717–1724.
Lin, J.-L., Wu, Y.-K., & Chang, P.-C. (2009). Minimizing a nonlinear function under a fuzzy max-t-norm relational equation constraint. Expert Systems with Applications, 36, 11633–11640.
Lipschutz, S., & Lipson, M. (1985). Linear algebra (4th ed.).
Loetamonphong, J., & Fang, S. C. (2001). Optimization of fuzzy relation equations with max-product composition. Fuzzy Sets and Systems, 118, 509–517.
Loia, V., & Sessa, S. (2005). Fuzzy relation equations for coding/decoding processes of images and videos. Information Sciences, 171, 145–172.
Lu, J. J., & Fang, S.-C. (2001). Solving nonlinear optimization problems with fuzzy relation equation constraints. Fuzzy Sets and Systems, 119, 1–20.
Markovskii, A. V. (2005). On the relation between equations with max-product composition and the covering problem. Fuzzy Sets and Systems, 153, 261–273.
Mashayekhi, Z., & Khorram, E. (2009). On optimizing a linear objective function subjected to fuzzy relation inequalities. Fuzzy Optimization and Decision Making, 8, 103–114.
Nobuhara, H., Bede, B., & Hirota, K. (2006a). On various eigen fuzzy sets and their application to image reconstruction. Information Sciences, 176, 2988–3010.
Nobuhara, H., Pedrycz, W., Sessa, S., & Hirota, K. (2006b). A motion compression/reconstruction method based on max t-norm composite fuzzy relational equations. Information Sciences, 176, 2526–2552.
Pedrycz, W. (1985). On generalized fuzzy relational equations and their applications. Journal of Mathematical Analysis and Applications, 107, 520–536.
Peeva, K., & Kyosev, Y. (2004). Fuzzy relational calculus: Theory, applications and software. New Jersey: World Scientific.
Sanchez, E. (1976). Resolution of composite fuzzy relation equations. Information and Control, 30, 38–48.
Singh, G., Pandey, D., & Thapar, A. (2012).
A posynomial geometric programming restricted to a system of fuzzy relation equations. Procedia Engineering, 38, 3462–3476.
Shieh, B.-S. (2011). Minimizing a linear objective function under a fuzzy max-t norm relation equation constraint. Information Sciences, 181, 832–841.
Shivanian, E., & Khorram, E. (2009). Monomial geometric programming with fuzzy relation inequality constraints with max-product composition. Computers and Industrial Engineering, 56, 1386–1392.
Thapar, A., Pandey, D., & Gaur, S. K. (2009). Optimization of linear objective function with max-t fuzzy relation equations. Applied Soft Computing, 9, 1097–1101.
Thapar, A., Pandey, D., & Gaur, S. K. (2012). Satisficing solutions of multi-objective fuzzy optimization problems using genetic algorithm. Applied Soft Computing, 12, 2178–2187.
Vasantha Kandasamy, W. B., & Smarandache, F. (2004). Fuzzy relational maps and neutrosophic relational maps. Hexis, Church Rock (see Chapters one and two).
Wang, P. Z., Zhang, D. Z., Sanchez, E., & Lee, E. S. (1991). Latticized linear programming and fuzzy relational inequalities. Journal of Mathematical Analysis and Applications, 159, 72–87.
Wu, Y.-K. (2006). Optimizing the geometric programming problem with max-min fuzzy relational equation constraints. Technical Report, Vanung University, Department of Industrial Management.
Wu, Y.-K., & Guu, S. M. (2005). Minimizing a linear function under a fuzzy max–min relational equation constraint. Fuzzy Sets and Systems, 150, 147–162.
Wu, Y.-K., Guu, S. M., & Liu, J. Y. C. (2002). An accelerated approach for solving fuzzy relation equations with a linear objective function. IEEE Transactions on Fuzzy Systems, 10, 552–558.
Wu, Y.-K., Guu, S.-M., & Liu, J. Y.-C. (2007). Optimizing the linear fractional programming problem with max-Archimedean t-norm fuzzy relational equation constraints. In Proc. IEEE Inter. Conf. Fuzz. Syst. (pp. 1–6).
Wu, Y.-K., Guu, S.-M., & Liu, J. Y.-C. (2008).
Reducing the search space of a linear fractional programming problem under fuzzy relational equations with max-Archimedean t-norm composition. Fuzzy Sets and Systems, 159, 3347–3359.
Yang, J. H., & Cao, B. Y. (2005a). Geometric programming with fuzzy relation equation constraints. In Proc. IEEE Inter. Conf. Fuzz. Syst. (pp. 557–560).
Yang, J. H., & Cao, B. Y. (2005b). Geometric programming with max-product fuzzy relation equation constraints. In Proc. Annual Meet. North Amer. Fuzz. Inf. Proces. Soc. (pp. 650–653).
Yang, J. H., & Cao, B. Y. (2007). Posynomial fuzzy relation geometric programming. In P. Melin, O. Castillo, L. T. Aguilar, J. Kacprzyk, & W. Pedrycz (Eds.), Proc. 12th Inter. Fuzz. Syst. Assoc. World Cong. (pp. 563–572). Cancun, Mexico.
Zhang, H. T., Dong, H. M., & Ren, R. H. (2003). Programming problem with fuzzy relation inequality constraints. Journal of Liaoning Normal University, 3, 231–233.
Zhou, X. G., & Ahat, R. (2011). Geometric programming problem with single-term exponents subject to max-product fuzzy relational equations. Mathematical and Computer Modelling, 53, 55–62.
Zimmermann, H.-J. (1991). Fuzzy set theory and its applications (2nd revised ed.). Kluwer Academic Publishers.