An algorithm for optimizing the linear function with fuzzy relation equation constraints regarding max-prod composition




Applied Mathematics and Computation 178 (2006) 502–509 www.elsevier.com/locate/amc

Amin Ghodousian, Esmaile Khorram*

Faculty of Mathematics and Computer Science, Amirkabir University of Technology, 424, Hafez Avenue, 15914 Tehran, Iran

Abstract

Fuzzy sets as the feasible region of an optimization problem are an interesting and ongoing research topic [S.C. Fang, G. Li, Solving fuzzy relation equations with a linear objective function, Fuzzy Sets Syst. 103 (1999) 107–113 [7]; J. Lu, S.C. Fang, Solving nonlinear optimization problems with fuzzy relation constraints, Fuzzy Sets Syst. 119 (2001) 1–20 [16]; E. Khorram, A. Ghodousian, Linear objective function optimization with fuzzy relation constraints regarding max-av composition, Appl. Math. Comput., in press, doi:10.1016/j.amc.2005.04.021 [17]]. In this paper, we focus on problems in which the feasible region is defined by fuzzy relation equations with max-prod composition and the objective function is linear. Since one of the major difficulties in such problems is the non-convexity of the feasible region, it is preferable to study these regions first. Hence, we first investigate two methods and their relationship, and then use them to determine the feasible region. After determining the feasible set, we give an algorithm to optimize a linear objective function over such regions. Finally, we present two examples to illustrate the methods and algorithms.

© 2005 Elsevier Inc. All rights reserved.

Keywords: Linear objective function optimization; Fuzzy relation equations; Fuzzy relation composition

1. Introduction

Let I = {1, 2, ..., m}, J = {1, 2, ..., n} and X = {x ∈ R^m : 0 ≤ x_i ≤ 1 for i ∈ I}. Suppose an m × n fuzzy matrix A = (a_ij) and an n-dimensional vector b = (b_j) such that 0 ≤ a_ij ≤ 1 and 0 ≤ b_j ≤ 1 for each i ∈ I and j ∈ J. We are interested in finding a solution vector x ∈ X such that

    x ∘ A = b,    (1)

where "∘" denotes the max-prod composition [1], i.e.

    max_{i ∈ I} (x_i · a_ij) = b_j,    j = 1, 2, ..., n.    (2)

* Corresponding author. E-mail address: [email protected] (Esmaile Khorram).

0096-3003/$ - see front matter © 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2005.11.069


The topic of fuzzy relation equations has been investigated by a number of researchers [2–5,8–16]; in this paper we are interested in a specific form of these problems. Assume an m-dimensional cost vector c = (c_i), where c_i is associated with the variable x_i for each i ∈ I. We would like to solve the problem

    min z = Σ_{i=1}^{m} c_i x_i
    s.t. x ∘ A = b,    (3)
         0 ≤ x_i ≤ 1, ∀i ∈ I.

First of all, the non-empty solution set of a system of fuzzy relation equations is in general a non-convex set, determined in terms of the maximum solution and a finite number of minimum solutions [3,8]. We will show that these properties also hold for such equations with max-prod composition. These properties result in structural differences between such problems and traditional linear programming [6], both in their form and in their solution methods. For example, the simplex method, the interior point method and the other classical methods cannot be applied to problems such as (3).

In Section 2, the feasible region of the fuzzy relation equations with max-prod composition is investigated precisely, and a first method is introduced to find the maximum solution and the minimum solutions. In Section 3, we convert the matrix A into a modified matrix and derive some structural properties which are very useful for obtaining the optimal points. The matrix modification process improves the first method and accelerates the search for minimum solutions. In Section 4, a second method is introduced. This method and its corollaries lead to 0–1 integer programming and a branch-and-bound technique for linear optimization with fuzzy relation equations regarding max-min composition [7]. Although we could apply those methods again for max-prod composition, it is better to choose a tabular method [17], which is more effective. At the end of that section, it is proved that the first method with matrix modification is the same as the second method, and a necessary and sufficient condition for a non-empty solution set is given. In Section 5, we present an algorithm based on Sections 3 and 4 for optimizing linear objective functions. Finally, an example illustrates what has been presented, and conclusions are drawn.
2. Characterization of the feasible solution set

Definition 1. For 1x, 2x ∈ X[A, b], 1x ≤ 2x iff 1x_i ≤ 2x_i for all i ∈ I, where X[A, b] denotes the feasible solution set of problem (3).

Definition 2. x̂ ∈ X[A, b] is the maximum solution if x ≤ x̂ for all x ∈ X[A, b]. Similarly, x̲ ∈ X[A, b] is a minimum solution if x ≤ x̲ implies x = x̲ for all x ∈ X[A, b].

To determine X[A, b], we decompose (1) into the equations

    x ∘ a_j = b_j,    ∀j ∈ J,    (4)

where a_j is the j-th column of the matrix A. If x is a feasible solution of (4) for a fixed j ∈ J, we must have

    (a) x_i · a_ij ≤ b_j,    ∀i ∈ I,
    (b) ∃i ∈ I : x_i · a_ij = b_j.    (5)

Let 1I_j = {i ∈ I : a_ij > b_j}, 2I_j = {i ∈ I : a_ij = b_j}, 3I_j = {i ∈ I : a_ij < b_j} and I_j = 1I_j ∪ 2I_j. For any x ∈ X, condition (a) is always true for each i ∈ 2I_j ∪ 3I_j, and condition (b) cannot hold for any i ∈ 3I_j. Hence, we can simplify (5) as follows:

    (a) x_i · a_ij ≤ b_j,    ∀i ∈ 1I_j,
    (b) ∃i ∈ 1I_j ∪ 2I_j : x_i · a_ij = b_j.    (6)
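For readers who want to experiment, conditions (5) can be checked directly on a candidate vector. A minimal Python sketch (the function name and the floating-point tolerance are our own; the theory works with exact arithmetic):

```python
def is_feasible(x, A, b, tol=1e-9):
    """Check x ∘ A = b for max-prod composition:
    max_i (x_i * a_ij) must equal b_j for every column j (conditions (5))."""
    m, n = len(A), len(A[0])
    return all(
        abs(max(x[i] * A[i][j] for i in range(m)) - b[j]) <= tol
        for j in range(n)
    )
```

For instance, with the data of Example 1 in Section 4 below, is_feasible([0.2, 1, 1], A, b) and is_feasible([0, 1, 0], A, b) both hold, while [1, 1, 1] violates column 1.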


Now, for a fixed j ∈ J, we define an m-dimensional vector jx̂ = (jx̂_1, jx̂_2, ..., jx̂_m) such that

    jx̂_i = b_j / a_ij    if i ∈ 1I_j,
           1             if i ∈ 2I_j ∪ 3I_j,    (7)

and also, for each i ∈ 1I_j ∪ 2I_j, we let jx̲(i) = ([jx̲(i)]_1, [jx̲(i)]_2, ..., [jx̲(i)]_m) such that

    [jx̲(i)]_k = b_j / a_ij    if k = i,
                0             if k ≠ i.    (8)

The proof of the following lemma is similar to that of Theorem 1 in [17] and follows easily from (7) and (8) for a fixed j ∈ J.

Lemma 1.
(a) jx̂ is the maximum solution of Eq. (4).
(b) jx̲(i), for each i ∈ 1I_j ∪ 2I_j, is a minimum solution of Eq. (4).
(c) x is a feasible solution of Eq. (4) iff x ∈ ∪_{i ∈ I_j} [jx̲(i), jx̂].

Definition 3. Let x̂ = min_{j ∈ J} {jx̂} and e = (e(1), e(2), ..., e(n)) such that e(j) = i ∈ I_j for all j ∈ J. For each e ∈ I_J, we define an m-dimensional vector ex̲ = ([ex̲]_1, [ex̲]_2, ..., [ex̲]_m), ex̲ = max_{j ∈ J} jx̲(e(j)). Hence

    [ex̲]_i = max_{j ∈ J_e(i)} {b_j / a_ij}    if J_e(i) ≠ ∅,
             0                                if J_e(i) = ∅,

where J_e(i) = {j ∈ J : e(j) = i}. Also, I_J = I_1 × I_2 × ... × I_n and X̲ = {ex̲ : e ∈ I_J}.

Theorem 1. X[A, b] = ∪_{e ∈ I_J} [ex̲, x̂].

Proof. See the proof of Theorem 1 in [8]. □

By Theorem 1, X[A, b] is non-convex, x̂ is the maximum solution in X[A, b], and X_0[A, b] ⊆ X̲, where X_0[A, b] denotes the set of minimum solutions of X[A, b]. Furthermore, Theorem 1 together with Definitions 1 and 2 indicates that if, for some e ∈ I_J and i ∈ I, [ex̲]_i > x̂_i, then ex̲ is infeasible, where x̂_i is the i-th component of x̂. Hence, in order to find the minimum solutions, we can first generate the set X̲, find the infeasible vectors ex̲ by comparing them with x̂, and remove them from X̲. Subsequently, we obtain the minimum solutions by pairwise comparison of the feasible vectors ex̲.

Remark 1. The maximum solution is more easily obtained from Definition 3 and (7) as follows:

    x̂_i = min_{j ∈ T(i)} {b_j / a_ij}    if T(i) ≠ ∅,
          1                              if T(i) = ∅,

where T(i) = {j ∈ J : a_ij > b_j} and x̂_i is the i-th component of x̂.

3. Modification of the matrix A

In this section, we accelerate the search for minimum solutions in the first method by modifying the matrix A.

Lemma 2. If for j1, j2 ∈ J we have a_ij1 ≥ b_j1, a_ij2 ≥ b_j2 and b_j1/a_ij1 > b_j2/a_ij2, then a_ij1 can be set to zero.

Proof. See Lemma 2 in [16]. □
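Remark 1 and Lemma 2 translate directly into code. The sketch below (our own naming, plain Python floats with a small tolerance) computes the maximum solution x̂, and applies the modification of Lemma 2 by keeping, in each row, only those entries a_ij ≥ b_j whose ratio b_j/a_ij is minimal:

```python
def max_solution(A, b):
    """Remark 1: x̂_i = min{b_j/a_ij : a_ij > b_j}, or 1 if no such column j."""
    m, n = len(A), len(A[0])
    return [min([b[j] / A[i][j] for j in range(n) if A[i][j] > b[j]], default=1.0)
            for i in range(m)]

def modify(A, b, tol=1e-9):
    """Lemma 2: if a_ij1 >= b_j1, a_ij2 >= b_j2 and b_j1/a_ij1 > b_j2/a_ij2,
    then a_ij1 can be set to zero."""
    M = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for i in range(m):
        cols = [j for j in range(n) if A[i][j] >= b[j] and A[i][j] > 0]
        if not cols:
            continue
        r = min(b[j] / A[i][j] for j in cols)   # tightest ratio in row i
        for j in cols:
            if b[j] / A[i][j] > r + tol:        # strictly larger ratio -> redundant
                M[i][j] = 0.0
    return M
```

On the data of Example 1 (Section 4), max_solution returns x̂ = [0.2, 1, 1] and modify zeroes only a_11, in agreement with the text.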


The matrix obtained by Lemma 2 is called the modified matrix and is denoted by Ã. It is obvious that X[A, b] = X[Ã, b].

The first method produces many points as candidates for being minimum solutions. Actually, the final set generated by this method includes the minimum solutions, feasible solutions that are not minimum, some infeasible solutions, and sometimes the maximum solution (see Example 1). At the end, the first method applies pairwise comparison to find the actual minimum solutions, which may be very time-consuming. Lemma 3 shows that after modification of the matrix A, an important group of these candidate points, namely those which are neither minimum nor feasible, is removed.

Lemma 3. Suppose the matrix A has been modified and e ∈ I_J. Then there is no i ∈ I such that [ex̲]_i > x̂_i.

Proof. By contradiction, suppose ∃i ∈ I such that [ex̲]_i > x̂_i. From Remark 1 and Definition 3, [ex̲]_i = max_{k ∈ J_e(i)} {b_k/a_ik} > x̂_i = min_{j ∈ T(i)} {b_j/a_ij}. Let max_{k ∈ J_e(i)} {b_k/a_ik} = b_t/a_it and min_{j ∈ T(i)} {b_j/a_ij} = b_s/a_is. From the inequality above there are a_is, a_it, b_s, b_t such that a_is > b_s, a_it ≥ b_t and b_t/a_it > b_s/a_is, which contradicts the modification of the matrix A and proves the lemma. □

The following lemma shows a structural property of fuzzy equations with max-prod composition that will be used in Section 5.

Lemma 4. Assume the matrix A has been modified and J_e(i) ≠ ∅ for a fixed i ∈ I. For each e ∈ I_J and j, j′ ∈ J_e(i) we have either

(a) a_ij > b_j, a_ij′ > b_j′ and b_j/a_ij = b_j′/a_ij′, or
(b) a_ij = b_j, a_ij′ = b_j′, which automatically implies b_j/a_ij = b_j′/a_ij′ = 1.

Proof. Suppose j ∈ J_e(i) and a_ij = b_j. If ∃j′ ∈ J_e(i) such that a_ij′ > b_j′, then b_j′/a_ij′ < b_j/a_ij. This inequality, together with a_ij′ > b_j′ and a_ij = b_j, contradicts the assumption that A is modified. Hence a_ij′ = b_j′ for all j′ ∈ J_e(i). Otherwise, suppose j ∈ J_e(i) and a_ij > b_j. If ∃j′ ∈ J_e(i) such that a_ij′ = b_j′, then the modification process is contradicted as in the previous case. Hence we must have a_ij′ > b_j′. Furthermore, b_j/a_ij = b_j′/a_ij′, or else in both cases b_j′/a_ij′ < b_j/a_ij and b_j′/a_ij′ > b_j/a_ij the modification process is contradicted. □

4. Second method

Let I_j(x) = {i ∈ I : x_i · a_ij = b_j} for all j ∈ J, and I(x) = I_1(x) × I_2(x) × ... × I_n(x). If x ∈ X[A, b], then from (6), I_j(x) ≠ ∅ for all j ∈ J and hence I(x) ≠ ∅. Let f = (f(1), f(2), ..., f(n)) such that f(j) = i ∈ I_j(x) for all j ∈ J, and let J_f(i) = {j ∈ J : f(j) = i}.

Definition 4. For each f ∈ I(x), let f[x] = (f[x]_1, f[x]_2, ..., f[x]_m) such that

    f[x]_i = max_{j ∈ J_f(i)} {b_j / a_ij}    if J_f(i) ≠ ∅,
             0                                if J_f(i) = ∅,

and let F(x) = {f[x] : f ∈ I(x)}.

Lemma 5. For each f ∈ I(x) and j, j′ ∈ J_f(i), b_j/a_ij = b_j′/a_ij′ = x_i.

Proof. Since j, j′ ∈ J_f(i), x_i · a_ij = b_j and x_i · a_ij′ = b_j′. Hence b_j/a_ij = b_j′/a_ij′ = x_i for any j, j′ ∈ J_f(i). □

Lemma 6. Suppose x ∈ X[A, b] and f ∈ I(x). Then f[x] ≤ x and f[x] ∈ X[A, b].

Proof. From Lemma 5 and Definition 4, f[x]_i = max_{j ∈ J_f(i)} {b_j/a_ij} = x_i for each i ∈ I with J_f(i) ≠ ∅. If J_f(i) = ∅, then f[x]_i = 0 ≤ x_i. Hence f[x] ≤ x. Since x ∈ X[A, b] and f[x] ≤ x, max_{i ∈ I}(f[x]_i · a_ij) ≤ max_{i ∈ I}(x_i · a_ij) = b_j for all j ∈ J. Hence f[x] satisfies condition (a) in (5). Furthermore, since x ∈ X[A, b], I_j(x) ≠ ∅ for all j ∈ J. Hence,


for each j ∈ J there exists i ∈ I such that f(j) = i ∈ I_j(x). Since f[x]_i = max_{k ∈ J_f(i)} {b_k/a_ik} = b_j/a_ij by Lemma 5, we get f[x]_i · a_ij = b_j, which means f[x] satisfies condition (b) in (5). Hence f[x] satisfies (5) and the proof is complete. □

Lemma 7. Suppose 1x, 2x ∈ X[A, b] and 1x ≤ 2x. If f ∈ I(1x), then f ∈ I(2x) and f[1x] = f[2x].

Proof. The proof is similar to that of Theorem 5 in [17]. □

Corollary 1. Suppose x ∈ X[A, b]. Then x ∈ X_0[A, b] iff f[x] = x for each f ∈ I(x).

Proof. See Lemma 8 in [8]. □

Theorem 2. X_0[A, b] ⊆ F(x̂) ⊆ X[A, b].

Proof. The proof is quite similar to that of Theorem 6 in [17]. □
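The second method can be sketched as follows: compute x̂, build the sets I_j(x̂), enumerate f ∈ I(x̂) = ∏_j I_j(x̂), form f[x̂] by Definition 4, and keep the minimal vectors (Theorem 2 guarantees X_0[A, b] ⊆ F(x̂)). A brute-force Python sketch with hypothetical names; the enumeration is exponential in n, so it is meant only for small systems:

```python
from itertools import product

def min_solutions(A, b, tol=1e-9):
    """Return (x̂, list of minimum solutions) via the second method."""
    m, n = len(A), len(A[0])
    xhat = [min([b[j] / A[i][j] for j in range(n) if A[i][j] > b[j]], default=1.0)
            for i in range(m)]                       # Remark 1
    Ij = [[i for i in range(m) if abs(xhat[i] * A[i][j] - b[j]) <= tol]
          for j in range(n)]                         # I_j(x̂)
    if any(not s for s in Ij):
        return xhat, []                              # X[A, b] is empty
    F = set()                                        # F(x̂), Definition 4
    for f in product(*Ij):
        v = [0.0] * m
        for j, i in enumerate(f):
            v[i] = max(v[i], b[j] / A[i][j])
        F.add(tuple(v))
    mins = [u for u in F                             # keep minimal elements only
            if not any(w != u and all(wi <= ui for wi, ui in zip(w, u)) for w in F)]
    return xhat, mins
```

On Example 1 below, this yields x̂ = [0.2, 1, 1] and the single minimum solution [0, 1, 0], as stated there.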

From Theorem 2, we can find the minimum solutions in the set F(x̂) by pairwise comparison. The main advantage of the second method over the first method is that the infeasible solutions are deleted automatically, since F(x̂) ⊆ X[A, b].

Theorem 3. The first method applied to the matrix Ã is the same as the second method applied to the primary matrix A.

Proof. It is sufficient to show that each f ∈ I(x̂) in A is an e ∈ I_J in Ã and vice versa. For this, we show that, for a fixed i ∈ I, i ∈ I_j(x̂) in A iff i ∈ I_j in Ã. Suppose i ∈ I_j(x̂) in A, i.e. x̂_i · a_ij = b_j; then a_ij ≥ b_j. If i ∉ I_j in Ã, then ∃j′ ∈ J in A such that a_ij′ > b_j′ and b_j′/a_ij′ < b_j/a_ij. Hence x̂_i = b_j/a_ij > b_j′/a_ij′. This inequality together with a_ij′ > b_j′ contradicts the feasibility of the vector x̂ by (6). Conversely, if i ∉ I_j(x̂), then either x̂_i · a_ij > b_j or x̂_i · a_ij < b_j. If x̂_i · a_ij > b_j, then a_ij > b_j and the feasibility of x̂ is contradicted. Hence, if i ∉ I_j(x̂), then x̂_i · a_ij < b_j. In this case, if a_ij < b_j, then i ∉ I_j in Ã by the definition of I_j, and the proof is finished. Otherwise, if a_ij ≥ b_j, then by Remark 1, x̂_i = min_{k ∈ T(i)} {b_k/a_ik} < b_j/a_ij. Letting min_{k ∈ T(i)} {b_k/a_ik} = b_s/a_is, we have b_s/a_is < b_j/a_ij, a_is > b_s and a_ij ≥ b_j. Hence a_ij is set to zero by the modification process, so i ∉ I_j in Ã, which completes the proof. □

Theorem 4 (Necessary and sufficient condition). X[A, b] ≠ ∅ iff

(a) in the matrix A, for all j ∈ J, ∃i ∈ I such that a_ij ≥ b_j, and
(b) in the matrix Ã, for all j ∈ J, ∃i ∈ I such that a_ij ≥ b_j.

Proof. If X[A, b] ≠ ∅, then Eq. (4) has a feasible solution for each j ∈ J and hence condition (a) holds by (5). Furthermore, since X[A, b] = X[Ã, b], X[Ã, b] ≠ ∅ and condition (b) similarly holds by (5). Conversely, suppose conditions (a) and (b) are satisfied. By contradiction, suppose X[A, b] = ∅. Then X[Ã, b] = ∅, and hence for each x ∈ X, if x_i · a_ij = b_j then there must be j′ ∈ J such that x_i · a_ij′ > b_j′ in Ã. Therefore x_i = b_j/a_ij > b_j′/a_ij′, which is a contradiction by Lemma 4, and hence X[A, b] = X[Ã, b] ≠ ∅. □

Example 1. Consider problem (1) with A and b as follows:

    A = [ 1    0.2  1
          0.9  0.5  0.2
          0.4  0.5  0.1 ],    b = [0.9  0.5  0.2].

We have I_1 = {1, 2}, I_2 = {2, 3}, I_3 = {1, 2} and I_J = {1, 2} × {2, 3} × {1, 2}. From Remark 1, x̂ = [0.2, 1, 1]. Considering I_J, the first method selects eight cases to generate all vectors ex̲. If e = (1, 2, 2), then ex̲ = [0.9, 1, 0] ≰ x̂ and hence it is infeasible. Removing such vectors ex̲ by comparing them with the maximum solution, only four cases remain; the vectors e are (2, 3, 1), (2, 2, 1), (2, 2, 2), (2, 3, 2). Notice that if e = (2, 3, 1), then ex̲ = x̂. Finally, pairwise comparison of these four cases leads to one minimum solution ex̲ = x̲ = [0, 1, 0]


generated by e = (2, 2, 2). The selections e = (2, 2, 1) and e = (2, 3, 2) lead to the feasible solutions ex̲ = [0.2, 1, 0] and ex̲ = [0, 1, 1], respectively, which are not minimum. However, these two points and the maximum solution are removed by pairwise comparison with x̲. Notice that each vector e with e(1) = 1 leads to an infeasible solution. After modification, a_11 is the only component that vanishes. Hence, infeasible vectors ex̲ are never generated when we apply the first method after modification. In the second method, I_1(x̂) = {2}, I_2(x̂) = {2, 3}, I_3(x̂) = {1, 2} and I(x̂) = {2} × {2, 3} × {1, 2}. In this case there are four cases to generate all vectors f ∈ I(x̂). Considering I(x̂) and I_J, it is clear that these four cases are the same as the cases generated by the first method after modification, or equivalently after removing the infeasible vectors ex̲.

5. An algorithm for optimizing the linear function

Consider problem (3). We can split it into two sub-problems as follows:

    SP1: min Σ_{i=1}^{m} c′_i x_i          SP2: min Σ_{i=1}^{m} c″_i x_i
         s.t. x ∘ A = b,                        s.t. x ∘ A = b,
              0 ≤ x_i ≤ 1, i = 1, 2, ..., m,         0 ≤ x_i ≤ 1, i = 1, 2, ..., m,

where

    c′_i = c_i if c_i ≥ 0, and 0 if c_i < 0;    c″_i = 0 if c_i ≥ 0, and c_i if c_i < 0.

From Lemmas 4 and 5 and Theorem 2 in [7], x̂ is the optimal solution of SP2 and one of the minimum solutions is the optimal solution of SP1. The optimal solution of problem (3) is then obtained as

    x* = (x*_1, x*_2, ..., x*_m) such that x*_i = x̂_i if c_i ≤ 0, and x*_i = x̲*_i if c_i > 0,    (9)

where x̂_i and x̲*_i are optimal in SP2 and SP1, respectively. The optimal solution of SP2 is easily obtained from Remark 1. Once the optimal solution of SP1 is in hand, the optimum of problem (3) is found from (9). To solve SP1, it is possible to use 0–1 integer programming and to apply a branch-and-bound technique. Although such an algorithm is useful for problems similar to (3) with max-min composition instead of max-prod composition [7], and it would be possible to apply it to problem (3), we present a more effective algorithm [17] by exploiting the structural properties of max-prod composition stated in Sections 3 and 4.

Definition 5.
(i) For each f ∈ I(x̂), let the vector x(f) be such that x(f)_{f(j)} = x̂_{f(j)} for all j ∈ J, and x(f)_i = 0 when J_f(i) = ∅.
(ii) Let S = {x(f) : f ∈ I(x̂)}.

Since x ≤ x̂ for each x ∈ X[A, b], we have x_{f(j)} = x̂_{f(j)} for all j ∈ J and all f ∈ I(x) by Lemmas 5 and 6. In particular, if x ∈ X_0[A, b], then x_{f(j)} = x̂_{f(j)} for all j ∈ J and all f ∈ I(x). By this fact and Definition 5, we can easily conclude the following lemma.

Lemma 8. X_0[A, b] ⊆ S.

Therefore, we can certainly find the optimal solution of SP1 in S. We now give the following lemma and remark in order to merge the optimization of the sub-problems SP1 and SP2 into one algorithm.

Lemma 9. Consider a fixed row i ∈ I of A. If there is no j ∈ J such that x̂_i · a_ij = b_j, then a_ij < b_j for all j ∈ J in A.

Proof. From Theorem 3, i ∈ I_j(x̂) in A iff i ∈ I_j in Ã for j ∈ J. Hence, if x̂_i · a_ij ≠ b_j for all j ∈ J, i.e. i ∉ I_j(x̂) for all j ∈ J, then i ∉ I_j for all j ∈ J in Ã. This means that there is no j ∈ J with a_ij ≥ b_j in Ã. Now, suppose a_ij ≥ b_j in A for some j ∈ J. If a_ij is not set to zero by the modification operation, then i ∈ I_j in Ã. Otherwise, if a_ij is set to zero, there has to be j′ ∈ J such that a_ij′ ≥ b_j′ and b_j′/a_ij′ < b_j/a_ij, that is, i ∈ I_j′ in Ã. Both cases contradict i ∉ I_j for all j ∈ J in Ã. □


Remark 2. It should be noted that a variable x_i with c_i < 0 in SP2 is treated in SP1 as a variable with c_i = 0. If the assumption of Lemma 9 holds for some i ∈ I, then a_ij < b_j for all j ∈ J, and hence x̂_i = 1 and x(f)_i = 0 for all f ∈ I(x̂), by their definitions.

Algorithm 1.

Phase I
(1) Find the maximum solution x̂ from Remark 1.
(2) For each row i ∈ I, if there is j ∈ J with x̂_i · a_ij = b_j, insert x̂_i = b_j/a_ij in the (i, j)-th cell. If c_i < 0, let x*_i = x̂_i.
(3) For each row i ∈ I, if there is no j ∈ J with x̂_i · a_ij = b_j, let x*_i = 1 if c_i < 0 and x*_i = 0 if c_i > 0, and let i ∈ I_0.

Phase II
(1) Let I = {1, 2, ..., m} − I_0 and J = {1, 2, ..., n}. If c_i < 0, let c_i = 0.
(2) Let c_r x_r = min{c_i x_i : i ∈ I}.
(3) Select all the cells (r, j) in which x̂_r has been inserted.
(4) Let I = I − {r} and J = J − {j : (r, j) selected in step 3}.
(5) If J ≠ ∅, go to step 2; otherwise go to Phase III.

Phase III
(1) Remove all rows i ∈ I such that c_i = 0.
(2) Insert the zero value in all cells that have not been selected in Phase II.
(3) Perform the row maximization operation on the remaining rows.
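As a cross-check of Algorithm 1, the whole optimization can also be sketched by brute force: x̂ solves SP2, the cheapest vector in S = {x(f) : f ∈ I(x̂)} solves SP1 (Lemma 8), and (9) combines the two. This is not the tabular Phase I–III procedure, just a direct exponential-time rendering of the decomposition; the names are our own, and X[A, b] ≠ ∅ is assumed:

```python
from itertools import product

def optimize(c, A, b, tol=1e-9):
    """Solve problem (3) by enumerating S (Definition 5) and applying (9).
    Assumes the system x ∘ A = b is consistent."""
    m, n = len(A), len(A[0])
    xhat = [min([b[j] / A[i][j] for j in range(n) if A[i][j] > b[j]], default=1.0)
            for i in range(m)]                       # optimal for SP2 (Remark 1)
    Ij = [[i for i in range(m) if abs(xhat[i] * A[i][j] - b[j]) <= tol]
          for j in range(n)]                         # I_j(x̂)
    best, best_cost = None, float("inf")
    for f in product(*Ij):                           # candidates x(f) ∈ S
        support = set(f)
        xf = [xhat[i] if i in support else 0.0 for i in range(m)]
        cost = sum(max(ci, 0.0) * xi for ci, xi in zip(c, xf))   # SP1 cost c'
        if cost < best_cost:
            best, best_cost = xf, cost
    # equation (9): x̂ on the c_i <= 0 part, the SP1 optimum on the c_i > 0 part
    return [xhat[i] if c[i] <= 0 else best[i] for i in range(m)]
```

For instance, with the data of Example 1 and a hypothetical cost vector c = (1, 2, −3), the sketch returns x* = [0, 1, 1]: the negative-cost variable x_3 is pushed up to x̂_3 = 1, while the positive-cost part falls back to the minimum solution [0, 1, 0].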

Example 2. Consider the problem

    min z = 2x_1 + 3x_2 − 4x_3 + 0.2x_4 + 2x_5 + 7x_6 + 5x_7 − 9x_8 − x_9

    s.t. x ∘ A = b,  0 ≤ x_i ≤ 1, i = 1, 2, ..., 9,

with

    A = [ 0.95   0.75   0.128  0.571  0.104
          0.42   0.9    0.015  0.7    0.3
          0.93   0.321  0.88   0.875  0.375
          0.95   0.425  0.625  0.222  0.25
          0.81   0.52   0.25   0.34   0.12
          0.11   0.512  0.6    0.56   0.081
          0.871  0.768  0.64   0.594  0.211
          0.62   0.7    0.25   0.1    0.22
          0.722  0.702  0.614  0.8    0.33 ],

    b = [0.912, 0.72, 0.6, 0.56, 0.24].

From Remark 1, x̂ = [0.96, 0.8, 0.64, 0.96, 1, 1, 0.9375, 1, 0.7]. Phase I is demonstrated by a table in which the value x̂_i = b_j/a_ij is inserted in each cell (i, j) with x̂_i · a_ij = b_j (cells selected later in Phase II are marked with an asterisk):

    row 1: (1,1) 0.96,   (1,2) 0.96*
    row 2: (2,2) 0.8,    (2,4) 0.8,    (2,5) 0.8
    row 3: (3,4) 0.64*,  (3,5) 0.64*
    row 4: (4,1) 0.96*,  (4,3) 0.96*,  (4,5) 0.96
    row 6: (6,3) 1,      (6,4) 1
    row 7: (7,2) 0.9375, (7,3) 0.9375
    row 9: (9,4) 0.7

Since c_3, c_9 < 0, x*_3 = x̂_3 = 0.64 and x*_9 = x̂_9 = 0.7. Also, since a_ij < b_j for i = 5, 8 and j = 1, 2, 3, 4, 5, rows 5 and 8 have been removed from the table, and x*_5 = 0, x*_8 = 1 because c_5 > 0 and c_8 < 0, respectively. The cells selected by Phase II in stage 3 are marked with an asterisk. Row maximization in Phase III produces x*_1 = 0.96, x*_4 = 0.96 and x*_2 = x*_6 = x*_7 = 0. Hence, the optimal solution is x* = [0.96, 0, 0.64, 0.96, 0, 0, 0, 1, 0.7].

6. Conclusion

In this paper, we first defined the max-prod fuzzy relation composition and its feasible region; applying the first method then yields a number of candidate points, not all of which are optimal. In order to discard the superfluous points, we simplify the matrix A; the simplification process makes access to the optimal points easier. Secondly, we introduced the second approach, which represents an effective method for obtaining the optimal points. Finally, it is proved in Theorem 3 that the two methods are equivalent, a necessary and sufficient condition for feasibility is obtained in Theorem 4, an algorithm for solving problem (3), where the coefficients of the objective function are free in sign, is derived, and an example illustrates the steps of the algorithm in practice.

References

[1] H.J. Zimmermann, Fuzzy Set Theory and Its Applications, Kluwer Academic Publishers, Boston/Dordrecht/London, 1999.
[2] G.I. Adamopoulos, C.P. Pappis, Some results on the resolution of fuzzy relation equations, Fuzzy Sets Syst. 60 (1993) 83–88.
[3] E. Czogala, J. Drewniak, W. Pedrycz, Fuzzy relation equations on a finite set, Fuzzy Sets Syst. 7 (1982) 89–101.
[4] K.P. Adlassnig, Fuzzy set theory in medical diagnosis, IEEE Trans. Syst. Man Cybernet. 16 (1986) 260–265.
[5] A. Di Nola, Relational equations in totally ordered lattices and their complete resolution, J. Math. Anal. Appl. 107 (1985) 148–155.
[6] S.C. Fang, S. Puthenpura, Linear Optimization and Extensions: Theory and Algorithms, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[7] S.C. Fang, G. Li, Solving fuzzy relation equations with a linear objective function, Fuzzy Sets Syst. 103 (1999) 107–113.
[8] M. Higashi, G.J. Klir, Resolution of finite fuzzy relation equations, Fuzzy Sets Syst. 13 (1984) 65–82.
[9] G. Li, S.C. Fang, On the resolution of finite fuzzy relations, OR Report No. 322, North Carolina State University, Raleigh, NC, May 1996.
[10] M. Prevot, Algorithm for the solution of fuzzy relations, Fuzzy Sets Syst. 5 (1981) 319–322.
[11] H.F. Wang, An algorithm for solving iterated composite relation equations, in: NAFIPS, 1988, pp. 242–249.
[12] P.Z. Wang, S. Sessa, A. Di Nola, How many lower solutions does a fuzzy relation equation have? Bull. Pour. Sous. Ens. Flous. Appl. (BUSEFAL) 18 (1984) 67–74.
[13] W.L. Winston, Introduction to Mathematical Programming: Applications and Algorithms, Duxbury Press, Belmont, CA, 1995.
[14] E. Sanchez, Resolution of composite fuzzy relation equations, Inform. Control 30 (1976) 38–48.
[15] S.Z. Guo, P.Z. Wang, A. Di Nola, S. Sessa, Further contributions to the study of finite fuzzy relation equations, Fuzzy Sets Syst. 26 (1988) 93–104.
[16] J. Lu, S.C. Fang, Solving nonlinear optimization problems with fuzzy relation constraints, Fuzzy Sets Syst. 119 (2001) 1–20.
[17] E. Khorram, A. Ghodousian, Linear objective function optimization with fuzzy relation constraints regarding max-av composition, Appl. Math. Comput., in press, doi:10.1016/j.amc.2005.04.021.