Applied Mathematics and Computation 175 (2006) 645–674 www.elsevier.com/locate/amc
Iterative solution of fuzzy linear systems

Mehdi Dehghan*, Behnam Hashemi

Department of Applied Mathematics, Faculty of Mathematics and Computer Sciences, Amirkabir University of Technology, No. 424, Hafez Ave., Tehran, Iran
Abstract

Linear systems have important applications in many branches of science and engineering. In many applications at least some of the parameters of the system are represented by fuzzy rather than crisp numbers, so it is important to develop numerical procedures that appropriately treat general fuzzy linear systems (denoted by FLS) and solve them. In this paper a general fuzzy linear system is first investigated using the embedding approach, and then several well-known numerical algorithms for solving systems of linear equations, such as Richardson, extrapolated Richardson, Jacobi, JOR, Gauß–Seidel, EGS, SOR, AOR, ESOR, SSOR, USSOR, EMA and MSOR, are extended to FLS. The iterative methods are accompanied by convergence theorems, and the presented algorithms are tested on several numerical examples. (Hackbusch noticed that the spelling Gauss is less correct than Gauß [W. Hackbusch, Iterative Solution of Large Sparse Systems of Equations, Springer-Verlag Inc., New York, 1994 [9]].)
© 2005 Elsevier Inc. All rights reserved.

Keywords: Fuzzy linear systems (FLS); Nonnegative matrix; Iterative methods; Jacobi; Gauß–Seidel (GS); Successive overrelaxation (SOR) method; Accelerated overrelaxation (AOR); Symmetric SOR (SSOR); Unsymmetric SOR (USSOR); Rate of convergence; Extrapolated Richardson (ER); Extrapolated modified Aitken (EMA)
* Corresponding author.
E-mail addresses: [email protected] (M. Dehghan), [email protected] (B. Hashemi).
0096-3003/$ - see front matter © 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2005.07.033
1. Introduction

One of the major applications of fuzzy number arithmetic is the treatment of linear systems whose parameters are all, or partially, represented by fuzzy numbers [8]. A general model for solving a FLS whose coefficient matrix is crisp and whose right-hand side column consists of arbitrary fuzzy numbers was first proposed by Friedman et al. [8]. They used the embedding method and replaced the original fuzzy linear system by a crisp linear system with a nonnegative coefficient matrix S, which may be singular even if A is nonsingular. Another class of methods for solving fuzzy linear systems is the class of iterative methods. To the best of our knowledge, Allahviranloo [1] was the first to introduce the Jacobi method for solving FLS; he also proposed the Gauß–Seidel and SOR methods in [1,2]. The main aim of this paper is to modify these methods and to propose other iterative schemes for solving FLS. Moreover, since it requires only a small additional computational effort, it seems advisable to accompany any iterative scheme with its extrapolated form. In this paper several different numerical techniques and their extrapolated forms are developed and compared for solving this type of FLS.

This paper is organized in the following way: Some basic definitions and results on fuzzy numbers and FLS are given in Section 2. In Section 3, basic concepts of iterative methods are introduced. Iterative methods for solving FLS are developed in Section 4, and the proposed algorithms are discussed and compared on several examples in Section 5. Section 6 ends this paper with a conclusion.
2. Preliminaries

2.1. Fuzzy numbers

Fuzzy numbers are one way to describe the vagueness and lack of precision of data. The theory of fuzzy numbers is based on the theory of fuzzy sets, which was introduced in 1965 by Zadeh [19]. The concept of a fuzzy number was first used by Nahmias in the United States and by Dubois and Prade in France in the late 1970s. Our definition of a fuzzy number is the following.

Definition 2.1. We represent an arbitrary fuzzy number by a pair of functions $(\underline{u}(r), \overline{u}(r))$, $0 \le r \le 1$, which satisfy the following requirements:

• $\underline{u}(r)$ is a bounded left continuous nondecreasing function over [0, 1].
• $\overline{u}(r)$ is a bounded left continuous nonincreasing function over [0, 1].
• $\underline{u}(r) \le \overline{u}(r)$, $0 \le r \le 1$.
A crisp number $a$ is simply represented by $\underline{u}(r) = \overline{u}(r) = a$, $0 \le r \le 1$. The set of all fuzzy numbers $\{(\underline{u}(r), \overline{u}(r))\}$ becomes a convex cone, denoted by $E^1$, which is then embedded isomorphically and isometrically into a Banach space.

A popular fuzzy number is the trapezoidal fuzzy number $u = (x_0, y_0, \sigma, \beta)$ with two defuzzifiers $x_0$, $y_0$, left fuzziness $\sigma$ and right fuzziness $\beta$. Its parametric form is
$$\underline{u}(r) = x_0 - \sigma + \sigma r, \qquad \overline{u}(r) = y_0 + \beta - \beta r.$$
Let $v = (x_0, y_0, \sigma, \beta)$ be a trapezoidal fuzzy number with $x_0 = y_0$; then $v$ is called a triangular fuzzy number and is written $v = (x_0, \sigma, \beta)$. For example, the fuzzy number $(1 + r, 6 - 2r)$ is shown in Fig. 1, where $x_0 = 2$, $y_0 = 4$, $\sigma = 1$ and $\beta = 2$.

Now suppose that $y(r) = (\underline{y}(r), \overline{y}(r))$ is an exact fuzzy solution and $y'(r) = (\underline{y}'(r), \overline{y}'(r))$ is the estimated fuzzy solution (see Fig. 2).
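As a small illustration of this parametric representation, the following Python sketch (our own illustration, not part of the original paper; the function name is hypothetical) stores a trapezoidal fuzzy number as its pair of affine branches:

```python
import numpy as np

# A fuzzy number in parametric form is a pair (u_lower(r), u_upper(r)), r in [0, 1].
# For the trapezoidal number u = (x0, y0, sigma, beta) both branches are affine in r.
def trapezoidal(x0, y0, sigma, beta):
    lower = lambda r: x0 - sigma + sigma * r   # nondecreasing in r
    upper = lambda r: y0 + beta - beta * r     # nonincreasing in r
    return lower, upper

# The fuzzy number of Fig. 1: x0 = 2, y0 = 4, sigma = 1, beta = 2 gives (1 + r, 6 - 2r).
low, up = trapezoidal(2, 4, 1, 2)
r = np.linspace(0, 1, 5)
print(low(r))   # [1.   1.25 1.5  1.75 2.  ]
print(up(r))    # [6.   5.5  5.   4.5  4.  ]
```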
Fig. 1. A fuzzy number.
Fig. 2. The distance of two fuzzy numbers.
Our aim is to maximize the closeness of the exact solution and our estimated solution, so we define the following criterion for this objective:
$$\mathrm{Error} = \int_0^1 \Big( |\underline{y}(r) - \underline{y}'(r)| + |\overline{y}(r) - \overline{y}'(r)| \Big)\, \mathrm{d}r. \tag{2.1}$$
Clearly, the closer this criterion is to zero, the closer the estimated solution is to the exact solution.
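A minimal sketch of how (2.1) can be evaluated numerically (our own choice of a simple trapezoidal quadrature; the function name is hypothetical):

```python
import numpy as np

# Criterion (2.1): integrate the distance between the exact branches (y_low, y_up)
# and the estimated branches (z_low, z_up) over r in [0, 1].
def error_criterion(y_low, y_up, z_low, z_up, m=1001):
    r = np.linspace(0.0, 1.0, m)
    d = np.abs(y_low(r) - z_low(r)) + np.abs(y_up(r) - z_up(r))
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(r)))  # trapezoidal rule
```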
2.2. Fuzzy linear systems

Definition 2.2. Consider the $n \times n$ linear system of equations
$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = y_1,\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = y_2,\\ \quad\vdots\\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = y_n. \end{cases} \tag{2.2}$$
The matrix form of the above equations is
$$AX = Y, \tag{2.3}$$
where the coefficient matrix $A = (a_{ij})$, $1 \le i, j \le n$, is a crisp $n \times n$ matrix and $y_i \in E^1$, $1 \le i \le n$. This system is called a fuzzy linear system (FLS).

2.3. Solution of fuzzy linear systems

For arbitrary fuzzy numbers $x = (\underline{x}(r), \overline{x}(r))$, $y = (\underline{y}(r), \overline{y}(r))$ and real number $k$, we may define the addition and the scalar multiplication of fuzzy numbers by using Zadeh's extension principle [21], which can be used to generalize [4] the crisp mathematical concepts to fuzzy sets, in the following way:

• $x = y$ if and only if $\underline{x}(r) = \underline{y}(r)$ and $\overline{x}(r) = \overline{y}(r)$,
• $x + y = (\underline{x}(r) + \underline{y}(r),\ \overline{x}(r) + \overline{y}(r))$,
• $kx = (k\underline{x}, k\overline{x})$ for $k \ge 0$, and $kx = (k\overline{x}, k\underline{x})$ for $k < 0$.

Definition 2.3. A fuzzy number vector $(x_1, x_2, \ldots, x_n)^t$ given by $x_i = (\underline{x}_i(r), \overline{x}_i(r))$, $1 \le i \le n$, $0 \le r \le 1$, is called a solution of the fuzzy linear system (2.2) if
$$\underline{\sum_{j=1}^n a_{ij}x_j} = \sum_{j=1}^n \underline{a_{ij}x_j} = \underline{y}_i, \qquad \overline{\sum_{j=1}^n a_{ij}x_j} = \sum_{j=1}^n \overline{a_{ij}x_j} = \overline{y}_i.$$
If for a particular $k$, $a_{kj} > 0$ for $1 \le j \le n$, we simply get
$$\sum_{j=1}^n a_{kj}\underline{x}_j = \underline{y}_k, \qquad \sum_{j=1}^n a_{kj}\overline{x}_j = \overline{y}_k.$$
Now consider the $i$th equation of the system (2.2):
$$a_{i1}(\underline{x}_1, \overline{x}_1) + a_{i2}(\underline{x}_2, \overline{x}_2) + \cdots + a_{ii}(\underline{x}_i, \overline{x}_i) + \cdots + a_{in}(\underline{x}_n, \overline{x}_n) = (\underline{y}_i, \overline{y}_i);$$
we have
$$\begin{cases} \underline{a_{i1}x_1} + \underline{a_{i2}x_2} + \cdots + \underline{a_{ii}x_i} + \cdots + \underline{a_{in}x_n} = \underline{y}_i(r),\\ \overline{a_{i1}x_1} + \overline{a_{i2}x_2} + \cdots + \overline{a_{ii}x_i} + \cdots + \overline{a_{in}x_n} = \overline{y}_i(r), \end{cases} \qquad 1 \le i \le n,\ 0 \le r \le 1.$$
As one sees, each $i$ yields two crisp equations, so the fuzzy linear system (2.3) is extended to a $2n \times 2n$ crisp linear system whose right-hand side column is the vector $(\underline{y}_1, \ldots, \underline{y}_n, -\overline{y}_1, \ldots, -\overline{y}_n)^t$. We get the $2n \times 2n$ linear system
$$\begin{cases} s_{11}\underline{x}_1 + \cdots + s_{1n}\underline{x}_n + s_{1,n+1}(-\overline{x}_1) + \cdots + s_{1,2n}(-\overline{x}_n) = \underline{y}_1,\\ \quad\vdots\\ s_{n1}\underline{x}_1 + \cdots + s_{nn}\underline{x}_n + s_{n,n+1}(-\overline{x}_1) + \cdots + s_{n,2n}(-\overline{x}_n) = \underline{y}_n,\\ s_{n+1,1}\underline{x}_1 + \cdots + s_{n+1,n}\underline{x}_n + s_{n+1,n+1}(-\overline{x}_1) + \cdots + s_{n+1,2n}(-\overline{x}_n) = -\overline{y}_1,\\ \quad\vdots\\ s_{2n,1}\underline{x}_1 + \cdots + s_{2n,n}\underline{x}_n + s_{2n,n+1}(-\overline{x}_1) + \cdots + s_{2n,2n}(-\overline{x}_n) = -\overline{y}_n, \end{cases}$$
in which
$$a_{ij} \ge 0 \ \Rightarrow\ s_{ij} = s_{i+n,j+n} = a_{ij}, \quad s_{i+n,j} = s_{i,j+n} = 0,$$
$$a_{ij} < 0 \ \Rightarrow\ s_{i,j+n} = s_{i+n,j} = -a_{ij}, \quad s_{ij} = s_{i+n,j+n} = 0.$$
Using matrix notation we get
$$SX = Y, \tag{2.4}$$
where
$$X = [\underline{x}_1, \ldots, \underline{x}_n, -\overline{x}_1, \ldots, -\overline{x}_n]^t, \qquad Y = [\underline{y}_1, \ldots, \underline{y}_n, -\overline{y}_1, \ldots, -\overline{y}_n]^t.$$
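The embedding above is straightforward to implement; the following sketch (our illustration, with a hypothetical function name) builds $S$ from a crisp matrix $A$:

```python
import numpy as np

# Split A into B (its positive entries) and C (the moduli of its negative
# entries), so that A = B - C, and assemble the embedding matrix S of (2.4).
def build_S(A):
    A = np.asarray(A, dtype=float)
    B = np.where(A > 0, A, 0.0)
    C = np.where(A < 0, -A, 0.0)
    return np.block([[B, C], [C, B]])
```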
The structure of $S$ implies that $S = (s_{ij}) \ge 0$, $1 \le i, j \le 2n$, and that
$$S = \begin{pmatrix} B & C \\ C & B \end{pmatrix}, \qquad B \ge 0,\ C \ge 0,$$
where $B$ contains the positive entries of $A$, $C$ contains [12,13] the absolute values of the negative entries of $A$, and $A = B - C$. Also suppose that $B = D_1 + L_1 + U_1$. Thus we have $S = D + L + U$, where
$$D = \begin{pmatrix} D_1 & 0 \\ 0 & D_1 \end{pmatrix}, \qquad L = \begin{pmatrix} L_1 & 0 \\ C & L_1 \end{pmatrix}, \qquad U = \begin{pmatrix} U_1 & C \\ 0 & U_1 \end{pmatrix}. \tag{2.5}$$
The linear system (2.4) is now a $2n \times 2n$ crisp linear system and can be uniquely solved for $X$ if and only if the matrix $S$ is nonsingular. The fact is, however, that $S$ may be singular even if the original matrix $A$ is not. The following results emphasize the difficulties in obtaining the fuzzy solution of a linear system.

Theorem 2.4. The matrix $S$ is nonsingular if and only if the matrices $A = B - C$ and $B + C$ are both nonsingular.

Proof. See Theorem 1 of [8]. □

Corollary 1. If a crisp linear system does not have a solution, the associated fuzzy linear system does not have one either.

In order to solve the fuzzy linear system $SX = Y$ directly, we must calculate $S^{-1}$ (whenever it exists). The next result provides information on the structure of $S^{-1}$.

Theorem 2.5. If $S^{-1}$ exists it must have the same structure as $S$, i.e.,
$$S^{-1} = \begin{pmatrix} E & F \\ F & E \end{pmatrix}.$$

Proof. See Theorem 2 of [8]. □

Assuming that $S$ is nonsingular, we obtain
$$X = S^{-1}Y. \tag{2.6}$$
The solution vector is indeed unique, but another question we must answer is: ''Do the components of the $2n$-dimensional solution vector $X$ represent an $n$-dimensional fuzzy solution vector of the fuzzy system (2.2)?'' The following result provides necessary and sufficient conditions for the unique solution vector to be a fuzzy vector for an arbitrary input fuzzy vector $Y$.
Theorem 2.6. The unique solution $X$ of Eq. (2.6) is a fuzzy vector for arbitrary $Y$ if and only if $S^{-1}$ is nonnegative.

Proof. See Theorem 3 of [8]. □

One can show that a nonnegative square matrix has a nonnegative inverse if and only if it is the product of a permutation matrix and a diagonal matrix; equivalently, a nonnegative square matrix has a nonnegative inverse if and only if its entries are all zero except for a single positive entry in each row and column [6].

Definition 2.7. Let $X = (\underline{x}_i(r), \overline{x}_i(r))$, $1 \le i \le n$, denote the unique solution of $SX = Y$. The fuzzy number vector $U = (\underline{u}_i(r), \overline{u}_i(r))$, $1 \le i \le n$, defined by
$$\underline{u}_i(r) = \min\{\underline{x}_i(r), \overline{x}_i(r), \underline{x}_i(1)\}, \qquad \overline{u}_i(r) = \max\{\underline{x}_i(r), \overline{x}_i(r), \overline{x}_i(1)\},$$
is called the fuzzy solution of $SX = Y$. If $(\underline{x}_i(r), \overline{x}_i(r))$, $1 \le i \le n$, are all triangular fuzzy numbers, then $\underline{u}_i(r) = \underline{x}_i(r)$ and $\overline{u}_i(r) = \overline{x}_i(r)$, $1 \le i \le n$, and $U$ is called a strong fuzzy solution. Otherwise, $U$ is a weak fuzzy solution [8].
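On a grid of $r$-values, Definition 2.7 can be applied directly; the sketch below (our own hypothetical helper) also reports whether the computed solution is strong or weak:

```python
import numpy as np

# xl, xu: arrays of shape (n, m) sampling x_lower_i(r) and x_upper_i(r) on an
# r-grid whose last point is r = 1.  Returns the fuzzy solution of Definition 2.7.
def fuzzy_solution(xl, xu):
    m = xl.shape[1]
    ul = np.minimum(np.minimum(xl, xu), np.repeat(xl[:, -1:], m, axis=1))
    uu = np.maximum(np.maximum(xl, xu), np.repeat(xu[:, -1:], m, axis=1))
    kind = "strong" if np.allclose(ul, xl) and np.allclose(uu, xu) else "weak"
    return ul, uu, kind
```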
Theorem 2.8. Suppose that $a_{ii} > 0$, $1 \le i \le n$. Then the matrix $S$ in (2.4) is strictly diagonally dominant if and only if the matrix $A$ in (2.3) is strictly diagonally dominant.

Proof. Let $A$ be row strictly diagonally dominant, i.e.,
$$|a_{ii}| > \sum_{j=1,\,j\ne i}^{n}|a_{ij}|, \qquad 1 \le i \le n.$$
Obviously we can write
$$\sum_{j=1,\,j\ne i}^{2n}|s_{ij}| = \sum_{j=1,\,j\ne i}^{n}|s_{ij}| + \sum_{j=1}^{n}|s_{i,j+n}|, \qquad 1 \le i \le 2n.$$
There are two cases for $a_{ij}$ with $i \ne j$.

Case (i): $a_{ij} \ge 0$, for which $s_{ij} = s_{i+n,j+n} = a_{ij}$ and $s_{i,j+n} = s_{i+n,j} = 0$.

Case (ii): $a_{ij} < 0$, for which $s_{ij} = s_{i+n,j+n} = 0$ and $s_{i,j+n} = s_{i+n,j} = -a_{ij}$.

In both cases each off-diagonal entry $a_{ij}$ of $A$ contributes exactly $|a_{ij}|$ to the off-diagonal part of row $i$ of $S$, and likewise to row $i + n$, so
$$\sum_{j=1,\,j\ne i}^{2n}|s_{ij}| = \sum_{j=1,\,j\ne i}^{2n}|s_{i+n,j}| = \sum_{j=1,\,j\ne i}^{n}|a_{ij}| < |a_{ii}|, \qquad 1 \le i \le n,$$
because $A$ is row strictly diagonally dominant. Furthermore $a_{ii} \ge 0$, so $s_{ii} = s_{i+n,i+n} = a_{ii}$, and we can write
$$\sum_{j=1,\,j\ne i}^{2n}|s_{ij}| < |s_{ii}|, \qquad 1 \le i \le 2n.$$
Thus $S$ is row strictly diagonally dominant.

Conversely, suppose that $a_{ii} \ge 0$ and $S$ is row strictly diagonally dominant; we must show that $A$ is row strictly diagonally dominant too. Considering the same two cases for $a_{ij}$, $i \ne j$, as above, we have
$$\sum_{j=1,\,j\ne i}^{2n}|s_{ij}| = \sum_{j=1,\,j\ne i}^{n}|a_{ij}|, \qquad \sum_{j=1,\,j\ne i}^{2n}|s_{ij}| < |s_{ii}|, \qquad 1 \le i \le n,$$
and $a_{ii} \ge 0$, so $s_{ii} = a_{ii}$ and then
$$\sum_{j=1,\,j\ne i}^{n}|a_{ij}| < |a_{ii}|, \qquad 1 \le i \le n.$$
Thus $A$ is row strictly diagonally dominant and the proof is complete. □
Corollary 2. The condition $a_{ii} \ge 0$, $\forall i$, in the above theorem is essential. As a counterexample we can consider the matrix
$$A = \begin{pmatrix} -3/2 & 1 \\ 2 & 3 \end{pmatrix},$$
which is strictly diagonally dominant, but for which the corresponding matrix $S$ is not strictly diagonally dominant.

Notation. Let $m(S)$ and $M(S)$ denote the algebraically smallest and largest eigenvalues of the matrix $S$, respectively, i.e., $m(S) = \min_{1\le i\le 2n}\lambda_i$ and $M(S) = \max_{1\le i\le 2n}\lambda_i$.

Definition 2.9. The matrix $A$ has property A, i.e., is 2-cyclic, if there exists a permutation matrix $P$ such that
$$PAP^{-1} = \begin{pmatrix} D_1 & H \\ K & D_2 \end{pmatrix},$$
where the diagonal blocks $D_1$ and $D_2$ are diagonal but not necessarily of the same order.
3. Basic concepts in the iterative techniques

We now consider iterative methods in a more general mathematical setting. A general type of iterative process for solving the system $SX = Y$, where
$$S = \begin{pmatrix} B & C \\ C & B \end{pmatrix}, \quad B \ge 0,\ C \ge 0, \qquad X = \begin{pmatrix} \underline{X} \\ -\overline{X} \end{pmatrix}, \qquad Y = \begin{pmatrix} \underline{Y} \\ -\overline{Y} \end{pmatrix},$$
can be described as follows. By applying a certain matrix $Q$ (the splitting matrix) we obtain the equivalent form
$$QX = (Q - S)X + Y. \tag{3.1}$$
Eq. (3.1) suggests an iterative process, defined by writing
$$QX^{(m+1)} = (Q - S)X^{(m)} + Y, \qquad m \ge 0. \tag{3.2}$$
The initial vector $X^{(0)}$ can be arbitrary; if a good guess of the solution is available, it should be used for $X^{(0)}$. We say that the iterative method (3.2) converges if it converges for every initial vector $X^{(0)}$. A sequence of vectors $X^{(1)}, X^{(2)}, \ldots$ can be computed from Eq. (3.2), and our objective is to choose $Q$ so that

(i) the sequence $\{X^{(m)}\}$ is easily computed,
(ii) the sequence $\{X^{(m)}\}$ converges rapidly to the solution.
In this section we shall see that both of these conditions are met if linear systems with coefficient matrix $Q$ can be solved easily and quickly. If we simply take the limit in Eq. (3.2), the result is [13,14] $QX = (Q - S)X + Y$, which means that $SX = Y$. To assure that system (2.4) has a solution for any vector $Y$, we assume that $A = B - C$ and $B + C$ are both nonsingular (so $S$ is nonsingular), and that $Q$ is nonsingular as well, so that Eq. (3.2) can be solved for the unknown vector $X^{(m+1)}$:
$$X^{(m+1)} = Q^{-1}(Q - S)X^{(m)} + Q^{-1}Y \ \Rightarrow\ X^{(m+1)} = (I - Q^{-1}S)X^{(m)} + Q^{-1}Y.$$
Observe that the exact solution $X$ satisfies the equation
$$X = (I - Q^{-1}S)X + Q^{-1}Y. \tag{3.3}$$
We can write $X^{(m+1)} = \mathcal{L}X^{(m)} + \kappa$, where $\mathcal{L} = I - Q^{-1}S$ is the iteration matrix and $\kappa = Q^{-1}Y$. Now consider the decomposition $B = D_1 + L_1 + U_1$, in which $D_1$ is the diagonal of $B$ and $L_1$ and $U_1$ are the strict lower and the strict upper parts of $B$, respectively. So we have
$$S = \begin{pmatrix} L_1 + D_1 + U_1 & C \\ C & L_1 + D_1 + U_1 \end{pmatrix}.$$
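All the methods of Section 4 fit the scheme (3.2) and differ only in the choice of $Q$. A minimal generic driver (our sketch; `fuzzy_iterate` is a hypothetical name, and an efficient implementation would exploit the structure of $Q$ instead of calling a dense solver):

```python
import numpy as np

# Generic iteration (3.2): Q X^(m+1) = (Q - S) X^(m) + Y, stopped when the
# update is small in the infinity norm.
def fuzzy_iterate(Q, S, Y, X0=None, tol=1e-10, maxit=10000):
    X = np.zeros_like(Y) if X0 is None else np.asarray(X0, dtype=float).copy()
    for m in range(1, maxit + 1):
        X_new = np.linalg.solve(Q, (Q - S) @ X + Y)
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new, m
        X = X_new
    return X, maxit
```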
4. Iterative methods

4.1. Richardson method

As an illustration of this concept, we consider the Richardson method, in which $Q$ is chosen to be the identity matrix of order $2n$. Eq. (3.3) in this case reads
$$X = (I - S)X + Y,$$
and the Richardson iteration matrix is
$$\mathcal{L}_{\mathrm{Rich}} = \begin{pmatrix} I_n - B & -C \\ -C & I_n - B \end{pmatrix}.$$
Hence the Richardson method takes the form
$$\begin{cases} \underline{X}^{(m+1)} = \underline{Y} + (I_n - B)\underline{X}^{(m)} + C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = \overline{Y} + (I_n - B)\overline{X}^{(m)} + C\underline{X}^{(m)}, \end{cases}$$
and the elements of $X^{(m+1)} = (\underline{X}^{(m+1)}, \overline{X}^{(m+1)})^t$ are
$$\begin{cases} \underline{x}_i^{(m+1)}(r) = \underline{y}_i(r) + \underline{x}_i^{(m)}(r) - \sum_{j=1}^n s_{ij}\underline{x}_j^{(m)}(r) + \sum_{j=1}^n s_{i,n+j}\overline{x}_j^{(m)}(r),\\ \overline{x}_i^{(m+1)}(r) = \overline{y}_i(r) + \overline{x}_i^{(m)}(r) - \sum_{j=1}^n s_{ij}\overline{x}_j^{(m)}(r) + \sum_{j=1}^n s_{i,n+j}\underline{x}_j^{(m)}(r). \end{cases}$$
In this approach the sequence $\{X^{(m)}\}$ is easily computed, but its rate of convergence is very slow. For this method $\rho(\mathcal{L}_{\mathrm{Rich}}) = \max\{|1 - m(S)|, |1 - M(S)|\}$, so when $S$ is a symmetric positive definite matrix the Richardson method converges if and only if $M(S) < 2$ (see [11, p. 23]).
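In terms of the generic driver of Section 3, Richardson and its extrapolated variant of the next subsection correspond to $Q = I$ and $Q = \frac{1}{\alpha}I$; a sketch (assuming `fuzzy_iterate` from the sketch in Section 3 is in scope):

```python
import numpy as np

# Richardson: Q = I; extrapolated Richardson (Section 4.2): Q = I / alpha.
def richardson(S, Y, alpha=1.0, **kw):
    Q = np.eye(S.shape[0]) / alpha
    return fuzzy_iterate(Q, S, Y, **kw)

# Optimal extrapolation parameter for symmetric positive definite S:
# alpha_opt = 2 / (m(S) + M(S)).
def optimal_alpha(S):
    lam = np.linalg.eigvalsh(S)   # ascending eigenvalues of symmetric S
    return 2.0 / (lam[0] + lam[-1])
```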
4.2. The extrapolated Richardson (ER) method

Another illustration of our basic theory is provided by the ER iteration, in which $Q = \frac{1}{\alpha}I_{2n}$, where $\alpha > 0$ is called the extrapolation parameter. In this case we have
$$X = (I - \alpha S)X + \alpha Y,$$
and the ER iteration matrix is
$$\mathcal{L}_{\mathrm{ER}} = \begin{pmatrix} I_n - \alpha B & -\alpha C \\ -\alpha C & I_n - \alpha B \end{pmatrix}.$$
In other words, the ER method has the form
$$\begin{cases} \underline{X}^{(m+1)} = \alpha\underline{Y} + (I_n - \alpha B)\underline{X}^{(m)} + \alpha C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = \alpha\overline{Y} + (I_n - \alpha B)\overline{X}^{(m)} + \alpha C\underline{X}^{(m)}. \end{cases}$$
It can be shown that when $S$ is symmetric positive definite the optimum extrapolation parameter is $\alpha_{\mathrm{opt}} = \frac{2}{m(S) + M(S)}$, and in this case $\rho(\mathcal{L}_{\mathrm{ER},\alpha_{\mathrm{opt}}}) = \frac{M(S) - m(S)}{M(S) + m(S)}$ (see [11]).

4.3. Jacobi method

Another illustration of our basic theory is provided by the Jacobi iteration, in which $Q$ is the diagonal matrix whose diagonal entries are the same as those of the matrix $S = (s_{ij})$, so we have
$$Q = D = \begin{pmatrix} D_1 & 0 \\ 0 & D_1 \end{pmatrix}.$$
Using Eq. (3.3) we have
$$X = (I - D^{-1}S)X + D^{-1}Y, \tag{4.1}$$
and we have
$$\mathcal{L}_J = \begin{pmatrix} -D_1^{-1}(L_1 + U_1) & -D_1^{-1}C \\ -D_1^{-1}C & -D_1^{-1}(L_1 + U_1) \end{pmatrix}. \tag{4.2}$$
The elements of $X^{(m+1)} = (\underline{X}^{(m+1)}, \overline{X}^{(m+1)})^t$ are
$$\begin{cases} \underline{x}_i^{(m+1)}(r) = \dfrac{1}{s_{ii}}\Big[\underline{y}_i(r) - \sum_{j=1,\,j\ne i}^n s_{ij}\underline{x}_j^{(m)}(r) + \sum_{j=1}^n s_{i,n+j}\overline{x}_j^{(m)}(r)\Big],\\[2mm] \overline{x}_i^{(m+1)}(r) = \dfrac{1}{s_{ii}}\Big[\overline{y}_i(r) - \sum_{j=1,\,j\ne i}^n s_{ij}\overline{x}_j^{(m)}(r) + \sum_{j=1}^n s_{i,n+j}\underline{x}_j^{(m)}(r)\Big], \end{cases}$$
for $m = 0, 1, 2, \ldots$ and $i = 1, 2, \ldots, n$. This is similar to Eq. (3.5) in [1]. For the Jacobi method the sequence $\{X^{(m)}\}$ is easily computed, and the rate of convergence is better than that of the Richardson method.

4.4. Jacobi overrelaxation (JOR) method

If we use the Jacobi method with an extrapolation parameter $\omega$, i.e., $Q = \frac{1}{\omega}D$, we obtain the extrapolated Jacobi method, which is called the JOR method by Young [17]. Thus we have
$$\begin{pmatrix} \underline{X} \\ \overline{X} \end{pmatrix} = \begin{pmatrix} \omega D_1^{-1}\underline{Y} + (I_n - \omega D_1^{-1}B)\underline{X} + \omega D_1^{-1}C\overline{X} \\ \omega D_1^{-1}\overline{Y} + (I_n - \omega D_1^{-1}B)\overline{X} + \omega D_1^{-1}C\underline{X} \end{pmatrix},$$
so the JOR iterative method is
$$\begin{cases} \underline{X}^{(m+1)} = \omega D_1^{-1}\underline{Y} - \omega D_1^{-1}\big[(1 - 1/\omega)D_1 + L_1 + U_1\big]\underline{X}^{(m)} + \omega D_1^{-1}C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = \omega D_1^{-1}\overline{Y} - \omega D_1^{-1}\big[(1 - 1/\omega)D_1 + L_1 + U_1\big]\overline{X}^{(m)} + \omega D_1^{-1}C\underline{X}^{(m)}. \end{cases}$$
Hence we have the JOR method in matrix form:
$$X^{(m+1)} = D^{-1}(D - \omega S)X^{(m)} + \omega D^{-1}Y. \tag{4.3}$$
If $S$ is a symmetric positive definite matrix, one can show [11, p. 25] that
$$\omega_{\mathrm{opt}} = \frac{2}{2 - m(\mathcal{L}_J) - M(\mathcal{L}_J)}.$$
Evidently the JOR method reduces to the Jacobi method for $\omega = 1$.
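Both methods again fit the generic driver; a sketch (assuming `fuzzy_iterate` from the sketch in Section 3):

```python
import numpy as np

# Jacobi: Q = D, the diagonal of S; JOR: Q = D / omega (omega = 1 is Jacobi).
def jor(S, Y, omega=1.0, **kw):
    D = np.diag(np.diag(S))
    return fuzzy_iterate(D / omega, S, Y, **kw)
```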
4.5. Gauß–Seidel method

Let us first examine the forward Gauß–Seidel iteration and then the backward Gauß–Seidel iteration. The forward Gauß–Seidel method is defined by letting $Q$ be the lower triangular part of $S$, including its diagonal. So
$$Q = D + L = \begin{pmatrix} D_1 + L_1 & 0 \\ C & D_1 + L_1 \end{pmatrix}. \tag{4.4}$$
Hence we have
$$Q^{-1} = \begin{pmatrix} (D_1 + L_1)^{-1} & 0 \\ -(D_1 + L_1)^{-1}C(D_1 + L_1)^{-1} & (D_1 + L_1)^{-1} \end{pmatrix}.$$
Finally we get
$$\mathcal{L}_{\mathrm{GS}} = \begin{pmatrix} -(D_1 + L_1)^{-1}U_1 & -(D_1 + L_1)^{-1}C \\ (D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}U_1 & -(D_1 + L_1)^{-1}\big[U_1 - C(D_1 + L_1)^{-1}C\big] \end{pmatrix}. \tag{4.5}$$
Thus the forward GS iterative method is
$$\begin{cases} \underline{X}^{(m+1)} = (D_1 + L_1)^{-1}\underline{Y} - (D_1 + L_1)^{-1}U_1\underline{X}^{(m)} + (D_1 + L_1)^{-1}C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = (D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}\underline{Y} + (D_1 + L_1)^{-1}\overline{Y}\\ \qquad\qquad - (D_1 + L_1)^{-1}\big[U_1 - C(D_1 + L_1)^{-1}C\big]\overline{X}^{(m)} - (D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}U_1\underline{X}^{(m)}. \end{cases} \tag{4.6}$$

Proposition 4.1. If we reduce our proposed solution (4.6), for our choice of $Q$, to the crisp case with $\underline{X}^{(m)} = \overline{X}^{(m)}$ and $\underline{Y} = \overline{Y}$, we get $\underline{X}^{(m+1)} \ne \overline{X}^{(m+1)}$.

We may instead choose
$$Q = \begin{pmatrix} D_1 + L_1 & 0 \\ 0 & D_1 + L_1 \end{pmatrix} \tag{4.7}$$
to construct a solution which reduces to $\underline{X}^{(m+1)} = \overline{X}^{(m+1)}$ in the crisp case, as introduced in [1]. But this is not a complete Gauß–Seidel iteration, because:

(i) For a $2n \times 2n$ matrix $S$, the choice (4.4) uses the $2n^2 + n$ entries of $S$ that form its lower triangular part including the diagonal, whereas the choice (4.7) uses only $n^2 + n$ entries; the $n^2$ entries belonging to $C$ are ignored.
(ii) Some theorems on Gauß–Seidel and other iterative methods in the literature (to be introduced later in this paper) do not hold for the choice (4.7); see, for example, Theorem 4.5.
(iii) The examples in Section 5 converge faster with (4.4) than with (4.7).

So we prefer to choose and work with (4.4).
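With the generic driver, the preferred splitting (4.4) is just the lower triangle of $S$ (our sketch, assuming `fuzzy_iterate` from the sketch in Section 3):

```python
import numpy as np

# Forward Gauß-Seidel (4.4): Q is the lower triangular part of S, including
# the diagonal and hence the whole block C.  The backward variant uses the
# upper triangular part instead.
def forward_gs(S, Y, **kw):
    return fuzzy_iterate(np.tril(S), S, Y, **kw)

def backward_gs(S, Y, **kw):
    return fuzzy_iterate(np.triu(S), S, Y, **kw)
```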
Similarly, the backward Gauß–Seidel iteration is defined by letting
$$Q = D + U = \begin{pmatrix} D_1 + U_1 & C \\ 0 & D_1 + U_1 \end{pmatrix}.$$

Theorem 4.2. Let the matrix $A$ in Eq. (2.2) be strictly diagonally dominant with nonnegative diagonal elements. Then both the Jacobi and the Gauß–Seidel iterative methods converge to $S^{-1}Y$ for any arbitrary initial value $X^{(0)}$.

Proof. The iteration matrices $\mathcal{L}_J$ in (4.2) and $\mathcal{L}_{\mathrm{GS}}$ in (4.5) are independent of the right-hand side vector, so by using Theorem 2.8 and Corollaries 6.10.1 and 6.10.2 of [5] the result is immediate. □

4.6. The extrapolated Gauß–Seidel (EGS) method

We can introduce the forward extrapolated Gauß–Seidel (EGS) method with
$$Q = \frac{1}{a}(D + L) = \frac{1}{a}\begin{pmatrix} D_1 + L_1 & 0 \\ C & D_1 + L_1 \end{pmatrix},$$
where $a$ is the extrapolation parameter. So the EGS iteration matrix is
$$\mathcal{L}_{\mathrm{EGS}} = \begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \end{pmatrix}, \tag{4.8}$$
$$c_1 = (D_1 + L_1)^{-1}\big[(1 - a)D_1 + (1 - a)L_1 - aU_1\big],$$
$$c_2 = -a(D_1 + L_1)^{-1}C,$$
$$c_3 = a(D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}U_1,$$
$$c_4 = (D_1 + L_1)^{-1}\big[(1 - a)D_1 + (1 - a)L_1 - aU_1 + aC(D_1 + L_1)^{-1}C\big].$$
For $a = 1$ the EGS method coincides with the GS method. The EGS method reads
$$\begin{cases} \underline{X}^{(m+1)} = a(D_1 + L_1)^{-1}\underline{Y} + (D_1 + L_1)^{-1}\big[(1 - a)D_1 + (1 - a)L_1 - aU_1\big]\underline{X}^{(m)} + a(D_1 + L_1)^{-1}C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = a(D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}\underline{Y} + a(D_1 + L_1)^{-1}\overline{Y}\\ \qquad\qquad + (D_1 + L_1)^{-1}\big[(1 - a)D_1 + (1 - a)L_1 - aU_1 + aC(D_1 + L_1)^{-1}C\big]\overline{X}^{(m)}\\ \qquad\qquad - a(D_1 + L_1)^{-1}C(D_1 + L_1)^{-1}U_1\underline{X}^{(m)}. \end{cases} \tag{4.9}$$
Theorem 4.3. Let $S$ be an irreducible matrix with weak diagonal dominance. Then both the GS method and the EGS method converge for $0 < a \le 1$.

Theorem 4.4. If $S$ is a consistently ordered matrix with nonvanishing diagonal elements, such that $\mathcal{L}_J$ has real eigenvalues, then $\rho(\mathcal{L}_{\mathrm{EGS}}) < 1$ if and only if $\rho(\mathcal{L}_J) < 1$ and $0 < a < 2$.

Proof. See Theorem 2.1 of [16]. □

Theorem 4.5. Let $S$ be a consistently ordered matrix with nonvanishing diagonal elements, such that the matrix $\mathcal{L}_J$ has real eigenvalues, with $\rho(\mathcal{L}_J) < 1$. If we let
$$a_{\mathrm{opt}} = \frac{2}{2 - [\rho(\mathcal{L}_J)]^2}, \tag{4.10}$$
then $\rho(\mathcal{L}_{\mathrm{EGS}})$ is minimized and its corresponding value is
$$\rho(\mathcal{L}_{\mathrm{EGS}}) = \frac{a_{\mathrm{opt}}[\rho(\mathcal{L}_J)]^2}{2}.$$

Proof. See Theorem 2.2 of [16]. □

Theorem 4.6. Under the hypotheses of the previous theorem, with $a_{\mathrm{opt}}$ as above and $R(M) = -\log\rho(M)$ denoting the asymptotic rate of convergence of a matrix $M$, we have
$$\lim_{\rho(\mathcal{L}_J)\to 1} \frac{R(\mathcal{L}_{\mathrm{EGS},a_{\mathrm{opt}}})}{R(\mathcal{L}_{\mathrm{GS}})} = 2.$$

Proof. See Theorem 2.2 of [16]. □

Corollary 3. Asymptotically, the rate of convergence of the optimal EGS method is twice that of the GS method.

Unfortunately $S$ may not be consistently ordered, even if $A$ is consistently ordered. Similarly, the backward EGS method can be obtained by letting
$$Q = \frac{1}{a}(D + U) = \frac{1}{a}\begin{pmatrix} D_1 + U_1 & C \\ 0 & D_1 + U_1 \end{pmatrix}.$$

4.7. The successive overrelaxation (SOR) method

The next important iterative methods are the forward and backward successive overrelaxation methods, commonly abbreviated as SOR.
For the forward SOR iteration the splitting matrix $Q$ is chosen as
$$Q = \frac{1}{\omega}(D + \omega L) = \frac{1}{\omega}\begin{pmatrix} D_1 + \omega L_1 & 0 \\ \omega C & D_1 + \omega L_1 \end{pmatrix},$$
with a real parameter $\omega$. So for the forward SOR method we have
$$\mathcal{L}_{\mathrm{SOR}} = \begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \end{pmatrix}, \tag{4.11}$$
$$c_1 = (D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1\big],$$
$$c_2 = -\omega(D_1 + \omega L_1)^{-1}C,$$
$$c_3 = -\omega(D_1 + \omega L_1)^{-1}C(D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1\big],$$
$$c_4 = (D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1 + \omega^2 C(D_1 + \omega L_1)^{-1}C\big].$$
Thus the forward SOR iterative method is
$$\begin{cases} \underline{X}^{(m+1)} = \omega(D_1 + \omega L_1)^{-1}\underline{Y} + (D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1\big]\underline{X}^{(m)} + \omega(D_1 + \omega L_1)^{-1}C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = \omega^2(D_1 + \omega L_1)^{-1}C(D_1 + \omega L_1)^{-1}\underline{Y} + \omega(D_1 + \omega L_1)^{-1}\overline{Y}\\ \qquad\qquad + (D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1 + \omega^2 C(D_1 + \omega L_1)^{-1}C\big]\overline{X}^{(m)}\\ \qquad\qquad + \omega(D_1 + \omega L_1)^{-1}C(D_1 + \omega L_1)^{-1}\big[(1 - \omega)D_1 - \omega U_1\big]\underline{X}^{(m)}. \end{cases} \tag{4.12}$$
Evidently the forward SOR method (4.12) reduces to the forward Gauß–Seidel method (4.6) for $\omega = 1$. Similarly, the backward SOR iteration is defined by letting
$$Q = \frac{1}{\omega}(D + \omega U) = \frac{1}{\omega}\begin{pmatrix} D_1 + \omega U_1 & \omega C \\ 0 & D_1 + \omega U_1 \end{pmatrix},$$
and the backward SOR method reduces to the backward Gauß–Seidel method for $\omega = 1$.

Theorem 4.7. Let $S$ be an irreducible matrix with weak diagonal dominance. Then

(a) both the Jacobi method and the JOR method converge for $0 < \omega \le 1$;
(b) the Gauß–Seidel method and the SOR method converge for $0 < \omega \le 1$.

Proof. See Theorem 4.2.1 of [17]. □
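In driver form (our sketch, assuming `fuzzy_iterate` from the sketch in Section 3):

```python
import numpy as np

# Forward SOR: Q = (D + omega*L) / omega, with D the diagonal and L the
# strict lower part of S; omega = 1 recovers forward Gauß-Seidel.
def forward_sor(S, Y, omega, **kw):
    D = np.diag(np.diag(S))
    L = np.tril(S, -1)
    return fuzzy_iterate((D + omega * L) / omega, S, Y, **kw)
```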
4.8. The accelerated overrelaxation (AOR) method and the extrapolated successive overrelaxation (ESOR) method

The AOR method was first introduced by Hadjidimos [10]. We may use this method for solving a fuzzy linear system by using two parameters, $\alpha$ (called the relaxation parameter) and $\omega$ (called the extrapolation parameter). For the forward AOR method we can choose
$$Q = \frac{1}{\omega}(D + \alpha L) = \frac{1}{\omega}\begin{pmatrix} D_1 + \alpha L_1 & 0 \\ \alpha C & D_1 + \alpha L_1 \end{pmatrix}.$$
So we have
$$Q^{-1} = \omega\begin{pmatrix} (D_1 + \alpha L_1)^{-1} & 0 \\ -\alpha(D_1 + \alpha L_1)^{-1}C(D_1 + \alpha L_1)^{-1} & (D_1 + \alpha L_1)^{-1} \end{pmatrix},$$
and
$$\mathcal{L}_{\alpha,\omega} = \begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \end{pmatrix}, \tag{4.13}$$
$$c_1 = (D_1 + \alpha L_1)^{-1}\big[(1 - \omega)D_1 + (\alpha - \omega)L_1 - \omega U_1\big],$$
$$c_2 = -\omega(D_1 + \alpha L_1)^{-1}C,$$
$$c_3 = -\omega(D_1 + \alpha L_1)^{-1}C(D_1 + \alpha L_1)^{-1}\big[(1 - \alpha)D_1 - \alpha U_1\big],$$
$$c_4 = (D_1 + \alpha L_1)^{-1}\big[(1 - \omega)D_1 + (\alpha - \omega)L_1 - \omega U_1 + \alpha\omega C(D_1 + \alpha L_1)^{-1}C\big],$$
and the AOR method is
$$\begin{cases} \underline{X}^{(m+1)} = \omega(D_1 + \alpha L_1)^{-1}\underline{Y} + (D_1 + \alpha L_1)^{-1}\big[(1 - \omega)D_1 + (\alpha - \omega)L_1 - \omega U_1\big]\underline{X}^{(m)} + \omega(D_1 + \alpha L_1)^{-1}C\overline{X}^{(m)},\\ \overline{X}^{(m+1)} = \alpha\omega(D_1 + \alpha L_1)^{-1}C(D_1 + \alpha L_1)^{-1}\underline{Y} + \omega(D_1 + \alpha L_1)^{-1}\overline{Y}\\ \qquad\qquad + (D_1 + \alpha L_1)^{-1}\big[(1 - \omega)D_1 + (\alpha - \omega)L_1 - \omega U_1 + \alpha\omega C(D_1 + \alpha L_1)^{-1}C\big]\overline{X}^{(m)}\\ \qquad\qquad + \omega(D_1 + \alpha L_1)^{-1}C(D_1 + \alpha L_1)^{-1}\big[(1 - \alpha)D_1 - \alpha U_1\big]\underline{X}^{(m)}. \end{cases} \tag{4.14}$$
Corollary 4. Evidently the forward AOR method (4.14) reduces to:

(i) the Jacobi method (4.1) for $\alpha = 0$ and $\omega = 1$;
(ii) the JOR method (4.3) for $\alpha = 0$;
(iii) the forward Gauß–Seidel method (4.6) for $\alpha = \omega = 1$;
(iv) the EGS method (4.9) for $\alpha = 1$, with $\omega$ playing the role of the extrapolation parameter $a$;
(v) the forward SOR method (4.12) for $\alpha = \omega$.

Let $\mu_i$ be the real eigenvalues of the Jacobi iteration matrix (4.2), assumed less than one in modulus, and let $\mu = \min_i|\mu_i|$ and $\bar{\mu} = \max_i|\mu_i|$. The optimum values of the AOR parameters, as calculated by Avdelas and Hadjidimos [3], are given in Table 1. Similarly, the backward AOR iteration is defined by letting $Q = \frac{1}{\omega}(D + \alpha U)$, which reduces to the Jacobi, JOR, backward Gauß–Seidel, backward EGS and backward SOR methods as in Corollary 4.

Theorem 4.8. If $A$ in the fuzzy linear system (2.2) is a strictly diagonally dominant matrix with $a_{ii} > 0$ and $\omega \ge \alpha \ge 0$, then a sufficient condition for the AOR method (4.14) to be convergent is
$$0 < \omega \le \frac{2}{1 + \rho(\mathcal{L}_J)}.$$

Theorem 4.9. If $A$ in (2.2) is strictly diagonally dominant with $a_{ii} > 0$, then $\rho(\mathcal{L}_{\alpha,\omega}) < 1$ with $0 < \alpha \le 1$ provided
$$0 < \omega \le \frac{2\alpha}{1 + \rho(\mathcal{L}_J)}.$$
Table 1
Optimal AOR parameters (Avdelas and Hadjidimos [3]); case (i) corresponds to $\mu = 0$, cases (iia) and (iib) to $0 < \mu < \bar{\mu}$, and case (iii) to $\mu = \bar{\mu}$

| Identification | $\alpha$ | $\omega$ | Spectral radius $\rho$ |
| --- | --- | --- | --- |
| (i) | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{1-\sqrt{1-\bar{\mu}^2}}{1+\sqrt{1-\bar{\mu}^2}}$ |
| (iia) | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{1-\sqrt{1-\bar{\mu}^2}}{1+\sqrt{1-\bar{\mu}^2}}$ |
| (iib) | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{2}{(1-\mu^2)\left(1+\sqrt{1-\bar{\mu}^2}\right)}$ | $\frac{\bar{\mu}\sqrt{\bar{\mu}^2-\mu^2}}{(1-\mu^2)\left(1+\sqrt{1-\bar{\mu}^2}\right)}$ |
| (iii) | $\frac{2}{1+\sqrt{1-\bar{\mu}^2}}$ | $\frac{1}{\sqrt{1-\bar{\mu}^2}}$ | $0$ |
Proof. First, by Theorem 2.8, $S$ is strictly diagonally dominant. Now apply Theorem 5 of [15] to the matrix $S$. □

With $Q = \frac{1}{\omega\alpha}(D + \alpha L) = \frac{1}{\omega'}(D + \alpha L)$, where $\omega' = \omega\alpha$, we obtain the extrapolated SOR (ESOR) method. So it is clear that the AOR method (with relaxation parameter $\alpha$ and extrapolation parameter $\omega$) coincides with an ESOR method (with relaxation parameter $\alpha$ and extrapolation parameter $\omega'$). Similarly, the backward ESOR iteration is defined by letting
$$Q = \frac{1}{\omega\alpha}(D + \alpha U) = \frac{1}{\omega'}(D + \alpha U).$$
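A sketch of the AOR splitting in driver form (again assuming `fuzzy_iterate` from the sketch in Section 3); by Corollary 4 the earlier methods are special cases:

```python
import numpy as np

# Forward AOR: Q = (D + alpha*L) / omega.  alpha = omega gives SOR, alpha = 0
# gives JOR, and alpha = omega = 1 gives forward Gauß-Seidel.
def forward_aor(S, Y, alpha, omega, **kw):
    D = np.diag(np.diag(S))
    L = np.tril(S, -1)
    return fuzzy_iterate((D + alpha * L) / omega, S, Y, **kw)
```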
4.9. The symmetric SOR (SSOR) method

If we use
$$Q = \frac{1}{\omega(2 - \omega)}(D + \omega L)D^{-1}(D + \omega U),$$
the SSOR iteration matrix is obtained.
4.10. The unsymmetric SOR (USSOR) method

The USSOR method differs from the SSOR method in that a different relaxation factor is used in the second SOR part of each iteration. We obtain this method easily with
$$Q = \frac{1}{\omega_1 + \omega_2 - \omega_1\omega_2}(D + \omega_1 L)D^{-1}(D + \omega_2 U).$$
4.11. The extrapolated modified Aitken (EMA) method

The extrapolated modified Aitken (EMA) iterative method was first introduced by Evans [7] as a method for solving the systems of linear algebraic equations arising from the discretization of elliptic difference equations. It can easily be used for solving fuzzy linear systems by taking
$$Q = \frac{1}{\omega}(D + \omega L)D^{-1}(D + \omega U).$$
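The three splittings of Sections 4.9–4.11 differ only in their scaling; a sketch of the corresponding $Q$ matrices (our hypothetical helper names, to be passed to `fuzzy_iterate` from the sketch in Section 3):

```python
import numpy as np

# Splitting matrices for SSOR, USSOR and EMA, built from S = D + L + U.
def _parts(S):
    D = np.diag(np.diag(S))
    return D, np.tril(S, -1), np.triu(S, 1)

def ssor_Q(S, w):
    D, L, U = _parts(S)
    return (D + w * L) @ np.linalg.solve(D, D + w * U) / (w * (2.0 - w))

def ussor_Q(S, w1, w2):
    D, L, U = _parts(S)
    return (D + w1 * L) @ np.linalg.solve(D, D + w2 * U) / (w1 + w2 - w1 * w2)

def ema_Q(S, w):
    D, L, U = _parts(S)
    return (D + w * L) @ np.linalg.solve(D, D + w * U) / w
```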
4.12. The modified successive overrelaxation (MSOR) method

In applying the SOR method one can let $\omega$ change from equation to equation and from iteration to iteration. Assuming that $A$ is positive definite and $\omega$ varies only from iteration to iteration, Ostrowski (1954) proved that if $0 < \varepsilon \le \omega_i \le 2 - \varepsilon$, $i = 1, 2, \ldots$, then the SOR method with $\omega_1, \omega_2, \ldots$ converges for any arbitrary initial vector [17]. The MSOR method is [18,20] a variant of the SOR method which can be defined when the coefficient matrix has a $2 \times 2$ block form in which the diagonal blocks are diagonal matrices, i.e., for a linear system $AX = b$ in which $A$ can be written as
$$\begin{pmatrix} D_1 & H \\ K & D_2 \end{pmatrix}, \tag{4.15}$$
where $D_1$ and $D_2$ are square nonsingular diagonal matrices. If the coefficient matrix has property A, then by a suitable rearrangement of the rows and corresponding columns we can obtain the form (4.15): consider the linear system $\widetilde{A}\widetilde{X} = \widetilde{b}$ where $\widetilde{A}$ has property A. Then there exists a permutation matrix $P$ such that $A = P^{-1}\widetilde{A}P$ has the form (4.15). So we have
$$\widetilde{A}\widetilde{X} = \widetilde{b} \ \Rightarrow\ (PAP^{-1})\widetilde{X} = \widetilde{b} \ \Rightarrow\ A(P^{-1}\widetilde{X}) = P^{-1}\widetilde{b}.$$
Now let $X = P^{-1}\widetilde{X}$ and $b = P^{-1}\widetilde{b}$; then we have $AX = b$, where $A$ has the form (4.15). Thus whenever we want to solve a system $\widetilde{A}\widetilde{X} = \widetilde{b}$ in which $\widetilde{A}$ has property A, we can transform it into a system $AX = b$ such that $A$ has the form (4.15).

For the fuzzy linear system (2.3) we obtain the system (2.4) by the embedding method. We now introduce the MSOR method for solving the system $SX = Y$ in (2.4), so we initially suppose that $S$ has the form (4.15). If we partition $X$ and $Y$ in accordance with the partitioning of $S$, we can write the system (2.4) in the form
$$\begin{pmatrix} D_1 & H \\ K & D_2 \end{pmatrix}\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix},$$
or
$$\begin{cases} D_1X_1 + HX_2 = Y_1,\\ KX_1 + D_2X_2 = Y_2. \end{cases} \tag{4.16}$$
Evidently (4.16) is equivalent to the system
$$X_1 = FX_2 + C_1, \qquad X_2 = GX_1 + C_2,$$
where $F = -D_1^{-1}H$, $G = -D_2^{-1}K$, $C_1 = D_1^{-1}Y_1$ and $C_2 = D_2^{-1}Y_2$.
The MSOR method is the same as the SOR method with the red–black ordering, except that we use the relaxation factor $\omega$ for the equations corresponding to $D_1$ (the red equations) and the relaxation factor $\omega'$ for the equations corresponding to $D_2$ (the black equations). This method, called the modified SOR method with fixed parameters, was first considered by De Vogelaere in 1958 [17]. Thus we have
$$\begin{cases} X_1^{(m+1)} = \omega\big(FX_2^{(m)} + C_1\big) + (1 - \omega)X_1^{(m)},\\ X_2^{(m+1)} = \omega'\big(GX_1^{(m+1)} + C_2\big) + (1 - \omega')X_2^{(m)}. \end{cases} \tag{4.17}$$
Evidently we may write (4.17) in the form
$$X^{(m+1)} = \mathcal{L}_{\omega,\omega'}X^{(m)} + \kappa_{\omega,\omega'},$$
where
$$\mathcal{L}_{\omega,\omega'} = \begin{pmatrix} (1 - \omega)I_1 & \omega F \\ \omega'(1 - \omega)G & \omega\omega' GF + (1 - \omega')I_2 \end{pmatrix} \tag{4.18}$$
and
$$\kappa_{\omega,\omega'} = \begin{pmatrix} \omega C_1 \\ \omega\omega' GC_1 + \omega' C_2 \end{pmatrix}.$$
We note that $\mathcal{L}_{\omega,\omega'} = \mathcal{L}_{0,\omega'}\mathcal{L}_{\omega,0}$ and $\mathcal{L}_{\omega,\omega} = \mathcal{L}_{0,\omega}\mathcal{L}_{\omega,0} = \mathcal{L}_\omega$; thus for $\omega = \omega'$ the MSOR method reduces to the SOR method. We can also first apply the SOR step with relaxation factor $\omega'$ to the black equations and then use $\omega$ for the red equations, which gives the backward MSOR method with iteration matrix
$$\begin{pmatrix} (1 - \omega)I_1 + \omega\omega' FG & \omega(1 - \omega')F \\ \omega' G & (1 - \omega')I_2 \end{pmatrix} = \mathcal{L}_{\omega,0}\mathcal{L}_{0,\omega'}.$$
Another similar approach is the modified SOR method with variable parameters [17], where $\omega$ and $\omega'$ are allowed to vary from iteration to iteration, so that (4.17) is replaced by
$$\begin{cases} X_1^{(m+1)} = \omega_{m+1}\big(FX_2^{(m)} + C_1\big) + (1 - \omega_{m+1})X_1^{(m)},\\ X_2^{(m+1)} = \omega'_{m+1}\big(GX_1^{(m+1)} + C_2\big) + (1 - \omega'_{m+1})X_2^{(m)}. \end{cases} \tag{4.19}$$
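A compact sketch of the fixed-parameter sweep (4.17) (our own hypothetical helper; $D_1$, $H$, $K$, $D_2$ are the blocks of (4.15)):

```python
import numpy as np

# MSOR iteration (4.17): relaxation factor omega for the red unknowns X1 and
# omega_p for the black unknowns X2, with F = -D1^{-1} H and G = -D2^{-1} K.
def msor(D1, H, K, D2, Y1, Y2, omega, omega_p, tol=1e-10, maxit=10000):
    d1, d2 = np.diag(D1), np.diag(D2)
    F, G = -H / d1[:, None], -K / d2[:, None]
    C1, C2 = Y1 / d1, Y2 / d2
    X1, X2 = np.zeros_like(Y1), np.zeros_like(Y2)
    for m in range(1, maxit + 1):
        X1n = omega * (F @ X2 + C1) + (1 - omega) * X1
        X2n = omega_p * (G @ X1n + C2) + (1 - omega_p) * X2
        if max(np.max(np.abs(X1n - X1)), np.max(np.abs(X2n - X2))) < tol:
            return X1n, X2n, m
        X1, X2 = X1n, X2n
    return X1, X2, maxit
```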
Theorem 4.10. Let $S$ be a matrix with nonvanishing diagonal elements which has the form (4.15), and let $\mathcal{L}_{\omega,\omega'}$ be defined by (4.18). Then

(a) If $\mu$ is a nonzero eigenvalue of $\mathcal{L}_J$ in (4.2) and if $\lambda$ satisfies
$$(\lambda + \omega - 1)(\lambda + \omega' - 1) = \omega\omega'\mu^2\lambda, \tag{4.20}$$
then $\lambda$ is an eigenvalue of $\mathcal{L}_{\omega,\omega'}$. If $\mu = 0$ is an eigenvalue of $\mathcal{L}_J$, then $\lambda = 1 - \omega$ and/or $\lambda = 1 - \omega'$ is an eigenvalue of $\mathcal{L}_{\omega,\omega'}$ in (4.18).
(b) If $\lambda$ is an eigenvalue of $\mathcal{L}_{\omega,\omega'}$, then there exists an eigenvalue $\mu$ of $\mathcal{L}_J$ such that (4.20) holds.

Proof. See Theorem 8.2.1 of [17]. □
Young [17] showed that some variants of the MSOR method are faster than the SOR method. Some other results about the convergence of the MSOR method were also presented in his book.
5. Numerical results

Example 5.1. Consider the $5 \times 5$ fuzzy system
$$\begin{cases} 8x_1 + 2x_2 + x_3 - 3x_5 = (r,\ 2 - r),\\ -2x_1 + 5x_2 + x_3 - x_4 + x_5 = (4 + r,\ 7 - 2r),\\ x_1 - x_2 + 5x_3 + x_4 + x_5 = (1 + 2r,\ 6 - 3r),\\ -x_3 + 4x_4 + 2x_5 = (1 + r,\ 3 - r),\\ x_1 - 2x_2 + 3x_5 = (3r,\ 6 - 3r). \end{cases}$$
The extended $10 \times 10$ matrix corresponding to the above system is
$$S = \begin{pmatrix} 8 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 3\\ 0 & 5 & 1 & 0 & 1 & 2 & 0 & 0 & 1 & 0\\ 1 & 0 & 5 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 4 & 2 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 3 & 0 & 2 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 3 & 8 & 2 & 1 & 0 & 0\\ 2 & 0 & 0 & 1 & 0 & 0 & 5 & 1 & 0 & 1\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 5 & 1 & 1\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 4 & 2\\ 0 & 2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 3 \end{pmatrix}.$$
667
Table 2 Comparison results for Example 5.1 Method
Jacobi
Forward Backward EGS Gaus– Gaus– Seidel Seidel
Error CPU time Number of required iterations Matrix spectrum radius The rate of convergence
0.0045 0.0034 10.1256 4.2261 48 10
Forward Backward JOR SOR SOR
ESOR
0.0014 2.2933 5
0.0033 0.0012 2.0930 3.8756 8 8
0.0035 2.4135 5
0.0027 0.0042 2.5295 4.5866 10 8
0.8897
0.2081
0.1156
0.2873 0.3148
0.2545
0.5117 0.4195
0.0508
0.6817
0.9371
0.5417 0.5020
0.5943
0.2910 0.3773
The exact solutions, computed from $X = S^{-1}Y$, are (see Table 2)
$$\begin{aligned} x_1 &= (\underline{x}_1(r), \overline{x}_1(r)) = (0.7287 - 0.33057r,\ 0.044135 + 0.35399r),\\ x_2 &= (0.61418 + 0.16626r,\ 1.0773 - 0.29682r),\\ x_3 &= (0.12628 + 0.29059r,\ 0.91822 - 0.50136r),\\ x_4 &= (0.24192 - 0.33149r,\ -0.4158 + 0.32622r),\\ x_5 &= (0.47528 + 0.91231r,\ 2.3947 - 1.0072r). \end{aligned}$$
With tolerance $\varepsilon = 5 \times 10^{-3}$ in (2.1), the solutions of the Jacobi method are
$$\begin{aligned} x_1 &= (0.72948 - 0.33136r,\ 0.043351 + 0.35478r),\\ x_2 &= (0.61511 + 0.16534r,\ 1.0763 - 0.2959r),\\ x_3 &= (0.12705 + 0.28981r,\ 0.91744 - 0.50058r),\\ x_4 &= (0.24269 - 0.33227r,\ -0.41658 + 0.327r),\\ x_5 &= (0.47627 + 0.91132r,\ 2.3938 - 1.0062r). \end{aligned}$$
The results using the forward Gauß–Seidel method are
$$\begin{aligned} x_1 &= (0.72882 - 0.33061r,\ 0.044174 + 0.35397r),\\ x_2 &= (0.6143 + 0.16628r,\ 1.0775 - 0.29689r),\\ x_3 &= (0.12645 + 0.2906r,\ 0.91852 - 0.50144r),\\ x_4 &= (0.2419 - 0.3314r,\ -0.41654 + 0.32697r),\\ x_5 &= (0.47497 + 0.91243r,\ 2.3948 - 1.0071r). \end{aligned}$$
The results using the backward Gauß–Seidel method are
$$\begin{aligned} x_1 &= (0.7287 - 0.33056r,\ 0.044048 + 0.35391r),\\ x_2 &= (0.61415 + 0.16621r,\ 1.0773 - 0.2968r),\\ x_3 &= (0.12629 + 0.2906r,\ 0.91813 - 0.50139r),\\ x_4 &= (0.24188 - 0.33152r,\ -0.41577 + 0.32618r),\\ x_5 &= (0.47531 + 0.91235r,\ 2.3947 - 1.0071r). \end{aligned}$$
The extrapolated Gauß–Seidel method with $a = 0.90$ gives
$$\begin{aligned} x_1 &= (0.7288 - 0.33059r,\ 0.044146 + 0.35398r),\\ x_2 &= (0.61434 + 0.16626r,\ 1.0775 - 0.29688r),\\ x_3 &= (0.12644 + 0.29058r,\ 0.9185 - 0.50143r),\\ x_4 &= (0.24207 - 0.33142r,\ -0.41544 + 0.32614r),\\ x_5 &= (0.47496 + 0.91244r,\ 2.3948 - 1.0072r). \end{aligned}$$
The results using the forward SOR method with $\omega = 0.87$ are
$$\begin{aligned} x_1 &= (0.72879 - 0.3306r,\ 0.044117 + 0.354r),\\ x_2 &= (0.61405 + 0.16631r,\ 1.0773 - 0.29683r),\\ x_3 &= (0.12608 + 0.29071r,\ 0.91839 - 0.50143r),\\ x_4 &= (0.24208 - 0.33161r,\ -0.41586 + 0.32631r),\\ x_5 &= (0.47501 + 0.91239r,\ 2.3946 - 1.0071r). \end{aligned}$$
The results using the backward SOR method with $\omega = 0.90$ are
$$\begin{aligned} x_1 &= (0.72858 - 0.33068r,\ 0.044393 + 0.35385r),\\ x_2 &= (0.61461 + 0.1659r,\ 1.0759 - 0.2959r),\\ x_3 &= (0.12659 + 0.29044r,\ 0.91776 - 0.50084r),\\ x_4 &= (0.24198 - 0.33153r,\ -0.41528 + 0.32621r),\\ x_5 &= (0.47488 + 0.91279r,\ 2.3953 - 1.008r). \end{aligned}$$
The JOR method with $\omega = 0.80$ produces the following results:
$$\begin{aligned} x_1 &= (0.73086 - 0.33205r,\ 0.043532 + 0.35528r),\\ x_2 &= (0.61335 + 0.16681r,\ 1.0774 - 0.29725r),\\ x_3 &= (0.12562 + 0.29088r,\ 0.91807 - 0.50156r),\\ x_4 &= (0.24292 - 0.33314r,\ -0.41827 + 0.32805r),\\ x_5 &= (0.47448 + 0.91251r,\ 2.3943 - 1.0073r). \end{aligned}$$
The solutions obtained using the AOR method with $\alpha = 0.60$ and $\omega = 1.333$ are
$$\begin{aligned} x_1 &= (0.72908 - 0.33089r,\ 0.043622 + 0.35435r),\\ x_2 &= (0.61422 + 0.16609r,\ 1.0772 - 0.29676r),\\ x_3 &= (0.12628 + 0.29068r,\ 0.91861 - 0.50155r),\\ x_4 &= (0.24415 - 0.33344r,\ -0.41751 + 0.32768r),\\ x_5 &= (0.4742 + 0.91341r,\ 2.3957 - 1.008r). \end{aligned}$$
The ESOR method with $\alpha = 0.60$ and extrapolation parameter $\omega' = 0.60 \times 1.333 = 0.7998$ produces exactly the same solutions as the AOR method.
Fig. 3. (a) The asymptotic radius of convergence. (b) Colormap scaling for $\alpha$ and $\omega$; diagram (b) shows that the optimal value of $\alpha$ equals the optimal value of $\omega$, so the AOR method reduces to the SOR method. (c) The asymptotic rate of convergence.
Because $x_1$ and $x_4$ are not fuzzy numbers, the fuzzy solution of the AOR method in this example is a weak solution, given by
$$\begin{aligned} x_1 &= (0.043622 + 0.35435r,\ 0.72908 - 0.33089r),\\ x_2 &= (0.61422 + 0.16609r,\ 1.0772 - 0.29676r),\\ x_3 &= (0.12628 + 0.29068r,\ 0.91861 - 0.50155r),\\ x_4 &= (-0.41751 + 0.32768r,\ 0.24415 - 0.33344r),\\ x_5 &= (0.4742 + 0.91341r,\ 2.3957 - 1.008r). \end{aligned}$$

Example 5.2. Consider the $6 \times 6$ fuzzy system
$$\begin{cases} 3x_1 - x_2 = (10r,\ 20 - 10r),\\ -x_1 + 3x_2 - x_3 = (6 + 2r,\ 10 - 2r),\\ -x_2 + 3x_3 = (5 + 10r,\ 20 - 5r),\\ -x_4 + 3x_5 - x_6 = (10r,\ 30 - 20r),\\ -x_5 + 3x_6 = (10 + 20r,\ 60 - 30r). \end{cases}$$
The coefficient matrix $A$ and the corresponding matrix $S$ are both symmetric positive definite and consistently ordered. So we can find the optimal extrapolation factors: 0.333 for the ER method, 1 for the JOR method, 1.125 for the EGS method, $\omega_{\mathrm{opt}} = 1.0627$ for the SOR method and, as Fig. 3 shows, $\alpha_{\mathrm{opt}} = \omega_{\mathrm{opt}} = 1.0627$ for the AOR method.

Table 3a
Comparison results for Example 5.2

| Method | Richardson | Optimal ER | Jacobi | Optimal JOR | Forward GS | Backward GS |
| --- | --- | --- | --- | --- | --- | --- |
| Matrix spectral radius | 3.4142 | 0.4719 | 0.4714 | 0.5117 | 0.2222 | 0.2222 |
| CPU time | Divergent | 9.7440 | 5.0873 | 5.9686 | 3.1044 | 2.8641 |
| Number of required iterations | Divergent | 19 | 19 | 19 | 11 | 10 |
| Rate of convergence | Divergent | 0.3261 | 0.3266 | 0.3266 | 0.6532 | 0.6532 |
Table 3b
Comparison results for Example 5.2

| Method | Optimal EGS | Optimal forward SOR | Optimal backward SOR | Optimal AOR | SSOR | EMA |
| --- | --- | --- | --- | --- | --- | --- |
| Matrix spectral radius | 0.1250 | 0.0660 | 0.0894 | 0.0660 | 0.2247 | 0.2629 |
| CPU time | 2.3834 | 2.4635 | 2.14308 | 3.4249 | 6.1589 | 7.0401 |
| Number of required iterations | 9 | 8 | 7 | 8 | 10 | 11 |
| Rate of convergence | 0.9031 | 1.1806 | 1.0485 | 1.1806 | 0.6484 | 0.5801 |
So the AOR method reduces to the SOR method, and its asymptotic rate of convergence is not better than that of the SOR method. See Tables 3a and 3b for the other results, obtained by choosing $\varepsilon = 5 \times 10^{-5}$; we used $\omega = 0.96$ and $\omega = 0.95$ for the SSOR and EMA methods in Table 3b, respectively. First, note that the Richardson method is divergent. Since $S$ is consistently ordered and, as Tables 3a and 3b show, $\rho(\mathcal{L}_J) = 0.4714 < 1$, with $a_{\mathrm{opt}} = 1.125$ we get $\rho(\mathcal{L}_{\mathrm{EGS}}) = 0.125$, in agreement with Theorem 4.5, and
$$\frac{R(\mathcal{L}_{\mathrm{EGS},a_{\mathrm{opt}}})}{R(\mathcal{L}_{\mathrm{GS}})} = 1.38$$
(see Theorem 4.6).
Example 5.3. Consider the $4 \times 4$ fuzzy system
$$\begin{cases} x_1 - x_2 = \big(\tfrac{1}{4}r,\ \tfrac{1}{2} - \tfrac{1}{4}r\big),\\ -2x_1 + 4x_2 + 3x_3 = \big(5 + \tfrac{1}{2}r,\ 6 - \tfrac{1}{2}r\big),\\ x_3 - x_4 = \big(\tfrac{4}{9} + \tfrac{8}{9}r,\ \tfrac{5}{3} - \tfrac{1}{3}r\big),\\ -2x_3 + 4x_4 = (2 + r,\ 6 - 3r). \end{cases}$$
Fig. 4. (a) The asymptotic radius of convergence. (b) Colormap scaling for $\alpha$ and $\omega$. (c) The asymptotic rate of convergence.
Table 4a
Comparison results for Example 5.3

| Method | Jacobi | JOR | Forward GS | Backward GS | Optimal EGS | Optimal forward SOR |
| --- | --- | --- | --- | --- | --- | --- |
| Matrix spectral radius | 0.7071 | 0.7364 | 0.5000 | 0.5000 | 0.3333 | 0.1716 |
| CPU time | 5.6875 | 10.9531 | 3.7031 | 3.3906 | 8.5781 | 5.6875 |
| Number of required iterations | 144 | 163 | 75 | 73 | 48 | 35 |
| Rate of convergence | 0.1505 | 0.1329 | 0.3010 | 0.3010 | 0.4771 | 0.7655 |
Table 4b
Comparison results for Example 5.3

| Method | Optimal backward SOR | Optimal AOR | SSOR | USSOR | EMA |
| --- | --- | --- | --- | --- | --- |
| Matrix spectral radius | 0.1716 | 8e-9 | 0.5378 | 0.4335 | 0.4223 |
| CPU time | 5.5469 | 0.6094 | 33.2656 | 5.5938 | 4.1250 |
| Number of required iterations | 33 | 7 | 80 | 62 | 58 |
| Rate of convergence | 0.7655 | 8.0949 | 0.2694 | 0.3630 | 0.3744 |
The coefficient matrix $A$ and the corresponding matrix $S$ are both consistently ordered. So we can find the optimal extrapolation factors: 1.333 for the EGS method, $\omega_{\mathrm{opt}} = 1.17157287525381$ for the SOR method and, as Fig. 4b shows, $\alpha_{\mathrm{opt}} = 1.17157287525381$ and $\omega_{\mathrm{opt}} = 1.41421356237310$ for the AOR method. The asymptotic rate of convergence of the AOR method is better than that of the SOR method. See Tables 4a and 4b for the other results, obtained by choosing $\varepsilon = 5 \times 10^{-20}$; we used $\omega = 0.9$, $\omega = 1.2$, $\omega_1 = 1.2$ and $\omega_2 = 0.7$, and $\omega = 1.2$ for the JOR, SSOR, USSOR and EMA methods in Tables 4a and 4b, respectively. The Richardson method is again divergent. As one can see, the Jacobi method converges in 144 iterations, while the optimal SOR method needs 35 iterations and the optimal AOR method needs only 7 iterations. Thus the AOR technique is more attractive than the SOR scheme, at least when the coefficient matrix $S$ is consistently ordered, as Example 5.3 shows.
6. Conclusion and future directions

Iterative methods were applied for solving fuzzy linear systems (FLS) under the assumption that the associated matrix $S$ is nonsingular. We also proved that, if $a_{ii} > 0$, the matrix $A$ is strictly diagonally dominant if and only if $S$ is strictly diagonally dominant. If the unique solution of $SX = Y$ is a strong or weak fuzzy number vector, then the approximate solution produced by an iterative method is likewise a strong or weak fuzzy number vector. As the numerical examples show, the more elaborate methods are somewhat more expensive per iteration, but their rates of convergence can be considerably higher. Another question that could be considered is: ''How can the fully fuzzy linear system $AX = Y$, where $A$ is a fuzzy matrix, be solved with iterative methods?''
Acknowledgement We would like to acknowledge the helpful discussions with Mr. Mehdi Ghatee (Department of Applied Mathematics, Amirkabir University of Technology, Tehran, Iran).
References

[1] T. Allahviranloo, Numerical methods for fuzzy system of linear equations, Appl. Math. Comput. 155 (2004) 493–502.
[2] T. Allahviranloo, Successive overrelaxation iterative method for fuzzy system of linear equations, Appl. Math. Comput. 162 (2005) 189–196.
[3] G. Avdelas, A. Hadjidimos, Optimum accelerated overrelaxation method in a special case, Math. Comput. 36 (1981) 183–187.
[4] S.L. Chang, L.A. Zadeh, On fuzzy mapping and control, IEEE Trans. Syst. Man Cybern. 2 (1972) 30–34.
[5] B.N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole, California, 1995.
[6] R. DeMarr, Nonnegative matrices with nonnegative inverses, Proc. Amer. Math. Soc. (1972) 307–308.
[7] D.J. Evans, The extrapolated modified Aitken iteration method for solving elliptic difference equations, Comput. J. 6 (1963) 193–201.
[8] M. Friedman, M. Ming, A. Kandel, Fuzzy linear systems, Fuzzy Sets Syst. 96 (1998) 201–209.
[9] W. Hackbusch, Iterative Solution of Large Sparse Systems of Equations, Springer-Verlag Inc., New York, 1994.
[10] A. Hadjidimos, Accelerated overrelaxation method, Math. Comput. 32 (1978) 149–157.
[11] L.A. Hageman, D.M. Young, Applied Iterative Methods, Academic Press, New York, 1981.
[12] A. Kandel, M. Friedman, M. Ming, Fuzzy linear systems and their solution, IEEE (1996) 336–338.
[13] D. Kincaid, W. Cheney, Numerical Analysis: Mathematics of Scientific Computing, Brooks/Cole, California, 1991.
[14] M. Ming, A. Kandel, M. Friedman, A new approach for defuzzification, Fuzzy Sets Syst. 111 (2000) 351–356.
[15] M. Madalena Martins, On an accelerated overrelaxation iterative method for linear systems with strictly diagonally dominant matrix, Math. Comput. 35 (1980) 1269–1273.
[16] N.M. Missirlis, D.J. Evans, On the convergence of some generalized preconditioned iterative methods, SIAM J. Numer. Anal. 18 (1981) 591–596.
[17] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[18] D.M. Young, R.T. Gregory, A Survey of Numerical Mathematics, vol. 2, Dover Publications, New York, 1973.
[19] L.A. Zadeh, Fuzzy sets, Inform. Control 8 (1965) 338–353.
[20] Q.L. Zhang, Q.M. Zhu, A. Longden, T. Davis, Fuzzy analogy of linear systems, Program of the Second CEMS Research Student Conference, University of the West of England, 15 October 2003.
[21] H.-J. Zimmermann, Fuzzy Set Theory and its Applications, Kluwer, Dordrecht, 1996.