Residuation in fuzzy algebra and some applications

Fuzzy Sets and Systems 71 (1995) 227-239

R.A. Cuninghame-Green*, Katarína Cechlárová
University of Birmingham, Edgbaston, Birmingham B15 2TT, UK

Received January 1994; revised May 1994

Abstract

The system ([0, 1], max, min) has a number of well-known applications, many of which involve the inversion of linear relations. We show the advantage of residuation theory for this process and apply it to the analysis of the problem of Chebychev approximation.

Keywords: Fuzzy algebra; Residuation; Chebychev approximation

1. Introduction

Applications of systems of linear equations over structures different from the classical field were studied in the 1960s, and in recent years they have again received increasing attention as a tool for modelling discrete event systems or fuzzy relations. Several authors have observed that a system of the form R ⊗ x = b always has a 'principal' solution x*, i.e. one such that the system is solvable if and only if x* is a solution; and in this case x* is the maximum solution. The connection with residuation theory was pointed out in [4, 5]. The aim of this paper is to show further implications of residuation theory for fuzzy algebra, including a method for testing linear dependence of sets of vectors. Next we address 'the inverse problem' studied by Pedrycz in [9] and propose an efficient algorithm for its solution. Analogies of this approach with methods used in a similar algebra, where multiplication is a group operation, are stressed.

2. Residuation

An extensive study of residuation can be found in [1].

Definition 1. A function f : S → T, where S, T are given partially ordered sets, is called residuated if there exists a function f* : T → S such that the following hold:



(i) f is isotone and f* is isotone;
(ii) f(f*(t)) ≤ t for all t ∈ T;
(iii) f*(f(s)) ≥ s for all s ∈ S.

The function f* is called the residual of f.

It is well known that the residual of f is unique.

Example 1. Let S be an ordered group with the group operation ⊗. (This structure will be referred to as 'the group case'.) If r ∈ S is arbitrary, then the function ξ(x) : S → S defined by ξ(x) = r ⊗ x is residuated and its residual is ξ*(x) = r⁻¹ ⊗ x (compare [4]).

Example 2. Let S be the closed interval [0, 1] of reals and let a ⊗ b = min{a, b}. Define further

a ⊗' b = 1 if a ≤ b, and a ⊗' b = b otherwise.   (1)

Now let r ∈ S be given. Consider two functions ξ, ξ* : S → S of one variable x defined by

ξ(x) = r ⊗ x    and    ξ*(x) = r ⊗' x.   (2)

Lemma 1. ξ is residuated and ξ* is its residual.

Proof. (i) If x ≤ x', then obviously r ⊗ x ≤ r ⊗ x', hence ξ is isotone. Now we want to show that r ⊗' x ≤ r ⊗' x' for x ≤ x'. Recall that r ⊗' x = 1 only if r ≤ x, otherwise it is equal to x. Similarly, r ⊗' x' = 1 only if r ≤ x', otherwise it is equal to x'. Hence the only case that could cause difficulty is r ⊗' x = 1 and r ⊗' x' = x'. But this cannot arise, since if r ≤ x, then r ≤ x' too, so r ⊗' x' = 1. Therefore ξ* is isotone.
(ii) To verify the inequality ξ(ξ*(x)) ≤ x for all x ∈ S, we distinguish two cases. If r ≤ x, then ξ*(x) = 1, and r ⊗ 1 = r ≤ x. If r > x, then ξ*(x) = x, but in this case r ⊗ x = x, so the inequality holds.
(iii) Now we need ξ*(ξ(x)) ≥ x for all x ∈ S. Again distinguish two cases: if r ≤ x, then r ⊗ x = r and ξ*(r) = 1 ≥ x. If r > x, then r ⊗ x = x and ξ*(x) = x, and the proof is complete. □

The usefulness of residuation theory for solving equations is expressed in the following lemmas.
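The residuation pair of Lemma 1 is easy to check numerically. A minimal sketch in Python (the value r = 0.6 and the grid are arbitrary choices of ours, not data from the paper):

```python
# Check of Lemma 1 on S = [0, 1]: xi(x) = min(r, x) is residuated with
# residual xi_star(x) = 1 if r <= x, else x.

def xi(r, x):          # xi(x) = r (x) x, with (x) = min
    return min(r, x)

def xi_star(r, x):     # xi*(x) = r (x)' x, the scalar residual
    return 1.0 if r <= x else x

r = 0.6                                  # arbitrary choice
grid = [i / 20 for i in range(21)]

for x in grid:
    assert xi(r, xi_star(r, x)) <= x             # property (ii)
    assert xi_star(r, xi(r, x)) >= x             # property (iii)
for x, y in zip(grid, grid[1:]):                 # property (i): both isotone
    assert xi(r, x) <= xi(r, y) and xi_star(r, x) <= xi_star(r, y)

print("Lemma 1 verified on the grid")
```

The assertions exercise exactly the three defining properties of Definition 1 for this pair of functions.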

Lemma 2. Let f : S → T be a residuated function with residual f*. The equation f(s) = t for t ∈ T is soluble if and only if f*(t) is a solution; and in this case f*(t) is the maximum solution.

Proof. It suffices to show that if f(s) = t has any solution, then f*(t) is its maximum solution. Let us therefore suppose that f(s₀) = t for some s₀ ∈ S. Then property (iii) of residuation implies

f*(t) = f*(f(s₀)) ≥ s₀   (3)

and we have the maximality of f*(t). Property (ii) gives f(f*(t)) ≤ t, and (3) together with the isotonicity of f implies f(f*(t)) ≥ f(s₀) = t; hence f*(t) is a solution. □

Lemma 3. Let f : S → T, a residuated function with residual f* : T → S, and t ∈ T be given. Then
(a) the equation f(s) = f(f*(t)) is always soluble;
(b) f(s) = t is soluble if and only if t = f(f*(t));
(c) if f(s) = t' is soluble and t' ≤ t, then t' ≤ f(f*(t)).


Proof. (a) Obviously a solution is f*(t).
(b) If f(s) = t is soluble, then one solution is f*(t) by Lemma 2; hence t = f(f*(t)). The converse implication follows from (a).
(c) The solubility of f(s) = t' and (b) imply t' = f(f*(t')). Now the isotonicity of f and f* gives t' = f(f*(t')) ≤ f(f*(t)). □

Corollary 1. If f(s) = t is insoluble, then f(f*(t)) is the maximum right-hand side t', not exceeding t, for which f(s) = t' is soluble.

3. Systems of linear equations over fuzzy algebra

In what follows, N will stand for the set {1, 2, …, n}, M for {1, 2, …, m}, and we shall use the notation ⊕ for maximum and ⊗' for minimum in the partially ordered set S. The set of all n-tuples over S, considered as column vectors, will be denoted by S_n, and the set of all m × n matrices by S(m, n). The following lemma extends 'scalar' residuated functions to 'matrix multiplication' functions.

Lemma 4. Let φ_11, φ_12, …, φ_mn : S → S be residuated functions with respective residuals φ*_11, φ*_12, …, φ*_mn. Define Φ : S_n → S_m and Φ* : S_m → S_n by

[Φ(x)]_i = ⊕_{j=1}^{n} φ_ij(x_j)    for i ∈ M,
[Φ*(y)]_j = ⊗'_{k=1}^{m} φ*_kj(y_k)    for j ∈ N.

Then Φ is residuated with residual Φ*.

Proof. Φ, Φ* are both compositions of isotone functions, therefore they are isotone. Now let y ∈ S_m be arbitrary. Since ⊗' is minimum, we have

⊗'_{k=1}^{m} φ*_kj(y_k) ≤ φ*_ij(y_i)   (4)

for each i ∈ M, j ∈ N. Hence

[Φ(Φ*(y))]_i = ⊕_{j=1}^{n} φ_ij( ⊗'_{k=1}^{m} φ*_kj(y_k) )   (definition of Φ and Φ*)
             ≤ ⊕_{j=1}^{n} φ_ij(φ*_ij(y_i))   (by (4) and isotonicity)
             ≤ y_i   (by residuation).

Therefore Φ(Φ*(y)) ≤ y for all y ∈ S_m. The inequality Φ*(Φ(x)) ≥ x for all x ∈ S_n can be shown similarly. □

Now, fuzzy algebra is the system 𝒮 = (S, ⊕, ⊗), where S = [0, 1], ⊕ = max and ⊗ = min. In what follows, for two vectors b, d ∈ S_m, the inequality b ≤ d means b_i ≤ d_i for all i; b < d means b ≤ d and b ≠ d; b ≪ d stands for b_i < d_i for all i. Define two 'dual' matrix multiplications in fuzzy algebra: for given


P ∈ S(m, l) and Q ∈ S(l, n),

P ⊗ Q = R = (r_ij) ∈ S(m, n)    with r_ij = ⊕_{k=1}^{l} p_ik ⊗ q_kj,
P ⊗' Q = T = (t_ij) ∈ S(m, n)   with t_ij = ⊗'_{k=1}^{l} p_ik ⊗' q_kj.

Lemma 1 together with Lemma 4 now implies that, for a given matrix R ∈ S(m, n), the function f_R : S_n → S_m defined by f_R(x) = R ⊗ x is residuated, with residual f_R* : S_m → S_n defined by f_R*(y) = R^T ⊗' y. From Lemmas 2 and 3 most of the results on the solubility of systems of linear equations over fuzzy algebra can now be deduced. In what follows, for a system of the form R ⊗ x = b, the principal solution will be x*(R, b) = R^T ⊗' b, and we denote by ℛ(R, b) the 'residual' for b, i.e. R ⊗ (R^T ⊗' b). Note that part (a) of the following corollary has already been obtained, maybe in a different form, by several authors; we mention here at least [11, 12, 5, 10, 2] from the extensive literature on the subject. For the group case, the structure (S, ⊕ = max, ⊗) obtained is called max-algebra, and formulations formally identical with those below have been derived in [4] using the respective residuation.

Corollary 2. Let R ∈ S(m, n), b ∈ S_m be given.
(a) R ⊗ x = b is soluble if and only if b = R ⊗ (R^T ⊗' b), and in this case x*(R, b) = R^T ⊗' b is the maximum solution.
(b) ℛ(R, b) = R ⊗ (R^T ⊗' b) is the maximum d such that R ⊗ x = d is soluble and d ≤ b.
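The two matrix products and the principal solution can be transcribed directly from the definitions; a hedged sketch in Python (the helper names and the 2 x 2 data are ours, chosen for illustration only):

```python
# Sketch of (x), (x)' matrix products and the principal solution
# x*(R, b) = R^T (x)' b over fuzzy algebra (S = [0,1], (+) = max, (x) = min).

def otimes_prime(a, b):
    """Scalar residual: a (x)' b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def maxmin(P, Q):
    """(P (x) Q)_ij = max_k min(p_ik, q_kj)."""
    return [[max(min(P[i][k], Q[k][j]) for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def minres(P, Q):
    """(P (x)' Q)_ij = min_k (p_ik (x)' q_kj)."""
    return [[min(otimes_prime(P[i][k], Q[k][j]) for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def principal_solution(R, b):
    """x*(R, b) = R^T (x)' b, with b treated as a column vector."""
    return [row[0] for row in minres(transpose(R), [[bi] for bi in b])]

def residual_vector(R, b):
    """The residual R (x) (R^T (x)' b) of Corollary 2(b)."""
    x = principal_solution(R, b)
    return [row[0] for row in maxmin(R, [[xi] for xi in x])]

R = [[0.5, 0.2], [0.3, 0.6]]      # hypothetical data
b = [0.4, 0.6]                    # chosen so that R (x) x = b is soluble
print(principal_solution(R, b), residual_vector(R, b))
```

For this soluble system the residual vector reproduces b, and the principal solution (0.4, 1) is the maximum solution, as Corollary 2(a) asserts.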

Example 3. Let

R = ( 1    0.2  0.4
      0    0.2  0.2
      0    0.6  0.6
      0.5  0.3  0.4 ),    b = ( 1
                                0.7
                                0.3
                                0.5 ).

Then x*(R, b) = (1, 0.3, 0.3)^T and ℛ(R, b) = (1, 0.2, 0.3, 0.5)^T. Since ℛ(R, b) ≠ b, x* is not a solution and the system is insoluble. Moreover, it remains insoluble for any right-hand side d fulfilling ℛ(R, b) < d ≤ b, i.e. d = (1, α, 0.3, 0.5)^T for 0.2 < α ≤ 0.7.
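The numbers in Example 3 can be replayed mechanically; a small self-contained sketch (the function names are ours; the matrix entries match the vectors listed in Example 4):

```python
# Re-checking Example 3: (x) = min, (+) = max, and op is the scalar
# residual a (x)' b = 1 if a <= b, else b.

def op(a, b):
    return 1.0 if a <= b else b

def x_star(R, b):
    """Principal solution R^T (x)' b."""
    m, n = len(R), len(R[0])
    return [min(op(R[i][j], b[i]) for i in range(m)) for j in range(n)]

def apply_R(R, x):
    """R (x) x with (+) = max and (x) = min."""
    return [max(min(rij, xj) for rij, xj in zip(row, x)) for row in R]

R = [[1.0, 0.2, 0.4],
     [0.0, 0.2, 0.2],
     [0.0, 0.6, 0.6],
     [0.5, 0.3, 0.4]]
b = [1.0, 0.7, 0.3, 0.5]

xs = x_star(R, b)          # principal solution
res = apply_R(R, xs)       # residual vector
print(xs, res, res == b)   # res differs from b, so the system is insoluble
```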

4. Linear independence over fuzzy algebra

Let a set 𝒱 ⊆ S_m, 𝒱 = {v_1, v_2, …, v_n}, be given. A linear combination of these vectors is any expression of the form

(λ_1 ⊗ v_1) ⊕ (λ_2 ⊗ v_2) ⊕ ⋯ ⊕ (λ_n ⊗ v_n)    for λ_1, λ_2, …, λ_n ∈ S.

Definition 2. A set 𝒱 ⊆ S_m is linearly dependent if one of the vectors from 𝒱 can be expressed as a linear combination of the others.

Several different definitions of linear independence over fuzzy algebra may be considered [3]. However, the above definition is the most widely used [6-8], e.g. for solving systems of linear equations, although it has some unpleasant properties ([3], or compare [4] for the group case).


To test the linear dependence of a set 𝒱 containing n vectors, it suffices to solve n systems of linear equations, with a different vector from 𝒱 playing the role of the right-hand side each time. A tidier method for this task in the group case was described in [4] under the name 𝒜-method. Since it uses only the properties of residuation, it is valid mutatis mutandis in fuzzy algebra, so we state the algorithm here without proof.

Algorithm 1: 𝒜-method
Step 1. Denote by A the matrix having the vectors of 𝒱 as its columns: A ∈ S(m, n).
Step 2. Compute A^T ⊗' A and replace the entries on its main diagonal by 0. Denote the new matrix by 𝒜, 𝒜 ∈ S(n, n).
Step 3. Compare each column of A ⊗ 𝒜 with the corresponding column of A. If, say, the jth column of A ⊗ 𝒜 equals the jth column of A, then the jth vector of 𝒱 is a linear combination of the other vectors; the corresponding coefficients are the entries of the jth column of 𝒜. If equality does not occur for any j, then 𝒱 is linearly independent.

Example 4. Let 𝒱 = {v_1, v_2, v_3}, where v_1 = (1, 0, 0, 0.5)^T, v_2 = (0.2, 0.2, 0.6, 0.3)^T and v_3 = (0.4, 0.2, 0.6, 0.4)^T. Then A is the matrix of Example 3, i.e.

A = ( 1    0.2  0.4
      0    0.2  0.2
      0    0.6  0.6
      0.5  0.3  0.4 ),    A^T = ( 1    0    0    0.5
                                  0.2  0.2  0.6  0.3
                                  0.4  0.2  0.6  0.4 ),

and

A^T ⊗' A = ( 1  0.2  0.4
             0  1    1
             0  0.2  1 ),    𝒜 = ( 0  0.2  0.4
                                   0  0    1
                                   0  0.2  0 ),

A ⊗ 𝒜 = ( 0  0.2  0.4
          0  0.2  0.2
          0  0.2  0.6
          0  0.2  0.4 ).

Now we see that only the third column of A ⊗ 𝒜 equals the corresponding column of A, namely v_3. Therefore v_3 = (0.4 ⊗ v_1) ⊕ (1 ⊗ v_2), and this is the only vector from 𝒱 that can be expressed as a linear combination of the others. So 𝒱 is linearly dependent.
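Algorithm 1 is short enough to be transcribed directly; a sketch in Python (function and variable names are ours), reproducing the outcome of Example 4:

```python
# Sketch of the A-method: build A^T (x)' A, zero its diagonal, and compare
# the columns of A (x) script_a with those of A.

def op(a, b):                     # scalar residual a (x)' b
    return 1.0 if a <= b else b

def a_method(vectors):
    """Return {j: coefficients} for every v_j expressible from the others."""
    m, n = len(vectors[0]), len(vectors)
    A = [[vectors[j][i] for j in range(n)] for i in range(m)]  # columns = vectors
    # Step 2: script_a = A^T (x)' A with the main diagonal set to 0
    script_a = [[0.0 if j == k else min(op(A[i][j], A[i][k]) for i in range(m))
                 for k in range(n)] for j in range(n)]
    # Step 3: compare each column of A (x) script_a with the same column of A
    deps = {}
    for j in range(n):
        col = [max(min(A[i][k], script_a[k][j]) for k in range(n))
               for i in range(m)]
        if col == [A[i][j] for i in range(m)]:
            deps[j] = [script_a[k][j] for k in range(n)]
    return deps

v1 = [1.0, 0.0, 0.0, 0.5]
v2 = [0.2, 0.2, 0.6, 0.3]
v3 = [0.4, 0.2, 0.6, 0.4]
deps = a_method([v1, v2, v3])
print(deps)        # v3 = (0.4 (x) v1) (+) (1 (x) v2)
```

Only index 2 (the third vector) appears in the result, with coefficients 0.4 and 1 for v_1 and v_2, matching the conclusion of Example 4.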

5. The inverse problem

The Chebychev distance of two vectors b, d ∈ S_m is defined by δ(b, d) = max{|b_i − d_i|; i ∈ M}; the Chebychev distance of a vector b and a set 𝒦 ⊆ S_m is δ(b, 𝒦) = inf{δ(b, d); d ∈ 𝒦}. We shall call a Δ-approximation of b in 𝒦 any vector d ∈ 𝒦 such that δ(b, d) = Δ. A Chebychev-best approximation of b in 𝒦 is a vector v ∈ 𝒦 (when it exists) with δ(b, v) = δ(b, 𝒦).


In particular, we shall be interested in approximations in the set Sol(R) = {d ∈ S_m; R ⊗ x = d is soluble} for a given R ∈ S(m, n). If R ⊗ x = b is soluble, then b is its own Chebychev-best approximation in Sol(R). A more interesting situation occurs when this is not the case. The problem of finding a Chebychev-best approximation of b in Sol(R), for a given matrix R and a given right-hand side b over fuzzy algebra, was formulated in [9].

Example 5. Pedrycz [9] illustrated his method of 'minimal distortions' by the following data:

R = ( 1    0.4  0.5  0.7
      0.7  0.5  0.3  0.5
      0.2  1    1    0.6
      0.4  0.5  0.5  0.8 ),    b = ( 1
                                     0.7
                                     0.2
                                     0 ).

He found the approximation ξ = (0.7 + ε, 0.7 − ε, 0.5 + ε, 0.5 + ε)^T, which gives the Chebychev error 0.5 + ε. However, this ξ ∉ Sol(R), and δ(ξ, ℛ(R, ξ)) = ε. Moreover, the method of 'minimal distortions' would be difficult to implement on a computer, since as it stands it is not given in precise algorithmic form.

For comparison, let us mention the method used in the group case (compare [4, p. 165]). First ℛ(R, b) is found. If it is equal to b, then b belongs to Sol(R). Otherwise Δ = δ(b, ℛ(R, b)) is computed, and the best approximation is found by increasing ℛ(R, b) componentwise by Δ/2. This approach relies on the fact that scalar and matrix multiplications commute, and it cannot be used in fuzzy algebra, as we illustrate by two examples.

Example 5 (continued). For the matrix given by Pedrycz we have

R^T = ( 1    0.7  0.2  0.4
        0.4  0.5  1    0.5
        0.5  0.3  1    0.5
        0.7  0.5  0.6  0.8 ),    R^T ⊗' b = (0, 0, 0, 0)^T,    ℛ(R, b) = (0, 0, 0, 0)^T.

ℛ(R, b) ≠ b and δ(b, ℛ(R, b)) = 1. When we increase all the entries of ℛ(R, b) by 0.5, the obtained vector d = (0.5, 0.5, 0.5, 0.5)^T belongs to Sol(R), but, as we shall see later, it is not the Chebychev-best approximation of b in Sol(R).

Example 6. Now consider

R = ( 0.3  0.3
      0.8  0.3 ),    b = ( 0.7
                           0.2 ).

We obtain ℛ(R, b) = (0.2, 0.2)^T and hence δ(b, ℛ(R, b)) = 0.5. However, ℛ(R, b) + (0.25, 0.25)^T = (0.45, 0.45)^T ∉ Sol(R).

Recall that the theory of residuation implies that ℛ(R, b) is the best underestimating approximation to b in Sol(R). To obtain a better approximation, some of the entries of b have to be increased, but it is not clear which ones and by how much. Therefore our approach will be to increase all entries by the same amount Δ (until, possibly, some of them reach the upper bound 1) and find the best underestimation for such a right-hand side.


Therefore we define two vectors Lo(b, Δ) and Hi(b, Δ) by

Lo(b, Δ)_i = b_i − Δ if b_i ≥ Δ, and 0 otherwise;
Hi(b, Δ)_i = b_i + Δ if b_i + Δ ≤ 1, and 1 otherwise.

Clearly, any vector d whose Chebychev distance from b is less than or equal to Δ must fulfil Lo(b, Δ) ≤ d ≤ Hi(b, Δ). In such an interval with Δ = δ(b, ℛ(R, b)), some vectors belong to Sol(R). One of them is ℛ(R, b), but the question is: is there any better approximation to b in Sol(R)? To find out, we choose a 'target error' Δ (whose choice will be discussed later) and compute ℛ(R, Hi(b, Δ)). Since this is a best underestimate of Hi(b, Δ) in Sol(R), we have the following lemma.

Lemma 5. Let R ∈ S(m, n) with no zero row, b ∈ S_m and Δ > 0 be given.
(a) If ℛ(R, Hi(b, Δ))_i < Lo(b, Δ)_i in at least one entry i, then there is no Δ-approximation of b in Sol(R).
(b) If Lo(b, Δ) ≤ ℛ(R, Hi(b, Δ)) but equality occurs in at least one entry, then ℛ(R, Hi(b, Δ)) is a Chebychev-best approximation of b in Sol(R), giving Chebychev error δ(b, ℛ(R, Hi(b, Δ))).


Proof. (a) Let d be a Δ-approximation of b in Sol(R). Then Lo(b, Δ) ≤ d ≤ Hi(b, Δ). Corollary 2 implies that d ≤ ℛ(R, Hi(b, Δ)), and this holds in particular for the ith entry, so d_i ≤ ℛ(R, Hi(b, Δ))_i < Lo(b, Δ)_i, a contradiction.
(b) Clearly ℛ(R, Hi(b, Δ)) is an approximation of b in Sol(R); we want to show that it is a Chebychev-best one. Let i be the component with equality Lo(b, Δ)_i = ℛ(R, Hi(b, Δ))_i. Unless Lo(b, Δ)_i = 0, any smaller Δ' would result in the situation described in (a). But Lo(b, Δ)_i = ℛ(R, Hi(b, Δ))_i = 0 cannot occur, because Hi(b, Δ) ≫ 0 for any Δ > 0 and, since R has no zero row, R^T ⊗' Hi(b, Δ) ≫ 0 and so ℛ(R, Hi(b, Δ)) ≫ 0. □

Thus the main idea of the algorithm is simple: we repeatedly compute Lo(b, Δ_k), Hi(b, Δ_k) and ℛ(R, Hi(b, Δ_k)) for a suitable sequence of target errors Δ_k. In each step we compare ℛ(R, Hi(b, Δ_k)) with Lo(b, Δ_k). If the former is strictly greater than the latter, we may try to decrease the target error. If equality occurs in at least one entry, we have got the best approximation. And finally, if there is some 'undershoot', there is no approximation with such an error, and we must return to some previous value of Δ or repeat the computations with Δ increased. The sequence of target errors Δ_k must be chosen in a way ensuring that no possible solution is omitted. Because of this, without any further analysis we could not even ensure that the algorithm will terminate with the correct solution after a finite number of steps. The choice of the sequence of target errors will be discussed later.

Example 5 (conclusion). We already know that b ∉ Sol(R) and δ(b, ℛ(R, b)) = 1. The procedure is summarized in Table 1 for a sequence of target errors chosen to decrease with constant step 0.1. Notice that the actual error produced can be smaller than the target error, as it is in step 1.
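One iteration of this idea, building Lo(b, Δ) and Hi(b, Δ) and comparing the residual of Hi(b, Δ) with Lo(b, Δ) as in Lemma 5, can be sketched in Python. The 2 x 2 data below are hypothetical and chosen by us for illustration, not the Pedrycz example:

```python
# One step of the target-error test (Lemma 5).  op is the scalar residual
# a (x)' b = 1 if a <= b else b; residual_of computes R (x) (R^T (x)' d).

def op(a, b):
    return 1.0 if a <= b else b

def residual_of(R, d):
    m, n = len(R), len(R[0])
    x = [min(op(R[i][j], d[i]) for i in range(m)) for j in range(n)]
    return [max(min(R[i][j], x[j]) for j in range(n)) for i in range(m)]

def lo(b, delta):
    return [bi - delta if bi >= delta else 0.0 for bi in b]

def hi(b, delta):
    return [bi + delta if bi + delta <= 1 else 1.0 for bi in b]

R = [[0.5, 0.2], [0.3, 0.6]]       # hypothetical data
b = [0.9, 0.1]

delta = 0.4
xi  = residual_of(R, hi(b, delta))  # best underestimate of Hi(b, delta)
low = lo(b, delta)
# xi = [0.5, 0.5] >= low = [0.5, 0.0] with equality in the first entry,
# so by Lemma 5(b) xi is a Chebychev-best approximation, with error 0.4.
print(xi, low)
```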


Table 1

k   Δ_k   Lo(b, Δ_k)        Hi(b, Δ_k) = d     R^T ⊗' d              ℛ(R, d) = ξ           δ(b, ξ)
1   0.9   (0.1, 0, 0, 0)    (1, 1, 1, 0.9)     (1, 1, 1, 1)          (1, 0.7, 1, 0.8)      0.8
2   0.7   (0.3, 0, 0, 0)    (1, 1, 0.9, 0.7)   (1, 0.9, 0.9, 0.7)    (1, 0.7, 0.9, 0.7)    0.7
3   0.6   (0.4, 0.1, 0, 0)  (1, 1, 0.8, 0.6)   (1, 0.8, 0.8, 0.6)    (1, 0.7, 0.8, 0.6)    0.6
4   0.5   (0.5, 0.2, 0, 0)  (1, 1, 0.7, 0.5)   (1, 0.7, 0.7, 0.5)    (1, 0.7, 0.7, 0.5)    0.5
5   0.4   (0.6, 0.3, 0, 0)  (1, 1, 0.6, 0.4)   (1, 0.4, 0.4, 0.4)    (1, 0.7, 0.4, 0.4)    0.4
6   0.3   (0.7, 0.4, 0, 0)  (1, 1, 0.5, 0.3)   (0.3, 0.3, 0.3, 0.3)  (0.3, 0.3, 0.3, 0.3)  0.7

In step 6 an undershoot occurs, but in step 5 ξ is strictly greater than Lo(b, Δ_5), so we try to fit something between the errors 0.4 and 0.3. If we do the computations with Δ = 0.4 − ε, where 0 < ε < 0.1, we get

Lo(b, Δ) = (0.6 + ε, 0.3 + ε, 0, 0)^T,    Hi(b, Δ) = (1, 1, 0.6 − ε, 0.4 − ε)^T,
ℛ(R, Hi(b, Δ)) = (0.4 − ε, 0.4 − ε, 0.4 − ε, 0.4 − ε)^T.

The obtained solution is not in the prescribed interval; in fact the error produced is 0.6 + ε. Hence 0.4 is the least possible error. However, from the algorithmic point of view it is not clear what is to be done if we do not get an 'undershoot' for any ε-decrease in the target error.


6. A closer look at ξ(Δ)

In the proposed method for solving the Pedrycz problem we repeatedly compute ℛ(R, Hi(b, Δ)) = R ⊗ (R^T ⊗' Hi(b, Δ)). Since R and b are constant throughout the process, what we get is in fact a vector function of one variable Δ, ranging from 0 to 1. Let us denote the individual components of this function by

ξ_i(Δ) = [ℛ(R, Hi(b, Δ))]_i.

In the algorithm we look for the situation ξ_i(Δ) ≤ Lo(b, Δ)_i for some i. Lo(b, Δ)_i is a continuous function, decreasing at unit rate on the interval (0, b_i) and equal to 0 on the interval (b_i, 1). The behaviour of the functions ξ_i(Δ) is more complicated.

Definition 3. A function f(x) : [0, 1] → [0, 1] is piecewise linear if there exist a finite number of points 0 = x_1 ≤ x_2 ≤ ⋯ ≤ x_t = 1 and coefficients p_k, q_k such that f(x) = p_k x + q_k for x ∈ (x_k, x_{k+1}). A point x_k will be called a corner if f(x) is continuous at x_k and p_{k−1} ≠ p_k.

The following lemma summarizes some important properties of the function ξ* defined in (2).

Lemma 6. The function ξ* defined in (2) is piecewise linear and non-decreasing, with the only possible slopes 0 and 1 and at most one discontinuity point, at x = r; moreover, it is continuous from above everywhere.

A typical appearance of ξ*(x) is depicted in Fig. 1.

Lemma 7. Let R ∈ S(m, n), b ∈ S_m be given. For every j, the function x*_j(Δ) = [R^T ⊗' Hi(b, Δ)]_j is piecewise linear. If Δ_0 is a discontinuity point or a corner, then Δ_0 = 1 − b_k for some k ∈ M, or Δ_0 = r_lj − b_l for some l ∈ M, j ∈ N. Moreover, the slopes of all linear pieces are either 1 or 0.

Proof. This form of the function is implied by the behaviour of the 'scalar' functions ξ*(x), by the continuity of the operator ⊗', and by the fact that the result of the ⊗' operation is always equal to one of its operands. □

Fig. 1. A typical graph of ξ*(x).


Similarly, since the operators ⊕ and ⊗' are continuous and the result is always equal to one of the operands, we also have the following lemma.

Lemma 8. If R ∈ S(m, n), b ∈ S_m are given, then for every i:
(i) ξ_i(1) = max{r_ij; j ∈ N};
(ii) ξ_i(0) = [ℛ(R, b)]_i;
(iii) ξ_i(Δ) is piecewise continuous and continuous from above everywhere; if Δ_0 is a discontinuity point, then there exist k ∈ M, j ∈ N such that Δ_0 = r_kj − b_k;
(iv) ξ_i(Δ) is piecewise linear, with slopes 1 or 0; if Δ_0 is a corner, then Δ_0 = r_kj − b_k or Δ_0 = 1 − b_l for some k, l ∈ M and j ∈ N.

Theorem 1. Let R ∈ S(m, n) and b ∈ S_m be given. Then

δ(b, Sol(R)) = min{Δ; ξ(Δ) ≥ Lo(b, Δ)}.

Proof. If d ∈ Sol(R) and δ(b, d) = Δ, then d ≤ Hi(b, Δ) and by Corollary 2 also d ≤ ℛ(R, Hi(b, Δ)) = ξ(Δ). On the other hand, d ≥ Lo(b, Δ), therefore δ(b, Sol(R)) ≥ min{Δ; ξ(Δ) ≥ Lo(b, Δ)}. Moreover, since the functions Lo(b, Δ)_i are continuous and the ξ_i(Δ) are continuous from above, the minimum is attained. □

The graphs of the functions ξ_i(Δ) for the example given by Pedrycz are in Fig. 2, together with the graphs of the corresponding Lo(b, Δ)_i.

From this discussion we see that it is sufficient to compute the residuation for target errors equal to the potential discontinuity points or corners. As soon as we find a target error Δ that produces equality Lo(b, Δ)_i = ℛ(R, Hi(b, Δ))_i in at least one component, we have a precise solution. Otherwise we find two consecutive target errors Δ_r < Δ_{r+1} such that there is a gap between ξ(Δ_{r+1}) and Lo(b, Δ_{r+1}), and Δ_r produces an undershoot in a coordinate i. We have to decide whether ξ_i(Δ) is continuous from below at Δ_{r+1}. If it is, then it is easy to compute where Lo(b, Δ)_i and ξ_i(Δ) intersect, bearing in mind that in the interval (Δ_r, Δ_{r+1}) there is neither a discontinuity point nor a corner. If ξ_i(Δ) is discontinuous from below at Δ_{r+1}, it is sufficient to compute the value of ξ_i(Δ) at the midpoint of the interval (Δ_r, Δ_{r+1}): since the slope of ξ_i(Δ) is either 1 or 0, we can easily obtain the limiting value of ξ_i(Δ) for Δ approaching Δ_{r+1} from the left. And again, it is now a matter of routine to compute the intersection of Lo(b, Δ)_i and ξ_i(Δ). So the algorithm is as follows.

Algorithm 2 - the inverse problem

Input: Matrix R ∈ S(m, n), m-tuple b ∈ S_m.
Output: A Chebychev-best approximation of b in Sol(R) and δ(b, Sol(R)).
Step 1. Compute ℛ(R, b). If ℛ(R, b) = b then stop: b ∈ Sol(R).
Step 2. Compute and sort into an increasing sequence {Δ_r} the values r_lj − b_l > 0 and 1 − b_k for k, l ∈ M, j ∈ N. (Comment: the potential discontinuity points and corners.)
Step 3. By binary search determine the interval (Δ_r, Δ_{r+1}) such that ξ_i(Δ_{r+1}) ≥ Lo(b, Δ_{r+1})_i for all i ∈ M, but ξ_i(Δ_r) ≤ Lo(b, Δ_r)_i for at least one i. Denote the set of such i's by I.
Step 4. If ξ(Δ_{r+1}) ≥ Lo(b, Δ_{r+1}) but ξ_i(Δ_{r+1}) = Lo(b, Δ_{r+1})_i for at least one i, output ℛ(R, Hi(b, Δ_{r+1})) with δ(b, Sol(R)) = Δ_{r+1} and stop. Otherwise set Δ = (Δ_r + Δ_{r+1})/2 and compute ℛ(R, Hi(b, Δ)).


Fig. 2. Graphs of the functions ξ_i(Δ) and Lo(b, Δ)_i for the Pedrycz example.

Now for all i ∈ I denote

e = ℛ(R, Hi(b, Δ))_i − ℛ(R, Hi(b, Δ_r))_i,
ω_1 = ℛ(R, Hi(b, Δ_r))_i,    ω_2 = ω_1 + 2e,
μ_1 = Lo(b, Δ_r)_i,    μ_2 = Lo(b, Δ_{r+1})_i,

and compute

δ_i = Δ_r + (Δ_{r+1} − Δ_r)(μ_1 − ω_1) / ((μ_1 − ω_1) + (ω_2 − μ_2)).

Output ℛ(R, Hi(b, δ)), where δ = max{δ_i; i ∈ I} = δ(b, Sol(R)), and stop.
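The interpolation of Step 4 can be traced on a small instance; a sketch with hypothetical data chosen by us (not the paper's example), where the bracketing targets Δ_r = 0.2 (undershoot in coordinate 0) and Δ_{r+1} = 0.5 (gap) were found by inspection:

```python
# Tracing Steps 3-4 of Algorithm 2 on hypothetical data.  The interpolation
# formula recovers the exact minimum Chebychev error 0.4, which lies strictly
# between the two bracketing breakpoints.

def op(a, b):
    return 1.0 if a <= b else b

def residual_of(R, d):
    m, n = len(R), len(R[0])
    x = [min(op(R[i][j], d[i]) for i in range(m)) for j in range(n)]
    return [max(min(R[i][j], x[j]) for j in range(n)) for i in range(m)]

def lo(b, t):
    return [bi - t if bi >= t else 0.0 for bi in b]

def hi(b, t):
    return [bi + t if bi + t <= 1 else 1.0 for bi in b]

R = [[0.5, 0.2], [0.3, 0.6]]     # hypothetical data
b = [0.9, 0.1]
d_r, d_r1, i = 0.2, 0.5, 0       # bracketing target errors, undershoot coord

mid = (d_r + d_r1) / 2
e   = residual_of(R, hi(b, mid))[i] - residual_of(R, hi(b, d_r))[i]
w1  = residual_of(R, hi(b, d_r))[i]
w2  = w1 + 2 * e                 # left-hand limit at d_r1 (slope is 0 or 1)
m1  = lo(b, d_r)[i]
m2  = lo(b, d_r1)[i]
delta_i = d_r + (d_r1 - d_r) * (m1 - w1) / ((m1 - w1) + (w2 - m2))
print(delta_i)                   # 0.4 up to rounding
```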


Theorem 2. The algorithm correctly finds a Chebychev-best approximation of b in Sol(R) in polynomial time.

Proof. The correctness of the algorithm is implied by the preceding discussion. For its polynomiality, realize that the number of potential discontinuity points or corners is at most O(n³), and for each of them the complexity of the computation is again at most O(n³). □

Example 7. We illustrate the algorithm by its application to

R = ( 0.5  0.9
      0.6  0.3
      0.2  0.7 ),    b = ( 0.1
                           0
                           0.8 ).

We compute

R^T = ( 0.5  0.6  0.2
        0.9  0.3  0.7 ),    R^T ⊗' b = (0, 0)^T,    ℛ(R, b) = (0, 0, 0)^T.

ℛ(R, b) ≠ b and δ(ℛ(R, b), b) = 0.8, therefore we continue. The potential discontinuity points and corners, sorted increasingly, are 0, 0.2, 0.3, 0.4, 0.6, 0.8, 0.9. With binary search we obtain the sequence of computations and results shown in Table 2, and now we get

δ_3 = 0.3 + (0.4 − 0.3)(0.5 − 0.4) / ((0.5 − 0.4) + (0.5 − 0.4)) = 0.35.

So in the end we obtain ℛ(R, Hi(b, 0.35)) = (0.45, 0.35, 0.45)^T with Chebychev distance δ(b, Sol(R)) = 0.35.

In conclusion, we remark that our aim has been specifically to present an exact analysis and a polynomial-time algorithm for the Pedrycz problem. We think it quite probable that the computational complexity can be improved.

Table 2

k   Δ_k   Lo(b, Δ_k)    Hi(b, Δ_k) = d   R^T ⊗' d     ℛ(R, d) = ξ      Result
1   0.4   (0, 0, 0.4)   (0.5, 0.4, 1)    (0.4, 0.5)   (0.5, 0.4, 0.5)  Gap
2   0.3   (0, 0, 0.5)   (0.4, 0.3, 1)    (0.3, 0.4)   (0.4, 0.3, 0.4)  Undershoot for i = 3
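The two table rows and the final approximation can be replayed with a short script (function names are ours; R and b as in the example above):

```python
# Replaying the computation behind Table 2.  op is the scalar residual
# a (x)' b = 1 if a <= b else b; residual_of computes R (x) (R^T (x)' d).

def op(a, b):
    return 1.0 if a <= b else b

def residual_of(R, d):
    m, n = len(R), len(R[0])
    x = [min(op(R[i][j], d[i]) for i in range(m)) for j in range(n)]
    return [max(min(R[i][j], x[j]) for j in range(n)) for i in range(m)]

def hi(b, t):
    return [min(bi + t, 1.0) for bi in b]

R = [[0.5, 0.9], [0.6, 0.3], [0.2, 0.7]]
b = [0.1, 0.0, 0.8]

step1 = residual_of(R, hi(b, 0.4))    # gap
step2 = residual_of(R, hi(b, 0.3))    # undershoot in the third entry
best  = residual_of(R, hi(b, 0.35))   # approximately (0.45, 0.35, 0.45)
print(step1, step2, best)
```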


Acknowledgement

This research was carried out during a visit of Dr. K. Cechlárová to Birmingham University, financed under the European Community's Action for Cooperation in Sciences and Technology with Central and Eastern European Countries, Proposal No. 10342. The authors would like to express their gratitude for this sponsorship.

References

[1] T.S. Blyth and M.F. Janowitz, Residuation Theory (Pergamon, Oxford, 1972).
[2] K. Cechlárová, Unique solvability of max-min fuzzy equations and strong regularity of matrices over fuzzy algebra, Fuzzy Sets and Systems, submitted.
[3] K. Cechlárová and J. Plávka, Linear independence in bottleneck algebras, submitted.
[4] R.A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Econ. and Math. Systems, Vol. 166 (Springer, Berlin, 1979).
[5] A. di Nola, On solving relational equations in Brouwerian lattices, Fuzzy Sets and Systems 34 (1990) 365-376.
[6] Guo Si-Zhong, Further contributions to the study of finite fuzzy relations, Fuzzy Sets and Systems 26 (1988) 93-104.
[7] M. Higashi and G.J. Klir, Resolution of finite fuzzy relation equations, Fuzzy Sets and Systems 13 (1984) 65-82.
[8] Li Jian-Xin, The smallest solution of max-min fuzzy equations, Fuzzy Sets and Systems 41 (1990) 317-327.
[9] W. Pedrycz, Inverse problem in fuzzy relational equations, Fuzzy Sets and Systems 36 (1990) 277-291.
[10] K. Peeva, Fuzzy linear systems, Fuzzy Sets and Systems 49 (1992) 339-355.
[11] E. Sanchez, Resolution of composite relation equations, Inform. and Control 30 (1976) 38-48.
[12] K. Zimmermann, Extremal Algebra (Ekon. Ústav ČSAV, Praha, 1976) (in Czech).