Numerical Experience with Parallel Algorithms for Solving the BMI Problem


2c-054

Copyright © 1996 IFAC 13th Triennial World Congress, San Francisco, USA.

NUMERICAL EXPERIENCE WITH PARALLEL ALGORITHMS FOR SOLVING THE BMI PROBLEM

Shih-Mim Liu
G. P. Papavassilopoulos
Dept. of Electrical Eng. - Systems, University of Southern California, Los Angeles, CA 90089-2563

Abstract: This paper presents numerical computations for solving the BMI problem. Four global algorithms, including two parallel algorithms, are employed to solve the BMI problem by a sequence of concave minimization problems or d.c. programs via concave programming. The parallel algorithms, based on a suitable partition of an initial enclosing polyhedron, are more efficient than the serial ones. Computational experiences are reported for randomly generated BMI problems of small size.

Keywords: BMI Problem, Concave Minimization, d.c. Programming, Global Optimization, Parallel Algorithm.

1. INTRODUCTION

The Bilinear Matrix Inequality (BMI) has been introduced by Safonov, et al. (1994) as a simple and flexible framework for approaching robust control system synthesis problems. Owing to the simplicity and generality of the BMI formulation, an efficient and reliable BMI solver is in order. The problem is hard essentially due to its nonconvex character. Let us state the BMI problem (Safonov, et al., 1994; Goh, et al., 1994b): Let

F(x, y) = Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} x_i y_j F_{i,j}    (1)

where F_{i,j} = F_{i,j}^T ∈ ℝ^{n_z×n_z} for i ∈ {1, ..., n_x}, j ∈ {1, ..., n_y}. The map F: ℝ^{n_x} × ℝ^{n_y} → ℝ^{n_z×n_z} is obviously bilinear, but not jointly convex in (x, y). We want to find (x*, y*) ∈ ℝ^{n_x} × ℝ^{n_y} such that F(x*, y*) < 0, i.e. find (x*, y*) in the feasible set of the BMI (1), which is

S_B = {(x, y) ∈ ℝ^{n_x} × ℝ^{n_y} : F(x, y) < 0}    (2)

In (Goh, et al., 1994b), various properties of the BMI problem are investigated and several local optimization approaches are discussed. However, local optimization methods cannot solve the BMI problem in general because of the nonconvex character of the BMI problem. To explore the feasible set (2), a global optimization method may be necessary. Recently, two global optimization methods (Safonov and Papavassilopoulos, 1994; Goh, et al., 1994a) have been proposed for solving (1). Essentially, the approach in (Safonov and Papavassilopoulos, 1994), finding the diameter of a set defined as an intersection of ellipsoids, is equivalent to maximizing a convex function over a convex constraint set, a problem on which numerous authors (e.g. Pardalos and Rosen, 1986; Pardalos and Rosen, 1987; Horst and Tuy, 1993; Horst, et al., 1991) have worked. Although there exist several efficient algorithms for solving the concave minimization problem, for example (Horst, et al., 1991; Horst and Thoai, 1989; Thieu, 1989; Horst and Tuy, 1993), they do not seem to be appropriate for minimizing the problem described in (Safonov and Papavassilopoulos, 1994). A simple numerical BMI example is given in (Goh, et al., 1994a), which provides a branch and bound global optimization algorithm by minimizing the BMI eigenvalue problem.

In this paper, numerical results and comparisons of four algorithms for solving the BMI problem are reported. As in (Safonov and Papavassilopoulos, 1994), one can solve a sequence of concave minimization problems or d.c. (difference of convex functions) programs instead of solving (1). The first algorithm considered for such concave minimization problems is the method proposed by Hoffman (1981). To accelerate the speed of convergence, a parallel implementation discussed in (Liu and Papavassilopoulos, 1994) was applied. Also, another parallel algorithm with a different partition of the convex constraint set was employed. The numerical experiments show that the parallel methods seem to be most promising. For solving the d.c. problems, the method of (Horst, et al., 1991), where only linear programming problems have to be solved, seems to be inefficient because of the convex constraint set generated by the ellipsoids. Here, the algorithm presented in (Liu and Papavassilopoulos, 1995b) was used to solve such d.c. problems. The results of the test problems indicate that the approach in (Liu and Papavassilopoulos, 1995b) is less efficient than the first one on the average, since the former needs one more variable and has a different outer polyhedron. These algorithms are guaranteed to find a solution of (1) in a finite number of iterations if (2) is non-empty and a suitable tolerance (ε > 0) is prescribed. In other words, for a prescribed number ε > 0, they can be terminated in a finite number of iterations if there are no feasible solutions to be found.
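Since the map (1) is linear in x for fixed y and linear in y for fixed x, both the map and the feasibility test (2) are straightforward to evaluate numerically. The following sketch illustrates this; the dimensions and the matrices F_ij are hypothetical random data, not taken from the paper:

```python
import numpy as np

# Sketch of the biaffine matrix map (1): F(x, y) = sum_i sum_j x_i y_j F_ij,
# with symmetric F_ij.  Dimensions nx = ny = nz = 2 are chosen arbitrarily.
rng = np.random.default_rng(0)
nx, ny, nz = 2, 2, 2
Fij = [[None] * ny for _ in range(nx)]
for i in range(nx):
    for j in range(ny):
        M = rng.standard_normal((nz, nz))
        Fij[i][j] = (M + M.T) / 2          # enforce F_ij = F_ij^T

def F(x, y):
    """Evaluate the bilinear matrix map (1)."""
    return sum(x[i] * y[j] * Fij[i][j] for i in range(nx) for j in range(ny))

x = rng.standard_normal(nx)
y = rng.standard_normal(ny)

# F is linear in x for fixed y (and vice versa), but not jointly convex:
assert np.allclose(F(2.0 * x, y), 2.0 * F(x, y))
assert np.allclose(F(x, 3.0 * y), 3.0 * F(x, y))

# Membership in the feasible set S_B of (2) is a matrix-definiteness test:
feasible = np.max(np.linalg.eigvalsh(F(x, y))) < 0
print("F(x, y) < 0 ?", feasible)
```

The feasibility test uses the fact that a symmetric matrix is negative definite exactly when its largest eigenvalue is negative; this same observation reappears in step (2) of Algorithm GAB later in the paper.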

2. GLOBAL OPTIMIZATION METHODS

Consider A_R = {A_r : A_r = A(z_r) ∈ ℝ^{n_x×n_y}}, where

A_r = [ z_r^T F_{1,1} z_r, z_r^T F_{1,2} z_r, ..., z_r^T F_{1,n_y} z_r ; z_r^T F_{2,1} z_r, z_r^T F_{2,2} z_r, ..., z_r^T F_{2,n_y} z_r ; ... ; z_r^T F_{n_x,1} z_r, ..., z_r^T F_{n_x,n_y} z_r ],

with z_r ∈ ℝ^{n_z} and ||z_r|| = 1, for r = 1, 2, ... (cf. Goh, et al. (1994b) and references therein). ||·|| denotes the Euclidean norm and V(P) denotes the vertex set of P. The following proposition is a straightforward application of Farkas' lemma.

Proposition 1. For a fixed y ∈ ℝ^{n_y}, let c_r = A_r y, where A_r ∈ A_R (r = 1, 2, ..., l). There exists an x ∈ ℝ^{n_x} such that x^T A_r y < 0 for r = 1, 2, ..., l if and only if 0 does not belong to the convex hull of {c_r, r = 1, ..., l}.

Actually, an x ∈ ℝ^{n_x} in Proposition 1 can be easily calculated by the methods described in (Gilbert, 1966; Wolfe, 1976; Hauser, 1986). However, it seems difficult to find a y satisfying the condition in Proposition 1 since there is no general rule for choosing such y.

2.1. BMI and Concave Minimization

From (Safonov and Papavassilopoulos, 1994), the BMI (1), (2) is equivalent to

min_{x,y} J(x, y)  s.t.  (x, y) ∈ C    (3)

that is,

min_{x,y} J(x, y)  s.t.  [x; y]^T [ I, (1/ρ)A(z) ; (1/ρ)A(z)^T, I ] [x; y] ≤ 1,  ∀z ∈ ℝ^{n_z}, ||z|| = 1    (4)

where J(x, y) ≜ −(||x||² + ||y||²) and ρ is a real positive number such that

[ I, (1/ρ)A(z) ; (1/ρ)A(z)^T, I ] > 0    (5)

and [A(z)]_{ij} = z^T F_{i,j} z, so that A(z) ∈ ℝ^{n_x×n_y}. Thus, (3) leads to minimizing a concave function subject to an infinite number of quadratic constraints parameterized by z. Obviously, the convex set

C ≜ { u ∈ ℝ^{n_x+n_y} : u^T [ I, (1/ρ)A(z) ; (1/ρ)A(z)^T, I ] u ≤ 1,  z ∈ ℝ^{n_z}, ||z|| = 1 }    (6)

is the intersection of ellipsoids centered at the origin in ℝ^{n_x+n_y}, where u = (x, y)^T. Actually, it is not necessary to find a global optimum of (4), because all we need is a point (x, y) with J(x, y) < −1.

Lemma 1 (Safonov and Papavassilopoulos, 1994). There exists a point (x, y) ∈ S_B if and only if J(x, y) < −1 for problem (4).

2.2. BMI and d.c. Programming

Since F(x, y) < 0 ⟺ F(x, λy) < 0 for all λ > 0, without loss of generality, problem (3) can also be written as

min_{x,y} φ(x, y)  s.t.  (x, y) ∈ B    (7)

where

φ(x, y) = max_{||z||=1} [x; y]^T [ 0, A(z) ; A(z)^T, 0 ] [x; y]    (8)

and B = {(x, y) ∈ ℝ^{n_x+n_y} : ||(x, y)|| ≤ γ}, γ a positive real number. Note that (8) is an indefinite quadratic form. Let f(x, y) and g(x, y) be two convex functions over ℝ^{n_x+n_y} such that φ(x, y) = f(x, y) − g(x, y). Thus, by introducing an additional variable, the following concave minimization problem results:

min_{x,y,v} v − g(x, y)  s.t.  (x, y) ∈ B,  f(x, y) − v ≤ 0    (9)

Also, by introducing two additional variables, problem (7) can be transformed into a canonical d.c. program (see Horst and Tuy (1993) and references therein) as follows:

min v  s.t.  (x, y) ∈ B,  f(x, y) − s ≤ 0,  g(x, y) − s + v ≥ 0    (10)

Similarly, it is not necessary to globally solve (7), since any point (x, y) with φ(x, y) < 0 for (7) will satisfy (3).

Lemma 2. There exists a point (x, y) ∈ S_B if and only if φ(x, y) < 0 for problem (7).

3. THE ALGORITHMS

In this section, three algorithms (one serial and two parallel) for solving a sequence of concave minimization problems of the type (4), and one serial algorithm for solving a sequence of d.c. programs of the type (7), are introduced.
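One concrete way to obtain the convex functions f and g required in (9) and (10) is to shift the indefinite quadratic (8) by a multiple of ||u||². The sketch below uses hypothetical random data for A(z) at a fixed z; the paper does not state which decomposition its implementation uses:

```python
import numpy as np

# d.c. split of the indefinite quadratic phi(u) = u^T M u,
# M = [[0, A], [A^T, 0]], into phi = f - g with f, g convex:
#   f(u) = u^T (M + c I) u,   g(u) = c ||u||^2,   c >= -lambda_min(M).
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))                 # hypothetical A(z)
M = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])

c = max(0.0, -np.linalg.eigvalsh(M).min())      # shift making M + cI PSD

def phi(u):
    return u @ M @ u

def f(u):
    return u @ (M + c * np.eye(4)) @ u          # convex part

def g(u):
    return c * (u @ u)                          # convex part subtracted

u = rng.standard_normal(4)
assert np.isclose(phi(u), f(u) - g(u))          # phi = f - g
assert np.linalg.eigvalsh(M + c * np.eye(4)).min() >= -1e-9  # f is convex
print("d.c. split verified, c =", round(c, 4))
```

Any c at least as large as −λ_min(M) works; larger shifts keep the split valid but make both parts more strongly curved.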


3.1. Serial Algorithms

Algorithm 1 (cf. Liu and Papavassilopoulos, 1994; Hoffman, 1981): Given A_r = A(z_r) ∈ A_R (r = 1, 2, ..., l), where z_1, z_2, ..., z_l are randomly generated. Let u_0 be the origin and construct a simplex P_0 ⊃ ∩_{r=1}^{l} Q_r, where the Q_r are ellipsoids enclosing C. Set t = 0.

Step 1: Choose u_t by minimizing J(u), u ∈ V(P_t). If u_t ∈ ∩_{r=1}^{l} Q_r, then u_t is an optimal solution with optimal value J(u_t). Otherwise, solve for α such that w_t = u_t + α(u_0 − u_t) lies on the boundary of ∩_{r=1}^{l} Q_r. If J(w_t) < −1, stop. Otherwise, set

h_t(u) = ∇H^T(w_t)(u − w_t)    (11)

where H is the constraint function of an ellipsoid Q_r (r = 1, ..., l) active at w_t, i.e. H(w_t) = 0.

Step 2: Set P_{t+1} = P_t ∩ {u ∈ ℝ^{n_x+n_y} : h_t(u) ≤ 0}. Compute V(P_{t+1}). Let t ← t + 1, go to Step 1.

The following algorithm solves (7) via concave programming, i.e. it solves (9). Let D = {(x, y, v) ∈ ℝ^{n_x+n_y+1} : (x, y) ∈ B, f(x, y) ≤ v} be the feasible set of (9). Given a simplex S ⊃ B with vertex set V(S), a prism P_0 ⊃ D can be defined by

P_0 = {(x, y, v) ∈ ℝ^{n_x+n_y+1} : (x, y) ∈ S, v_B ≤ v ≤ v_T}    (12)

where v_B = min{f(u) : u ∈ B} and v_T = max{f(u) : u ∈ V(S)}. The prism P_0 has n + 1 vertical lines (parallel to the v-axis) which pass through the n + 1 vertices of S respectively.

Algorithm 2 (cf. Liu and Papavassilopoulos, 1995b): Given A_r = A(z_r) ∈ A_R (r = 1, 2, ..., l), where z_1, z_2, ..., z_l are arbitrarily generated. Let w_0 = (u_0, v_0) be a strictly interior point of D. Construct a prism P_0 ⊃ D. Set t = 0.

Step 1: Choose w_t = (u_t, v_t) by minimizing the objective v − g(u) of (9) over (u, v) ∈ V(P_t). If w_t ∈ D, then w_t is an optimal solution. Otherwise, solve for α such that w̄_t = w_t + α(w_0 − w_t) lies on the boundary of D. If the objective value at w̄_t is negative, stop. Otherwise, set

h_t(u, v) = ∇H^T(w̄_t)[(u, v) − w̄_t]    (13)

where H(u, v) = f(u) − v and H(w̄_t) = 0.

Step 2: Set P_{t+1} = P_t ∩ {(u, v) ∈ ℝ^{n_x+n_y+1} : h_t(u, v) ≤ 0}. Compute V(P_{t+1}). Let t ← t + 1, go to Step 1.

3.2. Parallel Algorithms

Here, two parallel versions of Algorithm 1 are employed.

Algorithm 3: Apply the same method of partition as in (Liu and Papavassilopoulos, 1994) and solve each subproblem by Algorithm 1. All N subproblems are solved in parallel. However, the partition in Algorithm 3 may cause overlap between the subproblems. The subdivision of the enclosing polyhedron P_0 shown in Figure 1 has no overlap: each region in Figure 1 is a simplex defined by n_x + n_y + 1 linear constraints.

Algorithm 4: Execute the same procedure as Algorithm 3, except using the partition described in Figure 1. In order to balance the computational load of each node in Algorithm 4, partitioning the enclosing polyhedron P_0 into N simplices of equal volume is suggested. In this paper, N (= n_x + n_y + 1) processors are used in parallel algorithms 3 and 4 for the numerical computations.

Figure 1: An example of dividing P_0 ⊂ ℝ² into 3 or 4 regions of equal volume.
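The paper's exact partition is given in Figure 1. One simple scheme with the stated property, shown here as a sketch and not necessarily the authors' construction, joins the centroid of a simplex in ℝ² to its vertices, producing n_x + n_y + 1 = 3 sub-simplices of equal volume:

```python
import numpy as np

# Split a simplex P0 in R^2 into 3 sub-simplices of equal area by joining
# the centroid to the vertices, as a load-balancing partition for the
# parallel algorithm: each processor gets one sub-simplex.
P0 = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # hypothetical vertices
centroid = P0.mean(axis=0)

def area(tri):
    a, b, c = tri
    # Shoelace formula for a triangle's area.
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (b[1] - a[1]) * (c[0] - a[0]))

# Sub-simplex i is spanned by edge (v_i, v_{i+1}) and the centroid.
subs = [np.array([P0[i], P0[(i + 1) % 3], centroid]) for i in range(3)]
areas = [area(t) for t in subs]

assert np.isclose(sum(areas), area(P0))               # the pieces cover P0
assert np.allclose(areas, area(P0) / 3)               # equal volumes
print("sub-areas:", areas)
```

The equal-area property follows from the centroid lying one third of the way from each edge to the opposite vertex, so each piece inherits exactly a third of the height of P0 over its base edge.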

3.3. A Global Algorithm for the BMI Problem

Applying the above algorithms, the following algorithm is used to solve the BMI problem.

Algorithm GAB: Let A_r = A(z_r) ∈ A_R (r = 1, 2, ..., l), where z_1, z_2, ..., z_l are randomly generated with ||z_r|| = 1. Set k = 1.

Iteration k: Apply Algorithms 1, 2, 3, 4 to solve

min_{x,y} J^{(k)}(x, y)  s.t.  [x; y]^T [ I, (1/ρ_r)A_r ; (1/ρ_r)A_r^T, I ] [x; y] ≤ 1,  r = 1, 2, ..., l    (14)

or

min_{x,y} φ^{(k)}(x, y)  s.t.  ||(x, y)|| ≤ γ    (15)

where J^{(k)}(x, y) ≜ −(||x||² + ||y||²), ρ_r is a positive real number such that [ I, (1/ρ_r)A_r ; (1/ρ_r)A_r^T, I ] > 0, and

φ^{(k)}(x, y) = max_{r=1,2,...,l} [x; y]^T [ 0, A_r ; A_r^T, 0 ] [x; y].

(1) If a globally minimizing pair (x^{(k)}, y^{(k)}) with J^{(k)} ≥ −1 or φ^{(k)} ≥ 0 is found by Algorithms 1, 3, 4 or 2, then the BMI problem has no feasible solution.

(2) If a point (x^{(k)}, y^{(k)}) with J^{(k)} < −1 or φ^{(k)} < 0 has been obtained after t^{(k)} iterations in solving the above problems, then find z_{l+1} which solves

max_{||z||=1}  z^T [ Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} x_i^{(k)} y_j^{(k)} F_{i,j} ] z    (16)

Let s be the optimal value of (16). If s < 0, then stop: (x^{(k)}, y^{(k)}) is a feasible solution of the BMI. Otherwise, form A_{l+1} = A(z_{l+1}) ∈ A_R and set k ← k + 1, l ← l + 1.
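Step (2) amounts to an eigenvalue computation: the optimal value s of (16) is the largest eigenvalue of the symmetric matrix F(x^(k), y^(k)), attained at a corresponding unit eigenvector z_{l+1}. A sketch with a hypothetical stand-in matrix:

```python
import numpy as np

# The maximizer of (16), max_{||z||=1} z^T F z for symmetric F, is a unit
# eigenvector of the largest eigenvalue; the maximum itself is that
# eigenvalue.  Fxy below is random stand-in data for F(x^(k), y^(k)).
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
Fxy = (M + M.T) / 2

evals, evecs = np.linalg.eigh(Fxy)      # eigenvalues in ascending order
s, z_next = evals[-1], evecs[:, -1]     # s and z_{l+1}

assert np.isclose(z_next @ Fxy @ z_next, s)      # z_{l+1} attains the max
assert np.isclose(np.linalg.norm(z_next), 1.0)   # ||z_{l+1}|| = 1
# s < 0  <=>  F(x^(k), y^(k)) < 0, i.e. (x^(k), y^(k)) solves the BMI.
print("s =", s, "feasible:", s < 0)
```

This also makes the stopping test transparent: s < 0 certifies F(x^(k), y^(k)) < 0, while s > 0 yields the new cut direction z_{l+1}.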

Remarks:
• t^{(k)} means that Algorithms 1, 2, 3, 4 require t iterations to find a point (x, y) such that J^{(k)}(x, y) < −1 or φ^{(k)}(x, y) < 0 at iteration k.
• At iteration k, it is not necessary to restart Algorithms 1, 2, 3, 4 to solve (14), (15); all algorithms build on the information obtained at iteration k − 1 to search for the solution.
• In parallel algorithms 3 and 4, assign one subproblem to each processor. If several processors find (x, y) with J^{(k)}(x, y) < −1, then choose the point (x^{(k)}, y^{(k)}) having the smallest objective function value.

Lemma 3. At iteration k, if s > 0 for z_{l+1} in (16), then the ellipsoid generated by A_{l+1} strictly separates (x^{(k)}, y^{(k)}) from ∩_{r=1}^{l} Q_r, i.e.

[x^{(k)}; y^{(k)}]^T [ I, (1/ρ_{l+1})A_{l+1} ; (1/ρ_{l+1})A_{l+1}^T, I ] [x^{(k)}; y^{(k)}] > 1.

In practice, Algorithms 1, 2, 3, 4 would stop when U_k − L_k ≤ ε, where U_k is an upper bound and L_k a lower bound of J^{(k)} (respectively φ^{(k)}), and ε > 0 is prescribed.

Theorem 1. If Algorithms 1, 2, 3, 4 are not forced to stop after a finite number of steps (ε = 0), a feasible solution of the BMI can be found whenever the feasible set of the BMI problem is non-empty.

Proof: See (Liu and Papavassilopoulos, 1994; Hoffman, 1981; Liu and Papavassilopoulos, 1995b). □
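The boundary point w_t and the cut (11) that Algorithms 1, 3 and 4 iterate admit a closed form when u_0 is the origin, since every ellipsoid Q_r is centered there. The following sketch, with hypothetical two-dimensional data, constructs one such cut and checks that it removes the infeasible vertex while keeping u_0:

```python
import numpy as np

# Cut construction as in Step 1 of Algorithm 1 (hypothetical data).
# Feasible region: intersection of ellipsoids q_r(u) = u^T Q_r u <= 1,
# with u0 = 0 in the interior.
Q = [np.diag([1.0, 4.0]), np.diag([3.0, 0.5])]      # two ellipsoids
u_t = np.array([2.0, 1.5])                           # infeasible vertex

q = [u_t @ Qr @ u_t for Qr in Q]
assert max(q) > 1.0                                  # u_t really infeasible

# Since u0 = 0, the segment u_t + alpha (u0 - u_t) scales u_t, and the
# boundary of the intersection is hit where max_r q_r = 1:
alpha = 1.0 - 1.0 / np.sqrt(max(q))
w_t = (1.0 - alpha) * u_t                            # boundary point
r = int(np.argmax(q))                                # active ellipsoid

def h_t(u):
    grad = 2.0 * Q[r] @ w_t                          # gradient of q_r at w_t
    return grad @ (u - w_t)                          # the cut (11)

assert h_t(u_t) > 0.0          # the cut removes u_t ...
assert h_t(np.zeros(2)) < 0.0  # ... but keeps the interior point u0
```

In the algorithm proper, the half-space {u : h_t(u) ≤ 0} is intersected with the current polytope P_t and the vertex set is updated, which is the expensive step that the parallel partitions aim to distribute.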

4. EXAMPLE AND NUMERICAL EXPERIMENTS

4.1. Example

To illustrate the four presented algorithms, consider a simple BMI problem (with n_x = 2, n_y = 2, n_z = 3) studied in (Goh, et al., 1994a):

F(x, y) = x_1 y_1 F_{1,1} + x_1 y_2 F_{1,2} + x_2 y_1 F_{2,1} + x_2 y_2 F_{2,2}    (17)

where F_{1,1} = [−10, −0.5, −2; −0.5, 4.5, 0; −2, 0, 0], F_{2,1} = [9, 0.5, 0; 0.5, 0, −3; 0, −3, −1], F_{1,2} = [−1.8, −0.1, −0.4; −0.1, 1.2, −1; −0.4, −1, 0], F_{2,2} = [0, 0, 2; 0, −5.5, 3; 2, 3, 0]. Let l = 2, hence

A_1 = [ −7.338, −1.300 ; 7.209, −0.871 ],   A_2 = [ −0.343, 0.658 ; 1.457, −2.115 ].

The results obtained by GAB with Algorithms 1, 2, 3, 4 are as follows, where λ_max(F(x*, y*)) denotes the greatest eigenvalue of F(x*, y*):

Algorithm 1: a feasible solution (x*, y*) = (0.5493, 0.3767, 0.2889, 0.7209) with λ_max(F(x*, y*)) = −0.0689 was found at k = 3, where t(1) = 5, t(2) = 2, t(3) = 20 and Σ_{i=1}^{3} t(i) = 27.

Algorithm 2: a feasible solution (x*, y*) = (0.1204, 0.0947, 0.3233, 0.8538) with λ_max(F(x*, y*)) = −0.0089 was found at k = 4, where t(1) = 1, t(2) = 2, t(3) = 2, t(4) = 23 and Σ_{i=1}^{4} t(i) = 28.

Algorithm 3: a feasible solution (x*, y*) = (0.4983, 0.2319, 0.1895, 0.8335) with λ_max(F(x*, y*)) = −0.0338 was found at k = 3, where t(1) = 1, t(2) = 2, t(3) = 10 and Σ_{i=1}^{3} t(i) = 13.

Algorithm 4: a feasible solution (x*, y*) = (−0.4769, −0.1457, −0.5460, −0.4679) with λ_max(F(x*, y*)) < 0 was found at k = 4, where t(1) = 2, t(2) = 1, t(3) = 2, t(4) = 10 and Σ_{i=1}^{4} t(i) = 15.

4.2. Numerical Experiments

In this section, the four algorithms presented in the previous sections were tested and compared on small problems randomly generated with n_x = 2, n_y = 2, n_z = 3. These algorithms were programmed in MATLAB and run on a Sun SPARCstation IPX (4/50). In order to test their performance, the four algorithms were run with the same A_1, A_2 at the beginning of each test problem. The computational results of Algorithms 1, 2 are shown in Table 1. Since Algorithm 2 needs one more variable v and more vertices at the beginning (prism P_0), it seems less efficient than Algorithm 1. But due to the different formulation, Algorithm 1 takes more iterations than Algorithm 2 on some test problems. Notice that the performance of Algorithms 1, 2 depends heavily on the choice of the interior point u_0 ([u_0, v_0]). However, there are no general rules for this choice.

For the parallel algorithms, the BMI problem can be computed in parallel without any communication among the nodes from iteration k to iteration k + 1. The only message that needs to be broadcast is a possible feasible solution found at each iteration of algorithm GAB. Since the amount of data to be passed is small, the communication overhead should not cause serious delays. Thus, a serial computer was used here to simulate the behavior of a parallel computer. Let T_t^i denote the time needed to complete the t-th iteration in Algorithms 3, 4 on processor i. Obviously, an approximate measure of the time taken by a parallel computer to finish the t-th iteration is

T_t = max_{i=1,...,N} T_t^i    (18)

Without considering the communication, the approximate execution time of the parallel algorithms for solving a BMI problem can be measured by

T = Σ_{k=1}^{K} Σ_{t=1}^{t^{(k)}} T_t    (19)

where K is the total number of iterations of GAB for solving the BMI problem. Actually, T_t may not be achieved at the same node in each iteration. Thus, (19) seems to overestimate the time. Table 2 contains the computational results for the same test problems as Table 1.
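The timing model (18)-(19) can be evaluated directly from per-node measurements. A sketch with hypothetical times T_t^i:

```python
# Timing model (18)-(19): per-iteration wall time on the simulated parallel
# machine is the maximum over the N nodes, and the total is summed over all
# t^(k) inner iterations of every GAB iteration k.  The nested lists are
# hypothetical measurements T[k][t][i] in seconds.
T = [
    [[0.3, 0.5, 0.2]],                   # k = 1: one inner iteration, N = 3
    [[0.4, 0.1, 0.3], [0.2, 0.6, 0.2]],  # k = 2: two inner iterations
]

def parallel_time(T):
    # (18): T_t = max_i T_t^i ;  (19): total = sum_k sum_t T_t
    return sum(max(per_node) for k in T for per_node in k)

print(parallel_time(T))  # 0.5 + 0.4 + 0.6 = 1.5
```

As the paper notes, this overestimates a real machine slightly, since the slowest node need not be the same one in every iteration and communication is ignored.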


Table 1: Performance of Serial Algorithms
[For each of 24 randomly generated test problems, the table reports, for Algorithm 1 and Algorithm 2: the number k of GAB iterations, the per-iteration counts t(i), i = 1, ..., k, the total number of cutting-plane iterations, and the CPU time in seconds.]

Table 2: Performance of Parallel Algorithms
[Same quantities as Table 1, reported for Algorithm 3 and Algorithm 4 on the same 24 test problems.]


In general, the number of vertices generated by the cuts increases rapidly with the dimension of the problem, and the storage of vertices becomes significantly large in high dimensions (Horst, et al., 1988). Although the parallel algorithms can reduce the number of iterations for solving the BMI problem, with lower vertex storage per processor (Liu and Papavassilopoulos, 1994; 1995a), a bigger computer is still needed for large BMI problems.

5. CONCLUSION

In this paper, four global optimization approaches are introduced to find the solution of the BMI problem via solving a sequence of nonconvex minimization problems of type (4) or (7). Tables 1, 2 demonstrate that the parallel algorithms are more efficient than the serial algorithms. However, due to the nonconvex character of BMI problems, solving them is still time-consuming, especially for high-dimension BMI problems such as those originating from robust control. In fact, computational experiments have shown that most currently available methods of global nonconvex optimization with a general convex constraint set are practical only for problems of small size. In order to address this difficulty, parallel computation seems to be a promising approach, but obviously more work is needed.

REFERENCES

Gilbert, E. G. (1966). An iterative procedure for computing the minimum of a quadratic form on a convex set. SIAM Journal on Control and Optimization, 4, pp. 61-79.

Goh, K. C., M. G. Safonov and G. P. Papavassilopoulos (1994a). A global optimization approach for the BMI problem. In Proceedings of the IEEE Conference on Decision and Control, pp. 2009-2014, Orlando, FL.

Goh, K. C., L. Turan, M. G. Safonov, G. P. Papavassilopoulos and J. H. Ly (1994b). Biaffine matrix inequality properties and computational methods. In Proceedings of the American Control Conference, pp. 850-855, Baltimore, MD.

Hauser, J. E. (1986). Proximity algorithms: Theory and implementation. College of Engineering, University of California, Berkeley, UCB/ERL M86/53.

Hoffman, K. L. (1981). A method for globally minimizing concave functions over convex sets. Mathematical Programming, 20, pp. 22-32.

Horst, R., T. Q. Phong, N. V. Thoai and J. de Vries (1991). On solving a d.c. programming problem by a sequence of linear programs. Journal of Global Optimization, 1, pp. 183-203.

Horst, R. and N. V. Thoai (1989). Modification, implementation and comparison of three algorithms for globally solving linearly constrained concave minimization problems. Computing, 42, pp. 271-289.

Horst, R., N. V. Thoai and H. P. Benson (1991). Concave minimization via conical partitions and polyhedral outer approximation. Mathematical Programming, 50, pp. 259-274.

Horst, R., N. V. Thoai and J. de Vries (1988). On finding new vertices and redundant constraints in cutting plane algorithms for global optimization. Operations Research Letters, 7, pp. 85-90.

Horst, R. and H. Tuy (1993). Global Optimization, second revised edition. Springer-Verlag, Berlin.

Liu, S. M. and G. P. Papavassilopoulos (1994). A parallel method for globally minimizing concave functions over a convex polyhedron. In Proceedings of the 2nd IEEE Mediterranean Symposium on New Directions in Control & Automation, Chania, Crete, Greece.

Liu, S. M. and G. P. Papavassilopoulos (1995a). Parallel computation for a class of global nonconvex minimization problems. To appear in IASTED International Conference: Applied Modelling, Simulation and Optimization, Cancun, Mexico.

Liu, S. M. and G. P. Papavassilopoulos (1995b). Algorithms for globally solving d.c. minimization problems via concave programming. In Proceedings of the American Control Conference, Seattle, WA.

Pardalos, P. M. and J. B. Rosen (1986). Methods for global concave minimization: A bibliographic survey. SIAM Review, 28, pp. 367-379.

Pardalos, P. M. and J. B. Rosen (1987). Constrained Global Optimization: Algorithms and Applications, volume 268 of Lecture Notes in Computer Science. Springer-Verlag, Berlin.

Safonov, M. G., K. C. Goh and J. H. Ly (1994). Controller synthesis via bilinear matrix inequalities. In Proceedings of the American Control Conference, pp. 45-49, Baltimore, MD.

Safonov, M. G. and G. P. Papavassilopoulos (1994). The diameter of an intersection of ellipsoids and BMI robust synthesis. In Proceedings of the IFAC Symposium on Robust Control Design.

Thieu, T. V. (1989). Improvement and Implementation of Some Algorithms for Nonconvex Optimization Problems, volume 1405 of Lecture Notes in Mathematics. Springer-Verlag, Berlin.

Tuy, H. (1984). Global minimization of a difference of two convex functions. In Lecture Notes in Economics and Mathematical Systems, volume 226, pp. 98-108.

Tuy, H. (1986). A general deterministic approach to global optimization via d.c. programming. In Fermat Days 85: Mathematics for Optimization (J.-B. Hiriart-Urruty (Ed.)), pp. 273-303, North-Holland.

Wolfe, P. (1976). Finding the nearest point in a polytope. Mathematical Programming, 11, pp. 128-149.
