Parallel Algorithms for Global Nonconvex Minimization with Applications to Robust Control Design via BMI's

by

Shih-Mim Liu

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Electrical Engineering-Systems)

August 1995
Copyright 1995 Shih-Mim Liu

UNIVERSITY OF SOUTHERN CALIFORNIA, THE GRADUATE SCHOOL, UNIVERSITY PARK, LOS ANGELES, CALIFORNIA 90007

This dissertation, written by Shih-Mim Liu under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY.

Dean of Graduate Studies
Dissertation Committee, Chairperson

To My Parents and My Fiancee Emily

Acknowledgments

I would like to deeply acknowledge my advisor, Professor George P. Papavassilopoulos, for his guidance and support in my research. I am greatly indebted to Professor Michel Dubois for permission to use the parallel computer DELTA. I also thank Professors Edmond A. Jonckheere and Gary Rosen for serving on the dissertation committee.

Acknowledgments are also due for the financial support from the Department of Electrical Engineering, University of Southern California, Los Angeles, and the National Science Foundation, NSF Grant CCR-9222734.

I am grateful for the helpful comments of Tom Tsao on the research presented in this dissertation and many valuable discussions from Professor Michael Safonov's robust control research group. Finally, I thank my family, my cousin Angela Chang, her husband Andrew Atkins, and my friends Amos Wu, Jason Li and Peter Chu for their encouragement.
Contents

Acknowledgments
List of Tables
List of Figures
Abstract

1 Introduction
  1.1 Motivation for the Study
  1.2 Problems and Their Applications
  1.3 Outline of the Dissertation

2 Concave Minimization Problems
  2.1 Introduction
  2.2 Problem Statement and Algorithm
    2.2.1 Serial Algorithm
    2.2.2 Parallel Algorithm
  2.3 A Scheme with n^2 + 1 Processors
  2.4 Examples
  2.5 Computational Results
  2.6 Extension to a General Compact Convex Set
  2.7 Conclusion

3 Reverse Convex Programming
  3.1 Introduction
  3.2 Problem Statement and Fundamental Properties
  3.3 Algorithm Description
    3.3.1 Serial Algorithm
    3.3.2 Parallel Algorithm
  3.4 Discussion of Implementation
  3.5 Examples
  3.6 Numerical Results and Analysis
    3.6.1 DELTA and Test Problems
    3.6.2 Computational Results
  3.7 Conclusion

4 D.C. Minimization Problems
  4.1 Introduction
  4.2 Basic Idea of the Methods
    4.2.1 An Outer Approximation Method Using Cutting Planes
    4.2.2 A Parallel Simplicial Algorithm
    4.2.3 A Parallel Method via Linear Programs
  4.3 Implementation
    4.3.1 Construction of S
    4.3.2 Partitions of Both Prism and Simplex
    4.3.3 Computation of Bounds and Construction of Cutting Planes
    4.3.4 Deletion of Simplices
  4.4 The Algorithms
  4.5 Numerical Tests
  4.6 Conclusion

5 Numerical Experience with Parallel Algorithms for Solving the BMI Problem
  5.1 Introduction
  5.2 Global Optimization Methods
    5.2.1 BMI and Concave Minimization
    5.2.2 BMI and d.c. Programming
  5.3 The Algorithms
    5.3.1 Serial Algorithms
    5.3.2 Parallel Algorithms
    5.3.3 A Global Algorithm for the BMI Problem
  5.4 Examples and Numerical Experiments
    5.4.1 Examples
    5.4.2 Numerical Experiments
  5.5 Conclusion

6 Summary and Future Research
  6.1 Summary
  6.2 Proposal for Future Work

Reference List

List of Tables

2.1 Iterative Results for Example 2
2.2 Computational Results for SA, PA, QA
2.3 Iterative Results for Example 3
3.1 Computational Results for Serial and Parallel Algorithms
3.2 Results for 50 Tested Problems with m = 32, n = 16
4.1 Results for Algorithms 2, 3 with N Processors
5.1 Computational Results for Example 2
5.2 Performance of Serial Algorithms
5.3 Performance of Parallel Algorithms

List of Figures

2.1 Two simplices containing D for n = 2, 3
2.2 Q_ij^0 for n = 2
2.3 Serial algorithm for example 2
2.4 Parallel algorithm for example 1
3.1 An example for an unstable problem
3.2 Geometrical history of Example 1
3.3 A J-level binary tree of message passing. Here 0 is the host node
3.4 Speedup for 50 problems with m = 32, n = 16 on the DELTA
4.1 Q-triangulation of S and prismatic triangulation of V
5.1 An example for dividing P_0 in R^2 into 3, 4 or 6 regions of equal volume

Abstract

Several parallel algorithms are developed to solve three classes of nonconvex optimization problems: (1) concave minimization, (2) reverse convex programs, and (3) d.c. (difference of two convex functions) programming. The algorithms for (1) and (3) are also applied to solving robust control synthesis problems via bilinear matrix inequalities (BMI). Theoretical analysis and computational experience are presented.

First, an algorithm for globally minimizing concave functions over a bounded polyhedron by generating a sequence of nested polyhedra is presented. A crucial feature of the algorithm is that it can be implemented in parallel by partitioning. The number of processors used in the parallel algorithm depends on the number of vertices of an initial polyhedron enclosing the feasible set. Computational considerations and results of both serial and parallel algorithms are discussed. Also, with a little extra work the algorithms can be extended to solve the problem with a general convex constraint set.

Second, a parallel method combining outer approximation techniques and cutting plane methods is proposed for globally minimizing linear programs with an additional reverse convex constraint (LRCP). Basically, n processors are used for a problem with n variables. Computational results are presented for test problems with up to 80 variables and 63 linear constraints (plus non-negativity constraints). These results were obtained on the distributed-memory MIMD parallel computer DELTA.

Third, three algorithms suitable for parallel implementation are presented to globally solve a d.c. problem via an equivalent concave minimization problem. To distribute the computation load as evenly as possible, a simplex subdivision process such as bisection, triangulation or other partition procedures of simplices can be employed. Some numerical results and a comparison of these algorithms are given.

Finally, we deal with numerical methods for solving the BMI, an application of the nonconvex optimization problem. Essentially, the BMI problem can be transformed into either a concave minimization problem or a d.c. problem so that the algorithms described above can be applied. Numerical experience with these algorithms on small BMI problems is reported, including simulation of the parallel algorithms.
Chapter 1
Introduction

1.1 Motivation for the Study

Standard nonlinear programming methods have not been successful for solving nonconvex global optimization problems because all of them can at most locate local minima; moreover, there is no local criterion for deciding whether a local solution is global. Therefore, conventional techniques of nonlinear optimization using such tools as derivatives, gradients, sub-gradients and the like are, in general, not capable of finding or identifying a global optimal solution. For example, a stationary point is often detected for which there is no guarantee of local minimality. For these reasons, methods for finding global solutions of nonconvex problems are significantly different from standard nonlinear programming approaches, and they can be expected to be much more expensive computationally. Thus, it is necessary to develop parallel algorithms for globally solving nonconvex optimization problems. Throughout this dissertation, our focus will be on the development of parallel algorithms for globally solving some classes of nonconvex optimization problems and their application to robust control synthesis via BMI (Bilinear Matrix Inequality).

In general, most methods for global optimization can be divided into two classes, deterministic and stochastic. In this dissertation we consider only deterministic methods.

Nonconvex optimization problems have attracted the attention of a number of researchers. In particular, with rapidly advancing computer technology, solving global nonconvex optimization problems has come to be considered more tractable. However, due to the absence of complete characterizations of globally optimal solutions of nonconvex problems, one cannot expect a direct treatment of all problems of nonconvex optimization, and it is thus necessary to classify nonconvex problems and to study particular classes. The main classes of nonconvex global optimization problems that we study are global concave minimization problems, linear programs with an additional reverse convex constraint, and d.c. (difference of two convex functions) programming. These three classes cover a very broad range of nonconvex global optimization problems that are frequently encountered in applications.

During the past three decades a variety of methods have been proposed (see, for example, [HT93], [PR87], [PR86], [Par92], [HP94], [FP92], [PPR93]) to solve nonconvex optimization problems. However, only a few of them have been tested by simulations; moreover, most of them are practical only for problems of rather small size. The most important deterministic approaches for these nonconvex optimization problems use enumerative techniques, cutting plane methods, branch and bound, successive approximation, successive partition, or different combinations of these methods. Since these algorithms are most of the time computationally demanding, we are led to consider parallel schemes and to develop algorithms that are suitable for parallel implementation. Indeed, in order to meet the enormous practical need of solving nonconvex problems, parallel computation seems to be a promising approach (e.g., [PPR93], [LP94], [LP95e], [LP95d], [LP95a], [LP95c], [LP96], [BT89], [PR88]).
1.2 Problems and Their Applications

In this section we briefly discuss the three types of nonconvex optimization problems we have studied and their applications. The general concave minimization problem has the form:

    global min  f(x)                                                     (1.1)
    subject to  x ∈ D,

where f(x) is a real concave function and D ⊂ R^n is a convex compact set. An important property of (1.1) is that its global solution is achieved at some extreme point of D. Thus, the search for an optimal solution can be limited to the extreme points of the set. Note that the number of extreme points of D may be very large.

The range of practical applications of the concave minimization problem is very broad. Large classes of decision models arising from operations research and mathematical economics and many engineering problems lead to formulations such as problem (1.1). For example, broad classes of integer programming problems can be formulated as equivalent concave programming problems, where equivalence is understood in the sense that the sets of optimal solutions coincide (cf. Horst and Tuy [HT90], Pardalos and Rosen [PR87], [PR86]). Transformations of bilinear problems into concave minimization are described in Altman [Alt68] and Konno [Kon76]. The relationship between max-min problems and concave minimization was discussed and exploited by Falk [Fal73].

One of the most often encountered classes of nonconvex optimization problems is reverse convex programming. Generally, reverse convex programs have disconnected feasible regions. As a result, problems with such constraints generally have local optima which are not global optima.

Definition 1  An inequality constraint g(x) ≥ 0 is called a reverse convex constraint whenever g : R^n → R is convex.

An important member of the class of reverse convex programs is the following problem:

    global min  cx                                                       (1.2)
    subject to  x ∈ D,
                x ∈ G = {x ∈ R^n : g(x) ≥ 0},

where D is a bounded polyhedron in R^n, c is an n-vector and g : R^n → R is a convex function. This problem is often referred to as a linear program with an additional reverse convex constraint (LRCP). If one considers D to be a bounded polyhedron in problem (1.1), then problem (1.1) can be rewritten as a linear program with a reverse convex constraint by introducing an additional variable t ∈ R:

    global min  t                                                        (1.3)
    subject to  x ∈ D,
                γ1 ≤ t ≤ γ2,
                g(x, t) = t - f(x) ≥ 0,

where γ1 and γ2 are constants verifying γ1 ≤ min{f(x) : x ∈ D} ≤ γ2 (see [DB89]).

Problems of this form were first studied by Rosen [Ros66] in a control theoretic setting and subsequently, in an engineering design setting, by Avriel and Williams [AW70], [AW71]. For a recent interesting application in VLSI design, see Vidigal and Director [VD82]. Also, this type of problem arises frequently in dealing with budget constraints and economies of scale [Hil75]. In Bansal and Jacobsen [BJ75b], [BJ75a] the special problem of optimizing a network flow capacity under economies of scale was discussed.
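As a small illustration of the reformulation (1.3), the concave objective can be absorbed into the reverse convex constraint g(x, t) = t - f(x) ≥ 0. The sketch below is our own Python illustration (not code from the dissertation); the function and variable names are ours.

```python
def concave_to_lrcp(f, gamma1, gamma2):
    """Recast 'minimize a concave f over a polyhedron D' as the LRCP (1.3):
    minimize t subject to x in D, gamma1 <= t <= gamma2 and g(x, t) >= 0,
    where g(x, t) = t - f(x) is convex because f is concave.
    gamma1 and gamma2 must bracket min_D f, as in the text."""
    def objective(xt):            # linear objective: just the extra variable t
        return xt[-1]

    def g(xt):                    # reverse convex constraint g(x, t) >= 0
        x, t = xt[:-1], xt[-1]
        return t - f(x)

    return objective, g, (gamma1, gamma2)

# usage with a toy concave function f(x) = -sum(x_i^2)
f = lambda x: -sum(v * v for v in x)
obj, g, (lo, hi) = concave_to_lrcp(f, gamma1=-100.0, gamma2=0.0)
print(obj([1.0, 2.0, -5.0]), g([1.0, 2.0, -5.0]))   # prints -5.0 and 0.0
```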
Next, we consider the multi-extremal global optimization problem:

    global min  f(x) - g(x)                                              (1.4)
    subject to  x ∈ D = {x ∈ R^n : h_j(x) ≤ 0, j = 1, ..., l},

where f, g, h_j (j = 1, ..., l) are finite convex functions on R^n. Problem (1.4) is frequently called a d.c. optimization problem, where d.c. represents the difference of two convex functions.

By introducing an additional variable t, problem (1.4) can be transformed into an equivalent concave minimization problem:

    global min  t - g(x)                                                 (1.5)
    subject to  f(x) ≤ t,
                x ∈ D.

Similarly, problem (1.4) can be converted into an equivalent convex minimization problem subject to an additional reverse convex constraint:

    global min  f(x) - t                                                 (1.6)
    subject to  g(x) ≥ t,
                x ∈ D.

Also, a striking feature of d.c. programming problems is that any d.c. problem can always be reduced to a canonical d.c. problem which involves two additional variables:

    global min  cz                                                       (1.7)
    subject to  h_1(z) ≤ 0,
                h_2(z) ≥ 0,

where c ∈ R^{n+2} and h_1, h_2 : R^{n+2} → R are convex functions (cf. Horst and Tuy [HT90]).

D.c. programming problems arise frequently in engineering and physics (e.g. Heron and Sermange [HS82], Toland [Tol79], Vidigal and Director [VD82], Polak and Vincentelli [PV79], Tuy [Tuy86a, Tuy87], Horst and Tuy [HT90]). Particularly in engineering design we encounter optimization problems with lower-C^2 objective functions (C^2 functions have been shown to be d.c., e.g. Hiriart-Urruty [HU85], Polak [Pol87], Thach [Tha88], Horst and Tuy [HT90]). Another example is the optimization of indefinite quadratic forms x^T A x (A ∈ R^{n×n}, symmetric), for which it is well known that there are several ways of finding positive semidefinite matrices P, Q such that x^T A x = x^T P x - x^T Q x (cf., e.g., Pardalos et al. [PPR87], Pardalos and Rosen [PR87]). Also, an application of the design centering problem is the optimal shape design problem from the diamond industry, where one wants to know how to cut the largest diamond of shape K that can be cut from a rough stone D ⊃ K (Nguyen et al. [VNT]).

Finally, notice that the BMI, introduced by Safonov et al. [SGL94] as a simple and flexible framework for approaching robust control system synthesis problems, can also be solved via a sequence of concave minimization problems (1.1) or d.c. programming problems (1.4) (see [SP94], [LP95e], [LP95a], [LP96]). A reduction of the dimension of the BMI problem was considered in [LP95b].
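To illustrate the quadratic-form decomposition just mentioned, the following NumPy sketch (our own, using one eigenvalue-based choice among the "several ways" referred to above, not necessarily the one used in the cited references) produces positive semidefinite P and Q with x^T A x = x^T P x - x^T Q x.

```python
import numpy as np

def dc_split(A):
    """Split a symmetric matrix A into PSD parts P, Q with A = P - Q.
    Positive eigenvalues go to P; the magnitudes of negative eigenvalues
    go to Q. Illustrative only."""
    A = 0.5 * (A + A.T)                    # symmetrize defensively
    w, V = np.linalg.eigh(A)               # A = V diag(w) V^T
    P = (V * np.clip(w, 0, None)) @ V.T    # keep positive eigenvalues
    Q = (V * np.clip(-w, 0, None)) @ V.T   # keep |negative| eigenvalues
    return P, Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    A = 0.5 * (A + A.T)                    # indefinite symmetric test matrix
    P, Q = dc_split(A)
    x = rng.standard_normal(4)
    # x^T A x == x^T P x - x^T Q x (up to rounding)
    assert np.isclose(x @ A @ x, x @ P @ x - x @ Q @ x)
```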
C om putational results are presented for test problem s w ith num ber of variables up to 80 and 63 linear constraints (plus non-negativity constraints). These results were obtained on a distributed-m em ory M IM D parallel com puter DELTA w ith 513 num eric nodes by running both serial and parallel algorithm s w ith double precision. Also, we have an average parallel com putation tim e of approxim ately 3 seconds yielding the average over 2 1 super-linear speedup com pared w ith the average com putation tim e of the serial algorithm based on test problem s w ith 16 variables and 32 linear constraints (plus x > 0 ). C hapter 4 introduces algorithm s for globally solving d.c. problem s via concave program m ing. Several m ethods have been proposed for solving a d.c. program m ing problem , b u t very few have been done in a parallel way. In this chapter, th ree algorithm s suitable for parallel im plem entation are presented to solve a d.c. problem via solving an equivalent concave m inim ization problem . To distribute the com putation load as evenly as possible, a sim plex subdivision process such as bisection, triangulation or other p artition procedures of simplices (cf. [HT90]) will be em ployed. Some num erical test results and com parison of these algorithm s are given. C hapter 5 is devoted to the num erical com putation for solving the BM I prob lem. Four global algorithm s including two parallel algorithm s are em ployed to solve the BM I problem by a sequence of concave m inim ization problem s or d.c. program s via concave program m ing. T he parallel algorithm s w ith or based on a suitable partitio n of an initial enclosing ployhedron are m ore efficient th an the serial ones. C om putational experiences w ith these algorithm s are reported for random ly generated BM I problem s of sm all size. 9 C h a p ter 2 C on cave M in im iza tio n P ro b lem s 2.1 In tr o d u c tio n In recent years, a rapidly growing num ber of papers has been published per taining to solving specific classes of m ulti-extrem al global optim ization problem s (cf.,e.g.,[HT93, P R 8 6 , PR87] and references therein). Several of these papers are on the topic of globally m inim izing a concave function over a polyhedron which Tuy[Tui64] first addressed in 1964. M ost approaches th a t have been developed for m inim izing a concave function over a polyhedron have been based m ainly on the following approaches: c u ttin g planes, successive approxim ation, successive p a rti tion, or com binations of these m ethods. Only a few of the suggested algorithm s have been dem onstrated by num erical tests, including the two successive approx im ation m ethods elaborated by Falk and Hoffman[FH75, FH 8 6 ] and the three different efficient algorithm s discussed by Horst and Thoai[HT89]. In this chapter, first we consider th e problem of m inim izing a concave function subject to a polyhedron constraint set, then extend it to solve the problem with a general convex constraint set. In our m ethod, an initial enclosing polyhedron 10 w ith N vertices (e.g. a sim plex) has to be com puted by solving N linear pro gram m ing problem s. T hen, by suitable p artition of this enclosing polyhedron , we have N independent subproblem s w ith the sam e objective function. A fter initial ization, an upper bound of th e objective function is obtained, and the algorithm proceeds to solve these N subproblem s in parallel. 
T he com parison of th e up per bounds of th e concave function for all subproblem s will be considered during com putation; and the algorithm is guaranteed to converge to all global solutions in a finite num ber of steps. T he algorithm presented here uses a com bination of cu ttin g planes, outer approxim ation, and partition techniques T he algorithm m ust generate the new vertices in defining a new polyhedron by adding a new linear constraint and this is its m ost expensive com putation. Several algorithm s for finding all vertices of a polyhedron are com pared in [MR80], and the m ethod developed by M attheiss[M at73] seems to be th e m ost efficient. In our algorithm , we will incorporate the idea proposed by Host, Thoai and Vries[RHdV 8 8 ] and the m ethod of M attheiss[M at73] in the calculation of all newly generated vertices. T he algorithm introduced in this chapter, the m ethods proposed by Falk and Hoffman[FH75], and the algorithm OAA described by Horst and Thoai[HT89] have sim ilar characteristics, i.e. successively outer approxim ation and finite pro cedures. However, the choices of th e cutting plane are different in these three m ethods, in particular, our parallel algorithm is m uch m ore efficient th an the o th ers. T he com parison of the com putations in some m ethods[H T93, FH75, HT89] will be given in Section 2.5. Generally speaking, our m ethod has the following properties: it can find accurately all global optim al solutions in finite iterations, it is a parallel algorithm , it can handle the degenerate case which is not considered 11 in [FH75, F H 8 6 ] (see Host, Thoai and Vries[RHdV 8 8 ]), the redundant constraints do not affect the com putational result because they will never be chosen as a cut ting plane during th e com putation, and it does not require a separable objective function. In th e next Section we will present the details of the algorithm . Section 2.3 introduces a schem e w ith n2 + 1 processors. In Section 2.4, two exam ples were given to illustrate th e m ethod. Section 2.5 reports some com putational results. In Section 2.6 we discuss th e extension of the m ethod to globally m inim izing a concave function over a com pact convex feasible set, and a sm all illustrative exam ple is provided. Some concluding rem arks are given in the final Section. 2.2 P r o b le m S ta te m e n t an d A lg o rith m Consider th e problem (CV) m inim ize f(x) subject to Ax < b, x > 0. where A is an m x n m atrix, b E , x E 3?n, and f is a concave function defined throughout 3?n. Let a, and bi denote the zth row of A and b respectively. Assum e D={a: | Ax < b, x > 0} is bounded, nonem pty, and there is a point p in the strict interior of th e feasible region D. i.e. pE { i E S " : a{X < bi, x > 0 , i = 1,..., m. }• T he following theorem is well known: 12 Theorem 1 If a global m inim um o f a concave function over any polyhedron is attained, it can always attained at som e vertex o f the polyhedron. 2.2.1 Serial Algorithm In th e algorithm , an initial enclosing sim plex D° is obtained by solving n + 1 linear program m ing problem s of the form: m in Xj ( j — 1 , 2 ,..., n) s.t. x G D m ax E ” =i x i s.t. x G D (2.i; Let otj(j = 1,2, ...,n ) and a be the optim al solutions of (2.1) obtained at the points pj (j = 1,2,..., n + 1) respectively. T hen it is easily seen th a t D° = { x e R n : a j - X j < 0 (j = 1,2,3,... ,n), - a < 0}(2.2) j =i is a sim plex containing D. T he n+ 1 facets of D° are defined by the n+1 hyper planes {x G 5ft" : xj = aj} (j = 1,2,... 
,n) and { i 6 3?" : Z)"=i x i — a }- Each of these hyperplanes is a supporting hyperplane of D. D enote by V(L) the vertex set of any polyhedron L. Obviously, th e vertex set of D° will be V(D°) = {vu v2, . . . , v n+1} (2.3) where un+i = (ax, a 2, •••+ «) and vj - (a x, a 2, • • ■, a?-i, /?y, ■ ■ •, a«) (j = 1,2, ...,n ) w ith (3j = a — Y ^ j ai- Eor n= 2,3, two enclosing sim plices D° are shown in Figure 2.1. Clearly an upper bound of (CV ) is f° = m in { /(p ,), i = 13 a X3 x2 a — a 2 — a 3 , a 2 , a 3 a i, a — « i — a 3, a 3 a i,a 2,a - a x - a 2 Figure 2.1: two simplices containing Z) for n= 2,3 14 1 ,2,... ,n + 1} since the pt - is a vertex of D. Let vertex v° solve the problem m in { /(u j) : Vj G V(£)0)}. T hen f(v°) < f° since D° 3 D (We only need to store the vertices Vj with f(vj) < / ° , vj € V(D0)). If v° G D, then v° solves problem (CV ). If v° 0 D , find a linear constraint from Ax — b, say a,x = 6,-, such th a t a{Z — bi and 2 G boundary of D, where z = u° + A(p—u°), 0 < A < 1 (Recall th a t p is an interior point of D , thus A ^ 1. Notice also th a t A ^ 0 since v° £ D). T hus, we have a new polyhedron defined by D 1 = D° f|{ ^ € 5ft" : a tx < 6^}, and its vertex set V (D 1). Following the sam e procedure as before w ith D 1 assum ing th e role of D ° , the algorithm will generate a sequence of subproblem s, and there will be a corresponding sequence of polyhedra Dk . Let I k be the subset of { 1 ,2 ,... ,m} (i.e. the constraint set {a,x < 6 ;, i = 1,2, ...,m }) whose corresponding constraints are not used in defining Dk. We can thus state the following algorithm : I n itia liz a tio n : Take an n-sim plex D° D D by solving (2.1), and find f°. Only the v\s having f ( vi) < / ° , Vi G V(D°), need to be stored. Set 7° = the constraint set of D (not including nonnegative constraints) = { 1 ,2 ,... ,m}. I te r a t io n k = 1 ,2 ,... At iteration k , we know V k~1(= V(Dk~1)), the prom ising vertex set of Dk~l where f ( v ) < f k~ \ Vu G V k~l. 15 Choose x k by solving m in { /(u ) : v G V k *} (if m ore th an one solution, choose any one). Solve oCk) m axim um subject to: 0 < A < 1, x + A(p — x ) G {x G 5R " : ajx = bj,j G / } If A = 0 (if there is no A satisfying problem (Ck), set A = 0), then xk is a global m inim um solution of (CV ), and f k = f ( x k). O therw ise, set zk = x k + A(p — x k) , and choose any one constraint such th a t ajzk — bj = 0, j G I k~l . Form f k = m i n { f k S/C z*)} Dk = Dk~l f|{® G 5 R n : ajx - 6, < 0} I k = I k~'\{j} (2-4) (2.5) (2 .6 ) calculate the new vertices generated by the cutting plane ajx — bj = 0 (only store th e vertices yielding objective function values < f k). If Vk = 4 i > , th en stop; f k is a global m inim um . O therw ise, k = k + 1 . 2.2.2 Parallel Algorithm In order to cast the previously described algorithm in a parallel form , we assum e th a t there are n+ 1 processors, and an initial enclosing sim plex D° is obtained by solving (2.1) on n+ 1 processors. For the ith processor (i = 1 ,... ,n), set n D° = {x G 5R " : oij < Xj (j — 1 ,2 , . . . , i — 1, i + 1 , . . . , n), ^ Xj < a , a,- < x,} j=i (2.7) 16 and for the (n + l) th processor, set ^ n + i = {z € 9?n : aj < xj (j = 1, 2,..., n ), ^ a, < a} (2 .8 ) 3 = 1 T hen, n + 1 D C ( J D° = D°, and V (D ?) = »;,■ (* = 1,2,...,n + 1) (2.9) t=i Therefore, problem (CV) can be rew ritten as follows: (CVi) m inim ize f ( x ) subject to x € D° fl D, i = 1,2,..., n + 1. 
where the solution of (CV) = m in {the solutions of (CVi) ,■ ■ ■ ,(CVn+1 )}- There are n + 1 independent subproblem s which are sim ilar to the serial algorithm stated in Section 2.2.1. Thus, these subproblem s could be done in parallel. Denote S k by the set of Dk's. In other words, S k is the set of th e active processors at iteration k (> 1 ) and S k = S k~1 \ where N ^ 1 = : f(u) > f k~l , Vu 6 V(Dk_1), i G (1,2, ...,n + 1)}. Let J k be the subset of { l,2 ,...,m } whose corresponding constraints were not used in defining Dk, i e S k~l. Also, v k denotes the set of the prom ising vertices of Dk. T he algorithm is then as follows: Initialization: C onstruct D°(i — 1,... ,n + 1 ) and n + 1 independent subproblem s described as (2.7), (2 .8 ) and (CVi) respectively. Set / ° = m in {f(pi), i — 1,... ,n + 1 } (the best upper bound so far). 17 We keep the D*-'s w ith i G 5 ° and go to iteration 1. Iteration k = 1 ,2 ,... At iteration k, set S k = S k~1\ N k~1. V k~l th e prom ising vertex set of Dk~l and a current upper bound f k~l are known for each i G 5*-1 . Then, select the m ost prom ising vertex of D k_1 by choosing m in { f ( u ) : u G V k~x }, and let x k be the solution for each i G S k ~ 1 (if m ore th an one solution, take one of them ). Solve ( t f ) m axim um A su b jec t to:0 < A < 1, x k + A(p — x k) G { x G $tn : aj x = 6 j , j G If A = 0 (if there is no A satisfied problem (£^), set A = 0), then stop the z-th processor and x k is a vertex of D. Set fi = f ( xi) if f i x i) < f k~' = / * - 1 if f ( x k) = f k~1(keep x k, it m ay be a global optim al solution) If A > 0, then set zk = x k + A(p — x k), find all constraints of D which are binding at zk and set I k = {j : ajzk = bj; j G T hen add any one constraint a,jX — bj < 0 for j G I k to generate Dk. Set Dk = Dk-x n {x : a .x < bt. J k = J k~l \ I k, (2.10) V k = {the prom ising vertices of the subset Dk} = {v: f ( v ) < f k, v e V ( D k)} (2.11) 18 Now u p d ate the upper bound by f k = m in { f k \ f k (if f k available); i G S k *}. (2.12) Set N k — {* : min(f(u) : u G V^) > f k or V k = (f> , i G If S k = < j > , th en stop the algorithm , and f k is the global m inim um of problem (CV)', otherw ise set k = k + 1 , and go to iteration k. Lem m a 1 The sequence of upper bounds { f k} is monotonically decreasing. Proof: f k = m in { /(x ) , Vx G V (D k)f)V(D)}, where V(D k) = (J? ^ 1 V(Dk), and m in { /(x ) : x G V(Dk+l) f | V(D)} < m in { /(x ) : x G V (D k) f | V(D)} since D C D k+ 1 C D k C . . . C D°. Thus f k+ 1 < f k. □ Lem m a 2 The algorithm will terminate in a finite number of iterations. Proof: Because D is a polyhedron, it has a finite num ber of vertices , and at m ost m cuts will be introduced for determ ining D°. Notice also th a t the num ber of new points generated by the cu tting plane m ethod is finite. It follows th a t the algorithm m ust term in ate in a finite num ber of iterations. D Theorem 2 The algorithm converges, and f k is the global m inim um of problem (CV) if it terminates at iteration k. Proof: By lem m a 2, assum e the algorithm term inates at th e &-th iteration. T hus f k < f k~l < ... < f 1 < f° (by lem m a 1), b u t f k = m in{ f k~lJ k; for i G => f k is th e global m inim um of problem (CV). D 2.3 A S ch em e W ith n 2 + 1 P ro c esso rs In th e parallel algorithm , an initial enclosing polyhedron m ust be com puted first. Since the perform ance of th e parallel procedure depends heavily on the kind of th e initial containing polyhedron, other alternatives could be considered. 
2.3 A Scheme with n^2 + 1 Processors

In the parallel algorithm, an initial enclosing polyhedron must be computed first. Since the performance of the parallel procedure depends heavily on the kind of initial containing polyhedron, other alternatives can be considered. For instance, we can take an n-rectangular enclosing polyhedron (with 2^n vertices) by solving 2n linear programming problems of the form:

    min x_j (j = 1, 2, ..., n)   s.t. x ∈ D
    max x_j (j = 1, 2, ..., n)   s.t. x ∈ D                              (2.13)

However, according to the outer approximation technique, the initial enclosing polyhedron should be simple, close enough to D, and have a small number of vertices. We now introduce an enclosing polyhedron Q^0 for which the maximal number of active processors can be up to n^2 + 1 at the first iteration (in general, the number of active processors is less than n^2 + 1 during the computation; see Table 2.2). First, solve the following linear programming problems on 2n + 1 processors:

    α_j = min{x_j : x ∈ D}   (j = 1, 2, ..., n)                          (2.14)
    a = max{Σ_{i=1}^{n} x_i : x ∈ D}                                     (2.15)
    β_j = max{x_j : x ∈ D}   (j = 1, 2, ..., n)                          (2.16)

Let

    Q^0 = D^0 ∩ {x ∈ R^n : x_i ≤ β_i, i = 1, 2, ..., n}                  (2.17)
    Q_i^0 = D_i^0 ∩ {x ∈ R^n : x_i ≤ β_i},  i = 1, 2, ..., n             (2.18)
    Q_{n+1}^0 = D_{n+1}^0 ∩ {x ∈ R^n : x_i ≤ β_i, i = 1, 2, ..., n}      (2.19)

Clearly, there are n^2 vertices v_ij in V(Q^0), the vertex set of Q^0. Set v_ij ∈ V(Q^0) and

    Q_ij^0 = {x ∈ R^n : x_l ≥ α_l (l = 1, ..., n), Σ_{l=1}^{n} x_l ≤ a} ∩ {x ∈ R^n : x_i ≤ β_i}
             ∩ {all constraints of Q^0 which are binding at v_ij},       (2.20)

where i, j = 1, 2, ..., n. Therefore,

    D ⊂ ∪_{i=1}^{n+1} Q_i^0 = ( ∪_{i=1}^{n} ∪_{j=1}^{n} Q_ij^0 ) ∪ Q_{n+1}^0.       (2.21)

Figure 2.2 gives an illustration of these subsets for n = 2.

[Figure 2.2: Q_ij^0 for n = 2 (regions Q_1, ..., Q_{n+1} within α_1 ≤ x_1 ≤ β_1, α_2 ≤ x_2 ≤ β_2, x_1 + x_2 ≤ a).]

Equation (2.21) implies that problem (CV) can be represented by

    (Q_ij)     minimize  f(x)
               subject to  x ∈ Q_ij^0 ∩ D,  i, j = 1, 2, ..., n,

and

    (Q_{n+1})  minimize  f(x)
               subject to  x ∈ Q_{n+1}^0 ∩ D,

where the solution of (CV) = min{the solutions of (Q_ij) (i, j = 1, ..., n) and (Q_{n+1})}. Hence, at most n^2 + 1 subproblems will be solved simultaneously by a procedure similar to the one stated in Section 2.2.2.
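The data α_j, β_j and a needed for Q^0 come from the 2n + 1 linear programs (2.14)-(2.16). The short sketch below (illustrative Python/SciPy, not the dissertation's implementation) computes them for D = {x : Ax ≤ b, x ≥ 0}.

```python
import numpy as np
from scipy.optimize import linprog

def bounding_data(A, b):
    """Solve the 2n+1 LPs (2.14)-(2.16) over D = {x : Ax <= b, x >= 0}
    and return (alpha, beta, a)."""
    m, n = A.shape
    bounds = [(0, None)] * n
    alpha, beta = np.empty(n), np.empty(n)
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        alpha[j] = linprog(e,  A_ub=A, b_ub=b, bounds=bounds, method="highs").fun
        beta[j]  = -linprog(-e, A_ub=A, b_ub=b, bounds=bounds, method="highs").fun
    a = -linprog(-np.ones(n), A_ub=A, b_ub=b, bounds=bounds, method="highs").fun
    return alpha, beta, a
```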
T he upper bound is not im proved, i.e. f 2 = -1 .3 3 3 . 23 Iteratio n 3: choose x 3 = v3, then z3 =(1.732,0.849) w ith A = 0.455. T here fore, the cu ttin g plane is ( 3 - T he new vertices and corresponding function values are / ( 3 , 0) = /( P 2 ), and / ( l , 1.333) = 1.111. T hen, we have 13 = {2,5,6 ,7}, V 3 = {(4,0),(3,0)}. T he upper bound is still the same. Iteratio n 4: since x 4 = (4,0), we have A = 0. (4,0) is feasible and f 4 = / ( 4 , 0 ) = — 1.625. It im plies V 4 = (j). T he algorithm term inates here, and (4,0) is the global solution. Figure 3 illustrates the sequence of containing polyhedra generated by the serial algorithm . In this exam ple, constraint £7 is redundant. parallel algorithm : T here are 3 processors to solve this problem . A fter initial- 8 I t e r a t i o n I 01 p 2 Xi I t e r a t i o n 4 I t e r a t i o n 2 P 2 Xi f e a s ib le r e g io n Figure 2.3: Serial algorithm for exam ple 2 ization, the parallel algorithm proceeds as follows: 24 Iteration 1: Processor 1 : choose xj = v\. Solving (£}), we have z\ =(4.946,1.182), and A =0.633. Therefore, the cu ttin g plane is £4. The new vertices and corresponding function values are /(4 ,0 ) = —1.625 and /(6.222,2.778)=0.538. So, the lower bound is -1 .6 2 5 , I\ = {4}, J{ ={1,2,3,5,6 ,7}, V j = {(4,0)}. Processor 2: take x \ = v2. Then, z\ =(1.732,5.195) with A =0.457. Thus, the cu ttin g plane is £i, and the new vertices and their objective function values are /(2 .2 5 ,6 .7 5 ) =1.389 and /( 1 ,3 ) =2.50. Hence, I\ = {1}, J \ = {2,..., 7}, Vj = £ > . Processor 2 stops. Processor 3: for x 3 = V3, we get z\ =(1.727,0.849) w ith A =0.455. Therefore, the c u ttin g plane is £3, and th e new vertices are (3,0), (1,1.333) yielding function val ues of —1.333, 1.111 respectively. /{ = {3}, ={1,2,4,5,6 ,7}, V j ={(3,0)}. At th e end of iteration 1, the upper bound was not im proved, i.e. f 1 = f° = —1.333, N 1 = {2}, S 1 ={1,3}. Iteratio n 2: Processor 1: w ith A = 0, x \ =(4,0) is feasible. Then, /j 2 = / ( 4 , 0) = —1.625, V j — 4 > ■ Processor 1 stops. Processor 3: sim ilarly, we have / | = /(3 ,0 ) = —1.333, P3 2 = (j). Processor 3 stops. Now, since N 2 = {1,3}, S 2 = < f> , the algorithm stops, and f 2 = f 2 = —1.625 is the global m inim um solution. T he history of these processors is shown in Figure 2.4. Exam ple 2 was constructed so th a t it has three global m inim izers, and a re dundant constraint £5 is included in the constraint set. 25 Xi P r o c e s s o r 1 : I n i t i a l i s a t i o n I t e r a t i o n I I t e r a t i o n 2 0 1 9 0 1 9 Xi P r o c e s s o r 2 : I n i t i a l i s a t i o n I t e r a t i o n 1 0 1 P2 9 Xi P r o c e s s o r 3 : I n i t i a l i s a t i o n Xi I t e r a t i o n 1 P 2 0 1 9 Xi I t e r a t i o n 2 Figure 2.4: Parallel algorithm for exam ple 1 26 Exam ple 2: m inim ize —(xi — l ) 2 — (x 2 — 1.5) 2 — (x 3 — 2 ) 2 subject to: 3xi + 2 x 2 — ^ 3 < 6, (j 3 x i - 4 x 3 < 3, £2 4 x i + 3 x 2 + 6x 3 < 24, ^3 2 .2 5 x i T 4 x 2 T 3 x 3 ^ 17, £4 x i + 2 x 2 + x 3 < 1 0, (5 x i , x 2, x 3 > 0. A fter th e initialization (interior point p — (1.04,1.37,1.06)), the algorithm re quired three iterations to find all the global solutions (0,0,0), (0,3,0), and (0,0,4) w ith the sam e function value of — 7.250. All the calculations can be found in Table 2.1. 2.5 C o m p u ta tio n a l R e su lts In this section, we report several com putational experim ents. C om putational results for sim ilar problem s to those considered here are also presented in [FH75] and [HT89]. 
2.5 Computational Results

In this section we report several computational experiments. Computational results for problems similar to those considered here are also presented in [FH75] and [HT89]. Our algorithm seems to perform very well in comparison with the results reported in [FH75, HT89], basically due to its parallel character. The largest problem solved in [FH75] is a 12-variable, 19-constraint problem, and the algorithm OAA in [HT89] was run only for problems with n ≤ 12. But, using our parallel algorithm, we can efficiently solve similar test problems of different sizes with up to 50 variables and 30 linear constraints (aside from non-negativity constraints).

Table 2.1: Iterative Results for Example 2

  k = 0:  f(p_i): f(0, 3.5, 1) = -6.00;  f(2.73, 0, 2.18) = -5.27;  f(1, 1.5, 0) = -4.00;  f(1.46, 1.85, 2.10) = -0.35.
          f(v_i): f(5.42, 0, 0) = -25.74;  f(0, 5.42, 0) = -20.32;  f(0, 0, 5.42) = -14.91;  f(0, 0, 0) = -7.25.
          S^0 = {1, 2, 3, 4},  f^0 = -6.00.
  k = 1:  λ: 0.8951, 0.8010, 0.4745, 0.
          z^1: (1.50, 1.23, 0.95), (0.83, 2.18, 0.85), (0.49, 0.65, 3.35), (0, 0, 0).
          Cutting planes: ℓ1, ℓ1, ℓ3.
          New vertices: processor 1: (2.00, 0, 0), (-4.83, 10.24, 0), (2.85, 0, 2.56); processor 2: (-4.83, 10.24, 0), (0, 3.00, 0), (0, 3.81, 1.61); processor 3: (4.24, 0, 1.17), (0, 2.83, 2.59), (0, 0, 4.00).
          Promising vertices: (2.00, 0, 0); (0, 3.00, 0); (4.24, 0, 1.17).
          Lower bounds: -7.25, -7.25, -13.46, -7.25.
          S^1 = {1, 2, 3, 4},  f^1 = -7.25.
  k = 2:  λ: 0.4220, 0, 0.8225.
          z^2: (1.59, 0.58, 0.45), (0, 3.00, 0), (1.61, 1.13, 1.08).
          Cutting planes: ℓ2, ℓ1.
          New vertices: processor 1: (1.00, 1.50, 0), (2.33, 0, 1.00), (1.00, 0, 0); processor 3: (1.46, 1.85, 2.10), (2.73, 0, 2.18), (2.85, 0, 2.56).
          Promising vertices: (0, 0, 4).
          Lower bounds: -6.25, -7.25, -7.25.
          S^2 = {1, 2, 3},  f^2 = -7.25.
  k = 3:  λ = 0.  z^3 = (0, 0, 4.00).  Lower bound: -7.25.  S^3 = {3},  f^3 = -7.25.

First we compare the results of our algorithm on the examples in [HT93, FH75, HT89]:

(1) For the example in [FH75], we have D^0 = {x ∈ R^3 : x_1 ≥ 0.7286, x_2, x_3 ≥ 0, Σ_{i=1}^{3} x_i ≤ 3.1333}, p = (0.8981, 0.0800, 0.7743), global solution (1, 0, 0) with value -1. SA: Iter = 11, Vmax = 7, Vgen = 12, Vtotal = 26, Time = 2.2848; PA: Iter = 4, Vmax = 6, Vgen = 10, Vtotal = 26, Na = (4, 4, 4, 2, 1), Time = 0.3990.

(2) For the example on page 249 or 306 in [HT93], we have D^0 = {x ∈ R^4 : x ≥ 0, Σ_{i=1}^{4} x_i ≤ 5.0755}, p = (0.5, 0.5, 0.15, 2.0), global solution (0.7904, 0.4824, 0, 3.8027) with value -7.6558, redundant constraint: 1.2 x_1 + 1.4 x_2 + 0.4 x_3 + 0.8 x_4 ≤ 6.8. SA: Iter = 4, Vmax = 3, Vgen = 8, Vtotal = 20, Time = 1.9761; PA: Iter = 4, Vmax = 2, Vgen = 8, Vtotal = 20, Na = (5, 2, 1, 1, 1), Time = 0.5932.

(3) For the example in [HT89], we have D^0 = {x ∈ R^5 : x_1 ≥ 0.0715, x_2, x_4 ≥ 0, x_3 ≥ 0.2820, x_5 ≥ 0.3606, Σ_{i=1}^{5} x_i ≤ 12.5720}, p = (1.5, 2.0, 3.0, 0.7, 1.0), global solution (0.4096, 5.6011, 6.1354, 0, 0.4258) with value -21.1220, redundant constraint: 0.7620 x_1 - 0.3048 x_2 - 0.0123 x_3 - 0.3940 x_4 - 0.7921 x_5 ≤ 1.2057. SA: Iter = 7, Vmax = 10, Vgen = 45, Vtotal = 120, Time = 4.6049; PA: Iter = 6, Vmax = 8, Vgen = 35, Vtotal = 130, Na = (6, 4, 4, 3, 3, 2, 1), Time = 1.1898.

All computational results, including Table 2.2, were obtained from MATLAB 4.0 running on a Sun 4/50 (SPARCstation IPX) with 16 MB of memory. The test problems for Table 2.2 are randomly constructed by hand. Here we use a definition of speedup which is the ratio between the time taken by a given serial computer executing the serial algorithm and the time taken by the parallel computer (imitated by the same serial computer) executing the parallel algorithm using N processors. In order to imitate the parallel algorithm on a serial computer, we assume that all active processors have the same computing time in the same iteration. Then the CPU time taken by the parallel algorithm is Time = Σ_i T_a(i)/N_a(i), where T_a(i) is the time taken by all active processors running at the i-th iteration and N_a(i) is the number of active processors at the i-th iteration. In our computational experiments, the performance of PA is quite good in general, and QA often works well when A is symmetric. Table 2.2 shows the results for test problems of different sizes which have the same concave objective function of the form

    f(x) = Σ_{i=1}^{n} ···                                               (2.22)

The interior point p of each test problem is the same for algorithms SA, PA, and QA.
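Before turning to Table 2.2, note that the simulated parallel time and speedup just defined can be computed directly from per-iteration measurements. The snippet below is our own illustration with made-up numbers, not data from the table.

```python
def simulated_parallel_time(T_a, N_a):
    """Imitated parallel CPU time: sum over iterations of
    (total time of active processors) / (number of active processors)."""
    return sum(t / n for t, n in zip(T_a, N_a))

# hypothetical per-iteration measurements for one test problem
T_a = [0.9, 0.8, 0.6, 0.3]     # total active-processor time per iteration
N_a = [4, 4, 2, 1]             # active processors per iteration
serial_time = 2.2

parallel_time = simulated_parallel_time(T_a, N_a)
speedup = serial_time / parallel_time
print(parallel_time, speedup)
```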
T hen, the C PU tim e executed by the parallel algorithm will be T im e = E£=o Ta{i)/ Na(i)] where Ta(i) is the tim e taken by all active processors running a t z-th iteration, Na(i) is th e num ber of active processors at z-th iteration. In our com putational experim ents, the perform ance of PA is p retty good in general, and QA often works well when A is sym m etric. Table 2.2 shows the results for the test problem s of different size which have the sam e concave objective function of the form (2 -22 ) n t=i T he interior point p of each test problem is th e sam e for algorithm s SA, PA, and QA. 29 Table 2.2: C om putational Results for SA, PA, QA No AIro m n Ite r N V m ax Vf?en V to ta l F m ax lim e Sno 1 ...SA "" 5 5 2 4 13 1 ..........1.47 PA 5 3 3 8 3 7 17 4 0.33 1 QA 2 5 1 8 10 7 0.26 2 SA 6 6 4 12 32 ....1 ' 2.26 PA 6 4 4 12 4 16 36 5 0.46 1 QA 4 25 8 47 71 12 0.45 3 SA 11 11 16 75 170 1 6.09 PA 6 5 10 46 40 175 455 6 2.56 1 QA 8 143 80 300 920 20 1.55 4 SA 6 6 28 45 124 1 8.54 PA 21 5 6 24 29 68 181 6 2.02 1 Q A 3 24 11 60 100 13 1.03 5 SA 8 8 6 28 109 1 5.48 PA 7 6 7 32 16 84 260 7 1.85 1 QA 8 102 67 285 755 30 1.65 6 SA 4 4 7 42 76 1 4.82 PA 8 6 5 24 11 48 140 7 1.18 1 QA 5 29 12 89 188 15 1.26 7 SA 10 10 28 163 361 1 16.11 PA 6 7 7 41 32 136 466 8 3.44 1 QA 9 184 87 409 1330 42 2.73 8 SA " ...........10 10 33 210 570 1 25.92 PA 6 8 9 49 48 220 737 9 9.15 1 QA 8 229 124 593 2133 56 5.15 9 SA 6 6 10 137 235 1 20.18 PA 6 9 6 21 7 72 149 10 1.91 2 QA 9 38 32 154 348 19 3.72 10 SA 411 411 401 610 3121 1 297.13 PA 10 9 171 1131 1041 2795 12074 10 151.42 81 QA 87 3068 4750 24888 69003 72 135.24 11 SA 6 6 30 535 581 1 29.59 PA 6 10 6 29 22 168 411 11 2.64 2 QA 9 145 71 806 2031 90 5.97 12 SA 4 4 77 498 626 1 37.76 PA 10 10 3 26 73 320 580 11 4.02 1 QA 2 19 9 88 88 21 1.08 13 SA 24 24 1857 3238 10274 1 1451.80 PA 10 10 11 110 4066 7578 21679 11 544.76 10 QA 7 360 200 1300 4172 100 33.10 14 SA 5 5 73 788 1499 1 98.40 PA 15 12 4 48 81 708 1612 13 7.41 1 QA 11 39 47 456 827 25 9.29 15 SA 6 6 154 1716 2922 1 196.57 PA 19 12 6 52 173 1440 2880 13 14.38 1 QA 13 131 99 857 2000 55 9.28 16 SA 9 9 724 4170 15481 T 1536.90 PA 20 12 10 111 999 4585 22318 13 119.96 1 QA 9 544 697 6277 24311 133 27.77 17 SA 6 6 10 I S r 340 1 54.07 PA 15 15 6 43 28 262 796 16 6.57 1 QA 13 57 66 613 1050 31 26.13 18 SA 4 4 16 181 399 I 57.30 PA 15 15 6 48 46 459 1060 16 9.82 1 QA 4 29 14 185 199 31 10.38 19 SA 22 22 57 338 378 1 52.18 PA 5 20 4 57 77 531 1469 21 7.69 1 QA 1 20 0 0 0 41 1.14 20 SA 7 7 946 9964 26188 1 4393.30 PA 12 20 7 107 2112 21176 39643 21 162.83 1 Q A 5 781 621 11051 25923 381 18.45 21 ' 'S A ......................- _ _ _ _ _ PA 10 30 3 90 289 6000 11248 31 44.22 1 QA 3 63 81 1674 2544 61 33.27 22 SA _ _ _ _ _ _ _ PA 30 30 6 179 4711 31721 83803 31 446.74 1 QA 5 1501 2276 67860 116143 871 54.56 23 SA _ _ _ _ _ _ _ PA 20 40 4 100 391 15400 17680 41 97.78 1 QA 4 2281 1559 58235 126731 1326 53.22 24 SA 7 7 172 6665 15615 1 5969.30 PA 6 50 3 123 138 6200 9490 51 60.71 1 QA 3 101 104 2814 5264 101 80.99 25 SA _ _ _ _ _ _ _ PA 10 50 3 136 317 13650 18794 51 113.66 1 QA 3 103 145 5000 7450 101 45.95 30 SA: serial algorithm described in Section 2.2.1 PA: parallel algorithm described in Section 2.2.2 QA: parallel algorithm described in Section 2.3 m: number of constraints(not including non-negativity constraints) n: number o f variables Iter: number of iterations (not including initialization) N: E l= i A a(i), Na[i) = number of active processors at i-th iteration Vmax: maximal number of vertices 
stored in memory for all processors Vgen: maximal number of new vertices generated by cut in one iteration for all processors Vtotal: total number of newly generated vertices for all processors Pmax: number of active processors at the first iteration Time: approximated CPU-tim e (in seconds) Sno: number of global solutions In our num erical tests, the perform ance of SA, PA, and QA heavily depends on th e choice of the interior point p. However, there are no general rules for this choice. In Table 2.2, SA m ay be not run in some test problem s because SA always requires a significantly large m em ory com pared to the other two m ethods. Essentially, the parallel algorithm introduced in Section 2.2.2, and Section 2.3 is an asynchronous parallel procedure because it is not necessary to wait till the o ther processors finish in each iteration for each processor. The only d a ta needed to be transferred am ong these processors is the new upper bound obtained at th e end of each iteration. Thus, th e com m unication overhead seems not to take m uch tim e. A lthough they are independent subproblem s, the prom ising vertices of some subproblem s m ay overlap some tim es. Notice th a t it can not guarantee th a t a p articular num ber of subproblem s needs to be solved sim ultaneously at each iteratio n (i.e. the num ber of active processors m ay vary anyw here from 0 to th e m axim al num ber a t each iteration). For exam ple, th e problem 12 in Table 2.2, the num ber of active processors in each iteration was 11 (in itial),10,10,6 for PA, and 21 (in itia l),10,9 for QA. Actually, it is possible th a t the inactive processors could share com putation w ith the active ones which need to generate m ore new vertices. As a result, the percentage of processor utilization can be im proved. Finally, there is one point w orth noting in the practical im plem entation of our algorithm , i.e. in some com putational results of Table 2.2, QA w ith n 2 + 1 proces sors takes m ore tim e th an PA w ith n + 1 processors since the sam e vertex perhaps appears repeatedly in some processors. In order to avoid too m uch overlapping am ong the com putation of processors, the initial enclosing polyhedron m ust be carefully partitioned. 2.6 E x te n sio n to A G en eral C o m p a ct C on vex S et As we know, nonlinear convex set can be assum ed to be constructed by infinite num ber of linear constraints. Therefore, we can apply th e algorithm m entioned in this chapter to the problem of m inim izing concave function subject to convex set where it term in ates until the enclosing polyhedron is sufficiently close to the feasible set. In other words, given an appropriate tolerance e > 0, the algorithm will stop if A < e. T hus, an approxim ately global solution can be obtained in finite iterations. 32 Consider th e constrained global concave m inim ization problem (CV) m inim ize f(x) subject to D f) G where f is a concave function on 5?", D = {x € 5R " : Ax < b, x > 0}, G = {x € : gi(x) < 0 , i = gi is a convex function on 5 R ” whose gradient is continuous, A is an m x n m atrix, b is an m -vector , and D f ) G is bounded. We also assum e th a t D f] G ^ < f> , and there is a strict interior point in the feasible set. Now, we apply the algorithm described in Section 2.2.2 w ith the following m odifications which incorporate the idea of Hoffman[Hof81] to solve problem (CV). • C om pute m in Xj (j = 1,2,..., n) s.t. x (E D f]G m ax 2 " =1 Xi s.t. 
x G D fj G (2.23) by K elley’s m ethod [Kel60] w ith old cut deletion procedure proposed by Topkis[Top70] and Eaves and Zangwill [EZ71]. • For the nonlinear constraints, the golden section algorithm is used to obtain A. • In the algorithm , the cutting plane is either a,-x — 6; = 0, an active linear constraint in Ax — b = 0 or gj(z^) + S7gj(z^)T(x — zf) = 0, the linearization of a nonlinear constraint gj(x). • Given a tolerance num ber e > 0, the algorithm stops if A < e 33 In order to illustrate how extend the m ethod developed in this paper to com pact convex sets, we give a sm all exam ple as follow: E xam ple 3. m inim ize — (xi — 2)2 — (x2 — 1.5)2 — 1 subject to: —28xi + 9^2 + 21 < 0, C i 9x2 - 72xi + 16x2 < 0, C 2 x 2 + x | — 16 < 0 , ( 3 64x2 - 192®i - 36x2 + 153 < 0, C - i x i — 3x2 < 0, C 5 4x2 — 12xi + x\ — 2x2 + 9 < 0, C 6 xi,x2 > 0. Applying K elley’s m ethod on 3 processors, we have pi = (1.0094,0.8070), = (1.2072,0.4024), and p3 = (1.7236,1.8944) yielding function values of — 2.4615, — 2.8332, and —1.2320 respectively. Therefore, f° = — 2.8332, and th e initial enclosing sim plex D° is constructed w ith vertices V\ — (3.2156,0.4024), u2 = (1.0094,2.6086), and v3 = (1.0094,0.4024). N° = < f> , since f { Vl) = -3.6825, f{v2) - -3 .2 1 0 4 , f(v 3) - -3 .1 8 6 0 < / ° , i.e., 5 ° ={1,2,3}. Let interior point p = (1.3134,1.0346) and e = 10-6 . A dditional iterations are described in Table 2.3. 34 Table 2.3: Iterative R esults for Exam ple 3 k i P r o c e s s o r s (i) 2 3 S k sk 0 /( P i) f( 1.009,0.807) = -2 .4 6 1 5 1(1.207,0.402) = -2 .8 3 3 2 1(1.724,1.894) = -1 .2 3 2 0 {1,2,3} -2 .8 3 3 2 / ( « i) f(3.216,0.402) = -3 .6 8 2 5 1(1.009,2.609) = -3 .2 1 0 4 1(1.009,0.402) - -3 .1 8 6 0 A 0.6439 0.7150 0.2455 C o n stra in t Cs <1 C 4 /(*,») 1(1.9908,0.8095) = -1 .4 7 6 9 1(1.2268,1.4832) = -1 .5 9 8 2 1(1.0840,0.5576) = -2 .7 2 7 1 1 C u ttin g P lan e 3 .9 3 * i - 0 .3 8 * 2 < 7.51 — 2 8 * i 4" 9 * 2 < - 2 1 - 5 3 .2 4 * i - 36.00*2 < -7 7 .7 9 {3} -2 .8 3 3 2 New V ertices (1.9513,0.4024) (2.0632,1.5549) (1.4476,2.1704) (1.0094,0.8070) (1.1890,0.4024) (1.0094,0.6680) T h e P ro m isin g V ertices (1.1890,0.4024) Low er B ound -2 .2 0 7 1 -2 .4 6 1 5 -2 .8 6 2 5 A 0.0255 C o n stra in t <4 /( * ? ) 1(1.1921,0.4185) = -2 .8 2 2 3 2 C u ttin g P lan e -3 9 .4 1 * 1 - 3 6.00*2 < -6 2 .0 4 {3} -2 .8 3 3 2 New V ertices (1.1381,0.4777) 11.2068,0.4024) T h e prom isin g V ertices (1.2068,0.4024) Lower B ound -2 .8 3 3 8 A 5.1696X 10“ 4 C o n stra in t <4 n*i) 1(1.2069,0.4027) = -2 .8 3 3 0 3 C u ttin g P lan e - 3 7 .5 2 * i - 36.00*2 < -5 9 .7 8 {3} -2 .8 3 3 2 New V ertices (1.1995,0.4104) (1.2072,0.4024) T h e p rom ising V ertices (1.2072,0.4024) Low er B ound -2 .8 3 3 2 A 2 .3 5 4 0 x 1 0 “ ' 4 C o n stra in t C 4 < t > -2 .8 3 3 2 /( * ? ) 1(1.2072,0.4024) = -2 .8 3 3 2 35 2 .7 C on clu sion In cutting plane algorithm s for solving concave m inim ization problem s, the num ber of new vertices generated by cut in each iteration is rapidly increasing w ith n (cf., e.g., [RHdV88, FH75, FH86, Thi89, HT89]). T he serial algorithm proposed here, of course, requires m uch tim e in com putation and a large m em ory in storage. For each processor in our parallel algorithm , however, we decrease both th e num ber of newly generated vertices and the storage m em ory by using both m ethod of partitio n and com parison of the upper bounds of all subproblem s. 
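As indicated in the second bullet above, λ for a nonlinear constraint is found by a one-dimensional search along the segment from the current vertex toward the interior point p. The dissertation uses a golden-section search; the sketch below is our own bisection-based stand-in for the same boundary-crossing computation, in Python.

```python
import numpy as np

def boundary_lambda(x, p, g_list, tol=1e-8):
    """Find lambda in [0, 1] with max_i g_i(x + lambda*(p - x)) ~= 0,
    i.e. where the segment from x to p crosses the boundary of
    G = {y : g_i(y) <= 0 for all i}.  Assumes x lies outside G and p is a
    strict interior point, so the crossing exists along the segment
    (each g_i is convex).  Bisection stand-in for the golden-section
    search used in the text."""
    phi = lambda lam: max(g(x + lam * (p - x)) for g in g_list)
    lo, hi = 0.0, 1.0                 # phi(lo) > 0 >= phi(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# tiny usage example with a single convex constraint g(y) = ||y||^2 - 1
g = lambda y: float(np.dot(y, y) - 1.0)
lam = boundary_lambda(np.array([2.0, 0.0]), np.array([0.0, 0.0]), [g])
print(lam)   # ~0.5, since the segment crosses the unit circle at (1, 0)
```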
C om putational results in Table 2.2 have shown the efficient perform ance of our parallel algorithm in m inim izing a concave function over a polyhedron constraint set by the cutting plane m ethod, due to the reasonably short com puting tim e. N otice th a t the algorithm s developed here are form ulated for general concave functions, not only for quadratic concave functions shown in Section 2.4,2.5,2.6. In Section 2.6, we extend th e concave m inim ization problem s w ith a constraint set from a bounded polyhedron to a general com pact convex constraint set, but only the sm all problem s are tested. Therefore, it seems to be m ore im p o rtan t for a greater variety of num erical tests of such problem s w ith a large n. Actually, it is one of our future work. In the future, we believe th a t parallel algorithm s will play a significant role in solving the nonconvex optim ization problem s, especially in im proving th e perform ance of com puting in large scale nonconvex optim ization problem s. 36 C h a p ter 3 R e v e r se C o n v ex P ro g ra m m in g 3.1 In tr o d u ctio n In th e literatu re on nonconvex optim ization problem s, reverse convex program s, a problem closely related to concave m inim ization (cf. [HT90, Hof81, RHT87, HT89, FH75, FH86, LP94, PR86, Thi89, Tuy64, Zwa73]), has a ttrac ted the attention of a num ber of authors ([DB89, G J91, Hil75, H J80b, H J80a, Tuy87, Ros66]) since Rosen [Ros66] first studied it. T he Problem of linear program s w ith an additional reverse convex constraint is an interesting problem in reverse convex program s. Essentially, the feasible regions (i.e. intersection of a polyhedron and the com ple m entary set of a convex set) for this class of optim ization problem s are nonconvex and often disconnected, and such feasible set results in the com putational diffi culty. In recent studies for linear program s w ith one additional reverse convex con stra in t, H illestad [Hil75] developed a finite procedure for locating a global m ini m um . H illestad and Jacobsen [HJ80a] gave the characterizations of optim ization solutions and provided a finite algorithm based on these optim ality properties. 37 Subsequently, T huong and Tuy [TT84] proposed an algorithm involving a se quence of linear program m ing steps and concave program m ing steps. To increase efficiency, an outer approxim ation m ethod in [HT90, p.490] was used for the above concave program s. In addition, P ham Dinh and El Bernoussi [DB89] im proved both the results and algorithm s described by H illestad and Jacobsen [HJ80a], Thuong and Tuy [TT84], For the procedure of Tuy cuts [Tuy64], G urlitz and Ja cobsen [GJ91] showed th a t it ensures convergence for two dim ensional problem s b u t not for higher-dim ensional problem s. They also modified the edge search procedure presented by H illestad [Hil75]. However, all of them are known to be rath e r tim e-consum ing. Since the com putational effort required strongly depends on the size of th e problem and its type (e.g. linear objective function or reverse convex constraint), we m ust create a sim pler algorithm and investigate m ethods of speeding up th e algorithm in order to lower the com putational effort. A prom ising approach is to design a parallel algorithm for problem (LR C P). In this chapter, we develop two new algorithm s - serial and parallel algo rithm s to solve linear program s w ith an additional reverse convex constraint. 
Basically, th e serial algorithm can be regarded as a m odification of algorithm 1 (first version) in P ham Dinh and El Bernoussi [DB89]. B ut, our serial algo rith m based on Theorem 6, cutting plane m ethods and th e outer approxim ation schem e w ith a sim pler polyhedron S° (see Section 3.4) m ay be m ore efficient in m any problem s. G enerally speaking, the serial algorithm presented here has to solve both linear program s and concave m inim ization subproblem s as the m eth ods m entioned in [HT90, p.490][TT84]. T he algorithm in [HT90, p.490] seems to be m ore efficient th an th a t in [TT84] because the latter requires m ore work 38 to solve th e concave program m ing subproblem , due to its lack of the outer ap proxim ation technique. In fact, for the outer approxim ation m ethod, a solution of the concave m inim ization problem can frequently be found before revealing all the extrem e points of the feasible set. Essentially, the algorithm in [HT90, p.490] constructs a decreasing sequence of Sk, i.e. Sk C Sk-i C ... C So D D (D is a bounded polyhedron, see Section 3.2), where Sk — Sk-i U ( i 6 S R " : h(x) < 0} or Sk = Sk- i U {a; € 5R " : cx < cxk) (h(x ) denotes a linear constraint of D, Xk E D, and c is a cost vector). However, such construction of the Sk m ay require us to do a rath er expensive com putation. For exam ple, when |V(5fc)|, num ber of th e vertices in Sk, is very large (it often occurs if n is large), the com plexity in the com putation of V^Sfc+i) m ay be increasing considerably, in particular for form ing Sk+i = Sk U {x € 5 R n : cx < cxk}. Also, th e storage of th e vertices is a big problem . In our serial algorithm , the reconstruction of a polyhedron S° (see Section 3.3) is proceeded as soon as a feasible point has been detected in step 2 of P hase II. W ith this construction of 5°, the am ount of work required to calculate th e new vertices m ay be lower th an th a t in [HT90, p .490]; and the storage of the vertices can also be reduced. A lthough a sim ple construction of 5'° is used, th e com putation of generating new vertices is still the m ost expensive portion in th e serial algorithm . To rem edy this problem , we develop a parallel algorithm based on the serial algorithm , which is easily im plem ented for parallel processing. In Section 3.6, the com putational results constitute a very im portant p art of the present work since they em ploy a parallel m achine (DELTA) to dem onstrate th at th e parallel algorithm is efficient for the tested problem s. For exam ple, th e par allel im plem entation w ith 16 processors th a t we presented resulted in an average 39 speedup of 21.4 for 50 random ly created problem s of the sam e size (32 constraints and 16 unknow ns). Also, th e tested problem s w ith a num ber of variables up to 80 and 63 linear T he organization of this chapter is as follows. In Section 3.2, we state th e ba sic properties of optim al solutions for linear program s w ith an additional reverse convex constraint. Section 3.3 is devoted to the descriptions of the algorithm s. Section 3.4 discusses th e details of the im plem entation of our algorithm s. In Section 3.5, two exam ples are presented to illustrate both serial and parallel al gorithm s. Finally, in Section 3.6, a num erical report including both serial and parallel algorithm s running on the parallel m achine DELTA is given on m ore than hundred test problem s w ith variables up to 80. 
3.2 Problem Statement and Fundamental Properties

In this section we introduce the main results concerning the characterization of optimal solutions of the linear program with an additional reverse convex constraint (LRCP). Consider the following problem:

(LRCP)  Minimize {cx : x ∈ D ∩ G}

where D = {x ∈ ℝⁿ : Ax ≤ b, x ≥ 0}, A is an m × n matrix, b ∈ ℝᵐ, and G = {x ∈ ℝⁿ : g(x) ≥ 0} with g a finite convex function defined throughout ℝⁿ. Assume that D is bounded and D ∩ G ≠ ∅. For any nonempty polyhedral set D ⊂ ℝⁿ we denote by V(D) the set of vertices of D and by E(D) the set of edges of D, and ∂G denotes the boundary of any nonempty set G ⊂ ℝⁿ.

Theorem 3 ([HJ80a]) Let D be a bounded polyhedron, and denote the convex hull of D ∩ G by conv(D ∩ G). Then: (i) conv(D ∩ G) is a bounded polyhedron and conv(D ∩ G) = conv(E(D) ∩ G); (ii) an optimal solution of (LRCP) lies in the set E(D) ∩ G.

Theorem 4 ([HJ80a]) Let D be a bounded polyhedron, y ∈ D ∩ G and D(y) = {x ∈ D : cx ≤ cy}. If for every z ∈ V(D(y)) we have (i) g(z) < 0, or (ii) g(z) = 0 and cz = cy, then y is an optimal solution of (LRCP).

Definition 2 ([HT90],[DB89]) The reverse convex constraint G = {x ∈ ℝⁿ : g(x) ≥ 0} is called essential in problem (LRCP) if min{cx : x ∈ D} < min{cx : x ∈ D ∩ G}.

Corollary 1 (see e.g. [HT90]) If the constraint G = {x ∈ ℝⁿ : g(x) ≥ 0} is essential in (LRCP) and D ∩ G ≠ ∅, then there is an optimal solution of (LRCP) lying on E(D) ∩ ∂G.

Proof: Let x̄ be an optimal solution of (LRCP). According to Theorem 3, x̄ is a vertex of conv(E(D) ∩ G) = conv(E(D) \ int Gᶜ), where int Gᶜ is the interior of Gᶜ, the complementary set of G. If x̄ ∉ ∂G, then g(x̄) > 0 and x̄ must be a global solution of min{cx : x ∈ D}; but G is essential in (LRCP), so this is impossible. It follows that g(x̄) = 0, i.e. x̄ ∈ ∂G. □

Note that if the constraint G is not essential, then (LRCP) is equivalent to the ordinary linear programming problem min{cx : x ∈ D}. In [DB89] it is pointed out that if G is essential in (LRCP), then the sufficient condition in Theorem 4 is also necessary.

Theorem 5 ([DB89]) Let D be a bounded polyhedron and let x* ∈ D be such that g(x*) = 0. Let D(x*) = {x ∈ D : cx ≤ cx*}. If the constraint G is essential in (LRCP), then a necessary and sufficient condition for x* to be an optimal solution of (LRCP) is that for each v ∈ V(D(x*)) we have (i) g(v) < 0, or (ii) g(v) = 0 and cv = cx*.

Theorem 6 Assume that D ∩ G ≠ ∅ and G is essential in (LRCP). Let v ∈ D and D(v) = {x ∈ D : cx ≤ cv}. For a point v* ∈ D(v) ∩ G to be an optimal solution of problem (LRCP), a sufficient condition is

max{g(x) : x ∈ D(v)} = 0  and  cv* = min{cx : g(x) = 0, x ∈ D(v)}.

Proof: If max{g(x) : x ∈ D(v)} = 0 and cv* = min{cx : g(x) = 0, x ∈ D(v)}, then g(x) ≤ 0 for every x ∈ D(v); hence every x ∈ D(v) ∩ G satisfies g(x) = 0, so cx ≥ cv*. Moreover, every point of D ∩ G outside D(v) has cost greater than cv ≥ cv*. Hence v* is an optimal solution of (LRCP). □

Remark: If v* is optimal, then the sufficient condition in Theorem 6 is also necessary when v = v*. Note that max{g(x) : x ∈ D(v)} is equivalent to min{−g(x) : x ∈ D(v)}, which is a concave minimization problem; hence only the vertices of D(v) need to be considered.

Corollary 2 Assume that D ∩ G ≠ ∅ and G is essential. Let v ∈ D and D(v) = {x ∈ D : cx ≤ cv}. If a point v* ∈ D(v) ∩ G satisfies max{g(x) : x ∈ V(D(v))} = 0 and cv* = min{cx : g(x) = 0, x ∈ V(D(v))}, then v* is an optimal solution of problem (LRCP).
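The vertex test of Theorem 6 and Corollary 2 is straightforward to carry out once the vertex set V(D(v)) has been enumerated. The sketch below is an illustrative Python fragment, not part of the original implementation; the function name and the tolerance are assumptions.

    import numpy as np

    def corollary2_certificate(vertices, g, c, tol=1e-8):
        # vertices: list of points of V(D(v)); g: the convex function in the
        # reverse convex constraint; c: the cost vector.
        gvals = np.array([g(v) for v in vertices])
        if gvals.max() > tol:
            return None                      # max g over the vertices is not 0
        boundary = [v for v, gv in zip(vertices, gvals) if abs(gv) <= tol]
        if not boundary:
            return None                      # no vertex lies on the surface g(x) = 0
        # v* minimizes cx among the vertices lying on g(x) = 0
        return min(boundary, key=lambda v: float(np.dot(c, v)))

The function returns a certified optimal vertex v* when the sufficient condition holds, and None otherwise, in which case the algorithm continues with further cuts.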
D e fin itio n 3 ([T u y 8 6 b ]) Problem (LRCP) is said to be stable if m in {cx : x G D,g(x) > e} 4 - m in {cx : x G D,g(x) > 0 } as e } 0 If we assum e th a t problem (LR C P) is stable, then cv* = cv in Theorem 6 and we can rew rite it as following. T h e o r e m 7 ([T u y 8 6 b ]) Let D f \ G c j) and G be essential. If problem (LRCP) is stable, then a point v* £ D f)G is an optimal solution to problem (LRCP) if and only if m a a : { g ( a : ) : x G D(v*)} — 0 43 3.3 A lg o rith m D esc r ip tio n In this Section, we present two algorithm s — serial and parallel algorithm s for (L R C P). B oth algorithm s are based on Theorem s 3 & 6 and Corollary 1, and prim arily m ake use of th e outer approxim ation schem e and the cutting plane m ethod. 3.3.1 Serial Algorithm Initialization Step 0: Let x 0 solve M in{cx : x £ D} and v0 £ V ( D ) f) G. Set k =0. Step 1: If g(^o) > 0, stop; x0 is optim al to (LR C P). O therw ise startin g from x 0, pivot via the sim plex algorithm for solving the linear program m ing max{(uo — x0)a; : x £ D j until a pair of vertices V\ and v2 are obtained such th a t g ^ ) < 0 and g(w2) > 0 Step 2: Solve th e line search problem m inim um a subject to g(vi + a(v2 — ui)) > 0 0 < a < 1 and set z0 = Vi + a (u 2 — ^i) where a is an optim al value of the line search problem . Step 3: Find a polyhedron S° containing vertex xo such th a t D(z0) = {x E D : cx < czQ } C S q C {a: £ 5in : cx < cz0} , delete the redundant constraints according to F (5 ° ) and go to Phase II. 44 P h a s e I S te p 1: S tarting from Xk, pivot via the sim plex algorithm for solving the linear program m ing M in{cx: x£ D(zk-i)} until a pair of vertices V\ and v2 are obtained such th a t g(ui) > 0 and g(u2) < 0. S te p 2 : F ind 0 < a < 1 such th a t g(ui+a(i> 2 — uj)) = 0 and set Zk — v 1+a(v2—vi), i.e. Zk is th e intersection of [vi,v2] w ith th e surface g(a;) = 0. S te p 3: Find a polyhedron S '® of vertex x 0 such th a t D(zk) = {x £ D : cx < czk} C S'® C {a: £ 5?n : cx < czk} (such a polyhedron will discuss later) and delete th e redundant constraints according to V (S°). P h a s e I I Let i = 0. S te p 1: Set V = {v \ g(u) = 0, for v £ V(S£)}. If max{g(v) : v € F (S |.)} = 0 and V C D(zk), th en stop: v* € argMin{cx : g(a:) = 0 ,x € V} is globally optim al for (L R C P). O therw ise, go to Step 2. S te p 2: (a ) If there is a v € argM in{cu : v £ V(S'k), g(u) >0} such th a t v £ D, then set Xk+i = v, k = k + 1, and go to Phase I. O therw ise, go to (b). (b ) If exists v £ argMax{g(v) : v £ V ( S % k), g(u) >0} such th a t v D, then find a constraint pjx — qj < 0 of D which is the m ost violated by v and set S fc+1 = {x £ S'k : pjx — qj < 0}. C om pute th e vertex set V ).I+1 of S '(.+1(from knowledge of V ^.*), and let i = i + 1, go to Step 1. 45 Rem arks • Search for any vq G V { D )f\G in Step 0 by using the m ethods described in H orst and Tuy [HT90] or T. P ham Dinh and El Bernoussi [DB89]. If no such Vo exists, then th ere is no feasible solution. • If .T o is a degenerate vertex of D, then select another vertex x'0 such th a t t 0 is nondegenerate and g(xo) < 0. If no such vertex exists, apply the procedures discussed in Section 3.4. • Step 1 of Phase II is based on Theorem 6. For an unstable problem like Figure 3.1, Theorem 6 will be m ore efficient th an Theorem 4, 5. CX DC\G Figure 3.1: An Exam ple for U nstable Problem 46 3.3.2 Parallel Algorithm I n itia liz a tio n S te p 0: Let x° solve M in{cx : x G D} and v° G V(D )f]G . Set k =0. 
S te p 1: If g(x°) > 0, stop; x° is optim al to (LR C P). O therw ise find x0t, (i = 1 ,2 ,... ,n) the n neighboring vertices of a:0 (if a:0 is not a nondegenerate vertex of D, then select another vertex x° such th a t g(x° < 0). Let dP' — v° — x°l, i—1,2, . . . ,n. D o a ll i = l,n if g(xoi) < 0 then startin g from x °\ pivot via the sim plex algorithm for solving th e linear program m ing m ax{d°‘a: : x G D} until a pair of vertices v 1 1 and v'2 are obtained such th a t g(u!l) < 0 and g(vt2) > 0. o th e r w is e set v'1 — x° and v'2 = x°‘. Solve th e line search problem m inim um a subject to g(u!l + a(v'2 — vl1)) > 0 0 < a < 1 and set zf = v'1 + a(v'2 — u11) where a is an optim al value of the line search problem . E n d a ll Choose z° = M in{cz°, * = 1 , 2 , . . . , n}. S te p 2: Find a polyhedron Sf] of vertex x° such th a t D(z °) = {x € D : cx < 4 7 cz0} C Sq C {a: G 3?" : cx < cz0} and delete the redundant constraints according to V(Sq) and go to Phase II. Phase I Step 1: For point x k, find its n neighboring vertices x k\ i = 1 ,2 ,... ,n . I f g(xk) > 0 D oall i= l,n if g(xk') < 0 th en set v'1 = x k and vx2 = x kt. O therw ise, starting from x kl, pivot via the sim plex algorithm for solving the linear program m ing M in{cx: x£ D(zk-i)} until a pair of vertices vxl and v'2 are obtained such th a t g(ul1) > 0 and g(vt2) < 0. Find 0 < a < 1 such th a t g(vn + a(vl2 — v'1)) = 0 and set z k = v'1 + a(v'2 — u '1), i.e. zk is th e intersection of [i;11,^ '2] w ith the surface g(x)= 0. Endall Else Doall i= l,n if g(x ki) > 0 th en set v'1 — x k and vl2 — x kt. O therw ise, set d' — v° — x kt, startin g from x kl, pivot via the sim plex algorithm for solving th e linear program m ing m ax{d‘x: x 6 D(zk-i)} until a pair of vertices vxl and v'2 are obtained such th a t g(utl) < 0 and g(vl2) > 0. Find 0 < a < 1 such th a t g(u!l + a(vl2 — v'1)) — 0 and set z k = v'1 + a(v'2 — vn ), i.e. zk is the intersection of [u'1,!;'2] w ith the surface g ( x ) = 0 . Endall End Step 2: Find a polyhedron Sk of vertex a ;0 such th a t D(zh) — {x £ D : cx < czk} C S “ C { x G 3?" : cx < czk} and delete th e redundant constraints according to V (S k), where czk = m in{czk : i = 1 ,2 ,... ,n } Phase I I Let j = 0. Step 1: Set V = {v : g(u) = 0, for v G V^S*.)}. If m a x { g(u) : v G V ,(5'j[.)} = 0 and V C D(zk), then stop: v * G argMin{cx : g(ar) = 0 , i G V} is globally optim al for (L R C P). O therw ise, go to Step 2. Step 2: (a) If there is a feasible vertex v G argM in{cu : v G V (S k), g(v) >0} such th at v G D, then set x k+1 = v, k = k + l, and go to Phase I. O therw ise go to (b). (b ) If exists v G argM ax{ g(u) : v G V (Sk), g(v) >0} such th a t v $ D, then find a constraint prx — qr < 0 of D which is the m ost violated by v and set s[+1 = {a: G S[ : prx — qr < 0}. C om pute the vertex set V ^ +1 of S'^+1(from knowledge of Vf), and let j = j + 1, go to Step 1. Remarks • (1) T he selection of a:0, exploration of v° and construction of S° in parallel algorithm are th e sam e as the serial algorithm ’s. • (2) T he (b) of step 2 in phase II is executed by a parallel com putation. Let = i v ^ V{Sl) ’ • Vrv ~ Q r > 0 ), where prv — qr — 0 is a constraint of D. D enote by |V + (5^)| the num ber of th e vertices of V +(SJ k). For a cutting 49 plane prv — qr — 0 and |V + (5^)| = M , the new vertices can be obtained by each processor executing th e vertex generating procedure (e.g. [RHdV88]) tim es where n is the num ber of processors and [M / n ] is the sm allest integer great th an M /n. 
• (3) Similarly, the function values of g(u), where v G V (S k), can be com puted in the sam e way as above. L e m m a 3 For step 1 of phase I, if serial and parallel algorithms start from the same point x (i.e. x k = x k), then czk(parallel) < czk(serial), i.e. D (zk) C D(zk). Proof: (1) If x is a nondegenerate point, then we can find exactly its n adjacent vertices. Starting from these points, the edge search paths via the sim plex algo rith m will include the path in the serial algorithm . It im plies czk < czk. (2) If x is a degenerate point, then we choose n neighboring vertices (n edge search paths) including the path startin g from x. From (1) & (2), we know czk < czk. D L e m m a 4 {czk} in serial algorithm is a decreasing and finite sequence. See H illestad and Jacobsen [HJ80a], T h e o r e m 8 The serial and parallel algorithms find an optimal solution for prob lem (LRCP) in a finite number of steps. Proof: (i)From corollary 1, we know th a t there is an optim al solution for (LR C P) lying on E (D ). Now E(D) is finite since the num ber of constraints of D is finite, (ii)A t the step 3 of phase I, the polyhedron S° containing the global solution of 40 (LR C P) and the num ber of the linear constraints of D (zk) which is finite imply th e num ber of cutting related to S° is finite. Hence these two algorithm s will converge to an optim al solution in a finite num ber of steps. □ 3.4 D isc u ssio n o f Im p lem en ta tio n In th e algorithm s presented in Section 3.3, the outer approxim ation m ethod is applied and its efficiency heavily depends on both construction of the polyhedron S° and calculation of th e vertex set V (S k). A lthough the new vertices generated by cuts is th e m ost expensive com putation in the algorithm s, it can be done in parallel. Thus, we will focus on the construction of S°. In general, the efficiency will increase if a suitable containing polyhedron is constructed. According to th e techniques of outer approxim ation and of cutting plane the best choice of S° should be th a t it m ust be sim ple, be closed enough to D(zk) and have a small num ber of vertices. Therefore, we would like to create a polyhedron S° m entioned in step 3 of Phase I by using the sam e polyhedral cone (fixed constraints binding a t x0 or x°). It is sim pler th an the construction described in [DB89] since the la tte r has to find the n adjacent vertices and a linear variety generated by these n points, th en to solve a linear program each tim e. D enote by J(v) the index set of all constraints th a t are active at v (where v = xq or x°), i.e. J(v) = {i E I : PiV - qi = 0} (3.1) where / C IN is a finite index set. 51 i) If v is a nondegenerate vertex of D, then J(v) contains the indices of exactly n linearly independent constraints p{X — qi = 0, i € -/(w) and the set of inequalities PiX 7.' < 0, i G J(v) (3.2) defines a polyhedral cone vertexed at v. Therefore, the polyhedron (sim plex) S° is defined as follows: S° — {x G 5 R n : PiX — qi < 0 , i G */(*>)} P | {x G 5?" : cx < czk or czk}; f/ (5'°) = {w, ui,..., vn}. where v i , . . . , v n can be com puted by the m ethods described in T. Pham Dinh and S. El Bernoussi [DB89]. Here we utilize the procedure of Hoi st, Vries and Thoai [RHdV88] to generate them . A lthough S° is a simplex in m ost case, it could be unbounded. In an unbounded case, we can pron>ed w ith the following th ree m anners: • Try another v such th a t S° is bounded if there is a nondegenerate v G V(D) and g(u) < 0. 
• Apply the m ethods m entioned in P ham Dinh and El Bernoussi [DB89] to construct a bounded approxim ation of S°. • Use an approxim ate cost vector cc instead of c. Since S° is unbounded, the num ber of vertices generated by the cutting hyperplane cx — czk = 0 will be less th an n. We replace Sj = 0 by Sj — t (e > 0) for some j in the procedure of H orst et al. [RHdV88] and perform a pivot operation 52 such th a t there are n vertices v[,..., v'n to be generated. Hence we may have an approxim ate cutting plane ccx + (3 = 0 which passes through these n points. ii) If v is a degenerate vertex of D, then |J (u )| > ? ? ., i.e. m ore than n linear constraints binding a t v and in this case, we m ay apply the algorithm for finding all vertices of a given polytope to com pute u; (cf. [Mat73],[DP77],[MR80]) or proceed w ith the m ethods m entioned in P ham Dinh and S. El Bernoussi [DB89]. We use the algorithm of M atthess [Mat73] to determ ine V{ (N ote th a t since the algorithm of M atthess [Mat73] needs to m aintain a list stru ctu re, storage m ay be a problem for a large n). T he redundant constraints m ay be deleted by the approach in [RHdVSS] after each construction of 5°. For the determ ination of the vertex set V (S J k) in the cutting plane procedures, these algorithm s used in the above i) and ii) are enough to perform . In addition, recall th a t convex m axim ization problem (i.e. concave m inim ization problem ) always obtains its optim al solution at some vertex. Ilence, to save storage m em ory and accelerate (b) of step 2 in Phase II, only vertices having nonnegative convex function values need to be stored. Let Sk be a bounded polytope defined by the linear inequalities V (5'°) = {v, v^, i = 1, 2, . . . , p} where p > n. (3.4) Sk — : Pix ~ Q i < 0, i € /} (3.5) 53 Let h(x) = pjX — qj = 0 ,j G I be a cutting hyperplane and V^+(S®) = {u : g(u) > 0, v G V (5°)}. Let 14 + (S®) = {bG : h(v) > 0}. According to (b) of step 2 in P hase II, a necessary condition for any constraint h(x) of polyhedron S '® to be a c u ttin g plane is 14 +(S'®) ^ < f. D e fin itio n 4 A cutting plane h(x) is valid for the polyhedron S° ifVf+(S°) ^ (j). Otherwise, it is invalid. Because only the vertex set lg+ (S °) of I4(S°) rath er th an V'(S'°) is useful for com putation, the constraints which is im possible to be a cutting plane can be elim inated by the following lem m a. L e m m a 5 Let V^+(S'^) = {v : g(v) > 0 ,u € I4(S®)}. Then a constraint h(x) of S® is invalid for S® if and only if h (v)< 0, V u e Vs+(S°k). (3.6) 3.5 E x a m p les To illustrate the algorithm s presented in Section 3.3, we are going to give two exam ples . Also, in these two exam ples, th e com parison of both serial and parallel 54 algorithms will be reported. E xam ple 1: m inim ize — 2xx + 3^2 subject to: — 3x! + x 2 < 0, —Ax\ — x 2 < — 7, 3xi + 2x2 < 23, 5xi — 4x2 < 20, 2xi + 3x2 < 22, —6xi — 9x2 < —18, — 3xi + x 2 < 10, Xi,x2 > 0. g(x) = x\ + x\ — 8xi — 4x2 + 13.75 > 0 serial algorithm : Initialization: we obtain x0 — (4,0) by solving the linear program min{cx : x 6 D} and let v0 = (2,6). k = 0 : startin g from x0, step 1 finds Ui = (5,4), v2 — (2,6) and step 2 solves z0 = (4.2723,4.4851). Step 3 constructs a sim plex S q = {x G 5?2 : 5xi — 4x2 < 20, x 2 > 0, —2xj + 3 x 2 < 4.9107} w ith vertices (4,0), (-2 .4 5 5 4 ,0 ), (11.3776,9.2220) and delete a redundant constraint — 3xx + x 2 < 10. Go to Phase II. In Phase II, Since max{g(v) : v £ ^(*50)} > 0 and no v satisfies (a) of Step 2, S q = {x £ S q : 3xj + 2x2 < 23}. 
T hen, go to step 1 of Phase II and obtain the sam e result as above. Form S q = {x £ S q : —4xj — x2 < — 7} and find xi = (1.1492,2.4031) £ D in (a) of ste p l. Go to Phase I. A ; = 1 : In Phase I, Step 1 finds Vi = (1.5,1), and v2 = (3,0), Step 2 solves z\ = (1.8108,0.7928), and Step 3 determ ines the sim plex 5 ° = {x £ 5 ft2 : 5xi — 4x2 < 20, x2 > 0, —2xj + 3 x 2 < —1.2431} w ith vertices (4,0), (0.6215,0), (7.8611,4.8264) and deletes a redundant constraint — 3xi + x 2 < 0 . Similarly, executing the 55 procedure of Phase II, we have P ( 5 'i) = {(4,0), (5.4989,3.2516), (6,2.5), (1.8108, 0.7928), (3,0)}. T hen Step 1 verifies th a t v* = z\ = (1.8108,0.7928) is an optim al solution because m a x { g(u) : v G F (S 'i)} = 0 and only g(zi) = 0. parallel algorithm: Initialization: In Step 0 we also let x° — (4,0) and Vq = (2,6). k = 0 : Step 1 finds a;0 = (4,0), x0 1 = (6,2.5), x02 = (3,0), v 1 1 = (5,4), v 12 = (2,6), u2 1 = (3,0), v22 = (1.5,1), and z° = (4.2723,4.4851), z° = (1.8108,0.7929). Hence cz° = cz° = min{cz®, cz°}. Step 3 constructs a sim plex S° sam e as 5° in serial algorithm . Phase II: after doing the sam e steps as the serial algorithm ’s, we also obtain an optim al solution v* = z° = (1.8108,0.7929). Figure 3.2 illustrates the geom etric history of exam ple 1. In this exam ple, con strain t — 3xi + x 2 < 10 is redundant. (a) Serial Algorithm (b) P arallel A lgorithm Figure 3.2: G eom etrical History of Exam ple 1 10 12 56 E x a m p le 2: Given an exam ple w ith m = 10 and n= 6 . Cost vector: c — (-4.88166, -6.06580, -7.92004, 7.87233, -9.74772, -9.99590). Polyhedron: D — {x G 3?n : A x < b, x > 0}, where A = ( \ 1.64990 1.78425 1.87958 0.13965 1.09823 1.92671 0.60304 -0.73647 0.11778 -0.40180 0.96786 0.83954 0.29427 -0.22619 0.34955 0.81258 -0.98052 0.46708 -0.37775 -0.75322 0.56079 -0.89150 0.59161 -0.87597 -0.58633 -0.52370 0.24838 0.51515 0.17007 0.32349 -0.01924 0.70454 -0.74845 0.88219 0.90519 -0.43513 -0.82323 0.02622 0.60894 0.52842 -0.81546 0.58568 -0.50914 0.85069 -0.49088 -0.26491 -0.40963 -0.65797 0.88640 -0.28695 -0.81680 0.01408 0.69593 0.49501 -0.64534 -0.19249 0.84724 -0.37147 0.71062 -0.55198 / bT = (8.47832, 2.48636, 1.86858, -1.10545, 1.96469, 3.02506, 0.57517, -1.04776, 1.67729, 1.64407). Reverse convex constraint: g(:r) = 2.10(xi — 1.50)2 + 5.24(x2 — 3)2 + I . 2 O X 3 + x\ + 1.86(a6 - l ) 2 - 4 5 > 0. Choose x Q = x° = (0 , 1.27570, 0.35670, 0, 1.98949, 1.73704) w ith g(x0) = - 23.53236 and v0 = v° =(1.10066, 0, 2.08640, 3.28416, 1.93021, 0.08425) with g(u0) = 20.06409. 57 In Serial A lgorithm , there are 258 vertices generated by cuts and six constructed sim plices having Zk (k = 0,1,..., 5) as following: 0 0 = (0.80574,0.62003,1.96709,2.79354,1.78034,0), cz0 = -18.63633 0 1 = (0,0.24375,0.15993,0,0.48804,1.48438), cz1 = -22.34023 0 2 = (0,0.24350,0,0,0.65845,1.49730), cz2 = -22.86218 0 3 = (2.24338,0.16203,1.16530,0,0.23386,1.05920), C 0 3 = -34.03075 0 4 = (0,0.29620,0.41783,0,1.03270,1.97224), cz4 = -34.88672 0 5 = (1.19419,0.17982,1.36695,0,0.32943,1.68998), cz5 = -37.85075 A fter verification, we have a global optim al solution: 05 w ith optim al value of -37.85075. In Parallel A lgorithm , th ere are only 24 new generated vertices and two simplices before finding th e optim al solution: z° = (0, 0.42927, 1.20051, 1.92978, 2.40445, 1.32133) w ith cz° = -33.56585 and 0 1 = (1.19419, 0.17982, 1.36695, 0, 0.32943, 1.68998) yielding optim al value — 37.85075. 
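The data of Example 2 can be checked directly. The short Python fragment below is an illustrative re-encoding, not the original Fortran code; the exponents of the third and fourth terms in the printed formula are partly garbled in the scan, and the quadratic form used here is the one consistent with the reported values g(x0) = −23.53236 and g(v0) = 20.06409.

    # Reverse convex constraint of Example 2.
    def g(x):
        return (2.10 * (x[0] - 1.50) ** 2 + 5.24 * (x[1] - 3.0) ** 2
                + 1.20 * x[2] ** 2 + x[3] ** 2
                + 1.86 * (x[5] - 1.0) ** 2 - 45.0)

    x0 = (0.0, 1.27570, 0.35670, 0.0, 1.98949, 1.73704)
    v0 = (1.10066, 0.0, 2.08640, 3.28416, 1.93021, 0.08425)

    print(g(x0))   # approximately -23.532, so x0 lies outside G as required
    print(g(v0))   # approximately  20.064, so v0 lies inside G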
3.6 N u m e r ic a l R e su lts an d A n a ly sis In this Section, we report th e com putational results for solving problem (LR C P) by both serial algorithm (SA) and parallel algorithm (PA) described in the Section 3.3 running on the DELTA supercom puter. T he test problem s are random ly generated so th a t the feasible regions are nonem pty and bounded. 58 3.6.1 DELTA and Test Problems T he Touchstone DELTA supercom puter, located in the Caltech, is a message- passing m ulticom puter, consisting of an ensem ble of individual and autonom ous nodes th a t com m unicate across a tw o-dim ensional m esh interconnection network. It has 513 com putational i860 nodes, each w ith 16 M bytes of m em ory and each node has a peak speed of 60 double-precision Mflops, 80 single-precision Mflops at 40 MHz. A C oncurrent File System (CFS) is attached to the nodes w ith a to tal of 95 G bytes of form atted disk space. T he operating system is In tel’s Node Executive for the m esh (N X /M ). To share the inform ation during the parallel com putation, a node will be as signed as host node to collect th e inform ation from each node and pass the m es sages to the others by an /-level binary com m unication tree, where I = log2 n, n is the num ber of processors (see Figure 3.3) or broadcasting. Here, m essage passing by binary tree is slightly quicker th an th a t by broadcast. T he problem s used to test th e algorithm s are random ly generated by the follow- level 4 le vel 3 level 2 level 1 ' Figure 3.3: A ./-level binary tree of m essage passing. Here 0 is the host node 59 ing way. For the polyhedron D = {x E 3?" : Ax < 6 , x > 0} in a problem (LR C P), each elem ent a,j of A (i = 1,... ,m ; j — 1,... ,n), is obtained by (cf. Horst and Thoai [HT89]) aii = 2 0 . - 1 , (3.7) and bi (i = 1,..., m) is produced by = I > y + 206 (3.8) j=l where 6a,0b are random num bers generated by the function RAND (an uniform distribution on [0, 1]) in M ATLAB. Similarly, we can have C j ( j = l,...,n ), a coefficient of th e linear cost function, in [-10, 10] by Cj = 1O(20C — 1), where 0C is also created by RAND. Clearly, D is nonem pty and bounded if we shift the elem ents of the first row of A by aij = a\j + 1 such th a t all of its elem ents are positive. For the reverse convex constraints, the following functions will be used in the tested problem s. (I) g(x) = xTPx + rx — t, (II) g(u) = ut Qu + ru — t, 60 where P is a positive sem idefinite n x n m atrix, Q is a diagonal positive sem idefinite n x n m atrix , r G 3?", t G 9 f?, and a vector u consists of either x\ (at least one) or Xi (i = l , . . . , n ) . / r r r\ /■ \ I ^ ^ Tl — 1 . 3 ( / / / ) g(x) = Ja:i + - x 2 + ~ x 3 + . . . + x n 2 - t 1 6 n (IV ) g(x) = y/1 + X! + 2x2 + ■ ■ ■ + n xn - t Finally, we solve min{cx : x G D} and let x 0 G V(D) be its solution. Find a u0 G V(D), th en let th e center of th e convex function be close to the x0 such th a t g(rr0) < 0 and g(v0) > 0. 3.6.2 Computational Results B oth serial and parallel algorithm s were coded in F ortran 77 standard. All num er ical tests were executed on th e parallel com puter DELTA w ith double precision. Essentially, running the parallel algorithm for a test problem with n variables, we use n nodes as a partition by specifying the num ber of rows and colum ns. 
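The random generation scheme of Section 3.6.1, equations (3.7) and (3.8) together with cost coefficients drawn from [−10, 10], can be sketched as follows. This is an illustrative Python version; the reported tests used MATLAB's RAND for the random numbers and Fortran 77 for the algorithms, and the function name and the order of the first-row shift relative to (3.8) are assumptions.

    import numpy as np

    def generate_lrcp_data(m, n, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # a_ij = 2*theta_a - 1, with theta_a uniform on [0, 1]        (3.7)
        A = 2.0 * rng.random((m, n)) - 1.0
        # Shift the first row so that all of its entries are positive,
        # which keeps D = {x : Ax <= b, x >= 0} bounded.
        A[0, :] += 1.0
        # b_i = sum_j a_ij + 2*theta_b                                (3.8)
        b = A.sum(axis=1) + 2.0 * rng.random(m)
        # Cost coefficients c_j = 10*(2*theta_c - 1), i.e. in [-10, 10]
        c = 10.0 * (2.0 * rng.random(n) - 1.0)
        return A, b, c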
The speedup defined here is the ratio betw een the tim e taken by a given parallel com p u ter executing the serial algorithm using one processor and th e tim e taken by th e sam e parallel com puter executing the parallel algorithm using n processors. Table 3.1 contains the com putational results of the SA and PA described previ ously on the test problem s of different size. Note th a t the choice of v° or v0 may affect th e tim e of calculation. However, so far, we have no general m ethods to choose it. In this paper, the pointG V (D )f)G in both SA and PA was the sam e (i.e. Vo = v°) for each tested problem . In order to dem onstrate the efficiency of PA, we 61 run 50 test problem s random ly constituted w ith the sam e size m = 32, n= 16. Also, a q uadratic reverse convex constraint was considered for each problem . M ost of th em have a com putation tim e less th an 1 second for PA. All num erical results are shown in Table 3.2. T he average com putation tim es of PA and SA are 3.175 and 67.936 seconds respectively. Hence, the average speedup of th e PA will be up to 21.4. Tables 3.1,3.2 illu strate th a t the PA introduced here is very practical and efficient for the solutions of the tested problem s. T he notations em ployed in Table 3.1: C om putational R esults for Serial and Parallel A lgorithm s m n RC C Serial A lgorithm P arallel A lgorithm SpeedU p V m ax V tol Rec T im e V m ax V tol R ec T im e 27 9 (II) 151 1395 2 0.371 16 135 2 0.189 1.96 63 9 (I) 92 10125 4 2.496 79 4896 2 0.740 3.37 40 10 (I) 91 8410 8 1.816 60 3880 4 0.636 2.86 36 12 (III) 688 10092 4 3.257 53 756 3 0.516 6.31 30 16 (I) 5144 91872 8 28.498 691 26096 3 1.258 22.65 32 16 (I) 2328 108480 8 31.163 1196 74752 6 2.903 10.73 42 22 (II) 15282 1747372 10 937.458 15279 1048454 6 81.209 11.54 30 25 (I) 21357 538850 7 384.120 11908 166250 5 13.622 28.20 21 27 (II) 914 10314 6 5.668 728 6588 4 1.227 4.62 32 32 (I) 16670 154592 10 114.862 7770 108512 4 8.339 13.77 30 36 (I) 7121 93564 15 73.226 100 9072 3 1.368 53.53 32 42 (I) 7563 713412 11 644.979 6778 240198 4 14.652 44.02 30 45 (I) 3631 120825 7 127.377 1621 51795 3 4.467 28.52 49 49 (IV ) 9936 114905 2 157.194 48 2646 2 2.509 62.65 40 50 (I) 5376 92500 7 149.575 4818 26750 3 7.288 20.52 27 55 (III) 9715 25575 7 54.679 87 550 2 1.626 33.63 20 64 (I) 7095 67264 15 145.011 1304 13376 4 5.739 25.27 30 64 (I) 6574 125568 12 253.827 195 16320 2 3.144 80.73 30 70 (III) 4344 161560 10 394.196 2247 20510 4 9.997 39.43 12 72 (I) 1671 74304 10 155.598 2525 85104 6 13.951 11.15 25 72 (I) 2318 119232 19 266.870 739 18360 5 8.560 31.18 15 80 (III) 104 13520 3 34.609 81 6480 2 2.128 16.26 20 80 (III) 4122 1959680 16 4964.825 2404 594480 6 42.204 117.64 30 80 (IV ) 2778 111120 9 281.367 39 240 2 2.631 106.94 Table 3.1,3.2 are as follows: 62 m: num ber of constraints in A x < b n: num ber of variables RCC: type of Reverse Convex C onstraint described in Section 3.6.1. Rec: num ber of P olyhedra S° constructed, k — 0,1,2,... Vm ax: m axim al num ber of vertices stored in a processor V total: to tal num ber of vertices generated by cuts for all processors (not including V(S°), k =0,1,2...) Tim e: E xecuting tim e in seconds In our com putational experim ent, the perform ance of SA and PA depends on the type of problem (L R C P) given cost function, linear constraints, and reverse convex constraint. 
A different cost coefficient will produce a different sequence of 5°, m ore linear constraints m ay cause m ore cuts, and the reverse convex constraint is related to the locations of Zk (or z k). In general the set Zk in SA is not necessary to include the set of zk in PA and \zk\ is frequently greater th an |.zfc |, where \zk|( I'S'k|) is the num ber of Zk(zk) (see exam ples 1, 2 or Rec in Table 3.1). Therefore, com pared w ith SA, PA will frequently decrease the num ber of cuts during the com putation resulting in a lower num ber of newly generated vertices. In addition, the newly generated vertices can be com puted in parallel for PA. T hus, PA is always m uch m ore efficient than SA; even in some problem s, speedups greater th an the num ber of processors can be expected (see Table 3.1,3.2). However, for m ost problem s w ith sm all m and n, the tim e saved in parallel com putation m ay be largely offset by tim e needed for com m unication. This fact results in a small speedup for PA, even less th an 1. 63 Table 3.2: R esults for 50 Tested Problem s w ith m = 32, n=16 Serial Algorithm Parallel A gorithm No Vmax Vtol Rec Tim e Vmax Vtol Rec Tim e Speedllp 1 215 11104 10 3.039 69 4704 3 0.503 6.04 2 6331 113392 5 35.369 123 2528 3 0.522 44.85 3 10453 150624 5 58.590 204 6144 3 0.641 91.40 4 358 17664 9 4.653 86 5200 3 0.530 8.78 5 6343 302288 13 99.030 2682 82944 4 3.553 27.87 6 1313 61552 5 18.878 217 7984 2 0.558 33.83 7 6717 114608 9 40.990 287 9216 3 0.745 55.02 8 4751 119808 5 39.629 5383 117328 5 6.036 6.57 9 28440 1707776 13 644.556 11025 752672 8 41.560 15.51 10 5859 161600 8 53.568 490 46320 4 2.041 26.25 11 492 9616 6 3.293 61 2624 4 0.504 6.53 12 3616 50736 4 17.022 478 10272 3 0.730 23.32 13 6177 101456 5 33.843 390 21616 3 1.036 32.67 14 4977 101360 6 29.709 5101 75376 3 4.053 7.33 15 2558 56768 6 15.239 2098 30432 3 1.633 9.33 16 5673 49488 9 17.056 1022 8304 3 0.906 18.83 17 5223 44368 4 15.155 1027 10368 4 1.064 14.24 18 4168 372752 10 106.328 4089 228880 6 9.181 11.58 19 277 7728 7 2.238 65 1072 3 0.417 5.37 20 4024 296128 12 78.932 415 50960 4 2.031 38.86 21 878 133648 9 33.060 349 39280 3 1.604 20.61 22 1981 23584 10 7.612 79 864 4 0.628 12.12 23 873 12048 4 3.880 871 7744 4 0.756 5.13 24 4414 75680 8 23.248 72 2944 5 0.805 28.88 25 1112 8848 3 2.931 12 64 2 0.286 10.25 26 12432 92432 5 35.761 353 4336 3 0.580 61.66 27 2531 23248 4 7.526 111 848 3 0.372 20.23 28 10107 110032 6 40.658 3308 26688 3 1.733 23.46 29 12457 399584 9 137.908 10705 128304 4 8.123 16.98 30 3064 12176 3 4.851 3064 12000 2 1.239 3.92 31 926 6032 3 1.796 3 32 2 0.269 6.68 32 528 82944 9 20.481 343 41360 5 1.826 11.22 33 1950 49408 8 16.451 1621 22736 5 1.676 9.82 34 1322 27552 5 8.932 1099 27648 3 1.521 5.87 35 24395 1285776 9 585.474 7535 445616 6 24.504 23.89 36 4402 31472 6 11.946 237 3376 4 0.565 21.14 37 1405 7136 4 2.262 1262 8128 2 0.661 3.42 38 934 5792 3 1.903 2 0 1 0.175 10.87 39 4362 36512 3 12.475 439 2224 3 0.488 25.56 40 2628 14544 4 5.448 2493 13360 3 1.206 4.52 41 34251 1000112 8 444.697 2397 63968 4 2.963 150.08 42 304 7360 2 2.263 15 304 1 0.281 8.05 43 8719 778944 11 231.053 5618 302944 6 11.793 19.59 44 5447 44656 8 16.129 15 928 3 0.408 39.53 45 306 88704 11 24.315 305 29488 4 1.248 19.48 46 28494 642368 9 321.501 5466 174896 5 9.293 34.60 47 1397 15888 4 5.356 75 2720 4 0.545 9.83 48 13790 102384 6 44.337 5267 33616 4 2.815 15.75 49 2039 36160 6 13.738 2215 16000 4 1.486 9.24 50 3696 40464 4 11.687 131 5568 3 0.659 17.73 64 1 6 0 140 120 100 a. 
Figure 3.4: Speedup for 50 problems with m = 32, n = 16 on the DELTA

The parallel algorithm introduced in Section 3.3.2 is a synchronous parallel procedure, since a step is not executed until the computation of the previous step has been completed; for example, all processors must finish computing their points z_k^i (i = 1,...,n) in Phase I before the minimum is taken, and must jointly complete the generation of the new vertices in (b) of Phase II. However, not all n processors are active all the time. For instance, in the cutting plane procedure, if |V+(S_k^j)| = M < n, all new vertices can be generated simultaneously using only M processors. Thus, in some problems, using n processors may not be realistic because of the low rate of utilization (e.g., in the tested problem with m = 21, n = 27 shown in Table 3.1, |V+(S_k^j)| is small for each k, so that most processors are idle while the new vertices are produced). In practice, if only a restricted number p (< n) of processors is available, then some processors must perform the edge searching more than once, and need several passes to generate new vertices and to evaluate the function values at the vertices, depending on the number of processors. Consequently, the percentage of utilization of each processor is improved and the communication time can also be reduced. Hence, high efficiency may be achieved if a suitable number of processors is chosen.

In the algorithms proposed here, the computation that creates new vertices is the most time-consuming. Moreover, Tables 3.1 and 3.2 show that the number of new vertices produced by cuts in PA is usually much lower than the number created in SA. Therefore, SA is expected to become much more efficient in a case in which, although it needs additional time to execute the edge searching procedure shown below, it saves much more time by generating a smaller number of vertices (e.g., the tested problem with m = 20, n = 80 listed in Table 3.1).

Initialization: let cz_0 = ∞ and let v^i (i = 1,...,n) be the n adjacent vertices of x_0 in D.
for i = 1, n do
    if g(v^i) < 0 then
        if cv^i > cz_0 then stop the i-th line search, else continue finding z_i. end if
    else
        find z_i
    end if
    if cz_i < cz_0 then set cz_0 = cz_i. end if
end for

Solving z_k in Step 2 of Phase I: let v^i (i = 1,...,n) be the n adjacent vertices of x_k in D(z_{k-1}).
for i = 1, n do
    if g(v^i) < 0 then
        if cv^i > cz_k then stop the i-th line search, else continue finding z_i. end if
    else
        find z_i
    end if
    if cz_i < cz_k then set cz_k = cz_i. end if
end for

Finally, since the memory required to store the list of vertices increases rapidly with n, the size of the problems that can be handled is restricted. Of course, the CFS can be used for larger problems, but it requires an inordinately long time to complete the read/write operations.

3.7 Conclusion

In this chapter, we have proposed a new parallel algorithm for solving problem (LRCP) that can be efficiently implemented on the massively parallel computer DELTA. We have tested two sets of randomly generated problems: in the first set we emphasized problems of different sizes, and in the other set we concentrated on problems of the same size (m = 32, n = 16). In the algorithm proposed here, the calculation that produces the new vertices is the most expensive part. However, this computation is distributed over all processors and saves a considerable amount of time, although it requires communication.
By comparing it with the serial algorithm, we have obtained computational results (Tables 3.1, 3.2) showing that the parallel algorithm is more efficient, even with a super-linear speedup for some of the tested problems.

Chapter 4

D.C. Minimization Problems

4.1 Introduction

Consider a d.c. (difference of two convex functions) global optimization problem

(DC)  minimize (f(x) − g(x))  subject to x ∈ D,

where D = {x ∈ ℝⁿ : h_i(x) ≤ 0 (i = 1,...,m)}, and h_i, f, and g are finite convex functions on ℝⁿ. Assume that D is compact and nonempty; then problem (DC) has a global solution.

In the literature, d.c. problems play an important role in nonconvex programming, from both a practical and a theoretical viewpoint (cf. Horst and Tuy [HT93]). Indeed, d.c. problems are encountered frequently in engineering, and several methods have been proposed for this class of problems (cf. [HT93][RHdV91]). However, only a few approaches have been examined in numerical tests. Although some of them may be efficient, in particular for problems with special structure such as a separable or quadratic objective function (e.g. Pardalos et al. [PPR87], Horst and Tuy [HT93] and references therein), very little has been done in parallel. Essentially, the proposed algorithms mainly make use of three different transformations of problem (DC): (1) an equivalent concave minimization problem, (2) an equivalent convex minimization problem subject to an additional reverse convex constraint, and (3) the canonical d.c. problem (cf. [HT93][FH75][PR86][HT89][Tuy86c][DB89][RHdV91]).

The purpose of this chapter is to propose three algorithms suited to parallel implementation for solving problem (DC) via an equivalent concave minimization problem. The first approach (Algorithm 1), which is similar to Hoffman's algorithm [Hof81], is an outer approximation method using cutting planes. Although finding all of the new vertices is computationally expensive, numerical experiments with the serial algorithm indicate that it is much more efficient than the alternatives ([MO91], [RHdV91], [HT93, p.525]) on the test problems used here. In addition, to investigate the parallel behavior of Algorithm 1 we run a parallel simulation incorporating the method described by Liu and Papavassilopoulos [LP94]. The second algorithm (Algorithm 2), a simplicial procedure for solving the equivalent concave minimization problem, is much less efficient than the other two because of its inefficiency in detecting infeasible partition sets, the slow convergence of the bounds, and the fast growth of the number of linear constraints; its efficiency, however, would be improved in a parallel implementation. The last method (Algorithm 3), originally proposed by Horst et al. [RHdV91], has advantages similar to those of Algorithm 2: only a sequence of linear subprograms has to be solved, and both are appropriate for parallel computation with a suitable simplex partition procedure. Basically, during the parallel computation of Algorithm 1 only the updated upper bound has to be communicated among processors in each iteration, so communication should not be a problem.
In b oth A lgorithm s 2 and 3, in every iteration during th e parallel com putation all the processors have to com m unicate w ith each other and share th e following m essage they have obtained: th e new upper bound, the new vertices created in th e partitio n process, and th e linear constraints added to define a new polyhedral set enclosing th e feasible set. Since the am ount of d a ta to be passed is sm all, the com m unication overhead should not cause serious delay, com pared to the tim e for solving a sequence of linear program s which are th e m ain com putation load of these two algorithm s. Therefore, we use a sequential com puter to sim ulate the parallel behavior of these three algorithm s w ithout considering the com m unication for th e tested problem s in Section 4.5. T he rest of this chapter is organized as follows. T he next section contains the basic idea of th e m ethods. In Section 4.3 we discuss the fundam ental im plem en tatio n of these algorithm s. Section 4.4 describes the details of these algorithm s and their convergences are proved. Some num erical test problem are given in Section 4.5. Conclusion is in the final Section. 70 4 .2 B a sic id ea o f th e M e th o d s By introducing an additional variable t, problem (DC) can be rew ritten as an equivalent global concave m inim ization problem which has the form ( m inim ize (t — g(x)) (CP) subject to: f(x ) < t and x £ D Let T> = {(x,/) £ 3?" x 5? : x £ D ,f( x ) < t} be the feasible set of problem (CP). Given an n-sim plex S w ith vertex set F (5 ') containing the feasible set D (a way to construct S will be described in the next section), a prism (generated by S) V = V (S) C 5?n x 9? is defined by V = {(x,<) G SR " x S R : x £ S, tg < t < tr} (4.1) where ts = m in { /(x ) : x £ D} (4.2) tT — m ax { /(u ) : v £ 14(5)} (4.3) Let xb be an optim al solution of (4.2). Notice th a t (4.2) is a convex m inim ization problem which can be done by any standard nonlinear program m ing m ethod. The prism V has n + 1 vertical lines (parallel to the t —axis) which pass through the n -f 1 vertices of S respectively. 71 4.2.1 An Outer Approximation M ethod Using Cutting Plane As we know, problem (DC) can be solved by obtaining a global solution of problem (C P ). A fter obtaining a prism V described above, let a polyhedron P0 = V . Obviously, the polyhedron Pq encloses the feasible region T > and all of its extrem e points are known. Therefore, a lower bound for problem (C P) can be determ ined by sim ply calculating and m inim izing over the functional values corresponding to all vertices of the polyhedron. If th e m inim izing point is feasible to problem (C P), then th a t point will also be an upper bound, thus th e problem (C P) is solved. O therw ise, the point violates at least one constraint of problem (C P). In this case, we should construct a hyperplane of support to some violated constraint which separates this m inim izing point from the enclosing polyhedron P0. In other words, this constructed hyperplane of support cuts through the previous polyhedron P0 and creates a new polyhedron Pi which will m ore tightly enclose the feasible set T > . All new vertices of this new polyhedron generated from cut can be easily determ ined by the m ethod of [RHdV88]. D enote the vertex set of Pi by V (P i) and go to th e next iteration. For its parallel com putation procedure, we will follow the approach in [LP94]. 
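One iteration of this outer approximation step can be sketched as follows. This is an illustrative Python fragment, not the original implementation: it assumes the vertex set of the current polyhedron is available, it uses a plain linearization of the most violated constraint at the infeasible minimizer as the separating cut (Algorithm 1 in Section 4.3.3 instead places the cut at a boundary point found by a line search toward an interior point), and the names, signatures, and tolerance are placeholders. The new vertices created by the cut would then be computed by the method of [RHdV88].

    import numpy as np

    def outer_approximation_step(vertices, psi, constraints, grad, eps=1e-8):
        # vertices: extreme points of the current enclosing polyhedron P_k
        # psi: objective (x, t) -> t - g(x), evaluated at a vertex v = (x, t)
        # constraints: callables h with h(v) <= 0 on the feasible set
        # grad(h, v): a (sub)gradient of h at v, used to build the cut
        v_min = min(vertices, key=psi)              # vertex giving the lower bound
        violations = [h(v_min) for h in constraints]
        worst = int(np.argmax(violations))
        if violations[worst] <= eps:                # minimizer is feasible:
            return "optimal", v_min, None           # the problem (CP) is solved
        # Linearizing the most violated constraint at v_min gives a valid cut
        # a.v <= beta that every feasible point satisfies but v_min violates.
        a = np.asarray(grad(constraints[worst], v_min), dtype=float)
        beta = float(np.dot(a, np.asarray(v_min, dtype=float))) - violations[worst]
        return "cut", v_min, (a, beta)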
72 4.2.2 A Parallel Simplicial Algorithm Here, we introduce a sim plicial algorithm to solve problem (C P) in parallel. In order to use sim plicial algorithm , we m ake a prism atic triangulation of V (such p artitio n of V will be discussed in the next section), i.e. we have V = ( j Vi (4.4) i ' = i where r is an integer m ultiple of (n + 1), Vi is an n-sim plex (i = 1,..., r) and each pair of sim plices Vi , Vj (i ^ j) intersects at m ost in com m on boundary points (i.e. Vi[)Vj = dVif\dVj, where dVi denotes the boundary relative to Vi). Let V(Vj) = {vj,..., v"+1} be th e vertex set of Vj (j = 1,..., r). Obviously, we can find the solution of problem (CP) in term s of solving the following r subproblem s m inim ize (t — g(x)) (SCP) subject to: (x,t) £ Vjf]T>, (j = I , ... ,r) Let ij)(x,t) = t — g(x), then the following linear program s can be used to under estim ate all above subproblem s, i.e. m inim ize Z ^ (v )) ' subject to: (x, t) = £ " = / A,-vj € Vj, (j = 1,..., r) (SA) £ " = / A ,- = 1, A ,- > 0 (* = l , . . . , n + l) 73 T he m ain idea of the parallel procedure of the sim plicial algorithm is to m ake use of a suitable sim plex subdivision technique, then to solve the N linear program m ing subproblem s as (5A ) in each iteration for a case where N processors are used. At iteration & , if choose th e first Nk (num ber of used processors) sim plices from the rem aining sim plices (stored in increasing order of lower bounds) to perform th e bounding com putation and further subdivision, then let (x k,tk ) be the best feasible point w ith an upper bound (Uk = tk — g(xk)) obtained so far and let Ck be a lower bound of the objective function in {CP). Obviously, if Uk — Ck — 0 then (x h,tk ) is an optim al solution of {DC). Otherw ise, we delete th e simplices according to deletion laws in section 4.3.4, then go to the next iteration. As the sim plicial algorithm , this algorithm also involves some fundam ental problem s such as the p artition of a sim plex, the construction of the cutting planes, the deletion rule, and the bound com putation etc. All of them will discuss in section 4.3. 4.2.3 A Parallel M ethod via Linear Programs An approach solving a D.C. program m ing problem by a sequence of linear pro gram s was presented by Horst et. al [RHdV91]. This algorithm in [RHdV91] combines a new prism atic branch and bound technique w ith polyhedral outer ap proxim ation in such a way th a t only linear program m ing problem s have to be solved. Essentially, this m ethod can be parallelized by adopting the sam e simplex partitio n process and parallel procedure as stated in Section 4.2.2 because the prism atic branch and bound is created by both subdividing a prism form ed by a 74 sim plex and solving the linear program m ing problem s. Regardless of com m uni cation, this parallel algorithm will be m uch m ore efficient th an th a t in Horst et. al [RHdV91]. T he com parison of these two algorithm s will be given in Section 4.5. 4.3 Im p le m e n ta tio n T here are four fundam ental operations in th e above algorithm s: i) C onstruction of the initial sim plex S ii) P artitio n s of b oth prism and sim plex iii) D eterm ination of bounds and cutting planes iv) D eletion of simplices 4.3.1 Construction of S A lthough there are m any ways (see [HT93, LP94]) to determ ine a sim plex S D D, we consider th e construction of S as Section 2.1 for A lgorithm s 1, 2, 3. n S — {i 6 R " : a , < Xi (i = 1 ,2 ,3 ,. . . 
, n), ^ £,- < a} (4.5) i = i where a,- = m in {x,- : x £ D}, i = 1,2,..., n (4-6) n a = m ax { ]P £ ,■ : x £ D} (4-7) i = i 75 Clearly, S w ith a vertex set V (5 ) = {v1, v2, ..., un+1} is a sim plex tightly enclosing D, where v1 = (q i, Pi, a i+i , ..., an), (i = 1, ...,n) (4.8) v n+1 = ( a 1, a 2,...,Q !n ) (4.9) w ith fa = a - otj. 4.3.2 Partitions of Both Prism and Simplex As th e algorithm in H orst e t al. [RHdV91], an exhaustive subdivision process of sim plex was applied to ensure convergence of A lgorithm 2, 3. D e fin itio n 5 ([H o r8 l][H T 9 3 ][R H d V 9 1 ]) A subdivision is called exhaustive if lim^^oo S(Sk) — > 0 ($(Sk) is the length o f a longest edge o f Sk) fo r all decreasing subsequence {-S*,} o f partition elements generated by the subdivision. H ere we also introduce an exhaustive partition process, Q -triangulation well known in operations research. Let an n-dim ensional unit sim plex be defined by n -f-1 S n = {x E 3?”+1 : X > ,' = i} (4.10) ! = 1 For i 6 / n + 1 = { l , . . . , n + 1}, e(i) will denote th e vector in $ R n+1 w ith i-th com ponent equal to one and all other com ponents equal to zero. Let S be a sim plex containing D in SR". T hus we can triangulate S by using th e fact th a t the S is hom eom orphic to the unit sim plex S n and th a t every point in S n can be represented by its barycentric coordinates. In the triangulation of 76 S n, the Q -triangulation is probably the best known triangulation for algorithm ic purposes (cf. [Dou88] and reference therein). M oreover, Q -triangulation seems to be a proper p artition process for parallel com putation here since it is easy to split a sim plex into M 2 (M ~ l is th e grid size) sim ilar subsim plices. In other words, there are M 2 linear program m ing subproblem s of sam e size subdivided from a linear program m ing problem . Therefore if there are a large num ber of processors at hand, one can choose a suitable grid size for Q -triangulation. D e fin itio n 6 ([D o u 8 8 ]) The Q-triangulation o f S n with grid size M 1 is the collection o f all n-simplices <r(u1,7r) with vertices vl, ... ,u n+1 in S n such that 1) each component o f v 1 is a nonnegative multiple o f M ~ l 2) 7r = (7TJ,..., 7r„) is a perm utation o f the elements in /„ = {1,..., n} 3) u,+1 = v' + M ~ 1q0(ni), i = 1,..., n. where q°(j) = e(j + 1) - e(j), j = 1,..., n. As th e prism atic triangulation in [Jon95], the prism V = S x T can be triangulated via triangulation of S and T (tg < T < tr)- Let an be an n-sim plex in the collection Cn(S) of all n-sim plices of th e decom position of S and T be a 1-simplex of th e decom position of the t-axis, i.e. an — v ° ... vn E Cn(S) and (te , tr) = T G Ci(t — axis) T he prism atic triangulation of the C artesian is obtained by • . * T = • • • « (4.11) (fc = o 77 where vtB is the vertex vk of S elevated at height ts- A Q -triangulation of S and a prism atic triangulation of V are illustrated in Figure 4.1 for n= 2. For a (a) Q-triangulation of S with M"*=l/6 fs® (b) Prismatic Triangulation of SxT Figure 4.1: Q -triangulation of S and P rism atic triangulation of V sequential algorithm , an exhaustive p artition process, bisection of sim plex seems to be useful. O ther partitio n procedures of simplices can be found in (H orst and Tuy [HT93]). L e m m a 6 Let {.St} be any decreasing sequence o f n-simplices generated by Q- triangulation process, then limj;_+ 00 6(Sk) — > 0, i.e. Q -triangulation procedure is exhaustive. Proof: Obviously jjS(Sk) = S(Sk+ 1), i.e. S(Sk+i) = (jf)k+ls(So) (M > 2). 
Therefore, lim ^ o o S(Sk) — > ■ 0. D 4.3.3 Computation of Bounds and Construction of Cutting Planes Let S be an n-sim plex constructed in Section 4.3.1. D enote an initial upper bound for problem (S C P ) or (CP) and the prism V by Uq and a polyhedron P0 respectively, where U0 = min{f(x*) — g(a:*), i = 1,... ,n + 1}. Set Pk be a polyhedron enclosing T> at iteration k and V(Pk) be its vertex set. In A lgorithm 1, a point u0 in the strictly interior of T> m ust be found first (such a point uq is easy to find from x*, i = 1,..., n + 1 and ts, tj)- At iteration k , choose ip(vk) = min {i/>(v), v E V(Pk)} as a lower bound, then solve the line searching problem Zk = Vk + 7 ( ^ 0 — Vk) such th a t Zk € d V (boundary of T>). If 7 = 0 th en Vk (zT> and Vk is a solution of the problem (C P ). O therw ise, update the upper bound Uk+i = m in {Uk, V *(zk)} and find a constraint 7i(x,t) E {hi(x)(i — 1, ..., m ), f(x) — t } which is binding at Zk and the linear constraint corresponding to 'H(x,t) and Zk. Set Pk+i = P* fK O M ) 3?" x$R: Cyt(rr,f)<0} (4.12) where Ck(x,t) = V n T(zk)[(x,t)-zk] < 0 (4.13) C om pute V^Pfc+i) based on the known vertex set V(Pk ), then go to the next iteration. For the determ ination of both bound and cutting plane in the parallel pro cedure, we applied the m ethod in Liu and Papavassilopoulos [LP94]. To use the 79 techniques of outer polyhedral approxim ation in sim plicial algorithm , we let Pq be a polyhedral convex set containing T > which is defined by Po = { i E 9?" x 3? : A x < b, x = (a:,/)}, (4-14) w here A is a real m x (n + 1) m atrix and b E 9£m. C onstruct a sequence of polyhedral convex sets P0, P i, P2, ■ ■ ■ such th a t P0 D Pi D ■ ■ ■ D T > - T he transition from Pk to Pk+i (A : = 0,1,...) is done by adding some appropriate cutting planes Cl(x,t) < 0 (j = 1,..., q) (q < Nk (num ber of processors at iteration k )) to the constraint set which defines Pk, i.e., Pk+i = Pkf]{{x,t) E x : CJ k(x,t) < 0 , j = l ,...,? } (4.15) A t th e iteration k, let upper bound Uk and lower bound Ck for Pk. Denote the num ber of th e rem aining collection of simplices 1Zk by \lZk\- Assum e th a t there is a sim plex Sj. w ith lower bound C(Sl) (/ = 1,..., [R-k\) in Uk and its vertex set V(Sl) — {v],i = 1,... ,n + 1}- Since every point x E S l k is uniquely represented as n+l n + 1 x = J2 W = VJA, £ A < = 1, A i > 0 (* = 1,... ,n + 1), (4.16) »=i »=i where V ) denotes the m atrix w ith colum ns u /,...,u " +1 and A is a vector with com ponents Ai,..., An+J. Every subproblem m inim ize (t — g(x)) (4-17) subject to: (x,t) E S l k f^\T>, (/ = ! ,..., \R k\) (4.18) 8 0 can be underestim ated by th e following linear program w ith the constraint set s l o p * 71+ 1 m inim ize X^A.-V’K ) (4.19) «=i n+l subject to: AViX < 6 , ^ A ,- = 1, A, > 0 (i — 1,... ,n + 1) (4.20) i=l Since An+i = 1 — X)” =1 A; > 0, (4.19) and (4.20) becom e n m inim ize — V ’(u"+1)] + ^ ( u"+1) (4.21) i=i n subject to: AViX < b , 5 ^ A ,- < 1, A; > 0 (i = l ,...,n ) (4.22) i=i where V J denotes the m atrix w ith colum ns (vj — u"+1) (i = 1,..., n), A is a vector w ith com ponents Aj,..., An, and b — b — A v n+l. Let A * be an optim al solution of the linear program (4.21), (4.22), thus we have x* = by (4-16). If (x ‘ k, t l k) € X > , an upper bound of S l k, U(Sl k) = tl k - g(x l k) is obtained. O therw ise, we have a new lower bound, m a x { £ (S l), £ " = / X*tp(vj)}. 
Let H(x, t ) := m ax{/i,(i = 1,..., m ), f(x) — t} and add a constraint C ‘ k(x ,t)= V M T(x‘ k,tl k)[(x,t)- (x ‘ k, t ‘ k)] + % (x l k, t ‘ k) < 0 (4.23) R e m a r k 1: Let uo be a strictly interior point in T > , and find a point zk between [u0, (x l k, tj.)] such th a t Zk £ T > . If we replace th e constraint (4.23) by C i ( M ) = VM T( 4 ) [ ( M ) - 4 ] < 0 (4.24) 81 T hen this cut is b e tte r because it is closer to T > . N ote th a t a variety of other cuts can also be em ployed (cf. H orst and Tuy [HT93]). Let U w ith vertices u*,...,u " +1 be a subsim plex generated by a exhaustive subdivision process from an initial n-sim plex S D D. Hence the lower bound in A lgorithm 3 is prim arily calculated by the following linear program in (A,t) (cf. H orst et al. [RHdV91]): n+l m ax ( ^ L ' A , — t) (4.25) t = i s.t. A V U A + at < b (4.26) n+l £ a, - = 1, A, > 0 (i = l , . . . , n + l) (4.27) i = i w here Vu denotes the m atrix w ith colum ns u*,..., u„+1, a £ 5 R " 1 and A , b are given in (4.14). If c* is th e optim al objective function value in (4.25), (4.26), (4.27), th en th e lower bound of U is provided by £(U) = + oo, if (4.25),(4.26),(4.27) have no feasible solution £{U), if c* < 0 (4-28) C(U) - c*, if c* > 0 W ith regard to the upper bound of U, it can be obtained from both evaluat ing new vertex in th e corresponding partition of sim plex and solving the linear program m ing problem (4.25), (4.26), (4.27). Besides the com putation of bounds, the construction of th e cutting planes is th e sam e as A lgorithm 2. For parallel process the determ inations of both bound and cu ttin g plane will also follow th e way in sim plicial algorithm . 8 2 4.3.4 D eletion of Simplices At iteration k of th e sim plicial algorithm we try to delete the sim plices th a t do not contain any feasible solution b e tte r th an (x k,tk ). Here, we have the following deletion laws: (a) D elete any sim plex S k (/ = 1,..., \R-k\) if all of its vertices locate outside the current polyhedral set Pk or (4.19), (4.20) has no feasible solution. In this case ^jj. H T > = ( f > (b) D elete sim plex S k if C(Sk) > Uk- This is th e standard branch and bound deletion rule. (c) D elete sim plex Sj. if its optim al objective function value obtained from (4.19), (4.20) is greater th an upper bound Uk- For this case, we know th a t the current optim al value of objective function can not be im proved over S l k. 4 .4 T h e A lg o rith m s In this Section, we describe these algorithm s in detail for solving problem {DC) in term s of solving problem (CP) by cutting plane m ethod or a sequence of linear program s. T he basic im plem entations have been introduced in the previous sec tions. A ssum e th a t there are N processors throughout the following algorithm s. A lg o r ith m 1 C onstruct a prism V D T> associated w ith a sim plex S D D and let a polyhedron P0 = V as described in Section 4.3.3. Obviously, the vertex set V{Po) of P0 is 83 known and consists of the 2(n + 1) points. Let u0 be a strictly interior point and e > 0. Set upper bound: U0 = m in{0, {f{x*) — g(x*)), i = 1,..., n + 1 }, (4.29) lower bound: £ 0 — min{V’(u) : u 6 V (Po)}, (^{x, t) — t — g(x))(4.30) I te r a t io n (k = 0, 1 ,2 ,. . .) Let (x k,tk) = Vk satisfy £k — i ’ivk), and find a 7 such th a t Zk = Vk + 7 (u0 — Vk) € dT>. If 7 = 0, th en Vk is an optim al solution of (C P). O therw ise, set £4+i = min{Z4, ^{zk)}, add a constraint according to (4.13) and form (4.12). 
C om pute V{Pk+\) (only vertices having an objective function value lower than the current upper bound need to be stored, let V k+X denote these vertices), then let Ck+i — min{^>(u) : v £ V^+1)} and (xfc+ 1, 4 +i) be the point satisfying £k+i = ij)(xk+1,tk+ 1). If Uk+\ — £ k + 1 < e, then stop and Zk is an optim al solution of problem (CP). O therw ise, go to the next iteration. R e m a r k 2: ( 1 ) If let t r be large enough such th a t ip(x,tT) > Uo, where V;x £ V(S), then at m ost n + 1 vertices (x,t]3) need to be stored for th e next iteration. Therefore, by adopting th e m ethod described in Liu and Papavassilopoulos [LP94] we have a parallel algorithm w ith n + 1 processors. O ne num erical test will be given in Section 4.5. (2) At iteration k , we also can delete the vertices yielding an objective function 84 value great th an Uk — t. in this case, if V k+l — ( f> , then algorithm term inates: Zk is an optim al solution. L e m m a 7 ([H T 9 3 ]) Let Lki k = 1,2,... be a sequence o f arbitrary set in 5ft". I f {xk} is a bounded sequence o f points satisfying £ r u * h then d(xk, Lk) — > 0 (k — > oo) (where d is the distance function in T h e o r e m 9 (cf. H o ffm a n [H of81]) In Algorithm 1, every accumulation point o f the sequence {(a;*1 ,/^)} is a global optim al solution o f problem (CP). Proof: T he algorithm 1 generates a sequence of points {(a^,^-)} and a corre sponding sequence of lower bounds {if(xk, tk)} which is m onotonically nondecreas ing (since S k D S k+1) and bounded from above by the value of where (x*,t*) is any feasible solution in T>. Let the half-space Lk = {(#,0 G 5R " x 5 R : Ck{x,t ) < 0 } , and th e hyperplane Hk = { ( z , f ) G 3?" X 3? : Ck(x,t) — 0 } , then we have Ck{xk,tk) > 0, while Ch{xk,tk) < 0 (h = 0,1,..., k — 1). Now th e sequence {(a;*, f^)} is bounded (since enclosing polyhedron Pk is bounded), by Lem m a 7, we have d((xk,tk), Lk) —> 0 (k — > oo). Hence d((xk,tk), Hk) — > 0, where Hk = { ( # ,£ ) G 5?" x 3ft : V H T(zk)[(x, t) — Zk] = 0 } , i.e. _ \VH T{zk)[{xk,tk) ~ zk]| . . <*((*><*). k) (i + \ \ v n ( Zk)\\2y /2 ( ^ 85 Let (x ,f) be a accum ulation point for the set and tk,)} a subse quence of {(xfc ,tfc)} converge to (x,f). By Lem m a 2,3 in [Hof81], we have two sequence {Afc}, {zk} and th eir accum ulation points A, z such th a t Zk = (xk,tk) + Afc[u0 — (xk,tk )] and z = (x,t) + A[uo — (x, ^)]. Let subsequences {A*,}, {zh,} of set {A*}, { z ^ converge to A, z respectively, then we also have Zh = (x ki,t ki) + AjtJi/o - (s*S < *,•)] From (4.31) we have \HTizki)[{xki,tki) ~ Zk,]\ — > 0 ==> \UT{z)[{x,T) - z}\ = 0 = ► \ n Tm x [ u o - ( x ,£)]}| = o Since U q is a strictly interior point, only A = 0 can satisfy, i.e. (x,t) = z £ T > . d A lg o r ith m 2 I n itia liz a tio n (Given an e > 0) C onstruct a prism V associated w ith a sim plex S D D, and let a polyhedral convex set Pq — V D T > as described in Section 4.3.3. A ccording to the process stated in Section 4.3.2, p artitio n V into r sim plices S l 0, (I = 1,... ,r) and denote the vertex set of S l 0 by V(S q). Let Uo = min{0,(f(x*) - g(z*))> * = 1,. • •, n + 1} (4.32) Uo = min{U'0, min{ip(v) : v £ ( ( J V'(S'o)) f ) ® }} (4-33) (=i 8 6 (cf. section 4.3.3). Set (x°,to) G T> such th a t t0 — g(a?0) = Uq and let £ 0 = — oo. D elete all sim plices Sk (I = 1,..., r) which satisfy the deletion law (a). I te r a t io n k (k — 0,1,2,...) 
At the begining of iteration k we have a polyhedron Pk D T>, the best feasible point (x k,tk ) obtained so far, and an associated upper bound Uk = tk — g ^ * )- Furtherm ore, we have a set 7Zk of sim plices generated from the initial p artition of V by deletion and subdivision according to the rules described in Section 4.3.2, 4.3.4, and th eir lower bounds £ ( S l k ), 1 = 1,..., [R,k\- S te p 1 : Choose the first Nk simplices in IZk (if \I^k\ > N , then Nk — N ). Solve these linear program s (4.21), (4.22) corresponding to both S k (j G Ik = {1,..., A^}) and Pk in parallel. Hence, let (x k,t k) (j =G h ) be their optim al solutions. If any sim plex Si (j G Ik) which satisfies deletion laws (a), (b), or (c), delete and set Ik — Ik/{j}- S te p 2: For Vj G h Do If (x{,t{) G V (j G h), then set U(SJ k) = m in {Uk,t{ ~ g (^i)} and Ik = Ik/{j}- O therw ise, let U(Sk) = Uk and add a constraint Ck(x,t) < 0 ac cording to (4.23). End Do. Set Pk+i = {{x,t) G Pk ■ Ck(x, t) < 0, j G h } , (4.34) S te p 3: Subdivide the sim plices Sk, j G Ik by the m ethod indicated in Sec tion 4.3.2 into a finite num ber of subsim plices and delete the subsim plices satisfying deletion law (a). Let Sk (ji G Jk, where Jk is the index set of these 87 rem aining subsim plices) be the rem aining subsim plices. Let Vk be the new feasible vertices of the corresponding partitio n of the sim plices Sk, (j £ Ik). Set K{Sk) = m in {Uk, min{^>(u) : v £ V^}} if V J k^< f> (4.35) S te p 4: For each ji £ J k solve the linear program (4.21), (4.22) corresponding to b oth Pk+i and Sk . Delete any subsim plex S k (ji £ Jk) by deletion laws (a), (b), or (c). Let lZk denote the collection of rem aining subsim plices Sk (ji £ Jk)- For each Sk £ 7l'k, Set £ ( 5 i ‘) = m a x { £ ( 5 2 ) ,£ ( 5 j) } (4.36) S te p 5 : Let T k denote the set of new feasible points obtained from solving the linear program s in step 4, and set Uk+i = m in {m in{U(SJ k), j = 1,..., Nk}, m in{^(u) : v £ T k}} (4.37) Let (x fc+1, tfc+i) £ V such th a t tk+i — g(xA+1) = Uk+i- Set K k+1= ( K k\{ S i: j = l ,.. . , N k } ) \ j K ' k (4.38) If |7?.fc+i| = 0, then stop (xfc+1, t^+i) is an optim al solution of problem (CP). O therw ise, set C k + 1 = m in {£(5{+1) : / = !,..., |f t *+ 1 |} (4.39) 8 8 S te p 6: If Uk+i — £k + 1 < e, then stop: (xk+1, tk+i) is an optim al solution of problem {CP). O therw ise, go to the next iteration. R e m a r k 3: /*, is an index set of active processors at iteration k. Note th a t not every processor is active throughout an iteration, some of them m ight be idle after some step. T h e o r e m 10 Any accum ulation point o f the sequence (x k,tk ) generated by Algo rithm 2 is a global optim al solution. Proof: From Lem m a 6 , we know th a t any infinite nested sequence of simplices Sr, r G Q C {0,1, 2 ...}, obtained from A lgorithm 2 by m eans of a Q -triangulation subdivision is exhaustive. T hen the proof can be seen in [HT93]. D A lg o r ith m 3 This algorithm is based on th e m ethod described in Horst et al. [RHdV91] to perform com putation in parallel. Since a prism in [RHdV91] is always generated by an n-sim plex, th e prism atic partitio n in [RHdV91] is sim ilar to the sim plex subdivision in A lgorithm 2. By utilizing both parallel procedure and subdivision process as indicated in A lgorithm 2, we have a parallel process for algorithm in [RHdV91]. 
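Two subdivision rules are used above: the Q-triangulation of Definition 6, which is attractive for building the initial parallel partition, and bisection of a longest edge, the exhaustive rule applied within the iterations (and in the numerical tests of the next section). A minimal sketch of both is given below in Python/NumPy; the experiments in this dissertation were written in MATLAB, so the function names and the array-based representation of a simplex are assumptions made only for illustration.

import itertools
import numpy as np

def q_triangulation(n, M):
    """All simplices of the Q-triangulation of the unit simplex S^n with
    grid size 1/M (Definition 6).  Each simplex is an (n+1, n+1) array
    whose rows are its vertices v^1, ..., v^{n+1}."""
    simplices = []
    e = np.eye(n + 1)
    # grid points of S^n: nonnegative multiples of 1/M summing to 1
    for comp in itertools.product(range(M + 1), repeat=n + 1):
        if sum(comp) != M:
            continue
        v1 = np.array(comp, dtype=float) / M
        for pi in itertools.permutations(range(1, n + 1)):
            verts = [v1]
            for i in range(n):
                q0 = e[pi[i]] - e[pi[i] - 1]      # q^0(j) = e(j+1) - e(j)
                verts.append(verts[-1] + q0 / M)
            verts = np.array(verts)
            if (verts >= -1e-12).all():           # keep only vertices in S^n
                simplices.append(verts)
    return simplices

def bisect_longest_edge(vertices):
    """Split a simplex (one vertex per row) into two subsimplices through
    the midpoint of a longest edge; repeating this is an exhaustive
    subdivision."""
    p = len(vertices)
    best, best_len = (0, 1), -1.0
    for i in range(p):                            # locate a longest edge
        for j in range(i + 1, p):
            length = np.linalg.norm(vertices[i] - vertices[j])
            if length > best_len:
                best, best_len = (i, j), length
    i, j = best
    mid = 0.5 * (vertices[i] + vertices[j])
    child1 = vertices.copy()
    child1[i] = mid                               # replace v_i by the midpoint
    child2 = vertices.copy()
    child2[j] = mid                               # replace v_j by the midpoint
    return child1, child2

The brute-force enumeration in q_triangulation is only meant to make Definition 6 concrete; for n = 2 and M = 6 it reproduces the triangles of Figure 4.1(a).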
4.5 N u m e rica l T ests P r o b le m 1: We applied the A lgorithm 1 (serial process) to the following problem which was solved in M uu and O ettli [M091] by a branch-and-bound m ethod. m inim ize (4 xf) — (O.lx^ — x ^ 2) 89 subject to: 0 < X\ < 1, (4.40) 0 < x2 < 2, Xi + X2 > 1, Set / ( x ) = 4 x \, g(x) = O.lxj — x ^ 2, then problem (4.40) can be rew ritten as a (C P) problem m i n i m i z e t ~ g(®) subject to: x € D, (4.41) f ( x ) - t < 0 w ith D = {x G 3?2 : 0 < x x < 1, 0 < x 2 < 2, x i + x 2 > 1} Choose the convergence criterion e = 0.01 and construct a sim plex S = {x G 3?2 : Xi > 0, x 2 > 0, Xi + x 2 < 3} and the corresponding prism V{= Pq) with since 0 < m in { /(x ) : x (E D } and let tr = 10000000. T hus, these vertices with objective values are — 36, 9999964, 0, tr , 0, tr respectively. From (4.29), we obtain Uo — 1 and (x ° ,t 0) = (0 , 1 , 0 ). Iteration 0: choose Vq = (3 ,0 ,0 ), then find a 7 = 0.9168 such th a t z0 =(0.4788,0.9168,0.9168) and V ’C^o) =• Set Ui — Uq — 1, and form Pi = Po D {(a;>0 : Co(x,t ) = 3.83x! — <-0.9168 < 0}. W ith Uk — Ck < e, the algorithm term inates after 5 iterations at the ap proxim ate optim al solution (x*,<*) = (0.0683,0.9364,0.0186) w ith the objective vertices (3,0,< B), (3,0,<r), (0,3,< s ), (0 ,3,tT), (0,0, tB), (0,0, tT), where tB - 0 90 function value 0.9863. T hroughout the com putation, 14 vertices are generated by cuts and m axim um num ber of vertices stored in m em ory are only 2. But, th e algorithm in [M 091] showed Xk = (0.125,0.875) w ith objective function value 0.9979 after 8 iterations. P r o b le m 2 [R H d V 9 1 ]: In this problem , we use A lgorithm s 1,2,3 to solve it and let Nk = N in parallel algorithm s. m inim ize {Ax\ -f 2x%) — {Ax\) subject to: x\ — 2xx — 2x 2 — 1 < 0 , -1 < Xi < 1, - 1 < x 2 < 1, Let f(x) = 4x* + 2x%, g(%) = Axj, then we have (C P) problem m inim ize t — g(x) subject to: ( x , t ) £ U where T> = {(x, t) € 5 ft3 : f(x) — t < 0,x\ — 2xi — 2x2 — 1 < 0 , - 1 < xi < 1,-1 < x 2 < 1 } and tB = 0 . Let e = 0.05 and construct an initial sim plex S = {x G 5 ft2 : .T x > — 1, x 2 > — l,xi + x 2 < 2} w ith vertices (3 ,-1 ) , (—1,3), ( — 1 ,- 1 ). We obtain Uq = 0, (x°,to) = (0,0) according to (4.29), (4.32). A lg o r ith m 1: Choose uQ = (0,0,0.5) and tr = 10000000. We obtain U q = (3,—1,0) by evaluating t — g(x) a t (3,-1,0), (-1,3,0), (-1,-1,0). A fter a line search ing, we find 7 = 0.8194, z0 = (0.5417,-0.1806,0.4097) and ip(z0) — — 0.7642. 91 T hus U\ = — 0.7642 and add a cutting plane 2.5437a;! — 0.7223a;2 — t — 1.0987 < 0, i.e. form a new polyhedron Pi = PofK O M ) G : 2.5437X! - 0.7223a;2 - t - 1.0987 < 0} C ontinuing w ith the sam e procedure, the algorithm will stop a t an approxim ate optim al solution (a:*, i,) = (0.7171, — 0.0103,1.0580) w ith objective function value — 0.9990 after 16 iterations. T here are 90 vertices generated during th e com puta tions. If we m ake use of the parallel m ethod described in [LP94], then the algorithm only needs 1 0 iterations to obtain the sam e accuracy. A lg o r ith m 2 : (Let IV jt = N — 1, i.e. sequential algorithm ) P artitio n the prism (tr — 36) into 3 sim plices according to (4.11), then subdivide them into 6 sub sim plices by using bisection. Iteration 0: we have Uq — 0, (a;°,t0) = (0 ,0 ,0 ), Pi = P o p l i ^ O : C&(x,t) < 0, j = 1 ,2 ,3 ), where Cl{x,t) = 16aq + 4a;2 — t — 14, C^x^t) = — 16a; j — 4a;2 — t — 14, Cq(x, t) = 16aq — 4a;2 — t — 14. and |7?-i| = 8 w ith lower bounds —15.2, — 4, — 4, — 4, —1.84, —1.84, —1.1579, — 1.1579. 
Obviously, L\ — —15.2. Iteratio n 1: Solve the linear program (4.21), (4.22), we obtain A =(0.35, 0.15, 0.5, 0), thus (ad,ti) = (0 .7 ,—0.7,0), and add a constraint C i(a;,t) = 5.4880a;! — 2.800a;2— t —3.8612 < 0 since (a:1, ti)£T>, therefore P 2 = Pi 0 : Ci(x, t) < 0). 92 A t th e end of this iteration, |7?-21 = 8, £ 2 = —11.4541, Ui = 0. A fter 114 iterations, the algorithm will stop at an approxim ate optim al solution. T here are some com putational results in Table 4.1 using A lgorithm 2,3 with different num ber of processors (results of A lgorithm 2 using equation (4.24) were shown in parentheses) , where Nk — N, and bisection will always be used in subdivision process of sim plices. Table 4.1: R esults for A lgorithm 2, 3 w ith N Processors A lgorithm 2 A lgorithm 3 N Iter Cont_no M ax Iter Cont_no M ax 1 114(98) 109(91) 32(21) 34 29 24 2 63(51) 114(91) 33(20) 22 40 35 3 43(34) 112(92) 35(21) 14 36 28 4 26(26) 92(93) 28(23) 10 33 27 5 22(19) 98(87) 28(22) 9 36 29 10 14(13) 104(104) 13(13) 8 55 18 Iter: num ber of iterations N: num ber of processors Cont_no: num ber of added constraints Max: m axim al num ber of sim plices (7Zk) stored 93 P r o b le m 3 [R H d V 9 1 ]: Here, we apply A lgorithm 1 (use serial procedure only) to solve th e problem m inim ize (xj + x 2 + £ 3 ) — (2:1 + x\ — x3) subject to: (x\ — x 2 — 1.2) 2 + x2 < 4.4, xi + X? + x 3 < 6.5, xi > 1.4, x 2 > 1 .6 , x 3 > 1 .8 , which was solved by Thoai ([HT93],p.525) and Horst et al. [RHdV91] e = 0.01. T he form er needs 81 iterations to obtain an approxim ate and th e la tte r term inates after 18 iterations. B ut A lgorithm 1 proposed in this chapter only requires 4 iterations to achieve the approxim ate solution of the sam e precision. F irst, transform th e original (DC) problem into a (C P) problem , where f{x) = xj + X2 + x 3, g(x) — xi + x\ — x3. W ith an interior point uq = (1.4,1.6,1.8,8.0) and th e sam e e , A lgorithm 1 has an approxim ate optim al solution (x*,C) = (1.4000,1.8095,1.8000,7.4511) at the end of the 4-th iteration. 4 .6 C o n clu sio n From our num erical results, the serial version of A lgorithm 1 seems to be very efficient, at least for sm all problem s, com pared w ith other m ethods ([M 091], w ith an solution 94 [RHdV91], [HT93],p.525). It should be noted th a t the A lgorithm s 1, 2, 3 have infinite convergence (although an approxim ate solution is achieved by a tolerance criterion), and th a t the linear constraints grow in size as the num ber of iterations increases. Also, in these th ree algorithm s, A lgorithm 1 frequently generates more num ber of linear constraints and A lgorithm 2 creates less num ber of linear con straints. A lgorithm 2 is m uch less efficient th an th e other two since not only it needs m ore iterations to solve the problem s but also m ore linear constraints are generated in com putation. C om pared to the large am ount of tim e saved due to less iterations required in parallel processing for A lgorithm s 1, 2, 3, the com m uni cation overhead is unlike to have a significant effect. In conclusion, the A lgorithm s presented here can be expected to produce prom ising results in parallel processing. 95 C h a p ter 5 N u m e r ic a l E x p erien ce w ith P a ra llel A lg o rith m s for S o lv in g th e B M I P ro b lem 5.1 In tr o d u ctio n T he B ilinear M atrix Inequality (BM I) has been introduced by Safonov et al. [SGL94] as a sim ple and flexible fram ew ork for approaching robust control system synthesis problem s. 
A wide class of robust control synthesis problem s including fixed-order H°° control, p //:m-synthesis, decentralized control, robust gain-scheduling, sim ul taneous stabilization, and arb itrary com binations of these can be form ulated as BM I problem s (see [SGL94, GTS+94, SP94]). Owing to the sim plicity and gener ality of the BM I form ulation, an efficient and reliable BM I solver is in order. The problem is hard essentially due to its nonconvex character. Let us sta te the BM I problem (see [SGL94, G TS+ 94]): Let F{x,y) = (5 J ) t = i j —l 96 w here Fij = Ffj G 5?n*x”% for i G {1,... ,nx}, j G {1,... ,ny}. F : 5 R 7 1 * x S R ”* ' =>■ sjjnzxnz jg obviously bilinear, b u t not jointly convex in (x , y ). We want to find (x*,y*) G x 3i”! / such th a t F(x*,y*) < 0. i.e. find (x*,y *) in the the feasible set of the BM I (5.1) which is Sb = {(x,y) G 9?"* x SR "" : F(x,y) < 0} (5.2) In [GTS+ 94], various properties of the BM I problem are investigated and sev eral local optim ization approaches are discussed. However, local optim ization m ethods can not solve the BM I problem in general because of the nonconvex character of the BM I problem . To explore the feasible set (5.2), a global op tim ization m ethod m ay be necessary. Recently, two global optim ization m eth ods [SP94, GSP94] have been proposed for solving the problem (5.1). Essentially, th e approach in [SP94] is equivalent to m axim izing a convex function over a convex constraint set (i.e. m inim ize a concave function subject to a convex set) a problem on which num erous authors (e.g. [Hor84, Hor90, PR 86, PR87, HT93, RHB91]) have worked. In [SP94], th e BM I problem is shown to be equivalent to finding th e diam eter of a set defined as intersection of ellipsoids, which is a concave m in im ization problem . A lthough there exist several efficient algorithm s for solving concave m inim ization problem , for exam ple [RHB91, HT89, Thi89, HT93], they do not seem to be appropriate for m inim izing the problem described in [SP94]. A sim ple num erical BM I exam ple is given in [GSP94] which provides a branch and bound global optim ization algorithm by m inim izing the BM I eigenvalue problem . In this chapter, we report on num erical results and com parisons of four algo rith m s for solving the BM I problem . As in [SP94], we can solve a sequence of 97 concave m inim ization problem s or d.c. program s instead of solving the problem (5.1). T he first algorithm considered for such concave m inim ization problem s is th e m ethod proposed in [Hof81]. To accelerate the speed of convergence, a parallel im plem entation discussed in [LP94] was applied. Also, another parallel algorithm w ith a different partitio n of the convex constraint set was em ployed. T he nu m erical experim ents show th a t the parallel m ethods seem to be m ost prom ising. For solving the d.c. problem s, the m ethod of [RHdV91] where only linear pro gram m ing problem s have to be solved seems to be not efficient because of the convex constraint set generated by the ellipsoids. Here, the algorithm presented in [LP95e],[LP95a] was used to solve such d.c. problem s. T he results of test prob lem s indicate th a t the approach in [LP95e], [LP95a] is less efficient th an the first one in th e average since th e form er needs one m ore variable and has a different o u ter polyhedron. These algorithm s are guaranteed to find a solution of (5.1) in a finite num ber of iterations if (5.2) is non-em pty and a suitable tolerance (e > 0) is prescribed. 
In other words, for a prescribed num ber e > 0, they can be term inated in a finite num ber of iterations if there are no feasible solutions to be found. Section 5.2 casts the BM I problem as a global optim ization problem . In Sec tion 5.3 th e algorithm s for solving it are described. Two num erical exam ples including the results of these four algorithm s and the num erical experim ents are given in Section 5.4. Conclusion is given in the Section 5.5. 98 5.2 G lo b a l O p tim iza tio n M e th o d s C o n sid e r A r C 5ft"1 x d e fin e d b y zj.F\,\Zr z jF iazr . zr Fi tnyzr A r = < £ Unx+ny : A r = A(zr) = z j F2,izr z jF 2t2zr . Zr F2lnyZr l N - ■ H H - i Zr Fnxtlzr Zr FnxtnyZr (5.3) w here zr £ 5R"Z and ||2 r || = 1, for r = 1,2,... (cf. [GTS+94] and references therein). T hrough this chapter ||.|| denotes the Euclidian norm and V{P) denotes the vertex set of P. T he following proposition is a straightforw ard application of Savkas’ lem m a. P r o p o s itio n 1 For a fixed y £ $lny, let Cr = A ry where A r £ A r (r = 1,2,... J). There exists an x £ 5R "1 such that xTA ry < 0, for r = 1,2,...,/ if and only if 0 ^ convex hull o f (Cr, r = 1,..., I). A ctually, an x £ 5 R ”1 in Proposition 1 can be easily calculated by the m ethods de scribed in [Gil66, Wol76, Hau86]. However, it seems difficult to find a y satisfying th e condition in Proposition 1 since there is no general rule to choose such y. 5.2.1 BM I and Concave M inimization From [SP94], we know th a t th e BM I (5.1), (5.2) is equivalent to m ax xTA(z)y < 0 (5-4) IM I=i ireK"* 99 where [A(z)],j = z T F ijz G 3?ni+n!'. Thus, (5.4) leads to m inim izing a concave function subject to an infinite num ber of quadratic constraints param eterized by z. m in J z(x ,y ) subject to: - T X y / jA (z ) x ±AT(z) I < 1 (5.5) V z e !R n‘ , \\z 1 1 = 1. w here Jz(x ,y ) = — ( ||x | | 2 + ||?/||2) and p is a real positive num ber such th a t I > 0 (5.6) Obviously, the convex set C = { u E $tni+n» : : u l pA { z f I u < 1, z G 5R "2, ||z|| = 1} (5.7) is th e intersection of ellipsoids centered at the origin in S R ni+n«, where u — (x, y)T. A ctually, we do not need to find a global optim um of (5.5) because all we need is a point (x , y ) w ith Jz(x ,y ) < — 1, for Vz G 5ft"*. Thus, we have th e following lem m a. L e m m a 8 ([S P 9 4 ]) There exists a point (x , y ) G Sb if and only i f J z(x,y ) < —1 fo r problem (5.5). 100 5.2.2 BMI and d.c. Programming Since F (x ,y ) < 0 « £ = > ■ F (x ,X y) < 0 for all A > 0, w ithout loss of generality, problem (5.4) can also be w ritten as: xty subject to: (x,y) E B (5.8) where < b z{x, y) = m ax IWI=i zESR"* 0 A(z) A t (z ) 0 (5.9) and B = {(x ,y ) E 3?n*+ny : ||(a;,j/)|| < 7 }, 7 is a positive real num ber. Note th at 0 A (z) is an indefinite quadratic form and it can be represented by A T(z ) 0 Pi — P2 , where Pi, P2 are positive sem idefinite m atrices. Therefore, (5.8) is an , 0 A & optim ization of a d.c. problem because 4 > z(x,y ) is still d.c. if A T{z) 0 is d.c. [Tuy84, Tuy 8 6 a]. Let f z{x,y) and gz(x,y) be two convex functions over sjjnz+Tiy th at 4 > z( x ,y ) = f z ( x ,y ) — gz(x,y). T hus, by introducing an additional variable, we have a concave m inim ization problem m in v - g z(x,y) subject to: (x , y ) G B fz{x ,y ) - v <0 (5.10) 101 Also, including two additional variables, problem (5.8) can be transform ed into a canonical d.c. 
program (see [HT93] and references therein) as follows: m in im ize v subject to: (x,y ) E B (5-11) fz{x ,y ) - s < 0 gz( x ,y ) - s + v > 0 T he first inequality in (5.11) is convex, and the second is reverse convex. Similarly, it is not necessary to solve (5.8) for the global m inim um because any point (x , y ) w ith (pz( x ,y ) < 0 for (5.8) will satisfy (5.4). L e m m a 9 There exists a point (x,y) E Sb if and only if 4 > z(x ,y ) < 0 for prob lem (5.8). 5.3 T h e A lg o rith m s In this section we introduce three algorithm s including one serial and two parallel algorithm s for solving a sequence of concave m inim ization problem s of the type (5.5) and one serial algorithm for solving a sequence of d.c. program s of the type (5.8). Essentially, these four algorithm s em ploy an outer approxim ation m ethod using cu tting plane. 102 5.3.1 Serial Algorithms A lg o r ith m l[cf. [LP94],[Hof81]] Given A r — A (zr) £ A r (r — 1 , 2 ,...,/), where z i ,z 2, . . . , zi are random ly gener ated w ith ||2 r || = 1. Let uo be the origin strictly interior point of C. C onstruct a sim plex Po 3 flr=i Qr, where Qr is an ellipsoid. Obviously, the vertex set V'(Po) of Po consisting of the nx + ny + 1 points is known. Set t =0. S te p 1 : Choose ut by m inim izing J z(u), u £ V{Pt). If ut £ Dr=i Qr, then ut is an optim al solution w ith optim al value J z(ut). O therw ise, solve the line searching problem m in a (5.12) i subject to: u t + a(uo — u t) £ f | Qr; 0 < a < 1 r = l Let a* be the solution and wt = u t + a*(u0 — ut). If J z{wt) < —1, stop. O therw ise, set ht(u) = V H T(wt)[u - wt] (5.13) where H (u) £ Qr, r = 1,..., / and wt on the boundary of H(u). S te p 2: Set Pt+i = Pt P i {u £ 3?n*+ns ' : ht(u) < 0). C om pute V (P t+i). Let / < — t + 1 , go to Step 1 . R e m a r k T he initial sim plex Po can be represented by nx+ny P q = { u £ 5 jfjn x + n y . Ui < d(nx + n y), u > — d} i=i 103 w here d — m in{dr , r = 1,..., and dr is the radius of Qr. T he following algorithm [LP95e],[LP95a] solves the d.c. m inim ization prob lem (5.8) via concave program m ing, i.e. solve problem (5.10). Let D = {(x,j/,u) G 3Jn*+ny+1 ; (x ,y ) G B , f z(x ,y ) < v} be the feasible set of (5.10). A fter having found a sim plex S D B w ith vertex set V (5 ), a prism P0 D D can be defined by P0 = {{x,y, v) G sR ni+"y+1 : (x,y ) G S, vg < v < v t } (5.14) where vg — m in { /z(u) : u G B} and v t = m a x { /z(u) : u G K (5 )}. T he prism Po has n + 1 vertical lines (parallel to the v —axis) which pass through the n + 1 vertices of S respectively. Thus, A lgorithm 2 can be described as A lgorithm 2 (cf. [LP95e],[LP95a]) Given A r = A (zr) G A r (r = 1,2,..., /), where z\, z2, . ■ ■ ,zi are arbitrarily gener ated w ith ||zr || = 1. Let (u0,v 0) be strictly interior point of D. C onstruct a prism Po D D. T hen, the vertex set V (P 0) of P0 consisting of the 2(nx + n y + 1) points is known. Set t =0. Step 1: Choose (ut,v t) by m inim izing 0z(u,u), (u,v) G V (P t). If G D, th en (ut,v t) is an optim al solution w ith optim al value 4 > z(ut,v t). O therw ise, solve the problem m ina (5.15) subject to: (ut,v t) + a [(u 0,Wo) — (ut,v t)] G D; 0 < a < 1 104 Let a* be the solution and wt = («t,i>t) + a!*[(uo, Uo) — (u (,u*)]. If < t > z{wt) < 0, stop. O therw ise, set h t(u, v ) — V H T (wt)[(u, v ) — wt] (5.16) where H (u, v) = f z{u) — v and H (w t) = 0. Step 2 : Set Pt+1 = Pt fl {(u,w) G 5Jn * + n » + 1 : ht(u,v) < 0}. C om pute V (P t+i). Let t < — t + 1, go to Step 1. 
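The work per iteration of these serial algorithms is dominated by two operations: the line search from the current vertex toward the interior point, and the update of the vertex set after a cut is added. Because u_0 is the origin and, by (5.7), the constraint set of (5.5) is an intersection of ellipsoids centered at the origin, the line search of Step 1 in Algorithm 1 has a closed form. The sketch below (Python/NumPy) assumes each ellipsoid is stored as a symmetric positive semidefinite matrix E_r with Q_r = {u : u^T E_r u ≤ 1}; this representation and the function name are assumptions made for the sketch, not a transcription of the code used in Section 5.4.

import numpy as np

def line_search_and_cut(u_t, ellipsoids):
    """Step 1 of the serial Algorithm 1 with u_0 = 0.

    u_t        : current vertex of the enclosing polytope P_t
    ellipsoids : list of PSD matrices E_r with Q_r = {u : u^T E_r u <= 1}

    Returns (w_t, cut): w_t is the point where the segment [u_t, 0] meets
    the boundary of the intersection of the Q_r, and cut = (a, beta) encodes
    the half-space a^T u <= beta of (5.13); cut is None if u_t is feasible.
    """
    q = np.array([u_t @ E @ u_t for E in ellipsoids])
    r_star = int(np.argmax(q))
    if q[r_star] <= 1.0:                 # u_t lies in every ellipsoid
        return u_t, None
    w_t = u_t / np.sqrt(q[r_star])       # alpha* = 1 - 1/sqrt(max_r u_t^T E_r u_t)
    a = 2.0 * ellipsoids[r_star] @ w_t   # gradient of u^T E u - 1 at w_t
    return w_t, (a, float(a @ w_t))      # h_t(u) = a^T (u - w_t) <= 0

If ||w_t||^2 > 1 the stopping test J_z(w_t) < −1 is met; otherwise the cut is appended to P_t and the new vertex set V(P_{t+1}) must be computed, which is the expensive part of the method (cf. [RHdV88]).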
5.3.2 Parallel Algorithms Since the algorithm s presented in preceding section search for the solution around a convex set, th eir parallel im plem entation can be done efficiently by partitioning the constrained set into N p arts (i.e. N independent subproblem s) and executing th e algorithm s for each subproblem sim ultaneously. Here, we em ploy two parallel approaches for A lgorithm 1 . A lgorithm 3: A pply the sam e m ethod of partitio n as in [LP94] to solve each subproblem by A lgorithm 1. All N subproblem s are solved in parallel. However, the partitio n in A lgorithm 3 m ay cause overlap for th e subproblem s. T he subdivision of the enclosing polyhedron Po shown in Figure 5.1 will have no overlap. Each region in Figure 5.1 is a sim plex containing nx + n y + 1 linear constraints. Therefore, A lgorithm 4 can be stated as follows. A lgorithm 4: E xecute the sam e procedure as A lgorithm 3 except using the p artition described 105 as Figure 5.1. In order to balance the com putation load of each node for A lgorithm 4, p artition ing the enclosing polyhedron Pq into N sim plices of equal volum e is suggested. In this chapter, nx + n y + 1 (=N) processors will be used in parallel algorithm s 3,4 for num erical com putation. Figure 5.1: An exam ple for dividing Po € 3 ? 2 into 3, 4 or 6 regions of equal volume. 5.3.3 A Global Algorithm for BMI Problem A pplying the above algorithm s, we have th e following algorithm to solve BMI problem : 106 A lgorithm GAB Let A r = A (zr) € A r ( r = 1,2,..., /), where Z\,Z2, ... ,zi are random ly generated w ith ||zr || = 1 . Set k = 1 . Iteration k : Applying A lgorithm s 1 ,2 ,3,4 to solve the problem m in J^k)(x,y ) subject to: - M r Pr r Pr r < 1. (5.17) or m in < j > (k){x,y) (* » y ) subject to: ||(a:,i/)|| < 7 ; r = 1 , 2 ,...,/. (5.18) / j-A r where J ( zk\ x , y ) = - ( | | x | | 2 + ||y ||2), Pr > 0 for _L At P r A r I - T r - r X 0 A r X num ber pr and < fik\ x , y ) = m ax|p ||= 1 y A T r 0 y , r = 1 , 2 ,...,/. (1 ) If a globally m inim izing pair (x^k\ y ^ ) w ith = —1 or 4 k) = 0 is found in th e A lgorithm s 1,3,4 or 2, then the BM I problem has no feasible solution. (2) If any point (x^k\ y ^ ) w ith J jk^ < — 1 or (f>^ < 0 has been obtained after t(k ) iterations in solving the above problem s, th en find z * + 1 which solves the problem 107 { *T I E ” i , E ”i , 4 k)y f F i , i ] * } < 5 ' 19> Let s be th e solution to th e equation (5.19). If s < 0, th en stop, (x^k\ y ^ ) is a feasible solution of BM I. Otherw ise, form Ai+i = A(zi+ 1) G A r and set k 4 — k 1, I 4 — / — ( — 1. Rem arks • t(k ) m eans th a t A lgorithm s 1,2,3,4 require t iterations to find a point (x , y ) such th a t j ( k\ x , y ) < —1 or y) < 0 at iteration k. • At iteration k, it is not necessary to resta rt the algorithm s 1,2,3,4 to solve (5.17),(5.19) and all algorithm s are based on the inform ation obtained at iteration k — 1 to search the solution. • In parallel algorithm s 3,4, assign to each processor one subproblem . If sev eral processors find (x, y) w ith J^k\ x , y) < —1, then choose the point having th e sm allest objective function value as (x^k\ y ^ ) . Lem m a 10 At iteration k, if s > 0 for z *+ 1 in (5.19), then the ellipsoid generated by Ai+i strictly separates (af k\ y W ) from Dr=i Qr, Le. i Proof: Since s > 0, we have A i+ iy ^ > 0. i.e. - T r a<*> S i - T r x (k) 1 - T r * (* > 1 -------------- 1 0 A/+i AJ+1 0 P i + i l A i + 1 Af+1 pi+il — Af+l p i + i ‘+ 1 a<*> yW > 0 — A p i+ i /+i x { k ) y(k) > 1 + /?. 
where | | a ^ | | 2 + ||y ^ ||2 = 1 + ft, for some (3 > 0. □ In practice, A lgorithm s 1,2,3,4 would stop when £4 — Lk < e, where Uk is an upper bound, Lk is a lower bound of and e > 0 is prescribed. Therefore, given an e > 0, the four algorithm s term inates in a finite num ber of iterations if there is no feasible solution to be found. T h e o r e m 11 I f Algorithms 1,2,3,4 are not considered to stop after a finite num ber of steps (e — > Q), a feasible solution of B M I can be found if there exists a feasible set for problem BM I. Proof: See [LP94],[Hof81],[LP95e],[LP95a]. □ 109 5.4 E x a m p les an d N u m e rica l E x p erim en ts 5.4.1 Examples E x a m p le 1 To illustrate the four presented A lgorithm s, consider a sim ple BM I problem stud ied in [GSP94]: F (x ,y ) = z iy iF u + xi y2F ii2 + x 2yiF 2,i + x 2 y2F2,2 (5.20) where F \t2 — - 1 0 - 0 .5 - 2 - 0 .5 4.5 0 - 2 0 0 / -1 .8 -0 .1 -0 .4 -0 .1 1.2 - 1 -0 .4 - 1 0 x — 2, Tly = 2, n z - f*2,l — F2.2 — i r 2,2 / \ 9 0.5 0 0.5 0 - 3 0 - 3 - 1 \ 0 0 2 0 - 5 .5 3 2 3 0 Ax — ( \ / \ -7.3379 -1.2995 , a 2 — -0.3425 0.6582 7.2088 -0.870 8 1.4572 -2.1147 110 T he results by GAB w ith A lgorithm s 1,2,3,4 are following: A lg o r ith m 1: a feasible solution (x*,y*) = (0.5493, 0.3767, 0.2889, 0.7299) with A(F(x*,y*)) = — 0.0689 (the greatest eigenvalue of F(x*,y*)) was found at k — 3, where t(l) = 5, 2(2) = 2, t(3) = 20 and £ ? =1 <(*) = 27. A lg o r ith m 2: a feasible solution (x*,y*) = (0.1204, 0.0947, 0.3233, 0.8538) with A(F(x*,y*)) = — 0.0089 was found at k = 4, where /(l) = 1, 2(2) = 2, 2(3) = 2, 2(4) = 23 and £ ? =1 *(*) = 28- A lg o r ith m 3: a feasible solution (x* ,y *) = (0.4983, 0.2319, 0.1895, 0.8335) with A(jF(x*,y*)) = — 0.0338 was found at k = 3, where t(l) = 1, t(2) = 2, t( 3) = 10 and £?= i t{i) = 13. A lg o r ith m 4: a feasible solution (x * ,y *) = (-0.4769, -0.5460, -0.4679, -0.5460) w ith A(F(x*,y*)) = — 0.1457 was found at k = 4, where t(l) = 2, t(2) = 1, t(3) = 2, 2(4) = 10 and Yli=i t (0 = 15. Ill Exam ple 2 Let F ( x ,y ) = £?= i £ j= i XiyjFi}j, where / \ 7 0 1 - 6 * U = \ 0 2.5 1 1.5 1 1 - 4 0 - 6 1.5 0 5 / * 2,1 = 3,1 / \ -3.4 1 0 1.5 1 2 5 - 1 0 5 2 1.8 v 1.5 - 1 1.8 2.5 - 2 - 1 3.2 2 - 1 1 - 1.2 1 3.2 - 1 .2 8.8 0 2 1 0 -2 .3 *i.a = ( \ - 3 .5 3.33 1.23 0 3.33 1.8 1 - 1 1.23 1 -6 .1 2 0.75 0 - 1 0.75 2.15 2,2 \ * 3 ,2 — 0 - 1 0 0.8 - 1 - 3 0 -4 .6 0 0 - 3 1.5 0.8 - 4 .6 1.5 5.5 j 1 3.4 1.1 3 3.4 2 0 5 1.1 0 - 5 1.2 3 5 1.2 -3 .1 / -2 .5 9 4.3 0 0 / 4.7 0.56 - 0 .6 - 1 * 4 ,1 = 4.3 -2 .3 1.7 0.56 I I £ 0.56 0 -5 .1 2 0 1.7 2.8 - 0 .6 - 0 .6 -5 .1 8.29 0 0 \ 0.56 - 0 .6 - 1 \ 2 0 - 1 . 5 , Four feasible solutions of this BM I problem obtained by A lgorithm s 1,2,3,4 are 112 Table 5.1: C om putational Results for Exam ple 2 Algorithm 1 Algorithm 2 Algorithm 3 Algorithm 4 k m J ? 
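Given the data matrices above, the feasibility of any candidate pair can be verified directly, since F(x, y) in (5.20) is just the bilinear combination of the F_{i,j} and feasibility means that its greatest eigenvalue is negative. A minimal check is sketched below in Python/NumPy; the experiments in this chapter were run in MATLAB, so the function names and data layout are illustrative assumptions, not the code that produced the results.

import numpy as np

def F_of(x, y, F):
    """F(x, y) = sum_{i,j} x_i y_j F_{i,j}, with F[i][j] the symmetric
    (nz, nz) data matrix F_{i+1,j+1}."""
    nz = F[0][0].shape[0]
    out = np.zeros((nz, nz))
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out += xi * yj * F[i][j]
    return out

def greatest_eigenvalue(x, y, F):
    """lambda_bar(F(x, y)); the BMI is satisfied at (x, y) iff this value
    is negative."""
    return float(np.linalg.eigvalsh(F_of(x, y, F)).max())

The values lambda_bar(F(x*, y*)) reported below for Algorithms 1 to 4 are this greatest eigenvalue evaluated at the returned solutions.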
} A(F) t(k) A(F) t(k) J ifc ) A (F ) t(k) J ifc ) A (F) 1 i -1.456 .426 1 -.310 .272 1 -1.055 1.092 1 -1.055 1.092 2 5 -1.109 .941 7 -.807 2.182 2 -1.060 1.108 2 -1.016 1.018 3 6 -1.023 2.297 1 -.488 1.070 6 -1.095 1.022 2 -1.002 .272 4 5 -1.048 1.006 28 -.060 .499 7 -1.009 .494 3 -1.052 2.818 5 16 -1.021 .578 18 -.080 .244 8 -1.009 .387 10 -1.002 1.018 6 14 -1.036 .033 20 -.066 .076 2 -1.058 .855 8 -1.034 .410 7 43 -1.018 .177 17 -.107 .154 12 -1.001 .546 2 -1.007 .111 8 22 -1.005 .139 73 -.071 .045 13 -1.007 .204 7 -1.071 .053 9 10 32 -1.022 -.026 43 -.209 -.078 19 -1.018 -.028 3 2 -1.002 -1.029 .001 -.097 = (0.4199, 0.1428, 0.1962, -0.5686, -0.6568, 0.1788), (-0.1418, -0.3744, - 0.0719, 0.3597, 0.3686, -0.1592), (0.5809, 0.2741, 0.1173, -0.7094, -0.2823, 0.0925) and (0.6941, 0.2731, 0.1390, -0.4515, -0.4515, 0.2128) respectively. The com puta tional history of GAB using A lgorithm s 1,2,3,4 is shown in Table 5.1. 5.4.2 Numerical Experiments In this section, th e four algorithm s presented in th e previous sections were tested and com pared on sm all problem s random ly generated w ith n x = 2, n y = 2 n z = 3. These algorithm s were program m ed in M ATLAB and run on Sun SPA R C station IPX (4/50). In order to test their perform ance, the four algorithm s were run w ith the sam e Ai,A% at the beginning for each test problem . T he com putational results of A lgorithm s 1,2 are shown in Table 5.2. Since A lgorithm 2 needs one more variable v and m ore vertices at the beginning (prism Po), it seems less efficient th an A lgorithm 1. B ut due to the different form ulation for the BM I problem , A lgorithm 1 has m ore iterations th an A lgorithm 2 in some test problem s. Notice 113 th a t the perform ance of A lgorithm s 1,2 depends heavily on the choice of the interior point Uq ([uq, Uo])- However, there are no general rules for this choice. Table 5.2: Perform ance of Serial A lgorithm s Algorith m 1 Algorithm 2 No. k t(i),i= Iter Time(sec) k N , I I M P S ' Iter Time(sec) “1 6 5,1,4,11,5,6 32 ■31.00 4 14,4,3,8 17 44.86 2 4 1,11,2,1 15 7.75 4 3,7,6,1 17 42.44 3 3 2,2,13 17 9.06 4 2,1,1,2 6 8.69 4 2 1,7 8 3.10 2 1,3 4 4.55 5 3 5,1,2 8 3.42 1 5 5 5.42 6 4 2,2,1,7 12 5.41 3 4,2,3 9 17.56 7 6 2,2,7,10,12,1 34 31.86 3 1,1,8 10 20.43 8 3 5,1,1 7 2.72 3 1,7,2 10 17.72 9 5 1,1,63,2,45 112 264.96 4 1,1,13,50 65 445.67 10 4 1,3,17,7 28 20.71 2 1,2 3 2.88 11 2 6,1 7 2.90 1 1 1 0.95 12 5 5,1,10,1,6 23 15.51 2 1,14 15 33.93 13 5 2,19,12,20,7 60 87.26 5 2,28,20,20,5 75 566.28 14 5 7,4,3,1,6 21 12.88 3 8,5,10 23 81.01 15 2 2,7 9 3.93 1 2 2 1.81 16 7 1,11,3,10,5,1,3 34 48.93 4 1,12,1,8 22 74.66 17 5 1,2,8,1,3 15 7.76 3 1,2,3 6 7.56 18 5 5,14,23,28,36 106 248.68 6 6,5,28,12,1,78 130 2478.60 19 4 1,12,6,41 60 86.79 4 1,8,10,28 47 295.28 20 1 2 2 0.50 1 3 3 2.89 21 7 6,1,24,65,24,41,74 235 726.02 6 1,14,18,104,4,13 154 3100.50 22 4 2,9,22,16 49 59.63 2 2,35 37 130.45 23 6 5,2,1,1,3,9 21 14.62 6 1,4,5,1,12,1 24 70.00 24 3 3,2,35 40 41.70 5 3,6,8,32,11 60 372.26 For th e parallel algorithm s, the BM I problem can be com puted in parallel w ithout any com m unication among all nodes from iteration k to iteration k + 1. T he only m essage needed to be broadcasted is a possible feasible solution found at each iteration of algorithm GAB. Since the am ount of d a ta to be passed is small, the com m unication overhead should not cause series delay. T hus, here we use a serial com puter to sim ulate the behavior of a parallel com puter. 
Let 71 / denote the tim e which is needed to com plete the t-th iteration in A lgorithm s 3,4 for processor 114 i. Obviously, an approxim ate m easure of the tim e taken by a parallel com puter to finish the t-th iteration is r ; = m ax Tt (5.21) W ithout considering the com m unication, the approxim ately executing tim e of Table 5.3: Perform ance of Parallel A lgorithm s Algorithm 3 Algorit lm 4 No. k t(i),i = l,...,k Iter Time(sec) k t(i),i= l,...,k Iter iim e(sec) 1 6 1,2,4,2,1,2 12 4.62 4 1,3,3,1 8 3.57 2 4 1,6,1,2 10 3.59 3 1,5,2 8 3.14 3 4 1,2,2,2 7 1.74 3 1,2,3 6 1.95 4 1 1 1 0.38 2 1,2 3 1.03 5 1 4 4 0.64 2 4,1 5 1.02 6 3 1,2,2 5 1.27 3 1,1,2 4 2.28 7 2 1,3 4 1.04 3 1,3,2 6 2.40 8 2 1,5 6 1.70 1 4 4 0.99 9 6 1,2,22,3,48,7 83 102.80 5 1,8,14,5,16 44 41.75 10 2 1,2 3 0.88 3 1,2,4 7 2.69 11 1 1 1 0.40 2 3,1 4 0.74 12 4 1,5,6,4 16 5.99 3 4,4,3 11 4.10 13 5 1,10,5,1,3 20 10.44 5 1,8,3,2,3 17 10.19 14 4 2,5,1,2 10 3.58 4 5,2,1,2 10 5.02 15 2 1.4 5 1.20 1 3 3 0.71 16 3 1,2 3 0.69 2 1,6 7 2.62 17 2 1,3 4 0.81 3 1,2,4 7 3.03 18 5 3,1,2,15,4 25 12.89 6 4,4,8,7,4,2 29 20.56 19 6 1,6,3,3,1 14 4.24 5 1,9,6,2,3 21 14.25 20 1 1 1 0.14 1 1 1 0.14 21 9 1,4,9,26,22,14,2,4,4 86 103.05 8 3,1,5,4,3,25,2,2 45 41.31 22 5 1,11,10,15,3 40 29.57 6 1,5,8,2,5,6 26 19.30 23 3 1,2,2 5 1.23 4 4,2,2,1 9 3.06 24 3 1,4,5 10 2.48 3 2,2,4 8 2.70 parallel algorithm s for solving a BM I problem can be m easured by K t(k) T im e (sec) = E E I' « ‘ (5.22) fc=:l t= 1 w here K is the to tal iterations of GAB for solving the BM I problem . Actually, T* m ay be not achieved at the sam e node for each iteration. T hus, (5.22) seems to overestim ate th e tim e. Table 5.3 contains the com putational results for the 115 sam e test problem s as Table 5.2. In general, the num ber of vertices generated by cut is rapidly increasing w ith the dim ension of the problem . T he storage of vertices will be significantly large w ith high dim ension [RHdV88]. A lthough the parallel algorithm s can reduce th e iterations of solving the BM I problem with lower storage of vertices for each processor [LP94], [LP95d],[LP95a] it still needs a bigger com puter for large BM I problem s. 5.5 C o n clu sio n In this chapter we introduced a global m ethod w ith four global optim ization ap proaches to find the solution of the BM I problem via solving a sequence of non- convex m inim ization problem s of type (5.5) or (5.8). Tables 5.2, 5.3 dem onstrates th a t th e parallel algorithm s are m ore efficient th an the serial algorithm s. How ever, due to the nonconvex character of the BM I problem s, solving them is still tim e-consum ing especially for high dim ension BM I problem s as those originated by robust control problem s. In fact, com putational experim ents have shown th a t m ost currently available m ethods of global nonconvex optim ization w ith general convex constraint set are practical only for problem s of sm all size. In order to address this difficulty, parallel com putations seem to be a prom ising approach but obviously m ore work is needed. 116 C h a p ter 6 S u m m a ry an d F u tu re R esea rch 6.1 S u m m ary In this dissertation we develop several parallel algorithm s for globally solving three im p o rtan t classes of nonconvex optim ization problem s: concave m inim ization, lin ear program w ith an additional reverse convex constraint and d.c. program m ing. 
The performance of all serial and parallel algorithms mentioned above is investigated intensively, including tests on the distributed-memory MIMD parallel computer DELTA. The computational results in Tables 2.2, 3.1, 3.2 and 4.1 indicate that the parallel algorithms are more efficient than the serial ones. In addition, the memory required by the parallel algorithms can be reduced, which makes it possible to solve problems of larger size. Also, in Chapter 5 it is shown that the BMI problem, a powerful tool for solving control system synthesis problems, can be solved by the above algorithms, although substantial computational time savings have not been achieved so far.

The algorithms presented in this dissertation mainly use the outer approximation technique and the cutting plane method to find a global solution. In Chapters 2, 4 and 5 the parallel algorithms are based on a suitable partition of an initial polyhedron enclosing the feasible constraint set to form subproblems, which are then solved in parallel. The parallel algorithm described in Chapter 3 searches for the global solution along the edges binding at a nondegenerate vertex in parallel and computes the new vertices generated by cuts with all processors.

From our computational experiments, the nonconvex global optimization problems studied here seem to be difficult to solve over a general constraint set. One reason is the nonlinear constraints of the problems; a second is that the number of newly generated vertices grows rapidly with the number of variables. Therefore, in Chapters 4 and 5 only problems of small size are presented, and obviously more work is needed for the above nonconvex problems subject to a general constraint set.

6.2 Proposal for Future Work

Since a wide range of applications leads to classes of nonconvex global optimization problems like those studied here, and a strong demand for efficient algorithms for solving them exists, the proposed directions of future research naturally focus on the following.

This dissertation shows that a large amount of memory and computation (e.g., in Chapter 2, the storage of vertices and the time needed to generate new vertices) is required for nonconvex global optimization problems. Future work should attempt to improve the efficiency of the algorithms for solving these three
• As introduced in C hapter 1, there are some connections am ong nonconvex global optim ization problem s; for exam ple, a concave m inim ization prob lem w ith a polyhedron constraint set can be transform ed into a linear pro gram s w ith an additional reverse convex constraint. Hence, when dealing w ith som e nonconvex optim ization problem , a transform ation into another 119 equivalent nonconvex optim ization problem m ay be of benefit to com pu tatio n or sim plicity. In other words, exam ining a nonconvex problem in different structures m ay yield a b e tter result. 2. N um erical m ethods for solving the BM I problem : • As studied in C hapter 5, the feasible set of th e BM I problem can be found by solving a sequence of concave m inim ization problem s or d.c. program s. Thus, the im provem ent of efficiency of th e algorithm s for solving concave m inim ization problem or d.c. program m ing is also necessary for BM I prob lem. • Since n y + nx the num ber of variables in the global optim ization algorithm s developed in C hapter 5 is a large num ber for the BM I problem originated by robust control problem s, it enhances the difficulty of finding a solution. It is possible to reduce the dim ension of the BM I problem to n y (nx > n y) by Proposition 1 in C hapter 5. However, a general rule for finding such y in Proposition 1 has to be further studied. • F urther investigate w hether the BM I problem or some interesting subclasses of it can be transform ed into other form ulations such as convex optim ization problem s. 120 R e feren ce L ist [Alt68] [AW70] [AW71] [BJ75a] [BJ75b] [BT89] [DB89] [Dou88] [DP77] [EZ71] M. A ltm an. Bilinear program m ing. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys., 16:741-745, 1968. M. Avriel and A.C. W illiam s. C om plem entary geom etric program m ing. S IA M Journal on Applied Mathematics, 19:125-141, 1970. M. Avriel and A.C. W illiam s. An extension of geom etric program m ing w ith applications in engineering optim ization. Journal of Engineering Mathematics, 5:187-194, 1971. P.P. Bansal and S.E. Jacobsen. An algorithm for optim izing netw ork flow capacity under economies of scale. Journal of Optimization Theory and Applications, 15:565-586, 1975. P.P. Bansal and S.E. Jacobsen. C haracterization of local solutions for a class of nonconvex program s. Journal of Optimization Theory and Applications, 15:549-564, 1975. D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Compu tation: Numerical Methods. Prentice-H all, 1989. T. P ham Dinh and S. El Bernoussi. Numerical Methods for Solving a Class of Global Nonconvex Optimization Problems. New M ethods in O ptim ization and T heir Industrial Uses. B irkhauser Verlag, Basel, 1989. T im othy Doup. Simplicial Algorithms on the Simplotope. Springer- Verlag, Berlin, 1988. M .E. Dyer and L.G. Proll. An algorithm for determ ining all extrem e points of a convex polyhedron. Mathematical Programming, 12:81-91, 1977. B.C. Eaves and W .I. Zangwill. Generalized cutting plane algorithm s. S IA M Journal on Control, 9:529-542, 1971. 121 [Fal73l J.E . Falk. A linear m ax-m in problem . Mathematical Proqramminq, 5:169-188, 1973. [FH75] [FH86] [FP92] [Gil66] [GJ91] [GSP94] [GTS+94] [Hau86] [Hil75] [HJ80a] [HJ80b] J.E . Falk and K.L. Hoffman. A successive underestim ation m ethod for concave m inim ization problem s. Mathematics of Operations Research, 1:251-259, 1975. J.E . Falk and K.L. Hoffman. Concave m inim ization via collapsing poly topes. Operations Research, 34:919-929, 1986. C.A. 
Floudas and P.M. Pardalos, editors. Recent Advances in Global Optimization. Princeton U niversity Press, 1992. E.G . G ilbert. An iterative procedure for com puting the m inim um of a quadratic form on a convex set. S IA M Journal on Control and Opti mization, 4:61-79, 1966. T. R. G urlitz and S. E. Jacobsen. On the use of cuts in reverse convex program s. Journal of Optimization Theory and Applications, 68:257- 274, 1991. K. C. Goh, M. G. Safonov, and G. P. Papavassilopoulos. A global optim ization approach for the BM I problem . In Proceedings IE E E Conference on Decision and Control, pages 2009-2014, O rlando, FL, 1994. K. C. Goh, L. T uran, M. G. Safonov, G. P. Papavassilopoulos, and J. H. Ly. Biaffine m atrix inequality properties and com putational m ethods. In Proceedings of American Control Conference, pages 850-855, B alti m ore, M D, 1994. J.E . Hauser. Proxim ity algorithm s: T heory and im plem enta tion. College o f Engineering, University of California, Berkeley, U C B /E R L :M 86/53, 1986. R. J. H illestad. O ptim ization problem s subject to a budget constraint w ith economies of scale. Operations Research, 23:1091-1098, 1975. R .J. H illestad and S.E. Jacobsen. Linear program s w ith additional reverse convex constraint. Applied Mathematics and Optimization, 6:257-269, 1980. R .J. H illestad and S.E. Jacobsen. Reverse convex program m ing. Ap plied Mathematics and Optimization, 6:63-78, 1980. 122 [Hof81] [Hor81] [Hor84] [Hor90] [HP94] [HS82] [HT89] [HT90] [HT93] [HU85] [Jon95] [Kel60] [Kon76] K.L. Hoffman. A m ethod for globally m inim izing concave functions over convex sets. Mathematical Programming, 20:22-32, 1981. R. H orst. An algorithm for nonconvex program m ing problem s. Math ematical Programming, 20:22-32, 1981. R. H orst. O n th e global m inim ization of concave functions: Introduc tion and survey. Operations Research Spektrum, 6:195-205, 1984. R. H orst. D eterm inistic global optim ization: Some recent advances and new fields of applications. Naval Research Logistics, 37:433-471, 1990. R. H orst and P.M . Pardalos, editors. Handbook of Global Optimization. Kluwer A cadem ic Publishers, 1994. B. Heron and M. Sermange. Nonconvex m ethods for com puting free boundary equilibria of axially sym m etric plasm as. Applied Mathemat ics and Optimization, 8:351-382, 1982. R. H orst and N.V. Thoai. M odification, im plem entation and com pari son of three algorithm s for globally solving linearly constrained concave m inim ization problem s. Computing, 42:271-289, 1989. R. H orst and H. Tuy. Global Optimization. Springer-V erlag, Berlin, 1990. R. H orst and H. Tuy. Global Optimization, volum e second edition. Springer-V erlag, Berlin, 1993. J.B . H iriart-U rruty. Generalize Differentiability, Duality and Opti mization for Problems Dealing with Differences of Convex Functions, volum e 256 of Lecture Notes in Economics and Mathematical Systems. Springer-V erlag, Berlin, 1985. E dm ond A. Jonckheere. Algebraic and Differential Topology of Robust Stability. Oxford U niversity Press, 1995. To appear. J.E . Kelley. T he cutting plane m ethod for solving convex program s of a convex polyhedron. S IA M Journal, 8:703-712, 1960. H. Konno. A cutting-plane algorithm for solving bilinear program s. Mathematical Programming, 11:14-27, 1976. 123 [LP94] [LP95a] [LP95b] [LP95c] [LP95d] [LP95e] [LP96] [Mat73] [M091] [MR80] Shih-M im Liu and G. P. Papavassilopoulos. A parallel m ethod for globally m inim izing concave functions over a convex polyhedron. 
In Proceedings of the 2nd IE E E Mediterranean Symposium on New Direc tions in Control & Automation, Ju n e 19-22, 1994. Shih-M im Liu and G. P. Papavassilopoulos. A lgorithm s for globally solving d.c. m inim ization problem s via concave program m ing. Submit ted to Applied Mathematics and Computation, 1995. Shih-M im Liu and G. P. Papavassilopoulos. N um erical com putations on solving robust control system synthesis via bilinear m atrix inequal ities. Technical Report, 95/2. Department of Electrical Engineering- Systems, University of Southern California., 1995. Shih-M im Liu and G. P. Papavassilopoulos. A parallel algorithm for linear program s w ith an additional reverse convex constraint. Submit ted to Journal of Parallel and Distributed Computing, 1995. Shih-M im Liu and G. P. Papavassilopoulos. Parallel com putation for a class of global nonconvex m inim ization problem s. In Proceedings of IA S T E D International Conference: Applied Modelling, Simulation and Optimization, Cancun, Mexico, Ju n e 14-17, 1995. Shih-M im Liu and G. P. Papavassilopoulos. A lgorithm s for globally solving d.c. m inim ization problem s via concave program m ing. In Pro ceedings of American Control Conference, S eattle, WA, June 21-23, 1995. Shih-M im Liu and G. P. Papavassilopoulos. N um erical experience with parallel algorithm s for solving th e BM I problem . In Submitted to the 13th World Congress International Federation of Automatic Control, San Francisco, California, USA, Ju n e 30-July 5, 1996. T.H . M attheiss. An algorithm for determ ining irrelevant constraints and all vertices in system s of linear inequalities. Operations Research, 21:247-260, 1973. L. D. M uu and W . O ettli. M ethod for m inim izing a convex-concave function over a convex set. Journal of Optimization Theory and Ap plications, 70:377-384, 1991. T.H . M attheiss and D.S. Rubin. A survey and com parison of m eth ods for finding all vertices of convex polyhedral sets. Mathematics of Operations Research, 5:167-185, 1980. 124 [Par92] P.M. Pardalos, editor. Advances in Optimization and Parallel Com puting, volum e H onorary Volume on the Occasion of J.B . Rosen’s 70th Birthday. N orth-H olland, 1992. [Pol87] E. Polak. On th e m athem atical foundations of nondifferentiable opti m ization in engineering design. SIA M Review, 29:21-89, 1987. [PPR87] J.H . Glick P.M . Pardalos and J.B . Rosen. Global m inim ization of in definite quadratic problem s. Computing, 39:281-291, 1987. [PPR93] A .T. Phillips P.M . Pardalos and J.B . Rosen. Topics in Parallel Com puting in Mathematical Programming. Science Press, 1993. [PR86] P.M . Pardalos and J.B . Rosen. M ethods for global concave m inim iza tion: A bibliographic survey. S IA M Review, 28:367-379, 1986. [PR87] P.M . Pardalos and J.B . Rosen. Constrained Global Optimization: Al gorithms and Applications, volume 268 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1987. [PR88] A .T . Phillips and J.B . Rosen. A parallel algorithm for constrained concave quadratic global m inim ization. Mathematical Programming, 42:421-448, 1988. [PV79] E. Polak and A.S. Vincentelly. Theoretical and com putational aspects of the optim al design centering tolerancing and tuning problem . IEEE Transactions on Circuits and Systems, CAS-26:795-813, 1979. [RHB91] N.V. Thoai R. H orst and H.P. Benson. Concave m inim ization via coni cal partitions and polyhedral outer approxim ation. Mathematical Pro gramming, 50:259-274, 1991. [RHdV88] N.V. Thoai R. H orst and J. de Vries. 
On finding new vertices and redundant constraints in cutting plane algorithm s for global optim iza tion. Operations Research Letters, 7:85-90, 1988. [RHdV91] Ng. V. Thoai R. H orst, T.Q . Phong and J. de Vries. On solving a d.c. program m ing problem by a sequence of linear program s. Journal of Global Optimization, 1:183-203, 1991. [RHT87] N.V. Thoai R. H orst and H. Tuy. O uter approxim ation by polyhedral convex sets. Operations Research Spektrum, 9/3:153-159, 1987. [Ros66] J.B . Rosen. Iterative solution of nonlinear optim al control problem s. S IA M Journal on Control, 4:223-244, 1966. 125 [SGL94] [SP94] [Tha88] [Thi89] [Tol79] [Top70] [TT84] [Tui64] [Tuy64] [Tuy 84] [Tuy86a] [Tuy86b] M. G. Safonov, K. C. Goh, and J. H. Ly. C ontroller synthesis via bilinear m atrix inequalities. In Proceedings of American Control Con ference, pages 45-49, B altim ore, MD, 1994. M. G. Safonov and G. P. Papavassilopoulos. T he diam eter of an inter section of ellipsoids and BM I robust synthesis. In Proceedings of the IFAC Symposium on Robust Control Design, Septem ber, 1994. P.T . T hach. T he design centering problem as a d.c. program . Mathe matical Programming, 41:229-248, 1988. T.V . Thieu. Improvement and Implementation of Some Algorithms for Nonconvex Optimization Problems, volum e 1405 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1989. J.F . Toland. A duality principle for nonconvex optim ization and the calculus of variations. Archive of Radional Mechanics and Analysis, 71:41-61, 1979. D.M . Topkis. C u ttin g plane m ethods w ithout nested constraint sets. Operations Research, 18:404-413, 1970. T .V . T huong and H. Tuy. A Finite Algorithm for Solving Linear Programs with an Additional Reverse Convex Constraint, volum e 225 of Lecture Notes in Economics and Mathematical Systems. Springer- Verlag, Berlin, 1984. H. Tui. Concave program m ing under linear constraints. Soviet Math. Dokl, 4:1437-1440, 1964. H. Tuy. Concave program m ing under linear constraints. Soviet Math. Dokl, 4:1437-1440, 1964. H. Tuy. Global m inim ization of a difference of two convex functions. In Lecture Notes in Economics and Mathematical Systems, volum e 226, pages 98-108, 1984. H. Tuy. A general determ inistic approach to global optim ization : via d.c. program m ing. In Fermat Days 85: Mathematics for Optimization, pages 273-303, Ed. J.B . H iriart-U rruty, N orth-H olland, 1986. H. Tuy. A General Deterministic Approach to Global Optimization via d.c. Programming. FER M A T Days 85: M athem atics for O ptim ization, Ed. J.B . H iriart-U rruty. N orth-H olland, A m sterdam , 1986. 126 [Tuy86c] H. Tuy. A general determ inistic approach to global optim ization via d.c. program m ing. 129:273-303, 1986. [Tuy87] [VD82] [VNT] [Wol76] [Zwa73] H. Tuy. Convex program s w ith an additional reverse convex constraint. Journal of Optimization Theory and Applications, 52:463-486, 1987. L.M. Vidigal and S.W . D irector. A design centering algorithm for nonconvex regions of acceptability. IE E E Transactions on Computer- Aided-Design of Integrated Circuits and Systems, CAD-T.13-24, 1982. J .J . Strodiot V.H . N guten and N.V. Thoai. On an optim um shape design problem . Technical Report, 85/5. Department of Mathematics, Facultes Universitaires de Namur. P. Wolfe. Finding th e nearest point in a polytope. Mathematical Pro gramming, 11:128-149, 1976. P.B. Zw art. N onlinear program m ing: C ounterexam ple to two global optim ization algorithm s. Operations Research, 21:1260-1266, 1973. 127