PROCESSOR EFFICIENT PARALLEL GRAPH ALGORITHMS

by

Hillel Gazit

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Computer Science)

August 1988

© Copyright 1988 Hillel Gazit

This dissertation, written by Hillel Gazit under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY.

Date: July 20, 1988

Acknowledgements

I express my sincere gratitude to my advisor Gary Miller, who introduced me to my research subjects and gave me many useful comments regarding my research. I would also like to thank the other members of the dissertation committee for their participation and advice. In addition, my special thanks to all the people who gave me good advice when I most needed it, in particular Orna Berry and Ram Nevatia.
Finally, I thank my wife, Salit, for helping, understanding and encouraging. A special thanks for her help in completing this work by reading, proofreading and editing.

CONTENTS

Acknowledgements
List of Figures
Abstract

1 Basic Definitions and Algorithms
  1.1 Preliminaries
  1.2 Basic Parallel Algorithms
    1.2.1 The PRAM Model

2 Breadth First Search
  2.1 Preliminaries
    2.1.1 Notations
  2.2 Computing the BFS
    2.2.1 First approximation
    2.2.2 Computing the distances
  2.3 Conclusions and Open Problems

3 An Optimal Connected Components Algorithm
  3.1 Introduction
  3.2 Preliminaries
    3.2.1 Previous Results
    3.2.2 Definitions
    3.2.3 Data Structure
  3.3 Easy case
  3.4 Dense-to-Easy Reduction
    3.4.1 Informal description
    3.4.2 Analysis of the Dense-to-Easy Reduction
  3.5 Sparse-to-Dense Reduction
    3.5.1 Informal Description
    3.5.2 Deterministic Mating
    3.5.3 Partition of the Vertices
    3.5.4 Analysis of the Partition Algorithm
    3.5.5 Algorithm that Reduces the Number of Vertices
    3.5.6 Analysis of the Algorithm that Reduces the Number of Vertices
    3.6.1 Analysis of the Algorithm for Finding Connected Components

4 Separators in Planar Graphs
  4.1 Planar Embedding
  4.3 Djidjev's separator algorithm
  4.4 Our separator algorithm

5 Parallel planar algorithms
  5.1 Miller's separator
    5.1.1 First Improvement

6 Polylog Separator Algorithm
  6.1 Introduction
    6.1.1 Informal Description
    6.1.2 The Algorithm - A General Outline
  6.2 Basic Parallel Algorithms
    6.2.1 Finding k-neighbors
    6.2.2 Finding an Independent Set
    6.2.3 Computing planar graph layering
    6.2.4 Dividing BFS Layers Into Cycles
    6.2.5 Computing Induced Weights on a Set of Cycles
    6.2.6 Voronoi Diagrams
    6.2.7 Creating Spanning Trees
  6.3 Reduction Step
    6.3.1 Finding a base cycle for each I-set
    6.3.2 Finding a small diameter spanning forest
    6.3.3 Computing the Face-Vertex Dual Subgraph
    6.3.4 Mesa partition
  6.4 Applications of Reduce-G
    6.4.1 Stopping the recursion
    6.4.2 Number of iterations
  6.5 Conclusion

References List

LIST OF FIGURES

1.1 Breadth First Search
3.1 Finding Connected Components: Easy Case
3.2 Reducing Dense graph to Easy Graph
3.3 Deterministic Mating
3.4 Algorithm for Partition of the Vertices
3.5 Reducing the Number of Vertices
3.6 Finding connected components in a graph
4.1 First situation
4.2 Second situation
5.1 Cone
6.1 Outline of our algorithm
6.2 Finding k-Neighbors
6.3 Luby's Independent Set Algorithm
6.4 Finding Containing Components
6.5 Finding the Distance to Your Nearest I-set
6.6 Finding the BFS Forest
6.7 Dividing Layers Into Cycles
6.8 An Example of a non-simple Cycle
6.9 An Impossible Case
6.10 An Example of a Disconnected Region
6.11 An Example of the Boundary of a Region Decomposed into Cycles
6.12 Why Virtual Vertices Do Not Interfere With Planarity
6.13 Creating a spanning tree of G
6.14 Building a Spanning Tree
6.15 Outline of a Procedure to Compute H (see Theorem 3.1)
6.16 Finding a Base Cycle for each I-set
6.17 Finding Voronoi Diagrams
6.18 An example of multi-source BFS from top cycles, and layering
6.19 The subgraph H
6.20 Computing a Dual of the Voronoi Diagram and the Subgraph H
6.21 A Voronoi Diagram which is One Cycle
6.22 Determining the Simple Cycle Boundary for each I-set and Removing all Others
6.23 Finding short paths between I-sets that are neighbors in the Voronoi Diagram

Abstract

Several parallel algorithms are presented. The following results were achieved:

1. An optimal randomized parallel algorithm for finding connected components in undirected graphs. The algorithm takes O(log(n)) time.

2. A parallel BFS algorithm for directed graphs that improves the number of processors required for BFS of a directed graph. Our algorithm requires O(log^2(n)) time using M(n) processors, where M(n) is the number of operations for matrix multiplication over the ring of the integers. The previously best known result requires O(n^3 log(n)) operations.
The value of M(n), bounded below by n^2, is successively being improved, and currently stands at n^2.376 [7].

3. An improvement of Djidjev's sqrt(6n) sequential separator algorithm [8]. Our separator size is a constant factor smaller than the previously best known bound of sqrt(6n) [8].

4. A parallel algorithm for an O(sqrt(n)) separator in planar graphs which uses O(n^(1+eps)) processors and O(log^2(n)) time.

Chapter 1

Basic Definitions and Algorithms

1.1 Preliminaries

Following Even [9] we define a graph G(V,E) as a structure which consists of a set of vertices V = {v_1, v_2, ...} and a set of edges E = {e_1, e_2, ...}. Every edge e can be viewed as a pair of vertices (u,v). In an undirected graph (which we usually refer to simply as a graph) this pair is unordered. We call u and v the endpoints of e. If u = v we say that the edge is a self-loop. If two edges e_1 and e_2 have the same endpoints we say that e_1 and e_2 are parallel or multiple edges. The degree of a vertex v, d(v), is the number of times that v is used as an endpoint of an edge. A vertex v whose degree is 0 is called isolated.

A path joining v_1 and v_k is a sequence of vertices v_1, ..., v_k such that (v_i, v_{i+1}) ∈ E. Note that there can be more than one path between two vertices. The reverse of a path from v to u is a path from u to v; therefore in an undirected graph a path from v to u implies a path from u to v. If there is a path from v to u and a path from u to w, there is a path from v to w, namely the concatenation of the two paths. There is a path from every vertex v to itself: the path is v, and it contains no edges. Connection by a path is therefore an equivalence relation on the vertices, so we can divide the vertices of the graph into equivalence classes such that two vertices are in the same class iff there is a path between them. We call these equivalence classes connected components. Chapter 3 of this thesis presents a parallel algorithm for finding connected components.
A minimal path between two vertices is a path with a minimum number of vertices in it. There may be more than one minimal path between two vertices. We say that the distance between two vertices is the number of edges in a minimal path between them. If there is no path between them we say that the distance is ∞.

A graph G is called circuit-free if there are no self-loops and for every two vertices u and v there is a unique path between them. A tree is a connected circuit-free graph. A spanning tree of a graph G(V,E) is a tree with the same vertex set as G and with an edge set which is a subset of E. For every connected graph G there exists at least one spanning tree.

A directed graph is defined similarly to a graph, except that the pair of endpoints of an edge is now ordered. The first endpoint is called the start-vertex of the edge and the second is called the end-vertex. Every undirected graph can be represented by a directed one: we create the directed graph by replacing every undirected edge (u,v) by two directed edges (u,v) and (v,u). The underlying graph of a directed graph is the graph resulting from the directed graph if the direction of the edges is ignored. The outdegree d_out(v) of a vertex v is the number of edges that have v as their start-vertex; the indegree d_in(v) is defined similarly. A directed graph G(V,E) is said to have a root r if for every vertex v ∈ V there is a directed path which starts in r and ends in v. A directed graph is called a directed tree if it has a root and its underlying undirected graph is a tree.

The Breadth First Search (BFS) of a graph with respect to some vertex s is a labeling of its vertices by their distance from s. The algorithm in Figure 1.1 was suggested by Moore [21] to solve this problem. The complexity of the algorithm is O(|V| + |E|), which is the input size; therefore the algorithm is optimal.
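As a concrete sequential illustration of this labeling, here is a hypothetical Python sketch; the adjacency-list representation and the function name are ours, not from the thesis:

```python
from collections import deque

def bfs_labels(adj, s):
    """Label every vertex with its distance from s; unreachable vertices get infinity.

    adj maps each vertex to the list of its neighbors.
    """
    inf = float("inf")
    label = {v: inf for v in adj}
    label[s] = 0
    frontier = deque([s])          # labeled vertices whose neighbors may be unlabeled
    while frontier:
        v = frontier.popleft()
        for u in adj[v]:
            if label[u] == inf:    # an unlabeled vertex adjacent to a labeled one
                label[u] = label[v] + 1
                frontier.append(u)
    return label
```

Running it on a path 0-1-2-3 plus an isolated vertex 4 labels the path vertices 0, 1, 2, 3 and leaves vertex 4 at infinity, matching the final step of the labeling procedure.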
If we have the BFS labeling of a graph from some root s, we can get a spanning tree of the graph by simply collecting, for every vertex v with label i, one edge (u,v) from a vertex u with label i - 1.

Label vertex s with 0.
i := 1
while there is an unlabeled vertex adjacent to a labeled one do
    Label every unlabeled vertex which is adjacent to a labeled one with i.
    i := i + 1
od
Label every unlabeled vertex with ∞.

Figure 1.1: Breadth First Search

A minimal path from s to every vertex v in the graph can be composed from edges of this tree. We call all the vertices with label i the ith level of the BFS tree. The BFS algorithm can be used on an undirected graph to compute the connected component to which some vertex s belongs. This can be achieved by making a BFS from s: every vertex with label smaller than ∞ belongs to the same component as s. An optimal algorithm for connected components of an undirected graph can therefore be constructed based on the BFS algorithm.

1.2 Basic Parallel Algorithms

1.2.1 The PRAM Model

The PRAM model is a generalization of the RAM model. In the RAM model we have a random access memory and a processor that can read or write to every cell of the RAM in every given time unit. In the PRAM model we have a large number of processors that can access the cells of a common memory. There are three useful variations of this model.

1. EREW (Exclusive Read Exclusive Write) model. In this model only one processor can access a specific cell in every given time unit.

2. CREW (Concurrent Read Exclusive Write) model. In this model more than one processor can read the contents of a cell at any given time, but only one can write.

3. CRCW (Concurrent Read Concurrent Write) model. This model has two main submodels:

(a) Concurrent Write is permitted only if all the processors try to write the same value to a cell.

(b) In case of Concurrent Write, one of the processors always succeeds.
In case of conflict, it is arbitrary which processor will "win". In analysis of algorithms for this model, we assume that a worst case scenario always happens.

A mapping of every algorithm from CRCW to EREW can be done. This mapping increases the time complexity of the algorithm by a factor of log(n). A mapping of algorithms in PRAM to the hypercube or butterfly architectures can be done using randomized algorithms [30] [24]. These mappings increase the time complexity by a factor of log(n).

Chapter 2

Breadth First Search

2.1 Preliminaries

The single source Breadth First Search of a graph G = (V,E) with respect to a vertex s is an assignment of labels to the vertices of G such that the label of a vertex v is the distance from s to v. If we have these labels we can find a BFS tree of the graph using O(|V|) processors in O(1) time. Breadth first search is one of the basic paradigms for the development of efficient sequential algorithms. It can easily be seen that the sequential time of the problem is O(n + m), where n and m are the number of vertices and edges respectively [1]. On the other hand, parallel BFS seems to require a substantial number of processors. The classic naive parallel algorithm views G as an incidence matrix M over the semiring (N, min, +), where N denotes the natural numbers, and repeatedly squares M, returning M^n. This algorithm uses O(log(n) · log(log(n))) time and n^3/log(log(n)) processors in the CRCW model. A similar algorithm, in the EREW model, takes O(log^2(n)) time using n^3/log(n) processors. This processor count arises because all known algorithms for matrix multiplication over this semiring require n^3 operations.

If one is able to work over a ring instead of a semiring, there are many considerably more processor efficient algorithms (at least in the limit as n goes to infinity).
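For concreteness, the naive repeated-squaring algorithm over the semiring (N ∪ {∞}, min, +) can be sketched as follows. This is a small sequential illustration with names of our own choosing; the PRAM version spends about n^3 operations on each product.

```python
INF = float("inf")

def min_plus_square(m):
    """One (min,+) matrix squaring: doubles the path length the matrix accounts for."""
    n = len(m)
    return [[min(m[u][k] + m[k][v] for k in range(n)) for v in range(n)]
            for u in range(n)]

def all_pairs_distances(adjacency):
    """All-pairs distances via repeated (min,+) squarings of the weight matrix."""
    n = len(adjacency)
    m = [[0 if u == v else (1 if adjacency[u][v] else INF) for v in range(n)]
         for u in range(n)]
    covered = 1                    # m is exact for paths of at most `covered` edges
    while covered < n - 1:         # ceil(log2 n) squarings suffice
        m = min_plus_square(m)
        covered *= 2
    return m
```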
The simple O(n^3) sequential algorithm for matrix multiplication over a ring was first improved by Strassen [27] to achieve complexity of O(n^2.81). This has been progressively improved by the work of many researchers; for a good summary on the subject see [22]. The best known result for matrix multiplication is O(n^2.376), due to Coppersmith and Winograd [7]. All these algorithms can be implemented in parallel with optimal speedup in O(log(n)) time. In our paper we will denote by M(n) the number of processors that are needed to multiply two n x n matrices over the integers in O(log(n)) time. Thus the best known value for M(n) is O(n^2.376) [7].

These algorithms use the fact that there exists an inverse element with respect to the + operator. Since the semiring (N, min, +) does not have additive (for min) inverses, we cannot apply the algorithms directly. In this paper we show how to use these fast matrix multiplication algorithms in an indirect way: we replace the log(n) matrix multiplications over the (min,+) semiring by log(n) arithmetic matrix multiplications and log(n) vector-matrix multiplication operations. Matrix multiplication has an obvious lower bound of Ω(n^2); therefore our BFS algorithm cannot be as good as the sequential (O(|E|)) one.

Our basic algorithm uses the EREW PRAM model, in which concurrent reads or writes to the same memory location are not permitted. We assume that processors can add, multiply and compare log n bit integers in unit time. If we want to make BFS from more than one source we use the CRCW PRAM model, in which concurrent writes to the same memory location are permitted only if all the processors try to write the same value.

2.1.1 Notations

We let G = (V,E) be a directed graph with n vertices and m edges. We further assume that the vertex set is the set of integers {1, ..., n}.
The distance from vertex u to vertex v is the number of edges on the shortest directed path from u to v, and infinity (∞) if no such path exists. We denote the shortest distance by d(u,v). We shall use the semiring (N ∪ {∞}, min, +), where N denotes the natural numbers and infinity (∞) satisfies the following natural rules:

∞ + k = ∞
∞ · k = ∞
∞ min k = k.

We denote matrix multiplication over this semiring by *.

2.2 Computing the BFS

In this section we describe our algorithm to compute the BFS of a directed graph G(V,E) from some given vertex s. The algorithm is comprised of three parts. In the first part we compute ⌈log(n)⌉ n x n matrices; the nonzero entries of the ith matrix determine those pairs of vertices which are a distance of at most 2^i apart. The second part rewrites the information from the first part in terms of distances. In the third part we compute the actual distances.

We combine these parts into the procedure BFS below. The algorithm BFS takes a graph G(V,E) and a vertex s as input and computes a vector D of length n such that D[w] equals the distance from s to w.

program BFS(G(V,E), s)
    First-Approximation(G(V,E));
    Find-Distances(s);
end BFS

2.2.1 First approximation

In this section we compute an approximation of the distance between all pairs of vertices in G. It is well known that Boolean matrix multiplication can be reduced to integer multiplication; see [1] pages 242-243, [3]. In our model we assume that log(n) bit numbers can be multiplied in O(1) time. Thus we begin by viewing the incidence matrix of G as a Boolean matrix:

B_0[u,v] = 1 if (u,v) ∈ E or u = v
           0 otherwise.

Letting B_{i+1} = B_i^2, we can compute the first ⌈log n⌉ matrices by doubling-up over the Boolean semiring. This gives us the matrices B_1, ..., B_⌈log(n)⌉ satisfying:

B_i[u,v] = 1 if distance(u,v) ≤ 2^i
           0 otherwise.

There are ⌈log n⌉ matrix products computed over the Boolean semiring.
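The doubling computation of B_0, ..., B_⌈log n⌉ can be sketched with dense 0/1 matrices; this is a sequential stand-in for the parallel Boolean products, and the function name is ours:

```python
import math

def reachability_matrices(adjacency):
    """Return [B_0, ..., B_ceil(log2 n)] with B_i[u][v] == 1 iff distance(u,v) <= 2**i."""
    n = len(adjacency)
    b = [[1 if (u == v or adjacency[u][v]) else 0 for v in range(n)]
         for u in range(n)]
    mats = [b]
    for _ in range(math.ceil(math.log2(n))):
        # Boolean squaring: a path of <= 2**(i+1) edges splits into two of <= 2**i edges.
        b = [[1 if any(b[u][k] and b[k][v] for k in range(n)) else 0
              for v in range(n)] for u in range(n)]
        mats.append(b)
    return mats
```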
Each product requires at most O(log n) time and M(n) processors. Thus we need O(log^2 n) time and M(n) processors to compute these matrices.

We next substitute new values into the entries of each matrix B_i and view the new matrix over the semiring (N ∪ {∞}, min, +). This gives us the matrices M_1, ..., M_⌈log n⌉ as defined below:

M_i[u,v] = 0 if u = v
           2^i if distance(u,v) ≤ 2^i and u ≠ v
           ∞ otherwise.

Given the matrix B_i we can compute M_i in constant time using at most n^2 processors. Thus we have approximated the distance between any pair of vertices up to a factor of 2. In the next section we show how to efficiently combine this information to get the BFS of G.

2.2.2 Computing the distances

In the previous algorithms we found estimated distances between any two vertices in the graph. We have shown that M_i[u,v] ≠ ∞ iff the distance between u and v is at most 2^i. We can find the least i such that M_i[u,v] = 2^i, which implies that the distance between u and v is between 2^{i-1} + 1 and 2^i. In this section we improve these estimates for a given vertex s (the root of the BFS tree). We describe an algorithm to compute the distance of s from any other vertex. In the algorithm, at the beginning of each iteration i, the estimated distances contain an error of no more than 2^i. At the end of the iteration, the error is reduced to 2^{i-1}.

Suppose at the end of iteration i, the distance from s to w is estimated to be k · 2^i. During the (i-1)th iteration, we will determine whether d(s,w) lies in the range (2k - 2) · 2^{i-1} + 1 to (2k - 1) · 2^{i-1} or in the range (2k - 1) · 2^{i-1} + 1 to (2k) · 2^{i-1}. The former case holds iff there exists u such that (k - 2) · 2^i < d(s,u) ≤ (k - 1) · 2^i and d(u,w) ≤ 2^{i-1}. If so, at the end of the ith iteration the distance estimate of u from s must be (k - 1) · 2^i, and M_{i-1}[u,w] ≠ ∞.
Thus, if such a vertex u exists, the distance estimate from s to w is corrected to the lower half of the range; otherwise it is corrected to the upper half.

In the Find-Distances algorithm D is a row vector of length n, and * denotes matrix multiplication over the semiring (N ∪ {∞}, min, +). We can compute all the distances in the graph from some vertex s in the following way:

procedure Find-Distances
    for 1 ≤ u ≤ n in parallel do
        D[u] := 0 if u = s
                ∞ otherwise
    od
    for i := ⌈log(n)⌉ downto 0 do
        D := D * M_i;
    od
end Find-Distances

Lemma 2.2.1 The complexity of Find-Distances in the EREW model is O(log^2(n)) time using n^2/log(n) processors.

Proof: There are log(n) iterations, and each one of them can be performed in the EREW model in the following way:

1. Make n copies of the vector D.
2. Compute the n^2 + operations.
3. Compute n min operations, each one on a set of n numbers.

It is obvious that each operation can be done in log(n) time using n^2/log(n) processors. □

Corollary 2.2.2 The complexity of BFS from a single source in the EREW model is O(log^2(n)) time using M(n) processors.

Proof: In the first step we make log(n) matrix multiplications over the Boolean semiring, and every one of them takes O(log(n)) time using M(n) processors [1] [3]. The complexity of the second step is bounded by Lemma 2.2.1. □

We must still prove that the algorithm is correct. Correctness of Find-Distances can be proved by induction, by showing that (M_i * M_{i-1} * ... * M_0)[u,v] = distance(u,v) for distance(u,v) ≤ 2^{i+1} - 1. We will prove the correctness of the algorithm by showing that in every iteration, for every element of D, there is at most one possible improvement. Thus we reduce the time of the Find-Distances algorithm to log(n) using n^2 processors. This is important if we want to make a BFS of the graph from more than one source.
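A sequential sketch of Find-Distances: each D := D * M_i step is a (min,+) vector-matrix product. In this illustration (the names are ours, not the thesis's) the matrices M_i are assumed to be given exactly as defined above:

```python
INF = float("inf")

def min_plus_vecmat(vec, mat):
    """(min,+) vector-matrix product: result[v] = min over u of vec[u] + mat[u][v]."""
    n = len(vec)
    return [min(vec[u] + mat[u][v] for u in range(n)) for v in range(n)]

def find_distances(m_mats, s, n):
    """m_mats[i][u][v] = 0 if u == v, 2**i if 0 < distance(u,v) <= 2**i, else INF.

    Halves the error of the distance estimate from s on each pass.
    """
    d = [0 if u == s else INF for u in range(n)]
    for i in range(len(m_mats) - 1, -1, -1):   # i := ceil(log n) downto 0
        d = min_plus_vecmat(d, m_mats[i])
    return d
```

After the pass for index i, every entry equals 2^i · ⌈d(s,w)/2^i⌉, so the final pass (i = 0) leaves exact distances.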
We denote by D^i[w] the value of the entry D[w] after iteration i of Find-Distances, as defined in Section 2.2; d(u,v) is the distance from u to v.

Theorem 2.2.3 After every iteration i, for every w ∈ V, D^i[w] = 2^i · ⌈d(s,w)/2^i⌉.

Proof: We prove the Theorem by induction on i (i decreases). Initially we take the sth row of M_⌈log(n)⌉, and the claim follows from the definition of M_⌈log(n)⌉. We assume the Theorem is correct for i + 1 and prove it for i.

Let us first prove that D^i[w] ≥ 2^i · ⌈d(s,w)/2^i⌉. D^i[w] is equal to some D^{i+1}[u] + M_i[u,w]. By the definition of M_i, M_i[u,w] ≥ 2^i · ⌈d(u,w)/2^i⌉. Therefore D^i[w] = 2^{i+1} · ⌈d(s,u)/2^{i+1}⌉ + M_i[u,w] ≥ 2^i · ⌈d(s,u)/2^i⌉ + 2^i · ⌈d(u,w)/2^i⌉ ≥ 2^i · ⌈d(s,w)/2^i⌉.

Now we prove that D^i[w] ≤ 2^i · ⌈d(s,w)/2^i⌉. If d(s,w) = ∞ we are done; thus we may assume that d(s,w) < ∞. We set d(s,w) = q · 2^{i+1} + r where 0 ≤ r < 2^{i+1}. We check two cases. The first case is when r > 2^i or r = 0. In this case D^i[w] ≤ D^{i+1}[w] = 2^{i+1} · ⌈d(s,w)/2^{i+1}⌉ = 2^i · ⌈d(s,w)/2^i⌉. The second case is when 0 < r ≤ 2^i. In this case there must exist a vertex u such that d(s,u) = q · 2^{i+1} and d(u,w) = r. By the induction hypothesis D^{i+1}[u] = q · 2^{i+1}. By the definition of M_i, and the fact that d(u,w) = r ≤ 2^i, it follows that M_i[u,w] = 2^i. Therefore D^i[w] is at most (2q + 1) · 2^i = 2^i · ⌈d(s,w)/2^i⌉. □

Corollary 2.2.4 After the algorithm, d(s,w) = D[w] for every w ∈ V.

Proof: Substitute i = 0 in Theorem 2.2.3 to get that D^0[w] = d(s,w). □

Lemma 2.2.5 For every w ∈ V and every iteration i, D^{i+1}[w] = D^i[w] or D^{i+1}[w] - D^i[w] = 2^i.

Proof: By Theorem 2.2.3, D^{i+1}[w] = 2^{i+1} · ⌈d(s,w)/2^{i+1}⌉ and D^i[w] = 2^i · ⌈d(s,w)/2^i⌉. If d(s,w) modulo 2^{i+1} is greater than 2^i or equal to 0, then D^i[w] = D^{i+1}[w]; otherwise D^i[w] = D^{i+1}[w] - 2^i. □

Corollary 2.2.6 The n min operations in every matrix multiplication can be computed in O(1) time in the CRCW model using n^2 processors.

Proof: The proof follows immediately from Lemma 2.2.5. □

Theorem 2.2.7 The algorithm Find-Distances can be computed in the CRCW model in O(log(n)) time using n^2 processors.
Proof: In each iteration we compute n^2 + operations and n min operations. By the previous Corollary, the claim follows. □

Note that the Theorem implies that BFS can be performed simultaneously from several vertices. This result is summarized in the following Corollary, obtained by applying Find-Distances from each source vertex.

Corollary 2.2.8 In the CRCW model we can perform BFS from M(n)/n^2 vertices in parallel in O(log^2(n)) time using M(n) processors.

2.3 Conclusions and Open Problems

We have shown a reduction of the BFS problem to the matrix multiplication problem, which reduces the processor count significantly without increasing the time by much. Two related open questions remain:

1. Can we generalize the algorithm to the case when the edges have polynomially bounded integer weights?

2. Can we generalize the algorithm to solve the all-pairs shortest path problem?

Chapter 3

An Optimal Connected Components Algorithm

3.1 Introduction

In this paper, the problem of finding the connected components in an undirected graph G = (V,E) is considered. There are several well known and fast sequential algorithms, like Depth First Search (DFS) and Breadth First Search (BFS), for finding connected components. A solution to the problem on a parallel model will be presented in this paper. The parallel model that we use is the Concurrent-Read Concurrent-Write (CRCW) Parallel Random Access Machine (PRAM). It is a synchronized parallel-computation model where all the processors can read and write into a common memory. In the case of concurrent writes into the same memory location, it is assumed that one of the processors succeeds arbitrarily. The algorithm has an expected running time of O(log(n)) using (m + n)/log(n) processors, where n = |V| is the number of vertices and m = |E| is the number of edges. The probability of the algorithm running longer than expected is at most (1/2)^(k·log(n)) for some constant k.
The algorithm is as fast as the best-known algorithms [26], [25], and optimal in the sense that P · T is equal to the complexity of the sequential algorithm. Shiloach and Vishkin [26] conjecture that the barrier of log(n) cannot be surpassed by any polynomial number of processors. The algorithm presented here can also be used to find a spanning forest, which most algorithms for two-connectivity, three-connectivity, and four-connectivity need [28], [18].

Discussion of the solution to the problem of finding the connected components in an undirected graph involves the presentation of three successively harder cases: easy graph, dense graph, and sparse graph. The first case is called the easy case, since there is a processor for every vertex and a processor for every edge. The solution for this case was given by Shiloach and Vishkin [26]. Their algorithm is presented in Section 3.3, Figure 3.1. The second case is called the dense graph case, where there is a processor for every vertex and a processor for every log(n) edges. This problem can be reduced in O(log(n)) time to the easy case. The reduction is given in Section 3.4, Figure 3.2. The third case is called the sparse graph case, where there is a processor for every log(n) vertices and for every log(n) edges. This problem can be reduced in O(log(n)) time to the dense graph problem, by repeatedly finding partial connected components of the graph until the number of connected components is equal to the number of processors. The reduction is given in Section 3.5, Figure 3.5. Then we have a dense graph case that can be reduced to the easy case. All these algorithms are randomized; that is, each processor has access to a random-number generator which returns random numbers of log(log(n)) bits in unit time. The main idea in the reduction algorithms is to find a way to separate vertices with a lot of edges, called Extrovert vertices, from those with a small number of edges, called Introvert vertices. We cannot separate them by counting the edges, because that would take too much time. Therefore a statistical
The main idea in the reduction algorithms is to find a way to separate vertices with many edges, called Extrovert vertices, from those with a small number of edges, called Introvert vertices. We cannot separate them by counting the edges, because that would take too much time. Therefore a statistical test is used: one takes a sample of edges and looks at the vertices which they hit. An Extrovert vertex is more likely to be chosen this way than an Introvert vertex, which has few edges.

3.2 Preliminaries

3.2.1 Previous Results

In 1979 Hirschberg, Chandra and Sarwate [4] presented an O(log^2(n)) parallel connectivity algorithm.¹ In the same year Wyllie [31] presented another O(log^2(n)) connectivity algorithm. Shiloach and Vishkin [26] improved the result to O(log(n)) time and O(m + n) processors in 1982. In 1984, Reif [25] found a simple probabilistic algorithm with the same complexity as the Shiloach and Vishkin algorithm. In 1986 Cole and Vishkin presented a deterministic algorithm of O(log(n)) time using O((m + n) · α(m, n) / log(n)) processors, where α(m, n) is an inverse Ackermann function.

All these algorithms have the same problem in the first iterations: although the number of vertices is reduced in each iteration, the order of the number of edges may remain the same. One exception can be found in the case of a planar graph, where the number of edges is bounded by three times the number of vertices. Using this fact, Hagerup [12] gave an optimal deterministic connectivity algorithm for planar graphs. However, the space complexity of his algorithm is not linear.

¹ A preliminary version of their algorithm was presented in 1976 in the Symposium on Theory of Computing.

In 1986, Gazit [10] presented an optimal randomized connectivity algorithm. This chapter contains a new version of this algorithm, with improved probability: the probability of failure is exponentially inverse in the input size.
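The statistical test described above can be sketched as follows. The sample size and the use of sampling with repetition are free parameters of the sketch, not values from the thesis:

```python
import random

def sample_hit_test(edges, sample_size, rng):
    """Mark every vertex hit by at least one sampled edge. A high-degree
    (Extrovert) vertex is likely to be marked; a low-degree (Introvert)
    vertex is likely to be left out."""
    sample = [rng.choice(edges) for _ in range(sample_size)]  # with repetition
    hit = set()
    for u, v in sample:
        hit.add(u)
        hit.add(v)
    return hit
```

On a star graph whose center belongs to every edge, the center is marked by any nonempty sample, while each leaf is hit only with small probability.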
3.2.2 Definitions

An undirected graph G = (V, E) consists of a set of vertices V of size n and a set of edges E of size m. Each edge is an unordered pair (v, w) of distinct vertices v and w. In this chapter it is assumed that there may be more than one edge between two vertices. The edge (v, w) hits vertices v and w. The degree of a vertex v is the number of edges that hit v. A path joining v_1 and v_k in G is a sequence of vertices v_1, v_2, ..., v_k such that (v_i, v_{i+1}) ∈ E. A connected subgraph of a graph is a subset U ⊆ V such that for every pair of vertices v, u ∈ U there is a path joining v and u. A connected component is a connected subgraph that is not contained in any other connected subgraph.

The following notations are used:

1. For convenience, define α = √(2/3).
2. P = number of processors.
3. n = number of vertices.
4. m = number of edges.

Given P processors, the following definitions are made:

• An easy graph is a graph where n + m < P.
• A dense graph is a graph where n < P.
• A sparse graph is a graph where n > P.

The main data structure is a reverse-rooted tree called a super-vertex. The root of every super-vertex tree is a vertex that points to itself. Every other vertex points to its parent. For each vertex, its root is defined as the root of the super-vertex tree to which it belongs, which we denote by root(v). If a tree has height 1 or 0, the super-vertex is a star. Super-vertices are viewed as vertex-disjoint connected subgraphs. At any stage of the algorithm the graph is a forest of reverse-rooted trees. An edge (v, u) is a live edge if u and v belong to different super-vertices. There can be more than one edge between two super-vertices. A super-vertex is live if there is a live edge common to it. The degree of a super-vertex is the number of live edges that hit its vertices.

3.2.3 Data Structure

The main data structure is a reverse-rooted forest of vertices.
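The parent-pointer forest just described can be sketched as a small class; this is a sequential illustration (the algorithm performs the pointer jump on all vertices in parallel), with names chosen here for clarity:

```python
class SuperVertexForest:
    """A reverse-rooted forest kept as parent pointers: roots point to
    themselves, every other vertex points to its parent."""

    def __init__(self, n):
        self.parent = list(range(n))  # every vertex starts as its own root

    def root(self, v):
        while self.parent[v] != v:
            v = self.parent[v]
        return v

    def is_star(self, v):
        # a star has height 0 or 1: the grandparent equals the parent
        p = self.parent[v]
        return self.parent[p] == p

    def jump(self):
        # one round of pointer jumping: parent(v) := parent(parent(v)),
        # applied to all vertices "simultaneously"
        self.parent = [self.parent[self.parent[v]]
                       for v in range(len(self.parent))]
```

Repeated `jump()` rounds roughly halve the height of every tree, which is why O(log(n)) rounds turn any forest into stars.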
The union of two super-vertices means that one of the two roots becomes the descendant of the other. Each vertex v has a pointer parent(v) toward its super-vertex. Every root points to itself. Every vertex v has a memory cell e_v to store an edge, and a memory cell flag_v that can hold an integer of size O(log(log(n))). These cells are used to store some variables during the execution of the algorithm.

3.3 Easy Case

An algorithm for finding connected components is first described for the case when there is a processor for every edge and for every vertex. This algorithm was given by Shiloach and Vishkin [26]; an outline is presented in Figure 3.1.

procedure Easy-Case(V, E)
  for all v ∈ V in parallel do parent(v) := v od
  while there is a live edge in the graph or some tree is not a star do
    for all v ∈ V in parallel do parent(v) := parent(parent(v)) od
    for every edge (u, v) using concurrent write in parallel do
      if parent(parent(v)) = parent(v) and parent(parent(u)) = parent(u)
        then if parent(u) > parent(v) then parent(parent(u)) := parent(v) fi
      fi
      if parent(u) = parent(parent(u)) and parent(u) did not get new links
         in the previous two operations
        then parent(parent(u)) := parent(v)
      fi
    od
    for all v ∈ V in parallel do parent(v) := parent(parent(v)) od
  od
end Easy-Case

Figure 3.1: Finding Connected Components: Easy Case

In this algorithm, for every edge (u, v) there are two directed copies: (u, v) and (v, u). Some Lemmas from [26] that will be needed later are presented here; the proofs can be found in the original paper.

Lemma 3.3.1 (Lemma 3.9 in [26]) If a tree T has not been changed during an entire iteration, it remains unchanged until the end.

Lemma 3.3.2 (Lemma 3.12 in [26]) If v is the root of a tree that changed in the i-th iteration, then the number of vertices in its super-vertex at the end of the i-th iteration is at least (3/2)^(i-1).

Theorem 3.3.3 (Main Theorem of [26])

1. The algorithm terminates after log_{3/2}(n) + 2 iterations.

2. parent(u) = parent(v) if and only if u and v are in the same connected component.

3.4 Dense-to-Easy Reduction

3.4.1 Informal Description

A dense graph is a graph where there is a processor for every vertex and a processor for every log(n) edges. The method presented here repeatedly chooses a random sample of edges. A vertex with a higher degree has a better probability that at least one of its edges is chosen than a vertex with a lower degree. It is possible that a vertex has a live edge that was not chosen in some iteration, but as the number of its live edges increases, so does the probability that at least one of them is chosen. Therefore, the probability of a vertex with many live edges (a dense vertex) being unioned is larger than that of a vertex with few live edges (a sparse vertex). The goal is to union dense vertices among themselves and leave out the sparse ones. After the algorithm is completed, all the remaining live edges are connected to one of the sparse vertices. It will be shown later that the expected number of live edges in the graph after the algorithm has been executed is O(n), so that the easy-case algorithm can be used.

The dense vertices can be "forced" to mate among themselves and not with the sparse vertices. As shown above, after selecting an edge and a vertex, the probability that the edge hits this vertex is higher if the vertex has many edges. If one randomly picks a large enough sample of edges and some vertex v, and none of these edges hit the vertex, then one can assume v to be sparse with high probability. For each edge in the sample an attempt is made to union its end-vertices; another (smaller) sample is chosen, and the process is repeated. Note that several edges in the sample may hit the same vertex, but only one of them is used.
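One round of the sampled hooking described above can be sketched as a sequential simulation. The sample size is a free parameter here, and conflicting writes are resolved in loop order, mirroring the arbitrary-winner CRCW assumption:

```python
import random

def sampled_hook_round(parent, edges, sample_size, rng):
    """Pick a random sample of edges; for each sampled edge whose endpoints
    lie in different stars, hook the larger root onto the smaller one
    (mirroring the parent(u) > parent(v) rule of Figure 3.1)."""
    sample = [rng.choice(edges) for _ in range(sample_size)]
    for u, v in sample:
        ru, rv = parent[u], parent[v]
        # only roots of stars hook, and only along live edges
        if parent[ru] == ru and parent[rv] == rv and ru != rv:
            if ru > rv:
                parent[ru] = rv
            else:
                parent[rv] = ru
    return parent
```

After a round, trees may have height 2; a pointer-jumping pass (as in Figure 3.1) restores stars before the next, smaller sample is drawn.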
The size of the sample we choose decreases geometrically (to reduce time complexity), so the probability of a super-vertex not getting an edge (and therefore being classified as sparse) increases. However, it will be shown that the expected number of live edges that hit a vertex classified as sparse is inversely proportional to the sample size.

The number of dense super-vertices also decreases geometrically. By taking the samples such that the sample size decreases more slowly than the number of super-vertices, we get two desirable results:

• For each iteration, consider the set of all super-vertices which were classified as sparse during this iteration and the expected total number of live edges that hit this set; it will be shown that this number decreases geometrically.

• The time complexity of each step is proportional to the sample size. Therefore the time complexity of each step decreases geometrically.

The first element in a geometrically decreasing sequence dominates the sum, and this element is O(m/P).

The algorithm presented in Figure 3.2 reduces a dense graph to the easy case. After execution of the algorithm, every vertex in the graph points to the root of the connected subgraph to which it belongs; all edges are between the roots; and the expected number of live edges is O(n). Therefore the easy-case algorithm can be applied. The reduction algorithm always halts, but may take longer than expected. The time complexity is O(log(n)) when the number of vertices is O(P) and the number of edges is O(P · log(n)). The probability of failure is bounded in Lemma 3.4.7.

3.4.2 Analysis of the Dense-to-Easy Reduction

Definitions:

1. An Extrovert root r is a root with flag_r = 1. At the beginning of the dense-to-easy procedure, every root is an Extrovert root.

2. An Extrovert vertex v is a vertex that belongs to a tree whose root is an Extrovert root.

3. An Introvert vertex v is a vertex that belongs to a star and parent(v)_flag = 0.
At the end of the dense-to-easy procedure all the vertices are Introvert.

4. An Extrovert edge is a live edge which connects two Extrovert vertices.

5. An Extrovert edge becomes an Introvert edge if one of its end super-vertices becomes Introvert.

6. n_i is a random variable representing the number of Extrovert super-vertices after iteration i.

Lemma 3.4.1 The while loop of dense-to-easy will finish after at most ⌊log_{3/2}(n)⌋ + 2 iterations.

Proof: The proof is the same as for Lemma 3.3.1. The Introvert super-vertices are stars that are put aside, and the algorithm treats the Extrovert super-vertices the same way as the Shiloach and Vishkin algorithm does. □

Let S be a subset of the Extrovert stars. Define deg(S) as the number of Extrovert edges common to a super-vertex in S; that is,

deg(S) = |{(u, v) | (u, v) ∈ E and u, v Extrovert and (u ∈ S or v ∈ S)}|.

procedure dense-to-easy(G(V, E))
  for all v ∈ V in parallel do flag_v := 1 od
  for all v ∈ V in parallel do parent(v) := v od
  i := -1
  while there is a root r such that flag_r = 1 do
    i := i + 1
    for all v ∈ V in parallel do parent(v) := parent(parent(v)) od
    Pick a sample of edges of size 2 · max(P, ⌈m · α^i⌉)
    for every edge (u, v) in the sample using concurrent write in parallel do
      if parent(parent(v)) = parent(v) and parent(parent(u)) = parent(u)
         and flag_parent(u) = flag_parent(v) = 1
        then if parent(u) > parent(v) then parent(parent(u)) := parent(v) fi
      fi
      if parent(u) = parent(parent(u)) and parent(u) did not get new links
         in the previous two operations
        then parent(parent(u)) := parent(v)
      fi
    od
    for all v ∈ V in parallel do parent(v) := parent(parent(v)) od
    for every root r in parallel do
      if the tree rooted at r was not changed during the iteration
        then flag_r := 0
      fi
    od
  od
  for every edge (u, v) in the graph in parallel do
    if (u, v) is a live edge then replace it by an edge (root(u), root(v)) fi
  od
end dense-to-easy

Figure 3.2: Reducing a Dense Graph to an Easy Graph

Lemma 3.4.2 Given a subset S such that deg(S) = x, the probability that none of these x edges is chosen in iteration i (so that all the vertices of S and its x Extrovert edges are moved to Introvert) is less than or equal to e^(-x·y/(2m)), where y = 2 · max(P, ⌈m · α^i⌉) is the size of the sample of edges chosen in procedure dense-to-easy.

Proof: The probability of an edge being chosen by one processor is 1/(2m) (every edge is kept twice). The probability of an edge not being chosen for the sample in Step 3 is bounded by (1 - 1/(2m))^y, where y is the sample size. For any z ≥ x edges, the probability that none of them is chosen is at most

(1 - z/(2m))^y ≤ e^(-z·y/(2m)) ≤ e^(-x·y/(2m)). □

Lemma 3.4.3 For a given x, the probability that at least x edges become Introvert in iteration i is bounded by e^(-x·y/(2m)) · 2^(n_i), where n_i is the number of Extrovert super-vertices after iteration i.

Proof: A subset of edges becomes Introvert only if a subset of super-vertices S, to which these edges belong, becomes Introvert. The number of such subsets is bounded by 2^(n_i). The probability of each of these subsets going to Introvert is given in Lemma 3.4.2. Only subsets S with at least x Extrovert edges, of which none were chosen, are considered. Therefore an upper bound on the result is e^(-x·y/(2m)) · 2^(n_i). □

Lemma 3.4.4 The number of Extrovert super-vertices after every iteration i is at most n · (2/3)^i.

Proof: The proof follows immediately from Lemma 3.3.2. □

Lemma 3.4.5 For every i, the probability that more than n · α^i Extrovert edges become Introvert during iteration i is bounded by (2/e)^(n·α^(2i)).

Proof: Set x = n · α^i, y = 2 · m · α^i, and n_i = n · α^(2i) (an upper limit); then, by Lemma 3.4.3, the probability is e^(-n·α^(2i)) · 2^(n·α^(2i)) = (2/e)^(n·α^(2i)). □

Define γ_n = ⌈3 · log_{1/α}(log(n))⌉; note that α^(γ_n) ≤ log^(-3)(n).

Lemma 3.4.6 The probability that more than n/log^2(n) Extrovert edges become Introvert during iteration i > γ_n is bounded by (2/e)^(log^3(n)).
Proof: Set x = n/log^2(n), y = 2 · P, and n_i = n · α^(2γ_n) = n/log^6(n) (an upper limit). Since the graph is dense, m ≤ P · log(n), so x · y/(2m) ≥ x/log(n) = n/log^3(n); by substitution in the formula of Lemma 3.4.3, the result is obtained. □

Lemma 3.4.7 With probability greater than or equal to 1 - (2 + log_{3/2}(n)) · (2/e)^(log^3(n)), the number of edges that become Introvert is less than 6.5 · n.

Proof: Assume that the results expected in Lemmas 3.4.5 and 3.4.6 are obtained (the probability of this will be computed later on). Lemma 3.4.5 is used to bound the number of edges that become Introvert in the first iterations, and Lemma 3.4.6 is used to bound this number for the last iterations. The sum is bounded by

Σ_{i=0}^{γ_n} n·α^i + Σ_{i=γ_n+1}^{⌊log_{3/2}(n)⌋+2} n/log^2(n).

For a large enough n, the sum can be bounded by

n + n · Σ_{i=0}^{∞} α^i = n + n/(1 - √(2/3)) < 6.5 · n.

An upper limit on the probability of failing will be given. Failure may arise if:

1. The number of edges that become Introvert in any of the first γ_n iterations is more than n · α^i. By Lemma 3.4.5 the probability is bounded by (2/e)^(log^3(n)).

2. The number of edges that become Introvert in some iteration after the first γ_n iterations is more than n/log^2(n). By Lemma 3.4.6, the probability is bounded by (2/e)^(log^3(n)).

Adding up the probabilities for all iterations yields the probability claimed. □

Corollary 3.4.8 The expected number of live edges in the graph after the algorithm has been executed is O(n).

Proof: The proof follows immediately from Lemma 3.4.7. □

Theorem 3.4.9 The time complexity of the algorithm is O(log(n)).

Proof: The complexity of each iteration is O(1 + (m/P) · α^i). By Lemma 3.4.1 the number of iterations is O(log(n)). Therefore the complexity of the for loop is

O(Σ_{i=0}^{O(log(n))} (1 + (m/P) · α^i)) = O(log(n)).
The complexity of the easy-case connectivity algorithm is O(((number of live edges)/P) · log(n)). The number of live edges, by Corollary 3.4.8, is O(n); and because the graph is dense, O(n) ≤ O(P). □

3.5 Sparse-to-Dense Reduction

3.5.1 Informal Description

Since a sparse graph has fewer than n processors, there cannot be a processor for each vertex. Therefore the method used for the dense case cannot be used here; since the dense graph reduction procedure attempts to find a sparse subgraph, it is certainly not applicable to the sparse case. The solution is to partition the graph into a sparse part and a dense part. The procedure is then repeated 2 · log(log(n)) times, using the resulting sparse subgraph of the previous iteration. The dense subgraphs are collected together. After 2 · log(log(n)) iterations we have two subgraphs: one of them is the union of all the dense subgraphs collected so far, and the other is the resulting sparse subgraph. The size of each of the two resulting subgraphs is O(n/log(n)). Now that there is a processor per vertex, the dense-to-easy reduction can be used on this graph.

The application of deterministic-mate to each vertex in the sparse subgraph during each application of Partition makes the sparse subgraph shrink: the number of vertices in the sparse subgraph decreases by a constant factor (2/3) in each iteration. The expected number of vertices in the resulting sparse subgraph will therefore be O(n · (2/3)^(2·log(log(n)))) < n/log(n). The expected number of vertices added to the dense subgraph after the i-th application of Partition is at most O(n_i/log(n)), where n_i is the number of sparse vertices in iteration i.
This number also decreases geometrically, so that after 2 · log(log(n)) applications of Partition on the sparse subgraph, the number of dense vertices is at most Σ_i O((2/3)^i · n/log(n)) = O(n/log(n)).

3.5.2 Deterministic Mating

The problem with the algorithm of Shiloach and Vishkin [26] is that it can create large trees with a height of O(n). Cole and Vishkin [5] solved the problem by using parallel union-find, a method that increases the complexity of their connected components algorithm by an inverse Ackermann function. Reif [25] solved the problem by using a randomized procedure called Random-Mate that creates trees with height one. His solution is simple, but has only probability 1/2 of creating a small number of trees. The ideal procedure is one that creates trees of height one, such that most of the vertices are in those trees. A similar idea was proposed by Hagerup [12]. The procedure is based on the deterministic list ranking of Anderson and Miller [2]; this list-ranking algorithm is similar to the algorithm of Cole and Vishkin [5]. An outline of the procedure, called Deterministic Mating, is given in Figure 3.3.

Lemma 3.5.1 The number of trees composed of vertices which were removed in the second step (in-degree 0, in-degree 2 or more, or parent of a vertex of in-degree 0) is at most 1/3 of the number of vertices.

Proof: A simple counting argument is all that is necessary: the number of vertices with in-degree 2 or more in a tree is bounded by the number of vertices with in-degree 0. □

Lemma 3.5.2 Every vertex v that was scanned in the first k iterations belongs to a tree of height 1.

Proof: If v is removed, it becomes a root. If it is not removed, it becomes either a leaf or part of a list. The size of every list is bounded by log(log(n)); therefore every list can be collected in at most log(log(n)) steps. □

procedure mate(G(V, E), k)
/* Every vertex v has an edge e_v = (v, u). */
/* Assume that every processor is responsible for at most k vertices, ordered in an array. */
  for every vertex v in parallel do
    for every vertex v with edge e_v = (v, u) do next(v) := u od
    if v has either
      1. in-degree 0,
      2. in-degree 2 or more, or
      3. is a parent of a vertex of in-degree 0
    then remove v from the graph and the array fi
    if v has in-degree 0 then parent(v) := next(v) fi
  od
  for i := 1 to k + log(log(n)) do
    every non-busy processor takes the next vertex v from its array
    if next(v) was not taken in this step
      then parent(v) := parent(next(v)); remove next(v).
      else break the list of vertices into sublists of a size between 2 and
           log(log(n)), reverse the links, and have the processor that is
           responsible for the last element in the sublist collect them
           (parent(v) := last in the reversed list).
    fi
  od
end mate

Figure 3.3: Deterministic Mating

Lemma 3.5.3 For every vertex that is not scanned in the first k iterations, there exists a vertex which is part of a list (not the first one) that was compressed by its processor.

Proof: The only reason for a processor not to take care of a vertex in its array is that it is compressing a list. □

Theorem 3.5.4 The number of trees (stars) in the resulting graph is at most (2/3) · n.

Proof: The proof follows immediately from Lemma 3.5.1 and Lemma 3.5.3. □

Theorem 3.5.5 The complexity of the mating algorithm is O(k + log(log(n))).

Proof: The for loop is executed k + log(log(n)) times, and each iteration takes O(1). □

3.5.3 Partition of the Vertices

The algorithm given in Figure 3.4 takes a graph G(V_i, E_i) and a number n as input and creates two sets of super-vertices. Each super-vertex is a one-level reverse-rooted tree, so every edge hits two roots. The two sets of super-vertices are:

1. Extrovert - a set of Extrovert super-vertices which got a live edge in each step of the algorithm.

2. Introvert - all the other super-vertices.
An Extrovert super-vertex got an edge every time we picked a sample. Deterministic-mate is run 2 · log(log(n)) times, so the number of these super-vertices is at most n/log(n).

In the previous section it was proven that the number of live edges having one end-point in a super-vertex which belongs to Introvert is at most 6.5 · n with high probability (see Lemma 3.4.7). Since the number of super-vertices which are Introvert decreases, the expected number of edges between vertices in Introvert decreases as well.

An array of length m containing all the live edges and some dead edges is assumed. By keeping the edges in blocks of equal size and assigning a processor to each block, a sample of the edges (for the partition algorithm) can be picked in time proportional to the sample size divided by P. This way, a sample can be chosen without repetition. Note that in part of the analysis choosing with repetition is assumed; since this only decreases the probability of an edge being chosen (if one edge is chosen twice, another will not be selected), an upper bound on the number of edges between vertices in Introvert results.

By taking a geometrically decreasing sample of edges, the probability of a super-vertex not having an edge in the sample increases geometrically. However, the number of Extrovert super-vertices decreases geometrically, and the sample size decreases more slowly than the number of super-vertices. Therefore, while the expected number of live edges connected to vertices which go to Introvert decreases geometrically, the complexity (which depends on the sample size) also decreases geometrically. The sum of a geometric sequence in which a_i = q · a_{i-1} and q < 1 is dominated by the first term. Therefore both the complexity and the number of live edges connected to vertices in Introvert are small enough.

After loop j there are trees with a height of at most ⌈1 + log(log(n))⌉.
After stars are created from these trees, all the live edges in the graph are transmitted to the roots.

procedure Partition(V_i: set-of-vertices; E_i: set-of-edges; n: integer)
    returns (Extrovert, Introvert)
  for every vertex v ∈ V_i in parallel do flag_v := 0; mark v as Introvert od
  for j := 0 to ⌈2 · log(log(n))⌉ do /* loop j */
    Pick a sample of edges of size max(P, ⌈|E_i| · α^j⌉)
    if (v, u) is a live edge in the sample and flag_root(u) = flag_root(v) = j
      then try to write the edge to e_root(v) and e_root(u);
           flag_root(u) := flag_root(v) := j + 1
    fi
    make deterministic-mate(G(V, E), ⌈|E_i| · α^j⌉) between all the vertices v
      such that e_v was updated in the previous step.
  od /* loop j */
  for j := ⌈2 · log(log(n))⌉ downto 0 do
    for every v ∈ V_i in parallel do
      if v was mated in the j-th iteration of the above loop
        then parent(v) := parent(parent(v))
      fi
    od
  od
  for every edge (u, v) in the graph in parallel do
    replace edge (u, v) by an edge (root(u), root(v))
  od
  for every processor p in parallel do
    for all v such that p set flag_v to ⌈2 · log(log(n))⌉ + 1, mark v as Extrovert.
  od
  return (Extrovert, Introvert)
end Partition

Figure 3.4: Algorithm for Partition of the Vertices

3.5.4 Analysis of the Partition Algorithm

Definitions:

1. An Extrovert vertex is a vertex with flag > j at the end of the j-th iteration. At the beginning of the algorithm, every vertex is an Extrovert vertex.

2. An Introvert vertex is a vertex with flag ≤ j at the end of the j-th iteration.

3. An Extrovert edge is a live edge which connects two Extrovert vertices.

4. An Extrovert edge becomes an Introvert edge if one of its end super-vertices has become an Introvert super-vertex.

5. n_i = |V_i| and m_i = |E_i| are the sizes of the input vertex and edge sets, respectively.

Lemma 3.5.6 The number of Extrovert super-vertices in iteration j is at most n_i · α^(2j), where α = √(2/3).
Proof: Every Extrovert root has an edge in the sample. The proof follows immediately from Theorem 3.5.4. □

Theorem 3.5.7 With probability greater than or equal to 1 - (2 · log(log(n))) · (2/e)^(log^3(n)), the number of edges that become Introvert is less than 6.5 · |V_i|.

Proof: The proof is similar to the proof of Lemma 3.4.5. The number of iterations is smaller, however, and therefore the size of Introvert is even smaller. □

Theorem 3.5.8 The complexity of the partition algorithm is O(m_i/P + (log(log(n)))^2).

Proof: The complexity of iteration j is bounded by the product of the sample size per processor and the maximal height of the super-vertex trees. The sample size per processor is α^j · m_i/P, and the height of every tree is at most j; therefore the work executed in iteration j is bounded by j · α^j · m_i/P, where α = √(2/3). The time complexity of the partition is

O(Σ_{j=1}^{∞} j · α^j · m_i/P) = O((α/(1 - α)^2) · m_i/P) = O(m_i/P).

If m_i/P is small, then (log(log(n)))^2 is added to the complexity, since each of the 2 · log(log(n)) iterations takes at least the O(log(log(n))) time of deterministic-mate. Therefore the complexity is O(m_i/P + (log(log(n)))^2). □

Theorem 3.5.9 After performing the algorithm, the size of Extrovert is at most n_i/log(n).

Proof: The proof follows immediately from Lemma 3.5.6. □

3.5.5 Algorithm that Reduces the Number of Vertices

The algorithm presented in Figure 3.5 reduces the number of vertices in the graph from n to O(n/log(n)). The expected time complexity of the algorithm is O((n + m)/P + (log(log(n)))^2). Note that the goal here is to create a dense graph from the sparse one. After the i-th iteration the number of Extrovert super-vertices is O(n_i/log(n)) (see Theorem 3.5.9 with j = 2 · log(log(n)), the number of iterations in Partition). Since the size of Introvert decreases geometrically, there are O(n/log(n)) super-vertices in the Extrovert-Set after performing the loop.
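The geometric bookkeeping above can be checked numerically. This sketch assumes base-2 logarithms (an illustrative reading of the text) and verifies, for a concrete n, that n · (2/3)^(2·log(log(n))) falls below n/log(n) and that the geometrically decreasing Extrovert contributions sum to O(n/log(n)):

```python
import math

def sparse_survivors(n):
    """Upper bound on the sparse-subgraph size after 2*log(log(n))
    shrinkings by a factor of 2/3."""
    iters = 2 * math.log2(math.log2(n))
    return n * (2.0 / 3.0) ** iters

def extrovert_total(n):
    """Sum of the geometrically decreasing Extrovert contributions,
    each at most (2/3)^i * n/log(n)."""
    return sum((2.0 / 3.0) ** i * n / math.log2(n) for i in range(64))

n = 2 ** 20
assert sparse_survivors(n) < n / math.log2(n)
# the geometric sum is dominated by its first term, within a constant:
assert extrovert_total(n) < 3 * (n / math.log2(n))
```

For n = 2^20 the surviving sparse part is about 0.03·n, already below n/log(n) = 0.05·n, and the Extrovert total is within the factor Σ(2/3)^i = 3 of its first term.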
procedure Sparse-to-Dense(G(V, E))
  Extrovert-Set := ∅; V_{-1} := V; E_{-1} := E
  for i := 0 to ⌈2 · log(log(n))⌉ do
    (Extrovert, Introvert) := Partition(V_{i-1}, E_{i-1}, n)
    Extrovert-Set := Extrovert-Set ∪ Extrovert
    V_i := vertices in Introvert which are not isolated
    E_i := {(v, u) | (v, u) is a live edge, and u, v ∈ Introvert}
    Redistribute V_i and E_i between the processors.
  od
  for every root v ∈ Introvert in parallel do
    if there exists an Extrovert vertex u common to v
      then parent(v) := u; replace every edge (v, w) by edge (u, w).
    fi
  od
  V := Introvert ∪ Extrovert-Set; E := all live edges.
  return G(V, E)
end Sparse-to-Dense

Figure 3.5: Reducing the Number of Vertices

3.5.6 Analysis of the Algorithm that Reduces the Number of Vertices

Lemma 3.5.10 After iteration i the number of live super-vertices which are not isolated in Introvert (that is, which have some live edge to some other super-vertex in Introvert) is bounded by n · α^(2i), where α = √(2/3). Note that the number of super-vertices in Introvert itself may be larger, since isolated super-vertices are not counted.

Proof: In the first iteration of the partition all edges are checked (a sample of size |E_i| · α^j with j = 0 is selected). Therefore the number of live super-vertices in |V_i| is reduced by a third (see Theorem 3.5.4). □

Lemma 3.5.11 With probability greater than or equal to 1 - (2 · log(log(n))) · (2/e)^(log^3(n)), the number of edges that go to Introvert is less than 6.5 · |V_i|.

Proof: The proof follows immediately from Theorem 3.5.7 (note that isolated vertices have no edges to other super-vertices in Introvert). □

Lemma 3.5.12 With probability greater than or equal to 1 - (2 · log(log(n))) · (2/e)^(log^3(n)), the number of live edges in E_i is bounded by 6.5 · max(n · α^(2i), P).

Proof: The proof follows immediately from Lemma 3.5.10 and Lemma 3.5.11. (Note that if V_i is smaller, then the expected number of edges in Introvert is smaller.)
□

Theorem 3.5.13 With probability 1 - 2 · (log(log(n)))^2 · (2/e)^(log^3(n)), the complexity of the algorithm is O(log(n)).

Proof: The complexity of every stage of the partition is O(m_i/P + (log(log(n)))^2) (see Theorem 3.5.8). During the algorithm a processor can take care of an edge in O(1) time (that is, it can read information, update its status, etc.) for each edge for which it is responsible. The goal is for each processor to have about the same number of edges of E_i; that is, in iteration i of the algorithm, if E_i is the set of remaining edges for the next partition, then each processor should have O(|E_i|/P) edges in order to balance the work.

A technique suggested by Cole and Vishkin [6] is used here: they gave an optimal algorithm to compute the prefix sum of an array of size m in O(m/P + log(m)/log(log(m))) time with an optimal number of processors. The first partition takes O(m/P) time. Therefore the expected complexity is bounded by

O(m/P) + Σ_{i=1}^{2·log(log(n))} O(m_i/P + log(m)/log(log(m)) + (log(log(n)))^2) ≤ 2 · log(m) + O((m + n)/P) = O(log(n)).

The probability of failure is obtained by adding up the probabilities of failure in every iteration. □

Theorem 3.5.14 The number of live super-vertices after the end of the algorithm is O(n/log(n)).

Proof: The number of super-vertices which are Introvert is O(n/log(n)) (Lemma 3.5.10). The set of super-vertices that move to the Extrovert-Set after iteration i is bounded by n_i/log(n) (Theorem 3.5.9). The size of n_i decreases geometrically (Lemma 3.5.10). □

3.6 Algorithm for Finding Connected Components

The algorithm presented in Figure 3.6 takes a graph G(V, E) as input and finds the connected components. After execution, every vertex points to the root of the connected component to which it belongs. This is a Las Vegas algorithm, in the sense that it always gives the right answer but may take longer than expected. When the number of processors is O((n + m)/log(n)), the expected time complexity is O(log(n)). A dense graph is created from the sparse one; the algorithm for a dense graph is then run.

procedure general-graph(G(V, E))
  call Sparse-to-Dense(G)
  call Dense-to-Easy(G)
  call Easy-Case(G)
  for every v ∈ V in parallel do
    parent(v) := parent(parent(parent(v)))
  od
end general-graph

Figure 3.6: Finding connected components in a graph

3.6.1 Analysis of the Algorithm for Finding Connected Components

Theorem 3.6.1 The expected complexity of the algorithm is O(log(n)).

Proof: The proof follows immediately from Theorem 3.4.9 and Theorem 3.5.13. □

Theorem 3.6.2 The probability that the algorithm will run longer than expected is bounded by (2 · log(log(n)) + 2 + log_{3/2}(n)) · (2/e)^(log^3(n)).

Proof: The proof follows from Lemma 3.4.7 and Theorem 3.5.13. □

Chapter 4  Separators in Planar Graphs

4.1 Planar Embedding

Planar graphs are graphs that can be drawn in the plane such that no two edges cross each other. A face of G is a maximal part of the plane such that for any x and y in it there is a continuous line from x to y which does not share any point with the realization of G. In a bi-connected planar graph a face is a simple cycle with no edges inside (or outside, in the case of the external face). The contour of each face is a simple circuit of G.

Planar graphs were first investigated by Euler, who gave a formula for the relationship between the numbers of vertices, edges, and faces. Following Edmonds, Lehman, Tutte and many others, we formally define a planar graph as follows: Let G(V, E) be an undirected graph. Each edge of G(V, E) is viewed as two directed edges called darts. A planar embedding of G(V, E) is a description of the cyclic ordering of the darts radiating from each vertex. This permutation defines the faces of G(V, E): for every face, for every edge, it gives the next edge of the face.
Note that the same planar graph may have more than one planar embedding, that is, a different permutation of the darts and a different set of faces. We say that some permutation φ is a planar embedding if the number of faces f satisfies Euler's formula: f − |E| + |V| = 2.¹ A theorem showing that a planar graph with n ≥ 3 vertices has at most 3n − 6 edges [14] follows from Euler's formula.

In 1930 Kuratowski gave a characterization of planar graphs; the planarity test it yields has exponential complexity, and it was improved again and again. In 1974 Hopcroft and Tarjan gave an optimal sequential algorithm to decide if a graph is planar and find a planar embedding. In 1986 Klein and Reif gave an O(n)-processor, O(log²(n))-time parallel algorithm for planarity testing.

4.2 Lipton and Tarjan's Sequential Separator Algorithm

Lipton and Tarjan [16] were the first to come up with a proof and an optimal algorithm for a separator of size O(√n) in a planar graph. They also showed that a planar graph whose minimal separator has size Ω(√n) exists, namely the two-dimensional square grid. Their first theorem claims that for a planar graph with radius² r there exists a separator of size 2r + 1 which separates the graph such that the maximum weight of every chunk is at most 2/3.

Theorem 4.2.1 Let G be any planar graph with nonnegative vertex costs summing to no more than one. Suppose G has a spanning forest of radius r. Then the vertices of G can be partitioned into three sets A, B, C, such that no edge joins a vertex in A with a vertex in B, neither A nor B has total cost exceeding 2/3, and C contains no more than 2r + 1 vertices, one of them the root of the tree.
¹As an example, note that a clique of size 3 has two faces, the "inside" and the "outside".
²They did not give a formal definition of radius, but intuitively the radius of a graph with respect to some vertex v is the maximum BFS label with respect to v.

A constructive proof in optimal time is given in [16], pages 179-181. This theorem is the basis of all the algorithms for separators in planar graphs. The idea is to find cuts all over the graph such that each cut is small and separates the graph into pieces such that every piece contains at most 2/3 of the vertices of the graph, or the radius of the piece is small. The result should be a graph with a small radius, so that Theorem 4.2.1 can be applied. These small cuts are BFS layers, since every BFS layer is a cut between all the vertices in a higher layer and all the vertices in a lower layer. Therefore we can find small cuts by looking for small BFS layers. This idea is formulated by Lipton and Tarjan in the following theorem.

Theorem 4.2.2 Let G be any n-vertex connected planar graph having nonnegative vertex costs summing to no more than one. Suppose that the vertices of G are partitioned into levels³ according to their distance from some vertex v, and that L(l) denotes the number of vertices on level l. If r is the maximum distance of any vertex from v, let r + 1 be an additional level containing no vertices. Given any two levels l₁ and l₂ such that levels 0 through l₁ − 1 have total cost not exceeding 2/3, it is possible to find a partition A, B, C of the vertices of G such that no edge joins a vertex in A with a vertex in B, neither A nor B has total cost exceeding 2/3, and C contains no more than L(l₁) + L(l₂) + max{0, 2(l₂ − l₁ − 1)} vertices.

The proof of the theorem is based on the following facts:

1. The set of all the vertices from level 0 (which contains only v) to level l₁, together with the edges between them, forms a connected subgraph.

2.
Shrinking a connected subgraph into a single vertex preserves planarity.

³These are BFS levels.

The idea therefore is to cut the graph at l₂ and shrink layers 0 to l₁ into a single vertex, so that we get a planar graph with a small diameter. To this planar graph we apply the algorithm presented in the proof of Theorem 4.2.1 to get the separator.

Theorem 4.2.2 contains an outline for an algorithm. We can find a level l_{1/2} such that at most 1/2 of the weight of the graph lies on either side of it. Each layer is now given a score equal to its size plus twice its distance from l_{1/2}, and then the graph is cut at some layer l₂ above l_{1/2} with a minimal score and at some layer l₁ below l_{1/2} with a minimal score. In the worst case the sizes of the layers form the arithmetic sequence 1, 3, 5, .... In this case taking l₁ to be the level {v} and l₂ the last layer yields a radius of 2√(n/2) − 1, and a separator of size 2r + 1 ≤ √(8n).

4.3 Djidjev's separator algorithm

Djidjev [8] investigated the special case when all the vertices have the same weight, and showed two additional ways to separate a graph:

1. Every BFS layer with less than 2/3 of the vertices on both sides is a separator.

2. Every two BFS layers with 1/3 to 2/3 of the vertices of the graph between them are a separator.

We describe now case (1). Let l_{1/3} be the first layer with at most 2/3 of the vertices of the graph before it, and let l_{2/3} be the last layer with at most 2/3 of the vertices of the graph behind it. Every layer between l_{1/3} and l_{2/3} is a separator, but these layers may be too large. On the other hand, the number of vertices between l_{1/3} and l_{2/3} is at least n/3 if l_{1/3} and l_{2/3} are included, but less than n/3 otherwise. If one of these layers is of size at most K = √(6n), then this layer is a separator of size at most K; otherwise there can be no more than (n/3)/K such layers. Therefore a large number of the vertices lies in a small number of layers, and the "central part" of the graph has small radius.
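The layer-scoring rule just described (size plus twice the distance from the middle level) can be sketched as follows; this is our own illustration and the helper name is hypothetical:

```python
def best_cut_layers(layer_sizes, mid):
    # Score of layer i: its size plus twice its distance from the middle
    # layer; pick the cheapest layer on each side of `mid`.
    def score(i):
        return layer_sizes[i] + 2 * abs(i - mid)
    l1 = min(range(0, mid + 1), key=score)
    l2 = min(range(mid, len(layer_sizes)), key=score)
    return l1, l2
```

Cutting at l₁ and l₂ and shrinking the parts outside them bounds both the size of the cut layers and the radius of the remaining graph, which is what Theorem 4.2.2 exploits.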
In case (2) we take two layers l_{1/3} − j and l_{2/3} + j with less than 2/3 of the vertices between them. If all the possible cuts are too large, there cannot be many such layers. On the other hand, they cover 1/3 of the vertices of the graph, so there cannot be more than (n/3)/K such layers on either side of l_{1/2}. If all these possible cuts were too large, then the number of layers is small, and we have 2n/3 of the vertices outside these layers. Using an argument similar to Lipton and Tarjan's, we can find a separator of size 2r + 4√(n/6). By solving the equation 2r + 4√(n/6) = K we get K = √(6n).

4.4 Our separator algorithm

We go one step beyond Djidjev and look for a separator composed of two layers such that the sum of their distances from L_{1/3} and L_{2/3} is some constant. By so doing we have one more possible cut. Checking this cut gives us a (7/3)√n separator. We present here our algorithm. In the following algorithm let K = 7/3 and α = 0.2060617121. We will explain later how we got these numbers.

An algorithm for a (7/3)√n separator

1. Create the BFS of the graph.

2. Mark the layers L₀, L₁, L₂, and so on.

3. Find the first layer L_{1/3} such that at least n/3 of the vertices belong to previous layers.

4. Find the first layer L_{2/3} such that at least 2n/3 of the vertices belong to previous layers.

Figure 4.1: First situation

5. Check if any layer has K√n vertices or less. If so, this is the separator.

6. X := index of L_{1/3};
   while |L_X| > (K/2)√n do X := X − 1;
   Y := index of L_{2/3};
   while |L_Y| > (K/2)√n do Y := Y + 1;
   If there are n/3 or fewer of the vertices between L_X and L_Y, then choose L_X and L_Y to form the separator. If not, we can create the situation described in Figure 4.1.

The size of each layer in D₁ and D₂ is at least (K/2)√n, by definition.
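Steps 5-6 above scan for thin BFS layers; a sequential sketch (ours, with the threshold passed in as a parameter `t` standing in for the constant used by the algorithm):

```python
import math

def scan_for_thin_layers(layer_sizes, n, i13, i23, t):
    # Walk down from L_{1/3} and up from L_{2/3} until each scan meets a
    # layer with at most t*sqrt(n) vertices; the layers passed over on
    # the way form the runs of large layers (D1 and D2 in Figure 4.1).
    bound = t * math.sqrt(n)
    x = i13
    while x > 0 and layer_sizes[x] > bound:
        x -= 1
    y = i23
    while y < len(layer_sizes) - 1 and layer_sizes[y] > bound:
        y += 1
    return x, y
```

If the two thin layers found this way enclose few enough vertices, they already form the separator; otherwise the algorithm proceeds with the shrinking construction described next.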
Now we can build the graph as in Figure 4.2. If there are not enough layers in the graph for D₃ and D₄, we can take layers of size zero.

Lemma 4.4.1 If D₂ + D₃ contain at most K₁√n vertices, then there is a separator of size K√n or less which is composed of two layers.

Proof: Assume not, and derive a contradiction. We go from the top layer of D₃ and the top layer of D₂ down, simultaneously, layer after layer, and find the first pair of layers such that the total number of vertices in them is less than or equal to K√n. This pair is a separator. To prove that it leaves at most 2/3 of the weight on either side we use contradiction. Assume that one of the parts it separates contains more than 2n/3 of the vertices of the graph. Then the part under the upper cut (of D₃) is larger than the part under the cut of D₂. Because the size of each layer in D₂ is at least (K/2)√n, the average size of each layer in D₃ under the cut is more than (K/2)√n. There are three domains:

(a) The domain above the cut in D₂ and D₃. In this domain the total number of vertices in every two layers is greater than or equal to K√n.

(b) The domain under the cut in D₂. We constructed this domain so that every layer contains at least (K/2)√n vertices.

(c) The domain under the cut in D₃. We showed that the average layer contains more than (K/2)√n vertices.

Figure 4.2: Second situation

As a result the average size of a layer in D₂ and D₃ is greater than (K/2)√n, and the number of vertices is greater than K₁√n, contradicting the assumption. □

A similar proof shows that if D₁ + D₄ contain at most K₁√n vertices, then there is a separator of size K√n or less which is composed of two layers. The next two steps in the algorithm are based on the following theorem from [16].

Theorem 4.4.2 Let G be any planar graph with nonnegative vertex costs summing to no more than one.
Suppose G has a spanning tree of radius r. Then the vertices of G can be partitioned into three sets A, B, C, such that no edge joins a vertex in A with a vertex in B, neither A nor B has total cost exceeding 2/3, and C contains no more than 2r + 1 vertices, one of them the root of the tree.

8. If i + j < α (where i√n and j√n are the numbers of layers in D₁ and D₂; see Figure 4.2), then find a separator in the following way:

• For each layer L_m above D₁ let

e_m = (number of vertices in L_m) + 2·(distance from D₁)    (4.1)

• Let e_y = min{e_m} and let a = e_y. Similarly, find b = e_x for D₂. L_y and L_x will together form part of the separator. Shrink all vertices above D₁ to one vertex. Shrink all vertices under D₂ to one vertex. The radius of the graph is now the number of layers in D₁ and D₂ plus 4. Apply now the algorithm from [16] to the rest of the graph. The size of the resulting separator is 2(i + j)√n plus a lower-order term, to which we have to add the sizes of L_x and L_y. The separator size will therefore be

2(i + j)√n + a + b    (4.2)

where

a²/2 + b²/2 ≤ n/2.    (4.3)

The worst case is when a = b = √(n/2) and i + j = α. Let K = 7/3. Then the separator is:

2(α + √(1/2))√n < (7/3)√n    (4.4)

9. If i + j > α we apply a method similar to the one described in the previous step. The spanning tree will contain all the vertices starting from D₃ until D₄, including those belonging to the layers between them. The radius of the tree will be 2(i + j)√n. The size of the separator will be:

2(2(i + j))√n + c + d, where c²/2 + d²/2 ≤ (2/3 − (i + j)·K)·n    (4.5)

Using derivatives it is easy to prove that the maximum is found when

c = d = √(2(1/3 − (i + j)·K/2))·√n.

Again, using derivatives we can prove that this expression always decreases, and that the maximum cut is found when i + j is minimal. Assume K = 7/3. Then

2(2α + √(2(1/3 − α − α/6)))√n =    (4.6)

= 2(0.412123424 + √0.185856005)√n < (7/3)√n    (4.7)

So we can find a separator of size (7/3)√n. We have to explain how to get K and α.
If i + j is small then we want to use the option of step 8, and if i + j is large we want the option of step 9. Because the two bounds are differentiable over the whole domain, we can switch from one method to the other at the point where they give the same separator size. In other words, to find K and α we should solve the set of equations obtained by setting the separator bound of step 8 and the separator bound of step 9, at the switch point, equal to K√n. Solving these equations gives us K = 2.3311.

Chapter 5

Parallel planar algorithms

5.1 Miller's separator

Some practical applications require not merely an arbitrary separator, but rather a connected one. A BFS layer may be disconnected, and since the Lipton and Tarjan separator is composed of BFS layers, one can find a planar graph for which the Lipton and Tarjan separator is completely disconnected. On the other hand, as Lipton and Tarjan mentioned in their paper, by the Jordan curve theorem [13] any closed curve divides the plane into two regions, "inside" and "outside". Therefore, it is an interesting question whether a simple cycle separator can be found for any planar graph. Since Lipton and Tarjan do not use the planar embedding of the graph in the first part of their algorithm, the resulting separator will not, in general, be a simple cycle. This is surprising because the first step of their algorithm is to find a planar embedding of the graph.

In 1984 Miller [19] presented an algorithm which uses the planar embedding of the graph to find a simple cycle separator. His idea is to use the faces rather than the vertices of the graph as the basic unit. To find the faces we need the planar embedding. To get a separator with a nice shape it is better to start with BFS layers with a nice shape. Miller's idea was to define a graph of faces such that two faces have distance one if they share a vertex.¹
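This face-distance relation, and a BFS over it, can be sketched directly (our own sequential illustration; Miller's algorithm computes the face BFS in parallel): two faces are at distance one exactly when they share a vertex.

```python
from collections import defaultdict, deque

def face_bfs(face_vertices, source):
    # face_vertices[f] lists the vertices on face f; faces sharing a
    # vertex are at distance one.  Returns the BFS level of every face
    # reachable from `source`.
    by_vertex = defaultdict(list)
    for f, verts in enumerate(face_vertices):
        for v in verts:
            by_vertex[v].append(f)
    level = {source: 0}
    queue = deque([source])
    while queue:
        f = queue.popleft()
        for v in face_vertices[f]:
            for g in by_vertex[v]:
                if g not in level:
                    level[g] = level[f] + 1
                    queue.append(g)
    return level
```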
Miller assumes that the original graph is bi-connected; otherwise we can either find a one-vertex separator, or shrink all the bi-connected components but one into their separation vertex, and set the weight of this vertex to the total weight of all the vertices shrunk into it. We can now define a BFS of the faces graph from a face F in the following way: Level zero contains only F, and level i is the set of all faces that share a vertex with a face in level i − 1 and do not belong to any previous level. Suppose some layer i is a simple cycle C (certainly true for layer zero, which contains only one face); then define layer i + 1 as C' = C + Σ{F | F shares a vertex with C}, where the sum is over edge sets modulo 2. Note that if two faces share an edge, the edge is not in the sum because the two darts cancel each other. Miller proved the following theorem:

Theorem 5.1.1 The edges of C' form a set of simple cycles.

Obviously every cycle divides the graph into two sides, "inside" and "outside". The first contains the faces of previous levels, and we call it the interior of the cycle. The other side is called the exterior of the cycle. If C' has more than one cycle, then we find the next BFS layer from every cycle independently. The exterior of at most one of the cycles has weight 1/2 or more. We call this cycle a part of the trunk and the rest of the cycles branches. Miller's idea is to find for every branch an upper cut. This upper cut is a simple cycle (or cycles, if the branch splits into two or more branches) that is not too large and is close to the trunk. As in Lipton and Tarjan's algorithm, every cycle gets a score equal to its size plus twice its distance from the root; then we cut the lower part of the trunk by the smallest possible cut below the main weight.

¹Note that this is not a dual graph.

A spanning tree of the graph is
Since for every level i a face from level i shares a vertex with a face from level i — 1, the radius of the graph in vertices is at most [ f j r where r is the radius of the faces graph and d is m axim um face size. For the upper cuts and the lower cut we take all the edges of the cycles but one to be in the tree. Using this m ethod Miller proved the following Theorem: T h e o re m 5.1.2 If G is a 2-connected embedded planar graph with weights which sum to 1, no face weight > | and the maximum face size is d, then there exist a 2-connected subgraph H with spanning tree T satisfying: 1. The diameter dia of T plus the maximum size h of any non-leaf face of H is at most 2yJ[d/2\n, i.e., dia + h < 2yJ[d/2\n. 2. The maximum induced weight on any face of H is < | . After we have the graph H we want to find a simple cycle separator. First we try all the cycles we have found in the BFS. Second we check all the cycles created by adding a non-tree edge to the tree. If none of these succeed, we have a cycle with more then § inside, but if we check all the small cycles inside, all of them have weight less th an | . Unioning some of these cycles yields the desired separator. The sequential im plem entation is simple. We compute the weight of every sub­ tree, then compute the weight of every cycle by list common ancestors algorithm. An optim al algorithm for the problem was given by Harel and Tarjan [15]. The parallel im plem entation is more difficult, but Miller gave an optim al algorithm of 0 (n /lo g (n )) processors, 0(log(n)) tim e to the problem , given a BFS of the graph as input. Therefore the most serious remaining problem is making the BFS. The usual BFS algorithm requires 0(log(n)) m atrix m ultiplications over the semiring of m in and + . Every m atrix multiplications needs 0 (n 3) operations. In Chapter 2 we show how to improve the B F S algorithm. 52 5.1.1 F irst Im p rovem en t The size of M iller’s simple cycle separator depends on the m axim um face size. 
We can eliminate this dependence by triangulating the graph, but then we lose the simple cycle property. Furthermore, it is not clear how to distribute the weight of every triangulated face (in Miller's algorithm we have the advantage of putting weights on the faces, vertices and edges). Our solution does not use simple triangulation; instead we insert cycles of decreasing size into the faces until all of them have size at most 4 or 5. If all the faces are large we do not improve over direct use of Miller's algorithm, but if, for example, most of them are small and we have one face of size √n, we want to get an O(√n) simple cycle separator. The method is to put a "cone" on every big face, with the weight of the face in the center; see Figure 5.1. For every face of size d we put inside it a cycle of size d − 2, inside this cycle a cycle of size d − 4, and so on until we have a cycle of size 4 or 5.

Lemma 5.1.3 The number of vertices in a cone is at most d²/4.

Proof: If d is even then the number of vertices is ¼(d + 2)(d − 4) = ¼(d² − 2d − 8) < d²/4. If d is odd it is less than d²/4 as well. □

After this step, every face contains concentric cycles whose sizes decrease by 2. Between every two concentric cycles of sizes i + 2 and i we create i faces, as in Figure 5.1. All of them but two are of size 4. The other two are of size 5 and lie on opposite sides of the cycles. The idea is to prevent the inner cycles from becoming "shortcuts", that is, to ensure that the shortest distance between every two vertices on the face remains along the original edges of the face. In particular, the path to the smallest cycle in the center, around it and back, has length d − 1. After we find the spanning tree using Miller's algorithm, we replace every "inside the face" path by a path on the boundary of the face.

Figure 5.1: Cone

Lemma 5.1.4 If we replace every path inside the original face by a path on the original face, the radius of the spanning tree will not increase.
Proof: We can replace every path between two vertices "inside" a face by a path of the same length or smaller around the face. If a vertex then has more than one incoming edge, we cancel one of them arbitrarily. □

Theorem 5.1.5 For every bi-connected planar graph we can find a simple cycle separator of size d + 2√(2·Σ_{F∈G}(size of face F)²), where d is the maximum face size.

Proof: We construct the graph as described above and find a spanning tree. The number of vertices in the new graph is Σ_{F∈G}(size of face F)²/2. Theorem 6 in Miller's paper states that if G is a 2-connected weighted and embedded planar graph with no weight > 2/3 and T is a spanning tree of G, then there exists a weight separator of size at most the diameter of T plus the maximum non-leaf face size. We substitute these numbers in Miller's separator size formula (√(8⌊d/2⌋n)) to get a separator of size d + 2√(2·Σ_{F∈G}(size of face F)²). □

Chapter 6

Polylog Separator Algorithm

6.1 Introduction

6.1.1 Informal Description

In this section we give an informal description of our algorithm for finding a simple cycle separator in a biconnected planar graph. In the following discussion n will denote the number of vertices, f the number of faces, and d the maximum face size. Recall that a 2-connected planar graph has a simple cycle separator of size √(8·d·n) [19]. Note that the number of vertices in a biconnected planar graph is at most f·d/2, since every face has at most d vertices and every vertex belongs to at least two faces. Using this fact we can rewrite the simple cycle separator theorem in terms of faces as 2·d·√f.¹ The new formula is the basis of our algorithm.

Before we describe our algorithm let us describe the heuristics we use. Assume that we have to divide a large area of land by a short path. The basic heuristic is to put a short straight line in the middle of the area. It will work fine in the Great Plains, but if we have a high mountain or the Grand Canyon in the middle, then

¹We can prove that the separator size is O(√(Σ_{F∈G}(size of face F)²)), but we do not need this stronger result.
It will work fine in the great plains, but if we have an high m ountain or the Grand Canyon in the middle then 1 We can prove that the separator size is Q(y/ y^pg/j(size o f face F )2), but we do not need this stronger result. 56 going up and down will yield a long path too. Another heuristic is to make a path at sea level. Unfortunately going around the Rockies may yield a long path. A nother problem with these ideas is that we need a “topographic m ap” (ac­ tually a BFS) to get the global inform ation th at we need to plan our path. Our solution is to split the land into small pieces produce local “topographic m ap” for each one of them , and find short paths from one piece to another. On the local level we check the m ountains. If a m ountain is too steep (more th an 45 degrees) we go around it. If it is not steep, we go over i t ’s top. We then erase every other path in our “m aps” and repeat the algorithm for bigger regions. In graph theory term s we say that we want to reduce the num ber of faces w ithout increasing the maxim al face size too much. Our idea is to repeatedly union adjacent faces, until their num ber is small enough, so th a t we have enough processors to perform BFS using m atrix m ultiplication. In this process, we have to take care th at at m ost 2/3 of the original graph is contained in any face, th at the size of the faces does not increase too much and that the rem aining graph is still 2-connected. W hen the num ber of faces is small enough, we find a BFS of the faces [19] and then use M iller’s algorithm to find a simple cycle separator. To keep track of w hat we have removed from the graph in forming the new faces, we will assign weights to the faces as in [19]. W hen we union faces, the weight of the new face is the sum of the weights of the faces, vertices and edges inside the new face. 
Thus, our algorithm and M iller’s algorithm differ from previous m ethods in th at the w eig h ts are on faces, vertices and edges, rather than only on vertices. The sum of all weights is 1. The m ain problem in our algorithm is how to union the faces in such a way th at the new faces will not have long boundaries. We do not want the boundary between two regions to run on the top of the Himalayas. The naive approach 57 of choosing at random some adjacent face and uniting with it may yield a long boundary. Our idea is to union a “neighborhood” of faces. For th at we need to find the neighbors of every face. We define neighbors by n u m b e r and not by d ista n c e . T hat is, we want the k nearest neighbors of each face. Note th at they can have a distance k in a long and narrow graph or 1 in a wagon-wheel graph. These k-neighboring faces are united (tem porarily) together. We can perform BFS on their union and, as we later show can find some BFS layer which is small and close to the boundary. The m ethod is similar to Lipton and Tarjan [16]. We pick a m aximal disjoint subset of the ^-neighborhoods as the bases of our construction. T hat is, we define a graph Gc in which vertices are the k- neighborhoods, and two vertices share an edge iff the k-neighborhoods they represent share a face. Finding a maximal independent set in this graph, enables us to find a small set of k-neighborhoods, such th at every k-neigkborhood is close in the num ber m etric to some m ember of th at set. 6 .1 .2 T h e A lg o rith m - A G en eral O u tlin e Let G(V, E ) be an embedded planar graph, n the num ber of vertices, / the num ber of faces, d the m aximal face size of any face, P = f 1+ e the num ber of processors for some positive constant e and k = f e^2. We will show th at the while loop of the algorithm in Figure 6.1 will be executed a finite num ber of times. 
T h e A lg o rith m Let Reduce-G(G(V, E), k) be an operation th at reduces the planar graph G (V,E) into another embedded planar graph G'(V' , E ‘) such th at G' is a subgraph of G and the num ber of faces in G‘ is 0 ( f / k ) . We describe this operation in detail 58 later. Our algorithm, is given in Figure 6.1. In the following algorithm fi represents the num ber of faces after iteration i, and M {n ) represent the complexity of algebraic m atrix m ultiplication over the ring of integers. 1. Go(Vo,E o) < - G ( V ,E y ,i^ 0 - , 2. w h ile P < M (fi ) do i *- i + 1; k «- [ y g j Gi(Vi,Ei) <- Reduce-G(Gi-i(Vi-i, E i-i), k); 3. Find a face BFS of the faces G{. Find a simple cycle separator in Gi (using Miller’s algorithm [19]). Figure 6.1: Outline of our algorithm l t t 6.2 Basic Parallel Algorithms In this section we present several parallel algorithm s which we later use. The first finds the k nearest neighbors of each face. We will call these the ^ -n e ig h b o rh o o d s. This algorithm also gives us a BFS in faces within each ^-neighborhood of every face. The second algorithm finds a maximal set of ^-neighborhoods such th at each pair of ^-neighborhoods are face disjoint. The third algorithm , using the results of the first and second algorithms, finds BFS of the faces of the graph from the boundaries of the independent set of ^-neighborhoods. This gives us a set of larger disjoint neighborhoods of G. Between each layer of faces we also get a layer in G of edges. We divide each layer of edges into simple cycles. Each cycle sepa­ rates the graph between the inside and the outside. Therefore we can m ap (by hom omorphism ) the graph into a tree, were every cycle is m apped to an edge and the regions between the cycles are m apped to vertices. Using th at tree, and the 59 tree Tour algorithm of Tarjan and Vishkin [28] we compute the weight inside and outside each cycle. 
We call the boundaries between faces th at belongs to different “larger neighborhoods” the Voronoi Diagram of the graph. We give an algorithm th a t generates th at Voronoi Diagram of the graph. 6.2.1 F in d in g fc-neighbors Following Miller [19] we make a However, we limit the search so th at for each; I face we find only its k nearest neighbors. Breadth-First Search of faces in planar graphs. D e fin itio n 6 .2.1 Let G (V ,E ) is a new graph such that V is the set of the faces of G. If faces F\ and F 2 share a vertex v in G then there is an edge marked v. between JF\ and F 2. Note th at • There m ay be more then one edge m arked v (if there are three or more faces th at share the same vertex). • There may be more then one edge between two faces (if they share more than one vertex). • G is not necessary planar. 2 The distance between two faces is the length of the minimal p ath between them in G. To find the k nearest neighbors of each face, we want to find for every face F a set of k faces, such th at the distance from F to every face outside its set is greater 2 For example if a a vertex has degree five or more then it shares by 5 faces, so G has a clique size 5. 60 than or equal to the distance from F to every face in the set. Note th at the k nearest neighbors are not unique. We call any such set the k -n e ig h b o rh o o d of F, denoted by C(F). Choose k such th at P > f • k 2 where P is the num ber of processors and / is the num ber of faces. In this algorithm we assign k 2 processors per face, and for each face F we create C(F) - a subset of faces of size k th at form a sub-connected component, such th at F is the center. Using the “doubling up” technique, we can complete this step in 0 (log(n)) time. Assume th at for every face F we have: 1. C (F) - a vector of length k, which represents the subcom ponent of F. I 2. W (F ) - a k x k m atrix, used as a working space. Each element of C(F) and W (F ) is a pair {face, distance). 
Initially C(F) contains only the im m ediate neighbors of F (distance 1). Note th at a face may have a large num ber of neighbors, so th at naively collect­ ing all its neighbors may take 0 ( n 2). The solution is th at every vertex v, which its degree is greater than two, computes a list of its neighbors (if there are more than k neighbors, then just use the first k). Using this inform ation, every face can union the lists of all its neighbors, and drop duplicates using sort. The complexity is 0(log(d-& )) using f -d-k processors. (We assume th at k > d. If the assum ption is wrong, we can check first those faces th at share an edge). I After every face has a list of its neighbors we iterate at most log(fc) times to obtain a fc-neighborhood. In each iteration, for every F' £ C (F), we copy C (F ‘) into a distinct row W {F ) (updating the distances), then sort W (F ), remove duplicates so th at for each face in W {F ) we keep only the shortest distance from 61 F found so far. Then copy the first non-nil elements (up to k elements) back into C (F). An outline of the Find-k-Neighbors is given in Figure 6.2. p ro c e d u re Find-k-Neighbors{G{V, E), k) fo r e v ery face F £ G in p a ra lle l do C(F) < — 1)| a n d F share a vertex}; o d fo r i := 1 to [log(fc)] do fo r e v ery face F 6 G, 1 < i , j < k in p a ra lle l d o Copy the neighbors of every face in C(F) into W {F). Compute their distance from F. SORT(W (F)) to drop duplicates. SORT(W (F)) by distance. Copy the first (up to k ) elements of W (F ) to C(F). o d o d e n d Find-k-Neighbors Figure 6.2: Finding k-Neighbors D e fin itio n s j ! 1. Let C*(F) denote 0 (F ) before the ith iteration. j i 2. Let distance(F,F ) be the minimal distance (in faces) between F and F ' . ' L e m m a 6 .2 .2 VijVF1 € G , |C,,(jF)| = min(fc, \{F'\distance(F,F‘) < 2*}|) P ro o f: We will prove by induction: It is obviously correct for i = 1, because initially C (F) contains all the neighbors of F for every face F. 
Assume correctness for i and prove it for i + 1. For every face F there are three possibilities:

1. |C^i(F)| = k ⟹ |C^{i+1}(F)| = k.

2. ∃F' ∈ C^i(F) s.t. |C^i(F')| = k ⟹ C^i(F') ⊆ C^{i+1}(F) ⟹ |C^{i+1}(F)| ≥ |C^i(F')| = k ⟹ |C^{i+1}(F)| = k.

3. ∀F' ∈ C^i(F), |C^i(F')| < k. In this case, by the induction hypothesis, for every face F'' such that distance(F, F'') ≤ 2^i there is a face F' such that distance(F, F') ≤ 2^{i−1} and distance(F', F'') ≤ 2^{i−1}. Therefore F' ∈ C^i(F) and F'' ∈ C^i(F'), which implies F'' ∈ C^{i+1}(F). □

Lemma 6.2.3 Assuming that the graph is connected and k ≤ f, after the algorithm Find-k-Neighbors every face F knows all the distances to its k nearest neighbors (those belonging to C(F)).

Proof: Since the graph is connected and k ≤ f, there are at least k faces at distance k or less from F. After ⌈log(k)⌉ iterations C(F) contains k faces, by Lemma 6.2.2. □

Lemma 6.2.4 The algorithm takes T = O(log²(n)) time and n + f·k² space, using P = MAX(n, f·k²) processors.

Proof: Using k² processors we can set the elements of W(F) in one step and sort k² elements in O(log(k)) time (and quite efficiently, since we can use integer sorting). Removing duplicate elements can be done in constant time, compressing W(F) (to remove null elements) in O(log(k)) time, and copying the first non-nil (up to k) elements into C(F) can be done in constant time. We need n processors and O(log(n)) time for the distance-1 lists. We use O(k²) space per face, so O(n + f·k²) space in total. □

6.2.2 Finding an Independent Set

In this subsection we find a maximal set of k-neighborhoods, MIS, such that every two neighborhoods in it contain disjoint sets of faces. Recall that each k-neighborhood is a set of k faces; thus, the number of neighborhoods in MIS is at most f/k. The idea is to define a new graph Ḡ(V̄, Ē) such that V̄ is the set of faces of G, and in which two faces F, F' share an edge in Ē iff their neighborhoods C(F), C(F') share a face. MIS is an independent set in this graph.

Let G(V, E) be a planar graph such that every F ∈ G has a vector C(F) of length k containing the k nearest faces and the distances from F to them. C(F) was computed by the Find-k-Neighbors algorithm described in the previous section. Note that Ḡ is an intersection graph, that is, a graph in which the vertices are sets of elements and there is an edge between two sets iff they have a common element. In our case, the sets are the k-neighborhoods of the faces. Our goal is to find a maximal independent set for this graph.

Luby [17] has presented an algorithm that finds a maximal independent set in O(log(n)) time with O(m + n) processors, where m is the number of edges. However, Ḡ is not necessarily a planar graph, and may have a very large number (up to O(n²)) of edges. For example, consider a star: for k ≥ 2 the center face will belong to all subcomponents, so Ḡ is actually a clique. Since we have only
The idea is to define a new graph Ḡ(V̄,Ē) such that V̄ is the set of faces of G, and in which two faces F, F' share an edge in Ē iff their neighborhoods C(F), C(F') share a face. MIS is an independent set in this graph.

Let G(V,E) be a planar graph such that every F ∈ G has a vector C(F) of length k containing the k nearest faces and the distances from F to them. C(F) was computed by the Find-k-Neighbors algorithm described in the previous section. Note that Ḡ is an intersection graph, that is, a graph in which the vertices are sets of elements and there is an edge between two sets iff they have a common element. In our case, the sets are the k-neighborhoods of the faces. Our goal is to find a maximal independent set for this graph.

Luby [17] has presented an algorithm to find a maximal independent set in O(log(n)) time with O(m+n) processors, where m is the number of edges. However, Ḡ is not necessarily a planar graph, and may have a very large number (up to O(n²)) of edges. For example, consider a star: for k ≥ 2 the center face will belong to all subcomponents, so Ḡ is actually a clique. Since we have only P = n·k² processors, we do not have enough processors. We will give a general algorithm for finding a maximal independent set in an intersection graph, where P is equal to the input size (which is the description of the sets), and T is O(log(n)) for the CRCW model and O(log²(n)) for the EREW model. The same algorithm can be used for other intersection graph problems, such as maximal matching, with the same processor and time complexity mentioned above. In Figure 6.3 we present Luby's independent set algorithm [17].
procedure Luby-Independent-Set(G(V,E))
    MIS ← ∅;
    while V ≠ ∅ do
        Each vertex picks a random number in the range (1, n⁴).
        for all v ∈ V in parallel do
            if v has a number greater than all its neighbors then
                add v to MIS;
                remove v and its neighbors from V;
            fi
        od
    od
end Luby-Independent-Set

Figure 6.3: Luby's Independent Set Algorithm

Luby proved that with high probability this algorithm will halt after O(log(n)) iterations. We wish to compute a maximal independent set for Ḡ(V̄,Ē), but since we do not have enough processors to do it directly, we instead compute for every face F the list L(F) of all F' such that F ∈ C(F'). We show that finding a maximum over the lists of all vertices in F's subcomponent is equivalent to finding the maximum in the set of neighbors of F in Ḡ. We will call the resulting set MIS.

The procedure Find-Containing-Components, which for every F finds the list L(F) = {F' | F ∈ C(F')}, is presented in Figure 6.4.

procedure Find-Containing-Components(G(V,E))
    Create an array LL of length f·k;
    for every face i ∈ G, 0 ≤ j < k in parallel do
        LL[k·i + j] ← the jth neighbor of face i.
    od
    Sort LL in lexicographic order;
    for every i ∈ V̄ in parallel do
        Use binary search on LL to find where L(i) begins;
        Use binary search on LL to find where L(i) ends;
    od
end Find-Containing-Components

Figure 6.4: Finding Containing Components

Lemma 6.2.5 Procedure Find-Containing-Components takes O(log(f)) time with O(f·k) processors.

Proof: Since for every face F, |C(F)| = k, we can set the array LL in constant time using f·k processors. Both the SORT and the SEARCH on an array of length f·k take O(log(f·k)) = O(log(f)) time, by the definition of k. □

If we try to run Luby's algorithm we have to solve two problems:

1. How to check whether the value assigned to a face is greater than the values of all its neighbors.

2. How a face can notify its neighbors that it is in the independent set.
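The rounds of Figure 6.3 can be simulated directly. A minimal sketch, assuming an undirected graph given as an adjacency list; Python floats in [0,1) stand in for the integers in (1, n⁴), and the seeded generator is only for reproducibility:

```python
import random

def luby_mis(adj, seed=0):
    rng = random.Random(seed)
    live, mis = set(adj), set()
    while live:
        # Each live vertex picks a random number ...
        r = {v: rng.random() for v in live}
        # ... and joins MIS if its number beats all live neighbours.
        winners = {v for v in live
                   if all(r[v] > r[u] for u in adj[v] if u in live)}
        mis |= winners
        for v in winners:                  # remove winners and neighbours
            live.discard(v)
            live -= set(adj[v])
    return mis
```

Whatever the random draws, the result is independent (no two winners in a round are adjacent, and neighbours of winners are deleted) and maximal (a vertex only disappears when it or one of its neighbours entered MIS).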
We will use the lists L that we computed in the previous algorithm as "communication centers". Every set of processors that was assigned to some list L will check which element is the maximum and whether it is unique. After the maximum is found, all the processors of the smaller elements of the set notify their faces that they are not in the set. If a face gets no such message, it is in the set. The second problem is solved in a similar way: if a face is in the independent set, all its k neighbors notify their L sets to delete the face they belong to from the graph.

Lemma 6.2.6 The complexity of the algorithm in the CRCW model is O(log(n)) with O(n·k) processors.

Proof: In his paper [17] Luby proves that with high probability O(log(n)) iterations suffice. Each iteration can be implemented in O(1) time, since the max operation requires O(1) time in the CRCW model³ and all other operations require O(1) as well. □

The following lemmas prove that applying the modified Compute-Independent-Set on G(V,E) is equivalent to applying Luby's algorithm to Ḡ(V̄,Ē).

Lemma 6.2.7 For every face F ∈ G: ⋃_{F''∈C(F)} L(F'') = {F' | (F,F') ∈ Ē}.

Proof: F' ∈ ⋃_{F''∈C(F)} L(F'') ⟺ ∃F'' ∈ C(F) with F' ∈ L(F'') ⟺ ∃F'' ∈ C(F) with F'' ∈ C(F') ⟺ ∃F'' ∈ C(F) ∩ C(F') ⟺ (F,F') ∈ Ē. □

Lemma 6.2.8 A face F ∈ G is chosen for MIS in Compute-Independent-Set iff in some iteration F = max{⋃_{F'∈C(F)} L(F')}.

³We do not have the reference, but have heard about the algorithm and verified its correctness.

Proof: F is chosen for MIS iff, in some iteration, for every F' with F ∈ L(F') we have max{L(F')} = F, which holds iff for every F' ∈ C(F), max{L(F')} = F. □

Lemma 6.2.9 The algorithm, applied on G(V,E), computes a maximal independent set for the graph Ḡ(V̄,Ē).

Proof: We will show that in every iteration, a face F ∈ G is chosen for MIS in Compute-Independent-Set iff it is chosen for MIS in Luby-Independent-Set.
By Lemmas 6.2.7 and 6.2.8, F is chosen for MIS in Compute-Independent-Set iff in some iteration F = max{⋃_{F'∈C(F)} L(F')} = max{F'' | (F,F'') ∈ Ē}, so it is chosen by Luby's algorithm as well. □

Lemma 6.2.10 MIS contains at most f/k k-neighborhoods.

Proof: The proof follows immediately from the definition of MIS. □

Definition 6.2.11 An I-set is any region obtained from a k-neighborhood in MIS by removing the last layer if it is not a complete layer of G.

Since each I-set consists of full layers of the BFS in G, its boundary can be written as nonnesting simple cycles; see [19].

6.2.3 Computing Planar Graph Layering

In this section we compute the distance of each face not in an I-set to the nearest I-set boundary. We will use this information to find a noncrossing (see Definition 6.2.14) BFS spanning forest of the faces that do not belong to any I-set. We can then combine this BFS spanning forest with each I-set's BFS spanning tree, and so obtain a BFS spanning forest for all of G. We start by computing, for each face not in an I-set, the distance to the boundary of an I-set. We present the algorithm in procedure form in Figure 6.5.

Procedure: Find-layers

1. Mark every face in an I-set as level 0.
2. Mark every face that shares a vertex with an I-set boundary and is not yet marked as level 1.
3. In parallel, for each unmarked face F, compute the distance d(F) to the nearest face in its k-neighborhood that is marked 1.
4. Mark every such face F with 1 + d(F).

Figure 6.5: Finding the Distance to Your Nearest I-set

We will prove that the algorithm marks all the faces by showing that every k-neighborhood contains a face marked 1. We state this as a lemma:

Lemma 6.2.12 Every k-neighborhood which is not an I-set contains a face marked 1.

Proof: Since MIS is a maximal independent set, every k-neighborhood must contain a face from some set in MIS.
It suffices to observe that for each set S in MIS, all faces of S are marked 0 or 1: every face in S either lies in a full layer of the I-set obtained from S, or shares a vertex with it. □

Lemma 6.2.13 The Procedure Find-layers uses O(log(k)) time and n + f·k processors.

Proof: We can check whether the last layer is full or not in O(log(k)) time. Level 1 can be found in O(1) time: every vertex checks whether any of its faces is in level 0, and then every face checks whether any of its vertices touches a face in level 0. Every face can then check its k-neighborhood and find the minimum in O(1). □

Using the layering of the faces that we have just computed, we want a BFS spanning forest from the boundaries of the I-sets. The spanning forest lives in the face-incidence graph, which need not be planar. We will construct the forest so that it is noncrossing:

Definition 6.2.14 We say that a subgraph H of the face-incidence graph is noncrossing if no two edges of H that are marked by the same vertex v cross (with respect to the embedding of G). That is, there do not exist four faces A, B, C, D of G sharing a vertex v in the cyclic order (A,B,C,D) such that A is connected to C in H with an edge marked v and B is connected to D in H with an edge marked v.

The importance of noncrossing lies in the fact that when we union all the faces in the same component, and link two components by an edge iff their faces share an edge, we get a planar graph. This is easy to see by "breaking" v into two vertices and putting an edge between them, which creates a genuine planar graph with the same dual graph.

Definition 6.2.15 T is a BFS (spanning) forest from a subset S of the vertices of G if T is a (spanning) forest and every component of T contains exactly one vertex from S. Further, the path in T from any vertex v in G to its root is a minimum-length path from v to S.
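In effect, Find-layers computes a multi-source BFS layering of the faces, with all I-set faces acting as level-0 sources. A sequential sketch of the layering it produces (the thesis achieves O(log k) time by jumping through k-neighborhoods instead of expanding level by level; names and representation here are mine):

```python
from collections import deque

def multi_source_levels(adj, sources):
    # Level 0: the I-set faces themselves; level i: faces at face-distance i
    # from the nearest I-set.
    level = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in level:
                level[u] = level[v] + 1
                q.append(u)
    return level
```

Running this on a path 0-1-2-3-4 with sources {0, 4} marks the endpoints 0, the faces next to them 1, and the middle face 2.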
In Figure 6.6 we present an algorithm to compute the BFS forest of the faces.

Definition 6.2.16 We say that a vertex v belongs to I-set I if v was used by the procedure BFS-Forest to link two faces in the neighborhood of I.

Procedure BFS-Forest

1. For every face F (in the BFS forest), choose as its parent a face F' such that: F' shares a vertex with F, F' belongs to a lower level, and F' has the smallest index of all possible such faces.
2. Using Tree Tour [28], notify each face which tree it belongs to.

end BFS-Forest

Figure 6.6: Finding the BFS Forest

Lemma 6.2.17 Procedure BFS-Forest generates a noncrossing BFS spanning forest of the faces from the I-set boundaries, and all the forest edges that are marked by some vertex v belong to the same tree T.

Proof:

1. Every face has a shortest path (in faces) to the boundary of the I-sets. We prove this by induction. The claim is correct for faces in level 1. Assume that the claim is correct for faces in level i. Since every face in level i links itself to a face in level i−1, the claim follows.

2. All the forest edges that are marked by v link faces around v to the same tree. If two faces F₁ and F₂ use the same vertex v for linkage, then they must link themselves to the same face F (because the minimum is unique); therefore F₁ and F₂ belong to the same tree T, and the edges (F,F₁) and (F,F₂), both of which are marked v, belong to the same tree T.

3. The forest is noncrossing. The proof is the same as in the previous item. □

Lemma 6.2.18 The complexity of the Procedure BFS-Forest is O(log(n)) using n processors.

Proof: In this procedure we find the parent of each face in two steps:

1. Every vertex finds the face which:
(a) belongs to the lowest BFS layer that touches this vertex;
(b) has the lowest index of all the faces that belong to that lowest BFS layer.
The vertex is assigned the best face it found.

2.
Every face checks all its vertices and finds the best one using the above criteria.

Each step requires finding a minimum, and therefore has complexity O(log(n)). □

Later we will break every I-set component into layers by their distance from the center. This is easy to do because we have the BFS tree of every component.

6.2.4 Dividing BFS Layers Into Cycles

In this section we consider the following problem: given a graph G and a multi-source BFS layering of the faces, break the boundaries between the layers into simple cycles. We define a boundary edge as an edge between faces in level i and level i+1. Our goal is to create a set of cycles from these edges. The rule we use is that every cycle is simple in the topological sense; that is, if we go around the cycle, we always have the faces of level i on our left side. An outline of the procedure is given in Figure 6.7.

procedure Divide-into-cycles(G(V,E))
    for every BFS level i in parallel do
        Set all the edges between faces in level i and level i+1 to be boundary edges of level i.
        Direct every boundary edge e so that its level-i face is on its left side and its level-(i+1) face is on its right side.
        For every boundary edge e, find the next boundary edge e' in the cycle, such that all the faces between e and e' belong to level i. (If e is directed from u to v, then e' is the next boundary edge clockwise around v.)
        Using list ranking, find the cycles.
        Using a biconnectivity algorithm, break every cycle into simple cycles.
    od
end Divide-into-cycles

Figure 6.7: Dividing Layers Into Cycles

Lemma 6.2.19 The procedure Divide-into-cycles creates a set of edge-disjoint cycles (not necessarily simple cycles).

Proof: It is easy to see that every edge has a next edge. Suppose edges e₁ and e₂ have the same next edge e₃. Then all faces between e₁ and e₃ belong to
level i, as do all the faces between e₂ and e₃. Therefore one of e₁, e₂, e₃ cannot be a boundary edge. □

Definition 6.2.20 A cycle A and a path (or a cycle) B interlace in a planar embedded graph G if there exist two edges of B such that one is in the interior of A and the other is in the exterior.

Definition 6.2.21 A cycle A and a path (or a cycle) B kiss in a planar embedded graph G if they share a vertex (or an edge) but do not interlace.

Lemma 6.2.22 Cycles that were created by Divide-into-cycles do not interlace.

Proof: Assume that the original cycles (before we use the biconnectivity algorithm) interlace at some vertex v, and that cycle A has the directed edges (u,v) and (v,w). Cycle B must then have edges (x,v) and (v,y) such that the cyclic order (around v, clockwise) is either (u,x,w,y) or (u,y,w,x); note that if the order is (u,w,x,y) they do not interlace. We check both cases.

Assume the order is (u,x,w,y). Then, due to the direction of (u,v), the faces between (u,v) and (x,v) must belong to BFS level i, while due to the direction of (x,v) those same faces must belong to level i+1. A contradiction.

Assume the order is (u,y,w,x). Then Divide-into-cycles would have assigned (v,y) as the next edge of (u,v). A contradiction.

Note that if two biconnected subcomponents interlace, then the original cycles interlace as well. □

Lemma 6.2.23 Every cycle that was created by Divide-into-cycles (before we apply the biconnectivity algorithm) is either simple or a set of simple cycles connected together at separation vertices (articulation points).

Figure 6.8: An Example of a Non-simple Cycle

Proof: Let us break each cycle into its biconnected components and consider each one of them. Define the outer cycle (face) as the set of edges that are shared with faces outside the component. All the faces outside the biconnected component that share an edge with the outer cycle belong either to level i or to level i+1.
• If those faces belong to level i, then by our rule we always go around the outer cycle. As a result, the component is composed only of edges that belong to the outer cycle. Therefore no biconnected subcomponent can share anything with the "inside", which implies that the subcomponents are simple cycles (see Figures 6.8 and 6.9).

• If those faces belong to level i+1, then all the faces inside the outer cycle that share an edge with it must belong to level i. Therefore there must be a face (with respect to the component) composed of level-i faces of the original graph that shares an edge with the outer cycle. Note that this face cannot share all the edges of the outer cycle, because the graph is biconnected; that is, the situation on the left side of Figure 6.9 is impossible. By our algorithm this face, and everything inside it, should have been in another cycle. A contradiction. □

Lemma 6.2.24 We can break every cycle into a set of simple cycles in O(log(n)) time.

Proof: This can be done by Tarjan and Vishkin's [28] algorithm for finding biconnected components. □

Lemma 6.2.25 Given an embedded planar graph G and a multi-source BFS layering of the faces, we can find the cycles between the layers in O(log(n)) time using n processors.

Proof: We can find every boundary edge in O(1) time and every next edge in O(log(n)) time using list ranking. The cycles can be found in O(log(n)) time using list ranking [31], and the simple cycles can be found in O(log(n)) time by Lemma 6.2.24. □

Figure 6.9: An Impossible Case

6.2.5 Computing Induced Weights on a Set of Cycles

Horizontal Cycles

In this subsection we solve the following problem: given a set of non-interlacing edge-disjoint simple cycles {C₁, C₂, ..., C_t}, compute the weights on the inside and outside of each cycle C_i. Throughout this section we assume that {C₁, C₂, ..., C_t} is such a set.
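The list-ranking primitive used above (and again later for distances along faces) can be sketched by pointer jumping. A minimal sequential simulation, assuming `nxt` maps every list element to its successor and the tail to None; each while-iteration plays the role of one parallel round:

```python
def list_rank(nxt):
    # rank[v] = number of hops from v to the end of its list.
    rank = {v: (0 if nxt[v] is None else 1) for v in nxt}
    jump = dict(nxt)
    while any(j is not None for j in jump.values()):
        # One "parallel" pointer-jumping round: rank[v] += rank[jump[v]],
        # then jump[v] skips ahead to jump[jump[v]].
        rank = {v: rank[v] + (rank[jump[v]] if jump[v] is not None else 0)
                for v in jump}
        jump = {v: (jump[jump[v]] if jump[v] is not None else None)
                for v in jump}
    return rank
```

Each round doubles the distance every pointer spans, so O(log n) rounds suffice; this is the source of the O(log n) bounds quoted in Lemma 6.2.25.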
Note that we could easily solve the problem for every cycle separately, but this would give an algorithm that uses O(t·n) processors, where t is the number of cycles. Instead we present here an algorithm that uses only O(n) processors.

Definition 6.2.26 The regions formed by the cycles C₁, ..., C_t are the equivalence classes of faces defined by the relation: F ≡ F' if no cycle C_i separates F from F'.

The definition is not trivial when the cycles share vertices, because regions may not be connected. For example, consider Figure 6.10, where both the "inside" and the "outside" (the white areas) belong to the same region.

Figure 6.10: An Example of a Disconnected Region.

Lemma 6.2.27 If a face F₁ shares an edge e₁ with a cycle C, some other face F₂ shares an edge e₂ with C, F₁ and F₂ are on the same side of C, and F₁ and F₂ share a vertex v with C, then F₁ and F₂ belong to the same region.

Proof: Let F₁, F₂, e₁, e₂, C and v be as in the lemma, and assume that there exists a cycle C_i that separates F₁ from F₂. Without loss of generality we can assume that F₁ is "inside" C_i and F₂ is "outside" C_i. However, this requires that v be a vertex of C_i, that e₁ be "inside" C_i, and that e₂ be "outside" C_i (neither e₁ nor e₂ can be part of C_i, because C_i is edge-disjoint from C). Therefore C_i must interlace with C, a contradiction. □

Lemma 6.2.28 All the faces that share edges with the same side of some cycle C belong to the same region.

Proof: The proof follows immediately from Lemma 6.2.27. □

Lemma 6.2.29 Every cycle shares edges with only two regions: its "inside" and its "outside".

Proof: The proof follows immediately from Lemma 6.2.28. □

Definition 6.2.30 Define a graph G'' as follows:

1. Every face in G is a vertex in G''.
2. For every edge of G that separates two faces and does not belong to any cycle C_i, there is an edge between the corresponding vertices in G''.
3. Every two faces F₁ and F₂ that satisfy the hypothesis of Lemma 6.2.27 have an edge between them.

Lemma 6.2.31 If faces F₁ and F₂ are on different sides of the same cycle C, then there is no path in G'' between F₁ and F₂.
Every two faces F\ and F 2 that satisfy the hypothesis of Lemma 6.2.27 will have an edge between them. L e m m a 6.2.31 If faces F\ and F 2 are on different sides of the same cycle C , then there is no path in G" between F\ and F 2. j P ro o f: Assume Flt F2 and C are as in the Lemma, and assume there is a path, j l Then somewhere along this path there is an edge e (in G") between some face Fin j inside C and a face F ^ outside C. Suppose e was generated by condition 2 (see Definition 6.2.30) then Fin and F ^t share an edge e\ in G then this edge should be part of cycle G (because C separates between F{n and JF V ut) and therefore e\ cannot be the dual of e. Therefore e was generated by condition 3. If Fin shares some edge ein w ith cycle Ci and Fout shares some edge eout w ith the same Ci then Ci will interlace with C, therefore th at cannot be the origin of e either. Therefore the edge e cannot exist in (?", a contradiction. □ L e m m a 6.2.32 There is a path between two faces in G" iff they belong to the same region in G. 78 P ro o f: The statem ent th at if there is a p ath between two faces, then they belong to the same region, is the counter-positive of Lemma 6.2.31. We will prove th at if two faces belong to the same region, then there is a path between them in G " by induction on the num ber of cycles in the set {Cx, C2, _ _ , Ct}, induction on t. The Lemma is trivial if there are no cycles because if a graph is connected then its geometric dual is connected. Assume th at the Lemma holds for i cycles. Let us remove Cj+i from the set (but not from the graph). By doing this we introduce new edges between faces th at share edges (in G") with cycle Cj+i, and union the regions th a t share edges with Ci+1. By the induction hypothesis we can find a path between every two faces Fi and F2 th at belong to the same region. Let us assume th at F\ and F2 are on the same side of Ci+1 (if not, then they should not be in the same region). 
Without loss of generality we can assume that they are "outside" C_{i+1}. The path between them either does or does not include faces inside C_{i+1}. In the second case there is a path in the original G'', because the path does not use any new edges. In the first case the path looks like F₁, ..., F_out1, F_in1, ..., F_in2, F_out2, ..., F₂, where F_out1 and F_in1 share an edge e₁ with C_{i+1}, and F_out2 and F_in2 share an edge e₂ with C_{i+1}. But there is a path (in G'') between F_out1 and F_out2 (see condition 3 in Definition 6.2.30). Therefore there is a path F₁, ..., F_out1, ..., F_out2, ..., F₂ in G'' after we add C_{i+1} back to the set (because F_out1 and F_out2 share edges with the outside of C_{i+1}). □

Lemma 6.2.33 We can find the regions in O(log(n)) time using n processors.

Proof: Create the graph G'' described in Lemma 6.2.32. Every non-cycle edge contributes one edge to G'', and every cycle edge contributes two edges to G'': one for an inside face and one for an outside face. Therefore the number of edges is O(n). Using Shiloach and Vishkin's [26] connected components algorithm we can find the regions in O(log(n)) time. □

The region-cycle graph has a vertex for every region formed by the cycles and an edge for each cycle; Lemma 6.2.29 shows that the edges are well defined. A region vertex is incident to a cycle edge in the graph iff the cycle shares an edge with the region. Using this fact we can compute each region and its weight. The weight of a region does not include the weight on its boundary. The weight of a vertex is the weight of its corresponding region, and the weight of an edge is the weight of its corresponding cycle (the edges only).

Lemma 6.2.34 The region-cycle graph is a tree.

Proof: We prove this by induction on the number of cycles in the set {C₁, C₂, ..., C_t}. The lemma is obviously correct for a graph with only one cycle, since there are only two regions, the "inside" and the "outside".
The region-cycle graph then contains only two vertices and an edge between them: a tree. Assume the lemma is correct for i cycles. If we remove some cycle C from the set {C₁, C₂, ..., C_t} (but not from the graph G), the two regions that share edges with this cycle are unioned. In the region-cycle graph the two vertices that represent those regions are unioned, and the edge that represents the cycle is eliminated. By the induction hypothesis the resulting region-cycle graph is a tree. But given a tree, if we replace a vertex v in it by two vertices with an edge between them, and divide the edges of v in some arbitrary fashion between the two new vertices, the result is still a tree. Therefore the original region-cycle graph was a tree. □

Given the region-cycle tree, we can find its weights. We can use either the Tree Tour [28] or Parallel Tree Contraction [20] to find the weight on the interior of each cycle, and thus the exterior weights; see [19].

Lemma 6.2.35 Given an embedded planar graph G and a set of non-interlacing edge-disjoint cycles, we can compute the induced weights of the cycles in O(log(n)) time using n processors.

Proof: We can find the regions in O(log(n)) time using the Shiloach and Vishkin [26] connected components algorithm, and then find the weights on both sides of each cycle using Tree Tour [28]. □

In this paragraph we give a general description of a method that we will use in later sections. An operation we need in many places is to cut the region-cycle tree along edges that represent small cycles and find the "center piece", that is, the portion of the tree with most of the weight around it. In the following theorem we state some well-known properties of trees; we therefore do not give the proof here.

Theorem 6.2.36

1. Every weighted tree has a single-vertex (1:2 or better) separator.
2. If a tree has more than a single-vertex separator, it has at least a single-edge separator.
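The interior/exterior weights of Lemma 6.2.35 can be read off the region-cycle tree: once the tree is rooted, the weight on one side of a cycle is the subtree sum below the corresponding tree edge, and the other side weighs the total minus that. A sequential one-pass sketch (the thesis uses Tree Tour or parallel tree contraction instead; function and variable names are mine):

```python
def subtree_weights(tree, weight, root):
    # Iterative DFS to get an order in which children come after parents.
    parent, order, stack = {root: None}, [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in tree[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    # sub[v] = total weight of v's subtree; for the tree edge (parent[v], v)
    # this is the weight on v's side of the corresponding cycle.
    sub = dict(weight)
    for v in reversed(order):          # children are processed before parents
        if parent[v] is not None:
            sub[parent[v]] += sub[v]
    return sub
```

With these subtree sums, a vertex can check in O(1) time whether every piece hanging off it is below the separator threshold, which is exactly the test used for the "center piece".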
Let edges representing small cycles be called erasable edges. By shrinking all vertices connected by non-erasable edges, we get a new tree, whose "center piece" we define as the vertex separator of the tree.

Lemma 6.2.37 We can find either the center piece or one erasable-edge (simple cycle) separator in O(log(n)) time using n processors.

Proof: By Lemma 6.2.35 the weights on all the vertices can be found in O(log(n)) time. Every vertex can then check in O(1) time whether all its neighbors have weight less than 2/3. □

Vertical Cycles

We define vertical cycles as those cycles that are induced by non-tree edges in the BFS spanning tree. Miller [19] gave an O(log(n)) algorithm to compute the weights of all the induced cycles. The algorithm is based on finding the least common ancestor of the two vertices connected by each non-tree edge.

6.2.6 Voronoi Diagrams

Using Procedure BFS-Forest we can find a spanning forest for G. In this subsection we define and compute the boundary between the trees in this forest as a subgraph of G. This subgraph bears a strong analogy to the Voronoi diagram for geometric objects in the plane. The following is a graph-theoretic definition of Voronoi diagrams.

Definition 6.2.38 We say that a subgraph H = (V',E') of G is the Voronoi Diagram of G with respect to a noncrossing (spanning) forest T of the face-incidence graph if the edges E' of H are those edges e such that the two faces common to e belong to different components of T. The vertices V' are the vertices induced by E'. If T does not span G, then we add to E' those edges such that one of their faces belongs to T and the other does not; thus H contains the boundary edges between faces in T and faces not in T. These edges can be decomposed into a set of nonnesting simple cycles, which we call boundary cycles.
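The graph-theoretic Voronoi diagram can be illustrated by growing BFS regions from a set of base faces in the face-incidence graph and keeping the edges whose two endpoints land in different regions. An illustrative sequential sketch (the BFS-order tie-breaking and all names are mine, not the thesis's construction):

```python
from collections import deque

def graph_voronoi(adj, sites):
    # owner[v] = the site whose region v falls into (a nearest site,
    # with ties broken by BFS order).
    owner = {s: s for s in sites}
    q = deque(sites)
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in owner:
                owner[u] = owner[v]
                q.append(u)
    # The "diagram": edges whose endpoints belong to different regions.
    boundary = {(min(u, v), max(u, v))
                for v in adj for u in adj[v] if owner[u] != owner[v]}
    return owner, boundary
```

On a path 0-1-2-3-4 with sites 0 and 4, the single boundary edge separates the two grown regions, the one-dimensional analogue of the boundary cycles defined above.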
To continue the analogy between Voronoi diagrams for geometric objects and for topological objects, let S be a set of faces of G such that every face is contained in exactly one tree of T. We call H the Voronoi Diagram of S with respect to T. This definition can be extended from a set of faces to any set of nonnesting cycles, where T is a forest created from the cycles. We call these the base cycles of the (spanning) forest, as well as the base cycles of the Voronoi diagram. Note that we often pick T to be a BFS forest from the base cycles.

In the case of a single-source BFS search of G, the boundaries between layers were always very nice: they decomposed into nonnesting simple cycles in a natural way. Here we must be a little more careful. We do not want the decomposition into cycles to cause cycles to cross an edge of the forest T. Thus we want the boundary of a region to be decomposed into cycles only as long as the cycles do not partition a region. We will simply call this "the boundary of the region decomposed into cycles".

A planar graph G (in solid lines) and a spanning forest (in thin lines) are shown in Figure 6.11. The boundary of the larger tree, decomposed into cycles, creates the two cycles (a,b,c,d) and (e,f,g,h,i,j,k,l). By making two virtual copies of the vertex x, one common to the edges f,g and the other common to j,k, all the vertices of these two cycles are of degree 2, so we can think of both cycles as simple. We call these the virtual vertices of the Voronoi Diagram. The graph obtained by introducing all these virtual vertices into the Voronoi Diagram will be called the virtual Voronoi Diagram.

Lemma 6.2.39 The virtual Voronoi Diagram described above is a planar graph, and the boundary of every face is a set of nonnesting simple cycles.

Proof: Every vertex can be used for one connection only, to the face with the lowest number (see Figure 6.12).
Therefore no two spanning trees can interlace, so the boundary can be seen as a path that separates one piece of the graph. By following the edges of the boundary so that the component is always on the same side, we get a circular path. Therefore we can cut it into a set of simple cycles in a way similar to the one presented in Lemma 6.2.25. □

Figure 6.11: An Example of the Boundary of a Region Decomposed into Cycles.

Lemma 6.2.40 The boundaries of each region can be decomposed into a set of simple cycles.

Proof: The proof is similar to that of Lemma 6.2.25. □

Later on we will link together neighboring regions in the Voronoi diagram. The following lemmas guarantee that the resulting graph is still planar.

Lemma 6.2.41 Given a planar graph G, the following operations preserve planarity:

1. Removal of an edge from G.
2. Given an edge (u,v) in G, adding a new vertex w and the new edges (u,w), (w,v) to G.
3. Adding an edge between two vertices that belong to the same face.

Proof: We leave the proof as an exercise to the reader. □

Figure 6.12: Why Virtual Vertices Do Not Interfere With Planarity

Lemma 6.2.42 Consider the following graph H': every region is a vertex, and there is an edge between two regions iff they share an edge. Then H' is a planar graph.

Before we give a formal proof we would like to give the intuition. If we have a planar graph, its dual is still planar (as was proved by Euler). If we take some arbitrary set of vertices that all belong to the same face and union them together, then the face may no longer be simple, but the dual is still planar. We can view the vertices that belong to some I-set, like x in Figure 6.11, as two vertices that were unioned together, and therefore the dual is still planar.

Proof of Lemma 6.2.42: We will show how H' can be obtained from G using planarity-preserving operations only. If a vertex v belongs (see Definition 6.2.16)
If a vertex v belongs (see Definition 6.2.16) 85 to some 7-set 7 then we add edges to create new small faces of size three for all faces th at do not belong to 7-set 7 (see Figure 6.12). The result is a planar graph H" with no “pinch” vertices. H ' is the dual of H" and therefore is also a planar graph. □ We would like to find the weight on the boundaries of every region. The prob­ lem is similar to the one we discussed in subsection 6.2.5, but with one difference, namely th at the boundaries of a region may share edges w ith boundaries of other regions. O ur solution is also similar to the solution proposed there. D e fin itio n 6.2.43 The V -re g io n s graph is a bipartite graph with a vertex for every region and for every Voronoi Diagram. A region and Voronoi Diagram are connected iff the Voronoi Diagram is the boundary of the region. L e m m a 6.2 .4 4 The V-regions graph is a tree. P ro o f: It is easy to see th at the graph is connected because G is connected. A connected graph with no cycles is a tree. We will show th at it has no cycles. Assume conversely th at the V-regions graph has a simple cycle: 72i, VT>i, R 2, V D 2, R 3, V D z, . . . , V D i, R i. V D i has a cycle which is the boundary of R i, where R \ is on one side of the boundary and R 2 is on the other side. The path R 2iV D 2, R 2,V D 3, . . . ,V D {,R i implies a p ath between the two sides of the cycle th at separates R \ from R 2, but in a planar graph every cycle divides the graph. A contradiction. □ L e m m a 6.2.45 We can compute the boundaries of each region in O(log(n)) time using n processors. P ro o f: We build the V-region tree, compute the weight of every region and then compute the weight on every boundary using Tree Tour [28] □ 86 6 .2 .7 C rea tin g S p an n in g T rees In this subsection we describe an algorithm th at takes B FS forest in G and creates a spanning forest (of the vertices) in G. 
Throughout this section we assume that we have a non-crossing spanning forest T of the faces of G. The distances between vertices common to the same face are at most d/2. Therefore we expect the radius of the spanning tree of vertices to be at most the radius of the BFS tree of faces multiplied by d/2, as in Figure 6.13.

Figure 6.13: Creating a spanning tree of G

Definition 6.2.46 Level(F) is the level of face F in the faces-BFS tree.

Definition 6.2.47 If the forest T includes an edge (F_1, F_2) which is marked v, and F_1 belongs to the higher level, then v is the linkage vertex of F_1.

Lemma 6.2.48 In Procedure Build-spanning-forest, for every vertex v such that dist(v) > 0, there is some neighbor u such that dist(v) > dist(u).

Proof: v got its dist value by some path to a linkage vertex. The next vertex in the path has a smaller dist value. □

Lemma 6.2.49 The complexity of the algorithm is O(log n) time using n processors.

procedure Build-spanning-forest(G(V,E))
  for every face F in parallel do
    Give every vertex in F the value (d/2)·Level(F) plus the distance to its linkage vertex.
  od
  for every vertex v in parallel do
    dist(v) = minimum of the values v got in the previous step.
    Link v to its neighbor with the lowest dist value.
  od
end Build-spanning-forest

Figure 6.14: Building a Spanning Forest

Proof: We cut every face at the linkage vertices, and then compute the distances by list ranking from left to right and from right to left. □

6.3 Reduction Step

In this section we present an algorithm that, given a planar graph, computes a subgraph that contains a small simple cycle separator. Every separator of this subgraph is a separator of the original graph as well.
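The proof of Lemma 6.2.49 computes distances along each cut-open face by list ranking, a primitive the chapter uses repeatedly. The following is a sequential simulation of the pointer-jumping idea behind list ranking; each iteration of the outer loop corresponds to one parallel PRAM round, so O(log n) rounds suffice. This is an illustration only, not the optimal list-ranking algorithm of [2].

```python
def list_rank(succ):
    """Return, for each element, its distance (number of links) to the end
    of the list.  succ[i] is the index of the next element, or None for the
    tail.  Each while-iteration simulates one parallel pointer-jumping round."""
    n = len(succ)
    rank = [0 if succ[i] is None else 1 for i in range(n)]
    nxt = succ[:]
    changed = True
    while changed:
        changed = False
        new_rank, new_nxt = rank[:], nxt[:]
        for i in range(n):            # all i "in parallel" on a PRAM
            j = nxt[i]
            if j is not None:
                new_rank[i] = rank[i] + rank[j]   # jump over j
                new_nxt[i] = nxt[j]
                if nxt[j] is not None:
                    changed = True
        rank, nxt = new_rank, new_nxt
    return rank
```

Running `list_rank([1, 2, 3, None])` on the list 0 -> 1 -> 2 -> 3 returns `[3, 2, 1, 0]` after two pointer-jumping rounds.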
Theorem 6.3.1 There exists a parallel algorithm that, given an embedded weighted 2-connected planar graph G with f faces, such that each face is of size at most d and no face has weight > 2/3, computes a subgraph H that is 2-connected, with at most 3·f/k faces, each of size at most d·(4 + 5·√k), and no induced face of weight > 2/3. This algorithm uses O(log² n) time and n·k² processors.

The algorithm described in Theorem 6.3.1 is the same as the operation Reduce-G described in Section 1.3. We will call this algorithm Reduce-G. An outline is given in Figure 6.15.

Lemma 6.3.2 Every separator of H is a separator of G.

procedure Reduce-G
1. call Find-k-Neighbors(G(V,E), k) (Figure 6.2)
2. call Luby-Independent-Set(G(V,E)) (Figure 6.3)
3. call Find-layers(G) (Figure 6.5)
4. call Generate-the-Voronoi-diagram (Figure 6.17)
5. call Find-Short-Paths-Between-I-sets (Figure 6.23)
6. Return these short paths as H.
end Reduce-G

Figure 6.15: Outline of a Procedure to Compute H (see Theorem 6.3.1)

Proof: We can replace every face in H by the faces that compose it in G. The weight on either side of the separator is not changed by this, and therefore this is also a separator in G. □

We assume that we can find short paths between neighboring I-sets. However, this assumption may not be true. Most of this section will be dedicated to justifying this assumption and giving algorithms to handle several complicated cases.

We will assume that the graphs G and H have the following compact representation: each chain, that is, a maximal simple path with internal vertices of degree two, has been replaced by an edge plus a number indicating the length of the original chain. For the rest of this section let G = (V,E) be an embedded weighted 2-connected planar graph with f faces, each of size at most d, and no face of weight > 2/3.
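The compact chain representation just described can be sketched as follows. This is a sequential illustration under simplifying assumptions (a simple graph in which no cycle consists entirely of degree-2 vertices), not the dissertation's parallel routine; the function name is invented for the example.

```python
from collections import defaultdict

def compress_chains(edges):
    """Replace every maximal path whose internal vertices have degree 2 by a
    single weighted edge (u, v, length).  `edges` is a list of (u, v) pairs of
    a simple graph; we assume no cycle is made entirely of degree-2 vertices."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    terminals = {v for v in adj if len(adj[v]) != 2}
    compressed, seen = [], set()       # seen: directed start edges already walked
    for t in terminals:
        for nbr in adj[t]:
            if (t, nbr) in seen:
                continue
            prev, cur, length = t, nbr, 1
            while cur not in terminals:            # slide through degree-2 vertices
                nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
                prev, cur, length = cur, nxt, length + 1
            seen.add((cur, prev))                  # skip the walk from the far end
            compressed.append((t, cur, length))
    return compressed
```

For a theta-shaped graph with two terminals joined by three chains of lengths 1, 2, and 3, the routine returns exactly three weighted edges, one per chain.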
6.3.1 Finding a Base Cycle for Each I-set

In Section 6.2 we found the k-neighborhoods of each face of G and a maximal independent set of these neighborhoods. In this subsection we deal with the case where the centers of two neighboring I-sets are far apart. For example, if they are on top of the Twin Towers, then the distance between them (going down and up) is quite long. The obvious solution is to move the centers of these I-sets to ground level. In this subsection we formalize this idea and give an algorithm.

We want to reduce the general planar separator problem to the case where all I-sets have spanning trees of diameter O(d·√k). We allow the face which is the center of the BFS to be of size O(d·√k). We call this cycle the base cycle of the I-set. Note that after applying the procedure Find-Base-Cycle the boundary of an I-set need not be a simple cycle. We will use the fact that each I-set has a base cycle in subsection 6.3.2. In subsection 6.3.4 we address the issue of non-simple boundaries. Figure 6.16 contains the procedure Find-Base-Cycle, which finds a small cycle near the boundary in every I-set.

procedure Find-Base-Cycle
for each I-set S in parallel do
1. Compute the layers of the neighborhood S up to the last complete layer.
2. Divide each layer into nonnesting edge-disjoint simple cycles.
3. Compute the interior and exterior weight of each simple cycle.
4. For every layer, find its size and assign it a value, which is its size plus twice the distance from the last layer.
5. Find the layer L with the lowest value.
6. Use L to separate S from G.
7. If the weight of every new face is less than or equal to 2/3 then use L to separate the graph and apply Miller's algorithm to S. (We have a BFS of S, and L separates everything outside of S.)
   else find the cycle C that separates the 2/3 (or more) region R from the rest of S.
   Return R with the base cycle C, and the portion of the BFS tree of S which belongs to R.
od
end Find-Base-Cycle

Figure 6.16: Finding a Base Cycle for each I-set.

If procedure Find-Base-Cycle has not found a separator, then it has constructed a subgraph R of G. Note that the original BFS of S and its layering, restricted to R, is now a BFS and layering of R from the cycle C. This fact follows since C is in the old layering of S. Note also that the number of levels in R is at most √k. The following two lemmas show that if there is a small layer that separates the graph, then Miller's algorithm will return a separator of size O(d·√k).

Lemma 6.3.3 We can find a layer inside every I-set such that the layer size, plus twice the distance (in faces) between the layer cycle and the boundary, is at most 2·√k.

Proof: The proof is similar to the one given by Lipton and Tarjan [16]. We give every layer a value that consists of its size plus twice the distance to the boundary, and find a minimum. □

Lemma 6.3.4 The complexity of the procedure Find-Base-Cycle is O(log n) time using n processors.

Proof: The proof is similar to the proofs of Lemma 6.2.24 and Lemma 6.2.25. □

Lemma 6.3.5 The number of base cycles is at most f/k.

Proof: The size of the independent set is at most f/k, and we have one base cycle per I-set. □

Lemma 6.3.6 If there is a small layer L that separates the graph, Miller's algorithm will find a simple cycle separator.

Proof: In the BFS tree of S, layer L creates only leaf faces (see the definition in Miller's algorithm [19]). Therefore the size of the separator will be O(√(d·k) + d·√k) < O(√(d·f)), under the assumption that O(k) < O(f). □

6.3.2 Finding a Small Diameter Spanning Forest

In the last subsection we showed how to reduce the general case to the case where the boundary of each I-set is a small simple cycle "close" to the boundary of the I-set.
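The minimization in the proof of Lemma 6.3.3 is a simple scan over layers. A sequential sketch follows; the function name and the layer-numbering convention (layer i lies at distance depth - i from the boundary) are assumptions made for the illustration.

```python
def best_layer(layer_sizes):
    """Pick the layer minimizing (size + 2 * distance to the boundary), as in
    the proof of Lemma 6.3.3.  layer_sizes[i] is the number of edges in layer
    i; we assume the boundary lies just past the last layer, so layer i is at
    distance len(layer_sizes) - i from it.  Returns the index of the cheapest
    layer."""
    depth = len(layer_sizes)
    values = [size + 2 * (depth - i) for i, size in enumerate(layer_sizes)]
    return min(range(depth), key=values.__getitem__)
```

An averaging argument of the Lipton-Tarjan flavor then shows that the minimum value is small: if every one of the last √k layers had value above 2·√k, the layers would contain too many edges in total.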
In this subsection we would like to find a set of new base cycles close to the boundaries between the extended I-sets in the Voronoi diagram. Even when the base cycles are nice and small they may be far apart (for example, the base of a wine glass may be an I-set, and the next I-set may only be where the glass widens).

In this section the goal is to find a spanning forest, and the Voronoi diagram of these cycles, satisfying the following conditions:

1. The size of each base cycle or boundary cycle is O(d·√k).

2. The induced weight on a base cycle is at most 1/3.

3. The number of base cycles in the diagram is at most f/k.

4. The radius of the spanning forest is O(d·√k).

Note that to maintain these conditions the base cycles will not necessarily be the same base cycles that we found in the previous subsection. If one of them is on the "top of a mountain", we would prefer to have a new one "at the base of the mountain". Figure 6.17 contains the algorithm in procedure form.

Lemma 6.3.7 If a face F is at distance i > √k from the nearest I-set boundary, then there is a small cycle (of size d·√k or less) between F and the boundary of the I-set.

Proof: Assume not. Between every two layers we have at least 2·√k faces (to "give" enough edges to two layers with d·√k edges each). Therefore we have

procedure Generate-the-Voronoi-diagram
1. Extend the BFS to all of G using procedure BFS-Forest (Figure 6.6), which returns a BFS forest of G.
2. call Find-Base-Cycle (Figure 6.16)
3. call Divide-into-cycles (Figure 6.7), which divides the layers in G into cycles.
4. For each simple cycle compute its interior and exterior weight using the Tree Tour technique [28].
5. Check if any small (O(√n)) cycle is a separator.
6. Find all the simple cycles from step 4 of size at most d·√k. We will call this set the bottom cycles.
7. for every bottom cycle, discard everything on the side with the smaller weight (less than 1/3) from the graph.
   If a base cycle was discarded, then the bottom cycle will replace this base cycle.
8. Leave the BFS forest in G as it is, but link it to the new base cycles. Assign every new face (those created by bottom cycles) to the nearest component that shares an edge with it.
9. Return the Voronoi diagram of the "center region".
end Generate-the-Voronoi-diagram

Figure 6.17: Finding Voronoi Diagrams

cycles of 2·√k faces each; k faces in these cycles are at distance √k from F (by taking 2·√k from the first layer, 2·√k − 2 from the second layer, and so on, on each side). That proves that the k-neighborhood of the face F does not contain any independent-set face, in contradiction to the way we computed the I-sets. □

Lemma 6.3.7 is important for two cases, which are demonstrated in Figure 6.18:

1. If our I-sets are on the tops of high hills (with very few faces), such as the top circles in Figure 6.18, then they will be far apart. For example, if we have a graph that looks like a comb, and the I-sets are located on the ends of the "teeth", then they are far apart. By Lemma 6.3.7 we can find for each I-set a small cycle, near the boundary, that contains the base cycle. We will replace the base cycle by the small cycle.

2. There may be two I-sets such that the boundary between them is in the middle of a very deep and narrow hole. In this case a path to the boundary may be very long (via the "bottom" of the hole). In order to prevent this, we find a small cycle near the "ground level" and establish the boundary there. Lemma 6.3.7 assures us that we can find such a cycle.

Figure 6.18: An example of multi-source BFS from top cycles, and layering

In the following lemmas we give the formal claims:

Lemma 6.3.8 The distance from each cycle to the nearest boundary or small (d·√k) cycle, in both directions⁴, is at most √k.
⁴ "Inside" the cycle and "outside" the cycle.

Proof: The proof is similar to that of Lemma 6.3.7, and we leave it to the reader. □

Lemma 6.3.9 For every region R, the BFS tree of R with the base cycle of its I-set as source has radius at most 2·d·√k.

Proof: The radius of the tree inside the I-set (from the base cycle) is at most √k (see subsection 6.3.1), and the distance from the boundary to the small cycle outside is at most 1 + √k. □

The importance of these lemmas is that they assure us that we can find small base cycles such that if their Voronoi regions touch, they are near each other. This enables us to find short paths between neighboring I-sets (these paths will create the faces of H).

Lemma 6.3.10 We can find the "central region" in O(log n) time using n processors.

Proof: The proof follows immediately from Lemma 6.2.37. □

Lemma 6.3.11 We can check in O(1) time using n processors whether a bottom (small) cycle separates a base cycle from the "central region".

Proof: Note that a bottom cycle separates a base cycle from the center iff the layers on the side of small weight have a lower index than the layers on the side with most of the weight. We can check this in O(1). □

Lemma 6.3.12 The complexity of the procedure Generate-the-Voronoi-diagram is O(log n) time using n processors.

Proof:

1. The complexity of finding the layers and the BFS forest is O(log n) (see Lemmas 6.2.13 and 6.2.18).

2. The complexity of Find-Base-Cycle is O(log n) (see Lemma 6.3.4).

3. The complexity of Divide-into-cycles is O(log n) (see Lemma 6.2.25).

4. We can check the weight on each cycle in O(log n) time (see Lemma 6.2.35).

5. Finding the size of each cycle (to see if it is small enough) takes O(log n) using list ranking.

6. After we cut the graph at the bottom cycles, we can check if there is a base cycle inside each one of them using a connected-components algorithm [26] that takes O(log n) time. □
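Step 6 above appeals to the parallel connected-components algorithm of Shiloach and Vishkin [26]. As a sequential stand-in, the same labeling can be sketched with union-find; this illustrates what is computed, not the parallel method itself.

```python
def connected_components(n, edges):
    """Label vertices 0..n-1 with a canonical representative per component.
    Sequential union-find stand-in for the O(log n) parallel algorithm of
    [26]; `edges` is a list of (u, v) pairs."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # union the two components
    return [find(v) for v in range(n)]
```

With the graph cut along the bottom cycles, two faces get the same label exactly when some base cycle is reachable from one iff it is reachable from the other.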
6.3.3 Computing the Face-Vertex Dual Subgraph

In this subsection we construct the graph as required in Theorem 6.3.1. From our input graph G we have constructed a possible subgraph of G which satisfies all the input conditions. In subsection 6.3.2 we found a Voronoi subgraph of G with a small-diameter spanning forest. We assume that the Voronoi diagram is biconnected; in subsection 6.3.4 we will deal with the case when it is not.

The vertices of the Voronoi diagram are common either to faces of two different I-sets or to faces of three or more different I-sets. We are interested in vertices of the second type. However, we must be careful in detecting those, as a "pinch vertex" (such as the vertex x in Figure 6.11) may or may not be a "real" meeting point of three I-sets.

Definition 6.3.13 For every vertex v there are some edges in G which are marked v. Some of them may be in the BFS forest as part of some tree T; call this set of edges E_v, and call the set of faces which are connected by E_v, F_v. Split the circular list of faces around v by the faces in F_v (if a face F is in F_v, we delete the link to the next face in the cyclic order). We call the resulting sub-lists the equivalence classes of v.
Else, we can compute Equivalence classes of v in O{log(n)) time using list ranking. □ L e m m a 6.3 .1 8 A graph with f faces where every vertex has degree 3 or more, as at most 3 • / — 6 edges. P ro o f: It is a well known fact. The reader should be able to prove it using E uler’s formula ( / — e + v=2). □ L e m m a 6.3.19 The number of meta-edges is at most 3 • £ — 6. 97 Figure 6.19: The subgraph H P ro o f: As we have shown in Lemma 6.2.42, the extensions of I-sets create a planar graph. The meta-edges are the edges of this graph, and the Lemma follows imm ediately from Lemma 6.3.18. □ Figure 6.19 describe the subgraph H th at we expect to get. L e m m a 6 .3 .2 0 We have a new face around every meta-edge. The face size is at j most d • (4 -f 5 • y/k) P ro o f: The meta-edge separates extensions of two I-sets. We create the face by paths from the centers of both I - sets to both ends of the meta-edge, and in the worst case we may have a small cycle on both ends of the meta-edge. The total distance within faces is: 2 + 2 • y/k in each I-set, plus (1 + y/k) for every path 98 from each /-set to the small cycles, and we have four such paths. We can translate every face size d into a path of size d/2, so we get a path length d • (4 + y/k) edges. Each small cycle of size at most d ■ y/k, so total path length in both small cycles together is at most d • y/k. By summing everything up we get the result we claim, j □ A problem th at may arise is what to do when these paths kiss each other, and so create news small faces. This cannot happen inside a component, because all the paths belong to the same spanning tree, but it can happen on the boundary (meta-edge) between the components. The obvious solution is to take the shortest of the two paths only. Another problem arises when if one of the new faces has a weight of | or more. In th at case we need to discard all the faces outside this face, and return all the faces inside. 
Lemma 6.3.21 We can find a spanning tree of diameter O(d·√k) inside every new face.

Proof: The boundary of a new face has size at most O(d·√k) (Lemma 6.3.20), and the distance from any face located inside the new face to the boundary of the new face is O(d·√k) (Lemma 6.3.9). □

Figure 6.20 contains a formal outline of the algorithm:

procedure Face-vertex-dual
1. Link the type 2 vertices to the base cycles using "short" paths.
2. If there is more than one path between two type 1 vertices (because two paths "kiss"), pick the shortest one and discard the others.
3. Remove one edge from every base cycle.
4. Compute the induced weight of each new face.
5. If the weights of all the faces are less than 2/3 then discard all base cycles that are either not connected or only 1-connected to the rest of the graph,
   else let F be the face with weight > 2/3:
   (a) Remove the edges and vertices of G that are either external to F or internal to the base cycles of F.
   (b) Construct a spanning tree for the subgraph which consists of F and the induced BFS trees of the two regions of the Voronoi diagram that contain the face F. (The diameter of the tree is twice the diameter of one component, O(d·√k).)
   (c) Find a separator for this graph using the algorithm from Theorem 6 in [19].
end Face-vertex-dual

Figure 6.20: Computing a Dual of the Voronoi Diagram and the Subgraph H

6.3.4 Mesa Partition

After applying algorithm Generate-the-Voronoi-diagram to the graph G, we have found at most O(f/k) base cycles, discarded their interiors, and found a Voronoi diagram for them from a spanning forest of the base cycles in G. Every tree in this spanning forest has diameter at most O(√k). Each region of the Voronoi diagram contains exactly one base cycle, but the boundary of the region need not even be a connected subgraph of G.
In this subsection we show how to reduce the problem to the case when all the boundaries are (virtually) simple and form a 2-connected subgraph of G.

The first case we consider is when the Voronoi diagram consists of a single simple cycle, for example when our graph is composed of thin cylinders. In this case, every end-of-cylinder I-set (after extension) is a horizontal round cut of the cylinder, but the Voronoi diagram cycle may be quite long. If most of the weight is inside one of the base cycles and the other base cycle is outside, we discard the outside. If this is not the case, then most of the weight is between the two base cycles. Then we can find a small spanning tree between them and find a separator using Miller's algorithm (see Figure 6.21).

Lemma 6.3.22 If the Voronoi diagram is a simple cycle, then there are only two regions.

Figure 6.21: A Voronoi Diagram which is One Cycle

Proof: Assume that there is a region R_1 on one side of the Voronoi diagram, and regions R_2 and R_3 on the other side. Then some of the edges of the Voronoi diagram separate R_1 from R_2 and some separate R_1 from R_3 (none of them can separate R_2 from R_3). If we go around the Voronoi diagram, somewhere an edge that separates R_1 from R_2 touches an edge that separates R_1 from R_3. Therefore there is a type 2 vertex that touches R_1, R_2, and R_3. A contradiction to the assumption that the Voronoi diagram is a simple cycle. □

Lemma 6.3.23 The spanning tree between two neighboring base cycles will have diameter O(d·√k).

Proof: We can go from one base cycle to any vertex by a path in the spanning tree rooted at the base cycle (inside its component), or by a path to the boundary, then to the other base cycle, and then to any vertex in the other spanning tree. Therefore the diameter is of the order of the diameter of the components, which is O(d·√k) (Lemma 6.3.9).
□

We will now deal with a harder problem, when the boundary of a component is composed of a set of cycles. Let R be one of the regions, C its base cycle, and B_1, ..., B_t the boundary of R decomposed into cycles, as described in subsection 6.2.4.

Even though it is not necessary, here is our intuitive picture of R. We think of R as a relief map with boundary C. The contour lines of this map are the layers. The map is of the Southwest region, which has mesas. In R the mesas are the faces B_1, ..., B_t. Between these mesas are ridges. We show that in our case the ridges form a spanning tree of the mesas. Our map contains no local minima. We partition the mesas via the ridges. The formal definitions are given below.

Let T be the BFS spanning tree of R from C. We say that an edge (vertex) e (v) of R is a ridge edge (vertex) if the induced cycle in T from the two faces common to e (v) forms a nontrivial partition of the set B_1, ..., B_t for t > 1. We will show that the ridge edges and vertices form a forest that spans the boundary cycles B_1, ..., B_t.

We would like to find short paths between two vertices on the base cycle around the big ridges, and then to discard the inside of that cycle. The problem of finding the weight of an induced cycle was investigated by Lipton and Tarjan [16] for the sequential case, and by Miller [19] for the parallel case. Miller gave a fast O(log n) solution, which we will use. The radius of the spanning tree is at most 2·d·√k (see Lemma 6.3.9), so we do not have a problem of long paths.

procedure Mesa-Partition
1. Compute the weight of each mesa (see Lemma 6.2.45).
2. For every non-tree edge find the induced weight of its two cycles. We say that the cycle with the smaller weight is the induced cycle of this non-tree edge.
3. If one of them is a separator, we are finished.
4. Induced cycle A is contained in induced cycle B if every edge that A shares with the base cycle is shared with B as well.
   We keep all the cycles that are not contained in any other cycle, and erase the rest.

Figure 6.22: Determining the Simple Cycle Boundary for each I-set and Removing all Others

Lemma 6.3.24 We can find the weight on each mesa in O(log n) time using n processors.

Proof: The mesas do not interlace and do not share edges. Therefore by Lemma 6.2.35 we can find the weights in O(log n) time. □

Lemma 6.3.25 We can check if a cycle is contained in another cycle in O(log n) time.

Proof: Two cycles cannot cross each other (because they were created from the same spanning tree). Therefore we can look at them as pairs of parentheses and look for those that are not nested, a simple list-ranking problem. □

Proof of Theorem 6.3.1:

Correctness:

1. The graph is two-connected because the Voronoi diagram (after procedure Mesa-Partition) is connected (and therefore two-connected), and we build a face around every meta-edge.

2. We check in procedure Face-vertex-dual that every new face has weight of at most 2/3.

procedure Find-Short-Paths-Between-I-sets
1. call Build-spanning-forest (Figure 6.14); the procedure takes a spanning forest of the faces of G and returns a spanning forest of the vertices of G.
2. If the Voronoi diagram of any component is not biconnected then call Mesa-Partition (Figure 6.22).
3. If the Voronoi diagram consists of a simple cycle then do
   (a) Pick a point on the Voronoi diagram and construct a shortest path back to each of the two base cycles of G.
   (b) Using this path and the BFS tree in each of the two regions, construct a spanning tree of G with radius O(d·√k).
   (c) Find a separator using the algorithm from Theorem 6 in [19].
4. call Face-vertex-dual to build the subgraph H.
end Find-Short-Paths-Between-I-sets

Figure 6.23: Finding short paths between I-sets that are neighbors in the Voronoi diagram.

3.
The number of new faces is at most 3·f/k, because that is the upper bound on the number of meta-edges (see Lemma 6.3.19), and we have a new face around each meta-edge.

4. The size of each new face is at most d·(4 + 5·√k) (see Lemma 6.3.20).

Complexity: The proof follows from Lemmas 6.2.4, 6.2.6, 6.2.13, 6.2.18, 6.2.35, 6.3.12, 6.3.24, and 6.3.25. □

6.4 Applications of Reduce-G

6.4.1 Stopping the Recursion

We apply Reduce-G repeatedly until one of the following conditions holds:

1. The procedure Reduce-G returns a subgraph H which is a simple cycle separator.

2. The number of faces is small enough to compute a BFS of the graph H via matrix multiplication in O(log² n) time using P processors.

6.4.2 Number of Iterations

If we apply the procedure Reduce-G repeatedly, we can pick a larger and larger value for k. Therefore we will need only O(log(log f)) iterations to find a separator using O(n) processors. We show that for P = n^(1+ε) a constant number of iterations is sufficient.

Define f_i to be the number of faces in the reduced graph after iteration i, and k_i to be the k used in iteration i. In the first iteration P = f·k_1² and f_1 = 3·f/k_1, but in the second iteration P = f_1·k_2², so

P = f·k_1² = (3·f/k_1)·k_2², which gives k_2² = k_1³/3, i.e., k_2 = k_1^(3/2)/√3.

In the same way we can show that in the third iteration k_3 = (k_2)^(3/2)/√3, and in the i-th iteration k_i = k_1^((3/2)^(i−1)) / 3^((3/2)^(i−1) − 1). An upper bound on i is obtained when k_i ≥ f, since then we finish the algorithm.

If we take P = 2·f (that is, k_1² = 2), then an upper limit on the number of iterations is O(log(log f)). If, on the other hand, we take k_1 = f^(ε/2) where ε is some constant, then the number of iterations is a fixed constant i satisfying the equation k_i ≥ f. Note that i is a constant that depends only on ε for large f. A small ε means a large constant, and vice versa.
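Under the recurrence sketched above, k_{i+1} = k_i^(3/2)/√3 (as reconstructed here; the constants are from this sketch, not a verified restatement of the analysis), a few lines of Python illustrate the two regimes: a slowly growing, roughly doubly-logarithmic iteration count for a fixed starting k, and a bounded count when k starts as a fixed power of f.

```python
import math

def iterations_until_done(f, k):
    """Count Reduce-G rounds until the current k reaches f, under the
    (reconstructed) recurrence k_{i+1} = k_i**1.5 / sqrt(3) of Section 6.4.2.
    Assumes k > 3 so that the sequence is strictly increasing; this is a toy
    model of the iteration count, not the algorithm itself."""
    i = 0
    while k < f:
        k = k ** 1.5 / math.sqrt(3)
        i += 1
    return i
```

For a fixed starting k, squaring the face count f adds only a couple of rounds, while starting from k = f^(ε/2) the count stays bounded as f grows.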
The separator size also increases with the number of iterations. Using the result of the previous section, we can conclude that the separator size will be about (3·5²)^(i/2) · 2·d·√f.

It is instructive to compute the number of iterations for a few values of P. If the number of processors is P = n^1.5 and regular n³ matrix multiplication is used, then two iterations will suffice. For P = n^1.8 we need only one iteration. By using processor-efficient matrix multiplication we can improve these results.

6.5 Conclusion

As mentioned in the Introduction, we have found an optimal algorithm for the separator problem that needs O(√(n·log(n))) time; see [11]. It is open whether an optimal polylogarithmic algorithm exists. Another open question is BFS of planar graphs. Pan and Reif [23] presented a good algorithm for BFS, but the product P·T is about n^1.5. Can we find a more efficient BFS algorithm for planar graphs?

References

[1] A. Aho, J. Hopcroft, and J. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974.

[2] Richard Anderson and Gary L. Miller. Optimal Parallel Algorithms for List Ranking. Technical Report, USC, Los Angeles, 1987.

[3] S. Baase. Computer Algorithms: Introduction to Design and Analysis. Addison-Wesley, 1983.

[4] D.S. Hirschberg, A.K. Chandra, and D.V. Sarwate. Computing connected components on parallel computers. Communications of the ACM, 22(8):461-464, August 1979.

[5] R. Cole and U. Vishkin. Approximate and exact parallel scheduling with applications to list, tree, and graph problems. In 27th Annual Symposium on Foundations of Computer Science, pages 478-491, IEEE, Oct 1986.

[6] R. Cole and U. Vishkin. Faster Optimal Parallel Prefix Sums and List Ranking. Submitted to Information and Computation, 1987.

[7] D. Coppersmith and S. Winograd. Matrix multiplication via Behrend's theorem. In STOC (to appear), 1987.

[8] H.N. Djidjev.
On the problem of partitioning planar graphs. SIAM J. Alg. Discrete Math., 3(2):229-240, June 1982.

[9] S. Even. Graph Algorithms. Computer Science Press, 1979.

[10] H. Gazit. An optimal randomized parallel algorithm for finding connected components in a graph. In 27th Annual Symposium on Foundations of Computer Science, pages 492-501, IEEE, Oct 1986.

[11] H. Gazit and Gary L. Miller. An O(√(n·log(n))) optimal parallel algorithm for a separator for planar graphs. Manuscript.

[12] T. Hagerup. Optimal Parallel Algorithms on Planar Graphs. Technical Report, Universität des Saarlandes, September 1987.

[13] D.W. Hall and G. Spencer. Elementary Topology. John Wiley, 1955.

[14] F. Harary. Graph Theory. Addison-Wesley, 1969.

[15] D. Harel and R.E. Tarjan. Fast algorithms for finding nearest common ancestors. SIAM J. of Computing, 13:338-355, May 1984.

[16] R.J. Lipton and R.E. Tarjan. A separator theorem for planar graphs. SIAM J. of Applied Math., 36:177-189, April 1979.

[17] M. Luby. A simple parallel algorithm for the maximal independent set problem. SIAM J. Comput., 15(4):1036-1053, November 1986.

[18] Gary L. Miller and Vijaya Ramachandran. A new graph triconnectivity algorithm and its parallelization. In 19th Annual ACM Symposium on Theory of Computing, pages 335-344, ACM, New York, May 1987.

[19] G.L. Miller. Finding small simple cycle separators for 2-connected planar graphs. Journal of Computer and System Sciences, 32(3):265-279, June 1986.

[20] G.L. Miller and J.H. Reif. Parallel tree contraction and its applications. In 26th Symposium on Foundations of Computer Science, pages 478-489, IEEE, Portland, Oregon, 1985.

[21] E.F. Moore. The shortest path through a maze. In Proc. Internat. Symp. Switching Th., pages 285-292, Harvard Univ. Press, 1959.

[22] V. Pan. How to Multiply Matrices Faster. Springer-Verlag, 1984.

[23] V. Pan and J.H. Reif. Extension of Parallel Nested Dissection Algorithm to the Path Algebra Problems.
Computer Science Department TR-85-9, State University of New York at Albany, 1985.

[24] A.G. Ranade. How to emulate shared memory. In 28th Annual Symposium on Foundations of Computer Science, pages 185-194, IEEE, Oct 1987.

[25] J.H. Reif. Optimal Parallel Algorithms for Graph Connectivity. Center for Computing Research TR-08-84, Harvard University, 1984.

[26] Y. Shiloach and U. Vishkin. An O(log n) parallel connectivity algorithm. J. of Algorithms, 3(1):57-67, 1982.

[27] V. Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13:354-356, 1969.

[28] R.E. Tarjan and U. Vishkin. An Efficient Parallel Biconnectivity Algorithm. Technical Report, Courant Institute, 1983.

[29] L.G. Valiant. Parallelism in comparison problems. SIAM J. Comput., 4(3):348-355, 1975.

[30] L.G. Valiant and G.J. Brebner. Universal schemes for parallel communication. In STOC 81, pages 263-277, ACM, 1981.

[31] J.C. Wyllie. The Complexity of Parallel Computation. Department of Computer Science TR-79-387, Cornell University, 1979.
Conceptually similar
Strut fixtures: Modular synthesis and efficient algorithms
Parallel language and pipeline constructs for concurrent computation
Tool positioning and path generation algorithms computer-aided manufacturing
Efficient communication algorithms for parallel computing platforms
Algorithms for trie compaction
On the composition and decomposition of datalog program mappings
A "true concurrency" approach to parallel process modeling, verification and design
Softman: An environment supporting the engineering and reverse engineering of large scale software systems
Interpretations and simple syntax-directed translations
Matching images using linear features
Numerical solution of the generalized eigenvalue problem and the eigentuple-eigenvector problem
Design of application-oriented languages by protocol analysis
The precedence-assignment model for distributed database concurrency control algorithms
Efficient algorithms for navigation on large transportation graphs
Occamflow: Programming a multiprocessor system in a high-level data-flow language
A representation for semantic information within an inference-making computer program
Design and performance analysis of locking algorithms for distributed databases
Communications-efficient architectures for massively parallel processing
The 3DIS: An extensible object-oriented framework for information management
Management of interface design activities
Asset Metadata
Creator: Gazit, Hillel (author)
Core Title: Processor efficient parallel graph algorithms
Contributor: Digitized by ProQuest (provenance)
Degree: Doctor of Philosophy
Degree Program: Computer Science
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: Computer Science, OAI-PMH Harvest
Language: English
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c17-770467
Unique identifier: UC11348399
Identifier: DP22772.pdf (filename), usctheses-c17-770467 (legacy record id)
Legacy Identifier: DP22772.pdf
Dmrecord: 770467
Document Type: Dissertation
Rights: Gazit, Hillel
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the au...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus, Los Angeles, California 90089, USA