Parallel Sort, Search, Graph Algorithms
Sathish Vadhiyar

Radix Sort
- During every step, the algorithm puts each key into a bucket corresponding to the value of some subset of the key's bits
- A k-bit radix sort looks at k bits every iteration
- Easy to parallelize: assign some subset of the buckets to each processor
- For load balance, assign a variable number of buckets to each processor
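The per-pass bucketing can be sketched sequentially as follows (a minimal single-process sketch; in the parallel version each processor would own a subset of the buckets):

```python
def radix_sort(keys, k=4, key_bits=16):
    """LSD radix sort of non-negative integers, examining k bits per pass."""
    mask = (1 << k) - 1
    for shift in range(0, key_bits, k):
        # put every key in the bucket matching k bits of its value
        buckets = [[] for _ in range(1 << k)]
        for key in keys:
            buckets[(key >> shift) & mask].append(key)
        # concatenating the buckets in order keeps the sort stable
        keys = [key for bucket in buckets for key in bucket]
    return keys
```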

Radix Sort – Load Balancing
- Each processor counts how many of its keys will go to each bucket
- These histograms are summed with reductions
- Once a processor receives the combined histogram, it can adaptively assign buckets
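One way to make the adaptive assignment concrete is sketched below; the function name and the contiguous-range policy are illustrative assumptions, not the slide's exact scheme:

```python
def assign_buckets(histograms, p):
    """Given one histogram per processor (keys destined for each bucket),
    sum them and assign contiguous bucket ranges so that each of the p
    processors receives roughly n/p keys."""
    totals = [sum(col) for col in zip(*histograms)]  # reduction over processors
    n = sum(totals)
    assignment, owner, seen = [], 0, 0
    for count in totals:
        # hand the next buckets to a new owner once the current one has ~its share
        if owner < p - 1 and seen >= (owner + 1) * n / p:
            owner += 1
        assignment.append(owner)
        seen += count
    return assignment  # assignment[b] = processor that owns bucket b
```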

Radix Sort – Analysis
- Requires multiple iterations of a costly all-to-all exchange
- Cache efficiency is low: any given key can move to any bucket, irrespective of the destination of the previously indexed key
- This irregularity affects communication as well

Histogram Sort
- Another splitter-based method: histogram sort also determines a set of p-1 splitters
- It achieves this iteratively rather than with one big sample
- A processor broadcasts k (> p-1) initial splitter guesses, called a probe
- The initial guesses are spaced evenly over the data range

Histogram Sort – Steps
1. Each processor sorts its local data
2. Each processor creates a histogram based on its local data and the splitter guesses
3. A reduction sums up the histograms
4. A processor analyzes which splitter guesses were satisfactory (in terms of load)
5. If any splitters are unsatisfactory, the processor broadcasts a new probe and the algorithm returns to step 2; otherwise it proceeds to the next steps

Histogram Sort – Steps (continued)
6. Each processor sends its local data to the appropriate processors in an all-to-all exchange
7. Each processor merges the data chunks it receives

Merits:
- Moves the actual data only once
- Deals well with uneven distributions

Probe Determination
- Should be efficient, since it is done on one processor
- The processor keeps track of bounds for all splitters
- The ideal location of splitter i is (i+1)n/p
- When a histogram arrives, the splitter guesses are scanned
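The histogramming and splitter check from the steps above can be sketched like this (the `tol` threshold and the exact satisfaction test are illustrative assumptions):

```python
import bisect

def local_histogram(sorted_local_keys, probe):
    """Cumulative histogram: how many local keys fall below each guess."""
    return [bisect.bisect_left(sorted_local_keys, g) for g in probe]

def satisfactory_splitters(global_hist, probe, n, p, tol):
    """For each splitter position i, pick the probe guess whose summed
    count is closest to the ideal location (i+1)*n/p; report whether all
    chosen guesses fall within tol keys of their ideal."""
    splitters, ok = [], True
    for i in range(p - 1):
        ideal = (i + 1) * n / p
        j = min(range(len(probe)), key=lambda x: abs(global_hist[x] - ideal))
        splitters.append(probe[j])
        ok = ok and abs(global_hist[j] - ideal) <= tol
    return splitters, ok
```

For example, with 8 keys on 2 processors and probe [3, 5, 7], the guess whose global cumulative count is closest to 4 becomes the single splitter.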

Graph Traversal
- Graph search plays an important role in analyzing large data sets
- Relationships between data objects are represented in the form of graphs
- Breadth-first search is used in finding shortest paths or sets of paths

All-Pairs Shortest Paths
- Goal: find the shortest paths between all pairs of vertices
- Dijkstra's algorithm for single-source shortest paths can be run from every vertex
- Two approaches

All-Pairs Shortest Paths
- Source-partitioned formulation: partition the vertices across processors
  - Works well if p <= n; no communication
  - Can use at most n processors
  - Time complexity?
- Source-parallel formulation: parallelize SSSP for a vertex across a subset of processors
  - Do this for all vertices, with different subsets of processors
  - A hierarchical formulation
  - Exploits more parallelism
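The source-partitioned formulation can be sketched as below: each of the p processors runs ordinary Dijkstra for its own block of source vertices, with no communication. This is a single-process sketch; `rank` stands in for the processor id, and the cyclic source distribution is an illustrative choice:

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra over an adjacency dict {u: [(v, weight), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def source_partitioned_apsp(adj, vertices, p, rank):
    """Processor `rank` of `p` computes SSSP for its share of the sources."""
    my_sources = vertices[rank::p]  # cyclic distribution of source vertices
    return {s: dijkstra(adj, s) for s in my_sources}
```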

Parallel BFS
- A level-synchronized algorithm: proceeds level by level, starting with the source vertex
- The level of a vertex is its graph distance from the source
- Also called a frontier-based algorithm
- The parallel processes process a level and synchronize at the end of it before moving to the next level – the Bulk Synchronous Parallel (BSP) model
- How should the graph (vertices, edges, and adjacency matrix) be decomposed among processors?
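The level-synchronized scheme, sketched serially (the frontier swap at the end of each round is where the BSP barrier would sit in a parallel run):

```python
def level_sync_bfs(adj, source):
    """Level-synchronized BFS: expand exactly one frontier per round."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = set()
        for u in frontier:
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = depth
                    next_frontier.add(v)
        frontier = next_frontier  # barrier: all processes synchronize here
    return level
```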

Distributed BFS with 1-D Partitioning
- Each vertex, and the edges emanating from it, is owned by one processor
- This corresponds to a 1-D partitioning of the adjacency matrix
- The edges emanating from vertex v form its edge list: the list of vertex indices in row v of the adjacency matrix A

1-D Partitioning
- At each level, each processor holds a set F – the frontier vertices owned by that processor
- The edge lists of the vertices in F are merged to form a set of neighboring vertices, N
- Some vertices of N are owned by the same processor, while others are owned by other processors
- Messages are sent to those processors so they add these vertices to their frontier sets for the next level
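One level of the 1-D scheme can be sketched as follows. Here `owner` is an assumed mapping from vertex to processor (e.g. a block or modulo distribution); in a real run, each bucket `outgoing[r]` would be sent to processor r, with the local bucket handled directly:

```python
def expand_frontier_1d(adj, frontier, owner, p):
    """One BFS level under 1-D partitioning: merge the edge lists of the
    local frontier F and bucket the neighbor set N by owning processor."""
    outgoing = [set() for _ in range(p)]
    for v in frontier:
        for u in adj.get(v, []):
            outgoing[owner(u)].add(u)
    return outgoing  # outgoing[r] = neighbors destined for processor r
```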

Notation: Lvs(v) – the level of v, i.e., its graph distance from the source vs

BFS on GPUs

BFS on GPUs
- One GPU thread per vertex
- In each iteration, each vertex looks at its entry in the frontier array
- If the entry is true, the vertex forms its neighbor and frontier sets
- Severe load imbalance among the threads
- Scope for improvement
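A CPU-side sketch of one iteration of the one-thread-per-vertex scheme; the `for v in range(n)` loop stands in for the GPU threads, and the names are illustrative:

```python
def gpu_style_bfs_step(adj, frontier, visited, level, depth):
    """One iteration of one-thread-per-vertex BFS: each 'thread' v checks
    its frontier flag and, if set, marks unvisited neighbors for the next
    frontier. Threads handling high-degree vertices do far more work,
    which is the load imbalance noted above."""
    n = len(adj)
    next_frontier = [False] * n
    for v in range(n):            # each index would be one GPU thread
        if frontier[v]:
            for u in adj[v]:
                if not visited[u]:
                    visited[u] = True
                    level[u] = depth
                    next_frontier[u] = True
    return next_frontier
```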

Depth First Search (DFS)

Parallel Depth First Search
- Easy to parallelize: the left subtree can be searched in parallel with the right subtree
- Statically assign a node to a processor – the whole subtree rooted at that node can be searched independently
- This can lead to load imbalance, which increases with the number of processors (more later)

Maintaining Search Space
- Each processor searches the space depth-first
- Unexplored states are saved on a stack; each processor maintains its own local stack
- Initially, the entire search space is assigned to one processor
- The stack is then divided and distributed to the processors
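Stack splitting can be sketched as below; donating alternate entries is one common policy (an illustrative choice – splitting near the bottom of the stack, where subtrees are larger, is another):

```python
def split_stack(stack):
    """Split a local stack of unexplored states for an idle processor.
    Alternate entries are donated so both halves mix shallow and deep nodes."""
    donated = stack[::2]
    kept = stack[1::2]
    return kept, donated
```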

Termination Detection
- As processors search independently, how will they know when to terminate the program?
- Dijkstra's token termination detection algorithm:
  - Based on passing a token around a logical ring
  - P0 initiates a token when idle
  - A processor holds the token until it has completed its work, and then passes it to the next processor
  - When P0 receives the token again, all processors have completed
- However, a processor may get more work after becoming idle, due to dynamic load balancing (more later)

Tree-Based Termination Detection
- Uses weights
- Initially, processor 0 has weight 1
- When a processor transfers work to another processor, the weight is halved between the two processors
- When a processor finishes, its weight is returned
- Termination occurs when processor 0 gets back a weight of 1
- Goes along with the DFS algorithm; no separate communication steps
- Figure 11.10
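The weight bookkeeping can be sketched with exact fractions (floats would accumulate rounding error after many halvings; the function names are illustrative):

```python
from fractions import Fraction

def transfer_work(weights, src, dst):
    """Halve src's weight between src and dst when work is transferred."""
    half = weights[src] / 2
    weights[src] = half
    weights[dst] = weights.get(dst, Fraction(0)) + half

def finish(weights, proc, root=0):
    """A finishing processor returns its weight to the root processor."""
    weights[root] += weights[proc]
    weights[proc] = Fraction(0)
```

Termination is detected when the root's weight is exactly 1 again.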

Graph Partitioning

Graph Partitioning
- For many parallel graph algorithms, the graph has to be partitioned into multiple partitions, with each processor taking care of one partition
- Criteria:
  - The partitions must be balanced (uniform computations)
  - The edge cuts between partitions must be minimal (minimizing communication)
- Some methods:
  - BFS: run BFS and descend down the tree until the cumulative number of nodes equals the desired partition size
  - Most common: multilevel partitioning based on coarsening and refinement (a bit advanced)
  - Another popular method: Kernighan-Lin
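The BFS-based method can be sketched as growing one partition from a seed vertex until it reaches the desired size (a minimal sketch; repeating from a new seed on the remaining vertices would yield the other partitions):

```python
from collections import deque

def bfs_partition(adj, source, part_size):
    """Grow one partition by BFS from `source` until it has part_size vertices."""
    part, seen, q = [], {source}, deque([source])
    while q and len(part) < part_size:
        v = q.popleft()
        part.append(v)
        for u in adj.get(v, []):
            if u not in seen:
                seen.add(u)
                q.append(u)
    return part
```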

Parallel Partitioning
- Can use a divide-and-conquer strategy
- A master node creates two partitions
- It keeps one for itself and gives the other partition to another processor
- The two processors partition further, and so on…

Multi-level partitioning

K-Way Multilevel Partitioning Algorithm
- Has three phases: coarsening, partitioning, and refinement (uncoarsening)
- Coarsening: a sequence of smaller graphs is constructed from the input graph by collapsing vertices together

K-Way Multilevel Partitioning Algorithm
- When enough vertices have been collapsed that the coarsest graph is sufficiently small, a k-way partition is found
- Finally, the partition of the coarsest graph is projected back to the original graph, refining it at each uncoarsening level with a k-way partitioning refinement algorithm

K-Way Partitioning Refinement
- A simple randomized algorithm that moves vertices among the partitions to minimize the edge-cut and improve the balance
- For a vertex v, the neighborhood N(v) is the union of the partitions to which the vertices adjacent to v belong
- In a k-way refinement algorithm, vertices are visited randomly

K-Way Partitioning Refinement
- A vertex v is moved to one of the neighboring partitions in N(v) if either of the following vertex migration criteria is satisfied:
  - The edge-cut is reduced while maintaining the balance
  - The balance improves while maintaining the edge-cut
- This process is repeated until no further reduction in edge-cut is obtained
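The first migration criterion can be sketched as follows (a sketch only: the balance check is omitted for brevity, and the gain computation is the standard edge-cut delta rather than any particular paper's exact formulation):

```python
def edge_cut(adj, part):
    """Number of edges whose endpoints lie in different partitions.
    Each cut edge is seen from both endpoints, hence the // 2."""
    return sum(1 for u in adj for v in adj[u] if part[u] != part[v]) // 2

def try_move(adj, part, v):
    """Move v to a neighboring partition if that reduces the edge-cut."""
    internal = sum(1 for u in adj[v] if part[u] == part[v])
    best_gain, best_p = 0, None
    for p in {part[u] for u in adj[v]} - {part[v]}:
        gain = sum(1 for u in adj[v] if part[u] == p) - internal
        if gain > best_gain:
            best_gain, best_p = gain, p
    if best_p is not None:
        part[v] = best_p
        return True
    return False
```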

Graph Coloring

Graph Coloring Problem
- Given G(A) = (V, E), σ: V → {1, 2, …, s} is an s-coloring of G if σ(i) ≠ σ(j) for every edge (i, j) in E
- The minimum possible value of s is the chromatic number of G
- The graph coloring problem is to color the nodes using the chromatic number of colors
- An NP-complete problem
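For contrast with the exact (NP-complete) problem, the usual practical baseline is first-fit greedy coloring, which uses at most Δ+1 colors for maximum degree Δ (a sequential sketch, not one of the parallel algorithms on the following slides):

```python
def greedy_coloring(adj):
    """First-fit greedy coloring: give each vertex the smallest color
    not already used by a colored neighbor."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```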

Parallel Graph Coloring – General Algorithm

Parallel Graph Coloring – Finding Maximal Independent Sets – Luby (1986)

    I = ∅; V' = V; G' = G
    while G' ≠ ∅:
        choose an independent set I' in G'
        I = I ∪ I'
        X = I' ∪ N(I')        (N(I') – the vertices adjacent to I')
        V' = V' \ X; G' = G(V')

For choosing the independent set I' (a Monte Carlo heuristic):
- For each vertex v in V', determine a distinct random number p(v)
- v is in I' iff p(v) > p(w) for every w in adj(v)

Color each MIS a different color.
Disadvantage: each new choice of random numbers requires a global synchronization of the processors.
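Luby's loop can be sketched in a single process like this; each `while` round corresponds to one global synchronization, and the random priorities play the role of p(v):

```python
import random

def luby_mis(adj, rng=None):
    """Luby-style MIS: each round, a vertex joins the set if its random
    priority beats every still-active neighbor; winners and their
    neighbors are then removed from the graph."""
    rng = rng or random
    mis, active = set(), set(adj)
    while active:
        p = {v: rng.random() for v in active}
        chosen = {v for v in active
                  if all(p[v] > p[u] for u in adj[v] if u in active)}
        mis |= chosen
        active -= chosen | {u for v in chosen for u in adj[v]}
    return mis
```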

Parallel Graph Coloring – Gebremedhin and Manne (2003) Pseudo-Coloring

Sources/References
- Yoo et al., "A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L," SC 2005.
- Harish and Narayanan, "Accelerating Large Graph Algorithms on the GPU Using CUDA," HiPC 2007.
- M. Luby, "A Simple Parallel Algorithm for the Maximal Independent Set Problem," SIAM Journal on Computing, 15(4):1036–1054, 1986.
- A. H. Gebremedhin and F. Manne, "Scalable Parallel Graph Coloring Algorithms," Concurrency: Practice and Experience, 12 (2000), 1131–1146.