Scalable Conceptual Interfaces in hypre
Allison Baker
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
(joint work with Rob Falgout, Jim Jones, and Ulrike Meier Yang)

This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. UCRL-POST-209333

Overview

Scalability is the key issue for large-scale computing: good performance requires scalable algorithms and software. In parallel computing, data is distributed, and a conceptual interface gets the data from the application code to hypre. Each processor knows only its own piece of the linear system.
Problem: solvers require "nearby" data from other processors, and the interfaces must determine who owns this data efficiently.
Goal: scalable interfaces to solvers! LLNL's new BlueGene/L machine has more than 100,000 processors.

Key idea of the new algorithm: after step 2, processors know who owns the data in their assumed partitions; from then on, the assumed partition defines the rendezvous points.
What is hypre?

A library of high-performance algorithms for solving large, sparse systems of linear equations on massively parallel computers.

IJ Conceptual Interface

hypre's traditional linear-algebraic interface: the matrix and right-hand side are defined in terms of row and column indices, and matrices are distributed across P processors by contiguous blocks of rows. A matrix-vector multiply requires some knowledge of the global partition.

Old method for determining neighborhood information:
- Each processor sends its range to all other processors.
- All processors store the global partition and use it to determine which processors to receive data from (receive processors).
- Processors discover which processors to send data to (send processors) via a second communication.

Old method costs for P processors:
- Communication: O(log(P))
- Computation: O(P)
- Storage: O(P)

Goal: generate neighborhood information in a scalable manner for large numbers of processors (P). For scalability, the computation, communication, and storage costs should all depend on P logarithmically or better.

New Algorithm

How? Assume the global partition! The new algorithm is a kind of rendezvous algorithm that uses the concept of an assumed partition to answer queries about the global data distribution.

Assumed Partition algorithm:
1. Assume a global partition of the data (N rows) that may be queried by any processor with O(1) computation and storage cost.
2. Reconcile assumed rows with actual rows: contact processors regarding rows owned in another's assumed partition.
3. Use the assumed partition to determine send and receive processors.

Results

Comparison of the new assumed partition algorithm and the old algorithm for a 3D Laplacian operator with a 27-point stencil. Each processor owns ~64,000 rows; runs were performed on LLNL's MCR Linux cluster. The new algorithm has better scaling properties, which will be important at 100,000 processors!

What's Next?
- Testing on more processors using BlueGene/L: 16K processors coming soon!
- Adapting the assumed partition to hypre's conceptual interface for structured problems (more complicated!).
Not good enough! As P increases, the old algorithm's cost increases linearly.

New algorithm costs for P processors:
- Communication: O(log(P))
- Computation: O(log(P))
- Storage: O(log(P)) or O(1)

This assumed partition concept is applicable to all of hypre's conceptual interfaces and to a variety of situations in parallel codes.

[Figure: a matrix A distributed by contiguous row blocks A_1, A_2, ..., A_P; the actual partition and the assumed partition are shown side by side, with rows owned by processor 1 falling in processor 2's assumed partition.]

For a balanced partitioning, one could assume: proc = floor(row * P / N).