A New DMA Registration Strategy for Pinning-Based High Performance Networks
Dan Bonachea & Christian Bell
U.C. Berkeley and LBNL


This work is part of the UPC and Titanium projects, funded in part by the DOE, NSF, and DOD.

Problem Motivation: Client

Global-address-space (GAS) languages
– UPC, Titanium, Co-Array Fortran, etc.
– Large globally-shared memory areas, with language support for direct access to remote memory
  – Total remotely accessible memory size is limited only by VM space
  – The working set of memory actually being touched is likely to fit in physical memory
– App performance tends to be sensitive to the latency & CPU overhead of small operations

Implications for the communication layer (GASNet)
– Want low latency and low overhead for non-blocking small puts/gets (think ≤ 8 bytes)
– Want high-bandwidth, zero-copy messages for large transfers
  – zero-copy: get higher bandwidth AND avoid CPU overheads
– Ideally all communication should be fully one-sided
  – one-sided: avoids interrupting the remote host CPU, which hurts remote compute performance and increases round-trip latency

Problem Motivation: Hardware

Pinning-based NICs (e.g. Myrinet, InfiniBand) provide one-sided RDMA transfer support, but…
– Memory must be explicitly registered ahead of time
  – Requires explicit action by the host CPU on both sides: tell the OS to pin the virtual memory pages (kernel call), then register the fixed virtual-to-physical mapping with the NIC (PCI transaction)
– Memory registration can be expensive!
  – Especially on Myrinet: on average 40 µs to register one page, and 6 ms to deregister one page
  – Costs are primarily due to preventing race conditions with pending messages that could compromise system memory protection
– We want to reduce the frequency of registration operations and the need for two-sided synchronization (see the sketch below)
  – Reducing the cost of a single registration operation is also important, but orthogonal to this research
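
To make the cost concrete, below is a minimal sketch of what registering a region involves on this class of NIC. mlock() is the real POSIX pinning call; nic_register_mapping() is a hypothetical stand-in for the vendor-specific registration call (GM and InfiniBand each have their own), not an API from the talk.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical driver call: records the now-fixed virtual-to-physical
   mapping with the NIC (a PCI transaction) so it can DMA to these pages. */
extern int nic_register_mapping(void *vaddr, size_t len);

int register_region(void *addr, size_t len)
{
    uintptr_t pagesz = (uintptr_t)sysconf(_SC_PAGESIZE);
    /* Pinning is per-page, so round the region out to page boundaries. */
    uintptr_t start = (uintptr_t)addr & ~(pagesz - 1);
    uintptr_t end   = ((uintptr_t)addr + len + pagesz - 1) & ~(pagesz - 1);

    /* Step 1: kernel call to pin the virtual pages in physical memory. */
    if (mlock((void *)start, end - start) != 0)
        return -1;

    /* Step 2: register the fixed mapping with the NIC. */
    return nic_register_mapping((void *)start, end - start);
}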

Memory Registration Approaches

Hardware-based (e.g. Quadrics)
– Pros: zero-copy; one-sided; full memory space accessible; no handshaking or bookkeeping in software
– Cons: hardware complexity and price; kernel modifications

Pin Everything – pin pages at startup or when allocated
– Pros: zero-copy; one-sided (no handshaking)
– Cons: total usage limited to physical memory; may require a custom allocator

Bounce Buffers – stream data through pre-pinned buffers
– Pros: no registration cost at runtime; full memory space accessible
– Cons: two-sided; memory-copy costs (CPU consumption increases overhead and prevents communication overlap); messaging overhead (metadata and handshaking)

Rendezvous – round-trip message to pin remote pages (sketched below)
– Pros: zero-copy; full memory space accessible; only the handshaking is synchronous
– Cons: two-sided; registration costs paid on every operation (very bad on Myrinet)

Firehose – our algorithm
– Pros: zero-copy; one-sided in the common case; full memory space accessible; only the handshaking is synchronous; registration costs amortized
– Cons: messaging overhead (metadata and handshaking) on a miss (the uncommon case)
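
For contrast with Firehose, here is roughly what the Rendezvous strategy costs per put, as a sketch under assumed names: the am_* active-message helpers and rdma_put() are hypothetical placeholders, not the actual GASNet or GM API.

#include <stddef.h>

extern int  register_region(void *addr, size_t len);  /* local pin, see above */
extern void am_request_pin(int node, void *remote_addr, size_t len);
extern void am_wait_pin_ack(int node);
extern void rdma_put(int node, void *remote_dst, void *local_src, size_t len);
extern void am_request_unpin(int node, void *remote_addr, size_t len);

void rendezvous_put(int node, void *remote_dst, void *local_src, size_t len)
{
    register_region(local_src, len);            /* pin the source locally    */
    am_request_pin(node, remote_dst, len);      /* ask the remote CPU to pin */
    am_wait_pin_ack(node);                      /* round-trip handshake      */
    rdma_put(node, remote_dst, local_src, len); /* zero-copy RDMA, at last   */
    am_request_unpin(node, remote_dst, len);    /* "with-unpin" variant only:
                                                   ~6 ms/page on Myrinet     */
}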

Basic Idea: A Hybrid Approach

Firehose – a distributed strategy for handling registration
– Get the benefits of Pin-Everything in the common case
– Revert to Rendezvous-like behavior in the uncommon case

Allow remote nodes to control and cache registration operations
– Each node sets aside M bytes of physical memory for registration purposes (some reasonable fraction of physical memory)
– Guarantee F = M / (pagesize × (nodes − 1)) firehoses (remotely pinnable physical pages) to every remote node, which has control over where they're mapped in virtual memory
– When a remote page is already mapped, one-sided RDMA can be used on it freely (a hit), exploiting temporal and spatial locality
– Send Rendezvous-like synchronization messages to change mappings when a needed remote page is not mapped (a miss); see the sketch below
– Also cache local memory registration to amortize pinning costs
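
The hit/miss split above can be condensed into a short sketch. All helper names (firehose_is_mapped, choose_victim_firehose, the am_*/rdma_* calls) are hypothetical, and note that the real implementation is fully non-blocking even on a miss, where this sketch blocks for simplicity.

#include <stdbool.h>
#include <stddef.h>

extern bool  firehose_is_mapped(int node, void *remote_addr, size_t len);
extern void *choose_victim_firehose(int node);  /* a firehose safe to re-aim */
extern void  am_move_firehose(int node, void *old_bucket, void *new_bucket);
extern void  am_wait_move_ack(int node);
extern void  rdma_put(int node, void *remote_dst, void *local_src, size_t len);

void firehose_put(int node, void *remote_dst, void *local_src, size_t len)
{
    if (firehose_is_mapped(node, remote_dst, len)) {
        /* Hit (common case): the target buckets are already pinned on the
           remote node and covered by our firehoses - fully one-sided. */
        rdma_put(node, remote_dst, local_src, len);
    } else {
        /* Miss: re-aim one of our firehoses at the needed bucket. The
           remote node pins the new bucket (and may lazily unpin the old);
           this Rendezvous-like handshake is paid once, after which later
           puts to the same bucket are hits. */
        am_move_firehose(node, choose_victim_firehose(node), remote_dst);
        am_wait_move_ack(node);
        rdma_put(node, remote_dst, local_src, len);
    }
}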

Firehose: Conceptual Diagram

Runtime snapshot of two nodes (A and C) mapping their firehoses to a third node (B); each firehose attaches to a bucket, a pinned page-sized region of B's memory.
– A and C can freely "pour" data through their firehoses, using RDMA to/from anywhere in the buckets they map on B
– Refcounts are used to track the number of attached firehoses (or local pins) on each bucket
– Buckets with refcount = 0 support lazy deregistration via a victim FIFO of fixed maximum length (MAXVICTIM), avoiding re-pinning costs; see the sketch below
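
A sketch of the lazy-deregistration rule on the bucket owner, under assumed names (bucket_t, unpin_and_deregister): a bucket whose refcount drops to zero is queued rather than unpinned, and is only evicted once the victim FIFO grows past MAXVICTIM, so quickly re-used buckets dodge the unpin/re-pin cost.

#include <stddef.h>

#define MAXVICTIM 4096  /* tunable: max buckets kept pinned at refcount 0 */

typedef struct bucket {
    struct bucket *next;  /* victim-FIFO link                        */
    void *vaddr;          /* page-aligned address of the pinned page */
    int refcount;         /* attached firehoses plus local pins      */
} bucket_t;

extern void unpin_and_deregister(bucket_t *b);  /* hypothetical, expensive */

static bucket_t *victim_head, *victim_tail;
static size_t victim_len;

void bucket_release(bucket_t *b)
{
    if (--b->refcount > 0)
        return;
    /* Refcount hit 0: don't unpin yet; append to the victim FIFO.
       (A bucket re-attached while queued is simply unlinked - not shown.) */
    b->next = NULL;
    if (victim_tail) victim_tail->next = b; else victim_head = b;
    victim_tail = b;
    if (++victim_len <= MAXVICTIM)
        return;
    /* FIFO over its limit: evict (unpin) the oldest idle bucket. */
    bucket_t *oldest = victim_head;
    victim_head = oldest->next;
    if (victim_head == NULL) victim_tail = NULL;
    victim_len--;
    unpin_and_deregister(oldest);
}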

Firehose: Implementation Details

Implemented on Myrinet/GM as part of a GASNet implementation
– Fully non-blocking (even for firehose misses) and supports multi-threaded clients; refcounts on firehoses are also needed to prevent races
– Uses active messages to perform remote firehose operations
– Currently one-sided only for puts, because GM lacks RDMA gets; for now, gets are implemented as an active message plus a firehose put
– Physical memory consumption never exceeds M + MAXVICTIM (both tunable parameters), and may be much less, depending on the access pattern

Data structures used (sketched below):
– Local bucket table: bucket virtual addr => bucket refcount
– Bucket victim FIFO: links buckets with refcount = 0
– Firehose table: (remote node id, bucket virtual addr) => firehose refcount

All bookkeeping operations are O(1)
– Overhead of all metadata lookups/modifications for a put is < 1 µs
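
The three tables above might be declared as follows; this is only a sketch, with ht_* standing in for any O(1) hash table, so the per-put bookkeeping reduces to a couple of lookups and refcount updates.

#include <stdint.h>

typedef struct hashtable hashtable_t;            /* hypothetical O(1) table */
extern void *ht_get(hashtable_t *t, uint64_t key);
extern void  ht_put(hashtable_t *t, uint64_t key, void *val);

typedef struct {
    hashtable_t *local_buckets;  /* bucket vaddr -> bucket refcount           */
    void        *victim_fifo;    /* buckets w/ refcount = 0 (see above)       */
    hashtable_t *firehoses;      /* (node, bucket vaddr) -> firehose refcount */
} firehose_state_t;

/* Key for the firehose table: pack the node id with the bucket's page
   number (assumes 4 KB pages and <= 48-bit virtual addresses). */
static inline uint64_t firehose_key(uint16_t node, uintptr_t bucket_vaddr)
{
    return ((uint64_t)node << 48) | (uint64_t)(bucket_vaddr >> 12);
}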

Application Benchmarks

Simple kernels written in Titanium – we just want a realistic access pattern
– 2 nodes, dual PIII-866MHz, 1 GB RAM, Myrinet PCI64C, 33 MHz / 64-bit PCI bus
– Rendezvous includes caching of locally-pinned pages
– Rendezvous no-unpin is not a robust strategy for GAS languages; shown for comparison only
– Firehose misses are rare, and even misses often hit in the victim cache
– Average time to move a firehose on the remote node is 5 µs and 14 µs (a pin alone takes 40 µs)
– Firehose never needed to unpin anything in this case (total memory size < physical memory)

App Name                Total Puts  Registration Strategy     Total Runtime  Avg Put Latency
Cannon Matrix Multiply  1.5 M       Rendezvous with-unpin     5460 s         5141 µs
                                    Rendezvous no-unpin        797 s           34 µs
                                    Firehose (hit: 99.8%)      781 s           14 µs (hit)
                                             (miss: 0.2%)                      46 µs (miss)
Bitonic Sort            2.1 M       Rendezvous with-unpin     4740 s          522 µs
                                    Rendezvous no-unpin        289 s           33 µs
                                    Firehose (hit: 99.98%)     255 s           15 µs (hit)
                                             (miss: 0.02%)                     54 µs (miss)

Performance Results: Peak Bandwidth

Peak bandwidth – puts to the same location with increasing message size
– Firehose beats Rendezvous no-unpin by eliminating round-trip handshaking messages
– Firehose gets a 100% hit rate here – fully one-sided, zero-copy transfers

Performance Results: Put Bandwidth

64 KB puts, randomly distributed over an increasing working set size
– Rendezvous no-unpin is "cheating" beyond 450 MB: it never unpins, so past that point it holds more pinned memory than is sustainable
– Note the graceful degradation of Firehose beyond a 450 MB working set

Performance Results: Roundtrip Put Latency

8-byte puts, randomly distributed over an increasing working set size
– Rendezvous no-unpin is again "cheating" beyond 450 MB
– Rendezvous with-unpin not shown: its latency is roughly 6000 µs

Conclusions

The Firehose algorithm is a good registration strategy for GAS languages on pinning-based networks
– Gets the performance benefits of the Pin-Everything approach (without its drawbacks) in the common case, and degrades to Rendezvous-like behavior in the uncommon case
– Exposes one-sided, zero-copy RDMA as the common case
– Amortizes the cost of registration and synchronization over many operations, and avoids the cost of re-pinning recently used pages
– The cost of handshaking and registration is negligible when the working set fits in physical memory, and degrades gracefully beyond that

Future Work
– Use Firehose for an InfiniBand GASNet implementation
– Make MAXVICTIM adaptive (bucket "stealing") to avoid unpin/re-pin costs and scale better when the access pattern is unbalanced
– Extend to RDMA gets in GM 2 (soon to be released)