1
Extensible Message Layers for Resource-Rich Cluster Computers
Craig Ulmer
Center for Experimental Research in Computer Systems
A Doctoral Thesis
2
Outline
Background: evolution of cluster computers
Thesis: design of extensible message layers
GRIM: General-purpose Reliable In-order Messages, core communication functions
Extensions: integrating peripheral devices, streaming computations, multicast, Sockets API
Host-to-host performance
Concluding remarks
3
Background: An Evolution of Cluster Computers
4
Cluster Computers
Cost-effective alternative to supercomputers
A number of commodity workstations
Specialized network hardware and software
Result: a large pool of host processors
[Diagram: hosts (CPU, memory, and network interface on an I/O bus) connected by a system area network]
5
Industry Trends
Increasingly independent, intelligent peripheral devices
Migration of computing power and bandwidth requirements to peripherals
Network and multimedia applications rely on peripheral devices
High-bandwidth, low-latency system area networks
[Diagram: host with CPU, Ethernet, storage, and media-capture peripherals attached to a SAN network interface]
6
Resource-Rich Cluster Computers
Inclusion of diverse peripheral devices: Ethernet server cards, multimedia capture devices, embedded storage, computational accelerators
Processing takes place in host CPUs and in peripherals
[Diagram: cluster of hosts on a system area network, with each host's SAN NI also attaching peripherals such as Ethernet, video capture, FPGA, and storage devices]
7
Problem: Utilizing Distributed Cluster Resources
How is efficient intra-cluster communication provided?
What are the desirable abstractions and their implementations?
How can applications make use of resources such as CPUs, video capture devices, FPGAs, RAID storage, and Ethernet interfaces?
8
Thesis Definition: Supporting Resource-Rich Cluster Computers
9
The Challenge
Message layers are the enabling technology for clusters
Current message layers: optimized for transmissions between host CPUs; peripheral devices are only available in the context of the local host
What is needed: support for efficient communication between host CPUs and peripheral devices, and the ability to harness peripheral devices as a pool of resources
10
Message Layer Design Characteristics
Reliable data transfers: end-to-end flow control
Simplified workload for communication endpoints
Virtualization of the network interface: multiple endpoints share network access
Flexible programming abstractions: efficient data transfer mechanisms, allowing users to customize interactions with resources
11
Message Layer Requirements
Extensibility is a key feature
Hardware: easily add new peripheral devices
Software: support higher-level programming abstractions
[Diagram: application extensions and peripheral extensions layered on top of the message layer core]
12
Related Work
InfiniBand: emerging industry standard for clusters/data centers, a new I/O infrastructure
Existing message layers: Myricom's GM; OPIUM (SCSI interactions)
Adaptive computing: clusters with FPGA cards (Tower of Power)
13
GRIM: An Implementation
A message layer for resource-rich clusters
14
General-purpose Reliable In-order Message Layer (GRIM)
Message layer for resource-rich clusters
Myrinet SAN backbone: interconnects host CPUs and peripheral devices; resources gain direct access to the network hardware
Communication core: migrate functionality into the NI; easy to provide extensions
15
Core Software Architecture
Per-hop flow control: endpoint-NI transfers and NI-NI transfers
Logical channels: multiple endpoints, virtual NI
Programming interfaces: active messages and remote memory
[Diagram: endpoint components (remote memory API, registered memory, memory management, active message API, handler management) above the NI (active message execution, endpoint-NI transfers, NI-NI transfers) and the network]
16
Per-hop Flow Control
End-to-end flow control necessary for reliable delivery: prevents buffer overflows in the communication path
Endpoint-managed schemes are impractical for peripheral devices
Per-hop flow control scheme: transfer data as soon as the next stage can accept it (optimistic approach)
[Diagram: per-hop DATA/ACK exchanges between sending endpoint, PCI, network interface, SAN, and receiving endpoint]
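A minimal C sketch of what one hop of this scheme might look like (names such as hop_send and hop_ack are hypothetical; the slides describe the mechanism, not the actual GRIM firmware):

#include <stdint.h>

#define HOP_BUFFERS 8        /* receive buffers available at the next stage */
#define MSG_MAX     4096     /* maximum payload carried per buffer          */

struct hop {                 /* one hop on the endpoint-NI-NI-endpoint path */
    int credits;             /* buffers the next stage can still accept     */
    void (*push)(const void *buf, uint32_t len);  /* hands data onward      */
};

/* Send over one hop only when the neighbor has room (optimistic: no
 * end-to-end reservation, just local knowledge of the next stage). */
int hop_send(struct hop *h, const void *buf, uint32_t len)
{
    if (len > MSG_MAX || h->credits == 0)
        return -1;           /* caller retries later                        */
    h->credits--;
    h->push(buf, len);
    return 0;
}

/* The next stage returns an ACK once it has drained one of its buffers. */
void hop_ack(struct hop *h)
{
    if (h->credits < HOP_BUFFERS)
        h->credits++;
}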
17
Logical Channels
Multiple endpoints in a host share the NI
Employ multiple logical channels in the NI; each endpoint owns one or more logical channels
A logical channel provides a virtual interface to the network
[Diagram: endpoints 1..n mapped onto logical channels inside the network interface, multiplexed onto the network by a scheduler]
18
Programming Interfaces: Active Messages
A message specifies a function to be executed at the receiver
Similar to remote procedure calls, but lightweight
Invoke operations at remote resources; useful for constructing device-specific APIs
Example: interactions with a remote storage controller (the CPU sends am_fetch_file() across the SAN, and the controller replies with am_return_file_data())
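A minimal C sketch of the active-message pattern described above, assuming a per-endpoint handler table indexed by an id carried in the message header (am_register and am_dispatch are hypothetical names, not GRIM's published API):

#include <stdint.h>
#include <stdio.h>

#define MAX_HANDLERS 64

typedef void (*am_handler_t)(uint32_t src, const void *payload, uint32_t len);

static am_handler_t handler_table[MAX_HANDLERS];  /* per-endpoint handlers */

/* Register a handler index, much like registering an RPC stub. */
void am_register(uint32_t id, am_handler_t fn) { handler_table[id] = fn; }

/* Receiver side: the endpoint (or NI) dispatches an arriving message by
 * looking up the handler index carried in the message header. */
void am_dispatch(uint32_t id, uint32_t src, const void *payload, uint32_t len)
{
    if (id < MAX_HANDLERS && handler_table[id])
        handler_table[id](src, payload, len);
}

/* Example handler for a storage-controller endpoint: fetch a file and reply
 * with another active message carrying the data (reply path omitted). */
static void am_fetch_file(uint32_t src, const void *payload, uint32_t len)
{
    printf("fetch request from endpoint %u (%u-byte argument)\n", src, len);
    (void)payload;   /* ... read the file, then send am_return_file_data() */
}

int main(void)
{
    am_register(1, am_fetch_file);
    const char *name = "frame_0042.raw";
    am_dispatch(1, 7, name, 15);        /* simulate an incoming message */
    return 0;
}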
19
Programming Interfaces: Remote Memory
Transfer blocks of data from one host to another
The receiving NI executes the transfer directly
Read and write operations
The NI interacts with a kernel driver to translate virtual addresses
Optional notification mechanisms
[Diagram: CPU and memory on each host, with the NIs moving the data across the SAN]
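A minimal C sketch of how a remote-memory write might be described by a user, assuming a simple descriptor handed to the NI (rm_put and the field names are hypothetical):

#include <stdint.h>
#include <stddef.h>

/* Descriptor for a remote-memory write: a local buffer, the destination
 * endpoint, and a virtual address in the destination's registered memory.
 * The receiving NI, with help from its kernel driver, translates the
 * virtual address and performs the write without the remote CPU. */
struct rm_write {
    uint32_t    dst_endpoint;
    uint64_t    dst_vaddr;      /* virtual address registered at receiver */
    const void *src;
    uint32_t    len;
    int         notify;         /* optional completion notification       */
};

/* Stub standing in for the call that queues the descriptor at the NI. */
int rm_put(const struct rm_write *w) { (void)w; return 0; }

/* Example: push a captured video frame into a frame buffer registered at a
 * known virtual address on the remote host (endpoint 3 is arbitrary). */
int send_frame(void *frame, size_t frame_len, uint64_t remote_framebuffer)
{
    struct rm_write w = {
        .dst_endpoint = 3,
        .dst_vaddr    = remote_framebuffer,
        .src          = frame,
        .len          = (uint32_t)frame_len,
        .notify       = 0,
    };
    return rm_put(&w);
}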
20
Integrating Peripheral Devices
Hardware Extensibility
21
Peripheral Device Overview
In GRIM, peripherals are endpoints
Intelligent peripherals: operate autonomously, with on-card message queues; process incoming active messages and eject outgoing active messages
Legacy peripherals: managed by a host application or through remote memory operations
[Diagram: an intelligent peripheral device alongside the host CPU and NI, plus a legacy peripheral device]
22
Cyclone Systems I2O Server Adapter Card
A networked host on a PCI card
Integration with GRIM: interacts directly with the NI; host-level endpoint software ported to the card
Utilized as a LAN-SAN bridge
[Diagram: card with an i960 Rx processor, DRAM, ROM, DMA engines, 10/100 Ethernet and SCSI on a local bus, primary and secondary PCI interfaces, and a daughter card, attached to the host system]
23
Video Capture and Display Cards
Video capture card: specialized DMA engine; host-level library based on Video-4-Linux; controlled with active messages
Video display card: the host locates the frame buffer, which is manipulated with remote memory writes
Shows how legacy peripheral devices are incorporated: host-level management or remote memory operations
[Diagram: capture card (A/D, DMA) writing frames to host memory; display card (AGP, D/A) with an on-card frame buffer]
24
Celoxica RC-1000 FPGA Card
FPGAs provide acceleration: load with application-specific circuits
Celoxica RC-1000 FPGA card: Xilinx Virtex-1000 FPGA, 8 MB SRAM
Hardware implementation: the endpoint is built as state machines; AM handlers are circuits
[Diagram: PCI-attached card with the FPGA, control and switching logic, and four SRAM banks (SRAM 0-3)]
25
FPGA Endpoint Organization
[Diagram: frame input/output queues behind a communication library API, an application data memory API to FPGA card memory, and an FPGA circuit canvas holding user circuits 1..n behind a user-circuit API]
26
Example FPGA Configuration
Cryptography configuration: DES, RC6, MD5, and an ALU
20 MHz clock; newer FPGAs are much faster
27
Expansion: Sharing the FPGA
The FPGA has limited space for hardware circuits
The host reconfigures the FPGA on demand
A message that names a circuit not currently loaded raises a function fault; the host CPU then loads the configuration containing the requested circuit (about 150 ms), with circuit state kept in SRAM
[Diagram: configurations A, B, and C, each holding different circuits; a "use circuit F" message triggers a function fault and a switch to configuration C, with state storage in SRAM 0]
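A minimal C sketch of the function-fault idea (the configuration table and fpga_load_config are hypothetical; the slide only gives the mechanism and the roughly 150 ms reconfiguration cost):

#include <stdio.h>

#define NUM_CONFIGS  3
#define MAX_CIRCUITS 8

/* Which circuits each FPGA configuration (bitstream) contains. */
static const int config_has_circuit[NUM_CONFIGS][MAX_CIRCUITS] = {
    [0] = { [0]=1, [1]=1 },            /* configuration A: circuits X, Y  */
    [1] = { [2]=1, [3]=1 },            /* configuration B                 */
    [2] = { [4]=1, [5]=1, [6]=1 },     /* configuration C: circuits E,F,G */
};

static int loaded_config = 0;

/* Stub: a real version saves circuit state to SRAM, writes the new
 * bitstream to the FPGA (on the order of 150 ms), and restores state. */
static void fpga_load_config(int cfg) { loaded_config = cfg; }

/* Called when a message asks for a circuit: reconfigure only on a fault. */
int fpga_use_circuit(int circuit)
{
    if (config_has_circuit[loaded_config][circuit])
        return 0;                               /* already resident       */
    for (int cfg = 0; cfg < NUM_CONFIGS; cfg++) {
        if (config_has_circuit[cfg][circuit]) {
            printf("function fault: loading configuration %d\n", cfg);
            fpga_load_config(cfg);
            return 0;
        }
    }
    return -1;                                  /* no configuration has it */
}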
28
Expansion: Sharing On-Card Memory
Limited card memory for storing application data
Construct a virtual memory system for on-card memory; the swap space is host memory
A reference to a user page not resident in an SRAM page frame raises a page fault, and the host CPU swaps the page between host memory and the card's SRAM page frames
[Diagram: host CPU and FPGA with user-defined circuits; SRAM banks 1 and 2 serve as page frames holding user pages swapped from host memory]
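A minimal C sketch of that paging scheme, assuming a fixed page size and a trivial round-robin victim policy (all names and sizes here are hypothetical):

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096
#define NUM_FRAMES  2            /* SRAM banks used as page frames          */
#define NUM_PAGES   64           /* user pages living in host-memory "swap" */

static uint8_t sram_frame[NUM_FRAMES][PAGE_SIZE];   /* stand-in for SRAM    */
static uint8_t host_swap[NUM_PAGES][PAGE_SIZE];     /* backing store (host) */
static int     frame_owner[NUM_FRAMES] = { -1, -1 };/* page held per frame  */

/* On a page fault: write the victim frame back to host memory, then pull
 * the requested page into that frame.  Returns the frame holding the page. */
int card_page_fault(int page)
{
    for (int f = 0; f < NUM_FRAMES; f++)
        if (frame_owner[f] == page)
            return f;                                /* already resident    */

    static int victim = 0;                           /* round-robin choice  */
    int f = victim;
    victim = (victim + 1) % NUM_FRAMES;

    if (frame_owner[f] >= 0)
        memcpy(host_swap[frame_owner[f]], sram_frame[f], PAGE_SIZE);
    memcpy(sram_frame[f], host_swap[page], PAGE_SIZE);
    frame_owner[f] = page;
    return f;
}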
29
Extension: Streaming Computations
Software extensibility [1/3]
30
Streaming Computation Overview
A programming method for distributed resources: establish a pipeline for streaming operations
Example: multimedia processing using Celoxica RC-1000 FPGA endpoints
[Diagram: a video capture endpoint feeding a chain of media-processor endpoints across the system area network]
31
Streaming Fundamentals
Computation: how is a computation performed? Active message approach
Forwarding: where are results transmitted? Programmable forwarding directory
[Diagram: an incoming message (AM: perform FFT, forward entry X, destination FPGA) is processed by the FPGA's computational circuits (circuit 1: FFT, ..., circuit N: encrypt); the forwarding directory then produces the outgoing message (AM: receive FFT, forward entry X, destination host)]
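A minimal C sketch of a programmable forwarding directory of the kind described above (the entry layout and function names are hypothetical):

#include <stdint.h>

#define FWD_ENTRIES 32

/* One entry in the forwarding directory: where the result of a streaming
 * stage should go and which active-message handler to invoke there. */
struct fwd_entry {
    uint32_t dst_endpoint;     /* next stage in the pipeline (host, FPGA)  */
    uint32_t dst_handler;      /* AM handler id to run at that destination */
    int      valid;
};

static struct fwd_entry fwd_dir[FWD_ENTRIES];

/* The source programs the directory once when the pipeline is set up... */
void fwd_set(uint32_t entry, uint32_t endpoint, uint32_t handler)
{
    fwd_dir[entry] = (struct fwd_entry){ endpoint, handler, 1 };
}

/* ...and each result message just names its entry, so the endpoint knows
 * where to send the output without the application intervening per message. */
int fwd_lookup(uint32_t entry, uint32_t *endpoint, uint32_t *handler)
{
    if (entry >= FWD_ENTRIES || !fwd_dir[entry].valid)
        return -1;
    *endpoint = fwd_dir[entry].dst_endpoint;
    *handler  = fwd_dir[entry].dst_handler;
    return 0;
}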
32
Performance: FPGA Computations
[Figure: per-stage clock counts for detect new message, acquire SRAM, fetch header, fetch payload (1024 clocks), computation, store results, store header, lookup forwarding, update queues, and release SRAM; the bookkeeping stages cost roughly 1-16 clocks each]
Clock speed: 20 MHz
Operation latency: 55 μs (4 KB payload, roughly 73 MB/s)
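As a rough check on those figures, assuming the 1024-clock payload fetch largely overlaps the other stages so the critical path is on the order of 1100 clocks:

\[
\text{latency} \approx \frac{1100\ \text{clocks}}{20\ \text{MHz}} = 55\ \mu\text{s},
\qquad
\text{bandwidth} \approx \frac{4\ \text{KB}}{55\ \mu\text{s}} \approx 73\ \text{MB/s}
\]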
33
Extension: Multicast
Software extensibility [2/3]
34
GRIM Multicast Extensions
Distribute the same message to multiple receivers: tree-based distributions
Replicate the message at the NI; messages are recycled back into the network
Extensions to the NI's core communication operations: recycled messages use a separate logical channel, and per-hop flow control is utilized for reliable delivery
[Diagram: a multicast tree rooted at endpoint A fanning out to endpoints B-E, with each NI replicating and forwarding the message]
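A minimal C sketch of NI-side tree replication (the child table and function names are hypothetical; the slides specify only that replication happens at the NI and that copies are recycled into the network):

#include <stdint.h>

#define MAX_CHILDREN 4

/* Children of this node in the multicast tree, programmed when the tree is
 * built (e.g., A forwards to B and C; B forwards to D and E). */
struct mcast_group {
    uint32_t children[MAX_CHILDREN];
    int      num_children;
};

/* Send primitive on the recycle logical channel (stub for the sketch). */
static void ni_send(uint32_t dst, const void *msg, uint32_t len)
{
    (void)dst; (void)msg; (void)len;
}

/* NI-side replication: deliver locally, then recycle one copy per child
 * back into the network.  Per-hop flow control covers each forwarded copy. */
void grim_multicast_forward(const struct mcast_group *g,
                            const void *msg, uint32_t len,
                            void (*deliver_local)(const void *, uint32_t))
{
    deliver_local(msg, len);
    for (int i = 0; i < g->num_children; i++)
        ni_send(g->children[i], msg, len);
}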
35
Multicast Performance
[Graph: multicast time (μs) vs. message size (bytes) for 8 hosts; LANai 4 NIs, P4-1.7 GHz hosts]
36
Multicast Observations
Beneficial: reduces sending overhead
Performance loss for large messages, dependent on the NI's memory copy bandwidth
On-card memory copy benchmark: LANai 4: 19 MB/s; LANai 9: 66 MB/s
37
Extension: Sockets API
Software extensibility [3/3]
38
Extension: Sockets Emulation
Berkeley sockets is a communication standard utilized in numerous distributed applications
GRIM provides sockets API emulation: functions for intercepting socket calls, and AM handler functions for buffering connection data
[Diagram: the sender's write() is intercepted and turned into an "append socket X" active message; the receiver's AM handler appends the data to socket X's buffer, where an intercepted read() extracts it]
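One common way to intercept socket calls on Linux is symbol interposition with LD_PRELOAD; the sketch below is in that style and is only an assumption about the mechanism (grim_send_append_am is a hypothetical stand-in for the call that builds the append active message):

/* Build as a shared library and preload it, e.g.:
 *   gcc -shared -fPIC -o libsockemu.so sockemu.c -ldl
 *   LD_PRELOAD=./libsockemu.so ./distributed_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical stub: the real GRIM library would wrap the data in an
 * "append to socket X" active message; returns bytes accepted, or -1 to
 * fall back to the kernel path. */
static ssize_t grim_send_append_am(int fd, const void *buf, size_t len)
{
    (void)fd; (void)buf; (void)len;
    return -1;
}

ssize_t write(int fd, const void *buf, size_t count)
{
    static ssize_t (*real_write)(int, const void *, size_t);
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                     dlsym(RTLD_NEXT, "write");

    ssize_t sent = grim_send_append_am(fd, buf, count);
    if (sent >= 0)
        return sent;                      /* fd is a GRIM-emulated socket */
    return real_write(fd, buf, count);    /* ordinary file descriptor     */
}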
39
Sockets Emulation Performance
[Graph: bandwidth (MB/s) vs. transfer size (bytes) for the sockets emulation; P4-1.7 GHz hosts]
40
Host-to-Host Performance
Transferring data between two host-level endpoints
41
Host-to-Host Communication Performance
Host-to-host transfers are the standard benchmark; remote memory writes are used in the benchmarks
Myrinet LANai 4 and LANai 9 NI cards
Measured: injection performance and the overall communication path
[Diagram: (1) source CPU/memory to NI, (2) NI to NI across the SAN, (3) NI to destination CPU/memory, for both active messages and remote memory operations]
42
Host-NI: Data Injections
Host-NI transfers are challenging: the host lacks a DMA engine
Multiple transfer methods: programmed I/O and DMA, with the method selected automatically
Result: Tunable PCI Injection Library (TPIL)
[Diagram: CPU, cache, and memory controller with main memory on the host, and a peripheral device with a DMA engine on the PCI bus]
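A minimal C sketch of the kind of size-based method selection a tunable injection library can perform (the crossover value and function names are hypothetical, not TPIL's actual interface):

#include <stddef.h>
#include <string.h>

/* Hypothetical crossover point, found by benchmarking the platform: small
 * injections are cheaper with programmed I/O, large ones with DMA. */
#define PIO_DMA_CROSSOVER 1024

/* Programmed I/O: the CPU writes the payload into the NI's mapped memory. */
static void inject_pio(volatile void *ni_window, const void *buf, size_t len)
{
    memcpy((void *)ni_window, buf, len);
}

/* DMA: hand the card a descriptor and let its engine pull the data (stub). */
static void inject_dma(volatile void *ni_window, const void *buf, size_t len)
{
    (void)ni_window; (void)buf; (void)len;   /* post descriptor to the card */
}

/* Pick the injection method per transfer, as a tunable library might. */
void tpil_inject(volatile void *ni_window, const void *buf, size_t len)
{
    if (len <= PIO_DMA_CROSSOVER)
        inject_pio(ni_window, buf, len);
    else
        inject_dma(ni_window, buf, len);
}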
43
TPIL Performance
[Graph: bandwidth (MB/s) vs. injection size (bytes); LANai 9 NI with a Pentium III-550 MHz host]
44
Overall Performance: Store-and-Forward
Approach: a single message, no overlap between stages
Three transmission stages: sending host-NI, NI-NI, receiving NI-host
Expect roughly 1/3 of the bandwidth of an individual stage (PCI: 132 MB/s, Myrinet: 160 MB/s)
[Graph: bandwidth (MB/s) vs. message size (bytes), P3-550 MHz hosts; timeline shows the three stages of one message executed back-to-back]
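To make the one-third estimate concrete, treating the three stages as traversed back-to-back at the quoted peak rates:

\[
B_{\text{store-and-forward}} \approx
\left(\frac{1}{B_{\text{PCI}}} + \frac{1}{B_{\text{Myrinet}}} + \frac{1}{B_{\text{PCI}}}\right)^{-1}
= \left(\frac{1}{132} + \frac{1}{160} + \frac{1}{132}\right)^{-1}
\approx 47\ \text{MB/s}
\]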
45
Enhancement: Message Pipelining
Allow overlap by keeping multiple messages in flight
GRIM uses AM and RM fragmentation/reassembly
Performance depends on the fragment size
[Graph: bandwidth (MB/s) vs. message size (bytes), LANai 9, P3-550 MHz hosts; timeline shows fragments of messages 1-3 overlapping across the sending host-NI, NI-NI, and receiving NI-host stages]
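A minimal C sketch of sender-side fragmentation, which is what lets pieces of one message occupy different stages at the same time (the fragment size and names are hypothetical):

#include <stdint.h>
#include <stddef.h>

#define FRAG_SIZE 2048   /* hypothetical fragment size; tuning it trades   */
                         /* per-fragment overhead against pipeline overlap */

/* Stub: send one fragment (offset lets the receiver reassemble in place). */
static void send_fragment(uint32_t dst, const uint8_t *frag, size_t len,
                          size_t offset, size_t total)
{
    (void)dst; (void)frag; (void)len; (void)offset; (void)total;
}

/* Split a large remote-memory write into fragments; while fragment k is on
 * the wire, fragment k+1 can already be crossing the sending PCI bus. */
void rm_write_pipelined(uint32_t dst, const void *buf, size_t total)
{
    const uint8_t *p = buf;
    for (size_t off = 0; off < total; off += FRAG_SIZE) {
        size_t len = (total - off < FRAG_SIZE) ? (total - off) : FRAG_SIZE;
        send_fragment(dst, p + off, len, off, total);
    }
}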
46
Enhancement: Cut-through Transfers
Forward data as soon as it begins to arrive
Cut-through at the sending and receiving NIs
[Graph: bandwidth (MB/s) vs. message size (bytes), LANai 9, P3-550 MHz hosts; timeline shows the host-NI, NI-NI, and NI-host stages of each message overlapping almost completely]
47
Overall Host-to-Host Performance
Host         NI        Latency (μs)   Bandwidth (MB/s)
P4-1.7 GHz   LANai 9   8              146
P4-1.7 GHz   LANai 4   14.5           108
P3-550 MHz   LANai 9   9.5            116
P3-550 MHz   LANai 4   14             96
[Graph: bandwidth (MB/s) vs. message size (bytes)]
48
Comparison to Existing Message Layers
[Graphs: latency (μs) and bandwidth (MB/s) compared against existing message layers]
49
Host-to-Host Performance Summary
GRIM is fitted with performance enhancements that take place automatically (self-configuring)
GRIM provides competitive performance: bandwidth of 1.168 Gb/s, latency of 8 μs
It also provides increased functionality
50
Concluding Remarks
51
Key Contributions
A framework for communication in resource-rich clusters: reliable delivery mechanisms, a virtualized network interface, and flexible programming interfaces, with performance comparable to state-of-the-art message layers
Extensible for peripheral devices: suitable for intelligent and legacy peripherals, with methods for managing card resources
Extensible for higher-level programming abstractions: endpoint-level (streaming computations and sockets emulation) and NI-level (multicast)
52
Future Directions
Continued work with GRIM: video card vendors are opening their cards to developers; Myrinet-connected embedded devices
Adaptation to other network substrates: Gigabit Ethernet is appealing because of cost but requires modification of the transmission protocols; InfiniBand technology is promising
Active system area networks: FPGA chips are beginning to feature gigabit transceivers; use FPGA chips as networked processing devices
53
Related Publications
A Tunable Communications Library for Data Injection. C. Ulmer and S. Yalamanchili. Proceedings of Parallel and Distributed Processing Techniques and Applications, 2002.
Active SANs: Hardware Support for Integrating Computation and Communication. C. Ulmer, C. Wood, and S. Yalamanchili. Proceedings of the Workshop on Novel Uses of System Area Networks at HPCA, 2002.
A Messaging Layer for Heterogeneous Endpoints in Resource Rich Clusters. C. Ulmer and S. Yalamanchili. Proceedings of the First Myrinet User Group Conference, 2000.
An Extensible Message Layer for High-Performance Clusters. C. Ulmer and S. Yalamanchili. Proceedings of Parallel and Distributed Processing Techniques and Applications, 2000.
Papers and software available at http://www.ee.gatech.edu/~grimace/research