
Slide 1: Architecture of Parallel Computers (CSC / ECE 506), OpenFabrics Alliance, Lecture 18, 7/17/2006, Dr. Steve Hunter

Slide 2: Outline
InfiniBand and Ethernet review
DDP and RDMA
OpenFabrics Alliance
–IP over InfiniBand (IPoIB)
–Sockets Direct Protocol (SDP)
–Network File System (NFS)
–SCSI RDMA Protocol (SRP)
–iSCSI Extensions for RDMA (iSER)
–Reliable Datagram Sockets (RDS)

Slide 3: InfiniBand Goals - Review
Interconnect for server I/O and efficient interprocess communications
Standard across the industry
–Backed by all the major players
»200+ companies
With an architecture able to match future systems:
–Low overhead
–Scalable bandwidth, up and down
–Scalable fanout, few to thousands
–Low cost, excellent price/performance
–Robust reliability, availability, and serviceability
–Leverages Internet Protocol suite and paradigms

Slide 4: The Basic Unit: an IB Subnet - Review
The basic whole IB system is a subnet.
Elements:
–Endnodes
–Links
–Switches
What it does: communicate
–endnodes with endnodes,
–via message queues,
–which process messages over several transport types,
–which are SARed (segmented and reassembled) into packets,
–which are placed on links,
–and routed by switches.
(Diagram: endnodes attached by links to switches, forming a subnet.)

Slide 5: End Node Attachment to IB - Review
End nodes attach to IB via Channel Adapters:
–Host CAs (HCAs)
»O/S APIs/KPIs are not specified
»Queues and memory accessible via verbs
»QP, CQ, and RDMA engines
»Must support all three IB transports
»Can include: dual ports (load balancing, availability via path migration, attachment to the same or different subnets), partitioning, atomics, …
–Target CAs (TCAs)
»Queue access method is vendor unique
»QP and CQ engines
»Need only support Unreliable Datagram
»ULP can be standard or proprietary
»In other words, a smaller subset of required functions
(Diagram: host side with CPU, memory controller, and HCA exposing QPs/CQs through verbs; I/O side with an I/O controller and TCA exposing QPs/CQs through the IB layers.)
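On Linux, the OpenFabrics libibverbs library is one concrete realization of the verbs interface mentioned above. The sketch below opens an HCA and creates the CQ and QP resources that a ULP would then post work to; it is a minimal illustration under that assumption (no error handling, no connection establishment), not the only way to use the API.

```c
#include <infiniband/verbs.h>
#include <string.h>

/* Minimal HCA setup through libibverbs: open the first device, allocate a
 * protection domain, and create a completion queue plus a Reliable
 * Connected queue pair. Error handling is omitted for brevity. */
int open_hca_resources(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx  = ibv_open_device(devs[0]);
    struct ibv_pd      *pd   = ibv_alloc_pd(ctx);
    struct ibv_cq      *cq   = ibv_create_cq(ctx, 256, NULL, NULL, 0);

    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq = cq;                 /* completions for sends/RDMA ops */
    attr.recv_cq = cq;                 /* completions for receives       */
    attr.qp_type = IBV_QPT_RC;         /* Reliable Connected transport   */
    attr.cap.max_send_wr  = 64;        /* outstanding work requests      */
    attr.cap.max_recv_wr  = 64;
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;

    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    ibv_free_device_list(devs);
    /* The QP must still be transitioned INIT -> RTR -> RTS and connected
     * to a remote QP before any work requests can be posted. */
    return qp ? 0 : -1;
}
```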

Slide 6: InfiniBand Summary
The InfiniBand architecture is a very high performance, low latency interconnect technology based on an industry-standard approach to Remote Direct Memory Access (RDMA).
–An InfiniBand fabric is built from hardware and software that are configured, monitored, and operated to deliver a variety of services to users and applications.
Characteristics of the technology that differentiate it from comparable interconnects such as traditional Ethernet include:
–End-to-end reliable delivery
–Scalable bandwidths from 10 to 60 Gbps available today, moving to 120 Gbps in the near future
–Scalability without performance degradation
–Low latency between devices
–Greatly reduced server CPU utilization for protocol processing
–Efficient I/O channel architecture for network and storage virtualization

Slide 7: Advanced Ethernet - Review
(Diagram: the TCP/IP model layers (Application, Transport, Network, Link, Physical) with Ethernet examples such as HTTP/SMTP/FTP, TCP/UDP, IP, Media Access Control (MAC), and copper/optical media, compared against an RDMA NIC (RNIC) stack shown with a SCSI application: SCSI application, Internet SCSI (iSCSI), iSCSI Extensions for RDMA (iSER), Remote Direct Memory Access Protocol (RDMAP), Direct Data Placement (DDP), Markers with PDU Alignment (MPA), TCP, IP, and MAC, each layer offering its corresponding service.)
It is expected that the OpenFabrics effort (i.e., the OpenIB / OpenRDMA merger) will enable even more advanced functions in NIC technology.

Slide 8: Advanced Ethernet Summary
iWARP technology, implemented as an RDMA Network Interface Card (RNIC), achieves zero-copy, RDMA, and protocol offload over existing TCP/IP networks.
–It has been demonstrated that a 10GbE-based RNIC can reduce CPU processing overhead from 80-90% to less than 10% compared to its host-stack equivalent.
–Additionally, its achievable end-to-end latency is now 5 microseconds or less.
iWARP, together with the emerging low latency (low hundreds of nanoseconds) 10 GbE switches, can also provide a powerful infrastructure for clustered computing, server-to-server processing, visualization, and file system access.
–The advantages of iWARP technology include its ability to leverage the widely deployed TCP/IP infrastructure, its broad knowledge base, and its mature management and monitoring capabilities.
–In addition, an iWARP infrastructure is routable, eliminating the need for gateways to connect to the LAN or WAN internet.

Slide 9: DDP and RDMA
IETF RFC: http://rfc.net/rfc4296.html
The central idea of general-purpose DDP is that a data sender supplements the data it sends with placement information that allows the receiver's network interface to place the data directly at its final destination without any copying.
–DDP can be used to steer received data to its final destination, without requiring layer-specific behavior for each different layer.
–Data sent with such DDP information is said to be 'tagged'.
The central components of the DDP architecture are the "buffer", which is an object with beginning and ending addresses, and a method (set()), which sets the value of an octet at an address.
–In many cases, a buffer corresponds directly to a portion of host user memory. However, DDP does not depend on this; a buffer could be a disk file, or anything else that can be viewed as an addressable collection of octets.
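To make the buffer/set() abstraction from RFC 4296 concrete, here is a minimal sketch in C. The struct and function names are illustrative only, not part of any standard API, and the buffer is simply backed by host memory for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* A DDP "buffer" in the RFC 4296 sense: an addressable range of octets.
 * Here it is backed by host memory, but it could equally wrap a file or
 * any other octet-addressable object. */
typedef struct {
    uint8_t *base;   /* beginning address                   */
    size_t   length; /* ending address is base + length - 1 */
} ddp_buffer;

/* The abstract set() method: write one octet at a given offset.
 * Returns 0 on success, -1 if the address falls outside the buffer. */
static int ddp_buffer_set(ddp_buffer *buf, size_t offset, uint8_t value)
{
    if (offset >= buf->length)
        return -1;
    buf->base[offset] = value;  /* direct placement: no intermediate copy */
    return 0;
}

/* A tagged DDP segment carries (tag, offset, payload); the receiving NIC
 * looks up the buffer by tag and performs set() for each payload octet,
 * which is what lets data land at its final destination without copying. */
```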

Slide 10: DDP and RDMA
Remote Direct Memory Access (RDMA) extends the capabilities of DDP with two primary functions.
–It adds the ability to read from buffers registered to a socket (RDMA Read).
»This allows a client protocol to perform arbitrary, bidirectional data movement without involving the remote client.
»When RDMA is implemented in hardware, arbitrary data movement can be performed without involving the remote host CPU at all.
RDMA specifies a transport-independent untagged message service (Send) with characteristics that are both very efficient to implement in hardware and convenient for client protocols.
–The RDMA architecture is patterned after the traditional model for device programming, where the client requests an operation using Send-like actions (programmed I/O), and the server performs the necessary data transfers for the operation (DMA reads and writes) and notifies the client of completion.
»The programmed I/O + DMA model efficiently supports a high degree of concurrency and flexibility for both the client and server, even when operations have a wide range of intrinsic latencies.
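As a concrete illustration of the hardware RDMA Read path, the sketch below posts a single RDMA Read work request using the Linux libibverbs API (the same library assumed in the slide 5 sketch). It assumes a connected queue pair qp, a locally registered memory region mr, and a remote_addr/rkey pair already advertised by the peer; connection establishment and completion polling are omitted.

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post one RDMA Read: pull 'len' bytes from (remote_addr, rkey) on the peer
 * into our registered local buffer. The remote host CPU is not involved. */
static int post_rdma_read(struct ibv_qp *qp, struct ibv_mr *mr,
                          void *local_buf, uint32_t len,
                          uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local destination buffer */
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;                 /* echoed in the completion   */
    wr.opcode              = IBV_WR_RDMA_READ;  /* tagged read of remote data */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* generate a CQ entry        */
    wr.wr.rdma.remote_addr = remote_addr;       /* peer's registered buffer   */
    wr.wr.rdma.rkey        = rkey;              /* peer's remote access key   */

    return ibv_post_send(qp, &wr, &bad_wr);     /* completion arrives on the CQ */
}
```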

Slide 11: OpenFabrics Alliance
The OpenFabrics Alliance is an international organization comprising industry, academic, and research groups that have developed a unified core of open-source software stacks (OpenSTAC) leveraging RDMA architectures for both the Linux and Windows operating systems, over both InfiniBand and Ethernet.
–RDMA is a communications technique allowing data to be transmitted from the memory of one computer to the memory of another computer without passing through either device's CPU, without needing extensive buffering, and without calls into the operating system kernel.
The core OpenSTAC software supports all the well-known standard upper layer protocols, such as MPI, IP, SDP, NFS, SRP, iSER, and RDS, on top of Ethernet and InfiniBand (IB) infrastructures.
–The OpenFabrics software and supporting services better enable low-latency InfiniBand and 10 GbE to deliver clustered computing, server-to-server processing, visualization, and file system access.

Slide 12: OpenFabrics Software Stack
(Stack diagram, from hardware up to applications; the key distinguishes InfiniBand-specific, iWARP-specific, and common components.)
–Hardware: InfiniBand Host Channel Adapter (HCA), iWARP RDMA NIC (R-NIC)
–Provider layer: hardware-specific drivers, R-NIC driver API
–Mid-layer (kernel): InfiniBand verbs / API, Connection Manager, Connection Manager Abstraction (CMA), Subnet Administrator (SA) client, Management Datagram (MAD), Subnet Manager Agent (SMA), Performance Manager Agent (PMA)
–Upper layer protocols (kernel): IP over InfiniBand (IPoIB), Sockets Direct Protocol (SDP), SCSI RDMA Protocol initiator (SRP), iSCSI RDMA Protocol initiator (iSER), Reliable Datagram Service (RDS), NFS-RDMA RPC, cluster file systems
–User APIs (user space): user-level verbs / API, user-level MAD API, User Direct Access Programming Library (uDAPL), SDP library, OpenSM, diagnostic tools
–Application level: apps and access methods for using the OF stack, including various MPIs, IP-based application access, sockets-based access (e.g., IBM DB2), clustered database access (e.g., Oracle 10g RAC), block storage access, and access to file systems

Slide 13: IP over IB (IPoIB)
IETF standard for mapping Internet protocols to InfiniBand
–IETF IPoIB Working Group
Covers:
–Fabric initialization
–Multicast/broadcast
–Address resolution (IPv4/IPv6)
–IP datagram encapsulation (IPv4/IPv6)
–MIBs

Slide 14: IP over IB (IPoIB)
Communication parameters
–Obtained from the Subnet Manager (SM):
»P_Key (Partition Key)
»SL (Service Level)
»Path rate
»Link MTU (for IPv6, can be reduced via router advertisement)
»GRH parameters: TClass, Flow Label, HopLimit
–Obtained from address resolution:
»Data link layer address (GID): a persistent data link layer address is necessary, and it enables IB routers to be deployed eventually
»QPN (queue pair number)

Slide 15: IP over IB (IPoIB)
Address resolution
–IPv4
»ARP request is sent on the broadcast MGID
»ARP reply is unicast back and contains the GID and QPN
–IPv6
»Neighbor discovery using the all-IP-hosts multicast address
»Existing RFCs
Summary
–Feels like Ethernet with a 2KB MTU
–Doesn't utilize most of InfiniBand's custom hardware
»e.g., SAR, reliable transport, zero copy, RDMA reads/writes, kernel bypass
»SDP is the enhanced version
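For reference, the IPoIB link-layer ("hardware") address carried in the ARP and neighbor discovery exchanges above bundles the QPN together with the GID. The sketch below shows that 20-octet layout as described in the IPoIB specification (RFC 4391); the struct itself is illustrative and is not taken from any system header.

```c
#include <stdint.h>

/* IPoIB link-layer address as carried in ARP/ND payloads: 20 octets,
 * 4 octets of flags plus QPN followed by the 16-octet port GID.
 * Simplified sketch per RFC 4391; not a definition from kernel headers. */
struct ipoib_hw_addr {
    uint8_t qpn_flags[4];  /* reserved flag bits and the 24-bit QPN      */
    uint8_t gid[16];       /* port GID identifying the destination node  */
};
```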

Slide 16: Sockets Direct Protocol (SDP)
Based on Microsoft's Winsock Direct Protocol
SDP feature summary
–Maps sockets SOCK_STREAM to RDMA semantics
–Optimizations for transaction-oriented protocols
–Optimizations for mixing of small and large messages
Uses advanced InfiniBand features
–Reliable Connected (RC) service
–Uses RDMA Writes, Reads, and Sends
–Supports Automatic Path Migration
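The practical consequence of mapping SOCK_STREAM onto RDMA is that ordinary stream-socket code needs no changes; an SDP implementation is slotted in underneath the sockets API (for example via a preloaded interposer library or a dedicated SDP address family, depending on the platform). The client below is therefore just standard sockets code, shown to emphasize that point; nothing in it is SDP-specific, and the function name is purely illustrative.

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

/* A plain SOCK_STREAM client. Under SDP, this exact code is carried over an
 * InfiniBand Reliable Connected QP instead of TCP: byte-stream semantics are
 * preserved while the transport underneath uses RDMA Sends/Writes/Reads. */
int stream_client(const char *server_ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    const char msg[] = "hello over a stream socket";
    /* Whether this send() is satisfied by the Bcopy or a Zcopy mechanism is
     * decided inside SDP (see the Bcopy threshold on the following slides). */
    send(fd, msg, sizeof(msg), 0);
    close(fd);
    return 0;
}
```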

Slide 17: SDP Terminology
Data Source
–Side of the connection which is sourcing the ULP data to be transferred
Data Sink
–Side of the connection which is receiving (sinking) the ULP data
Data Transfer Mechanism
–Moves ULP data from the Data Source to the Data Sink (e.g., Bcopy, Receiver-Initiated Zcopy, Read Zcopy)
Flow Control Mode
–State that the half connection is currently in (Combined, Pipelined, Buffered)
Bcopy Threshold
–If the message length is under the threshold, use the Bcopy mechanism. The threshold is locally defined.

Slide 18: SDP Modes
Flow control modes restrict data transfer mechanisms
Buffered Mode
–Used when the receiver wishes to force all transfers to use the Bcopy mechanism
Combined Mode
–Used when the receiver is not pre-posting buffers and uses a peek/select interface (Bcopy or Read Zcopy, only one outstanding)
Pipelined Mode
–Highly optimized transfer mode: multiple write or read buffers outstanding; can use all data transfer mechanisms (Bcopy, Read Zcopy, Receiver-Initiated Write Zcopy)

Slide 19: SDP Data Transfer
Enables buffer-copy when
–The transfer is short
–The application needs buffering
Enables zero-copy when
–The transfer is long
(Diagram: Data Source user buffer to Data Sink user buffer over an InfiniBand Reliable Connection (RC); the buffer-copy path goes through a fixed-size SDP private buffer pool in the CA, while the zero-copy path moves data directly between the user buffers.)
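A minimal sketch of the send-side decision implied by the Bcopy threshold: short messages are copied into the fixed-size private buffer pool, long ones take a zero-copy path. The helper names and the threshold value are illustrative only and are not taken from the SDP specification.

```c
#include <stddef.h>

/* Illustrative only: a locally defined Bcopy threshold (SDP leaves the
 * exact value up to each implementation). */
#define SDP_BCOPY_THRESHOLD 4096

/* Hypothetical helpers standing in for the two SDP data transfer
 * mechanisms; their names are not from the SDP specification. */
int sdp_bcopy_send(const void *buf, size_t len);  /* copy into private pool */
int sdp_zcopy_send(const void *buf, size_t len);  /* pin and RDMA directly  */

int sdp_send(const void *buf, size_t len)
{
    /* Short transfers: the copy is cheaper than registering memory and
     * synchronizing a zero-copy transfer, and the application gets its
     * buffer back immediately. */
    if (len < SDP_BCOPY_THRESHOLD)
        return sdp_bcopy_send(buf, len);

    /* Long transfers: amortize the setup cost and let the HCA move the
     * data straight out of the user buffer (zero copy). */
    return sdp_zcopy_send(buf, len);
}
```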

Slide 20: Network File System (NFS)
Network File System (NFS) is a protocol originally developed by Sun Microsystems in 1984 and defined in RFCs 1094, 1813, and 3530 (which obsoletes 3010). It is a distributed file system protocol that allows a computer to access files over a network as easily as if they were on its local disks.
–NFS is one of many protocols built on the Open Network Computing Remote Procedure Call system (ONC RPC).
Version 2 of the protocol
–Originally operated entirely over UDP and was meant to keep the protocol stateless, with locking (for example) implemented outside of the core protocol
Version 3 added:
–Support for 64-bit file sizes and offsets, to handle files larger than 4 GB
–Support for asynchronous writes on the server, to improve write performance
–Additional file attributes in many replies, to avoid the need to refetch them
–A READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory
–Assorted other improvements

Slide 21: Network File System (NFS)
Version 4 (RFC 3530)
–Influenced by AFS and CIFS; includes performance improvements, mandates strong security, and introduces a stateful protocol. Version 4 was the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over development of the NFS protocols.
Various side-band protocols have been added to NFS, including:
–The byte-range advisory Network Lock Manager (NLM) protocol, added to support System V UNIX file-locking APIs.
–The remote quota reporting (RQUOTAD) protocol, which allows NFS users to view their data storage quotas on NFS servers.
WebNFS is an extension to Versions 2 and 3 that allows NFS to be more easily integrated into Web browsers and to enable operation through firewalls.

Slide 22: SCSI RDMA Protocol (SRP)
SRP defines a SCSI protocol mapping onto the InfiniBand Architecture and/or functionally similar cluster protocols
The RDMA Consortium voted to create iSER instead of porting SRP to IP
–SRP doesn't have a wide following
–SRP doesn't have a discovery or management protocol
–Version 2 of SRP hasn't been updated for 1.5 years

Slide 23: iSCSI Extensions for RDMA (iSER)
iSER combines SRP and iSCSI with new RDMA capabilities
iSER is maintained as part of iSCSI in the IETF
–Recently extended to IB by IBM, Voltaire, HP, EMC, and others
Benefits of adding iSER to IB
–(Almost) the same storage protocol across all RDMA networks
»Easier to train staff
»Bridging products are more straightforward
»Moves the storage community toward an iSCSI/iSER mentality and may help with acceptance on IP
–Desire for a common discovery and management protocol across iSCSI, iSER/iWARP, and IP
»i.e., the same management and discovery process and software to handle IP networks and IB networks

Slide 24: iSCSI Extensions for RDMA (iSER)
iSCSI's main performance deficiencies stem from TCP/IP
–TCP is a complex protocol requiring significant processing
–Stream-based, making it hard to separate data and headers
–Requires copies that increase latency and CPU overhead
–Uses checksums, requiring additional CRCs in the ULP
iSER eliminates these bottlenecks through:
–Zero copy using RDMA
–CRCs calculated by hardware
–Working with message boundaries instead of streams
–A transport protocol implemented in hardware (minimal CPU cycles per I/O)

Slide 25: iSCSI Extensions for RDMA (iSER)
iSER leverages iSCSI management, discovery, and RAS
–Zero-configuration, discovery, and a global storage name server (SLP, iSNS)
–Change notifications and active monitoring of devices and initiators
–High availability, and 3 levels of automated recovery
–Multi-pathing and storage aggregation
–Industry-standard management interfaces (MIBs)
–3rd-party storage managers
–Security: partitioning, authentication, central login control, etc.
Working with iSER over IB doesn't require any changes
–A focused effort from both communities
More advanced than SRP

Slide 26: iSCSI Extensions for RDMA (iSER)
iSCSI specification:
–http://www.ietf.org/rfc/rfc3720.txt
iSER and DA introduction:
–http://www.rdmaconsortium.org/home/iSER_DA_intro.pdf
iSER specification:
–http://www.ietf.org/internet-drafts/draft-ietf-ips-iser-05.txt
iSER over IB overview:
–http://www.haifa.il.ibm.com/satran/ips/iSER-in-an-IB-network-V9.pdf

Slide 27: Reliable Datagram Sockets (RDS)
Goals
–Provide a reliable datagram service
»Performance
»Scalability
»High availability
»Simplified application code
–Maintain the sockets API
»Application code portability
»Faster time-to-market

Slide 28: Reliable Datagram Sockets (RDS)
Stack overview
(Diagram: socket applications, UDP applications, and Oracle 10g in user space; TCP, UDP, SDP, and RDS in the kernel over IP/IPoIB and the OpenIB access layer, on top of the Host Channel Adapter.)

Slide 29: Reliable Datagram Sockets (RDS)
The application is connectionless
–RDS maintains node-to-node connections
–IP addressing
–Uses the CMA
–On-demand connection setup
»Connect on first sendmsg() or data receive
»Disconnect on error, or by policy such as inactivity
–Connection setup/teardown is transparent to applications

Slide 30: Reliable Datagram Sockets (RDS)
Data and control channel
–Uses an RC QP for node-level connections
–Data and control QPs per session
–Selectable MTU
–B-copy send/recv
–H/W flow control

Slide 31: The End

Slide 32: RDS - Send
The connection is established on the first send
sendmsg()
–Allows send pipelining
–ENOBUFS is returned if there are insufficient send buffers; the application retries
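A minimal sketch of an RDS sender illustrating the retry behavior above. It assumes the Linux PF_RDS socket family is available (RDS was merged into mainline Linux after this lecture, so the constant is shown with a fallback definition as an assumption); everything else is ordinary sockets code, and the function name is application-level, not part of any RDS API.

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

#ifndef PF_RDS
#define PF_RDS 21   /* assumption: the value Linux uses for the RDS family */
#endif

/* Send one datagram over RDS, retrying when the socket reports insufficient
 * send buffers (ENOBUFS) or a stalled destination port (EWOULDBLOCK). */
int rds_send_datagram(int fd, struct sockaddr_in *dest,
                      const void *payload, size_t len)
{
    struct iovec iov = { .iov_base = (void *)payload, .iov_len = len };
    struct msghdr msg = {
        .msg_name    = dest,           /* per-message destination: the      */
        .msg_namelen = sizeof(*dest),  /* application stays connectionless  */
        .msg_iov     = &iov,
        .msg_iovlen  = 1,
    };

    for (;;) {
        ssize_t n = sendmsg(fd, &msg, 0);
        if (n >= 0)
            return 0;
        if (errno == ENOBUFS || errno == EWOULDBLOCK) {
            usleep(1000);              /* simple backoff before retrying */
            continue;
        }
        return -1;                     /* any other error is fatal here  */
    }
}
```

The socket itself would be created with socket(PF_RDS, SOCK_SEQPACKET, 0) and bound to a local IPv4 address and port before sending; the first sendmsg() to a new node triggers the transparent node-to-node connection setup described on slide 29.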

Slide 33: RDS - Receive
Identical to UDP
recvmsg()
–Similar blocking/non-blocking behavior
"Slow" receiver ports are stalled on the sender side
–A combination of activity (LRU) and memory utilization is used to detect slow receivers
–sendmsg() to a stalled destination port returns EWOULDBLOCK; the application can retry
»A blocking socket can wait for the port to unblock
–recvmsg() on a stalled port un-stalls it

Slide 34: RDS - High Availability (failover)
The use of RC and on-demand connection setup allows HA
–Connection setup/teardown is transparent to applications
–Every sendmsg() could potentially result in a connection setup
–If a path fails, the connection is torn down; the next send can connect on an alternate path (a different port or a different HCA)

Slide 35: Preliminary performance - RDS on OpenIB (chart not reproduced). Footnotes: dual 2.4 GHz Xeon, 2 GB memory, 4x PCI-X HCA; SDP ~3700 Mb/sec TCP_STREAM.

Slide 36: Preliminary performance - RDS on OpenIB (continued; chart not reproduced). Same setup and footnotes as slide 35.

Slide 37: Preliminary performance - RDS on OpenIB (chart not reproduced).

