The NE010 iWARP Adapter
Gary Montry, Senior Scientist
Today's Data Center
[Diagram: today's data center runs three separate fabrics, each with its own adapters and switches — a Fibre Channel SAN with block storage adapters, a clustering fabric (Myrinet, Quadrics, InfiniBand, etc.) with clustering adapters, and an Ethernet LAN with networking adapters connecting applications to NAS and users.]
Overhead & Latency in Networking
[Diagram: with a standard Ethernet TCP/IP adapter, every I/O command passes from the application I/O library through the OS kernel TCP/IP stack and the device driver to the adapter, and the data is copied between the application buffer, an OS buffer, and a driver buffer along the way.]
Sources of CPU overhead:
- Application-to-OS context switches: 40%
- Transport (TCP/IP) processing: 40%
- Intermediate buffer copies: 20%

The iWARP* Solution
- Transport (TCP) offload removes packet processing from the host
- RDMA / DDP removes intermediate buffer copies
- User-level direct access / OS bypass removes command context switches
* iWARP: IETF-ratified extensions to the TCP/IP Ethernet standard
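To make the OS-bypass and zero-copy claims above concrete, here is a minimal, hypothetical client-side sketch using the generic librdmacm/ibverbs user-space API (the slides do not name a programming interface for the NE010, so this only illustrates the RDMA model in general). The application registers its own buffer with the adapter once and then posts work directly from user space; the kernel TCP/IP stack, intermediate buffer copies, and per-operation context switches drop out of the data path. The server address 192.168.1.10, port 7471, and buffer size are placeholders, and a matching server that posts a receive is assumed.

/* Minimal RDMA client sketch (illustrative only): connect, register one
 * user buffer, and send directly from user space.  Error handling is
 * trimmed; address, port, and sizes are arbitrary placeholders.
 * Build on a system with rdma-core:  cc client.c -lrdmacm -libverbs     */
#include <stdlib.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

#define BUF_SIZE (1024 * 1024)

int main(void)
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char *buf;

    /* Resolve the server and create a connected endpoint (one queue pair). */
    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;
    if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res))
        return 1;

    memset(&attr, 0, sizeof attr);
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 4;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.sq_sig_all = 1;
    if (rdma_create_ep(&id, res, NULL, &attr))
        return 1;

    /* Register the application's own buffer with the adapter once; after
     * this the RNIC moves data to/from this memory directly (zero copy). */
    buf = malloc(BUF_SIZE);
    mr = rdma_reg_msgs(id, buf, BUF_SIZE);

    if (rdma_connect(id, NULL))
        return 1;

    /* Post a send straight from user space: no kernel TCP/IP processing,
     * no intermediate OS/driver buffer copies, no per-byte system calls.
     * The peer must have posted a matching receive.                      */
    memset(buf, 0xab, BUF_SIZE);
    rdma_post_send(id, NULL, buf, BUF_SIZE, mr, 0);
    rdma_get_send_comp(id, &wc);          /* wait for hardware completion */

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    free(buf);
    return 0;
}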
Tomorrow's Data Center: Separate Fabrics for Networking, Storage, and Clustering
[Diagram: the same three fabrics, each migrated to iWARP Ethernet — an iWARP Ethernet SAN in place of Fibre Channel block storage, an iWARP Ethernet clustering fabric in place of Myrinet, Quadrics, InfiniBand, etc., and an iWARP Ethernet LAN serving NAS and users — still with separate adapters and switches for networking, storage, and clustering.]
Converged iWARP Ethernet: Single Adapter for All Traffic
[Diagram: one converged iWARP Ethernet fabric carries networking, storage (SAN and NAS), and clustering traffic through a single adapter and switch, connecting applications to users.]
A converged fabric for networking, storage, and clustering offers:
- Smaller footprint
- Lower complexity
- Higher bandwidth
- Lower power
- Lower heat dissipation
NetEffect's NE010 iWARP Ethernet Channel Adapter
High Bandwidth: Storage/Clustering Fabrics
[Chart: bandwidth (MB/s) vs. message size (bytes) for the NE010, a 10 GbE NIC, InfiniBand over PCI-X, and InfiniBand over PCIe. Source: InfiniBand, OSU website; 10 GbE NIC and NE010, NetEffect measured.]
Multiple Connections: Realistic Data Center Loads
[Chart: bandwidth (MB/s) vs. message size (bytes) for the NE010 and InfiniBand, each with 1 queue pair and with 2048 queue pairs. Source: InfiniBand, Mellanox website; NE010, NetEffect measured.]
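In verbs terms, each connection in this comparison is its own queue pair, so the 2048-QP curves show how an adapter behaves when it must hold per-connection state for thousands of simultaneously active endpoints rather than one. As a rough, hypothetical illustration of that kind of load (client side only; the server, traffic loop, and the placeholder address/port are assumptions carried over from the earlier sketch):

/* Hypothetical sketch: open NUM_CONN independent RDMA connections, each
 * backed by its own queue pair, as a stand-in for a many-connection load. */
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

#define NUM_CONN 2048

int open_connections(struct rdma_cm_id *ids[NUM_CONN])
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    int i;

    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;
    if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res))
        return -1;

    for (i = 0; i < NUM_CONN; i++) {
        memset(&attr, 0, sizeof attr);
        attr.cap.max_send_wr = attr.cap.max_recv_wr = 16;
        attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
        attr.sq_sig_all = 1;

        /* One rdma_cm_id == one connection == one queue pair. */
        if (rdma_create_ep(&ids[i], res, NULL, &attr) ||
            rdma_connect(ids[i], NULL))
            return -1;
        /* Traffic would then be posted on every id; the adapter has to
         * keep all of these connections' state active at once.          */
    }
    rdma_freeaddrinfo(res);
    return 0;
}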
NE010 Processor Off-load
[Chart: bandwidth (MB/s) and CPU utilization (0–100%) vs. message size (bytes) for the NE010 and a 10 GbE NIC.]
Low Latency: Required for Clustering Fabrics
[Chart: application latency (µsec) vs. message size (bytes) for the NE010, InfiniBand, and a 10 GbE NIC. Source: InfiniBand, OSU website; 10 GbE NIC and NE010, NetEffect measured.]
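The slides do not spell out the measurement methodology, but application-level latency for small messages is conventionally reported as half the average round-trip time of a ping-pong exchange. Below is a hypothetical sketch of such a loop, reusing the connected endpoint, registered buffer, and librdmacm helpers from the client sketch above, and assuming a server that echoes each message back.

/* Ping-pong latency sketch (hypothetical): send a small message, wait for
 * the echo, and report half the average round-trip time in microseconds.
 * Assumes a connected rdma_cm_id, a registered buffer of at least MSG_SIZE
 * bytes (see the client sketch above), and a server that echoes each
 * message back.                                                          */
#include <time.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

#define ITERS    10000
#define MSG_SIZE 8                 /* small message: latency-dominated */

double pingpong_latency_usec(struct rdma_cm_id *id, void *buf, struct ibv_mr *mr)
{
    struct timespec t0, t1;
    struct ibv_wc wc;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERS; i++) {
        rdma_post_recv(id, NULL, buf, MSG_SIZE, mr);  /* ready for the echo */
        rdma_post_send(id, NULL, buf, MSG_SIZE, mr, 0);
        rdma_get_send_comp(id, &wc);
        rdma_get_recv_comp(id, &wc);                  /* echo has arrived   */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double usec = (t1.tv_sec - t0.tv_sec) * 1e6 +
                  (t1.tv_nsec - t0.tv_nsec) / 1e3;
    return usec / ITERS / 2.0;     /* one-way latency estimate */
}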
Low Latency: Required for Clustering Fabrics (continued)
[Chart: application latency, from D. Dalessandro, OSC, May 2006.]
Summary & Contact Information
- iWARP-enhanced TCP/IP Ethernet is the next generation of Ethernet for all data center fabrics
  - A full implementation is necessary to meet the most demanding requirement: clustering and grid
  - A true converged I/O fabric for large clusters
- NetEffect delivers "best of breed" performance
  - Latency directly competitive with proprietary clustering fabrics
  - Superior throughput and bandwidth under load
  - Dramatically lower CPU utilization
For more information contact: Gary Montry, Senior Scientist