1
100 GFTP: An Ultra-High Speed Data Transfer Service Over Next Generation 100 Gigabit Per Second Network
Dantong Yu, Stony Brook University / Brookhaven National Lab
2
Outline
Project Personnel Update
- Dantong Yu, Thomas Robertazzi
- Post-doctoral associates: Qian Chen (September 27, 2010), Shudong Jin (October 1, 2010)
- Student members: Yufen Ren, Tan Li, Rajat Sharma
Project Introduction and Challenges
Software Architecture
Project Plan and Intermediate Testbed
Technical Discussion: RDMA vs. TCP
3
End-to-End 100G Networking
[Diagram] End-to-End Networking at 100 Gbits/s: on each end host, 100G applications (100G APPS) feed FTP 100 (our project and its role), which drives a 100G NIC; the two hosts are connected across the 100 Gbits/s backbone.
4
Problem Definition and Scope
Conventional data transfer protocols (TCP/IP) and file I/O have performance gaps.
Reliable transfer (error checking and recovery) at 100G speed.
Coordinated data transfer flow that efficiently traverses file systems and the network; data path decomposition:
- Data read-in: from source disks to user memory (backend data path). Need external collaborators to work on this together.
- Transport: from source memory to destination host memory (frontend data path).
- Data write-out: from user memory to destination disks (backend data path).
Cost-effective end-to-end data transfer from sources to sinks (10x10GE vs. 1x100GE); reduced port counts.
5
Challenges (Manageable)
Host system bottlenecks:
- Intel architecture: QuickPath Interconnect. Theoretical rate 6.4 GT/s; 6.4 x 16 (effective link width) x 2 (two links for bidirectional) / 8 = 25.6 GBytes/s.
- AMD architecture: HyperTransport. For HT 3.1, a 16-bit bus width gives the same 25.6 GBytes/s rate.
- Requires PCIe 2.0 (500 MB/s per lane) x16 = 8 GB/s (one direction).
PCIe and PCIe-based network cards:
- All current NICs are PCIe 2.0 (500 MB/s per lane) x8 = 4 GB/s (one direction).
- The fastest option, PCIe 2.0 (500 MB/s per lane) x16 = 8 GB/s (one direction), is required for 40 Gbps.
- PCIe 3.0 x16, which doubles the speed of PCIe 2.0, is required for 100 Gbps.
6
Challenges with Some Uncertainties, and Proposed Solutions
File system bottlenecks: how to do file stage-in/out
- Kernel/software stacks are slow, the same problem as with TCP.
- Look into zero copy, so data is moved into user space with a single copy; fopen, sendfile, and O_DIRECT each have some problem or restriction (a sketch of the O_DIRECT path appears below).
- Look into Lustre RDMA to pull data directly into user space. Can a single file client (single server) pull files at 100 Gbps? Look for collaborators who have this type of expertise.
Storage: need to support 100 Gbps in/out from the disk spindles
- Multiple RAID controllers (large cache): LSI 3ware supports up to 2.5 GByte/s READ, so multiple RAID controllers are needed to reach 100 Gbps, i.e. multiple files need to be streamed into the buffer in parallel.
- A switch fabric interconnects the disk servers and the FTP 100 servers.
- Storage aggregation from the disks into the FTP server disk partition.
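As a minimal sketch of the O_DIRECT path mentioned above (not the project's code; the 4 KiB alignment and 10 MB chunk size are assumptions), reading a file while bypassing the page cache looks roughly like this:

```c
/* A minimal sketch (assumptions noted, not the project's code): reading a file
 * with O_DIRECT so the page cache is bypassed and data lands directly in an
 * application buffer. O_DIRECT requires the buffer, offset, and transfer size
 * to be aligned; 4 KiB alignment and a 10 MB chunk are assumptions here. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN 4096                        /* assumed block alignment */
#define CHUNK (10 * 1024 * 1024)          /* assumed 10 MB staging chunk */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, ALIGN, CHUNK)) { fprintf(stderr, "posix_memalign failed\n"); return 1; }

    long long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, CHUNK)) > 0)
        total += n;                        /* here each chunk would be handed to the sender */
    if (n < 0) perror("read");

    printf("read %lld bytes, bypassing the page cache\n", total);
    free(buf);
    close(fd);
    return 0;
}
```

The same aligned buffer could then be handed to the network sender, avoiding a second copy through the page cache.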
7
FTP 100 Design Challenges
Such high-performance data transfer requires multiple concurrent file reads and writes.
Implement buffer management (stream multiple files into a buffer in system memory or NIC card memory), and provide a handshake with the backend file systems.
Challenge: synchronization between read and write (see the sketch below).
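A minimal sketch of that read/write handshake, assuming a two-slot staging buffer, pthread synchronization, and a dummy data source (the actual FTP 100 buffer manager and its NIC-memory option are not shown):

```c
/* Two-slot staging buffer: a "file reader" thread fills one slot while a
 * "network sender" thread drains the other, illustrating the read/write
 * synchronization problem. Block size and slot count are assumptions. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NSLOTS  2
#define BLOCK   (1024 * 1024)   /* assumed staging block size (kept small for the demo) */
#define NBLOCKS 8               /* blocks moved in the demo */

static struct { char data[BLOCK]; size_t len; int full; } slots[NSLOTS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* "file reader": stage block i, blocking while the sender has not drained it */
static void *reader(void *arg)
{
    (void)arg;
    for (int i = 0; i < NBLOCKS; i++) {
        pthread_mutex_lock(&lock);
        while (slots[i % NSLOTS].full)
            pthread_cond_wait(&cond, &lock);
        memset(slots[i % NSLOTS].data, 'A' + i, BLOCK);  /* stand-in for a disk read */
        slots[i % NSLOTS].len  = BLOCK;
        slots[i % NSLOTS].full = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* "network sender": take block i, blocking until the reader has filled it */
static void *sender(void *arg)
{
    (void)arg;
    long long sent = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        pthread_mutex_lock(&lock);
        while (!slots[i % NSLOTS].full)
            pthread_cond_wait(&cond, &lock);
        sent += slots[i % NSLOTS].len;   /* here the block would go to the RDMA send path */
        slots[i % NSLOTS].full = 0;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    printf("sender consumed %lld bytes\n", sent);
    return NULL;
}

int main(void)
{
    pthread_t r, s;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&s, NULL, sender, NULL);
    pthread_join(r, NULL);
    pthread_join(s, NULL);
    return 0;
}
```

A real implementation would replace the memset with a disk read and hand each full slot to the RDMA send path instead of just counting bytes.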
8
End System Multi-Layer Capability View
[Diagram] End-system multi-layer capability view; each layer has AA, control, service, management, and security planes. Application/middleware layer: Climate 100/OSG applications, SRM/BeSTMan, with application/middleware security and management. Layer 4: FTP 100 over RDMA, TCP/UDP, TeraPaths control and services. Layer 3: QoS, MPLS, IP, TeraPaths services. Layer 2: VLANs/Ethernet/InfiniBand, OSCARS services (leveraging existing systems). Layer 1/2: G709 in the Acadia 100G NIC, layer-1 services. (Some planes are marked as not in the implementation.) Underlying network: DOE 100G ANI.
9
FTP Development with OpenFabrics
[Diagram] Software stack: the application (RDMA CM, memory registration, queue management, verbs, and file operations) sits in user space on top of the OFED InfiniBand verbs libraries; the kernel space holds the OpenFabrics kernel modules, Lustre (the cluster file system), and the iWARP and IB drivers; the hardware layer is an iWARP RNIC over Ethernet or an InfiniBand HCA over InfiniBand fabric.
rdmacm: RDMA communication manager. User-space libraries for establishing RDMA communication, including both InfiniBand-specific and general RDMA communication management for unreliable datagram, reliable connected, and multicast data transfers.
libibverbs: a library that allows user-space processes to use InfiniBand/RDMA "verbs" directly.
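To illustrate the memory-registration step in this stack, here is a minimal libibverbs sketch (not the project's code; the choice of the first device, the 10 MB buffer size, and the access flags are assumptions):

```c
/* Pinning a user-space buffer with libibverbs so the RNIC/HCA can DMA
 * directly into and out of it. Compile with -libverbs. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

#define BUF_SIZE (10 * 1024 * 1024)   /* assumed 10 MB staging buffer */

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first HCA/RNIC */
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */
    if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

    void *buf = malloc(BUF_SIZE);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %d MB, lkey=0x%x rkey=0x%x\n",
           BUF_SIZE >> 20, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

Registration pins the pages and returns the lkey/rkey that later work requests and remote peers use to address the buffer.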
10
An Example of FTP via OpenFabrics
[Diagram] Put/Get exchange between an RDMA FTP client and an RDMA FTP server, backed by the file system (FS) and the FTP protocol. Connection establishment: the client calls rdma_getaddrinfo() and rdma_create_ep(); the server calls rdma_listen() and rdma_accept(), which blocks until a connection arrives from the client; the client calls rdma_connect(). Data transfer: rdma_post_send() on one side pairs with rdma_get_recv_comp() on the other. Teardown: rdma_disconnect(), rdma_dereg_mr(), rdma_destroy_ep().
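A minimal sketch of the client side of this sequence, using the librdmacm wrappers; the host name ("netqos04"), port ("7471"), queue depths, and the "PUT" payload are placeholders for illustration, not FTP 100's actual protocol:

```c
/* Client side of the connection-establishment and send sequence shown above,
 * built on librdmacm (rdma_cma.h) and its verbs wrappers (rdma_verbs.h). */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

#define MSG_SIZE 1024

int main(void)
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char buf[MSG_SIZE] = "PUT example-file";          /* placeholder request */

    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;                /* reliable connected service */
    if (rdma_getaddrinfo("netqos04", "7471", &hints, &res)) {
        perror("rdma_getaddrinfo"); return 1;
    }

    memset(&attr, 0, sizeof attr);
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 4;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.qp_type = IBV_QPT_RC;
    attr.sq_sig_all = 1;
    if (rdma_create_ep(&id, res, NULL, &attr)) {      /* cm_id plus queue pair */
        perror("rdma_create_ep"); return 1;
    }

    mr = rdma_reg_msgs(id, buf, MSG_SIZE);            /* register the send buffer */

    if (rdma_connect(id, NULL)) {                     /* server must be in rdma_accept() */
        perror("rdma_connect"); return 1;
    }

    rdma_post_send(id, NULL, buf, MSG_SIZE, mr, 0);   /* send the request */
    while (rdma_get_send_comp(id, &wc) == 0)          /* wait for the send completion */
        ;

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}
```

The server side mirrors this with rdma_listen() and rdma_accept(), posting a receive buffer whose completion (rdma_get_recv_comp()) signals the arrival of the client's send.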
11
One-Year Roadmap (08/10 through 07/11)
- Iperf+RDMA for data and file transfer; 25 Gbps Lustre testbed.
- FTP Version 0.1: OpenBSD FTP + RDMA, single file transfer; 40 Gbps Lustre testbed; in-house back-to-back 25+10 Gbps data transfer test.
- FTP Version 0.2: multiple-file parallel transfer function; Lustre file system support.
- FTP Version 0.3: bug fixes and performance improvement.
- FTP Version 1.0: support for the Acadia emulated 40 Gbps NIC; 40 Gbps performance test with all components; FTP 100 and Acadia integration into the BNL 40 Gbps infrastructure.
12
25 Gbps Lustre System Testbed (Planned)
[Diagram] IB Lustre file system: six OSS nodes behind a Mellanox switch, with an IBM System x3650 M3 client using RAM disks/SSD.
Front-end connection: 40GE (32 Gbps effective; PCIe 2.0 x8 = 500 MByte/s x 8 = 4 GByte/s).
Backend: each server has 4 SAS drives (4 x 150 MByte/s = 600 MByte/s read/write); the total for six servers is 3,600 MByte/s read/write.
BNL will pay for this Dell cluster.
13
40 Gbps Data Transfer Testbed for December 2010
[Diagram] IBM System x3650 M3 servers with RAM disks/SSD, equipped with Mellanox 40GE and Acadia NICs and interconnected through a Mellanox 40G IB switch over IB links; leverage the DOE ANI tabletop storage or buy our own storage.
14
100 Gbps Data Transfer Testbed Proposal
[Diagram] Local site: IBM System x3650 M3 servers with RAM disks/SSD, storage, and a Mellanox 40G IB switch (IB links), attached through an Acadia NIC to the DOE 100 Gbits/s backbone or in-lab fiber; the remote site connects through its own Acadia NIC. Leverage the DOE ANI tabletop storage or buy our own storage.
15
Conclusion
For a single data transfer stream with no disk operations, the RDMA transport is twice as fast as TCP, while RDMA incurs only about 10% of the CPU load measured under TCP.
FTP includes two components: networking and file operations. Compared with the RDMA operations, the file operations (limited by disk performance) account for most of the CPU usage. Therefore, a well-designed file buffer model is critical.
16
Future work
- Set up the Lustre environment and configure Lustre with the RDMA function.
- Start the FTP migration to RDMA: source control, bug database, documentation, unit tests.
17
SOME PRELIMINARY RESULTS
18
Current Environment
[Diagram] Netqos03 (client) and Netqos04 (server) connected by 40 Gbps Mellanox Ethernet; this link can support both RDMA and TCP.
Open question: is there a switch between the two servers?
19
Tool: iperf
Migrated iperf to the RDMA environment with OFED (librdmacm and libibverbs); 2,000+ source lines of code added (from 8,382 to ...).
iperf usage extended:
- -H: RDMA transfer mode instead of TCP/UDP
- -G: pr (passive read) or pw (passive write); data is read from the server, or the server writes into the clients
- -O: output data file, for both the TCP server and the RDMA server
Only one stream is transferred.
20
Test Suites
- Test suite 1: memory -> memory
- Test suite 2: file -> memory -> memory
  - Test case 2.1: file (regular file) -> memory -> memory
  - Test case 2.2: file (/dev/zero) -> memory -> memory
- Test suite 3: memory -> memory -> file
  - Test case 3.1: memory -> memory -> file (regular file)
  - Test case 3.2: memory -> memory -> file (/dev/null)
- Test suite 4: file -> memory -> memory -> file
  - Test case 4.1: file (regular file) -> memory -> memory -> file (regular file)
  - Test case 4.2: file (/dev/zero) -> memory -> memory -> file (/dev/null)
21
File choice
File operations use the standard I/O library (fread, fwrite) and are cached by the OS.
Input from /dev/zero tests the maximum application data transfer rate including the file read operation, meaning the disk is not the bottleneck.
Output to /dev/null tests the maximum application data transfer rate including the file write operation, meaning the disk is not the bottleneck.
22
Buffer choice
The RDMA operation block size is 10 MB, moved by one RDMA READ/WRITE at a time. A previous experiment showed that, in this environment, block sizes above 5 MB have little effect on the transfer speed.
The TCP read/write buffer size is the default TCP window size: 85.3 KByte.
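For reference, a small sketch of how the default TCP receive buffer could be checked on these hosts (the exact value depends on the kernel's settings; 85.3 KByte corresponds to the common 87,380-byte default):

```c
/* A small illustration (not part of the test harness): query the default TCP
 * receive buffer that the measurements above leave untouched. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) < 0) {
        perror("getsockopt");
        return 1;
    }
    printf("default SO_RCVBUF: %d bytes (%.1f KByte)\n", rcvbuf, rcvbuf / 1024.0);

    close(fd);
    return 0;
}
```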
23
Test case 1: memory -> memory CPU
24
Test case 1: memory -> memory Bandwidth
25
Test case 2.1: (fread) file(regular file) -> memory -> memory CPU
26
Test case 2.1: (fread) file(regular file) -> memory -> memory Bandwidth
27
Test case 2.2 (five minutes) file(/dev/zero) -> memory -> memory CPU
28
Test case 2.2 (five minutes) file(/dev/zero) -> memory -> memory Bandwidth
29
Test case 3.1 (a 200 GB file is generated): memory -> memory -> file(regular file) CPU
Bandwidths are almost the same!
30
Test case 3.1 (a 200 GB file is generated): memory -> memory -> file(regular file) Bandwidth
Bandwidths are almost the same!
31
Test case 3.2: memory -> memory -> file(/dev/null) CPU
32
Test case 3.2: memory -> memory -> file(/dev/null) Bandwidth
33
Test case 4.1: file(r) -> memory -> memory -> file(r) CPU
34
Test case 4.1: file(r) -> memory -> memory -> file(r) Bandwidth
35
Test case 4.2: file(/dev/zero) -> memory -> memory -> file(/dev/null) CPU
36
Test case 4.2: file(/dev/zero) -> memory -> memory -> file(/dev/null) Bandwidth
37
FTP Version 1.0: Support the Acadia Emulated 40 Gbps NIC