1 High Speed WAN Data Transfers for Science
J. Bunn, D. Nae, H. Newman, S. Ravot, X. Su, Y. Xia, California Institute of Technology
Session: Recent Results, TERENA Networking Conference 2005, Poznan, Poland, 6-9 June

2 AGENDA
- TCP performance over high-speed, high-latency networks
- End-system performance
- Next-generation networks
- Advanced R&D projects

3 Single TCP Stream Performance under Periodic Losses
Available bandwidth = 1 Gbps; loss rate = 0.01%:
- LAN BW utilization = 99%
- WAN BW utilization = 1.2%
- TCP throughput is much more sensitive to packet loss in WANs than in LANs
- TCP's congestion control algorithm (AIMD) is not well suited to gigabit networks
- The effect of packet loss can be disastrous
- TCP is inefficient over high bandwidth*delay networks
- The performance outlook for computational grids is poor if we continue to rely solely on the widely deployed TCP Reno
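The slide's LAN/WAN numbers are consistent with the well-known loss-throughput approximation for TCP Reno (Mathis et al.), BW ≈ MSS / (RTT · sqrt(2p/3)). The sketch below is an illustration only; the ~1 ms LAN RTT and ~120 ms WAN RTT are assumed values, not figures from the talk.

```python
# Rough illustration (not from the talk): Mathis et al. approximation for
# loss-limited TCP Reno throughput, BW ~= MSS / (RTT * sqrt(2p/3)).
# The 1 ms LAN RTT and 120 ms WAN RTT are assumed values.
from math import sqrt

def reno_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP Reno throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_s * sqrt(2 * loss_rate / 3))

link_bps = 1e9    # 1 Gbps of available bandwidth
loss = 1e-4       # 0.01% packet loss
mss = 1460        # bytes

for name, rtt_s in [("LAN", 0.001), ("WAN", 0.120)]:
    bw = min(reno_throughput_bps(mss, rtt_s, loss), link_bps)
    print(f"{name}: ~{bw / link_bps:.1%} of the 1 Gbps link")
```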

4 Single TCP Stream between Caltech and CERN
- Available (PCI-X) bandwidth = 8.5 Gbps
- RTT = 250 ms (16,000 km)
- 9000-byte MTU
- 15 min to increase throughput from 3 to 6 Gbps
- Sending station: Tyan S2882 motherboard, 2x Opteron 2.4 GHz, 2 GB DDR
- Receiving station: CERN OpenLab HP rx4640, 4x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
- Network adapter: S2io 10 GbE
[Throughput plot annotations: burst of packet losses; single packet loss; CPU load = 100%]

5 Responsiveness

  Path                  Bandwidth   RTT (ms)   MTU (bytes)   Time to recover
  LAN                   10 Gb/s       1        1500          430 ms
  Geneva-Chicago        10 Gb/s     120        1500          1 hr 32 min
  Geneva-Los Angeles     1 Gb/s     180        1500          23 min
  Geneva-Los Angeles    10 Gb/s     180        1500          3 hr 51 min
  Geneva-Los Angeles    10 Gb/s     180        9000          38 min
  Geneva-Los Angeles    10 Gb/s     180        64k (TSO)     5 min
  Geneva-Tokyo           1 Gb/s     300        1500          1 hr 04 min

- A large MTU accelerates the growth of the congestion window
- The time to recover from a packet loss decreases with a large MTU
- A larger MTU reduces per-frame overhead (saves CPU cycles, reduces the number of packets)
- Time to recover from a single packet loss: T = C · RTT² / (2 · MSS), where C is the capacity of the link
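The table can be sanity-checked by evaluating that formula directly; the short sketch below (illustrative only, with header overhead ignored) does so for a few of the paths:

```python
# Illustrative check of the responsiveness table: time for one TCP Reno flow
# to recover its full rate after a single packet loss,
#   T = C * RTT^2 / (2 * MSS),
# with C in bits/s, RTT in seconds and MSS in bits (header overhead ignored,
# so the results land close to, but not exactly on, the slide's figures).
cases = [
    ("LAN",                10e9, 0.001, 1500),
    ("Geneva-Chicago",     10e9, 0.120, 1500),
    ("Geneva-Los Angeles", 10e9, 0.180, 9000),
    ("Geneva-Tokyo",        1e9, 0.300, 1500),
]

for path, capacity_bps, rtt_s, mtu_bytes in cases:
    mss_bits = mtu_bytes * 8
    recovery_s = capacity_bps * rtt_s ** 2 / (2 * mss_bits)
    print(f"{path:20s} {recovery_s:8.1f} s  (~{recovery_s / 60:.0f} min)")
```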

6 Fairness: TCP Reno MTU and RTT Bias
- Two TCP streams share a 1 Gbps bottleneck between CERN (GVA) and Starlight (Chi); RTT = 117 ms
- MTU = 1500 bytes: avg. throughput over a period of 4000 s = 50 Mb/s
- MTU = 9000 bytes: avg. throughput over a period of 4000 s = 698 Mb/s
- A factor of 14!
- Connections with a large MTU increase their rate quickly and grab most of the available bandwidth (see the sketch below)
[Testbed diagram: Host #1 and Host #2 on 1 GE links behind a GbE switch at CERN (GVA), routers and a 2.5 Gbps POS link to Starlight (Chi), Host #1 and Host #2 at the far end; the 1 GE links are the bottleneck]
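The direction of the bias follows from Reno's additive increase of one MSS per RTT: with the same RTT and loss conditions, a simple MSS-proportional throughput model already favours the 9000-byte flow by the MTU ratio. The sketch below is just that back-of-the-envelope comparison under those simplifying assumptions, not a reproduction of the measurement:

```python
# Minimal sketch under simplifying assumptions (same RTT, same loss rate for
# both flows): steady-state Reno throughput is roughly proportional to MSS,
# so the large-MTU flow is expected to win by at least the MTU ratio.
# The measured gap on this path was even larger.
mtu_small, mtu_large = 1500, 9000          # bytes (from the slide)
measured_small, measured_large = 50, 698   # Mb/s over 4000 s (from the slide)

print(f"expected ratio (MSS-proportional model): {mtu_large / mtu_small:.0f}x")
print(f"measured ratio:                          {measured_large / measured_small:.0f}x")
```

The measured factor of 14 exceeds the model's factor of 6, which is consistent with the large-MTU flow also recovering from each loss much faster.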

7 TCP Variants
- HSTCP, Scalable TCP, Westwood+, BIC TCP, H-TCP
- FAST TCP:
  - Based on TCP Vegas
  - Uses end-to-end delay and loss to dynamically adjust the congestion window
  - Defines an explicit equilibrium
  - 7.3 Gbps between CERN and Caltech for hours
- FAST vs. Reno (capacity = 1 Gbps; 180 ms round-trip latency; 1 flow; 3600 s):
  - Linux TCP: BW utilization 19%
  - Linux TCP (optimized): BW utilization 27%
  - FAST: BW utilization 95%
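FAST's delay-based behaviour can be illustrated with a window update in the spirit of its published rule, w ← min(2w, (1-γ)w + γ(baseRTT/RTT · w + α)); the α and γ values and the toy queueing model in the sketch below are assumptions for illustration, not the parameters used in these experiments.

```python
# Rough sketch of a FAST-style, delay-based window update (in the spirit of
# the published FAST TCP rule). The alpha/gamma values and the toy queueing
# model below are illustrative assumptions, not the experiment's parameters.
#
#   w <- min(2w, (1 - gamma) * w + gamma * (baseRTT / RTT * w + alpha))
#
# The flow backs off as queueing delay (RTT - baseRTT) grows, instead of
# waiting for packet loss the way Reno does.

def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One periodic congestion-window update (w in packets, RTTs in seconds)."""
    target = (base_rtt / rtt) * w + alpha
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

base_rtt = 0.180          # 180 ms propagation delay, as in the FAST vs Reno test
w = 10.0
for _ in range(200):
    queueing_delay = 1e-5 * w            # hypothetical: queue grows with the window
    w = fast_window_update(w, base_rtt, base_rtt + queueing_delay)

print(f"window converges to about {w:.0f} packets (where ~alpha packets are queued)")
```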

8 TCP Variants Performance
- Tests between CERN and Caltech
- Capacity = OC-192, 9.5 Gbps; 264 ms round-trip latency; 1 flow
- Sending station: Tyan S2882 motherboard, 2x Opteron 2.4 GHz, 2 GB DDR
- Receiving station (CERN OpenLab): HP rx4640, 4x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
- Network adapter: Neterion 10 GE NIC
[Throughput plot: Linux TCP 3.0 Gbps; Linux Westwood+ 4.1 Gbps; Linux BIC TCP 5.0 Gbps; FAST TCP 7.3 Gbps]

9 Internet2 Land Speed Record (LSR): Redefining the Role and Limits of TCP
- Product of transfer speed and distance using standard Internet (TCP/IP) protocols
- Single-stream 7.5 Gbps x 16 kkm with Linux: July 2004
- IPv4 multi-stream record with FAST TCP: 6.86 Gbps x 27 kkm: Nov 2004
- IPv6 record: 5.11 Gbps between Geneva and Starlight: Jan. 2005
- Concentrating now on reliable Terabyte-scale file transfers
- Disk-to-disk marks: 536 MBytes/sec (Windows); 500 MBytes/sec (Linux)
- Note system issues: PCI-X bus, network interface, disk I/O controllers, CPU, drivers
- http://www.guinnessworldrecords.com/
[Figures: map of the Nov. 2004 record network; chart of Internet2 LSRs (blue = HEP), throughput in Petabit-m/sec, with the 7.2 Gbps x 20.7 kkm entry highlighted]
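For reference, the LSR metric is simply throughput multiplied by distance; the short sketch below recomputes it in Petabit-metres per second for two of the records quoted above (illustrative arithmetic only):

```python
# The Internet2 LSR metric is throughput times distance; recomputing it in
# Petabit-metres per second for two of the records quoted above.
records = [
    ("Single stream, Linux, Jul 2004",        7.5e9,  16_000e3),   # bps, metres
    ("IPv4 multi-stream, FAST TCP, Nov 2004", 6.86e9, 27_000e3),
]

for name, bps, metres in records:
    print(f"{name}: {bps * metres / 1e15:.0f} Petabit-m/s")
```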

10 High-Throughput Disk-to-Disk Transfers: From 0.1 to 1 GByte/sec
Server hardware (rather than network) bottlenecks:
- Write/read and transmit tasks share the same limited resources: CPU, PCI-X bus, memory, I/O chipset
- PCI-X bus bandwidth: 8.5 Gbps [133 MHz x 64 bit] (see the sketch below)
- Neterion SR 10 GE NIC
- 10 GE NIC: 7.5 Gbits/sec (memory-to-memory, with 52% CPU utilization)
- 2x 10 GE NIC (802.3ad link aggregation): 11.1 Gbits/sec (memory-to-memory)
- 1 TB at 536 MB/sec from CERN to Caltech
Performance in this range (from 100 MByte/sec up to 1 GByte/sec) is required to build a responsive Grid-based Processing and Analysis System for the LHC.
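As a quick sanity check (illustrative arithmetic only, not code from the project), the PCI-X bus figure and the disk-to-disk mark quoted above can be recomputed directly:

```python
# Illustrative arithmetic for the numbers quoted on this slide.
pcix_clock_hz = 133e6
pcix_width_bits = 64
print(f"PCI-X theoretical bandwidth: {pcix_clock_hz * pcix_width_bits / 1e9:.1f} Gbps")

disk_to_disk_mb_per_s = 536   # MBytes/sec, the CERN -> Caltech disk-to-disk mark
print(f"536 MBytes/sec corresponds to {disk_to_disk_mb_per_s * 8 / 1000:.2f} Gbps on the wire")
```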

11 SC2004: High Speed TeraByte Transfers for Physics
- Demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances
- Preview of the globally distributed grid system that is now being developed in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC)
- Monitoring the WAN performance using the MonALISA agent-based system
- Major partners: Caltech, FNAL, SLAC
- Major sponsors: Cisco, S2io, HP, Newisys
- Major networks: NLR, Abilene, ESnet, LHCnet, Ampath, TeraGrid
- Bandwidth Challenge award: 101 Gigabit Per Second Mark

12 SC2004 Network (I)
- 80 10 GE ports
- 50 10 GE network adapters
- 10 Cisco 7600/6500 switches dedicated to the experiment
- Backbone for the experiment built in two days

13 101 Gigabit Per Second Mark
[Aggregate bandwidth plot (source: Bandwidth Challenge committee); annotations: 101 Gbps, unstable end-systems, end of demo]

14 UltraLight: Developing Advanced Network Services for Data-Intensive HEP Applications
- UltraLight: a next-generation hybrid packet- and circuit-switched network infrastructure
- Packet-switched: cost-effective solution; requires ultrascale protocols to share 10G efficiently and fairly
- Circuit-switched: scheduled or sudden "overflow" demands handled by provisioning additional wavelengths; use path diversity, e.g. across the US, Atlantic, Canada, ...
- Extend and augment existing grid computing infrastructures (currently focused on CPU/storage) to include the network as an integral component
- Using MonALISA to monitor and manage global systems
- Partners: Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack; CERN, NLR, CENIC, Internet2; Translight, UKLight, Netherlight; UvA, UCL, KEK, Taiwan
- Strong support from Cisco

15 UltraLight 10GbE OXC @ Caltech
- L1, L2 and L3 services
- Hybrid packet- and circuit-switched PoP
- Control plane is L3

16 LambdaStation
- A joint Fermilab and Caltech project
- Enabling HEP applications to send high-throughput traffic between mass storage systems across advanced network paths
- Dynamic path provisioning across USNet, UltraLight, NLR; plus an ESnet/Abilene "standard path"
- DoE funded

17 LambdaStation: Results
- Plot shows functionality: alternate use of lightpaths and conventional routed networks
- GridFTP transfers

18 Summary
- For many years the Wide Area Network was the bottleneck; this is no longer the case in many countries, making deployment of a data-intensive Grid infrastructure possible!
- Some transport protocol issues still need to be resolved; however, there are many encouraging signs that practical solutions may now be in sight.
- The 1 GByte/sec disk-to-disk challenge. Today: 1 TB at 536 MB/sec from CERN to Caltech
- Next-generation network and Grid systems: UltraLight and LambdaStation
  - The integrated, managed network
  - Extend and augment existing grid computing infrastructures (currently focused on CPU/storage) to include the network as an integral component.

19 Thanks & Questions

