www.datatag.org

PROJECT RESULTS

Thanks to the exceptional spirit of cooperation between the European and North American teams involved in the DataTAG project, remarkable results were achieved in a very short time in the areas of:

- Interoperability between Globus-based European and US Grid projects, as demonstrated during the IST2002, SC2002 and IST2003 conferences.
- Grid-related network research, i.e. high-performance transport protocols, Quality of Service (QoS) and advance bandwidth reservation.

EARLY DEMOS AT iGRID2002

The iGRID2002 conference, held in Amsterdam in September 2002, provided an excellent opportunity to showcase some early demonstrations:

- With the University of Michigan, early prototypes of advance bandwidth reservation and Quality of Service.
- The first 12'000 km long "lightpath" experiment between the TRIUMF laboratory in Vancouver and CERN in Geneva, through Amsterdam, which resulted in the transfer of Terabytes of real physics data at speeds never achieved before.

INTERNET2 LAND SPEED RECORDS (I2LSR)

In order to stimulate research and experimentation in TCP transport, the Internet2 project created a contest with classes for IPv4 as well as IPv6 (the next-generation Internet), and for single streams as well as multiple streams. The so-called Internet2 Land Speed Record has already been beaten several times by teams closely associated with DataTAG. The IPv4 record, established on February 27th-28th, 2003 by a team from Caltech, CERN, LANL and SLAC with a single 2.38 Gbps stream over a 10'000 km path between Geneva and Sunnyvale through Chicago, has been entered in the Science and Technology section of the Guinness Book of Records [1]. Likewise, a new IPv6 record [2] was established on May 6th, 2003 by a team from Caltech and CERN with a single 983 Mbps stream over a 7'067 km path between Geneva and Chicago. The latest IPv4 record was established in October 2003 by a team from Caltech and CERN with a single 5.44 Gbps stream over the 7'073 km path between Geneva and Chicago, thus achieving the remarkable result of approximately 38'000 terabit-meters/second (a quick check of this figure follows at the end of this section).

As with all records, there is still ample room for further improvement. According to recent tests with HP RX2600 servers equipped with dual 1.5 GHz Intel Itanium 2 processors and Intel PRO/10GbE LR network interface cards, stable throughputs well above 5 Gbps can already be achieved today. With the advent of the latest PCI Express chipsets, faster processors and improved motherboards, there is little doubt that it will be feasible to push single-stream TCP performance much closer to 10 Gbps in the coming years.
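As a quick check of the record figure quoted above, the contest metric (bandwidth multiplied by terrestrial distance, formally defined in the next section) can be reproduced in a few lines of Python; the rates and distances come from the text above, while the code itself is purely illustrative:

    # Illustrative sketch: reproducing the I2LSR figures quoted above.
    # The contest metric is achieved bandwidth (bits/second) multiplied
    # by terrestrial distance (meters), i.e. bit-meters/second.

    def lsr_metric(rate_bps, distance_km):
        """Return the contest metric in bit-meters/second."""
        return rate_bps * distance_km * 1000

    # Latest IPv4 record: a single 5.44 Gbps stream over 7'073 km.
    print(lsr_metric(5.44e9, 7073) / 1e12)  # ~38'477 terabit-meters/second

    # IPv6 record: a single 983 Mbps stream over 7'067 km.
    print(lsr_metric(983e6, 7067) / 1e12)   # ~6'947 terabit-meters/second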

INTERNET2 LSR CONTEST RULES

Excerpts from the contest rules: "A data transfer must run for a minimum of 10 continuous minutes over a minimum terrestrial distance of 100 kilometers with a minimum of two router hops in each direction between the source node and the destination node across one or more operational and production-oriented high-performance research and education networks. [...] The winner in each Class will be whichever submitter is judged to have met all rules and whose results yield the largest value of the product of the achieved bandwidth (bits/second) multiplied by the sum of the terrestrial distances between each router location starting from the source node location and ending at the destination node location, using the shortest distance of the Earth's circumference measured in meters between each pair of locations." The contest unit of measurement is thus bit-meters/second.

HIGH PERFORMANCE TRANSPORT

TCP/IP's congestion avoidance algorithms, also known as "Slow Start" and "AIMD" (Additive Increase Multiplicative Decrease), have undergone only relatively minor changes since their inception by Van Jacobson back in 1988. Although extensive analyses and simulations have been performed with the NS and NS2 tools to establish the fairness of TCP under a variety of operating conditions, most of these studies unfortunately neglected the emerging very high speed (i.e. greater than 1 Gbps), long distance (i.e. over 10'000 km) networks. Such networks are already in use by worldwide collaborations such as the High Energy Physics community, whose data-intensive applications require the exchange of Petabytes, i.e. millions of Gigabytes, of data. As a result, the proper operation of TCP over long distance, high bandwidth networks has been somewhat overlooked, especially as tools like GridFTP in the Grid community offered a convenient way to circumvent the problem by using multiple parallel streams instead of a single stream. However, contrary to common belief, TCP is only fair between flows having similar characteristics such as packet size, round trip time and bandwidth.

The DataTAG testbed, together with its extensions to Amsterdam, Bologna and Lyon, has proved invaluable in validating, testing and sometimes even developing new TCP variants, such as Caltech's Grid-DT (Data Transport), the IETF's HighSpeed TCP [3] with limited slow start, Caltech's FAST [4] (Fast AQM Scalable TCP), the Hamilton Institute's H-TCP and Cambridge University's Scalable TCP [5], in collaboration with the Web100 [6] and Net100 [7] projects. Note that the TCP window required to achieve 10 Gb/s over a 10 Gb/s circuit with a round trip time of 170 ms (e.g. Geneva to Los Angeles) is greater than 200 MBytes, which translates to over 140'000 1500-byte packets "in flight" between the sender and the receiver! (A short calculation illustrating these figures follows the references below.)

REFERENCES

[1]
[2] NEWS/log200306/msg00003.html
[3] highspeed-01.txt
[4]
[5]
[6]
[7]
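To make the closing arithmetic concrete, here is a minimal Python sketch (purely illustrative, not DataTAG code) that reproduces the window size and packets-in-flight figures quoted above, and shows why standard AIMD recovers so slowly on such a path; the 1500-byte packet size is an assumed Ethernet MTU:

    # Bandwidth-delay product for the 10 Gb/s, 170 ms path quoted above.
    LINK_BPS = 10e9   # 10 Gb/s circuit
    RTT_S = 0.170     # 170 ms round trip time (e.g. Geneva to Los Angeles)
    MSS = 1500        # assumed packet size in bytes (Ethernet MTU)

    # The window needed to keep the pipe full.
    bdp_bytes = LINK_BPS * RTT_S / 8
    print(f"window:    {bdp_bytes / 1e6:.0f} MBytes")    # ~212, i.e. >200 MBytes
    print(f"in flight: {bdp_bytes / MSS:,.0f} packets")  # ~141,667, i.e. >140'000

    # After a single loss, AIMD halves the window and then grows it by
    # roughly one packet per round trip, so recovering the window takes:
    recovery_rtts = (bdp_bytes / 2) / MSS
    print(f"recovery:  {recovery_rtts * RTT_S / 3600:.1f} hours")  # ~3.3 hours

The hours-long recovery after a single packet loss is precisely what motivates the HighSpeed, FAST, H-TCP and Scalable TCP variants mentioned above, as well as the multi-stream workaround used by GridFTP.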