Technology for Using High Performance Networks or How to Make Your Network Go Faster…. Robin Tasker UK Light Town Meeting 9 September 2004.

Similar presentations
GridFTP Challenges In Data Transport John Bresnahan Argonne National Laboratory The University of Chicago.

MCCS391 - Application Project II Simon, Kuong Chio Ka Ramon, Vu Kai Chio Carl, Iun Sam Meng Presented by: 24 Jan 2002
TCP transfers over high latency/bandwidth network & Grid TCP Sylvain Ravot
Maximizing End-to-End Network Performance Thomas Hacker University of Michigan October 5, 2001.
Novell Server Linux vs. windows server 2008 By: Gabe Miller.
Client Side Mirror Selection Will Lefevers CS 526 Advanced Internet and Web Systems.
GridPP meeting Feb 03 R. Hughes-Jones Manchester WP7 Networking Richard Hughes-Jones.
I/O Channels I/O devices getting more sophisticated e.g. 3D graphics cards CPU instructs I/O controller to do transfer I/O controller does entire transfer.
Tutorials 1 1.What is the definition of a distributed system? 1.A distributed system is a collection of independent computers that appears to its users.
DataTAG Meeting CERN 7-8 May 03 R. Hughes-Jones Manchester 1 High Throughput: Progress and Current Results Lots of people helped: MB-NG team at UCL MB-NG.
ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones Network Measurement & Characterisation and the Challenge of SuperComputing SC200x.
1 High Performance WAN Testbed Experiences & Results Les Cottrell – SLAC Prepared for the CHEP03, San Diego, March 2003
DISTRIBUTED COMPUTING
Ch. 28 Q and A IS 333 Spring Q1 Q: What is network latency? 1.Changes in delay and duration of the changes 2.time required to transfer data across.
All rights reserved © 2006, Alcatel Accelerating TCP Traffic on Broadband Access Networks  Ing-Jyh Tsang 
Marwan Al-Namari Week 1. Teaching Plan: Weeks 1 – 14. Week 1-6 (In week 4 you will have a Quiz No.1). Mid Term Holiday Mid-Term Exam. Week 7-14 (In week.
TNC 2007 Bandwidth-on-demand to reach the optimal throughput of media Brecht Vermeulen Stijn Eeckhaut, Stijn De Smet, Bruno Volckaert, Joachim Vermeir,
Large File Transfer on 20,000 km - Between Korea and Switzerland Yusung Kim, Daewon Kim, Joonbok Lee, Kilnam Chon
Semester 1 3 JEOPARDY CHAPTER 1 REVIEW Ryan McCaigue.
TCP/IP Essentials A Lab-Based Approach Shivendra Panwar, Shiwen Mao Jeong-dong Ryoo, and Yihan Li Chapter 5 UDP and Its Applications.
Maximizing End-to-End Network Performance Thomas Hacker University of Michigan October 26, 2001.
Kevin Dunford – Windows Support & Development What do I do.. Support, configuration, and development of - Windows servers, desktops, Laptops, printers,
Group ID: Guided By: Rushabh Doshi Prepared By: Jubin Goswami Milan Valambhiya.
Network Tests at CHEP K. Kwon, D. Han, K. Cho, J.S. Suh, D. Son Center for High Energy Physics, KNU, Korea H. Park Supercomputing Center, KISTI, Korea.
Block1 Wrapping Your Nugget Around Distributed Processing.
Srihari Makineni & Ravi Iyer Communications Technology Lab
Managed Object Placement Service John Bresnahan, Mike Link and Raj Kettimuthu (Presenting) Argonne National Lab.
Parallel TCP Bill Allcock Argonne National Laboratory.
TCP behavior of a Busy Internet Server: Analysis and Improvements Y2K Oct.10 Joo Young Hwang Computer Engineering Research Laboratory KAIST. EECS.
GridPP Collaboration Meeting Networking: Current Status Robin Tasker CCLRC, Daresbury Laboratory 3 June 2004.
Internet data transfer record between CERN and California Sylvain Ravot (Caltech) Paolo Moroni (CERN)
Experiences Tuning Cluster Hosts 1GigE and 10GbE Paul Hyder Cooperative Institute for Research in Environmental Sciences, CU Boulder Cooperative Institute.
Network Structure Elements of communication message source the channel message destination Network data or information networks capable of carrying many.
Project Results Thanks to the exceptional cooperation spirit between the European and North American teams involved in the DataTAG project,
TCP transfers over high latency/bandwidth networks Internet2 Member Meeting HENP working group session April 9-11, 2003, Arlington T. Kelly, University.
Performance Engineering E2EpiPEs and FastTCP Internet2 member meeting - Indianapolis World Telecom Geneva October 15, 2003
30 June Wide Area Networking Performance Challenges Olivier Martin, CERN UK DTI visit.
BNL Service Challenge 3 Status Report Xin Zhao, Zhenping Liu, Wensheng Deng, Razvan Popescu, Dantong Yu and Bruce Gibbard USATLAS Computing Facility Brookhaven.
1 Protocol Layering Myungchul Kim Tel:
GNEW2004 CERN March 2004 R. Hughes-Jones Manchester 1 Lessons Learned in Grid Networking or How do we get end-2-end performance to Real Users ? Richard.
Development of a QoE Model Himadeepa Karlapudi 03/07/03.
BMTS 242: Computer and Systems Lecture 2: Memory, and Software Yousef Alharbi Website
TCP transfers over high latency/bandwidth networks & Grid DT Measurements session PFLDnet February 3- 4, 2003 CERN, Geneva, Switzerland Sylvain Ravot
EDC Intenet2 AmericaView TAH Oct 28, 2002 AlaskaView Proof-of-Concept Test Tom Heinrichs, UAF/GINA/ION Grant Mah USGS/EDC Mike Rechtenbaugh USGS/EDC Jeff.
Grid Network Performance Monitoring for e-Science.
Chapter 1 : Computer Networks. Lecture 2. Computer Networks Classification: 1- Depend on the geographical area. 2- Depend on functional relationship.
F. HemmerUltraNet® Experiences SHIFT Model CPU Server CPU Server CPU Server CPU Server CPU Server CPU Server Disk Server Disk Server Tape Server Tape Server.
© 2015 Pittsburgh Supercomputing Center Opening the Black Box Using Web10G to Uncover the Hidden Side of TCP CC PI Meeting Austin, TX September 29, 2015.
Final EU Review - 24/03/2004 DataTAG is a project funded by the European Commission under contract IST Richard Hughes-Jones The University of.
S. Ravot, J. Bunn, H. Newman, Y. Xia, D. Nae California Institute of Technology CHEP 2004 Network Session September 1, 2004 Breaking the 1 GByte/sec Barrier?
1 FAST TCP for Multi-Gbps WAN: Experiments and Applications Les Cottrell & Fabrizio Coccetti– SLAC Prepared for the Internet2, Washington, April 2003
1 Achieving Record Speed Trans-Atlantic End-to-end TCP Throughput Les Cottrell – SLAC Prepared for the NANOG meeting, Salt Lake City, June 2003
Recent experience with PCI-X 2.0 and PCI-E network interfaces and emerging server systems Yang Xia Caltech US LHC Network Working Group October 23, 2006.
RATION CARD MANAGEMENT SYSTEM
Computer Networking A Top-Down Approach Featuring the Internet Introduction Jaypee Institute of Information Technology.
iperf a gnu tool for IP networks
Internet Socket Programing
Software Architecture in Practice
R. Hughes-Jones Manchester
Networking for grid Network capacity Network throughput
TransPAC HPCC Engineer
Chapter III, Desktop Imaging Systems and Issues: Lesson III Moving Image Data
DataTAG Project update
Wide Area Networking at SLAC, Feb ‘03
Peano Trees, Data Striping, and Distributed Computing
Lecture 12 Internet Protocols Internet resource allocation and QoS
Breaking the Internet2 Land Speed Record: Twice
Introduction To Distributed Systems
High-Performance Data Transport for Grid Applications
Presentation transcript:

Technology for Using High Performance Networks or How to Make Your Network Go Faster…. Robin Tasker UK Light Town Meeting 9 September 2004

One Terabyte of data transferred in less than an hour. On 27-28 February 2003 the transatlantic DataTAG network was extended to span CERN - Chicago - Sunnyvale (>10,000 km). For the first time, a terabyte of data was transferred across the Atlantic in less than one hour using a single TCP (Reno) stream. The transfer ran from Sunnyvale to Geneva at a rate of 2.38 Gbit/s. Throughput? What’s the problem?
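A quick sanity check of those headline numbers (this arithmetic is an addition, not from the original slides): one terabyte is 8 × 10^12 bits, and at 2.38 Gbit/s that takes

    8 × 10^12 bits ÷ 2.38 × 10^9 bit/s ≈ 3,360 s ≈ 56 minutes

so the terabyte does indeed fit inside the hour, with little margin to spare.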

Just the Internet2 Land Speed Record… OK, we can get transatlantic rates of 6.5 Gbit/s, but how was that done? What’s the magic? So you thought 2.38 Gbit/s was good?

Just a Well Engineered End-to-End Connection
[Diagram: Client – Campus – Regional – Internet – Server]
End-to-End “no loss” environment from CERN to Sunnyvale!
- At least a 2.5 Gbit/s capacity pipe on the end-to-end path
- Processor speed and system bus characteristics
- TCP configuration: window size and frame size (MTU) (see the sketch below)
- Network Interface Card, its associated driver, and their configuration
- A single TCP connection on the end-to-end path
- Memory-to-memory transfer; no disk system involved
- No real user application
That’s to say the devil is in the detail… Sorry. No magic here…
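To make the window-size item concrete: a single TCP connection can only keep the pipe full if its window covers the bandwidth-delay product (BDP) of the path. As a sketch, assuming a 2.5 Gbit/s pipe and a transatlantic round-trip time of roughly 180 ms (the RTT figure is an assumption, not from the slides):

    BDP = capacity × RTT = 2.5 × 10^9 bit/s × 0.18 s ≈ 450 Mbit ≈ 56 MB

so sender and receiver each need TCP buffers of around 64 MB, far beyond the default settings of the day.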

Just a Well Engineered End-to-End Connection
[Diagram: Client – Campus – UK Light – Server]
End-to-End “no loss” environment
- At least a 2.5 Gbit/s capacity pipe on the end-to-end path
- Processor speed and system bus characteristics
- TCP configuration: window size and frame size (MTU)
- Network Interface Card, its associated driver, and their configuration
- A single TCP connection on the end-to-end path
- Memory-to-memory transfer; no disk system involved
- No real user application
Even with UK Light, the devil is in the detail… and it’s harder! And how about the same across UK Light?

The Easy Bits…. :-)
- End-to-End “no loss” environment
- At least a 2.5 Gbit/s capacity pipe on the end-to-end path
- Processor speed and system bus characteristics
- TCP configuration: window size and frame size (MTU)
- Network Interface Card, its associated driver, and their configuration
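On a Linux host, the buffer and MTU items above translate into settings along these lines; this is a minimal sketch, with values sized for the ~56 MB bandwidth-delay product worked out earlier (interface name and exact numbers are illustrative):

    # Raise the socket buffer ceilings (bytes)
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864
    # TCP buffer limits: min, default, max (bytes)
    sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
    # Jumbo frames, if every hop on the path supports them
    ifconfig eth0 mtu 9000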

Read the Details Here

Now for the hard bits…

A Single TCP Connection

Fortunately there’s good news!
- Standard TCP: recovery >10 minutes
- Scalable TCP: very rapid recovery
- High Speed TCP: rapid recovery
(Comparison of TCP stack performance under a loss rate of 1 in 10^6, RTT = 108 ms)
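The “>10 minutes” figure for standard TCP follows from its additive-increase rule: after a loss the congestion window W is halved and then grows back by one segment per round trip, so recovery takes about W/2 round trips. A sketch using the slide’s parameters plus an assumed 2.5 Gbit/s operating rate and 1500-byte segments:

    W ≈ 2.5 × 10^9 bit/s × 0.108 s ÷ (1500 × 8 bit) ≈ 22,500 segments
    recovery time ≈ (W ÷ 2) × RTT ≈ 11,250 × 0.108 s ≈ 1,200 s ≈ 20 minutes

Scalable TCP and High Speed TCP change the increase and decrease rules precisely so that recovery stays fast even at very large window sizes.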

Memory to memory; no disk system
- High Speed TCP transfer using Iperf, i.e. no disk system and no application
- Web100 records of High Speed TCP during an HTTP GET data transfer, i.e. disk system but no application
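Iperf is the usual way to take the disk out of the measurement, since it sources and sinks data in memory. A typical pair of invocations for a tuned path might look like this (host name and buffer size are illustrative):

    # Receiver
    iperf -s -w 32M
    # Sender: 60-second memory-to-memory test, reporting every 5 seconds
    iperf -c receiver.example.org -w 32M -t 60 -i 5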

Understanding disk systems
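The slide’s measurements are not reproduced here, but the underlying advice stands: measure raw disk throughput before blaming the network. A minimal sequential test on Linux (file path and sizes are illustrative; the direct-I/O flags bypass the page cache so the disk itself is measured):

    # Write 4 GB sequentially
    dd if=/dev/zero of=/data/testfile bs=1M count=4096 oflag=direct
    # Read it back
    dd if=/data/testfile of=/dev/null bs=1M iflag=direct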

No real user application
- High Speed TCP transfer using Iperf, i.e. no disk system and no application
- Web100 records of High Speed TCP during an HTTP GET data transfer, i.e. disk system but no application
- Web100 records of High Speed TCP during a GridFTP data transfer, i.e. disk system and real user application
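For the GridFTP case, the transfer tool itself exposes the relevant knobs. A sketch using globus-url-copy with parallel streams and an explicit TCP buffer (host names and values are illustrative):

    # 4 parallel TCP streams, 16 MB TCP buffer, 1 MB blocks
    globus-url-copy -p 4 -tcp-bs 16777216 -bs 1048576 \
        file:///data/source.dat gsiftp://remote.example.org/data/dest.dat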

Understand your Application. It’s YOUR application, so remember the Three Golden Rules: Benchmark! Benchmark!! Benchmark!!!
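In practice the benchmarking can be as simple as wall-clock timing the real application and dividing; for example (paths and host are illustrative):

    # Time the actual application-level transfer of a 1 GB file
    time globus-url-copy file:///data/1GB.dat gsiftp://remote.example.org/scratch/1GB.dat
    # throughput (Gbit/s) = 8 / elapsed seconds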

Book Early!!! Provisionally: Tuesday 1st – Wednesday 2nd March 2005, NeSC