LambdaStation / MonALISA. DoE PI Meeting, September 30, 2005. Sylvain Ravot.


LambdaStation / MonALISA. DoE PI Meeting, September 30, 2005. Sylvain Ravot

Agenda
- Motivation
- Building a wide-area testbed infrastructure
- MonALISA
- Data transport performance
- Next Generation LHCNet

Large Hadron Collider, CERN, Geneva: 2007 start
- 27 km tunnel in Switzerland & France
- pp collisions at sqrt(s) = 14 TeV, L = 10^34 cm^-2 s^-1
- Experiments: ATLAS and CMS (pp, general purpose; HI), LHCb (B-physics), ALICE (HI), TOTEM
- Physics: Higgs, SUSY, extra dimensions, CP violation, QG plasma, ... the unexpected
- Physicists from 250+ institutes in 60+ countries

LHC Data Grid Hierarchy: Developed at Caltech
[Tier diagram: the online system (physics data cache, ~PByte/s) feeds the CERN Tier 0+1 center (PBs of disk, tape robot); Tier 1 centers (FNAL, IN2P3, BNL, RAL) connect at ~10 Gbps; Tier 2 centers at 1 to 10 Gbps; Tier 3 institutes and Tier 4 workstations below. Tens of petabytes of data, an exabyte ~5-7 years later. CERN/outside resource ratio ~1:2; Tier 0 / (sum of Tier 1) / (sum of Tier 2) ~1:1:1.]
Emerging vision: a richly structured, global dynamic system

Wide-Area Testbed Infrastructure
- Two sites: Fermilab and Caltech
  - 10 Gbps connectivity via UltraScience Net (USN)
  - Interconnecting hubs at StarLight/Chicago and Sunnyvale
- At Fermilab: 8 servers available for software development and bandwidth tests, including 4 machines with 10 GbE NICs (Intel PRO/10GbE and Neterion)
- At Caltech: 2 machines equipped with 10 GbE Neterion cards
  - Dual Opteron, 4 GB RAM
  - Neterion 10 GbE card
  - Three 3ware 8500 controllers with SATA drives
- Each site supports policy-based routing to USN, jumbo frames and DSCP tagging (see the sketch below)
- The Californian sites support MPLS
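As a concrete illustration of the DSCP tagging mentioned above, the minimal Python sketch below marks a test flow with a DSCP codepoint so that policy-based routing at a site border could steer it toward USN. The codepoint, host name and port are assumptions for illustration, not values taken from the testbed configuration.

```python
# Hypothetical sketch: marking test traffic with a DSCP codepoint so that
# policy-based routing can steer it onto the dedicated path.
import socket

DSCP_EF = 46                 # assumed codepoint for the high-priority test flows
TOS = DSCP_EF << 2           # DSCP occupies the upper six bits of the IP TOS byte

def open_marked_connection(host: str, port: int) -> socket.socket:
    """Open a TCP connection whose packets carry the chosen DSCP marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
    s.connect((host, port))
    return s

if __name__ == "__main__":
    conn = open_marked_connection("testbed-host.example.org", 5001)  # hypothetical endpoint
    conn.sendall(b"x" * 9000)   # a jumbo-frame-sized burst, assuming a 9000-byte MTU path
    conn.close()
```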

FNAL-Caltech Testbed
- 2 x 10 GE waves from the Caltech campus to downtown LA
- 10 GE wave from LA to Sunnyvale (SNV), provided by CENIC
- Cisco 6509 at Caltech, LA and SNV
- Extension of the testbed to CERN (Fall 2005)

Caltech 10 GbE OXC
- L1, L2 and L3 services
- Hybrid packet- and circuit-switched PoP
- Photonic switch
- Control plane at L3 (MonALISA)

Test Setup for Controlling Optical Switches
- CALIENT switch (LA) and Glimmerglass switch (GE), with 10G links between switches and 1G links to hosts; 3 simulated links as L2 VLANs
- 3 partitions on each switch, controlled by a MonALISA service
- Switches are monitored and controlled using TL1 (see the sketch below)
- Interoperability between the two systems
- End-user access to the service
- GMPLS
- Can easily be adapted to GFP-based products
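A rough sketch of what TL1-based monitoring and control of such switches can look like from a service like MonALISA, assuming a plain TCP TL1 session. The host, port and the cross-connect command are illustrative placeholders; the actual CALIENT and Glimmerglass TL1 dialects differ in verbs and parameters.

```python
# Minimal, hypothetical TL1 control session for a photonic switch.
import socket

def tl1_send(sock: socket.socket, command: str) -> str:
    """Send one TL1 command (terminated by ';') and read the response block."""
    sock.sendall(command.encode("ascii"))
    response = b""
    while not response.rstrip().endswith((b";", b">")):  # TL1 responses end with ';' (or '>' for continuation)
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk
    return response.decode("ascii", errors="replace")

if __name__ == "__main__":
    s = socket.create_connection(("photonic-switch.example.org", 3083))  # hypothetical host and TL1 port
    print(tl1_send(s, "ACT-USER::admin:100::secret;"))                   # generic TL1 login form
    print(tl1_send(s, "RTRV-HDR:::101;"))                                # standard keep-alive / header retrieval
    print(tl1_send(s, "ENT-CRS::PORT-1-1,PORT-1-8:102;"))                # hypothetical cross-connect command
    s.close()
```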

Data Transport Performance
- Sophisticated systems exist to provision bandwidth... but TCP performance issues remain (AIMD, MTU, ...)
- Plot: five parallel TCP streams on the high-speed path via USNet; data in orange and light blue, ACKs in green and purple

Single TCP Stream between Caltech and CERN
- Available (PCI-X) bandwidth = 8.5 Gbps (see the sketch below)
- RTT = 250 ms (16,000 km)
- 9000-byte MTU
- 15 minutes to increase throughput from 3 to 6 Gbps
- Sending station: Tyan S2882 motherboard, 2x Opteron 2.4 GHz, 2 GB DDR
- Receiving station: CERN OpenLab, HP rx4640, 4x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
- Network adapter: Neterion 10 GbE
- Plot annotations: burst of packet losses, single packet loss, CPU load = 100%
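To put the 8.5 Gbps / 250 ms figures in perspective, the bandwidth-delay product of this path is what the TCP window (and socket buffers) must cover before the pipe can be filled. A short illustrative calculation, not taken from the original slides:

```python
# Illustrative bandwidth-delay-product calculation for the Caltech-CERN path
# quoted above (8.5 Gbps available bandwidth, 250 ms RTT, 9000-byte MTU).

bandwidth_bps = 8.5e9          # available PCI-X-limited bandwidth, bits/s
rtt_s = 0.250                  # round-trip time, seconds
mtu_bytes = 9000               # jumbo frames

bdp_bytes = bandwidth_bps * rtt_s / 8
packets_in_flight = bdp_bytes / mtu_bytes

print(f"Bandwidth-delay product: {bdp_bytes / 2**20:.0f} MiB")
print(f"Packets in flight at full rate: {packets_in_flight:.0f}")
# -> roughly 253 MiB of data in flight, i.e. about 29,500 jumbo frames; the TCP
#    window (and socket buffers) must cover this to keep the link full.
```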

Responsiveness

  Path                  Bandwidth   RTT (ms)   MTU (Byte)   Time to recover
  LAN                   10 Gb/s                             ... ms
  Geneva - Chicago      10 Gb/s                             ... hr 32 min
  Geneva - Los Angeles  1 Gb/s                              ... min
  Geneva - Los Angeles  10 Gb/s                             ... hr 51 min
  Geneva - Los Angeles  10 Gb/s                             ... min
  Geneva - Los Angeles  10 Gb/s                ...k (TSO)   5 min
  Geneva - Tokyo        1 Gb/s                              ... hr 04 min

- A large MTU accelerates the growth of the congestion window
- The time to recover from a packet loss decreases with a large MTU
- A larger MTU reduces per-frame overhead (saves CPU cycles, reduces the number of packets)
- Time to recover from a single packet loss: T = C · RTT² / (2 · MSS), where C is the capacity of the link (worked example below)
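Applying the recovery-time formula above to a 10 Gb/s path with the 250 ms RTT quoted on the previous slide shows why the MTU matters so much. The MSS values below are assumed typical values for 1500-byte and 9000-byte MTUs:

```python
# Illustrative use of T = C * RTT^2 / (2 * MSS) with assumed MSS values.

def recovery_time(capacity_bps: float, rtt_s: float, mss_bytes: int) -> float:
    """Time (seconds) for standard TCP to regrow its window after one loss."""
    return capacity_bps * rtt_s ** 2 / (2 * mss_bytes * 8)

for label, mss in [("standard 1500-byte MTU", 1460), ("9000-byte jumbo MTU", 8960)]:
    t = recovery_time(10e9, 0.250, mss)
    print(f"{label}: {t / 60:.0f} minutes to recover from a single loss")
# -> roughly 7.5 hours with a standard MTU versus about 73 minutes with jumbo
#    frames, which is why large MTUs (and TSO) matter so much at these RTTs.
```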

TCP Variants Performance
- Tests between CERN and Caltech
- Capacity = OC-192 (10 Gbps); 264 ms round-trip latency; 1 flow
- Sending station: Tyan S2882 motherboard, 2x Opteron 2.4 GHz, 2 GB DDR
- Receiving station (CERN OpenLab): HP rx4640, 4x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
- Network adapter: Neterion 10 GE NIC
- Results (single flow): Linux TCP 3.0 Gbps, Linux Westwood+ 4.1 Gbps, Linux BIC TCP 5.0 Gbps, FAST TCP 7.3 Gbps (sketch below)
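On later stock Linux kernels, variants such as Reno, Westwood+ and BIC can be selected per socket through the pluggable congestion-control interface (FAST TCP, by contrast, was distributed as a separate Caltech kernel patch). A minimal sketch, with the receiver address as a placeholder:

```python
# Sketch: selecting a TCP congestion-control module per socket on Linux via
# TCP_CONGESTION. Which algorithms are available depends on the kernel build.
import socket

def connect_with_cc(host: str, port: int, algorithm: str) -> socket.socket:
    """Open a TCP connection forced to use the named congestion-control module."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algorithm.encode())
    s.connect((host, port))
    return s

if __name__ == "__main__":
    for algo in ("reno", "westwood", "bic"):       # modules comparable to the variants tested
        try:
            conn = connect_with_cc("receiver.example.org", 5001, algo)   # hypothetical receiver
            raw = conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
            name = raw.split(b"\x00")[0].decode()
            print(f"{algo}: connected, kernel reports '{name}'")
            conn.close()
        except OSError as err:
            print(f"{algo}: not usable here ({err})")
```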

LHCNet Design
- "Multiplexing" of pre-production traffic, LCG traffic and R&E traffic at CERN (Geneva), StarLight (Chicago) and MANLAN (New York); connections to UltraLight, LambdaStation, CMS and ATLAS sites, ESnet, Abilene, the commodity internet and others
- October 2005: L2 switching with VLANs, one tag per type of traffic
- October 2006: circuit switching (GFP-based products)

New Technology Candidates: Opportunities and Issues
- New standards for SONET infrastructures
  - Alternative to the expensive Packet-over-SONET (POS) technology currently used
  - May significantly change the way we use SONET infrastructures
- 10 GE WAN-PHY standard
  - Carries Ethernet frames across OC-192 SONET networks
  - Ethernet as an inexpensive technology for linking LANs and WANs
  - Supported by only a few vendors
- GFP/LCAS/VCAT standards
  - Point-to-point circuit-based services
  - Transport capacity adjusted according to the traffic pattern
  - "Bandwidth on demand" becomes possible on SONET networks
- New standards and new hardware
  - Intensive evaluation and validation period
  - WAN-PHY tests in July 2005

(Circuit Oriented Services)
- Endpoints: CERN (Geneva), StarLight (Chicago), MANLAN (New York)
- Control planes:
  - HOPI and USNet developments
  - MonALISA
  - GMPLS

LHCNet Connection to Proposed ESnet Lambda Infrastructure Based on National Lambda Rail: FY09
[Map: NLR wavegear sites and NLR regeneration/OADM sites across the US (Seattle, Sunnyvale, LA, San Diego, Denver, Boise, Albuquerque, El Paso - Las Cruces, Phoenix, Dallas, San Antonio, Houston, Baton Rouge, Tulsa, KC, Chicago, Cleveland, Pittsburgh, New York, Washington DC, Raleigh, Atlanta, Jacksonville, Pensacola), ESnet carried via NLR (10 Gbps waves), and LHCNet (10 Gbps waves) to CERN (Geneva).]
- LHCNet: to ~80 Gbps by 2009
- Routing + dynamic managed circuit provisioning

Summary and Conclusion
- The WAN infrastructure is operational
- Continue research on flow-based switching and its effect on applications; performance, disk-to-disk and storage-to-storage tests
- Evaluation of new data transport protocols
- US LHCNet: circuit-oriented services by 2006/2007 (GFP-based products)