1 Networking for LHC and HEP
L. E. Price, Argonne National Laboratory
DOE/NSF Review of LHC Computing, BNL, November 15, 2000
Thanks to much input from Harvey Newman

2 It’s the Network, Stupid!
For 20 years, high energy physicists have relied on state-of-the-art computer networking to enable ever larger international collaborations
LHC collaborations would never have been attempted if they could not expect excellent international communications to make them possible
The network is needed for all aspects of collaborative work
–Propose, design, collaborate, confer, inform
–Create, move, access data
–Analyze, share results, write papers
HEP has usually led the demand for research networks
In special cases, we must support our own connections to high-rate locations, like CERN for LHC
–Because our requirements overwhelm those of other researchers
–Because regional networks do not give top priority to interregional connections

3 Networking Requirements
Beyond the simple requirement of adequate bandwidth, physicists in all of DoE/DHEP’s (and NSF/EPP’s) major programs require:
–An integrated set of local, regional, national and international networks able to interoperate seamlessly, without bottlenecks
–Network and user software that will work together to provide high throughput and manage bandwidth effectively
–A suite of videoconference and high-level tools for remote collaboration that will make data analysis from the US (and from other remote sites) effective
The effectiveness of U.S. participation in the LHC experimental program is particularly dependent on the speed and reliability of national and international networks

4 Networking must Support a Distributed, Hierarchical Data Access System
[Diagram: data flows from the experiment (bunch crossings every 25 nsec, 100 triggers per second, each event ~1 MByte) through the online system at ~100 MBytes/sec (raw detector output ~PBytes/sec) into the CERN Tier 0+1 offline farm and computer centre (>20 TIPS); ~2.4 Gbits/sec links feed Tier 1 centres (FNAL, France, Italy, UK); ~622 Mbits/sec links feed Tier 2 centres (GriPhyN focus: university-based Tier 2 centres); Tier 3 institutes (~0.25 TIPS, with a physics data cache and ~10 physicists each working on one or more analysis "channels") and Tier 4 workstations complete the hierarchy.]
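For orientation, the raw rate shown leaving the online system follows directly from the trigger numbers on the slide; only the conversion to bits per second is added here:

    \[
      100\ \tfrac{\text{events}}{\text{s}} \times 1\ \tfrac{\text{MByte}}{\text{event}}
        = 100\ \tfrac{\text{MByte}}{\text{s}} \approx 0.8\ \text{Gbit/s}
    \]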

5 Bandwidth Requirements Projection (Mbps): ICFA-NTF
… bits / 10^7 sec = 300 Mbps (x 8 for headroom, simulations, repeats, …)
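A worked form of this projection, assuming the denominator is the canonical 10^7-second running year and therefore (from the arithmetic) roughly 3 x 10^15 bits of data per year to move; the factor of 8 is the slide's own headroom for simulations and repeats:

    \[
      \frac{3 \times 10^{15}\ \text{bits}}{10^{7}\ \text{s}} = 300\ \text{Mbps},
      \qquad 300\ \text{Mbps} \times 8 \approx 2.4\ \text{Gbps}
    \]

The headroom figure is consistent with the ~2.4 Gbit/s CERN-to-Tier-1 links shown on the previous slide.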

6 Shared Internet may not be good enough!
Sites in the UK track one another, so they can be represented by a single site
2 beacons in the UK
Indicates a common source of congestion
Increased capacity by 155 times in 5 years
Direct peering between JANet and ESnet
Transatlantic link will probably be the thinnest connection because of cost
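The loss and delay patterns described here come from continuous end-to-end probing of beacon sites. As a rough sketch of what such a probe does (not the actual monitoring tooling used by HEP), the following measures average round-trip time and packet loss to a list of hosts; the beacon hostnames are placeholders and the parsing assumes a Linux iputils-style ping summary.

    #!/usr/bin/env python3
    """Beacon probe sketch: average RTT and packet loss to remote sites."""
    import re
    import subprocess

    BEACONS = ["beacon1.example.ac.uk", "beacon2.example.ac.uk"]  # hypothetical sites

    def probe(host: str, count: int = 10, timeout_s: int = 60):
        """Ping `host` `count` times; return (loss %, avg RTT ms) or None."""
        try:
            out = subprocess.run(["ping", "-c", str(count), host],
                                 capture_output=True, text=True,
                                 timeout=timeout_s).stdout
        except (subprocess.TimeoutExpired, FileNotFoundError):
            return None
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
        if not loss:
            return None
        return float(loss.group(1)), float(rtt.group(1)) if rtt else None

    if __name__ == "__main__":
        for host in BEACONS:
            result = probe(host)
            if result is None:
                print(f"{host:35s}  unreachable or ping unavailable")
            else:
                loss_pct, avg_rtt = result
                print(f"{host:35s}  loss {loss_pct:5.1f}%  avg RTT {avg_rtt} ms")

Tracking a few such probes over time is what reveals the shared-congestion pattern above: all UK sites degrading together points at a common bottleneck.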

7 US-CERN Link Working Group
DOE and NSF have requested a committee report on the need for HEP-supported transatlantic networking for LHC and…
–BaBar, CDF, D0, ZEUS, BTeV, etc.
Co-chairs: Harvey Newman (CMS), Larry Price (ATLAS)
Other experiments are providing names of members for the committee
Hope to coordinate meeting with ICFA-SCIC (Standing Committee on Interregional Connectivity; see below)
Report early in 2001.

8 Committee history: ICFA NTF
Recommendations concerning inter-continental links:
–ICFA should encourage the provision of some considerable extra bandwidth, especially across the Atlantic
–ICFA participants should make concrete proposals (such as a recommendation to increase bandwidth across the Atlantic, approach to QoS, co-operation with other disciplines and agencies, etc.)
–The bandwidth to Japan needs to be upgraded
–Integrated end-to-end connectivity is the primary requirement, to be emphasized to continental ISPs and to academic and research networks

9 ICFA Standing Committee on Interregional Connectivity (SCIC)
ICFA commissioned the SCIC in Summer 1998 as a standing committee to deal with the issues and problems of wide area networking for the ICFA community
CHARGE
–Make recommendations to ICFA concerning the connectivity between America, Asia and Europe
–Create subcommittees when necessary to meet the charge (Monitoring, Requirements, Technology Tracking, Remote Regions)
–The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors
MEMBERSHIP
–M. Kasemann (FNAL), Chair
–H. Newman (CIT) for US Universities and APS/DPF
–Representatives of HEP Labs: SLAC, CERN, DESY, KEK
–Regional Representatives: from ECFA, ACFA, Canada, the Russian Federation, and South America

10 Academic & Research Networking in the US
–Focus on research & advanced applications
  hence, separate connections to the commodity Internet and the research backbone (GigaPoP)
  a lot of resistance to connecting K-12 schools
  Internet2 infrastructure: vBNS, Abilene, STAR TAP
  Internet2 projects: Digital Video Initiative (DVI), Digital Storage Infrastructure (DSI), Qbone, Surveyor
–Mission-oriented networks
  ESnet: support of the Office of Science, especially the Laboratories
  NASA Science Internet

14 Abilene int'l peering (Internet 2)
–STAR TAP: APAN/TransPAC, Canet, IUCC, NORDUnet, RENATER, REUNA, SURFnet, SingAREN, SINET, TAnet2 (CERnet, HARnet); OC12
–NYC: DANTE*, JANET, NORDUnet, SURFnet (CAnet)
–Seattle: CAnet, (AARnet)
–Sunnyvale: (SINET?)
–L.A.: SingAREN, (SINET?)
–Miami: (CUDI?, REUNA, RNP2, RETINA); OC3-12
–El Paso, TX: (CUDI?)
–San Diego: (CUDI?)

15 Europe seen from U.S.: Performance
[Map of monitored European sites: measured round-trip times range from ~200 ms to ~650 ms and packet loss from 1% to 10%; the legend distinguishes monitor sites, beacon sites (~10% of sites), HENP countries, non-HENP countries, and countries not monitored.]

16 History of the “LEP3NET” Network (1)
Since the early days of LEP, DOE has supported a dedicated network connection to CERN, managed by Caltech
Initially dedicated to the L3 experiment; more recently the line has supported US involvement in LEP and LHC
–…: Use of int'l public X.25 networks (… kbps) to support U.S. participation in DESY and CERN programs
–…: Leased analog (16.8 kbits/s) CERN-MIT X.25 switched line, with onward connections to Caltech, Michigan, Princeton, Harvard, Northeastern, …
–…: Leased digital (64 kbits/s) CERN-MIT switched line supporting L3 and also providing the US-Europe DECNET service
–…: Leased digital (… kbits/s) CERN-MIT line split to provide IP (for L3) and DECNET (for general-purpose Europe-US HEP traffic)
–12/95 - 9/96: Major partner in leased digital (1.544 Mbits/s) CERN-US line for all CERN-US HEP traffic. Development of CERN-US packet videoconferencing and packet/codec hybrid systems.

17 History of the “LEP3NET” Network (2)
October … - August 1997
–Upgraded leased digital CERN-US line: … Mbps
–Set-up of monitoring tools and traffic control
–Started deployment of VRVS, a Web-based videoconferencing system
September … - April 1999
–Upgraded leased CERN-US line to 2 x … Mbps; addition of a backup and “overflow” leased line at … Mbps (total 6 Mbps) to avoid saturation in Fall 1998
–Production deployment of VRVS software in the US and Europe (to 1000 hosts by 4/99; now 2800)
–Set-up of CERN-US consortium rack at Perryman to peer with ESnet and other international nets
–Test of QoS features using new Cisco software and hardware

18 History of the “LEP3NET” Network (3)
October … - September 1999
–Market survey and selection of Cable & Wireless as ISP
–Began collaboration in Internet2 applications and network developments
–Move to the C&W Chicago PoP, to connect to STAR TAP
–From April 1999, set-up of a 12 Mbps ATM VP/VBRnrt circuit between CERN and the C&W PoP
–9/99: Transatlantic upgrade to 20 Mbps on September 1st, coincident with the CERN/IN2P3 link upgrade
–7/99: Began organized file transfer service to “mirror” BaBar DST data from SLAC to CCIN2P3/Lyon
With the close of LEP and the rise of the more demanding LHC and other programs, we are renaming the network “LHCNET”

19 History of the “LEP3NET” Network (4)
October … - September 2000
–CERN (represented by our consortium) became a member of UCAID (Internet2)
–Market survey and selection of KPN/Qwest as ISP
–Move from the C&W Chicago PoP to the KPN/Qwest Chicago PoP and connection to STAR TAP at the end of March
–From April 2000, set-up of a 45 Mbps (DS3 SDH) circuit between CERN and the KPN/Qwest PoP, plus 21 Mbps for general-purpose Internet via QwestIP
–October 2000: Transatlantic upgrade to 155 Mbps (STM-1) with a move to the KPN/Qwest PoP in New York, with direct peering with ESnet, Abilene and CANARIE (Canada)
–Possibility of a second STM-1 (two unprotected circuits) in 2001; the second one for R&D

20 Configuration at Chicago with KPN/Qwest

21 Daily, Weekly, Monthly and Yearly Statistics on the 45 Mbps line

22 Bandwidth Requirements for the Transatlantic Link

23 Estimated Funding for Transatlantic Link

24 CERN Unit Costs are Going Down
Recent price history on the CERN-US link:
–still paying 400 KCHF/Mbps/year 16 months ago (Swisscom/MCI),
–then 88 KCHF/Mbps/year (C&W)
–now 36 KCHF/Mbps/year (KPN-Qwest)
–expect to pay 8 KCHF/Mbps/year, if the dual unprotected STM-1 solution is selected.

Gbps scenarios

26 We are Preparing Now to Use this Large Bandwidth When it Arrives
Nov. 9, 2000 at SC2000:
–Peak transfer rate of 990 Mbps measured in a test from Dallas to SLAC via NTON (National Transparent Optical Network)
–Best results achieved with 128 KB window size and 25 parallel streams
–Demonstration by SLAC and FNAL of work for PPDG
Caltech and SLAC working toward a 2 Gbps transfer rate over NTON in 2001
Need for differentiated services (QoS)
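A sketch of the technique behind these numbers: many parallel TCP streams, each with an explicit window (socket buffer) size, so that the aggregate data in flight (25 x 128 KB, about 3 MB) can fill a long, fat path. The server address and its behaviour of simply streaming bytes on connect are assumptions made for illustration; this is not the code used in the SC2000 demonstration.

    #!/usr/bin/env python3
    """Parallel-stream bulk-transfer client sketch."""
    import socket
    import threading

    SERVER = ("data-server.example.org", 5001)  # hypothetical bulk-data source
    WINDOW_BYTES = 128 * 1024                   # 128 KB window per stream
    N_STREAMS = 25                              # parallel TCP streams

    def pull_stream(stream_id: int, totals: list) -> None:
        """Open one TCP stream and count the bytes it delivers."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            # Request the buffer before connecting so it shapes this stream's window.
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW_BYTES)
            sock.connect(SERVER)
            received = 0
            while True:
                chunk = sock.recv(64 * 1024)
                if not chunk:
                    break
                received += len(chunk)
            totals[stream_id] = received

    if __name__ == "__main__":
        totals = [0] * N_STREAMS
        threads = [threading.Thread(target=pull_stream, args=(i, totals))
                   for i in range(N_STREAMS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"received {sum(totals) / 1e6:.1f} MB over {N_STREAMS} streams")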

27 Network must also support advanced conferencing services: e.g., VRVS
Example: 9 participants: CERN (2), Caltech, FNAL (2), Bologna (IT), Roma (IT), Milan (IT), Rutherford (UK)

28 Continued Development of VRVS
[Architecture diagram with status key: done / partially done / work in progress / continuously in development. Components: VRVS Web User Interface; Collaborative Applications (Mbone tools: vic, vat/rat, ..; QuickTime V4.0; H.323; MPEG; others??); VRVS Reflectors (Unicast/Multicast); Real Time Protocol (RTP/RTCP); Network Layer (TCP/IP); QoS.]
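The reflector layer in this diagram is what lets participants without multicast connectivity join a virtual room: each reflector copies the media packets it receives to the other participants. The following is only a generic illustration of that unicast-reflection idea over UDP, not VRVS code; the port number and the scheme of learning participants from their first packet are invented for the example.

    #!/usr/bin/env python3
    """Generic unicast reflector sketch (illustration only)."""
    import socket

    LISTEN_ADDR = ("0.0.0.0", 46000)   # hypothetical reflector port
    participants = set()               # (ip, port) pairs learned from traffic

    def run_reflector() -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(LISTEN_ADDR)
        while True:
            packet, sender = sock.recvfrom(2048)   # e.g. one RTP packet
            participants.add(sender)               # first packet registers a participant
            for peer in participants:
                if peer != sender:                 # copy to everyone else
                    sock.sendto(packet, peer)

    if __name__ == "__main__":
        run_reflector()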

29 Adequate networking for LHC turn-on is only the start! A Short List of Coming Revolutions
Network Technologies
–Wireless Broadband (from ca. 2003)
–10 Gigabit Ethernet (from 2002: See …)
–10GbE/DWDM-Wavelength (OC-192) integration: OXC
Internet Information Software Technologies
–Global Information “Broadcast” Architecture
  E.g. the Multipoint Information Distribution Protocol (MIDP; …)
–Programmable Coordinated Agent Architectures
  E.g. Mobile Agent Reactive Spaces (MARS) by Cabri et al., Univ. of Modena
The “Data Grid” - Human Interface
–Interactive monitoring and control of Grid resources
  By authorized groups and individuals
  By autonomous agents

30 WAN vs LAN bandwidth
The common belief that WANs will always be well behind LANs (i.e. at 1-10% of their bandwidth) may well be plain wrong…
–WAN technology is well ahead of LAN technology; the state of the art is 10 Gbps (WAN) against 1 Gbps (LAN)
–Price is less of an issue as prices are falling very rapidly
–Some people are even advocating that one should now start thinking of new applications as if bandwidth were free, which sounds a bit premature to me, at least in Europe, even though there are large amounts of unused capacity floating around!
CERN

31 Conclusions
A seamless high-performance network will be crucial to the success of LHC, and of other international HEP experiments
–Data transfer and remote access
–Rich menu of collaboration and conferencing functions
We are only now realizing that networking must be planned as a large-scale priority task of major collaborations; it will not automatically be there
–BaBar is scrambling to provide data transport to IN2P3 and INFN
The advance of technology means that the networking we need will not be as expensive as once feared
–But a fortiori we should not provide less than we need
The US-CERN Link Working Group will have an interesting and vital task
–Evaluate future requirements and opportunities
–Recommend the optimum cost/performance tradeoff
–Pave the way for effective and powerful data analysis