Network to and at CERN: Getting ready for LHC networking. Jean-Michel Jouanigot and Paolo Moroni, CERN/IT/CS. T0/T1 network meeting, 21 January 2005.

Summary
- Current situation
- T0-T1 planning: LAN
- T0-T1 planning: WAN

Current situation
- General purpose network
- Technical network
- Experimental areas (pre-production)
- External network (firewall / HTAR)

General-purpose and technical networks [diagram]: computer centre and remote major starpoints, server farms, firewall, CIXP / Internet, technical network.

Technical network [diagram]: star points SR1-SR8, the MCR, TCR, PCR and CCR control rooms, and a connection to the general-purpose network.

External network [diagram]: CIXP, GÉANT + SWITCH, the general Internet, the Chicago PoP, the general-purpose network, and tests + LHC pre-production.

Firewall: this slide is intentionally left blank.

T0-T1 planning (LAN)
- New 2.4 Tb/s backbone to interconnect:
  - the LHC experiments (CERN Tier0)
  - the general purpose network
  - the CERN Tier1
  - the T0-T1 WAN (regional Tier1s)
- Based on 10GE technology
- Layer 3 interconnections
- No central switch(es)
- Redundancy via multiple 10GE paths (OSPF); see the flow-hashing sketch below
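The redundancy and load sharing mentioned above rely on OSPF equal-cost multipath: the routing protocol installs several equal-cost next hops, and each flow is hashed onto one of them, so a single flow's packets stay in order while the aggregate traffic spreads over all the 10GE paths. The sketch below only illustrates that hashing idea in Python; the path names and flow tuples are invented, and real routers implement this in hardware with their own hash functions.

```python
import hashlib

# Hypothetical equal-cost 10GE paths that OSPF has installed between two backbone routers.
PATHS = ["10GE-path-1", "10GE-path-2", "10GE-path-3", "10GE-path-4"]

def pick_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Map a flow onto one of the equal-cost paths (per-flow load sharing)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    index = int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % len(PATHS)
    return PATHS[index]

# Two different transfers may take different paths, but a given flow always
# hashes to the same path, so its packets are never reordered.
print(pick_path("10.1.2.3", "10.4.5.6", 40000, 2811))
print(pick_path("10.1.2.7", "10.4.5.6", 40001, 2811))
```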

More about T0-T1 LAN
- Random paths through the backbone for load balancing (OSPF)
- IP addressing:
  - depends on the LHC WAN implementation
  - RFC1918 addresses are likely for many end systems (see the address check below)
  - a data mover facility can help a lot (already successfully implemented for the BABAR experiment at IN2P3)
- Default route? Maybe not necessary
- Call for tender for the equipment is being issued
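Since many farm nodes may sit on RFC1918 addresses, it is useful to check quickly whether a given host could be reached from a Tier1 at all without a data mover or NAT in front of it. A minimal check with Python's standard ipaddress module; the sample addresses are made up.

```python
import ipaddress

# The three private ranges defined by RFC1918.
RFC1918_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(address: str) -> bool:
    """True if the IPv4 address falls inside one of the RFC1918 private ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918_NETS)

# Hypothetical examples: private farm nodes vs. an address outside RFC1918 space.
for host in ["10.44.7.12", "172.20.3.9", "198.51.100.23"]:
    kind = "RFC1918 (needs a data mover or NAT)" if is_rfc1918(host) else "outside RFC1918 space"
    print(f"{host}: {kind}")
```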

T0-T1 network at CERN (LAN) [diagram]: a multiple-10GE backbone interconnects the 4 LHC experimental areas (raw LHC data), ~6000 CPU servers and ~2000 tape and disk servers (aggregated at 10GE->88*GE and 10GE->32*GE), the CERN Tier1, the GPN, and the external network / T0-T1 WAN (10GE->n*10GE).
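To get a feel for the aggregation ratios in the diagram, assume (an interpretation on my part, the slide does not spell it out) that "10GE->88*GE" means one 10 Gb/s uplink shared by 88 Gigabit Ethernet server ports, and similarly for the tape and disk servers; the worst-case oversubscription then works out as follows.

```python
# Hypothetical reading of the aggregation figures on the slide.
uplink_gbps = 10   # one 10GE uplink per access switch
cpu_ports = 88     # "10GE->88*GE": 88 GbE CPU-server ports behind one uplink
disk_ports = 32    # "10GE->32*GE": 32 GbE tape/disk-server ports behind one uplink

print(f"CPU servers:  {cpu_ports} Gb/s offered vs {uplink_gbps} Gb/s uplink "
      f"-> {cpu_ports / uplink_gbps:.1f}:1 oversubscription")
print(f"Disk servers: {disk_ports} Gb/s offered vs {uplink_gbps} Gb/s uplink "
      f"-> {disk_ports / uplink_gbps:.1f}:1 oversubscription")
```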

Tier0 network (LHC experimental areas) [diagram]: each LHC experiment's DAQ and experiment control network connect to the T0-T1 LAN over redundant 10GE for the high-speed data path, and to the GPN over a low-speed management path; the T0-T1 LAN also reaches the CERN Tier1 and the T0-T1 WAN.

T0-T1 WAN: progress
A lot of progress has been made:
- 10 Gb/s equipment is commonly available (although not yet cheap): STM-64 (10GE WAN PHY), 10GE LAN
- 10 Gb/s capacity (SDH, wavelengths, WDM over dark fibre) is affordable
- long-distance, high-speed TCP is feasible, although it requires special Linux tuning (see the buffer-sizing example below)
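The "special Linux tuning" for long-distance TCP is mostly about making the socket buffers at least as large as the bandwidth-delay product of the path, since that is how much data must be in flight to fill the pipe. A small worked example; the 10 Gb/s rate and the 150 ms round-trip time (roughly transatlantic) are illustrative assumptions.

```python
# Bandwidth-delay product: the amount of data in flight on the path, which the
# TCP send and receive buffers must be able to hold to keep the link full.
bandwidth_bps = 10e9   # assumed 10 Gb/s path
rtt_s = 0.150          # assumed 150 ms round-trip time (e.g. CERN to a transatlantic Tier1)

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes / 2**20:.0f} MiB")
# Roughly 180 MiB, far beyond the Linux defaults of the time, hence the need to
# raise net.ipv4.tcp_rmem / tcp_wmem or to use several parallel streams.
```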

T0-T1 WAN: progress (continued)
More progress being made:
- GN2 is coming in Europe with new services and research activities
- Several interesting initiatives in North America and in Europe (dark-fibre-based networks, etc.)
- Several interesting monitoring tools exist or are being developed
- Pre-production simulation (robust data challenge): a useful ongoing experience
- The firewall with HTAR works for non-LHC traffic and for some pre-production

T0-T1 WAN: issues
Still several open questions:
- how will Tier1s connect to Tier0 (directly, one upstream, layered upstreams, ...)?
- backup routing?
- non-homogeneous Tier1 requirements?
- any Tier1-Tier1 traffic via Tier0?
- IP addressing: routable or RFC1918?
- does every Tier1 have enough routable addresses?
- and ...

T0-T1 WAN: more issues
... what about:
- security?
- Tier2s?
- compatibility between GRID middleware and network design?
- special tuning for WAN data transfers?
- compatibility between high-speed flows and some network devices (Juniper M160)?
- management, monitoring, troubleshooting?
- anything else?

Recommendations (I)
- Allow for diverse regional requirements, but standardise NOW on the T0-T1 physical interface:
  - 10GE LAN PHY (LR/SR?)
  - STM-64/OC192
  - 10GE WAN PHY (?)
- Other interfaces also possible in the pre-production phase (GbE, multiple GbE, STM-16)
- Take advantage of useful experience (robust data challenge)
- Define clearly the operational responsibilities across multiple administrative domains

Recommendations (II)
- Select equipment which is expected to work reliably for some years
- A data mover facility (spooling system) helps with several issues (see the sketch below):
  - IP addressing needs
  - security
  - WAN data transfer optimisation
- Select proven and stable technology: smooth network operations and easy troubleshooting are essential
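A data mover in this sense is essentially a spool area on a few gateway hosts: internal servers (possibly on RFC1918 addresses) drop finished files into the spool, and only the gateways, which have routable addresses and tuned TCP stacks, talk to the remote site. The sketch below is a deliberately simplified illustration of that idea; the paths and the scp transfer are placeholders, not CERN's actual tooling.

```python
import shutil
import subprocess
from pathlib import Path

SPOOL = Path("/spool/outgoing")   # hypothetical spool area on the gateway host

def stage(local_file: Path) -> Path:
    """Internal servers copy finished files into the spool; no WAN access needed."""
    SPOOL.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(local_file, SPOOL))

def drain(remote_host: str) -> None:
    """The gateway host ships everything in the spool to the remote Tier1 and cleans up."""
    for f in sorted(SPOOL.iterdir()):
        # Placeholder transfer command; in practice a GridFTP-like tool would be used here.
        subprocess.run(["scp", str(f), f"{remote_host}:/incoming/"], check=True)
        f.unlink()
```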

Recommendations (III)
- Security is essential
- Monitoring is essential
- Allocate suitable (routable) subnets, dedicated to LHC production purposes
- If not enough routable IP addresses, ask RIPE-NCC for more, via the appropriate upstream LIR, and do so NOW (or ask ARIN or APNIC, according to the region)

Recommendations (IV)
- Never mind if the network is just a boring production tool: being at the bleeding edge is not essential in this situation
- LHC physics is the research target, not LHC networking

LHC WAN: a possible design
Assumptions: if ...
- Tier1s connect at layer 3
- backup routing is a requirement and it is acceptable via research IP networks (no more than two or three Tier1s down at the same time)
- Tier1-Tier1 traffic is allowed via Tier0 (although this would not be Tier0's preference...)
- Tier1 and Tier0 addresses are publicly routable and every Tier1 has allocated a SMALL number of subnets for inter-Tier0/1 traffic
- BGP routing using the "natural" ASN and routable prefixes (see the prefix-filter sketch below)
- no default route (or no default route towards T0): is it possible?
- ...
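If BGP is used between Tier0 and the Tier1s, each side would normally only accept the small set of LHC production prefixes it expects from the peer's ASN. The sketch below expresses that prefix-list idea in Python with the standard ipaddress module; the ASNs and prefixes are documentation/example values, not the real allocations.

```python
import ipaddress

# Hypothetical per-peer prefix lists: only these subnets are accepted from each ASN.
ALLOWED = {
    64501: [ipaddress.ip_network("192.0.2.0/25")],      # fictitious Tier1 A
    64502: [ipaddress.ip_network("198.51.100.0/24")],   # fictitious Tier1 B
}

def accept_announcement(peer_asn: int, prefix: str) -> bool:
    """Accept a BGP announcement only if it is covered by an expected LHC prefix."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in ALLOWED.get(peer_asn, []))

print(accept_announcement(64501, "192.0.2.0/26"))    # True: inside Tier1 A's allocation
print(accept_announcement(64501, "203.0.113.0/24"))  # False: unexpected prefix, rejected
```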

A possible design (continued)
... and if ...
- basic security is provided via layer 3 ACLs (allowed subnets and, if possible, port numbers); see the ACL sketch below
- Tier1s may have some non-homogeneous requirements
- no Tier2 is directly connected to Tier0, but some may be allowed to exchange traffic at less than 10 Gb/s
- alternatively, some T0-T2 traffic may transit via an intermediate T1
- a spooling system (data mover) is used as a buffer between sites to optimise long-distance data transfer and reduce the need for public IP addresses
- ... then ...
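"Basic security via layer 3 ACLs" amounts to filtering on source subnet, destination subnet and, where possible, destination port at the Tier0/Tier1 border. A toy evaluation of such an ACL follows; the subnets are documentation prefixes and the ports (2811 for a GridFTP control channel, plus one data port) are only examples.

```python
import ipaddress

# Hypothetical ACL entries: (allowed source net, allowed destination net, allowed dst ports).
ACL = [
    (ipaddress.ip_network("192.0.2.0/25"),       # fictitious Tier1 LHC subnet
     ipaddress.ip_network("203.0.113.0/24"),     # fictitious Tier0 LHC subnet
     {2811, 20000}),                             # e.g. GridFTP control port + one data port
]

def permitted(src: str, dst: str, dport: int) -> bool:
    """Return True if the packet matches an ACL entry; anything else is dropped."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in src_net and d in dst_net and dport in ports
               for src_net, dst_net, ports in ACL)

print(permitted("192.0.2.10", "203.0.113.5", 2811))  # True: expected LHC transfer
print(permitted("10.44.7.12", "203.0.113.5", 22))    # False: not an allowed subnet/port
```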

A possible T0-T1 WAN network [diagram]: the LHC LAN and the external network at CERN (multiple 10GE), Tier1s connected at 10GE or STM-64, a Tier2 connected at 10GE or multiple GbE, and a data mover (spool).

Thank you. Questions?