Network infrastructure at FR-CCIN2P3
Guillaume Cessieux – CCIN2P3 network team (Guillaume. cc.in2p3.fr)
On behalf of the CCIN2P3 network team
LHCOPN meeting, Vancouver

FR-CCIN2P3
– In operation since 1986; now 74 staff
– ~5300 cores, 10 PB of disk, 30 PB of tape
– Computing room of ~730 m², 1.7 MW

RENATER-4 → RENATER-5: dark fibre galore → ~7500 km of dark fibre
[Map of the RENATER-5 backbone; legend: dark fibres, 2.5 G leased line, 1 G (GE) leased line; labelled sites include Genève (CERN), Kehl, Cadarache, Tours, Le Mans and Angers.]

(D)WDM-based RENATER-5 PoP in Lyon
– Previously: Alcatel 1.6k series; Cisco 6500 & …
– Upgraded to: Ciena CN4200; Cisco 7600 & CRS-1
– Hosted by CCIN2P3: direct foothold into RENATER’s backbone, so no last-mile or MAN issues

Terminating two 10G LHCOPN links: GRIDKA-IN2P3-LHCOPN-001 and CERN-IN2P3-LHCOPN-001; CERN-GRIDKA-LHCOPN-001 is a candidate for L1 redundancy.
[Layer 3 view of the links (a 100 km distance is marked on the diagram).]

WAN connectivity related to T0/T1s
[Diagram: WAN and LAN view of the CCIN2P3 edge and backbone. Connectivity via RENATER towards GÉANT2, the generic IP Internet and the LHCOPN, with paths towards Chicago, Geneva and Karlsruhe; Tier-2s are reached over the FR NREN, Tier-1s over the dedicated 10G LHCOPN links. Other labels on the diagram: 2x1G, 1G, MDM appliances, dedicated data servers for LCG, and a warning “Beware: not for LHC”.]

LAN: just fully upgraded!
[Before/after diagrams of the LAN, each showing the computing, SATA storage and FC + tape storage blocks.]

Now a “top of rack” design, which really eases mass handling of devices
– Enables buying pre-wired racks directly: just plug in power and fibre – 2 connections!

Current LAN for data analysis
– Computing: 36 racks, 34 to 42 servers per rack, 1G per server; 1 access switch per rack (36 access switches, 48x1G each) with a 1x10G uplink; 3 distribution switches linked to the backbone with 4x10G
– Storage, data SATA: 816 servers in 34 racks, 24 servers per access switch; 34 access switches with trunked 2x10G uplinks; 2 distribution switches linked to the backbone with 4x10G
– Storage, data FC: 27 servers, 10G per server
– Tape: 10 servers, 2x1G per server
– Backbone: 40G
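A quick back-of-the-envelope check of the rack-level figures above (this sketch is not from the presentation; it only reuses the numbers quoted on the slide):

```python
# Access-to-uplink oversubscription of one computing rack:
# 34 to 42 servers at 1 Gbit/s each, sharing a single 10G uplink.

def oversubscription(access_gbps: float, uplink_gbps: float) -> float:
    """Ratio of offered access bandwidth to available uplink bandwidth."""
    return access_gbps / uplink_gbps

UPLINK_GBPS = 10   # one 10G uplink per computing rack
SERVER_GBPS = 1    # 1G per server

for servers in (34, 42):   # smallest and largest rack size quoted above
    ratio = oversubscription(servers * SERVER_GBPS, UPLINK_GBPS)
    print(f"{servers} servers per rack -> {ratio:.1f}:1 oversubscription")
```

So each computing rack runs at roughly 3.4:1 to 4.2:1 oversubscription towards its distribution switch, a common ratio for compute access layers.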

Main network devices and configurations used
– Backbone & edge: 24x10G (12 blocking) + 96x1G + 336x1G blocking (1G per 8 ports); 48x10G (24 blocking) + 96x1G
– Distribution (x5): 64x10G (32 blocking)
– Access (x70): 48x1G + 2x10G
– More than 13 km of copper cable and more than 3 km of 10G fibre

Tremendous flows
– The LHCOPN links are not much used yet, but there are still regular peaks at 30G on the LAN backbone
[Traffic graphs for GRIDKA-IN2P3-LHCOPN-001 and CERN-IN2P3-LHCOPN-001.]

Other details (LAN)
– Big devices preferred to a meshed bunch of small ones
– We avoid too much device diversity: it eases management & spares
– No spanning tree, trunking is enough; redundancy only at the service level when required
– Routing only in the backbone (EIGRP); 1 VLAN per rack
– No internal firewalling: ACLs on the border routers are sufficient, applied only on incoming traffic and per interface, to preserve CPU
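Purely as an illustration of the “1 VLAN per rack” point (the VLAN range and the 10.10.0.0/16 prefix below are invented for the example and are not CCIN2P3’s real addressing plan), such a layout can be sketched as:

```python
from ipaddress import IPv4Network

# Hypothetical "1 VLAN per rack" layout: one VLAN ID and one /24 per rack.
# The 10.10.0.0/16 supernet and the VLAN numbering are made up for this sketch.
SUPERNET = IPv4Network("10.10.0.0/16")
FIRST_VLAN = 101

def rack_plan(num_racks: int):
    """Yield (rack, vlan_id, subnet) triples, one per rack."""
    for offset, subnet in zip(range(num_racks), SUPERNET.subnets(new_prefix=24)):
        yield offset + 1, FIRST_VLAN + offset, subnet

for rack, vlan, subnet in rack_plan(36):   # e.g. the 36 computing racks
    print(f"rack {rack:02d}: VLAN {vlan}, subnet {subnet}")
```

Keeping each rack in its own VLAN, with routing only at the backbone, keeps layer-2 domains small, which is consistent with the “no spanning tree” choice above.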

Monitoring
– Home-made flavour of netflow: EXTRA (External Traffic Analyzer), but with some scalability issues around 10G...
– Cricket & Cacti plus home-made tools: ping & TCP tests + rendering
– Several of these views are publicly shared
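The home-made ping & TCP tests boil down to periodic reachability and connection-time probes; a minimal sketch of that idea (placeholder hostname and port, and in no way the actual EXTRA/Cricket/Cacti tooling) could look like:

```python
import socket
import subprocess
import time
from typing import Optional

# Placeholder target list; a real deployment would probe the LHCOPN peers
# and local data servers instead.
TARGETS = [("storage01.example.org", 2811)]

def ping_ms(host: str) -> Optional[float]:
    """Send one ICMP ping and return the reported round-trip time in ms, or None."""
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None
    for token in result.stdout.split():
        if token.startswith("time="):
            return float(token.split("=", 1)[1])
    return None

def tcp_connect_ms(host: str, port: int) -> Optional[float]:
    """Time a TCP connection setup to host:port, in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=5):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

for host, port in TARGETS:
    print(f"{host}: ping={ping_ms(host)} ms, tcp({port})={tcp_connect_ms(host, port)} ms")
```

Results of probes like these are what tools such as Cricket or Cacti then store and render as time series.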

Ongoing (1/3): WAN – RENATER
– Upcoming transparent L1 redundancy, Ciena based
– 40G & 100G testbed: the short FR-CCIN2P3 – CH-CERN path is a good candidate

Ongoing (2/3): LAN
– Improving servers’ connectivity: 1G → 2x1G → 10G per server, starting with the most demanding storage servers
– 100G LAN backbone (→ Nx40G, Nx100G): investigating Nexus-based solutions, e.g. the 7018 with 576x10G ports (worst case ~144 at wirespeed); moving from a flat to a star design
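For context on the 7018 figure, a one-line reading of it (my arithmetic, not a statement from the slide):

```python
# 576 x 10G ports installed but only ~144 usable at full wirespeed
# corresponds to a worst-case 4:1 oversubscription.
installed_10g_ports = 576
wirespeed_10g_ports = 144
print(f"worst case: {installed_10g_ports / wirespeed_10g_ports:.0f}:1 oversubscribed")
```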

Ongoing (3/3): a new computer room!
– 850 m² on two floors: one for cooling, UPS, etc., one for computing devices
– Target 3 MW (starting at 1 MW)
– Expected for the beginning of 2011
[Sketch: the existing building and the new two-floor building.]

Conclusion
WAN
– Excellent LHCOPN connectivity provided by RENATER
– Demand from Tier-2s may be the next working area
LAN
– Linking capacity recently tripled
– Next step will be the core backbone upgrade