CERN Data Center Network Changes and Evolution


Communications Services: CERN Data Center Network Changes and Evolution
David Gutiérrez
Co-authors: Carles Kishimoto, Edoardo Martelli
Communication Services / Engineering, www.cern.ch/it

Outline (2010-2013)
1. High performance Brocade routers
2. External connectivity and Firewall system
3. Network Architecture for Wigner

Data Center Network 2010

Timeline: 2010 Force10 / 2011 Brocade deployment, 100Gbps tests / 2012 100Gbps deployment

Data Center:
- Racks: 841
- Systems: 11,320
- Cores: 57,651
- Raw HDD (TiB): 61,137
- 1G NICs: 15,703
- 10G NICs: 390
- ToR (Top of the Rack) switches: 584
- DC power consumption: 2.44 MW

LCG routers (Force10):
- Non-blocking fabric: 2.88 Tbps
- Routers: 24
- Linecards: 248
- 10Gbps ports: 1,100
- 100Gbps ports: N/A

DC Network 2010

- Border routers feed the CORE; active/passive firewall pair between them
- CORE serves the GPN and LCG backbones; Force10 routers in the LCG backbone
- Aggregated 10Gbps links inside the fabric; 10Gbps link to Tier1s
- Switching fabric: GPN 0.96 Tbps, LCG 2.88 Tbps
- Backbone -> Distribution -> Access (ToR switches), down to LCG CPU, disk and tapes, plus AFS, Mail, Web, ...

LCG: LHC Computing Grid; GPN: General Purpose Network; ToR: Top of the Rack

10Gbps Aggregation issues

- Hashing is decoupled from link capacity
- Flow-based hashing: potential network traffic polarization
- Manageability of many parallel links (e.g. 10G and 4x10G LACP bundles)

LAG: Link AGgregation; ECMP: Equal Cost MultiPath
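The polarization problem can be sketched numerically: flow-based hashing pins every packet of a flow to one member link, so a handful of large flows (typical for LHC data transfers) can land unevenly on a 4x10G bundle, and no single flow can exceed 10Gbps regardless of the 40Gbps aggregate. A minimal sketch, with made-up addresses and a CRC32 hash standing in for whatever the hardware actually uses:

```python
import random
import zlib
from collections import Counter

def pick_link(flow, n_links):
    """Flow-based hashing as in LAG/ECMP: hash the flow's 5-tuple onto one
    member link, so all packets of the flow take the same link."""
    return zlib.crc32(repr(flow).encode()) % n_links

random.seed(42)
# A few large flows between fixed host pairs (addresses are illustrative).
flows = [(f"10.0.0.{random.randrange(256)}", f"10.0.1.{random.randrange(256)}",
          random.randrange(1024, 65536), 443, "tcp") for _ in range(8)]

# Count how many flows each of the 4 member links receives. With so few
# flows the distribution is often uneven: some links carry several flows
# while others sit nearly idle.
load = Counter(pick_link(f, 4) for f in flows)
print(dict(load))
```

The same arithmetic explains the slide's point that hashing is decoupled from capacity: the hash balances flow counts (at best), not bytes, so one elephant flow skews a whole member link.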

100Gbps fat router interconnects

- Technology upgrade where needed
- Performance and fairness tests
- 100GBase-LR10 CFP optics (SMF, up to 2 km)
- Testing 100GbE WAN to: Lyon (RENATER) ~120 km, Amsterdam (AMS-IX) ~1650 km
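For WAN tests at these distances, propagation delay sets the floor for round-trip time, and at 100Gbps the bandwidth-delay product becomes large. A back-of-the-envelope sketch, assuming light in fibre covers roughly 200 km per millisecond and the fibre route equals the quoted distances (real routes are longer, so real RTTs are higher):

```python
# Approximate speed of light in optical fibre (~2/3 of c).
SPEED_IN_FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

lyon_rtt = min_rtt_ms(120)    # Lyon (RENATER): ~1.2 ms
ams_rtt = min_rtt_ms(1650)    # Amsterdam (AMS-IX): ~16.5 ms

# Bandwidth-delay product at 100 Gbps: bytes that must be in flight
# (and buffered by the sender) to keep the Amsterdam path full.
bdp_bytes = 100e9 * (ams_rtt / 1000) / 8   # ~206 MB
print(lyon_rtt, ams_rtt, bdp_bytes / 1e6)
```

The ~200 MB in-flight figure for the Amsterdam path is why fairness among competing flows has to be tested explicitly: TCP windows of that size interact badly with shallow router buffers.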

Migration in images

DC Network today

- Brocade routers replace the Force10 in the LCG backbone
- Aggregated 10Gbps links; 100Gbps link; connections to LHCONE and Tier1s
- Switching fabric: GPN 1.36 Tbps, LCG 5.28 Tbps
- Backbone -> Distribution -> Access (ToR switches), down to LCG CPU, disk and tapes, plus AFS, Mail, Web
- Active/passive firewall pair between CORE and border routers

Data Center Network today (2012-2013)

Data Center      2010       2012
Power            2.9 MW     3.5 MW*
Racks            841        1,070
Systems          11,320     12,483
Cores            57,651     68,385
Raw HDD (TiB)    61,137     97,698
1G NICs          15,703     16,026
10G NICs         390        1,912
ToR Switches     584        662
Consumption      2.44 MW    2.8 MW

Data Center L3 switch (Brocade), LCG:
- Non-blocking fabric: 5.28 Tbps
- Routers: 22
- Linecards: 230
- 10Gbps ports: 1,280
- 100Gbps ports: 60

Data Center L2 switch (HP):
- 1Gbps ports: 22,776 / 4,284

MLXe32 technical specs:
- Non-blocking fabric: 15 Tbps
- Linecards: 32
- 10Gbps ports: 256
- 100Gbps ports: 8

Outline (2010-2013)
1. High performance Brocade routers
2. External connectivity
3. Network Architecture for Wigner

External connectivity changes

- External networks (EXTNET): Internet, Géant2, Internet2, US Peers, CIXP
- Dedicated LHCOPN and LHCONE connections toward the LCG
- Active/passive firewall pair between EXTNET and the CORE; CORE feeds LCG and GPN

CIXP: CERN Internet eXchange Point

Firewall System: Active-Passive

- External links: Internet 20Gbps, Géant2, Internet2, US Peers, CIXP (shared with SWITCH); link capacities 12Gbps, 3.8Gbps, 1Gbps
- LHCOPN: 130Gbps; LHCONE: 20Gbps
- Firewall: 30Gbps, 6Gbps stateful
- Active firewall carries the traffic, passive firewall on standby, between EXTNET and the CORE (LCG, GPN)

Firewall System: Active-Active

- New RENATER link at 2Gbps; Internet capacity grows to 40Gbps
- Other external links: Géant2, Internet2, US Peers, CIXP (shared with SWITCH); link capacities 12Gbps, 3.8Gbps, 1Gbps, 20Gbps
- LHCOPN: 130Gbps; LHCONE: 20Gbps
- Both firewalls active: one at 30Gbps with 6Gbps stateful, the other at 30Gbps with 10Gbps stateful, between EXTNET and the CORE (LCG, GPN)

Outline (2010-2013)
1. High performance Brocade routers
2. External connectivity
3. Network Architecture for Wigner

LCG Resources

- Geneva (Building 513) and Budapest (Wigner) linked by 2x100Gbps
- CERN Core Network and Wigner Core Network each serve local LCG and GPN resources
- External reachability via Internet / GeantIP / ESnet / Internet2, behind the firewall

Autonomous Operation

- Wigner has its own address space, 188.185.0.0/16 and 2001:1459::/32, announced as AS198797
- Local Internet / HU access and its own firewall
- Core services replicated locally: radius, dns, ntp, dhcp
- 2x100Gbps links back to the CERN Core Network (Geneva, Building 513)
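The point of the dedicated address space is that Wigner hosts remain reachable through the local Internet/HU access even if the intersite links fail. A small sketch with Python's ipaddress module; the prefixes are the ones on the slide, while the individual host addresses are made-up examples:

```python
import ipaddress

# Wigner's own prefixes, announced as AS198797 (from the slide).
wigner_v4 = ipaddress.ip_network("188.185.0.0/16")
wigner_v6 = ipaddress.ip_network("2001:1459::/32")

# Hypothetical Wigner hosts fall inside the locally announced blocks,
# so they stay reachable via the local Internet/HU access.
assert ipaddress.ip_address("188.185.12.34") in wigner_v4
assert ipaddress.ip_address("2001:1459::1") in wigner_v6

# An address outside the Wigner block is not covered by AS198797's
# announcement and must be reached across the 2x100Gbps intersite links.
assert ipaddress.ip_address("188.184.0.1") not in wigner_v4
```

This is only an addressing illustration; the actual routing policy between the two sites is more involved than a prefix-membership test.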

LHCOPN and LHCONE traffic

- LHCOPN and LHCONE reach both Geneva (Building 513) and Budapest (Wigner)
- Traffic carried between the two sites over MPLS, with BGP for route exchange
- Local services (radius, dns, ntp, dhcp) and LCG/GPN as in the autonomous-operation setup

MPLS: MultiProtocol Label Switching
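The idea behind carrying these networks over MPLS is per-VPN routing: each VPN keeps its own routing table, so LHCOPN/LHCONE traffic and general-purpose traffic can resolve the same destination to different paths across the Geneva-Budapest core. A toy longest-prefix-match lookup over per-VPN tables; only the 188.185.0.0/16 prefix comes from the slides, the table entries and next-hop names are illustrative:

```python
import ipaddress

# One routing table per VPN (VRF-style). Entries are illustrative only.
vrfs = {
    "LHCOPN": {"188.185.0.0/16": "wigner-mpls-lsp", "0.0.0.0/0": "lhcopn-border"},
    "GPN":    {"188.185.0.0/16": "wigner-mpls-lsp", "0.0.0.0/0": "gpn-firewall"},
}

def lookup(vrf_name, dst):
    """Longest-prefix match within a single VPN's table."""
    dst = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in vrfs[vrf_name]
                if dst in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return vrfs[vrf_name][str(best)]

# The same external destination exits differently depending on the VPN:
assert lookup("LHCOPN", "8.8.8.8") == "lhcopn-border"
assert lookup("GPN", "8.8.8.8") == "gpn-firewall"
# Both VPNs reach Wigner over the dedicated intersite path:
assert lookup("GPN", "188.185.1.1") == "wigner-mpls-lsp"
```

In a real deployment this separation is done by the routers with MPLS L3VPNs and BGP-distributed routes, not application code; the sketch only shows why separate tables keep the traffic classes apart.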

Wigner in numbers

Wigner Data Center        2013      2014
Power                     ~900 kW   ~1200 kW
Racks                     90        120
Routers                   6         10 + Firewall
100Gbps ports             18
Switches                  140       210
Servers                   ~1200     ~1800
L2 switch 1Gbps ports     3,072     4,608
L2 switch 10Gbps ports    528       792

Thank you for your attention. Questions?