2 A follow-up on network projects. Sebastien.Ceuterickx@cern.ch. Co-authors: Edoardo.Martelli@cern.ch, David.Gutierrez@cern.ch, Carles.Kishimoto@cern.ch, Aurelie.Pascal@cern.ch. IT/Communication Systems, HEPiX Fall 2013, 10/29/2013.

3 Agenda: Latest network evolution; network connectivity at Wigner; Business Continuity; status of IPv6 deployment; TETRA deployment.

4 Latest network development

5 Upgrade of the Geneva data center: migration to Brocade routers completed, a 2-year project with no service interruption. Benefits: 100 Gbps links; simplified topology (from 22 to 13 routers); lower power consumption per port; margin for scalability; enhanced features (MPLS, virtual routing).

6 CERN Data Center today (topology diagram): a CORE network of backbone routers with 100 Gbps links and MPLS connects the LCG and GPN distribution and racks, the external network behind the firewall, and Wigner. Aggregate capacities: backbone ∑ 140 Gbps, Wigner ∑ 200 Gbps, LCG ∑ 60 Gbps, GPN ∑ 20 Gbps, Internet ∑ 12 Gbps; switching fabrics of 5.28 Tbps, 1.36 Tbps and 1 Tbps.

7 Scaling the Data Center (diagram): storage and CPU racks connect through distribution routers (100s of 10 Gbps links) to backbone routers (10s of 100 Gbps links); the aim is to skip the 40 Gbps interface generation.

8 Scaling the Top of the Rack (diagram): CPU and storage servers connect at 1 Gbps or 10 Gbps to top-of-rack switches, which uplink to the distribution router with m x 10 Gbps or n x 100 Gbps links. Service capacity depends on the service purpose; blocking factor: 2 for CPU, 5 for storage. Open questions: 10GBase-T? 40 Gbps?
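The blocking-factor sizing above can be sketched as a small calculation: the aggregate uplink of a top-of-rack switch is the server-facing capacity divided by the acceptable oversubscription. The server counts and NIC speeds below are illustrative assumptions, not figures from the slides.

```python
# Hypothetical sketch of blocking-factor (oversubscription) sizing:
# uplink capacity = total server-facing capacity / blocking factor
# (2 for CPU racks, 5 for storage racks, as stated on the slide).

def uplink_gbps(n_servers, nic_gbps, blocking_factor):
    """Minimum aggregate uplink (Gbps) for a top-of-rack switch."""
    return n_servers * nic_gbps / blocking_factor

# Example: 40 CPU servers with 10 Gbps NICs, blocking factor 2
# -> 200 Gbps of uplinks, i.e. 2 x 100 Gbps.
cpu_uplink = uplink_gbps(40, 10, 2)

# Example: 50 storage servers with 10 Gbps NICs, blocking factor 5
# -> 100 Gbps of uplinks, i.e. 1 x 100 Gbps.
storage_uplink = uplink_gbps(50, 10, 5)
```

The stricter factor for storage reflects that storage traffic is mostly sequential bulk flows that tolerate sharing, while CPU farms need headroom for bursty many-to-many traffic.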

9 Extending the Tier0 to Wigner (diagram): the CORE network and backbone in Geneva, Switzerland linked to racks and distribution in Budapest, Hungary; Internet ∑ 240 Gbps.

10 WLCG Tier0 (diagram): Internet, CORE and backbone routers joined by an MPLS backbone, with racks and distribution at both sites.

11 Business Continuity

12 Wigner for Business Continuity (diagram): the LCG extended over the MPLS CORE to Wigner; external network and firewall duplicated, with virtual routers providing the LCG and GPN at both sites.

13 2nd Network Hub at CERN (diagram): today the CORE network, LCG, GPN, firewall and external network (backbone ∑ 140 Gbps, GPN ∑ 20 Gbps, LCG ∑ 60 Gbps, Internet ∑ 12 Gbps, Wigner ∑ 200 Gbps) are concentrated in a single building.

14 2nd Network Hub at CERN (diagram): the CORE network, external network, firewall, LCG and GPN are duplicated across two hubs, with the same aggregate capacities (∑ 140 Gbps, ∑ 60 Gbps, ∑ 20 Gbps, Internet ∑ 12 Gbps, Wigner ∑ 200 Gbps).

15 IPv6 deployment at CERN

16 IPv6 deployment timeline. 2012: network database (schema and data) IPv6-ready; Configuration Manager supports IPv6 routing; Admin Web: IPv6 integrated. 2013: the Data Center is dual-stack; gradual deployment on the routing infrastructure starts; NTPv6 and DNSv6; DHCPv6 infrastructure is dual-stack; Firewallv6 automated configuration; User Web and SOAP integrate IPv6. Today: automatic DNS AAAA configuration.
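Once automatic AAAA records are published, a service manager can check whether a host resolves over both families before declaring it dual-stack. A minimal sketch (not CERN tooling) using the standard resolver:

```python
# Minimal dual-stack check: does DNS return both A (IPv4) and
# AAAA (IPv6) results for a given name? Uses only the stdlib resolver.
import socket

def address_families(hostname):
    """Return the set of address families the resolver yields for hostname."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[0] for info in infos}

def is_dual_stack(hostname):
    """True if hostname resolves to both an IPv4 and an IPv6 address."""
    fams = address_families(hostname)
    return socket.AF_INET in fams and socket.AF_INET6 in fams

# Usage (requires DNS): is_dual_stack("www.cern.ch")
```

Numeric literals work too, so the helper can be exercised without network access, e.g. `address_families("127.0.0.1")` yields only `AF_INET`.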

17 IPv4 / IPv6: same portfolio. Identical performance, common tools and services. Dual stack, dual routing: OSPFv2/OSPFv3, BGP with IPv4 and IPv6 peers. Service managers decide when they are ready for IPv6. Devices must be registered: IPv6 autoconfiguration (SLAAC) is disabled; RAs advertise the default gateway and IPv6 prefixes with no-autoconfig. DHCPv6: using MAC addresses as DUIDs is painful without RFC 6939; ISC has helped a lot (beta code implementing classes for IPv6); DHCPv6 clients might not work 'out of the box'.
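Why MAC-based registration is awkward with DHCPv6: the client identifies itself by a DUID, and only two DUID variants embed a link-layer address at all, at different offsets. A hedged sketch (not CERN's actual tooling) of extracting the MAC where possible:

```python
# Sketch: recover a MAC address from a DHCPv6 DUID where the format
# allows it. DUID-LL (type 3) and DUID-LLT (type 1) contain the
# link-layer address; other types (DUID-EN, DUID-UUID) do not, which
# is why RFC 6939 (relay-supplied client link-layer address) helps.
import struct

def mac_from_duid(duid: bytes):
    """Return the MAC from a DUID-LL or DUID-LLT as a string, else None."""
    duid_type = struct.unpack_from("!H", duid, 0)[0]
    if duid_type == 3:    # DUID-LL:  type(2) + hw-type(2) + address
        mac = duid[4:]
    elif duid_type == 1:  # DUID-LLT: type(2) + hw-type(2) + time(4) + address
        mac = duid[8:]
    else:                 # no link-layer address embedded
        return None
    return ":".join(f"{b:02x}" for b in mac)

# DUID-LL for MAC 00:11:22:33:44:55 over Ethernet (hardware type 1)
example_duid = bytes.fromhex("0003" "0001" "001122334455")
```

Clients are free to send any DUID type and to keep it stable across NIC changes, so a registration system keyed on MACs cannot rely on the DUID alone.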

18 Lots of VMs; two options. A) VMs with only public IPv6 addresses: + unlimited number of VMs; - several applications don't run over IPv6 today (PXE, AFS, ...); - very few remote sites have IPv6 enabled (limited remote connectivity); + will push IPv6 adoption in the WLCG community. B) VMs with private IPv4 and public IPv6: + works flawlessly inside the CERN domain; - no connectivity with remote IPv4-only hosts (NAT solutions not supported or recommended). The current VM adoption plan will cause IPv4 depletion during 2014.

19 TETRA deployment

20 What is TETRA? A digital professional radio technology and an ETSI standard (VHF band, 410-430 MHz). It makes use of "walkie-talkies" and offers voice services, message services, and data and other services. Designed for safety and security daily operation.

21 The project: update the radio system used by the CERN Fire Brigade. Fully operational since early 2013, after 2.5 years of work. A fully secured radio network (unlike GSM); complete surface and underground coverage; cooperation with the French and Swiss authorities; enhanced features and services.

22 Which services? Interconnection with other networks; distinct or overlapping user communities (security, transport, experiments, maintenance teams, ...); outdoor and indoor geolocation; lone-worker protection.

23 Conclusion: the network is ready to cope with ever-increasing needs; Wigner is fully integrated; Business Continuity is under development. Before end-2013, IPv6 will be fully deployed and available to the CERN community. The TETRA system provides CERN with an advanced, fully secured radio network.

24 Thank you! Questions?

25 Some links:
A short introduction to the Worldwide LCG, Maarten Litmaath: https://espace.cern.ch/cern-guides/Documents/WLCG-intro.pdf
Physics computing at CERN, Helge Meinhard: https://openlab-mu-internal.web.cern.ch/openlab-mu-internal/03_Documents/4_Presentations/Slides/2011-list/H.Meinhard-PhysicsComputing.pdf
WLCG - Beyond the LHCOPN, Ian Bird: http://www.glif.is/meetings/2010/plenary/bird-lhcopn.pdf
LHCONE - LHC Use case: http://lhcone.web.cern.ch/node/23
LHC Open Network Environment, Bos-Fisk paper: http://lhcone.web.cern.ch/node/19
Introduction to CERN Data Center, Frederic Hemmer: https://indico.cern.ch/getFile.py/access?contribId=87&resId=1&materialId=slides&confId=243569
The invisible Web: http://cds.cern.ch/journal/CERNBulletin/2010/49/News%20Articles/1309215?ln=en
CERN LHC Technical infrastructure monitoring: http://cds.cern.ch/record/435829/files/st-2000-018.pdf
Computing and network infrastructure for Controls: http://epaper.kek.jp/ica05/proceedings/pdf/O2_009.pdf

26 Extra Slides

27 CERN in numbers:
Area: ~600,000 m2; Buildings: 646; Staff and Users: 14,592; Devices Registered: 170,475.
Data Centers (Geneva | Wigner 2013 | Wigner 2014):
Power: 3,500 kW | ~900 kW | ~1,200 kW
Racks: 828 | 90 | 120
Servers: 10,173 | ~1,200 | ~1,800
Routers: 22 | 6 | 8 + Firewall
100 Gbps ports: 60 | 18 | -
ToR switches: 662 | 140 | 210
ToR switching, 1 Gbps ports: 22,776 | 3,072 | 4,608
ToR switching, 10 Gbps ports: 4,284 | 528 | 792
Storage: Disks: 76,697; Raw disk capacity: 116,948 TiB; Tape drives: 160; Data on tape: 100 PiB.
L2 switching: Switches: 2,726; 1 Gbps ports: 91,230; 10 Gbps ports: 5,656.
L3 switching: Routers: 161; 1 Gbps ports: 5,976; 10 Gbps ports: 2,248; 100 Gbps ports: 78.
WiFi: Access points: 1,514; Devices seen/day: ~7,000.

28 LHCONE: LHC Open Network Environment. Enables high-volume data transport between T1s, T2s and T3s; separates large LHC flows from the general-purpose routed R&E infrastructures; provides access locations that are entry points into a network private to the LHC T1/T2/T3 sites; complements the LHCOPN.
