1
Network infrastructure at FR-CCIN2P3
Guillaume Cessieux (Guillaume.Cessieux@cc.in2p3.fr), on behalf of the CCIN2P3 network team
LHCOPN meeting, Vancouver, 2009-09-01
2
FR-CCIN2P3
– Since 1986; now 74 persons
– ~5300 cores, 10 PB of disk, 30 PB of tape
– Computing room ~730 m², 1.7 MW
3
RENATER-4 → RENATER-5: dark fibre galore – ~7500 km of dark fibre
[Maps of RENATER-4 and RENATER-5; legend: dark fibres, 2.5 G leased lines, 1 G (GE) leased lines; sites shown include Genève (CERN), Kehl, Cadarache, Tours, Le Mans and Angers]
4
PoP RENATER-5 Lyon: (D)WDM based
– Previously: Alcatel 1.6k series, Cisco 6500 & 12400
– Upgraded to: Ciena CN4200, Cisco 7600 & CRS-1
– Hosted by CCIN2P3: direct foot into RENATER's backbone, no last-mile or MAN issues
5
Ending two 10G LHCOPN links: CERN-IN2P3-LHCOPN-001 and GRIDKA-IN2P3-LHCOPN-001
[Layer 3 view: CERN-IN2P3-LHCOPN-001, GRIDKA-IN2P3-LHCOPN-001 and CERN-GRIDKA-LHCOPN-001; a "candidate for L1 redundancy" annotation and a ~100 km span are indicated]
6
WAN connectivity related to T0/T1s
[Diagram: RENATER backbone and edge; 10G LHCOPN links towards Geneva and Karlsruhe; generic IP (GÉANT2, Internet) and dedicated paths for FR NREN Tier-2s; Chicago; 2x1G WAN/LAN link; MDM appliances; dedicated data servers for LCG (1G); annotation: "Beware: not for LHC"]
7
LAN: just fully upgraded!
[Before/after diagrams: computing, SATA storage and FC + tape storage]
8
Now a “top of rack” design
– Really eases mass handling of devices
– Enables directly buying pre-wired racks: just plug power and fibre – 2 connections!
9
Current LAN for data analysis
– Computing: 36 computing racks, 34 to 42 servers per rack at 1G per server; 1 switch per rack (36 access switches, 48x1G per switch) with a 1x10G uplink; 3 distribution switches linked to the backbone with 4x10G
– Storage: data FC (27 servers, 10G/server), data SATA (816 servers in 34 racks) and tape (10 servers, 2x1G per server); 24 servers per switch on 34 access switches with trunked 2x10G uplinks; 2 distribution switches linked to the backbone with 4x10G
– Backbone: 40G
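The figures above imply the following uplink oversubscription ratios. This is only a back-of-the-envelope sketch, assuming the SATA storage servers attach at 1G and the 36 computing uplinks are spread evenly over the 3 distribution switches:

```python
# Oversubscription arithmetic for the access/distribution layers described above.
# Server and uplink counts come from the slide; the 1G SATA attachment and the
# even split of computing uplinks across 3 distribution switches are assumptions.

def oversubscription(ports: int, gbps_per_port: float, uplink_gbps: float) -> float:
    """Ratio of offered bandwidth to uplink capacity."""
    return (ports * gbps_per_port) / uplink_gbps

# Computing rack: up to 42 servers at 1G behind a single 10G uplink.
print("computing access :", oversubscription(42, 1, 10))            # 4.2:1

# SATA storage switch: 24 servers (assumed 1G each) behind a trunked 2x10G uplink.
print("storage access   :", oversubscription(24, 1, 2 * 10))        # 1.2:1

# Computing distribution: 36 access uplinks of 10G spread (assumed evenly) over
# 3 distribution switches, each linked to the backbone with 4x10G.
print("computing distrib:", oversubscription(36 // 3, 10, 4 * 10))  # 3.0:1
```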
10
Main network devices and configurations used
– Backbone & edge: Cisco 6509 & 6513 – 24x10G (12 blocking) + 96x1G + 336x1G blocking (1G per 8 ports); 48x10G (24 blocking) + 96x1G; 64x10G (32 blocking)
– Distribution: Cisco 4900 – 16x10G (x5)
– Access: Cisco 4948 – 48x1G + 2x10G (x70)
> 13 km of copper cable & > 3 km of 10G fibre
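Assuming the x5 / x70 unit counts map to the distribution (4900) and access (4948) models as laid out above, a quick tally of the port inventory looks like this (backbone/edge chassis counts are not given on the slide, so they are left out):

```python
# Quick tally of the switch port inventory described above. Unit counts per role
# (5 distribution, 70 access) are from the slide; mapping the counts to models
# is an assumption, and the backbone/edge chassis are omitted.

INVENTORY = [
    # (model, role, units, 10G ports per unit, 1G ports per unit)
    ("Catalyst 4900", "distribution", 5, 16, 0),
    ("Catalyst 4948", "access", 70, 2, 48),
]

total_10g = sum(units * p10 for _, _, units, p10, _ in INVENTORY)
total_1g = sum(units * p1 for _, _, units, _, p1 in INVENTORY)
print(f"10G ports: {total_10g}, 1G ports: {total_1g}")  # 220 x 10G, 3360 x 1G
```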
11
Tremendous flows
[Traffic graphs: CERN-IN2P3-LHCOPN-001 and GRIDKA-IN2P3-LHCOPN-001]
– LHCOPN links are not so used yet, but there are still regular peaks at 30G on the LAN backbone
12
Other details – LAN
– Big devices preferred to a meshed bunch of small ones
– We avoid too much device diversity: eases management & spares
– No spanning tree, trunking is enough; redundancy only at service level when required
– Routing only in the backbone (EIGRP); 1 VLAN per rack
– No internal firewalling: ACLs on border routers are sufficient, applied only on incoming traffic and per interface (preserves router CPU) – see the configuration sketch below
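To make the "one VLAN per rack, routing only in the backbone, inbound-only border ACLs" policy concrete, here is a minimal sketch that renders an IOS-style configuration fragment. The VLAN numbers, the 10.10.0.0/16 supernet and the ACL name are invented for illustration; this is not CCIN2P3's actual addressing plan or router configuration.

```python
# Hypothetical illustration of the policy described above: one VLAN per rack,
# routed only on the backbone, inbound-only ACL at the border.
# VLAN ids, prefixes and the ACL name are invented for the example.

from ipaddress import ip_network

def rack_vlan_config(rack: int, supernet: str = "10.10.0.0/16") -> str:
    """Render an IOS-style SVI for one rack: one VLAN, one /24 (hypothetical plan)."""
    subnet = list(ip_network(supernet).subnets(new_prefix=24))[rack]
    gateway = next(subnet.hosts())
    return "\n".join([
        f"vlan {100 + rack}",
        f" name rack-{rack:02d}",
        f"interface Vlan{100 + rack}",
        f" ip address {gateway} {subnet.netmask}",
    ])

# Inbound-only filtering on the border router (ACL name and rules are invented).
BORDER_ACL = "\n".join([
    "ip access-list extended FROM-WAN-IN",
    " permit tcp any 10.10.0.0 0.0.255.255 established",
    " deny   ip any any log",
])

if __name__ == "__main__":
    print(rack_vlan_config(3))   # config for hypothetical rack 3
    print(BORDER_ACL)
```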
13
Monitoring
– Home-made flavour of netflow – EXTRA: External Traffic Analyzer, http://lpsc.in2p3.fr/extra/ – but some scalability issues around 10G...
– Cricket & Cacti + home-made ping & TCP tests + rendering (illustrative sketch below)
– Several publicly shared: http://netstat.in2p3.fr/
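As an illustration of the kind of home-made ping & TCP tests mentioned above (this is not the actual EXTRA, Cricket or Cacti tooling, and the host names and port are placeholders):

```python
# Minimal sketch of home-made reachability checks: one ICMP echo plus a timed
# TCP connect per host. Not the actual EXTRA / Cricket / Cacti tooling;
# host names and the tested port are placeholders.

import socket
import subprocess
import time
from typing import Optional

HOSTS = ["ccin2p3.example.org", "lhcopn-peer.example.org"]  # placeholders

def icmp_ok(host: str) -> bool:
    """One ICMP echo via the system ping command (Linux-style flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def tcp_connect_ms(host: str, port: int = 80, timeout: float = 2.0) -> Optional[float]:
    """TCP connect time in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

if __name__ == "__main__":
    for host in HOSTS:
        print(host, "icmp:", icmp_ok(host), "tcp(ms):", tcp_connect_ms(host))
```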
14
Ongoing (1/3): WAN – RENATER
– Upcoming transparent L1 redundancy, Ciena based
– 40G & 100G testbed: the short path FR-CCIN2P3 – CH-CERN is a good candidate
15
Ongoing (2/3): LAN
– Improving servers' connectivity: 1G → 2x1G → 10G per server, starting with the most demanding storage servers (back-of-the-envelope sketch below)
– 100G LAN backbone (→ Nx40G, Nx100G): investigating Nexus-based solutions – 7018: 576x10G (worst case ~144 at wire speed); moving from a flat to a star design
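A small back-of-the-envelope view of why the backbone has to grow with the per-server steps (the 816-server count is reused from the earlier SATA storage figure, the "4x100G" backbone size is an invented example, and the calculation assumes every server could drive its link at once):

```python
# Back-of-the-envelope: aggregate offered bandwidth of an 816-server farm
# (figure from the earlier LAN slide) at each per-server connectivity step,
# compared with the current 40G backbone and an example 4x100G backbone.
# Worst case: assumes all servers transmit at line rate simultaneously.

SERVERS = 816
BACKBONES_GBPS = {"current 40G": 40, "example 4x100G": 400}

for label, gbps in [("1G", 1), ("2x1G", 2), ("10G", 10)]:
    offered = SERVERS * gbps
    ratios = ", ".join(f"{name} -> {offered / cap:.1f}:1"
                       for name, cap in BACKBONES_GBPS.items())
    print(f"{label:>5} per server: {offered:5d} Gb/s offered ({ratios})")
```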
16
Ongoing (3/3): a new computer room!
[Diagram: existing building and new building 2, 2 floors]
– 850 m² on two floors: one floor for cooling, UPS, etc., one floor for computing devices
– Target 3 MW, starting at 1 MW
– Expected beginning of 2011
17
Conclusion
– WAN: excellent LHCOPN connectivity provided by RENATER; demand from T2s may be the next working area
– LAN: linking abilities recently tripled; next step will be the core backbone upgrade