Slide 1: Infrastructure for the LHCb RTTC
Artur Barczyk, CERN/PH
RTTC meeting, 26.10.2004
Slide 2: Background
Proposed setup for the RTTC (Beat, 29.10.04):
[Diagram: ECS and a controls switch connected to disk servers, SFCs, data switches and farm nodes]
Slide 3: Background
Existing equipment in building 157:
- 46 compute nodes
- 4 SFCs:
  - 1 dual Xeon (32-bit architecture)
  - 2 dual Opteron (64-bit architecture)
  - 1 dual Itanium (64-bit architecture)
- 1 ECS server (Windows)
- 1 NFS server (Linux)
- 3 24-port GbE switches
- 1 48-port FE switch (farm connectivity for controls)
- 2 complete sub-farms with 23 nodes each (although aging, so no speed records to be expected; the plan is to buy 23 dual-CPU farm nodes)
All hosts (including the switches) are on the LHCb private network.
Slide 4: Private Network
A private network is:
- An IP network using a private address range
- Private = administered within the organisation, i.e. the LHCb Online team in this case
- Not directly connected to the internet; access is via a gateway
The reserved private ranges are (RFC 1918):
- Class A: 10.0.0.0/8 (16 M hosts)
- Class B: 172.16.0.0/12 (1 M hosts)
- Class C: 192.168.0.0/16 (64 k hosts)
In general, all hosts are accessible via the gateway. Some boxes, in particular the servers, can be accessed from the CERN network as usual (Network Address Translation (NAT) on the gateway machine is transparent to the user). The gateway also functions as a firewall, so we need to identify the services required from outside and open the corresponding ports (e.g. AFS, DNS etc.).
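The RFC 1918 ranges listed above can be checked mechanically; a minimal sketch using Python's standard `ipaddress` module (the helper function name is ours, the ranges and example addresses come from these slides):

```python
import ipaddress

# RFC 1918 private ranges, as listed on the slide
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),      # Class A, ~16 M hosts
    ipaddress.ip_network("172.16.0.0/12"),   # Class B, ~1 M hosts
    ipaddress.ip_network("192.168.0.0/16"),  # Class C, ~64 k hosts
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls in one of the RFC 1918 ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

# A farm-node control address from the 157 setup is private;
# the gateway's CERN-side address is not.
print(is_rfc1918("10.1.2.7"))          # True
print(is_rfc1918("137.138.137.239"))   # False
```

(The standard library also offers `ip_address(...).is_private`, but the explicit range list mirrors the slide.)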
Slide 5: Why bother?
Future: the readout network will be a private network, as will the controls network etc.
Present: the DAQ test bed in 157 is running out of CERN IP numbers ("our" segment has 127 possible addresses, 101 of them already used up).
This is a good opportunity to switch over and test the functionality before/during the Trigger Challenge.
[Diagram: CERN/IT connected via a gateway to LHCb Point 8 (IT, controls, storage, workstations)]
Slide 6: Control interfaces
The setup in 157 uses class A private numbers. Subnet 10.1.0.0/16 is used for the control interfaces, with the 3rd octet distinguishing between:
- Farm nodes: 10.1.N.0/24, e.g. 10.1.2.7 for PC 7 in farm 2
- SFCs: 10.1.100.0/24, e.g. 10.1.100.5 for pclbtbsfc05
- Servers: 10.1.101.0/24, e.g. 10.1.101.2 for pclbtbsrv02
- Sources: 10.1.102.0/24, e.g. 10.1.102.12 for pclbtbsrc12
[Diagram: gateway LBTBGW, 137.138.137.239 on the CERN network and 10.254.254.254/8 on the DAQ private network, connecting the farm, sources (SRCs), servers (SRVs) and SFCs]
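The control-interface scheme above is regular enough to express as a small helper; a sketch assuming exactly the role-to-octet mapping on this slide (the function name and role labels are ours):

```python
# Control-interface addressing in the 157 test bed: subnet 10.1.0.0/16,
# with the host's role encoded in the 3rd octet (farm number for farm
# nodes; 100/101/102 for SFCs, servers and sources respectively).
ROLE_OCTET = {"sfc": 100, "server": 101, "source": 102}

def control_ip(role, index, farm=None):
    """Control-network IP for a host, following the slide's scheme."""
    if role == "farm":
        # PC <index> in farm <farm>: 10.1.<farm>.<index>
        return f"10.1.{farm}.{index}"
    return f"10.1.{ROLE_OCTET[role]}.{index}"

# The examples given on the slide:
print(control_ip("farm", 7, farm=2))   # 10.1.2.7    (PC 7 in farm 2)
print(control_ip("sfc", 5))            # 10.1.100.5  (pclbtbsfc05)
print(control_ip("server", 2))         # 10.1.101.2  (pclbtbsrv02)
print(control_ip("source", 12))        # 10.1.102.12 (pclbtbsrc12)
```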
Slide 7: User access
Access is generally through the gateway (lbtbgw), in two steps:
  pclhcb114> ssh lbfarmer@lbtbgw
  lbtbgw> ssh lbfarmer@farm0001
The firewall is currently open only for:
- ssh
- IP-time
- DNS
- AFS
AFS can be accessed:
- as usual on directly NATed boxes (servers, SFCs)
- via dynamic NAT from all other boxes (farm nodes); this means that only the host in question can start a connection, and that only a limited number of hosts can access AFS at the same time. This is meant for e.g. system upgrades.
Other services will be allowed to pass the gateway when identified as needed. In principle, the RTTC traffic should stay local within our domain.
Slide 8: Data interfaces
Subnet 10.2.0.0/16 is used for the data interfaces, with the 3rd octet distinguishing between:
- Data source N: 10.2.N.0/24
- SFC M: 10.2.10M.0/24
- Farm K nodes: 10.2.20K.0/24
Note: there is no gateway on the data network!
[Diagram: source 1 (10.2.1.1, 10.2.1.3), SFC 5 (10.2.105.1, 10.2.105.2) and farm 1, node 15 (10.2.201.15)]
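The data-interface numbering mirrors the control scheme but with arithmetic on the 3rd octet; a sketch assuming the pattern on this slide, i.e. sources use the 3rd octet directly, SFC M maps to 10M, and farm K maps to 20K (the function names are ours):

```python
# Data-interface addressing: subnet 10.2.0.0/16, no gateway.
def source_ip(n, host=1):
    """Data source N: 10.2.N.0/24."""
    return f"10.2.{n}.{host}"

def sfc_ip(m, host=1):
    """SFC M: 10.2.10M.0/24, i.e. 3rd octet = 100 + M."""
    return f"10.2.{100 + m}.{host}"

def farm_node_ip(k, node):
    """Node <node> in farm K: 10.2.20K.0/24, i.e. 3rd octet = 200 + K."""
    return f"10.2.{200 + k}.{node}"

# The example addresses from the slide's diagram:
print(source_ip(1))          # 10.2.1.1
print(sfc_ip(5))             # 10.2.105.1
print(farm_node_ip(1, 15))   # 10.2.201.15
```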
Slide 9: Status/Outlook
The setup has recently started running on the private network; so far it has been used for switch testing and SFC benchmarking.
We have to gain experience with running behind a firewall:
- Identify the outside services needed
- Install whatever is missing/useful
- Other operational details, e.g. ssh tunnelling, security/OS updates etc.
Hardware installations:
- 1-2 disk servers for the RTTC data
- 23 state-of-the-art farm nodes