Infrastructure for the LHCb RTTC
Artur Barczyk, CERN/PH
RTTC meeting
Background
Proposed setup for RTTC (Beat, ):
[diagram: ECS and disk servers connect through a controls switch to the SFCs; each SFC feeds its farm nodes through a data switch]
Background
Existing equipment in 157:
- 46 compute nodes
- 4 SFCs: 1 dual Xeon (32-bit architecture), 2 dual Opteron (64-bit architecture), 1 dual Itanium (64-bit architecture)
- 1 ECS server (Windows)
- 1 NFS server (Linux)
- 3 24-port GbE switches
- 1 48-port FE switch (farm connectivity for controls)
- 2 complete sub-farms with 23 nodes each (although aging, so no speed record to be expected... but 23 dual-CPU farm nodes are planned for purchase)
All hosts (incl. switches) are on the LHCb private network.
Private Network
A private network is:
- An IP network using the private address range
- Private = administered within the organisation, i.e. the LHCb Online team in this case
- Not directly connected to the internet; access is via a gateway
- The reserved private ranges are (RFC 1918):
  Class A: 10.0.0.0/8 (16M hosts)
  Class B: 172.16.0.0/12 (1M hosts)
  Class C: 192.168.0.0/16 (64k hosts)
In general, all hosts are accessible via the gateway.
Some boxes, in particular the servers, can be accessed from the CERN network as usual (Network Address Translation (NAT) on the gateway machine, transparent to the user).
The gateway also functions as a firewall, so we need to identify services from outside and open the corresponding ports (e.g. AFS, DNS etc.).
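As an illustration (not part of the original setup), a minimal Python sketch of the RFC 1918 membership check, using only the standard ipaddress module; the helper is_rfc1918 is hypothetical:

import ipaddress

# The three RFC 1918 private blocks quoted on the slide.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),      # class A, ~16M hosts
    ipaddress.ip_network("172.16.0.0/12"),   # class B, ~1M hosts
    ipaddress.ip_network("192.168.0.0/16"),  # class C, ~64k hosts
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the reserved private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("10.1.100.5"))   # True: a control address used later in this talk
print(is_rfc1918("137.138.1.1"))  # False: a public CERN-range address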
Why bother?
Future: the readout network will be a private network, as will the controls network etc.
Present: the DAQ test bed in 157 is running out of CERN IP numbers ("our" segment has 127 possible addresses, 101 already used up).
Good opportunity to switch over, and to test the functionality before/during the Trigger Challenge.
[diagram: the CERN/IT network connected via a gateway to the LHCb Point 8 private network (controls, storage, workstations)]
Control interfaces
The setup in 157 uses class A private numbers.
Subnet 10.1.0.0/16 is used for the control interfaces.
The 3rd octet distinguishes between:
- Farm nodes (10.1.N.0/24, e.g. for PC 7 in farm N)
- SFCs (10.1.100.0/24, e.g. 10.1.100.5 for pclbtbsfc05)
- Servers ( /24, e.g. for pclbtbsrv)
An addressing sketch under these conventions follows below.
[diagram: the gateway (LBTBGW) connects the CERN network to the DAQ private network, with /24 subnets for the farm, the sources (e.g. pclbtbsrc), the servers (SRVs) and the SFCs]
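A minimal Python sketch of this control addressing scheme. The SFC subnet 10.1.100.0/24 is inferred from the pclbtbsfc05 example above, and the helper control_ip is hypothetical:

import ipaddress

CONTROL_NET = ipaddress.ip_network("10.1.0.0/16")  # control subnet from this slide

def control_ip(third_octet: int, host: int) -> ipaddress.IPv4Address:
    """Hypothetical helper: the 3rd octet selects the /24 (farm N, SFCs, servers)."""
    addr = ipaddress.ip_address(f"10.1.{third_octet}.{host}")
    assert addr in CONTROL_NET
    return addr

print(control_ip(7, 13))   # a node in farm subnet 10.1.7.0/24
print(control_ip(100, 5))  # 10.1.100.5, i.e. pclbtbsfc05 on the (inferred) SFC /24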
User access
Generally through the gateway (lbtbgw), in two steps:
pclhcb114> ssh lbtbgw
lbtbgw> ssh <node>
The firewall is currently open only for:
- ssh
- IP-time
- DNS
- AFS
AFS can be accessed:
- as usual on directly NATed boxes (servers, SFCs)
- via dynamic NAT from all other boxes (farm nodes)
This means that only the host in question can start a connection, and that only a limited number of hosts can access AFS at the same time; this is meant for e.g. system upgrades.
Other services will be allowed to pass the gateway when identified as needed; a small reachability sketch follows below.
In principle, the RTTC traffic should stay local within our domain.
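A minimal Python sketch for checking which gateway services answer. The port numbers (22 for ssh, 37 for the time protocol, 53 for DNS) are standard assumptions, not taken from the slide; the hostname lbtbgw is from the slide, and AFS is omitted since it runs over UDP:

import socket

# TCP services the firewall is said to pass, with assumed standard ports.
SERVICES = {"ssh": 22, "ip-time": 37, "dns": 53}

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICES.items():
    print(name, "open" if reachable("lbtbgw", port) else "closed/filtered")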
Data interfaces
Subnet 10.2.0.0/16 is used for the data interfaces.
The 3rd octet distinguishes between:
- Data source N ( 10.2.N.0 / 24 )
- SFC M ( M.0 / 24 )
- Farm K nodes ( K.0 / 24 )
Note: no gateway!
A subnet-layout sketch follows below.
[diagram: a data source, SFC 5 and a farm 1 node connected on the data network]
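A minimal Python sketch of the data-network layout, assuming only what the slide states (data subnet 10.2.0.0/16, one /24 per data source); the helper source_subnet is hypothetical:

import ipaddress

DATA_NET = ipaddress.ip_network("10.2.0.0/16")  # data subnet from this slide

def source_subnet(n: int) -> ipaddress.IPv4Network:
    """The /24 carved out for data source N."""
    return ipaddress.ip_network(f"10.2.{n}.0/24")

for n in (1, 2, 3):
    sn = source_subnet(n)
    assert sn.subnet_of(DATA_NET)  # every source /24 nests inside 10.2.0.0/16
    print(sn)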
Status/Outlook
The setup has recently started running on the private network; so far it has been used for switch testing and SFC benchmarking.
We have to gain experience with running behind a firewall:
- Identify the outside services needed
- Install whatever is missing/useful
- Other operational details, e.g. ssh tunnelling, security/OS updates etc.
Hardware installations:
- 1-2 disk servers for RTTC data
- 23 state-of-the-art farm nodes