Factors affecting ANALY_MWT2 performance
MWT2 team
August 28, 2012
Factors to check
- Storage servers
- Internal UC network
- Internal IU network
- WAN network
- Effect of dCache-locality caching versus WAN direct access
- IU analysis nodes specifically
Individual storage servers
- We have previously measured the performance of each storage node individually with various "blessing tests"
  – Nodes are uct2-s[14] and iut2-s[6]
  – Note that xxt2-s[3] are first generation; xxt2-s[4-14] are SAS2 with H800 controllers
- Each storage node is over-provisioned for CPU and memory (96 GB), even while running dCache services and the Xrootd overlay
- Each node has a single 10G NIC, a potential bottleneck
  – Some of the s-nodes have an additional 10G port that could be cabled and bonded
- Currently only UC and IU have storage, and since all accesses are local, no analysis jobs run at UIUC presently
  – UIUC will add 300 TB this fall
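To illustrate why a single 10G NIC can become the bottleneck, the sketch below estimates per-job read bandwidth when many analysis jobs hit one storage node at once. The job counts and the 90% efficiency factor are illustrative assumptions, not measurements from this deck.

```python
# Back-of-envelope estimate of per-job read bandwidth on a storage node
# with a single 10 Gb/s NIC. Job counts and the efficiency factor are
# illustrative assumptions, not figures from the deck.

NIC_GBPS = 10.0    # one 10G NIC per storage node
EFFICIENCY = 0.9   # assume ~90% of line rate is usable payload

def per_job_mbytes_per_s(concurrent_jobs: int) -> float:
    """Usable NIC bandwidth split evenly across concurrent readers, in MB/s."""
    usable_mbytes = NIC_GBPS * EFFICIENCY * 1000.0 / 8.0  # Gb/s -> MB/s
    return usable_mbytes / concurrent_jobs

if __name__ == "__main__":
    for jobs in (50, 100, 200):
        print(f"{jobs:4d} jobs -> {per_job_mbytes_per_s(jobs):6.1f} MB/s per job")
```

Bonding a second 10G port, as suggested above, would roughly double these per-job figures for the nodes that have the spare port.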
Typical storage node
- (diagram of a typical storage node)
Storage network utilization at UC
- A one-hour sample. IO is nicely spread over the servers, with no obvious bottlenecks or hot spots.
Storage network utilization at UC (week)
- Over the past week, more or less the same: a good spread, with sustained MB/s-level throughput per system.
UC network
- This link is now 2x10G.
UC network – bottlenecks (1)
- PC8024F and PC6248 stack
  – Cacti (guest/cacti): w&local_graph_id=3581&rra_id=all
- Last week: there are moments of saturation (3)
UC network (2)
- PC6248 to Cisco 6509 is 2x10G bonded
  – w&local_graph_id=3757&rra_id=all
- Last week looks fine, mostly < a single 10G
WAN network from UC to IU, UIUC, and BNL
WAN network
- Last week's IO to UC: green is usually FTS transfers from BNL; blue is IO mostly to IU, with some to UIUC. This plot is for one of the 10G NICs from the 6509 to the campus core (there is a second NIC to the campus core, and a bonded plot, neither of which I can find in our Cacti hierarchy at the moment).
Local IU network (MWT2)
IU local network
- Storage nodes are each connected via a 10 Gbps link to the 6248 switch stack
- Compute nodes are connected to the same stack via 1 Gbps connections
- The 6248 stack has a dual-10 Gbps uplink to rtsw2, the 100 Gbps Brocade switch
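The topology above implies a fixed uplink budget: compute nodes attach at 1 Gbps each, while the whole stack shares a 2x10 Gbps uplink to rtsw2. The sketch below computes the resulting oversubscription ratio; the compute-node counts are illustrative assumptions, not figures from this deck.

```python
# Uplink oversubscription at the IU 6248 stack: compute nodes attach at
# 1 Gb/s each, and the stack uplinks to rtsw2 at 2 x 10 Gb/s (per the
# slide above). The node counts tried below are illustrative assumptions.

UPLINK_GBPS = 2 * 10.0   # dual-10G uplink to rtsw2

def oversubscription(compute_nodes: int, nic_gbps: float = 1.0) -> float:
    """Ratio of worst-case compute-side demand to uplink capacity."""
    return (compute_nodes * nic_gbps) / UPLINK_GBPS

if __name__ == "__main__":
    for n in (20, 40, 80):
        print(f"{n:3d} nodes -> {oversubscription(n):.1f}:1 oversubscription")
```

A ratio above 1:1 only matters when many jobs read WAN or cross-rack data simultaneously; with dCache-locality caching most reads stay behind the uplink.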
IU-centric WAN picture
- This picture does not show connectivity to the other MWT2 sites; to UC, the connection currently goes through MREN.
IU WAN network
- WAN traffic last month (peaks up to 8 Gbps)
- There are 100 Gbps links to Chicago available, though not all the way to UIUC and UC
WAN direct access versus dCache-locality mode caching
- We believe that since turning on dCache-locality caching we have reduced the load on the WAN
- dCache-locality was turned on circa 8/6/2012 (~ week 31)
dCache locality mode
- Cache hit rate is about 75%
- Individual files are used an average of 4 times
- The average transfer reads 25% of the file
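The first two bullets are mutually consistent: if a file is read 4 times on average and only the first read misses, then 3 of every 4 reads are cache hits. A quick sanity check, under the simplifying assumption that only a file's first read misses:

```python
# Sanity check relating the ~75% cache hit rate to the average of 4 uses
# per file, assuming a simple model (our assumption, not stated in the
# deck) in which only a file's first read misses and all later reads hit.

def hit_rate(uses_per_file: float) -> float:
    """Fraction of reads served from cache if only the first read misses."""
    return 1.0 - 1.0 / uses_per_file

if __name__ == "__main__":
    print(f"4 uses per file -> {hit_rate(4):.0%} expected hit rate")
```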
dCache site caches
- (diagram: IU pools, UC pools, cached data)
Low efficiency at IU
- Slow jobs are not associated with a particular data server
- According to strace, most system time is spent in the munmap system call
- Jobs are slow even on a completely empty node
- Data is cached at IU
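The munmap finding comes from strace's per-syscall time summary (e.g. `strace -c -p <pid>` against a slow job). The sketch below shows one way to pull the time breakdown out of that summary programmatically; the sample output and the percentages in it are illustrative, not taken from our jobs.

```python
# Sketch of extracting the per-syscall time breakdown from an `strace -c`
# summary. The SAMPLE text below is illustrative, not real job output.

SAMPLE = """\
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 62.10    4.812345          40    120000           munmap
 21.40    1.658000          13    127500           read
 16.50    1.278900          10    127800           mmap
"""

def top_syscalls(summary: str):
    """Return (syscall, %time) pairs sorted by time share, descending."""
    rows = []
    for line in summary.splitlines():
        fields = line.split()
        # Data rows start with a numeric "% time" value; skip the header
        # and the dashed separator line.
        if fields and fields[0].replace(".", "", 1).isdigit():
            rows.append((fields[-1], float(fields[0])))
    return sorted(rows, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for name, share in top_syscalls(SAMPLE):
        print(f"{name:8s} {share:5.1f}% of system time")
```

Running this against summaries from several slow jobs would confirm whether munmap dominance is consistent, independent of which data server the job read from.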
Summary and improvements
- Add a second 10G link to the 8024F-6248 connection at UC and bond it
- Cable up the second 10G port on uct2-s[11-14]
  – Will need to add another 8024F, which will also require further trunking rearrangement
- Adding additional storage nodes will increase the number of IO channels, decreasing single-node contention