1 Oxford Update – HEPiX. Pete Gronbech, GridPP Project Manager. October 2014.

2 Oxford Particle Physics - Overview
– Oxford University has one of the largest Physics Departments in the UK: ~450 staff and 1100 students. The Particle Physics sub-department is the largest in the UK.
– We now support Windows, Linux (Ubuntu) and Mac on the desktop across the department.
– Ubuntu: Cobbler is used to install the base system; Cfengine is used for package control and configuration.
– Two computational clusters for the PP physicists: the Grid Cluster, part of the SouthGrid Tier-2, and the Local Cluster (AKA Tier-3).
– A common Cobbler and Puppet system is used to install and maintain all the SL6 systems (a short sketch of querying Cobbler follows below).
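As an illustration of how a Cobbler-driven installation setup like this is typically inspected, here is a minimal Python sketch using Cobbler's XML-RPC remote API. The server URL is a placeholder rather than Oxford's real host, and authentication requirements vary between Cobbler versions.

    # Minimal sketch: list the systems registered in Cobbler and the profile
    # each one installs from, via the XML-RPC API. The URL is a placeholder;
    # read-only calls usually need no auth token, but this varies by version.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://cobbler.example.ac.uk/cobbler_api")

    for system in server.get_systems():
        # Each record is a dict of the fields Cobbler holds for the host,
        # e.g. which profile (and hence which kickstart) it is built from.
        print(system["name"], "->", system["profile"])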

3 UK GridPP Computing Structure
– One Tier-1 center: RAL.
– 18 university Particle Physics departments. Each is part of a regional Grid Tier-2 center, and most have some local computing clusters (AKA Tier-3).
– Oxford is part of SouthGrid, which comprises all the non-London sites in the South of the UK: Birmingham, Bristol, Cambridge, JET, Oxford, RAL PPD, Sussex.

4 UKI Tier-1 & Tier-2 contributions
The UK is the second largest contributor to the WLCG (~11%, cf. 28% for the USA), based on accounting for the last year. The Tier-1 accounts for ~31% of the UK total; the Tier-2s share the remainder as shown below.
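A back-of-the-envelope reading of those percentages (illustrative only; the real figures come from the APEL accounting portal and are not reproduced here):

    # Rough arithmetic behind the slide's percentages (illustrative only).
    uk_share_of_wlcg = 0.11        # UK ~11% of WLCG CPU over the last year
    tier1_share_of_uk = 0.31       # RAL Tier-1 ~31% of the UK total
    tier2_share_of_uk = 1 - tier1_share_of_uk

    print(f"UK Tier-1 as a fraction of WLCG:  {uk_share_of_wlcg * tier1_share_of_uk:.1%}")   # ~3.4%
    print(f"UK Tier-2s as a fraction of WLCG: {uk_share_of_wlcg * tier2_share_of_uk:.1%}")   # ~7.6%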

5 UK Grid Job Mix

6 UK Tier-2 reported CPU – Historical View to the present, compared with the last update in 2012.

7 SouthGrid Sites Accounting as reported by APEL

8 Oxford Grid Cluster
– Upgrades over the last year have increased storage capacity, with some modest CPU upgrades.
– 15 Dell R720xd servers (12 × 4 TB raw capacity each). Note we used SATA rather than SAS this time; we are unlikely to do this in the future (SAS is becoming the default and support for SATA costs extra).
– The SE is running DPM.
– Three ‘twin-squared’ Viglen Supermicro HX525T2i worker node chassis have been installed. The Intel E5-2650 v2 has 8 cores (16 hyper-threaded cores each), providing 384 job slots with 2 GB RAM per slot (see the sketch below).
– Two thirds of the Grid Cluster is now running HTCondor behind an ARC CE; the remaining third runs legacy Torque/Maui driven by a CREAM CE.
– Current capacity: 16,768 HS06, 1300 TB.
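A quick sketch of where the 384 job slots come from, assuming each ‘twin-squared’ chassis holds four dual-socket nodes (the usual Supermicro twin² layout; the slide itself only states the totals):

    # Job-slot arithmetic for the three twin-squared chassis (assumption:
    # each chassis holds 4 dual-socket nodes, the usual twin^2 layout).
    chassis = 3
    nodes_per_chassis = 4
    sockets_per_node = 2
    cores_per_socket = 8           # Intel E5-2650 v2
    threads_per_core = 2           # Hyper-Threading enabled

    slots = (chassis * nodes_per_chassis * sockets_per_node
             * cores_per_socket * threads_per_core)
    print(slots)                                          # 384, matching the slide
    print(slots * 2, "GB RAM implied at 2 GB per slot")   # 768 GB across the three chassis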

9 Intel E5-2650 v2 on SL6: HEPSPEC06 average result 361.4, a 29% improvement over the E5-2650 v1 on SL5.
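Assuming the 361.4 figure is per dual-socket node running 32 hyper-threaded job slots (the configuration described on the previous slide), the per-slot figure and the implied v1/SL5 result work out roughly as follows (a sketch; neither number is stated on the slide):

    # Per-slot HEPSPEC06 and the implied E5-2650 v1 figure (illustrative).
    hs06_node_v2 = 361.4           # E5-2650 v2 node on SL6 (from the slide)
    slots_per_node = 32            # 2 sockets x 8 cores x 2 HT (assumption)
    improvement = 0.29             # 29% over v1 on SL5 (from the slide)

    print(f"HS06 per job slot:          {hs06_node_v2 / slots_per_node:.1f}")      # ~11.3
    print(f"Implied v1/SL5 node result: {hs06_node_v2 / (1 + improvement):.0f}")   # ~280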

10 Power Usage – twin-squared chassis: max 1165 W, idle 310 W.
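Combined with the figures on the previous two slides, this gives a rough power efficiency per chassis (assuming four nodes, i.e. 128 job slots, per twin-squared chassis; a sketch rather than a measured number):

    # Rough watts-per-slot and HS06-per-watt for one twin-squared chassis.
    max_power_w = 1165.0           # chassis at full load (from the slide)
    nodes_per_chassis = 4          # twin^2 layout (assumption)
    slots_per_node = 32
    hs06_per_node = 361.4

    slots = nodes_per_chassis * slots_per_node
    print(f"Watts per job slot at full load: {max_power_w / slots:.1f}")                               # ~9.1
    print(f"HS06 per watt: {nodes_per_chassis * hs06_per_node / max_power_w:.2f}")                     # ~1.24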

11 Oxford’s Grid Cluster

12 Begbroke Computer Room

13 Local computer room showing the PP cluster & cold aisle containment. Very similar hardware to the Grid Cluster, with the same Cobbler and Puppet management setup. Lustre is used for the larger groups. Capacity: 7192 HS06, 716 TB.

14 Networking: ~900 MB/s = 7.2 Gbps. The University had a 10 Gbit link to JANET with a 10 Gbit failover link. A third link was added in August 2013 and the Grid traffic is now routed exclusively down it. Plots from March 2014 (ATLAS transfers from BNL).
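The rate conversion on the slide assumes decimal megabytes, as network rates usually do; a one-line check (sketch):

    # ~900 MB/s expressed in Gbit/s (decimal units).
    mb_per_s = 900
    print(mb_per_s * 8 / 1000)     # 7.2 Gbps, as on the slide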

15 Other Oxford Work
CMS Tier-3
– Supported by RALPPD’s PhEDEx server.
– Useful for CMS, and for us, keeping the site busy in quiet times.
– However, it can block ATLAS jobs, and during the accounting period that is not so desirable.
ALICE Support
– There is a need to supplement the support given to ALICE by Birmingham.
– It made sense to keep this in SouthGrid, so Oxford has deployed an ALICE VO box.
UK Regional Monitoring
– Oxford runs the Nagios-based WLCG monitoring for the UK.
– This includes the Nagios server itself and its support nodes: SE, MyProxy and WMS/LB.
– Multi-VO Nagios monitoring was added two years ago.
IPv6 Testing
– We take a leading part in the IPv6 testing; many services have been enabled and tested by the community.
– perfSONAR is IPv6-enabled. The RIPE Atlas probe is also on IPv6 (see the sketch below).
Cloud Development
– OpenStack test setup (has run ATLAS jobs).
– VAC setup (LHCb, ATLAS & GridPP DIRAC server jobs).
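As an illustration of the sort of reachability check involved in IPv6-enabling a service, a minimal Python sketch follows. The hostname is a placeholder rather than a real Oxford service, and this is only one simple test; the production testing uses the perfSONAR and Nagios infrastructure mentioned above.

    # Minimal sketch of an IPv6 reachability check for a dual-stack service.
    # The hostname below is a placeholder, not a real Oxford host.
    import socket

    def reachable_over_ipv6(host, port=443, timeout=5):
        """Return True if host has an AAAA record and accepts a TCP connection over IPv6."""
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False           # no AAAA record (or name does not resolve)
        for family, socktype, proto, _, sockaddr in infos:
            try:
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)
                    return True
            except OSError:
                continue
        return False

    print(reachable_over_ipv6("perfsonar.example.ac.uk"))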

16 Conclusions
– Recent hardware purchases have provided both storage capacity and CPU performance improvements.
– Good network connectivity.
– Solid computer rooms.
– A medium-sized Grid site, but with involvement in many development projects.

17 HEPiX Spring 2015 is coming to Oxford

18 Other Oxford Attractions!

19 Including Oxford Physics

