
1 Report from the WLCG Operations and Tools TEG
Maria Girone / CERN & Jeff Templon / NIKHEF
TEG Workshop, 7th February 2012

2 Operations & Tools TEG
Composition: about 40 people overall:
– Experiments (~15), sites (~20), “WLCG operations”, plus representatives from EMI, EGI-InSPIRE, OSG
– Full list on the Twiki
– Good representation: UK, ES, US, NL, CH, CERN, EGI, EMI
– Not all regions / tiers equally represented
Regular meetings (6), including 2 workshops
– CERN & NIKHEF
Organized from the beginning in 5 working groups

3 Working Groups
As part of the work of the TEG, subgroups were formed on the various topics within the overall charge:
– WG1: Monitoring and Metrics – Simone Campana, Pepe Flix
– WG2: Support Tools, Underlying Services and WLCG Operations – Andrea Sciabà, Maria Dimou, Lionel Cons, Stefan Roiser
– WG3: Application Software Management – Stefan Roiser
– WG4: Operational Requirements on Middleware – Maarten Litmaath, Tiziana Ferrari
– WG5: Middleware Configuration, Deployment and Distribution – Oliver Keeble, Rob Quick
Each WG produced a detailed report included in the overall document

4 Methodology
Detailed assessments of the current status:
– Which things work well;
– Which things are missing;
– Which things are operationally intensive;
– Areas of potential savings in operational effort and/or improvements in site reliability and efficiency
Generally acknowledged that things work, but neither optimally nor sustainably
– Need to revisit given our concrete experience and adapt to future needs
– We must assume that resources will shrink while service demands grow
All recommendations and observations indicate the impact, effort and timeline involved
The January workshop defined global priorities across all WGs based on these criteria – only those of high impact are presented here; all are in the document

5 Where we would like to be
A small number of well-defined common services would be needed per site;
Installing, configuring and upgrading these would be “trivial”;
All services would comply with standards, e.g. for error messages and monitoring;
Services would be resilient to glitches and highly available;
In case of load (or unexpected “user behaviour”) they would react gracefully;
In case of problems, diagnosis and remedy should be straightforward and rapid.
These were not necessarily the agreed goals at design & implementation stage – how close can we approach them retroactively?

6 Guiding Principles
1. Reduce operations effort
2. Reduce complexity
3. Minimize dependencies between sites and services (reduce reliance on actions of others)
4. Reduce effort to upgrade and reconfigure
5. Improve access to information
6. Improve reaction to service/hardware failures
7. Deploy scalable services (able to handle up to 2-3 times the average load)

7 Global Recommendations
#    Title                          Area                 Timeline
R1   WLCG Service Coordination      Operations           From 2012
R2   WLCG Service Commissioning     Operations           From 2012
R3   WLCG Availability Monitoring   Monitoring           2012
R4   WLCG Site Monitoring           Monitoring           2012
R5   WLCG Network Monitoring        Monitoring           LS1
R6   Software deployment            S/W                  2012/LS1
R7   Information System (WM TEG)    Underlying Services  2012/LS1
R8   Middleware Services            M/W                  2012/LS1
R9   Middleware Deployment          M/W                  2012/LS1

8 Recommendations: Monitoring
R3: WLCG Availability Monitoring: streamline availability calculation and visualization
– Converge on one system for availability calculation and for visualization
– Review/add critical tests for the VO availability calculation to better match site usability; expose usability also in regular reports (monthly, MB) – see the sketch below
R4: WLCG Site Monitoring: deploy a common multi-VO tool to be used by sites to locally display site performance
– Sites and experiments should agree on a few common metrics between experiments, relevant from a site perspective
– Extensively covered at the 14th December GDB TEG Status Report
R5: WLCG Network Monitoring: deploy a WLCG-wide, experiment-independent monitoring system for network connectivity
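
To make the availability idea concrete, here is a minimal, hedged sketch of computing a site availability figure from critical test results; the record layout, the time-binning convention and the treatment of empty bins are illustrative assumptions, not the WLCG algorithm.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    """One probe result, e.g. a SAM test run against a site service."""
    test_name: str
    timestamp: int   # Unix time at which the test ran
    passed: bool
    critical: bool   # only critical tests enter the availability

def availability(results, period_start, period_end, bin_size=3600):
    """Fraction of time bins in which every critical test passed.

    Bins with no critical result are excluded (treated as unknown),
    which is one possible convention rather than the WLCG one.
    """
    bins = {}
    for r in results:
        if not r.critical or not (period_start <= r.timestamp < period_end):
            continue
        b = (r.timestamp - period_start) // bin_size
        bins[b] = bins.get(b, True) and r.passed
    if not bins:
        return None   # no critical data in the period: undefined
    return sum(1 for ok in bins.values() if ok) / len(bins)
```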

9 Software Deployment
R6: Software deployment
– Adopt CVMFS for use as the shared software area at all WLCG sites (Tier-1 and Tier-2)
– Deploy a robust and redundant infrastructure for CVMFS
– Complete the deployment and test the implemented resilience (see the sketch below)
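
As a hedged, client-side illustration only: a job wrapper could verify that the CVMFS software area is actually reachable before starting its payload. The repository name follows the usual /cvmfs/<experiment>.cern.ch convention, and the retry parameters are arbitrary assumptions.

```python
import os
import sys
import time

def wait_for_cvmfs(repo="lhcb.cern.ch", retries=3, delay=30):
    """Check that a CVMFS repository is mounted and readable.

    Listing the top-level directory triggers the autofs mount and
    exercises the cache chain; a few retries give transient glitches
    a chance to clear before the job is declared failed.
    """
    path = os.path.join("/cvmfs", repo)
    for attempt in range(1, retries + 1):
        try:
            if os.listdir(path):   # non-empty listing: repository visible
                return True
        except OSError as err:
            print(f"attempt {attempt}: {path} not readable ({err})")
        if attempt < retries:
            time.sleep(delay)
    return False

if __name__ == "__main__":
    if not wait_for_cvmfs():
        sys.exit("CVMFS software area unavailable, aborting job")
```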

10 Information System
R7: Information System (consistent with the recommendations of the GDB from June 2011)
– Short term: improve the Information System via full deployment of the cached BDII and strengthened information validation, for instance via Nagios probes (see the sketch below)
– Long term: split the information into optimized tools focused on providing structural data (static), meta data, and state data (transient)
During this refactoring the information elements in the BDII should be reviewed and unnecessary elements dropped
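
A minimal sketch of what such a validation probe might look like, assuming the ldap3 Python library, the standard BDII LDAP port (2170) and the GLUE 1.3 GlueService object class; the host name and threshold are placeholders, and real probes would check far more than a simple entry count.

```python
#!/usr/bin/env python
"""Nagios-style probe sketch: check that a BDII answers on its LDAP
port and publishes a minimum number of GlueService entries.  Host
name and threshold are placeholders for illustration only."""
import sys
from ldap3 import Server, Connection

BDII_HOST = "bdii.example.org"   # placeholder, not a real endpoint
BDII_PORT = 2170                 # standard BDII LDAP port
MIN_SERVICES = 100               # arbitrary sanity threshold

def main():
    try:
        conn = Connection(Server(BDII_HOST, port=BDII_PORT), auto_bind=True)
        conn.search("o=grid", "(objectClass=GlueService)",
                    attributes=["GlueServiceEndpoint", "GlueServiceType"])
    except Exception as err:
        print(f"CRITICAL - BDII query failed: {err}")
        return 2                 # Nagios CRITICAL
    n = len(conn.entries)
    if n < MIN_SERVICES:
        print(f"WARNING - only {n} GlueService entries published")
        return 1                 # Nagios WARNING
    print(f"OK - {n} GlueService entries published")
    return 0                     # Nagios OK

if __name__ == "__main__":
    sys.exit(main())
```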

11 Recommendations: Operations
R1: WLCG Service Coordination: improve the computing service(s) provided by the sites
– Clarify the scope, frequency and outcome of current meetings;
– Address specific Tier-2 communication needs via dedicated service coordination meetings
– Evolve to “Computing as a Service at Tier-2s” – fewer experiment-specific services and interactions
– Organize common site administrator training with EGI, NDGF and OSG
R2: WLCG Service Commissioning: establish a core team of experts (from sites and experiments) to validate, commission and troubleshoot services

12 Middleware Services
R8: Review site (middleware) services
– Refactor existing middleware configurations to establish consistent procedures and remove unnecessary complexity
– Assess services on scalability, load balancing and high availability aspects
– Assess clients on retry and fail-over behaviors (see the sketch below)
– Form a team of experts to prioritize open bugs and RFEs
– Improve documentation based on input from service administrators and users
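
A minimal sketch of the kind of retry and fail-over behaviour such an assessment would look for in clients; the endpoint shuffling, delays and error handling are illustrative choices, not the behaviour of any particular middleware client.

```python
import random
import time

def call_with_failover(endpoints, request, retries_per_endpoint=2,
                       base_delay=1.0):
    """Try each replica of a load-balanced service in turn, with
    exponential back-off and jitter between attempts on the same one.

    `request` is any callable taking an endpoint URL.
    """
    endpoints = list(endpoints)
    random.shuffle(endpoints)            # spread load across replicas
    last_error = None
    for endpoint in endpoints:
        for attempt in range(retries_per_endpoint):
            try:
                return request(endpoint)
            except Exception as err:     # transient failure: back off, retry
                last_error = err
                time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    raise RuntimeError(f"all endpoints failed, last error: {last_error}")
```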

13 Middleware Deployment
R9: Middleware Distribution, Configuration, and Deployment
– Middleware configuration should be improved and should not be bound to a particular configuration management tool
– Endorse middleware distribution via the EPEL repository for additions to the RHEL/SL operating system family; an opportunity to optimize the release process
– Encourage sites and experiments to actively participate in the commissioning and validation of middleware components and services
– Maintain compatible middleware clients in the Application Area repository; establish a compatible UI/WN release in rpm and tar format (see the sketch below)
– Possibility to produce targeted updates which fix individual problems on request
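
Purely as a hedged illustration of tracking which middleware clients a node actually carries, the following sketch queries the rpm database for a hypothetical list of client packages; the package names are assumptions, not the real UI/WN meta-package names.

```python
import subprocess

# Hypothetical client list: the real UI/WN meta-packages distributed
# via EPEL would have their own names and dependencies.
CLIENT_PACKAGES = ["gfal2", "voms-clients", "wlcg-ui-metapackage"]

def installed_versions(packages):
    """Return {package: rpm query output or None} using plain `rpm -q`."""
    report = {}
    for pkg in packages:
        proc = subprocess.run(["rpm", "-q", pkg],
                              capture_output=True, text=True)
        report[pkg] = proc.stdout.strip() if proc.returncode == 0 else None
    return report

if __name__ == "__main__":
    for pkg, ver in installed_versions(CLIENT_PACKAGES).items():
        print(f"{pkg}: {ver or 'NOT INSTALLED'}")
```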

14 Order by Time
Short term: specific, time-bounded and well-defined targets
– Availability, Site & Network Monitoring
– Software Deployment
Medium term: require a WG and need goals and metrics
– Information System, Middleware Services and Deployment
Long term: requires coordination and communication
– Service Coordination and Commissioning

15 Ordering by Principle
Reduce operations effort
– Service Coordination and Commissioning, Site and Network Monitoring, Software Deployment, Middleware Services
Reduce complexity
– Software Deployment, Middleware Services and Deployment
Minimize inter-dependencies (sites, experiments, services)
– Software Deployment, Information System
Reduce effort to upgrade and reconfigure
– Middleware Deployment
Improve access to information
– Information System; Availability, Site and Network Monitoring
Improve reaction to service/hardware failures
– Site Monitoring
Deploy scalable services (2-3 times the average load)
– Middleware Services

16 Areas not covered
Model for middleware and infrastructure support after EGI-InSPIRE, EMI, … have ended
– WLCG has benefitted for more than a decade from a series of externally supported grid projects
– In the absence of a new project, the transition to an internally sustained infrastructure will require effort and strategic planning
– Subject of the User and General EGI Sustainability Workshop
Network operations

17 Conclusions and Outlook
The Ops & Tools TEG has documented its strategy and recommendations for the suggested topics and scope
The TEG report document is already in good shape and will soon be available
– Draft version at https://twiki.cern.ch/twiki/bin/view/LCG/WLCGTEGOperations
It has been hard to get active participation from the sites
– Rapid feedback on the document is welcome
It looks possible to conclude according to schedule

18 Thank you
Working Group coordinators:
– Simone Campana, Pepe Flix, Andrea Sciabà, Maria Dimou, Lionel Cons, Stefan Roiser, Maarten Litmaath, Tiziana Ferrari (EGI), Oliver Keeble, Rob Quick (OSG)
The LHC experiments:
– Marco Cattaneo, Joel Closier, Ian Fisk, Costin Grigoras, Stephane Jezequel, I. Ueda
Sites:
– Ian Collier, Xavier Espinal, Alessandra Forti, John Gordon, Pablo Hernandez, Gonzalo Merino, Anthony Tiradani
– CERN-IT: Julia Andreeva, Marian Babik, David Collados, Laurence Field, Alessandro Di Girolamo, Elisa Lanciotti, Wojciech Lapka, Alexandre Lossent, Pablo Saiz, Steve Traylen
EMI: Cristina Aiftimiei
OSG: Miron Livny

19 Backup Slides
Ian.Bird@cern.ch

20 WG1: Recommendations

21 WG2 Recommendations – Support Tools

22 WG2 Recommendations – Underlying Services

23 WG2 Recommendations – WLCG Operations

24 WG3 Recommendations

25 WG4 Recommendations

26 WG5 Recommendations – Middleware Configuration

27 WG5 Recommendations – Middleware Deployment

28 WG5 Recommendations – Middleware Distribution

29 Availability Monitoring Proposal (GDB, October 2011)
Experiments extend their SAM tests to test more site-specific functionality
– Any new test contributing to the availability is properly agreed upon with sites and documented
The SAM framework is extended to properly support external metrics, such as those from Panda, DIRAC, …
The resulting availability will have these properties:
– It takes into account more relevant site functionality
– It is as independent as possible from experiment-side issues
– It is well understood by the sites
Additional experiment-side metrics are nevertheless used by VO computing operations and published e.g. via the SSB (see the sketch below)
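
A hedged sketch of the separation implied above: only critical SAM results feed the availability figure, while non-critical metrics injected from external systems (e.g. Panda or DIRAC) are kept for publication only. The record layout and names are illustrative assumptions.

```python
def split_metrics(results):
    """Separate metric records into those that define availability
    (critical SAM tests) and those published only for VO computing
    operations (non-critical or injected from external systems).
    """
    availability_inputs, dashboard_only = [], []
    for r in results:
        if r.get("source") == "sam" and r.get("critical"):
            availability_inputs.append(r)
        else:
            dashboard_only.append(r)
    return availability_inputs, dashboard_only

# Example: only the first record would enter the availability figure.
records = [
    {"name": "ce-job-submit", "source": "sam", "critical": True, "ok": True},
    {"name": "analysis-efficiency", "source": "panda", "critical": False, "ok": False},
]
critical_inputs, extra_metrics = split_metrics(records)
```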

30 Schematic view: Proposal
[Schematic, per VO (Ops, ALICE, ATLAS, CMS, LHCb): SAM tests comprise standard tests and experiment custom tests, with critical standard tests and critical custom tests submitted by Nagios; non-critical tests are external and injected into SAM; external metrics are shown separately.]

31 Site Monitoring Proposal (GDB, October 2011)
We are missing the equivalent of today’s SSB experiment views, tailored for sites
Proposal: use the SSB framework to provide this functionality as well
– Advantages: many metrics are already in the SSB for ATLAS and CMS; no duplication of effort and no consistency issues
– Need to agree on a few common metrics between experiments, relevant from a site perspective (see the sketch below)
– Some development needed in the SSB to facilitate the visualization
– Some commitment needed from experiments and sites:
  Experiment (support): define and inject metrics, validation
  Sites: validation, feedback
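
A hedged sketch of what a common, experiment-agnostic site metric record could look like; the field names and the example values are assumptions, and the real schema would be whatever sites and experiments agree on within the SSB.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SiteMetric:
    """One entry of a common, experiment-agnostic site view."""
    site: str
    vo: str
    metric: str        # e.g. "running_jobs", "transfer_quality"
    value: float
    colour: str        # traffic-light state used for the visualization
    timestamp: datetime

example = SiteMetric(
    site="T2_XX_Example",   # hypothetical site name
    vo="cms",
    metric="transfer_quality",
    value=0.97,
    colour="green",
    timestamp=datetime.now(timezone.utc),
)
```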

