1
GridPP Deployment Status
GridPP15
Jeremy Coles (J.Coles@rl.ac.uk)
11th January 2006
2
Overview
1 An update on some of the high-level metrics
2 Some new sources of information
3 General deployment news
4 Expectations for the coming months
5 Preparing for SC4 and our status
6 Summary
3
Prototype metric report
UKI is still contributing well, but according to the SFT data our proportion of failing sites is relatively high.
4
Snapshot of recent numbers
Region | # sites | Average CPU
Asia Pacific | 8 | 450
CERN | 19 | 4250
Central Europe | 15 | 30
France | 8 | 1100
Germany & Switzerland | 11 | 3365
Italy | 25 | 2250
Northern Europe | 10 | 1160
Russia | 14 | 430
South East Europe | 20 | 370
South West Europe | 14 | 700
UKI | 36 | 3250
Most of the unavailable sites have been in Ireland as they make the move over to LCG 2.6.0.
5
Average job slots have increased gradually
UK job slots have increased by about 10% since GridPP14. (See Steve Lloyd's talk for how this looks against the targets.)
6
Therefore our contribution to EGEE CPU resources remains at ~20%
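As a rough cross-check (a sketch only, assuming the region/Average CPU figures in the snapshot table a few slides back), the UKI share can be recomputed directly:

```python
# Rough cross-check of the "~20% of EGEE CPU" claim using the per-region
# "Average CPU" figures from the snapshot slide. The CERN and Central Europe
# values are an assumption about how the original table columns split.
average_cpu = {
    "Asia Pacific": 450, "CERN": 4250, "Central Europe": 30,
    "France": 1100, "Germany & Switzerland": 3365, "Italy": 2250,
    "Northern Europe": 1160, "Russia": 430, "South East Europe": 370,
    "South West Europe": 700, "UKI": 3250,
}
total = sum(average_cpu.values())
print(f"EGEE total ~{total} CPUs, UKI share ~{average_cpu['UKI'] / total:.0%}")
```

On those figures the UKI share comes out at roughly 19%, consistent with the ~20% quoted.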
7
However, there is still not consistently high usage of job slots
8
The largest GridPP users by VO for 2005: LHCb, ATLAS, BABAR, CMS, BIOMED, DZERO, ZEUS.
NB: Excludes data from Cambridge – for Condor support in APEL see Dave Kant's talk.
9
Storage has seen a healthy increase – but usage remains low
At the GridPP Project Management and Deployment Boards yesterday we discussed ways to encourage the experiments to make more use of Tier-2 disk space – the Tier-1 will be unable to meet allocation requests. One of the underlying concerns is what data flags mean to Tier-2 sites.
10
Scheduled downtime
Views of the data will be available from the CIC portal from today! http://cic.in2p3.fr/
11
Scheduled downtime
Congratulations to Lancaster for being the only site to have no scheduled downtime.
12
SFT review
It was probably clear already that the majority of our failures (and those of other large ROCs) are in lcg-rm (failure points: the replica catalogue, the configured BDII, CERN storage for replication, or a local SE problem) and rgma (generally a badly configured site). We know the tests themselves also need to improve and become more reliable and accurate!
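For site admins chasing lcg-rm failures, the sketch below shows the kind of manual copy/replicate/delete round trip that helps localise the problem (catalogue, BDII, remote storage or the local SE). It assumes the lcg-utils tools and a valid proxy are available, and uses the dteam VO with a placeholder SE name; it is an illustration, not the SFT test itself.

```python
# Manual lcg-rm-style round trip: copy-and-register, copy back, then clean up.
# Assumes lcg-utils on the PATH, a valid grid proxy and membership of a test
# VO ("dteam" here); the SE hostname and LFN are placeholders.
import subprocess

VO = "dteam"
LOCAL_SE = "se.example.ac.uk"                  # placeholder: your site's SE
LFN = "lfn:/grid/dteam/manual-lcg-rm-check"    # placeholder logical file name

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

open("/tmp/testfile", "w").write("manual lcg-rm check\n")

# 1. Copy a small file to the local SE and register it in the catalogue.
run(["lcg-cr", "--vo", VO, "-d", LOCAL_SE, "-l", LFN, "file:///tmp/testfile"])
# 2. Copy it back out again (exercises the catalogue lookup and an SE read).
run(["lcg-cp", "--vo", VO, LFN, "file:///tmp/testfile.back"])
# 3. Delete all replicas and the catalogue entry.
run(["lcg-del", "--vo", VO, "-a", LFN])
```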
13
Overall EGEE statistics
The same problems affect the majority of EGEE resources. Hours of impact will be available soon and will help us evaluate the true significance of these results.
14
Completing the weekly reports
ALL site administrators have now been asked to complete information related to problems observed at their sites, as recorded in the weekly operations report. This will affect both our message at the weekly EGEE operations reviews (http://agenda.cern.ch/displayLevel.php?fid=258) and YOUR Tier-2 performance figures!
15
Performance measures
The GridPP Oversight Committee has asked us to investigate why some sites perform better than others. As well as looking at the SFT and ticket-response data, the Imperial College group will help pull data from their LCG2 Real Time Monitor Daily Summary Reports: http://gridportal.hep.ph.ic.ac.uk/rtm/reports.html
16
EGEE metrics
While we have defined GridPP metrics, many are not automatically produced. EGEE now has metrics as a priority, and at EGEE3 a number of metrics were agreed for the project and assigned (including ROC measures):
SUPPORT: user ticket response time; number of "supporters"; # tickets escalated; % tickets wrongly assigned
SERVICE NODES (testing): RB – submit-to-CE time; BDII – average query time; MyProxy – register/access/delete; SRM-SE – test file movement; catalogue test; VOMS; R-GMA
SIZE: # sites in production; # job slots; total available kSpecInt; storage (disc); mass storage; # EGAP-approved VOs; # active VOs; # active users; total % of resources used
DEPLOYMENT: speed of m/w security update
OPERATIONS: site responsiveness to COD; site response to tickets; site tests failed; % availability of SE, CE; # days downtime per ROC
USAGE: jobs per VO (submitted, completed, failed); data transfer per VO; CPU and storage usage per VO; % sites blacklisted/whitelisted; # CE/SE available to VO
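Automating these will matter. As one small illustration, the sketch below times a BDII query (the "BDII – average query time" service-node measure). It assumes OpenLDAP's ldapsearch client, the conventional BDII port 2170 and base DN o=grid, and a placeholder BDII hostname; it is a rough probe rather than the agreed EGEE measurement.

```python
# Rough probe of BDII query latency, one of the proposed service-node metrics.
# Assumes OpenLDAP's ldapsearch client is installed; the hostname is a
# placeholder, and port 2170 / base "o=grid" are the conventional BDII settings.
import subprocess
import time

BDII = "lcg-bdii.example.org"   # placeholder top-level BDII
N = 5                            # number of samples to average

timings = []
for _ in range(N):
    start = time.time()
    subprocess.run(
        ["ldapsearch", "-x", "-LLL",
         "-H", f"ldap://{BDII}:2170", "-b", "o=grid",
         "(objectClass=GlueCE)", "GlueCEUniqueID"],
        stdout=subprocess.DEVNULL, check=True)
    timings.append(time.time() - start)

print(f"average BDII query time over {N} runs: {sum(timings)/len(timings):.2f}s")
```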
17
The status of 2.7.0
Mon 9th Jan – tag and begin local testing of installations and upgrades on mini testbeds; complete documentation
Mon 16th Jan – pre-release to >3 ROCs for a week of further testing
Mon 23rd Jan – incorporate results of ROC testing and release as soon as possible
Release 2.7.0 (at the end of January!?) is expected to contain:
Bug fixes – RB, BDII, CE, SE-classic, R-GMA, GFAL, LFC, SE_DPM
VOMS – new server and client version
VO-BOX – various functionality changes
LFC/DPM updates
lcg_utils/GFAL – new version & bug fixes
RB – new functionality for job status checking
Security issues – pool account recycling, signed rpm distribution
FTS clients & dCache 1.6.6.2
Some "VO management via YAIM" additions
Details of the release work: https://uimon.cern.ch/twiki/bin/view/LCG/LCG-2_7_0
18
Outcomes of the security challenge
Comments (thanks to Alessandra Forti):
Test suites should be asynchronous
The security contacts mailing list is not up to date
4 sites' CSIRTs did not pass on information – site security contacts should be the administrators and not the site CSIRTs
1 site did not understand what to do
The majority of sites acknowledged tickets within a few hours once the site administrator received the ticket
On average sites responded with CE data in less than 2 days (some admins were unsure about contacting the RB staff)
2 sites do not use the lcgpbs jobmanager and were unable to find the information in the log files (also 1 using Condor)
Some sites received more than one SSC job in the 3-hour timeframe and were unable to return an exact answer, but gave several
Mistake in the date – admins spotted the inconsistencies
The ROC struggled with ticket management, which caused delays in processing tickets!
Aside: the proposed EGEE Security Incident handbook is being reviewed by the deployment team: http://wiki.gridpp.ac.uk/wiki/Incident_Response_Handbook
20
Other areas of interest!
The Footprints version (the UKI ROC ticketing system) will be upgraded on 23rd January. This will improve our interoperation with GGUS and other ROCs (using XML emails). There should be little observable impact, but we do ask you to PLEASE SOLVE & CLOSE as many currently open tickets as possible by 23rd January.
Culham (which hosted the last operations workshop) will be adding a new UKI site in the near future. They will join or host the Fusion VO.
Most sites have now completed the "10 Easy Network Questions" responses: http://wiki.gridpp.ac.uk/wiki/GridPP_Answers_to_10_Easy_Network_Questions This has proved a useful exercise. What do you think?
The deployment team has identified a number of operational areas to improve. These include experiment software installation, VO support, and the availability of certain information on processes (such as where to start for new sites).
Pre-production service: UKI now has 3 sites with gLite (components) either deployed or in the process of being deployed.
REMINDER & REQUEST – please enable more VOs! The GridPP PMB requests that 0.5% of resources (1% in EGEE-2) be used to support wider VOs, such as BioMed. This will also raise our utilisation. Feedback is going to the developers on making it easier to add VOs.
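For sites wondering what "enabling a VO" involves in practice, the fragment below is a hedged sketch of the usual YAIM route: add VO_<NAME>_* variables to site-info.def, add the VO to the VOS/QUEUES settings, and re-run the YAIM configuration. The file path and all the biomed values shown are placeholders to be replaced from the VO's own documentation.

```python
# Hedged sketch only: append VO_<NAME>_* settings for an extra VO to a YAIM
# site-info.def. The variable names follow the usual YAIM convention, but the
# path and every value here are placeholders, not verified biomed settings.
SITE_INFO = "/opt/lcg/yaim/etc/site-info.def"   # typical location; check locally

biomed_block = """
# --- biomed VO (placeholder values) ---
VO_BIOMED_SW_DIR=$VO_SW_DIR/biomed
VO_BIOMED_DEFAULT_SE=$CLASSIC_HOST
VO_BIOMED_STORAGE_DIR=$CLASSIC_STORAGE_DIR/biomed
VO_BIOMED_VOMS_SERVERS="vo.example.org"        # placeholder VOMS endpoint
"""

with open(SITE_INFO, "a") as cfg:
    cfg.write(biomed_block)

print("Now add 'biomed' to VOS (and the batch QUEUES mapping) and re-run the")
print("YAIM configuration for the affected node types.")
```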
21
Our focus is now on Service Challenge 4
GridPP links, progress and status are being logged in the GridPP wiki: http://wiki.gridpp.ac.uk/wiki/Service_Challenges
A number of milestones (previously discussed at the 15th November UKI Monthly Operations Meeting) have been set. Red text means a milestone is at risk (generally due to external dependencies); green text signifies done.
SRM
80% of sites have a working SRM (file transfers with 2 other sites successful) by end of December
All sites have a working SRM by end of January
40% of sites (using FTS) able to transfer files using an SRM 2.1 API by end of February
All sites (using FTS) able to transfer files using an SRM 2.1 API by end of March
Interoperability tests between SRM versions at the Tier-1 and Tier-2s (TBC)
FTS channels
FTS channel to be created for all T1-T2 connections by end of January
FTS client configured for 40% of sites by end of January
FTS channels created for one intra-Tier-2 test for each Tier-2 by end of January
FTS client configured for all sites by end of March
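As a pointer for the FTS client milestones, a configured client can be exercised with a single test file. The sketch below assumes the gLite FTS command-line tools of that era (glite-transfer-submit / glite-transfer-status), a valid proxy, and placeholder FTS endpoint and SRM URLs; option names and job state strings should be checked against the installed client version.

```python
# Hedged sketch: submit one test file through an FTS channel and poll its state.
# Assumes the gLite FTS CLI tools are installed and a valid proxy exists.
# The FTS endpoint and SRM URLs are placeholders, not real GridPP values.
import subprocess
import time

FTS = "https://fts.example.ac.uk:8443/path/to/FileTransfer"  # placeholder
SRC = "srm://se1.example.ac.uk/dpm/example.ac.uk/home/dteam/1GB.test"
DST = "srm://se2.example.ac.uk/dpm/example.ac.uk/home/dteam/1GB.test"

job_id = subprocess.run(
    ["glite-transfer-submit", "-s", FTS, SRC, DST],
    capture_output=True, text=True, check=True).stdout.strip()
print("submitted FTS job", job_id)

# Poll until the job reaches a terminal state (names may vary by version).
while True:
    state = subprocess.run(
        ["glite-transfer-status", "-s", FTS, job_id],
        capture_output=True, text=True, check=True).stdout.strip()
    print("state:", state)
    if state in ("Done", "Finished", "Failed", "Canceled"):
        break
    time.sleep(30)
```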
22
Core to these are the… data transfers
Tier-1 to Tier-2 transfers (target rate 300-500 Mb/s):
Sustained transfer of 1TB data to 20% of sites by end of December
Sustained transfer of 1TB data from 20% of sites by end of December
Sustained transfer of 1TB data to 50% of sites by end of January
Sustained transfer of 1TB data from 50% of sites by end of January
Sustained individual transfers (>1TB continuous) to all sites completed by mid-March
Sustained individual transfers (>1TB continuous) from all sites by mid-March
Peak rate tests undertaken for all sites by end of March
Aggregate Tier-2 to Tier-1 tests completed at target rate (rate TBC) by end of March
Inter-Tier-2 transfers (target rate 100 Mb/s):
Sustained transfer of 1TB data between the largest site in each Tier-2 and that of another Tier-2 by end of February
Peak rate tests undertaken for 50% of sites in each Tier-2 by end of February
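For a sense of scale, a rough back-of-the-envelope calculation (ignoring protocol overhead and assuming decimal terabytes) of how long a sustained 1TB transfer takes at the target rates:

```python
# Rough time-to-transfer calculation for the SC4 target rates (no overhead assumed).
TB_BITS = 1e12 * 8          # 1 TB (decimal) in bits

for rate_mbps in (100, 300, 500):       # Mb/s targets from the milestones
    seconds = TB_BITS / (rate_mbps * 1e6)
    print(f"1 TB at {rate_mbps} Mb/s ~ {seconds / 3600:.1f} hours")
```

Even at the upper Tier-1 target rate a single 1TB test therefore occupies a link for several hours.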
23
The current status
Transfer test matrix between RAL Tier-1, Lancaster, Manchester, Edinburgh, Glasgow, IC-HEP and RAL-PPD (one axis gives the sending site, the other the receiving site). Rates recorded so far, per matrix row:
RAL Tier-1: ~800 Mb/s, 350 Mb/s, 156 Mb/s, 84 Mb/s, 309 Mb/s, 397 Mb/s
Lancaster: 0 Mb/s
Edinburgh: 422 Mb/s, 210 Mb/s, 224 Mb/s
Glasgow: 331 Mb/s, 122 Mb/s
Manchester, IC-HEP, RAL-PPD: no figures yet
KEY: black figures indicate a 1TB transfer; blue figures indicate a <1TB transfer (e.g. 10 GB)
NEXT SITES: London – RHUL & QMUL; ScotGrid – Durham; SouthGrid – Birmingham & Oxford?; NorthGrid – Sheffield? & Liverpool
http://wiki.gridpp.ac.uk/wiki/Service_Challenge_Transfer_Tests
25
Additional milestones
LCG File Catalog
LFC document available by end of November
LFC installed at 1 site in each Tier-2 by end of December
LFC installed at 50% of sites by end of January
LFC installed at all sites by end of February
Database update tests (TBC)
VO Boxes
Depending on experiment responses to the security and operations questionnaire and the GridPP position on VO Boxes:
VOBs available (for agreed VOs only) at 1 site in each Tier-2 by mid-January
VOBs available at 50% of sites by mid-February
VOBs available at all (participating) sites by end of March
Experiment-specific tests (TBC)
To be developed in conjunction with experiment plans – please make suggestions!
Status: LHCb & ALICE questionnaires received and accepted, and VO boxes deployed at the Tier-1; little use so far – ALICE has not had a disk allocation. The original ATLAS response was not accepted; they have since tried to implement VO boxes, found problems, and are now looking at a centralised model. CMS do not have VO Boxes but they DO require local persistent VO processes.
26
Getting informed & involved!
The deployment team are working to make sure sites have sufficient information. Coordinate your activities with your Tier-2 Coordinator. The status of sites is being tracked here: http://wiki.gridpp.ac.uk/wiki/Service_Challenge_4_Site_Status
1) Stay up to date via the Storage Group work: http://wiki.gridpp.ac.uk/wiki/Grid_Storage
2) General Tier-1 support: http://wiki.gridpp.ac.uk/wiki/RAL_Tier1
3) Understand and set up FTS (channels): http://wiki.gridpp.ac.uk/wiki/RAL_Tier1_File_Transfer_Service
4) VO Boxes go via the Tier-1 first: http://wiki.gridpp.ac.uk/wiki/VOBox
5) Catalogues (& data management): http://wiki.gridpp.ac.uk/wiki/Data_Management
Some particular references worth checking out when taking the next step:
6) What RALPP did to get involved: http://wiki.gridpp.ac.uk/wiki/RALPP_Local_SC4_Preparations
8) Edinburgh dCache tests: http://wiki.gridpp.ac.uk/wiki/Ed_SC4_Dcache_Tests
9) Glasgow DPM testing: http://wiki.gridpp.ac.uk/wiki/Glasgow
PLEASE CREATE SITE TESTING LOGS – it helps with debugging and information sharing.
27
Summary
1 Metrics show stability and areas where we can improve
2 EGEE work will add to information which is published & analysed
3 GridPP & experiments need to work at better use of Tier-2 disk
4 There are changes coming with 2.7.0 & the helpdesk upgrade
5 Focus has shifted to Service Challenge work (including security)
6 Sites asked to complete reports, reduce tickets & get involved in SC4!