GridPP2 Status
Tony Doyle
OC Actions
1. GridPP TO PROVIDE DATA ON WHAT FRACTION OF THE REGISTERED USERS WERE MAKING THE GREATEST USAGE OF THE RESOURCES. SEE TALK
2. GridPP TO PROVIDE PPARC WITH A TIER 1 PURCHASE PLAN FOR FY06. DONE
3. GridPP to provide data on experiments' increased usage of Tier-2 resources. DONE
4. GridPP to provide an update of the performance metrics. DONE
5. GridPP to present a draft GridPP3 proposal to the next meeting. DONE (version 0.6)
6. GridPP to circulate procedures adopted by the Grid Security Vulnerability Group to Committee members. DONE
7. GridPP to provide a paper to the next meeting justifying the proposed Tier-1 hardware spend in FY07 against other spending options. ONGOING
8. GridPP to describe its relationship with the e-Science Core Programme more fully. ONGOING
9. GridPP to provide PPARC, on a post-by-post basis, details of the cost of extending posts finishing before new funding is expected to be in place (end of March 2008). DONE (on March 15th and included as part of the GridPP3 proposal)
gLite 3.0
What is gLite-3.0?
- LCG-2.7 and updates
- gLite WMS/LB
- gLite CE
- gLite/LCG WN
- gLite/LCG UI
- FTS (Service)
- FTA (Agents)
WLCG MoU
- 17 March 2006: PPARC signed the Memorandum of Understanding with CERN
- Commitment to the UK Tier-1 at RAL and the four UK Tier-2s to provide services and resources
- Will need to propagate through LFRC
UK pledges & medium-term planning (as defined in summer 2005)

RAL, UK: pledged (2006) and pledged / planned to be pledged (2007-2010)
                  2006    2007           2008           2009           2010
CPU (kSI2K)       980     1492 / 1234    2712 / 3943    4206 / 6321    5857 / 10734
Disk (Tbytes)     450     841 / 630      1484 / 2232    2087 / 3300    3020 / 5475
Tape (Tbytes)     664     1080 / 555     2074 / 2115    3934 / 4007    5710 / 6402

UK, Sum of all Federations: pledged (2006) and pledged / planned to be pledged (2007-2010)
                  2006    2007           2008           2009           2010
CPU (kSI2K)       3800    3840 / 1592    4830 / 4251    5410 / 6127    6010 / 9272
Disk (Tbytes)     530     540 / 258      600 / 1174     660 / 2150     720 / 3406

1. Tier-1 (v26b) plan 2007 or Tier-2 GridPP MoU, followed by pessimistic guess
2. August 2005 minimal Grid
3. GridPP3 proposal (see Dave's talk)
Need to update 2007 pledges by Sept. 06
Capacity Planning Considerations
- The Tier-1/A capacity originally planned to be available for 06Q1 was put into production in late April 2006 (CPU). The disk capacity is scheduled to be available in early August.
- The new SL8500 tape robot began providing a production service in March. The first three T10K tape drives for the GridPP tape service are expected to be delivered in July, together with 200 TB of tape media. The SL8500 robot will be upgraded from 6000 to 10000 slots (paid for by CCLRC).
- Tenders for 500 kSI2k and 237 TB of disk at the end of June.
- Good progress is being made on the deployment of CASTOR2 (providing HSM capability and an SRM interface to storage), which remains on schedule for a production service in September.
- For the Tier-2 centres, additional capacity was made available in 06Q1, with the incorporation of capacity at two additional large centres (Manchester and Liverpool). The available CPU in the first quarter increased to 3703 kSI2k, such that 75% of the MoU commitment has now been met, with disk increasing to 263 TB.
- CPU utilisation of this much larger resource was 23%, with overall disk utilisation improved at 61% in 06Q1. Additional capacity improvements are envisaged at Bristol, Cambridge, Glasgow and QMUL during this year.
- The GridPP resource utilisation outturn for 2005, updated to include 06Q1, is available from http://www.gridpp.ac.uk/docs/gridpp3/GridPP-PMB-92-Utilization.doc
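The derived quantities behind these Tier-2 figures can be checked with a few lines; a minimal sketch in Python, noting that the total Tier-2 MoU CPU commitment is not stated on the slide and is inferred here from the 75% figure:

```python
# Sketch of the derived Tier-2 figures quoted above (06Q1).
# The total MoU CPU commitment is not given on the slide; it is
# inferred here from "3703 kSI2k meets 75% of the MoU commitment".

available_cpu_ksi2k = 3703          # Tier-2 CPU available in 06Q1
mou_fraction_met = 0.75             # stated fraction of the MoU commitment met
implied_mou_cpu = available_cpu_ksi2k / mou_fraction_met
print(f"Implied Tier-2 MoU CPU commitment: {implied_mou_cpu:.0f} kSI2k")   # ~4937

cpu_utilisation = 0.23              # stated CPU utilisation of the larger resource
print(f"CPU actually used in 06Q1: {available_cpu_ksi2k * cpu_utilisation:.0f} kSI2k")  # ~852

disk_tb, disk_utilisation = 263, 0.61
print(f"Disk used: {disk_tb * disk_utilisation:.0f} TB of {disk_tb} TB")   # ~160
```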
Dissemination highlights
- "The Grid by numbers" (for MPs)
- BBC coverage (lead news item on the BBC technology web site)
Tier-0 to Tier-1 worldwide data transfers
- > 950 MB/s for 1 week
- Peak transfer rate from CERN of > 1.6 GB/s
- Need high data rate transfers to/from CERN as a routine activity
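For scale, the sustained rate translates into a weekly data volume; a back-of-the-envelope sketch, assuming the rate is held for the full week and using decimal units:

```python
# Rough volume implied by the Tier-0 -> Tier-1 rate quoted above
# (assumes the rate is held continuously; decimal units, 1 TB = 1e12 bytes).

sustained_mb_per_s = 950            # MB/s sustained for one week
seconds_per_week = 7 * 24 * 3600
total_tb = sustained_mb_per_s * 1e6 * seconds_per_week / 1e12
print(f"~{total_tb:.0f} TB moved in one week at {sustained_mb_per_s} MB/s")  # ~575 TB
```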
Tier-1 to Tier-2 UK data transfers
- > 1000 Mb/s for 3 days
- Peak transfer rate from RAL of > 1.5 Gb/s
- Need high data rate transfers to/from RAL as a routine activity
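Note that the RAL figures are quoted in bits per second (Mb/s), unlike the CERN figures above, which are in bytes per second (MB/s). A short sketch of the conversion and of the volume implied by three days at 1000 Mb/s, again assuming the rate is sustained throughout and using decimal units:

```python
# The RAL figures are in bits per second (Mb/s); the CERN figures are
# bytes per second (MB/s). Convert and estimate the 3-day volume.

rate_mbit_per_s = 1000
rate_mbyte_per_s = rate_mbit_per_s / 8                      # 125 MB/s
seconds = 3 * 24 * 3600
total_tb = rate_mbyte_per_s * 1e6 * seconds / 1e12
print(f"{rate_mbit_per_s} Mb/s = {rate_mbyte_per_s:.0f} MB/s; "
      f"~{total_tb:.1f} TB over 3 days")                    # ~32.4 TB
```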
gLite 3.0 deployment
- Upgrades are supported from LCG-2.7.0
- Appears to work well
- Sites need to keep on the upgrade path
- A reasonably well-defined deployment release cycle
- Release cycle is getting (somewhat) shorter
Project Status
- Good progress, according to plan: glass half full... and half empty
- 5000 of ~10000 CPUs on the Grid
- Metrics OK: 88 (91%); Metrics not OK: 9
- Tasks Complete: 127 (49%); Tasks Overdue: 7; Tasks due in next 60 days: 19; Items Inactive: 20; Tasks not Due: 10; Change Forms: 53
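The headline percentages can be cross-checked from the counts on this slide; a quick sketch, noting that the total task count is not stated and is inferred here from "127 complete = 49%":

```python
# Consistency check on the headline project metrics quoted above.

metrics_ok, metrics_not_ok = 88, 9
print(f"Metrics within specification: "
      f"{metrics_ok / (metrics_ok + metrics_not_ok):.0%}")     # ~91%

tasks_complete, completion_fraction = 127, 0.49
print(f"Implied total tasks: {tasks_complete / completion_fraction:.0f}")  # ~259
```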
Know your users...
OC Action 1: GridPP TO PROVIDE DATA ON WHAT FRACTION OF THE REGISTERED USERS WERE MAKING THE GREATEST USAGE OF THE RESOURCES.
Active Users (All VOs)
Job success? Overview
Job Success by LHC experiment (ALICE, CMS, ATLAS, LHCb)
Active Users by LHC experiment: ALICE (8), CMS (150), ATLAS (70), LHCb (40)
Fine-grained: active users at RAL

Number of registered users (exc. DTEAM):
  05Q4: 1342   06Q2: 1831

Number of active users (> 10 jobs):
  05Q4: 83   06Q1: 166   06Q2: 201
  Fraction of registered users: 6.2% (05Q4), 11.0% (06Q2)

Number of active UK users at RAL:
  05Q1: 10   05Q2: 19   05Q3: 29   05Q4: 37   06Q1: 44   06Q2: 55
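The quoted fractions follow directly from the registered and active user counts above; a minimal check in Python:

```python
# Active-user fractions at RAL, from the counts quoted above.
registered = {"05Q4": 1342, "06Q2": 1831}   # registered users (exc. DTEAM)
active = {"05Q4": 83, "06Q2": 201}          # active users (> 10 jobs)

for quarter in registered:
    fraction = active[quarter] / registered[quarter]
    print(f"{quarter}: {fraction:.1%} of registered users were active")
# 05Q4: 6.2%, 06Q2: 11.0%
```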
Number of jobs per UK Grid user on RAL resources (06Q2)
Data Protection Act: non-UK sites worldwide requested that APEL NOT include this data by default (hence it is not switched on at most UK sites). GridPP complied in order to ensure APEL is widely deployed.
Where are we now? Since the last Oversight Committee meeting a lot has happened:
1. Progress has been made in the release of gLite-3.0 (the first middleware fully integrating all EGEE and LCG components)
2. gLite-3.0 efficiently deployed at 11 UK sites
3. Tier-2 resources on the Production Grid beginning to be fully utilised
4. Many measured performance improvements (see Jeremy's talk)
5. The GridPP2 Project is halfway through: 49% of its targets met, 91% of the metrics within specification; 5000 of ~10000 computers on the Grid
6. EGEE phase I reviewed and commended by the EU
7. Dissemination: lead news item on the BBC technology web site; GridPP overview for MPs circulated
8. In March 2006 PPARC signed the worldwide LCG MoU
9. Significant planning performed for GridPP3 (see Dave's talk)
10. Work starting on large-scale experiment-specific file transfers and improving site performance