DJ: WLCG CB – 25 January 2007
WLCG Overview Board: Activities in the first year
Full details (reports/overheads/minutes) are at:


WLCG Overview Board Activities in the first year
Full details (reports/overheads/minutes) are at:
Individual meetings are protected by the keyword “lcgover”

WLCG Committee Structure
 Strategic committees: Collaboration Board (CB), Overview Board (OB)
 Executive committees: Management Board (MB), Grid Deployment Board (GDB), Architects Forum (AF)

Role of the Overview Board
 Standing sub-committee of the CB; operates by delegation from the CB. Membership comprises representatives of the experiment, LCG and Tier-1 managements.
 Forum for technical discussion on strategy, policy and conflict resolution.
 At its first meeting, the Overview Board decided:
1.It should be presented well ahead of time with well-defined points to debate and, where appropriate, decide; this has proven hard to implement in practice.
2.Its decisions are to be considered binding on the Tier-1s, but any decisions affecting the Tier-2s need prior consultation.

Meetings
 1st: 20 March 2006 (six weeks after the first CB meeting)
 2nd: 12 June 2006
 3rd: 11 September 2006
 4th: originally planned for 18 December 2006, but will now take place on 29 January 2007 (four days from now)

Themes
 Project status reports at each meeting → tracking of (e.g.):
 Deployment
 Service Challenges
 Usage by the experiments
These showed the steady progress being made, but also allowed many issues and problems to be flagged.
 Some particular themes followed through the meetings:
 Use of VO-boxes
 Accounting
 Tier-1 – Tier-2 associations and dependencies
 Capacity and the capacity ramp-up profile
 On today’s agenda (apart from this report), the point on “revised capacity plans” is a direct result of discussions in the Overview Board.

Use of VO-boxes
 1st meeting
 VO-boxes deprecated but tolerated (noted that it would be hard for Tier-2s to install them). Tier-1s asked to allow them for now; left to individual Tier-2s to decide for themselves.
 2nd meeting
 Thanks to specific meetings led by C. Loomis, it was possible to identify two VO-box service classes:
1.can be tolerated;
2.must be eliminated as soon as possible (system admins object to them).
 Two Class 2 services identified:
 Package management (ALICE);
 SRM functionality (CMS) – now supplied by SRM 2.2.
 3rd meeting
 No more issues raised.

Accounting
 1st meeting
 Accounting recognised to be urgent and necessary at the VO level, and later at the user level.
 2nd meeting
 Monthly figures becoming available per experiment. The Overview Board re-affirmed its wish to monitor these figures at each meeting and to approve any figures to be shown to the C-RRB.
 3rd meeting
 The Overview Board approved the figures to be shown to the October C-RRB, now systematic for the Tier-0 and Tier-1s, and covering the period April–August in terms of installed capacity, by site (in the end the C-RRB was shown the January–August figures).
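The accounting figures discussed above are monthly, per-experiment numbers summarised by site over a reporting period. As a minimal sketch of that kind of aggregation (all site names, experiments and numbers below are hypothetical illustrations, not taken from the actual WLCG accounting system):

```python
from collections import defaultdict

# Hypothetical monthly accounting records: (site, experiment, month, installed capacity in kSI2k).
# All values are made up for illustration only.
records = [
    ("CERN", "ATLAS", "2006-04", 1200.0),
    ("CERN", "CMS",   "2006-04", 1100.0),
    ("RAL",  "ATLAS", "2006-04",  400.0),
    ("CERN", "ATLAS", "2006-05", 1250.0),
    ("RAL",  "ATLAS", "2006-05",  420.0),
]

def installed_capacity_by_site(records, months):
    """Sum installed capacity per site over the requested months (all experiments combined)."""
    totals = defaultdict(float)
    for site, _experiment, month, capacity in records:
        if month in months:
            totals[site] += capacity
    return dict(totals)

result = installed_capacity_by_site(records, {"2006-04", "2006-05"})
print(result)  # {'CERN': 3550.0, 'RAL': 820.0}
```

The same per-site totals can then be filtered by experiment or month to produce the kind of period summaries (e.g. April–August, by site) that were shown to the C-RRB.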

Tier-1 – Tier-2 dependencies
 1st meeting
 The Overview Board affirmed that it was urgent to settle this point; considered to be work for the experiments.
 2nd meeting
 All four experiments reported, also explaining their data models:
 CMS starts from a position of no defined dependencies, but was told this is unrealistic: network capacity is focused on the Tier-0 + Tier-1 OPN.
 ATLAS plans specific associations forming a radial hierarchy.
 LHCb also foresees a radial hierarchy, mostly driven by proximity or national boundaries.
 ALICE thought associations would be driven by national boundaries or bandwidth availability, but plans nothing specific.

Tier-1 – Tier-2 dependencies (ctd)
 2nd meeting (ctd)
 Recognised the need to separate service issues from data storage/transfer issues.
 Tier-1s confirmed that they plan capacity separately for each experiment; they are not counting on timesharing.
 The Overview Board asked the Management Board to produce a workable technical implementation of the associations for the next meeting, conforming to the capacity actually available at the Tier-1s and presented in the form of a table.
 3rd meeting
 A team from the experiments had examined the bandwidth and Tier-1 storage requirements (using 2008 as the reference year). This showed the load on the Tier-1s to be very uneven, and a real shortage of capacity for ALICE.
 But the topic is now closed for the Overview Board; it is to be handled via the MB (or perhaps a working group).

Capacity ramp-up
 2nd meeting
 The Overview Board noted the shift in the schedule for LHC vacuum-pipe closure from end-June to end-August, but agreed that centres can only re-schedule purchases on a timescale of ≥ 6 months.
 3rd meeting
 A review of the LHC machine running schedule and discussions with the experiments showed that CERN is now just 10% short of the needed capacity. The assumption is thus that the experiment requirements for CERN can be compressed to fit within the presently available funding.
 The Overview Board agreed on the importance of the experiments finalising their updated requirements by end-September, so as to present a clear picture for the Tier-1s to the October C-RRB.

Points for the next meeting
 Survey of Tier-1 staffing levels (Neil Geddes)
 Project status (Les Robertson):
 Accounting
 Reliability metrics
 SRM 2.2
 2007 end-of-year schedule
 Resource re-assessments (ahead of the April C-RRB)
 SC4 post-mortem
 Number of data copies in the experiments’ computing models (Neil Geddes)