LCG Introduction
Kors Bos, NIKHEF, Amsterdam
GDB Meeting at CERN, June 22, 2005

Recent meetings

Minutes of the meeting:
– Can they be approved?
– Ongoing actions – thanks to Jeremy Coles!

Recent workshops:
– Operations workshop in Bologna, May
– Service Challenge III detailed planning workshop, June

Next meetings:
– July GDB, July 19–20
  – July 19: Tier0–Tier1 networking
  – July 20: dedicated to Service Challenge III
– Page with all meetings

T0/T1 network meeting, July 19
– The network architecture document v.2 is finished
– Optical Private Network (OPN):
  – Dedicated 10 GE lambdas between the T0 and all T1s
  – Including Europe, the US and Asia-Pacific
  – Provided by the NRENs, GÉANT, ESnet, ASnet
– Technical details are described in the architecture document:
  – IP addressing, BGP routing, security, backup, etc.
  – Network Operations Centre
– Expected at the meeting: NREN and T1 reps
– Homework needed from each T1 before 19/7:
  – 3 slides from each T1
  – I will ask again by e-mail

T1 Centre Information
In interpreting the T0/T1 document, how do the T1s foresee connecting to the lambda?
– Which networking equipment will be used?
– How is the local network layout organised?
– How is the routing organised between the OPN and the general-purpose internet? (a toy sketch follows below)
– What AS number and IP prefixes will they use, and are the IP prefixes dedicated to the connectivity to the T0?
What backup connectivity is foreseen?
What is the monitoring technology used locally?
How is the operational support organised?
What is the security model to be used with the OPN?
– How will it be implemented?
What is the policy for external monitoring of local network devices, e.g. the border router for the OPN?
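Purely as an illustration of the routing and prefix questions above (nothing here comes from the architecture document), a minimal Python sketch of how a site might check whether a destination falls inside the prefixes it has dedicated to the OPN rather than its general-purpose ones; all prefixes are hypothetical RFC 5737 documentation ranges:

```python
# Illustrative sketch only: decide whether a destination should be reached
# over the dedicated T0-T1 lambda (OPN prefixes) or the general-purpose
# internet. The prefixes below are hypothetical documentation ranges.
import ipaddress

OPN_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]    # hypothetical OPN prefix
GP_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]  # hypothetical general-purpose prefix

def routes_over_opn(dest: str) -> bool:
    """True if dest lies inside a prefix announced over the OPN."""
    addr = ipaddress.ip_address(dest)
    return any(addr in net for net in OPN_PREFIXES)

print(routes_over_opn("192.0.2.17"))    # True  -> OPN path
print(routes_over_opn("198.51.100.5"))  # False -> general-purpose internet
```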

Service Challenge III (July 20 GDB)
– SC3 should have started: throughput tests ongoing
– Homework for the T1s: prepare 2 slides each
  – Short status report
  – Network used, services running, operational issues
  – Very concrete and detailed plans for the next days/weeks (a toy rate calculation is sketched below)
– Homework for the experiments: 2 slides each
  – Desired very simple first use cases to be tried
  – Not testing everything at once
  – Only with some T1s to begin with
  – Not mission critical
– We use the whole GDB to discuss just this!
– Could all T1s send a representative (vacation time)?
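As a toy illustration of the sustained-rate number a T1 might put on its status slide (this is not the SC3 tooling; the samples and the resulting rate are invented), a short Python sketch that converts cumulative transfer samples into MB/s:

```python
# Toy sketch: compute a sustained transfer rate from (timestamp, cumulative
# bytes) samples, the kind of figure a T1 would report for throughput tests.
# The sample values below are invented for illustration.
from datetime import datetime

samples = [
    ("2005-07-01T12:00:00", 0),
    ("2005-07-01T13:00:00", 540_000_000_000),  # ~540 GB moved in one hour
]

def throughput_mb_s(samples):
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    seconds = (datetime.fromisoformat(t1) - datetime.fromisoformat(t0)).total_seconds()
    return (b1 - b0) / seconds / 1e6  # bytes/s -> MB/s

print(f"sustained rate: {throughput_mb_s(samples):.0f} MB/s")  # -> 150 MB/s
```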

TDRs have come out
– ALICE, ATLAS, CMS, LHCb and LCG
– Party on Friday in Building 40

Dates of future GDB meetings
– July 19–20
  – Special attention for Service Challenges and the T0/T1 network
– September 6–7
  – Special attention for Service Challenges
  – The UK has a problem because of a GridPP meeting
– October
  – Special attention for Service Challenges and accounting
  – Overlaps with HEPiX
– November 8–9
  – Invited to Vancouver (SuperComputing 2005)
  – We have to decide now
– December
  – End of Service Challenge III
  – CERN already closed?
– Please give me feedback!

This meeting
– Service Challenges
– Security
– Quattor
– Baseline Services
– The Nordic Tier-1 and interoperability
Waiting in the queue:
– gLite components
– Networking (draft architecture ready)
– Tier-2 sites
– Storage Task Force
– Accounting

Management for Phase I
[Phase I organisation chart]

Management for Phase I
[Phase I organisation chart, annotated: national reps, Tier-1 Centres, Tier-2 Centres; decision making; no Tier-1 Centre reps; GDB chair ex officio]

Management for Phase II
From the TDR:
– POB → OB
– PEB → MB, but now with T1 reps
– GDB is the same
– CB is new

Management for Phase II
Reasons for change:
– New phase of the project: R&D → Service
– Worldwide collaboration: WLCG
– SC2 no longer needed
– T1s need to be involved more directly

GDB members                      | MB members
---------------------------------|---------------------------------
Experiment reps (comp. coord.)   | Experiment reps (comp. coord.)
T1 reps                          | T1 reps
T2 reps of countries without T1  | --
--                               | Area project managers
GDB chair (chair)                | GDB chair
LCG project leader               | LCG project leader (chair)

MB (in the MoU)
The WLCG Management Board (MB) supervises the work of the WLCG Collaboration. It is chaired by the LCG Project Leader and reports to the OB. The MB organises the work of the WLCG Collaboration as a set of formal activities and projects. It maintains the overall programme of work and all other planning data necessary to ensure the smooth execution of the work of the WLCG Collaboration. It provides quarterly progress and status reports to the OB. The MB endeavours to work by consensus but, if this is not achieved, the LCG Project Leader shall make decisions taking account of the advice of the Board. The MB membership includes the LCG Project Leader, the Technical Heads of the Tier-1 Centres, the leaders of the major activities and projects managed by the Board, the Computing Coordinator of each LHC Experiment, the Chair of the Grid Deployment Board (GDB), a Scientific Secretary, and other members as decided from time to time by the Board.

GDB (in the MoU)
The Grid Deployment Board (GDB) is the forum within the WLCG Collaboration where the computing managements of the experiments and the regional computing centres discuss and take, or prepare, the decisions necessary for planning, deploying and operating the WLCG. Its membership includes:
– as voting members: one person from each country with a regional computing centre providing resources to an LHC experiment (usually a senior manager from the largest such centre in the country), and a representative of each of the experiments;
– as non-voting members: the Computing Coordinators of the experiments, the LCG Project Leader, and leaders of formal activities and projects of the WLCG Collaboration.
The Chair of the GDB is elected by the voting members of the board from amongst their number for a two-year term. The GDB may co-opt additional non-voting members as it deems necessary.

My worries
– Too much overlap in membership
– Both boards can vote: the GDB per country, the MB per member
– The MB is too big to be an executive board (20-25 people)
– The MB needs to meet often (once every 1-2 weeks)
– The MB needs half-day to full-day meetings
– Teleconferences are not efficient for this sort of board

My thoughts
– Need to re-think this scheme (will change the MoU)
– Needs to be discussed in the biggest group: the Collaboration Board
– The Collaboration hardly exists: the CB meets once a year
– The OB needs to be more external (T1 and experiment reps again)
– Need to distinguish the Collaboration from the Programme of Work (the Project)
– Need a small board to execute a Programme of Work
– Need a big board to decide the Programme of Work
– T2 sites need to be involved
– Actions and task forces need a clear place
– Need to set a date by which this should be decided

Storage Task Force
The storage task force will:
– Examine the current experiment computing models.
– Attempt to determine the data volumes, access patterns and required data security for the various classes of data, as a function of Tier and of time.
– Consider the current storage technologies and their suitability for various classes of data storage.
– Attempt to map the required storage capacities to suitable technologies (a toy sketch of this mapping follows the member list).
– Formulate a plan to implement the required storage in a timely fashion.
The task force will report to the HEPiX Board, and to the LCG project via the Grid Deployment Board.
Members:
– Roger Jones, Lancaster, ATLAS, Tier-2 (chairman)
– Andrew Sansum, RAL, Tier-1
– Bernd Panzer / Helge Meinhard, CERN, Tier-0
– TBC (CMS)
– Peter Malzacher, GridKa, Tier-1, ALICE
– Andrei Maslennikov, CASPUR
– Jos van Wezel, GridKa, Tier-1
– Knut Woller, DESY, Tier-2
– Vincenzo Vagnoni, Bologna, LHCb, Tier-2
Milestone: a first report by the HEPiX meeting in September at SLAC.
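By way of illustration only (the task force has not yet reported; every data class, volume and technology assignment below is an invented assumption), a toy Python sketch of the mandate's central bookkeeping step, mapping data classes to a storage technology and summing the required capacity per tier:

```python
# Toy capacity bookkeeping in the spirit of the task force mandate.
# All data classes, volumes and technology assignments are invented.
from collections import defaultdict

# (tier, data class) -> required volume in TB; illustrative numbers only
requirements = {
    ("Tier-1", "raw"):      500,
    ("Tier-1", "ESD"):      200,
    ("Tier-2", "AOD"):       50,
    ("Tier-2", "analysis"):  30,
}

# data class -> storage technology; a hypothetical assignment
technology = {
    "raw": "tape",
    "ESD": "disk+tape",
    "AOD": "disk",
    "analysis": "disk",
}

capacity = defaultdict(int)  # (tier, technology) -> total TB
for (tier, dclass), tb in requirements.items():
    capacity[(tier, technology[dclass])] += tb

for (tier, tech), tb in sorted(capacity.items()):
    print(f"{tier:7} {tech:10} {tb:5} TB")
```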

SuperComputing 2005, Seattle, November
– Common demo (among other things) on the LCG Service Challenges
– GridIce/MonALISA-like demo, flyers, posters, T-shirt, logo
– Equipment needed: stand, normal network, laptop, screen
– CERN has no booth but supports the initiative anyway
– Participants:
  – CERN, no booth: Francis Grey, Rosy Mondardini (coordination)
  – Germany, FZK booth: Bruno Hoeft
  – Netherlands, NCF booth: Kors Bos
  – Italy, INFN booth: Luca dell'Agnello
  – UK, PPARC booth: Roger Jones
  – France? Nordic countries? Spain?
  – US, FNAL booth: Vicky White
  – US, BNL booth: Vicky White (temporarily)
  – US, SLAC booth: Harvey Newman
  – Canada, TRIUMF booth: Rod Walker
– A first phone meeting will be called soon

END

Actions from the last meeting (description – owner – status, where recoverable):
– Contact Dave Kant at RAL re input of NorduGrid accounting data – a NorduGrid representative
– Provide feedback to KB on proposed meeting dates for Q3 and Q4 – All
– Experiments to provide information on their planning – Experiments
– Arrange Data Management Workshop at CERN, 5th-7th April – KB/JC – Closed
– Follow up on INFN accounting records with Cristina Vistoli – KB
– Tier-1s not in SC2 or SC3 to provide planning information – Tier-1 reps
– Appoint a coordinator to work on LCG participation in SuperComputing 2005 – WvR
– Read the Site Registration Policy and Procedures document and raise concerns before the document is endorsed at the next GDB in March – All
– GDB representatives to put forward names to attend joint SC/Networking/GDB meetings – All
– Send out a request for Tier-2 representative information – KB
– Review and confirm the Tier-2 information at the link provided – All
– Clarify the Italian entries in light of a potential misunderstanding – Laura Perini
– Ensure that all sites in the country are publishing data – All
– Mail Ian Bird with a formal request for an EGEE representative for OSG and a Storage Group contact to help with SRM testing – Ruth Pordes
– Begin a common work list for OSG-EGEE to enable further discussion on scope and priorities for joint working – Vicky White
– Provide OSG with information about policy implementers in EGEE – Ian Bird