J. Harvey : Panel 3 – list of deliverables Slide 1 / 14
Planning Resources for LHCb Computing Infrastructure
John Harvey
LHCb Software Week, July 5-7 2000

J. Harvey : Panel 3 – list of deliverables Slide 2 / 14
Outline
- Basic questions on the LHCb Computing Model
- Technical requirements
- Plan for tests and the 2003 Prototype
- Milestones
- Costing and manpower
- Guidelines for MoU
- Calendar of summer activities
- Summary

Computing Model
[Diagram: data flows between CERN Tier 0, the Tier 1 centres and the institutes]
- CERN Tier 0: data recording, calibration, reconstruction, reprocessing; central data store
- Tier 1 centres (CERN Tier 1?, UK, Italy, France, Spain?): simulation production, analysis; central data store
- Institutes (e.g. Oxford, Barcelona, Roma I, Marseilles): selected user analyses; local data store
- AOD, TAG flows: real 80 TB/yr, sim 120 TB/yr between central data stores; 8-12 TB/yr to local data stores
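To make the diagram's division of roles concrete, here is a minimal illustrative sketch (not part of the original slides); it only encodes the activities and stores recovered from the diagram labels and does not assert the exact data-flow topology.

```python
# Illustrative encoding of the computing-model roles recovered from the diagram.
# The assignment of activities to tiers follows the slide labels; nothing else
# (e.g. the precise data-flow topology) is claimed here.

MODEL = {
    "CERN Tier 0": {
        "activities": ["data recording", "calibration", "reconstruction", "reprocessing"],
        "store": "central data store",
    },
    "Tier 1 centres (CERN?, UK, Italy, France, Spain?)": {
        "activities": ["simulation production", "analysis"],
        "store": "central data store",
    },
    "Institutes (e.g. Oxford, Barcelona, Roma I, Marseilles)": {
        "activities": ["selected user analyses"],
        "store": "local data store (AOD, TAG: 8-12 TB/yr)",
    },
}

for tier, desc in MODEL.items():
    print(f"{tier}: {', '.join(desc['activities'])}  [{desc['store']}]")
```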

J. Harvey : Panel 3 – list of deliverables Slide 4 / 14
Some basic questions
- Which are the LHCb Tier 1 centres?
  - UK, France, Italy, ...?
  - We are being asked this question now.
- How will countries without a Tier 1 centre access LHCb datasets and do analysis?
- For each centre we need to ascertain whether there is a policy for permitting remote access to resources, e.g. is it:
  - A collaboration-wide resource?
  - Restricted by nationality?
  - How will use of resources be allocated and costed?

J. Harvey : Panel 3 – list of deliverables Slide 5 / 14
Technical Requirements
- Tier 0 – real data production and user analysis at CERN
  - 120,000 SI95 CPU, 500 TB/yr, 400 TB (export), 120 TB (import)
- Tier 1 – simulation production and user analysis
  - 500,000 SI95 CPU, 330 TB/yr, 400 TB (export), 120 TB (import)
  - Shared between the Tier 1 centres
- Requirements need to be revised
  - New estimates of simulated events needed – evolution with time
  - Number of physicists doing analysis at each centre
  - Comparison with estimates from other experiments to look for discrepancies (e.g. analysis)
- Sharing between Tier 0 and Tier 1 needs to be checked (see the sketch after this slide)
  - 1/3 (CERN), 2/3 (outside) guideline
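The sharing check is simple arithmetic; a minimal Python sketch, assuming the quoted Tier 0 figure stands for the CERN share and the Tier 1 figure for the outside share, would be:

```python
# Minimal sketch (not from the slides): compare the quoted CPU capacities
# with the 1/3 (CERN) vs 2/3 (outside) sharing guideline.

TIER0_CPU_SI95 = 120000   # Tier 0 at CERN, from this slide
TIER1_CPU_SI95 = 500000   # all Tier 1 centres combined, from this slide

total = TIER0_CPU_SI95 + TIER1_CPU_SI95
cern_share = TIER0_CPU_SI95 / total

print(f"Total capacity     : {total:,} SI95")
print(f"CERN (Tier 0) share: {cern_share:.0%} (guideline: ~33%)")
```

With the figures as quoted, the Tier 0 share comes out around 19%, well below the 1/3 guideline; this is exactly the kind of cross-check the planned revision of requirements is meant to support.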

J. Harvey : Panel 3 – list of deliverables Slide 6 / 14
Tests and the 2003 Prototype
- Adaptation of LHCb production software to use grid middleware (i.e. Globus) has started (see the sketch after this slide)
- Production tests of new data processing software – performance and quality
- Production tests of the computing infrastructure
  - Performance of hardware, scalability
  - Tests of the grid infrastructure
- Construction of a large-scale prototype facility at CERN
  - Adequate complexity (number of physical CPUs, switches, disks etc.)
  - Suitable architecture – scope covers on- and off-line
  - Tests – scalability under different workloads, farm controls, data recording
  - Shared participation and exploitation by all 4 experiments
  - Planned for 2003 – sufficient time to get experience
  - Scale limited by ATLAS/CMS needs
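As a rough illustration of what "adaptation to Globus" involves, here is a hedged Python sketch of a wrapper that hands a simulation job to a GRAM gatekeeper using the globusrun client of that era; the gatekeeper contact string, executable path and RSL attributes are placeholders, not LHCb's actual production setup.

```python
# Illustrative sketch only: wrapping a Globus GRAM submission from a production
# script.  Contact string, paths and RSL attributes are hypothetical.
import subprocess

def submit_sim_job(gatekeeper, executable, n_events, run_number):
    """Build an RSL job description and submit it in batch mode via globusrun."""
    rsl = (f"&(executable={executable})"
           f"(arguments={n_events} {run_number})"
           f"(count=1)"
           f"(stdout=sim_{run_number}.log)")
    # 'globusrun -b -r <contact> <RSL>' submits the job without waiting for it.
    return subprocess.run(["globusrun", "-b", "-r", gatekeeper, rsl],
                          capture_output=True, text=True)

if __name__ == "__main__":
    result = submit_sim_job("lxplus.cern.ch/jobmanager-lsf",    # hypothetical contact
                            "/afs/cern.ch/lhcb/bin/simulate",   # hypothetical path
                            n_events=500, run_number=42)
    print(result.stdout or result.stderr)
```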

J. Harvey : Panel 3 – list of deliverables Slide 7 / 14
2003 Prototype
[Diagram: CERN Tier 0, CERN Tier 1, Tier 1 UK, Tier 1 Italy (x2), Tier 1 France, Tier 1 USA (x2), Tier 1 others?]
- CERN Tier 0 shared by all 4 experiments even in 2005 – large!
- Which are the Tier 1 centres that will participate in the prototype?
- Tier 1s decide how big they will be in the prototype

J. Harvey : Panel 3 – list of deliverables Slide 8 / 14
Computing Milestones
- 2H2001 – Computing MoU
  - Assignment of responsibilities (cost and manpower)
- 1H2002 – Data Challenge 1 (functionality test)
  - Test software framework machinery (database etc.)
  - Test basic grid infrastructure (>2 Tier 1 centres)
  - Scale of test ~10^6 events in ~1 month – modest capacity (see the sketch after this slide)
- 2H2002 – Computing TDR
- 1H2003 – Data Challenge 2 (stress tests using the prototype)
  - Stress tests of data processing, ~10^7 events in 1 month
  - Production (simulation) and chaotic (analysis) workloads
  - Repeat tests of the grid with all Tier 1 centres if possible
  - Include tests designed to gain experience with the online farm environment – data recording, controls
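The event targets translate directly into sustained capacity; the sketch below, assuming a purely illustrative per-event cost in SI95·s (not an LHCb measurement), shows the arithmetic behind "modest capacity" versus a stress test.

```python
# Rough sketch: sustained CPU capacity implied by producing N events in one month.
# The per-event cost and efficiency are invented placeholders.

SI95_SEC_PER_EVENT = 2500            # hypothetical cost to simulate + reconstruct one event
SECONDS_PER_MONTH = 30 * 24 * 3600

def required_capacity(n_events, efficiency=0.8):
    """Sustained SI95 capacity needed to process n_events in one month."""
    return n_events * SI95_SEC_PER_EVENT / (SECONDS_PER_MONTH * efficiency)

for label, n in [("DC1, ~1e6 events", 10**6), ("DC2, ~1e7 events", 10**7)]:
    print(f"{label}: ~{required_capacity(n):,.0f} SI95 sustained")
```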

J. Harvey : Panel 3 – list of deliverables Slide 9 / 14
Planning
- Need to establish participation of Tier 1 centres
  - In grid tests
  - In 2003 prototype tests
  - Coordination between Tier 1s and experiments (EU project etc.)
- Plan Data Challenges
  - Tasks, manpower schedule and milestones
  - Define technical details
  - Match to physics milestones – ongoing simulation work
- Plan resources needed at each centre between now and 2005
  - To participate in tests
  - To participate in simulation production

J. Harvey : Panel 3 – list of deliverables Slide 10 / 14
Costing and Manpower
- We need to identify our requirements at each centre and work with their local experts to cost our needs.
- Each centre should apply its own costing model to estimate the resources required to service LHCb needs as a function of time (a toy sketch follows this slide). This includes:
  - Unit costs of CPU, managed storage, network bandwidth, ...
  - Ongoing maintenance costs
  - Policy for replacement and upgrade
- The staffing levels required for LHCb at the Tier 0 and Tier 1s should be addressed jointly by LHCb and the centres' experts to determine:
  - Dedicated LHCb support staff
  - Cost of services, broken down into in-house and outsourced
- We can then compare these figures with our own estimates of the cost – ultimately we have responsibility for ensuring that the LHCb computing cost estimate is correct.
- We need to estimate the evolution of the cost from now to 2005, i.e. including the cost of the prototype.
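Purely as an illustration of such a costing model, a minimal sketch with invented unit prices (none of these figures come from the slides or from any centre) might look like:

```python
# Toy costing sketch: all unit costs, rates and cycles below are placeholders.

UNIT_COST = {
    "cpu_per_si95": 2.0,       # hypothetical CHF per SI95 of installed capacity
    "disk_per_tb":  8000.0,    # hypothetical CHF per TB of managed storage
}
MAINTENANCE_RATE = 0.10        # assumed yearly maintenance, fraction of capital
REPLACEMENT_YEARS = 3          # assumed hardware replacement cycle

def yearly_cost(cpu_si95, storage_tb):
    """Capital outlay plus yearly maintenance and replacement provisions."""
    capital = (cpu_si95 * UNIT_COST["cpu_per_si95"]
               + storage_tb * UNIT_COST["disk_per_tb"])
    maintenance = capital * MAINTENANCE_RATE
    replacement = capital / REPLACEMENT_YEARS
    return capital, maintenance, replacement

cap, maint, repl = yearly_cost(cpu_si95=500000, storage_tb=330)  # Tier 1 totals from slide 5
print(f"capital {cap:,.0f} CHF, maintenance/yr {maint:,.0f}, replacement/yr {repl:,.0f}")
```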

J. Harvey : Panel 3 – list of deliverables Slide 11 / 14
Guidelines for writing the Computing MoU
- Assignment of responsibility (cost and manpower) for the computing infrastructure
- Provision of computing resources
  - Breakdown by centres providing production capacity
  - Remote access policies of Tier 1 centres
  - Maintenance and operation costs
- Manpower for computing centres
  - Part of each centre's strategic planning, i.e. not for the LHCb MoU
- Manpower for CORE computing components
  - CORE software components – institutional agreements
  - Software production tasks – librarian, quality, planning, documentation, ...
  - Data production tasks – operation, bookkeeping
- Manpower for detector-specific computing
  - Follows institutional responsibility for the detector
- Timescale – write during 2001, sign when ready

J. Harvey : Panel 3 – list of deliverables Slide 12 / 14
Construction of prototype
- We need to understand how the construction of the prototype will be funded and how costs will be shared.
- This need not necessarily require an MoU if an informal agreement can be reached.
- We are expected to describe how we intend to use the prototype and to make a commitment on the manpower that will be used to provide the applications and to perform the tests.

J. Harvey : Panel 3 – list of deliverables Slide 13 / 14
The calendar for the summer activities
- 17th July – Review Panel 3
  - Location of Tier 1 centres
  - LHCb technical requirements – to be sent to centres for costing
  - Participation of Tier 1 centres in the prototype
- 22nd August – Review Panel 3
- 20th September – Review Panel 3
- 25-29 September – LHCb collaboration meeting
- 3rd October – Review Steering – report almost final
- 9th October – Review Panel 3
- 10th October – finalise review report and distribute
- 24th October – RRB meeting – discussion of report

J. Harvey : Panel 3 – list of deliverables Slide 14 / 14
Summary
- We need to refine the model:
  - Where are the Tier 1s?
  - How will physicists at each institute access data and do analysis?
- We need to revise the technical requirements
  - New estimates of simulation needs
  - Checks against the requirements of other experiments
- We need to define tests, MDCs and the use of the 2003 prototype
- We need to make cost and manpower estimates
  - Working with the various centres
- Next year we will need to write a Computing MoU defining the assignment of responsibility for costs and manpower
  - It will be signed only when all details are fixed