Planning LHCb computing infrastructure at CERN and at regional centres (F. Harris, 22 May 2000)

Presentation transcript:

Slide 1: Planning LHCb computing infrastructure at CERN and at regional centres (F. Harris, 22 May 2000)

Slide 2: Talk Outline
- Reminder of the LHCb distributed computing model
- Requirements and planning for 2000-2005 (growth of regional centres)
- EU GRID proposal status and LHCb planning
- Large prototype proposal and possible LHCb uses
- Some news from LHCb (and other) activities

Slide 3: General Comments
- A draft LHCb Technical Note for the computing model exists
- New requirements estimates have been made (big changes in MC requirements)
- Several presentations have been made to the LHC computing review in March and May (and at the May 8 LHCb meeting)
- http://lhcb.cern.ch/computing/Steering/Reviews/LHCComputing2000/default.htm

Slide 4: Baseline Computing Model - Roles
- To provide an equitable sharing of the total computing load, one can envisage a scheme such as the following
- After 2005, role of CERN (notionally 1/3):
  - production centre for real data
  - support physics analysis of real and simulated data by CERN-based physicists
- Role of regional centres (notionally 2/3):
  - production centres for simulation
  - support physics analysis of real and simulated data by local physicists
- Institutes with sufficient CPU capacity share the simulation load, with data archived at the nearest regional centre

Slide 5: [Diagram of the baseline computing model and its data flows]
- Production Centre: generates raw data; reconstruction, production analysis, user analysis; CPU for production; mass storage for RAW, ESD, AOD and TAG
- Regional Centres: user analysis; CPU for analysis; mass storage for AOD and TAG; receive AOD/TAG from the production centre (real data: 80 TB/yr, simulation: 120 TB/yr)
- Institutes: selected user analyses; CPU and data servers; receive AOD/TAG from their regional centre (8-12 TB/yr)
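The annual volumes in this diagram translate into fairly modest sustained network rates. The sketch below is my own back-of-envelope conversion (not part of the original talk), using only the AOD/TAG figures quoted above.

```python
# Back-of-envelope: sustained network rate implied by the annual AOD/TAG volumes
# quoted in the diagram (illustrative only, not from the original talk).

SECONDS_PER_YEAR = 365 * 24 * 3600

def sustained_mbit_per_s(tb_per_year: float) -> float:
    """Average rate in Mbit/s needed to move tb_per_year TB spread evenly over a year."""
    return tb_per_year * 1e12 * 8 / SECONDS_PER_YEAR / 1e6

flows = [
    ("Real-data AOD/TAG to a regional centre", 80.0),
    ("Simulated AOD/TAG to a regional centre", 120.0),
    ("AOD/TAG to an institute (upper figure)", 12.0),
]
for label, tb in flows:
    print(f"{label}: ~{sustained_mbit_per_s(tb):.0f} Mbit/s sustained")
```

With these volumes the largest flow works out to roughly 30 Mbit/s averaged over a year, i.e. well within wide-area network capacities then being planned.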

Slide 6: Physics - Plans for Simulation 2000-2005
- In 2000 and 2001 we will produce 3x10^6 simulated events each year for detector optimisation studies, in preparation for the detector TDRs (expected in 2001 and early 2002).
- In 2002 and 2003 studies will be made of the high-level trigger algorithms, for which we are required to produce 6x10^6 simulated events each year.
- In 2004 and 2005 we will start to produce very large samples of simulated events, in particular background, for which samples of 10^7 events are required (a rough CPU estimate follows this slide).
- This ongoing physics production work will be used as far as is practicable for testing the development of the computing infrastructure.
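For a rough sense of scale, the sketch below estimates the CPU capacity that a 10^7-event sample implies. This is my own illustration; the per-event simulation time, production window and efficiency are assumptions, not figures from the talk.

```python
# Hypothetical CPU estimate for one 1e7-event simulation campaign.
# All inputs except the event count are illustrative assumptions.

events        = 1e7    # sample size quoted on the slide
sec_per_event = 300    # assumed simulation + digitisation time per event (s)
wall_months   = 4      # assumed length of the production window
efficiency    = 0.8    # assumed average farm utilisation

wall_seconds = wall_months * 30 * 24 * 3600
processors   = events * sec_per_event / (wall_seconds * efficiency)
print(f"~{processors:.0f} processors kept busy for {wall_months} months")
```

With these assumed inputs the answer comes out at a few hundred processors, which is at least consistent with the Liverpool MAP farm estimate quoted on slide 13.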

Slide 7: Computing - MDC Tests of Infrastructure
- 2002: MDC 1 - application tests of grid middleware and farm management software, using a real simulation and analysis of 10^7 B channel decay events. Several regional facilities will participate: CERN, RAL, Lyon/CCIN2P3, Liverpool, INFN, ...
- 2003: MDC 2 - participate in the exploitation of the large-scale Tier-0 prototype to be set up at CERN:
  - High-level triggering: online environment, performance
  - Management of systems and applications
  - Reconstruction: design and performance optimisation
  - Analysis: study of chaotic data access patterns
  - Stress tests of data models, algorithms and technology
- 2004: MDC 3 - start to install the event filter farm at the experiment, to be ready for commissioning of the detectors in 2004 and 2005

Slide 8: Cost of CPU, Disk and Tape
- Moore's Law evolution with time for the cost of CPU and storage; the scale in MSFr is for a facility sized to ATLAS requirements (> 3 x LHCb)
- At today's prices the total cost for LHCb (CERN and regional centres) would be ~60 MSFr
- In 2004 the cost would be ~10-20 MSFr
- After 2005 the maintenance cost is ~5 MSFr/year
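The drop from ~60 MSFr at today's prices to ~10-20 MSFr in 2004 is what a simple Moore's-law extrapolation gives. The sketch below is my own illustration; the price-halving time is an assumption, not a figure from the slide.

```python
# Simple Moore's-law price extrapolation for the LHCb facility cost.
# The 1.5-year halving time of price per unit capacity is an assumed input.

cost_today_msfr  = 60    # total LHCb cost at mid-2000 prices (from the slide)
halving_time_yrs = 1.5   # assumed halving time for the cost of CPU/disk/tape capacity
years_ahead      = 4     # mid-2000 to 2004

projected = cost_today_msfr * 0.5 ** (years_ahead / halving_time_yrs)
print(f"Projected 2004 cost: ~{projected:.0f} MSFr")  # ~9 MSFr, same ballpark as the quoted 10-20 MSFr
```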

Slide 9: Growth in Requirements to Meet Simulation Needs

Slide 10: Cost per Regional Centre for Simulation
- Assume there are 5 regional centres (UK, IN2P3, INFN, CERN, plus a consortium of Nikhef, Russia, etc.)
- Assume costs are shared equally

Slide 11: EU GRID Proposal Status (http://grid.web.cern.ch/grid/)
- GRID software to manage all aspects of distributed computing (security and authorisation, resource management, monitoring), with interfaces to high-energy physics applications
- Proposal was submitted May 9
  - Main signatories (CERN, France, Italy, UK, Netherlands, ESA) plus associate signatories (Spain, Czech Republic, Hungary, Portugal, Scandinavia, ...)
  - Project composed of Work Packages (to which countries provide effort)
- LHCb involvement
  - Depends on the country
  - Essentially comes via the 'Testbeds' and 'HEP applications' work packages

Slide 12: EU Grid Work Packages
- Middleware
  - Grid work scheduling: C. Vistoli (INFN)
  - Grid data management: B. Segal (IT)
  - Grid application monitoring: R. Middleton (RAL)
  - Fabric management: T. Smith (IT)
  - Mass storage management: O. Barring (IT)
- Infrastructure
  - Testbed and demonstrators (LHCb in): F. Etienne (Marseille)
  - Network services: C. Michau (CNRS)
- Applications
  - HEP (LHCb in): H. Hoffmann (CERN)
  - Earth observation: L. Fusco (ESA)
  - Biology: C. Michau (CNRS)
- Management
  - Project management: F. Gagliardi (IT)

Slide 13: Grid LHCb WP - Grid Testbed (DRAFT)
- The MAP farm at Liverpool has 300 processors; it would take about 4 months to generate the full sample of events
- All data generated (~3 TB) would be transferred to RAL for archive (UK regional facility)
- All AOD and TAG datasets would be dispatched from RAL to other regional centres, such as Lyon, CERN and INFN
- Physicists run jobs at the regional centre, or ship AOD and TAG data to their local institute and run jobs there; the ESD for a fraction (~10%) of events is also copied for systematic studies (~100 GB)
- The resulting data volumes to be shipped between facilities over 4 months would be as follows:
  - Liverpool to RAL: 3 TB (RAW, ESD, AOD and TAG)
  - RAL to Lyon/CERN/...: 0.3 TB (AOD and TAG)
  - Lyon to LHCb institute: 0.3 TB (AOD and TAG)
  - RAL to LHCb institute: 100 GB (ESD for systematic studies)
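As a quick feasibility check (my own arithmetic, not from the talk), the sketch below converts each of these shipments into the average sustained rate it would need if spread evenly over the 4-month production window.

```python
# Average sustained rates implied by the testbed shipments listed above,
# assuming transfers are spread evenly over the 4-month production.

WINDOW_SECONDS = 4 * 30 * 24 * 3600  # ~4 months

shipments_gb = {
    "Liverpool -> RAL (RAW, ESD, AOD, TAG)": 3000,
    "RAL -> Lyon/CERN/... (AOD, TAG)":        300,
    "Lyon -> LHCb institute (AOD, TAG)":      300,
    "RAL -> LHCb institute (ESD sample)":     100,
}
for route, gigabytes in shipments_gb.items():
    mbit_s = gigabytes * 1e9 * 8 / WINDOW_SECONDS / 1e6
    print(f"{route}: ~{mbit_s:.1f} Mbit/s sustained")
```

Even the largest shipment needs only a few Mbit/s sustained, so the testbed exercise is dominated by middleware and bookkeeping issues rather than raw bandwidth.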

Slide 14: Milestones for the 3-year EU GRID project starting January 2001
- Mx1 (June 2001): Coordination with the other WPs. Identification of use cases and of the minimal grid services required at every step of the project. Planning of the exploitation of the GRID steps.
- Mx2 (Dec 2001): Development of use-case programs. Interface with existing GRID services as planned in Mx1.
- Mx3 (June 2002): Run #0 executed (distributed Monte Carlo production and reconstruction) and feedback provided to the other WPs.
- Mx4 (Dec 2002): Run #1 executed (distributed analysis) and corresponding feedback to the other WPs. WP workshop.
- Mx5 (June 2003): Run #2 executed, including additional GRID functionality.
- Mx6 (Dec 2003): Run #3 extended to a larger user community.

Slide 15: 'Agreed' LHCb resources going into the EU GRID project over 3 years
- FTE equivalent per year by country:
  - CERN: 1
  - France: 1
  - Italy: 1
  - UK: 1
  - Netherlands: 0.5
  - These people should work together... an LHCb GRID club!
- This is for the HEP applications WP: interfacing our physics software to the GRID and running it in testbed environments
- Some effort may also go into the testbed WP (it is not yet known whether the LHCb countries have signed up for this)

Slide 16: Grid Computing - LHCb Planning
- Now: forming a GRID technical working group with representatives from the regional facilities: Liverpool (1), RAL (2), CERN (1), IN2P3 (?), INFN (?), ...
- June 2000: define the simulation samples needed in the coming years
- July 2000: install the Globus software in the LHCb regional centres and start to study integration with the LHCb production tools (a minimal submission sketch follows this slide)
- End 2000: define grid services for farm production
- June 2001: implementation of basic grid services for farm production, provided by the EU Grid project
- Dec 2001: MDC 1 - small production for a test of the software implementation (GEANT4)
- June 2002: MDC 2 - large production of signal/background samples for tests of the world-wide analysis model
- June 2003: MDC 3 - stress/scalability tests on the large-scale Tier-0 facility; tests of the event filter farm, farm control/management and data throughput.
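As a rough sketch of what integrating Globus with production tools could look like at the simplest level, the snippet below wraps a remote job submission with the globus-job-run client from the Globus Toolkit of that era. The gatekeeper contact, script name and argument convention are hypothetical illustrations, not LHCb's actual production tools.

```python
# Hypothetical wrapper handing one simulation job to a Globus GRAM gatekeeper.
# globus-job-run is a real Globus Toolkit client, but the contact string, the
# job script and its arguments below are made up for illustration.
import subprocess

def submit_simulation_job(gatekeeper: str, job_script: str, n_events: int) -> int:
    """Run one simulation job on a remote farm node via its GRAM gatekeeper."""
    cmd = ["globus-job-run", gatekeeper, job_script, str(n_events)]
    return subprocess.call(cmd)          # returns the remote job's exit code

# Example (hypothetical host and script):
# submit_simulation_job("farm.example.ac.uk/jobmanager", "/lhcb/prod/run_sim.sh", 500)
```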

Slide 17: Prototype Computing Infrastructure
- Aim to build a prototype production facility at CERN in 2003 (proposal coming out of the LHC computing review)
- Scale of the prototype limited by what is affordable: ~0.5 of the number of components of the ATLAS system
  - Cost ~20 MSFr
  - Joint project between the four experiments
  - Access to the facility for tests to be shared
- Need to develop a distributed network of resources involving the other regional centres, and to deploy data production software over the infrastructure for tests in 2003
- Results of this prototype deployment to be used as the basis for the Computing MoU

Slide 18: Tests Using the Tier-0 Prototype in 2003
- We intend to make use of the Tier-0 prototype planned for construction in 2003 to make stress tests of both hardware and software
- We will prepare realistic examples of two types of application:
  - Tests designed to gain experience with the online farm environment
  - Production tests of simulation, reconstruction and analysis

Slide 19: Event Filter Farm Architecture [diagram]
- ~100 readout units (RU) feed a switch that functions as the readout network (candidate technology: GbE?)
- Each branch has a sub-farm controller (SFC) serving a sub-farm of work CPUs and ~10 control PCs (CPC), connected by a sub-farm network (Ethernet)
- Storage controller(s) and storage/CDR, plus a controls system on a separate controls network (Ethernet)
- Legend: SFC = sub-farm controller, CPC = control PC, CPU = work CPU
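To indicate the kind of aggregate throughput a farm of this shape has to sustain, here is a small estimate. The input rate and event size are assumptions for illustration; only the ~100 sub-farm branches come from the slide.

```python
# Illustrative readout throughput for an event filter farm with ~100 branches.
# Trigger rate and event size are assumed inputs, not figures from this talk.

input_rate_hz = 40_000   # assumed event rate into the filter farm
event_size_kb = 100      # assumed average event size
n_branches    = 100      # ~100 RU/SFC branches, as sketched on the slide

total_mb_s  = input_rate_hz * event_size_kb / 1000
per_sfc_mbs = total_mb_s / n_branches
print(f"Aggregate through the readout switch: ~{total_mb_s / 1000:.1f} GB/s")
print(f"Per sub-farm controller:              ~{per_sfc_mbs:.0f} MB/s")
```

Under these assumptions each sub-farm controller sees a few tens of MB/s, which is why commodity Ethernet is considered adequate for the sub-farm network while the central switch needs Gigabit-class links.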

Slide 20: Event Filter Farm Testing/Verification [diagram, same architecture as slide 19]
- Each component is mapped to one of three test categories:
  - Small-scale lab tests plus simulation
  - Full-scale lab tests
  - Large/full-scale tests using the farm prototype

Slide 21: Scalability Tests for Simulation and Reconstruction
- Test writing of reconstructed + raw data at 200 Hz in the online farm environment (a rough rate estimate follows this slide)
- Test writing of reconstructed + simulated data in the offline Monte Carlo farm environment
  - Population of the event database from multiple input processes
- Test efficiency of the event and detector data models
  - Access to conditions data from multiple reconstruction jobs
  - Online calibration strategies and distribution of results to multiple reconstruction jobs
  - Stress testing of the reconstruction to identify hot spots, weak code, etc.
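For orientation, the 200 Hz writing test implies the following sustained storage rates. The arithmetic is mine; the combined event size is an assumption, not a number from the slide.

```python
# Storage rate implied by writing reconstructed + raw data at 200 Hz.
# The combined per-event size is an assumed input.

rate_hz       = 200   # writing rate quoted on the slide
event_size_kb = 200   # assumed RAW + reconstructed size per event

mb_per_s   = rate_hz * event_size_kb / 1000
tb_per_day = mb_per_s * 86_400 / 1e6
print(f"~{mb_per_s:.0f} MB/s sustained, ~{tb_per_day:.1f} TB/day into the event database")
```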

Slide 22: Scalability Tests for Analysis
- Stress test of the event database
  - Multiple concurrent accesses by "chaotic" analysis jobs
- Optimisation of the data model
  - Study data access patterns of multiple, independent, concurrent analysis jobs
  - Modify event and conditions data models as necessary
  - Determine data clustering strategies

Slide 23: Work Required Now for Planning the 2003/4 Prototypes (request from the Resource panel of the LHC review)
- Plan for evolution to the prototypes (Tier-0/1); who will work on this from the institutes?
  - Hardware evolution
  - Spending profile
  - Organisation (sharing of responsibilities between the collaboration, CERN and the centres)
  - Description of the Mock Data Challenges
- Draft of a proposal (hardware and software) for prototype construction
  - By end 2000?
  - If the Tier-0 prototype is shared, a single proposal for the 4 experiments?

Slide 24: Some News from LHCb Regional Centre Activities (and other)
- LHCb/Italy is currently preparing a case to be submitted to INFN in June (compatible with the planning shown in this talk)
- Liverpool
  - Increased COMPASS nodes to 6 (3 TB of disk)
  - Bidding for a 1000-PC system with 800 MHz processors and 70 GB of disk per processor
  - Globus should be fully installed soon
  - Collaborating with Cambridge Astronomy to test the Globus package
- Other experiments and the GRID
  - CDF and BaBar are planning to set up GRID prototypes soon
- GRID workshop in September (date and details to be confirmed)
- Any other news?

