Slide 1: Planning LHCb computing infrastructure at CERN and at regional centres. F. Harris, 22 May 2000.

Slide 2: Talk Outline
- Reminder of the LHCb distributed computing model
- Requirements and planning for the growth of regional centres
- EU GRID proposal status and LHCb planning
- Large prototype proposal and possible LHCb uses
- Some news from LHCb (and other) activities

Slide 3: General Comments
- A draft LHCb Technical Note on the computing model exists
- New requirements estimates have been made (big changes in the MC requirements)
- Several presentations were made to the LHC computing review in March and May (and at the May 8 LHCb meeting)
- http://lhcb.cern.ch/computing/Steering/Reviews/LHCComputing2000/default.htm

Slide 4: Baseline Computing Model - Roles
- To provide an equitable sharing of the total computing load we can envisage a scheme such as the following
- After 2005, role of CERN (notionally 1/3):
  - to be the production centre for real data
  - to support physics analysis of real and simulated data by CERN-based physicists
- Role of the regional centres (notionally 2/3):
  - to be production centres for simulation
  - to support physics analysis of real and simulated data by local physicists
- Institutes with sufficient CPU capacity share the simulation load, with data archived at the nearest regional centre

Slide 5: Baseline computing model data flow (diagram)
- Production Centre (CPU for production; mass storage for RAW, ESD, AOD and TAG): generates raw data, reconstruction, production analysis, user analysis
- Regional Centres (CPU for analysis; mass storage for AOD and TAG): user analysis
- Institutes: selected user analyses
- Data volumes indicated: AOD and TAG from the production centre, real data ~80 TB/yr and simulated data ~120 TB/yr; AOD and TAG at institute level, 8-12 TB/yr
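The yearly volumes in the diagram above translate into modest sustained network rates. A minimal Python sketch of the conversion, assuming (my assumptions, not from the slide) that each volume is moved evenly over a full year and that 1 TB = 10^12 bytes:

```python
# Rough conversion of the yearly AOD+TAG volumes on slide 5 into sustained
# average network rates. Assumptions (not from the slides): transfers are
# spread evenly over a full year and 1 TB = 1e12 bytes.

SECONDS_PER_YEAR = 365 * 24 * 3600

def tb_per_year_to_mbit_per_s(tb_per_year: float) -> float:
    """Average rate in Mbit/s needed to move tb_per_year terabytes in one year."""
    bits = tb_per_year * 1e12 * 8
    return bits / SECONDS_PER_YEAR / 1e6

flows = {
    "AOD+TAG, real data (80 TB/yr)": 80,
    "AOD+TAG, simulated data (120 TB/yr)": 120,
    "AOD+TAG to an institute (8-12 TB/yr, upper value)": 12,
}

for label, volume in flows.items():
    print(f"{label}: ~{tb_per_year_to_mbit_per_s(volume):.1f} Mbit/s sustained")
```

With these assumptions the largest flow (120 TB/yr) corresponds to roughly 30 Mbit/s of sustained bandwidth.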

Slide 6: Physics - Plans for Simulation
- In 2000 and 2001 we will produce simulated events each year for detector optimisation studies in preparation of the detector TDRs (expected in 2001 and early 2002)
- In 2002 and 2003 studies will be made of the high-level trigger algorithms, for which we are required to produce simulated events each year
- In 2004 and 2005 we will start to produce very large samples of simulated events, in particular background, for which samples of 10^7 events are required
- This ongoing physics production work will be used as far as is practicable for testing the development of the computing infrastructure

Slide 7: Computing - MDC Tests of Infrastructure
- 2002: MDC 1 - application tests of grid middleware and farm management software using a real simulation and analysis of 10^7 B channel decay events. Several regional facilities will participate:
  - CERN, RAL, Lyon/CCIN2P3, Liverpool, INFN, ...
- 2003: MDC 2 - participate in the exploitation of the large-scale Tier-0 prototype to be set up at CERN
  - High-level triggering: online environment, performance
  - Management of systems and applications
  - Reconstruction: design and performance optimisation
  - Analysis: study of chaotic data access patterns
  - Stress tests of data models, algorithms and technology
- 2004: MDC 3 - start to install the event filter farm at the experiment, to be ready for commissioning of the detectors in 2004 and 2005

Slide 8: Cost of CPU, Disk and Tape
- Moore's-law evolution with time of the cost of CPU and storage; the scale in MSFr is for a facility sized to ATLAS requirements (> 3 x LHCb)
- At today's prices the total cost for LHCb (CERN and regional centres) would be ~60 MSFr
- In 2004 the cost would be ~ MSFr
- After 2005 the maintenance cost is ~5 MSFr/year
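As an illustration of the extrapolation behind this slide, a minimal Python sketch, assuming (my assumption, not stated on the slide) that price/performance halves roughly every 1.5 years for a fixed capacity; the 2004 value it prints is purely indicative and is not the slide's own figure:

```python
# Illustrative sketch of a Moore's-law cost extrapolation for slide 8.
# Assumption (mine, not from the slides): price/performance for CPU and
# storage halves roughly every 1.5 years, with the capacity requirement fixed.

def cost_after(initial_cost_msfr: float, years: float, halving_time_years: float = 1.5) -> float:
    """Cost of buying the same capacity 'years' from now, given a price halving time."""
    return initial_cost_msfr * 0.5 ** (years / halving_time_years)

today_cost = 60.0  # ~60 MSFr for LHCb (CERN + regional centres) at year-2000 prices (slide 8)

for year in (2002, 2004, 2005):
    print(f"{year}: ~{cost_after(today_cost, year - 2000):.0f} MSFr for the same capacity")
```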

Slide 9: Growth in Requirements to Meet Simulation Needs

Slide 10: Cost per Regional Centre for Simulation
- Assume there are 5 regional centres (UK, IN2P3, INFN, CERN, plus a consortium of NIKHEF, Russia, etc.)
- Assume the costs are shared equally

Slide 11: EU GRID Proposal Status
- GRIDs: software to manage all aspects of distributed computing (security and authorisation, resource management, monitoring), with interfaces to high energy physics applications
- The proposal was submitted on May 9
  - Main signatories (CERN, France, Italy, UK, Netherlands, ESA) plus associate signatories (Spain, Czech Republic, Hungary, Portugal, Scandinavia, ...)
  - The project is composed of Work Packages, to which countries provide effort
- LHCb involvement
  - Depends on the country
  - Essentially comes via the 'Testbeds' and 'HEP applications' work packages

Slide 12: EU Grid Work Packages
- Middleware
  - Grid work scheduling: C. Vistoli (INFN)
  - Grid data management: B. Segal (CERN/IT)
  - Grid application monitoring: R. Middleton (RAL)
  - Fabric management: T. Smith (CERN/IT)
  - Mass storage management: O. Barring (CERN/IT)
- Infrastructure
  - Testbed and demonstrators (LHCb involved): F. Etienne (Marseille)
  - Network services: C. Michau (CNRS)
- Applications
  - HEP (LHCb involved): H. Hoffmann (CERN)
  - Earth observation: L. Fusco (ESA)
  - Biology: C. Michau (CNRS)
- Management
  - Project management: F. Gagliardi (CERN/IT)

Slide 13: Grid LHCb WP - Grid Testbed (DRAFT)
- The MAP farm at Liverpool has 300 processors and would take about 4 months to generate the full sample of events
- All data generated (~3 TB) would be transferred to RAL for archiving (UK regional facility)
- All AOD and TAG datasets would be dispatched from RAL to the other regional centres, such as Lyon, CERN and INFN
- Physicists run jobs at the regional centre, or ship AOD and TAG data to their local institute and run jobs there. The ESD for a fraction (~10%) of the events is also copied for systematic studies (~100 GB)
- The resulting data volumes to be shipped between facilities over 4 months would be as follows:
  - Liverpool to RAL: 3 TB (RAW, ESD, AOD and TAG)
  - RAL to Lyon/CERN/...: 0.3 TB (AOD and TAG)
  - Lyon to an LHCb institute: 0.3 TB (AOD and TAG)
  - RAL to an LHCb institute: 100 GB (ESD for systematic studies)
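These 4-month volumes correspond to very modest sustained network rates. A minimal Python sketch of the conversion, assuming (for illustration only) that each transfer is spread evenly over the full 4 months and that 1 TB = 10^12 bytes:

```python
# Rough sustained-bandwidth estimate for the testbed transfers listed on
# slide 13, assuming each volume is moved evenly over the full 4 months
# (an assumption for illustration) and 1 TB = 1e12 bytes.

FOUR_MONTHS_S = 4 * 30 * 24 * 3600  # ~4 months in seconds

transfers_tb = {
    "Liverpool -> RAL (RAW, ESD, AOD, TAG)": 3.0,
    "RAL -> Lyon/CERN/... (AOD, TAG)": 0.3,
    "Lyon -> LHCb institute (AOD, TAG)": 0.3,
    "RAL -> LHCb institute (ESD sample)": 0.1,
}

for route, tb in transfers_tb.items():
    mbit_s = tb * 1e12 * 8 / FOUR_MONTHS_S / 1e6
    print(f"{route}: ~{mbit_s:.2f} Mbit/s sustained")
```

Under these assumptions even the largest transfer (3 TB from Liverpool to RAL) averages only a few Mbit/s.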

Slide 14: Milestones for the 3-year EU GRID Project Starting January 2001
- Mx1 (June 2001): Coordination with the other WPs. Identification of use cases and the minimal grid services required at every step of the project. Planning of the exploitation of the GRID steps.
- Mx2 (Dec 2001): Development of use-case programs. Interface with existing GRID services as planned in Mx1.
- Mx3 (June 2002): Run #0 executed (distributed Monte Carlo production and reconstruction) and feedback provided to the other WPs.
- Mx4 (Dec 2002): Run #1 executed (distributed analysis) and corresponding feedback to the other WPs. WP workshop.
- Mx5 (June 2003): Run #2 executed, including additional GRID functionality.
- Mx6 (Dec 2003): Run #3 extended to a larger user community.

Slide 15: 'Agreed' LHCb Resources Going into the EU GRID Project over 3 Years
- FTE equivalent per year by country:
  - CERN: 1
  - France: 1
  - Italy: 1
  - UK: 1
  - Netherlands: 0.5
  - These people should work together... an LHCb GRID club!
- This is for the HEP applications WP: interfacing our physics software into the GRID and running it in testbed environments
- Some effort may also go into the testbed WP (it is not yet clear whether the LHCb countries have signed up for this)

Slide 16: Grid Computing - LHCb Planning
- Now: forming a GRID technical working group with representatives from the regional facilities
  - Liverpool (1), RAL (2), CERN (1), IN2P3 (?), INFN (?), ...
- June 2000: define the simulation samples needed in the coming years
- July 2000: install Globus software in the LHCb regional centres and start to study integration with the LHCb production tools
- End 2000: define grid services for farm production
- June 2001: implementation of the basic grid services for farm production provided by the EU Grid project
- Dec 2001: MDC 1 - small production to test the software implementation (GEANT4)
- June 2002: MDC 2 - large production of signal/background samples for tests of the world-wide analysis model
- June 2003: MDC 3 - stress/scalability tests on the large-scale Tier-0 facility; tests of the event filter farm, farm control/management, and data throughput

Slide 17: Prototype Computing Infrastructure
- Aim to build a prototype production facility at CERN in 2003 (a proposal coming out of the LHC computing review)
- Scale of the prototype limited by what is affordable: ~0.5 of the number of components of the ATLAS system
  - Cost ~20 MSFr
  - Joint project between the four experiments
  - Access to the facility for tests to be shared
- Need to develop a distributed network of resources involving the other regional centres and deploy data production software over the infrastructure for tests in 2003
- Results of this prototype deployment to be used as the basis for the Computing MoU

Slide 18: Tests Using the Tier-0 Prototype in 2003
- We intend to make use of the Tier-0 prototype planned for construction in 2003 to make stress tests of both hardware and software
- We will prepare realistic examples of two types of application:
  - Tests designed to gain experience with the online farm environment
  - Production tests of simulation, reconstruction and analysis

Slide 19: Event Filter Farm Architecture (diagram)
- A switch functioning as the readout network connects ~100 readout units (RUs) to sub-farms, each with a sub-farm controller (SFC) and of order 10 processing PCs
- Storage controller(s), the controls system and storage/CDR are attached via the controls network
- Network technologies: readout network (GbE?), sub-farm network (Ethernet), controls network (Ethernet)
- Legend: SFC = Sub-Farm Controller, CPC = Control PC, CPU = Work CPU

Slide 20: Event Filter Farm Testing/Verification (diagram)
- Same architecture as slide 19 (switch/readout network, RUs, SFCs, CPCs, storage controller(s), controls system, storage/CDR)
- The legend indicates which parts are covered by: small-scale lab tests plus simulation, full-scale lab tests, and large/full-scale tests using the farm prototype

Slide 21: Scalability Tests for Simulation and Reconstruction
- Test writing of reconstructed + raw data at 200 Hz in the online farm environment
- Test writing of reconstructed + simulated data in the offline Monte Carlo farm environment
  - Population of the event database from multiple input processes
- Test efficiency of the event and detector data models
  - Access to conditions data from multiple reconstruction jobs
  - Online calibration strategies and distribution of the results to multiple reconstruction jobs
  - Stress testing of reconstruction to identify hot spots, weak code, etc.
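A back-of-envelope estimate of the write throughput implied by the 200 Hz online test; the per-event sizes used here are illustrative assumptions and are not taken from the slides:

```python
# Back-of-envelope throughput for the 200 Hz raw+reconstructed write test on
# slide 21. Event sizes are assumptions for illustration only (not from the
# slides): ~100 kB of raw data and ~100 kB of reconstructed output per event.

EVENT_RATE_HZ = 200
RAW_KB = 100    # assumed raw event size
RECO_KB = 100   # assumed reconstructed event size

bytes_per_event = (RAW_KB + RECO_KB) * 1e3
mb_per_s = EVENT_RATE_HZ * bytes_per_event / 1e6
tb_per_day = EVENT_RATE_HZ * bytes_per_event * 86400 / 1e12

print(f"Sustained write rate: ~{mb_per_s:.0f} MB/s")
print(f"Volume per day of continuous running: ~{tb_per_day:.1f} TB")
```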

Slide 22: Scalability Tests for Analysis
- Stress test of the event database
  - Multiple concurrent accesses by "chaotic" analysis jobs
- Optimisation of the data model
  - Study the data access patterns of multiple, independent, concurrent analysis jobs
  - Modify the event and conditions data models as necessary
  - Determine data clustering strategies
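As a toy illustration of why clustering strategies matter for chaotic access, the sketch below simulates several independent jobs each reading a random subset of events and counts how many storage clusters they touch; all parameters are invented for illustration and are not LHCb numbers:

```python
# Toy model of "chaotic" analysis access (slide 22): several independent jobs
# each read a random subset of events, and we count how many storage clusters
# they touch. All numbers are illustrative assumptions, not LHCb parameters.
import random

N_EVENTS = 1_000_000
EVENTS_PER_CLUSTER = 500   # assumed clustering granularity
N_JOBS = 20
EVENTS_PER_JOB = 10_000    # assumed random selection per analysis job (1% of events)

random.seed(1)
touched = set()
for _ in range(N_JOBS):
    sample = random.sample(range(N_EVENTS), EVENTS_PER_JOB)
    touched.update(e // EVENTS_PER_CLUSTER for e in sample)

n_clusters = N_EVENTS // EVENTS_PER_CLUSTER
print(f"Clusters touched: {len(touched)} of {n_clusters} "
      f"({100 * len(touched) / n_clusters:.0f}%)")
# With uncorrelated random selections, almost every cluster is read even though
# each job uses only 1% of the events, which is why the access patterns need to
# be studied and the clustering strategy chosen to match them.
```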

Slide 23: Work Required Now for Planning the 2003/4 Prototypes (request from the Resource panel of the LHC review)
- Plan for the evolution to the Tier-0/1 prototypes - who will work on this from the institutes?
  - Hardware evolution
  - Spending profile
  - Organisation (sharing of responsibilities between the collaboration, CERN and the centres)
  - Description of the Mock Data Challenges
- Draft of the proposal (hardware and software) for prototype construction
  - By end 2000?
  - If the Tier-0 prototype is shared, a single proposal for the 4 experiments??

Slide 24: Some News from LHCb RC Activities (and other)
- LHCb/Italy is currently preparing a case to be submitted to INFN in June (compatible with the planning shown in this talk)
- Liverpool
  - Increased the COMPASS nodes to 6 (3 TB of disk)
  - Bidding for a 1000-PC system with 800 MHz and 70 GB per processor
  - Globus should be fully installed soon
  - Collaborating with Cambridge Astronomy to test the Globus package
- Other experiments and the GRID
  - CDF and BaBar are planning to set up GRID prototypes soon
- GRID workshop in September (date and details to be confirmed)
- Any other news?