Computing Infrastructure Status
LHCb Computing Status, LHCb LHCC mini-review, February 2008

The LHCb Computing Model: a reminder
- Simulation uses non-Tier1 CPU resources
  - MC data are stored at Tier0/1 sites; there is no permanent storage at the (non-Tier1) simulation sites
- Real data are processed at Tier0/1 sites (up to and including analysis)

The life of a real LHCb event
- Written out to a RAW file by the Online system
  - Files are 2 GB / 60,000 events / 30 seconds on average
- RAW file is transferred to Tier0 (CERN Castor)
  - Migrated to tape, checksum verified (the file can then be deleted in the Online system)
- RAW file is transferred to one of the Tier1s
  - Migrated to tape
- RAW file is reconstructed at Tier0 or a Tier1
  - About 20 hours to create an rDST
  - rDST stored locally, with migration to tape
- When enough (4?) rDSTs are ready, they are stripped at the local site (see the sketch below)
  - Data streams are created; where possible, files are merged into 2 GB files
  - Merged streamed DSTs and ETCs are created, stored locally and at CERN with tape migration
  - DSTs and ETCs are distributed to the 5 other Tier1s
- Analysis takes place on these stripped DSTs at all Tier0/1 sites
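A minimal illustration of the "strip when enough rDSTs are ready" trigger, assuming the threshold of 4 rDSTs quoted on the slide; the function and file names are hypothetical placeholders, not DIRAC code:

```python
# Toy version of the "strip when 4 rDSTs are ready" rule described above.
def group_rdsts_for_stripping(ready_rdsts, group_size=4):
    """Split the rDSTs accumulated at a site into full groups of `group_size`;
    each full group becomes one stripping job, the remainder keeps waiting."""
    n_full = len(ready_rdsts) // group_size
    groups = [ready_rdsts[i * group_size:(i + 1) * group_size] for i in range(n_full)]
    leftover = ready_rdsts[n_full * group_size:]
    return groups, leftover

jobs, waiting = group_rdsts_for_stripping(["rdst_1", "rdst_2", "rdst_3", "rdst_4", "rdst_5"])
# -> one stripping job over rdst_1..rdst_4; rdst_5 waits for three more files
```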

A few numbers
- RAW file: 2 GB, 60,000 events, 30 s
  - 1,500 files per day, 3 TB, 70 MB/s
  - 90 Mevts per day
- Reconstruction: 20 h per file
  - 1,500 CPUs permanently busy (200 to 300 per site)
- rDST file: 2 GB
- Stripped DSTs
  - The Computing TDR assumes very aggressive selection numbers: a factor 10 overall reduction (after HLT)
  - Number of streams still to be defined; assume 20 balanced streams
  - One 2 GB DST, with events originating from 4 Mevts of RAW
  - Need to merge 16 DSTs (of 240 kevts each)
  - 25 streamed DSTs per day per stream (50 GB)
  - For 120 days of running, only 6 TB per stream (3,000 files)
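These figures can be cross-checked with a few lines of arithmetic. A minimal sketch, assuming the roughly 12.5 hours/day of data taking implied by 1,500 files of 30 s each (the duty cycle is an inference, not stated on the slide):

```python
# Back-of-the-envelope check of the RAW numbers quoted above.
GB = 1e9
file_size = 2 * GB        # RAW file size
events_per_file = 60_000
seconds_per_file = 30     # average time to fill one RAW file
files_per_day = 1_500     # quoted on the slide

print(f"rate while running: {file_size / seconds_per_file / 1e6:.0f} MB/s")        # ~67 ("70 MB/s")
print(f"RAW volume per day: {files_per_day * file_size / 1e12:.1f} TB")            # 3.0 TB
print(f"events per day:     {files_per_day * events_per_file / 1e6:.0f} Mevts")    # 90 Mevts
print(f"data taking:        {files_per_day * seconds_per_file / 3600:.1f} h/day")  # ~12.5 h
```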

The LHCb Computing Grid (1)
- The integrated system is called DIRAC
  - DIRAC is a community (LHCb) Grid solution, built on top of the Grid infrastructure services (storage and computing resources and services)
  - DIRAC provides a unified framework for all services and tools
  - DIRAC2 has been used since 2004 for production and since 2006 for analysis
  - DIRAC3 is being commissioned (a full re-engineering)
- DIRAC provides high-level generic services
  - Data Management
    - Fully reliable file transfers: retries until successful
    - Based on the gLite FTS (File Transfer Service), using specific channels and network links (the LHC Optical Private Network, OPN)
    - FTS is used for all transfers except file upload from Worker Nodes, which uses simple file upload commands
    - Implements a failover and recovery mechanism for all operations: actions that cannot be completed are registered to be performed later, e.g. when a site becomes available again (see the sketch below)
    - Registration of all copies of files ("replicas"); for users, files only have a Logical File Name (LFN)
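A minimal sketch of the failover and deferred-recovery idea, using hypothetical helper functions (try_upload, register_request); it is not the actual DIRAC data management API:

```python
import time

def upload_with_failover(local_path, lfn, destination_se, failover_se,
                         try_upload, register_request, max_retries=3):
    """Upload a file to its destination storage element (SE), retrying a few
    times; if the destination stays unreachable, park the file on a failover
    SE and record the pending transfer so it can be replayed later."""
    for attempt in range(max_retries):
        if try_upload(local_path, destination_se):
            # Success: remember where the replica lives, under its LFN.
            register_request({"op": "RegisterReplica", "lfn": lfn, "se": destination_se})
            return destination_se
        time.sleep(30 * (attempt + 1))   # simple back-off between retries

    # Destination unavailable: use the failover space and defer the move.
    if try_upload(local_path, failover_se):
        register_request({"op": "ReplicateAndRegister", "lfn": lfn,
                          "source": failover_se, "target": destination_se})
        return failover_se
    raise RuntimeError(f"upload of {lfn} failed on all storage elements")
```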

The LHCb Computing Grid (2)
- Workload Management
  - Prepares (pre-stages) data if they are not available online
  - Submits jobs to sites where the data are available, using the EGEE infrastructure (gLite WMS); see the sketch below
  - Monitors the progress of the job
  - Uploads output data and log files
  - Allows re-submission if a job fails
- DIRAC LHCb-specific services
  - Data Management
    - Registration of the provenance of files (Bookkeeping), used to retrieve the list of files matching a user's criteria; queries return LFNs
  - Production tools
    - Definition of (complex) processing workflows, e.g. run 4 Gauss, 3 Boole and 1 Brunel "steps"
    - Creation of jobs, either manually (by the Production Manager) or automatically using pre-defined criteria (e.g. when 4 rDSTs are ready, strip them using this workflow)
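A sketch of the data-driven submission pattern: a bookkeeping-style query returns LFNs, a replica catalogue says where copies live, and the job goes to a site holding all of its inputs. The LFNs, catalogue contents and function names below are illustrative placeholders, not the real DIRAC or Bookkeeping interfaces:

```python
def select_site(input_lfns, replica_catalogue, candidate_sites):
    """Return the first candidate site that holds replicas of every input LFN."""
    for site in candidate_sites:
        if all(site in replica_catalogue.get(lfn, ()) for lfn in input_lfns):
            return site
    return None   # no single site has everything: pre-stage or split the job

# Pretend result of a bookkeeping query ("RAW files of run 1234"):
lfns = ["/lhcb/data/RAW/run1234_0001.raw", "/lhcb/data/RAW/run1234_0002.raw"]
# Pretend replica catalogue: LFN -> set of sites holding a copy.
replicas = {lfns[0]: {"CERN", "CNAF"}, lfns[1]: {"CERN", "CNAF", "RAL"}}

site = select_site(lfns, replicas, ["CNAF", "GridKa", "IN2P3", "NIKHEF", "PIC", "RAL"])
print("submit reconstruction job to", site)   # -> CNAF
```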

DIRAC3 for first data taking
- DIRAC3 is being commissioned
  - Most components are ready, integrated and tested
  - Basic functionality (equivalent to DIRAC2) is in place
- This week: full rehearsal week
  - All developers are at CERN
  - Goal: follow the progress of the challenge, fix problems ASAP
- DIRAC3 planning (as of 15 Nov)
  - 30 Nov 2007: basic functionality
  - 15 Dec 2007: Production Management, start of tests
  - 15 Jan 2008: full CCRC functionality, tests start
  - 5 Feb 2008: start of tests for CCRC phase 1
  - 18 Feb 2008: run CCRC
  - 31 Mar 2008: full functionality, ready for CCRC phase 2 tests
- Current status: on time with the above schedule

CCRC'08 for LHCb
- RAW data upload: Online to Tier0 storage (CERN Castor)
  - Use the DIRAC transfer framework and exercise two transfer tools (Castor rfcp and GridFTP)
- RAW data distribution to the Tier1s
  - Reminder: CNAF, GridKa, IN2P3, NIKHEF, PIC, RAL
  - Use the gLite File Transfer Service (FTS), based on the upcoming Storage Resource Manager (SRM) version 2.2 (just coming out)
  - Share the data according to the resource pledges from the sites (see the sketch below)
- Data reconstruction at Tier0 and Tier1s
  - Production of rDSTs, stored locally
  - Data access also uses SRM v2, with various storage back-ends (Castor and dCache)
- For May: stripping of the reconstructed data
  - Initially foreseen in February, but de-scoped
  - Distribution of the streamed DSTs to the Tier1s
  - If possible, include file merging
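A minimal sketch of what "share according to resource pledges" can look like: each new RAW file is assigned a custodial Tier1 with probability proportional to that site's pledge. The pledge fractions below are made-up illustration values, not the 2008 pledges, and the actual LHCb assignment policy may differ:

```python
import random

# Illustrative pledge fractions only (they sum to 1, but are not the real numbers).
pledges = {"CNAF": 0.18, "GridKa": 0.22, "IN2P3": 0.22,
           "NIKHEF": 0.13, "PIC": 0.08, "RAL": 0.17}

def choose_tier1(pledges, rng=random):
    """Pick a destination Tier1 with probability proportional to its pledge."""
    sites, weights = zip(*pledges.items())
    return rng.choices(sites, weights=weights, k=1)[0]

# One day of RAW files (~1,500) then gets distributed roughly in the pledged ratios:
destinations = [choose_tier1(pledges) for _ in range(1_500)]
print({site: destinations.count(site) for site in pledges})
```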

Tier1 resources for CCRC'08
- Data sharing according to the Tier1 pledges, as of February 15th (!!!)
- LHCb SRM v2.2 space token descriptions (TnDm: n tape copies, m disk copies):
  - LHCb_RAW (T1D0)
  - LHCb_RDST (T1D0)
  - LHCb_M-DST (T1D1)
  - LHCb_DST (T0D1)
  - LHCb_FAILOVER (T0D1), used for temporary uploads when the destination is unavailable
- All data can be scrapped after the challenge
  - Test SRM bulk removal
- Based on a 2-week run: 28,000 files (42 TB)
- CCRC'08 in May
  - 4 weeks of continuous running
  - Established services and procedures
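A quick consistency check of the 2-week volume quoted above (derived numbers only; the split between RAW and rDST files is not given on the slide):

```python
files, volume_tb, days = 28_000, 42.0, 14

avg_file_gb = volume_tb * 1000 / files            # ~1.5 GB per file
files_per_day = files / days                      # 2,000 files per day
avg_rate_mb_s = volume_tb * 1e6 / (days * 86400)  # ~35 MB/s averaged over the fortnight

print(f"{avg_file_gb:.1f} GB/file, {files_per_day:.0f} files/day, {avg_rate_mb_s:.0f} MB/s average")
```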

CCRC'08 first results
- Tier0 to Tier1 transfers
  - 5,000 files (7.5 TB) transferred to the Tier1s using FTS + SRM v2, in 4 hours!
  - The RAW spaces at all Tier1s work
  - Continuous transfers over the weekend
  - Plans: now adding automatic reconstruction
- Pit to Tier0 transfers
  - Sustained the nominal rate (60 MB/s) for 6 days (the red line on the rate plot)
  - 125 MB/s for one day
  - Tape migration followed (the green line); the high peaks are Tier1 transfers
  - Plans: half rate today (50% duty cycle), then the nominal rate afterwards (>10 days)
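For scale, a one-line check of what that first export result implies (derived figures, not stated on the slide):

```python
files, tb, hours = 5_000, 7.5, 4
print(f"{tb * 1e6 / (hours * 3600):.0f} MB/s aggregate to the Tier1s, "  # ~520 MB/s
      f"{tb * 1000 / files:.1f} GB per file on average")                 # 1.5 GB
```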

Questions?
- More news tomorrow at the LCG-LHCC meeting