LHCb system for distributed MC production (data analysis) and its use in Russia
NEC’2005, Varna, Bulgaria
Ivan Korolko (ITEP Moscow)

Outline
– LHCb detector
– Russian participation in LHCb
– LHCb distributed computing system
  – DIRAC
  – GANGA
– Plans for the future

LHCb detector
[detector layout figure: Vertex detector, Tracker, RICH-1, RICH-2, Calorimeters, Muon Detector, Coil, Yoke, Shielding]
– Designed for comprehensive studies of CP violation with B mesons
– 500 physicists from 60 institutes

LHCb in numbers
– LHCb nominal luminosity → 2×10^32 cm^-2 s^-1
– rate of p-p interactions → 2×10^7 per second
– HLT output → 2000 Hz
– RAW data per year → 2×10^10 events (500 TB)
– events with b quarks → 10^5 per second (!)
– acceptance for b-events → 5-10%
– Br. for CP channels → ~10^-5
– number of CP channels → ~50
~5 signal events in every second of LHCb operation – GREAT!
Have to select them from 1.5×10^7 background events.
The signature of LHCb signals is not very distinctive (Pt, vertex).
Estimation of the S/B ratio is a REAL CHALLENGE (a rough numerical check follows below).
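The "~5 signal events per second" figure follows directly from the rates quoted above. A minimal back-of-the-envelope sketch, using only the slide's numbers (variable names are illustrative, not an official LHCb estimate):

```python
# Rough check of the "~5 signal events per second" claim from the quoted rates.
b_rate = 1e5                # events with b quarks per second
acceptance = (0.05, 0.10)   # 5-10% acceptance for b-events
branching = 1e-5            # typical Br. for a CP channel
n_channels = 50             # number of interesting CP channels

low, high = (b_rate * a * branching * n_channels for a in acceptance)
print(f"signal events per second: {low:.1f} - {high:.1f}")   # ~2.5 - 5.0

background_rate = 1.5e7     # background events per second to reject
print(f"signal : background ~ 1 : {background_rate / high:.0e}")  # ~1 : 3e+06
```

With the optimistic 10% acceptance this reproduces the ~5 events/s on the slide, against roughly 10^6-10^7 background events in the same second, which is what makes the S/B estimation so challenging.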

Russian participation in LHCb
– IHEP (Protvino), INP (Novosibirsk), INR (Troitsk), ITEP (Moscow), PNPI (St. Petersburg)
– SPD and Preshower, ECAL, HCAL, MUON system and RICH mirrors
– design, construction and maintenance of detectors
– development of reconstruction algorithms
– historical interests in B physics

History of LHCb DCs in Russia
– …K events, 1% contribution – only one centre (ITEP)
– …M events, 3% contribution – all four of our centres (IHEP, ITEP, JINR, MSU)
– …M events, 5% contribution – started to use LCG
– 2005 – PNPI and INR have joined

LHCb Computing (TDR)
– LHCb will use as much as possible LCG-provided capabilities:
  – computing resources (CPU and storage)
  – software components
– Generic basic services provided by LCG:
  – workload management (job submission and follow-up)
  – data management (storage, file transfer)
– Higher-level integration and LHCb-specific tools will be provided by the LHCb collaboration:
  – software releases, packaging, software distribution
  – bookkeeping database
  – workload management tool (DIRAC)
  – distributed analysis tool (GANGA)

DIRAC (Distributed Infrastructure with Remote Agent Control)
– project combining LHCb-specific components with LCG general-purpose components
– DIRAC is a lightweight system built with the following requirements:
  – support a rapid development cycle
  – be able to accommodate evolving GRID opportunities
  – be easy to deploy on various platforms
  – transparent, easy and possibly automatic updates
– LHCb grid system for Monte-Carlo simulation and analysis

DIRAC design goals
– Designed to be highly adaptable to the use of ALL computing resources available to the LHCb collaboration:
  – LCG grid resources (mainly)
  – sites not participating in LCG (still)
  – desktop workstations (even)
– Simplicity of installation, configuration and operation: DIRAC was running on PBS, Condor, LSF, LCG
– The design goal was to create a robust and scalable system for the computing needs of the LHCb collaboration:
  – running 10K concurrent jobs
  – queuing 100K jobs
  – handling 10M datasets

DIRAC architecture
– Uses the paradigm of Service Oriented Architecture (SOA)
  – inspired by the OGSA/OGSI “grid services” concept
  – followed the LCG/ARDA RTAG architecture blueprint
– ARDA inspiration:
  – open architecture with well-defined interfaces
  – allowing for replaceable, alternative services
  – providing choices and competition
– Implemented in PYTHON, using the XML-RPC service access protocol (see the sketch below)
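To make the "services over XML-RPC" idea concrete, here is a minimal sketch in the same style, using only the Python standard library. The service name, methods and port are illustrative placeholders, not actual DIRAC interfaces:

```python
# Toy XML-RPC service in the SOA spirit described above (illustrative only).
from xmlrpc.server import SimpleXMLRPCServer

class JobStateService:
    """Toy service behind a small, well-defined, replaceable interface."""

    def __init__(self):
        self._jobs = {}

    def submit(self, job_description):
        """Register a job description and return its identifier."""
        job_id = len(self._jobs) + 1
        self._jobs[job_id] = {"description": job_description, "status": "waiting"}
        return job_id

    def get_status(self, job_id):
        """Return the current status of a registered job."""
        job = self._jobs.get(job_id)
        return job["status"] if job else "unknown"

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_instance(JobStateService())
    server.serve_forever()

# A client talks to the service over the same protocol, e.g.:
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
#   job_id = proxy.submit("Gauss simulation, 500 events")
#   print(proxy.get_status(job_id))
```

Because each service hides behind such a narrow interface, an alternative implementation can replace it without touching the clients, which is exactly the "replaceable, alternative services" point above.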

Interfacing DIRAC to LCG
1) Use standard LCG middleware for job scheduling
   – straightforward, but not yet a reliable enough approach
2) Reservation of computing resources with a pilot-agent (sketched below)
   – send a simple script to the LCG RB, which downloads and installs the standard DIRAC agent (needs only PYTHON on the LCG site)
   – WORKED PERFECTLY in 2004 and 2005
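A pilot script in this spirit only has to fetch the agent distribution and start it on the worker node. The following is a hypothetical sketch: the URL, archive name and entry-point script are placeholders, not the real DIRAC pilot:

```python
# Hypothetical pilot script: runs on an LCG worker node with only Python
# available, downloads the agent distribution and starts it.
import os
import subprocess
import tarfile
import urllib.request

AGENT_URL = "http://example.org/dirac/dirac-agent.tar.gz"  # placeholder URL
WORKDIR = os.path.join(os.getcwd(), "dirac-pilot")

os.makedirs(WORKDIR, exist_ok=True)
archive = os.path.join(WORKDIR, "dirac-agent.tar.gz")

# Download and unpack the agent distribution on the worker node
urllib.request.urlretrieve(AGENT_URL, archive)
with tarfile.open(archive) as tar:
    tar.extractall(WORKDIR)

# Start the agent, which then pulls real LHCb jobs from the central queue
subprocess.run(["python", os.path.join(WORKDIR, "agent.py")], check=True)
```

The key design choice is that the heavy lifting (matching and fetching the actual workload) happens only after a resource has been successfully reserved, which is what made the approach robust on LCG.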

DIRAC Authors
– DIRAC development team: TSAREGORODTSEV Andrei, GARONNE Vincent, STOKES-REES Ian, GRACIANI-DIAZ Ricardo, SANCHEZ-GARCIA Manuel, CLOSIER Joel, FRANK Markus, KUZNETSOV Gennady, CHARPENTIER Philippe
– Production site managers: BLOUW Johan, BROOK Nicholas, EGEDE Ulrik, GANDELMAN Miriam, KOROLKO Ivan, PATRICK Glen, PICKFORD Andrew, ROMANOVSKI Vladimir, SABORIDO-SILVA Juan, SOROKO Alexander, TOBIN Mark, VAGNONI Vincenzo, WITEK Mariusz, BERNET Roland

2004 DC Phase 1 Statistics
– 3 months: 65 TB of data produced, transferred and replicated
– 185M events, 425 CPU years, across 60 sites

2004 DC Phase 1 Statistics
– 43 LCG sites (8 also DIRAC sites)
– 20 DIRAC sites
– 7 Russian sites: 4 DIRAC, 3 LCG

Distributed Analysis
– GANGA application (sketched below)
  – developed in cooperation with ATLAS
  – uses DIRAC to submit jobs
– 185M events were produced in 3 months
– nobody was able to analyse them in 9 months
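For flavour, a GANGA session looks roughly like the following. It is meant to be typed at the interactive ganga prompt, where objects such as Job, Executable and Dirac are predefined; the executable name is a made-up placeholder and option names vary between Ganga versions, so treat this as a sketch rather than a verified recipe:

```python
# Typed inside the interactive `ganga` prompt (Job, Executable, Dirac are
# provided by Ganga itself there, so no imports are needed).
j = Job(name="lhcb-toy-analysis")
j.application = Executable(exe="my_analysis.sh")   # placeholder user script
j.backend = Dirac()                                # submit through DIRAC
j.submit()

print(j.status)   # 'submitted', later 'running' / 'completed'
```

The point of the interface is that switching the backend object (e.g. to a local or batch backend) changes where the job runs without changing the job definition.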

Plans for the nearest future
– Participate in LHCb Data Challenges, producing MC for the collaboration (planned in the computing model)
  – we know how to produce MC and need only more resources
– Concentrate on Distributed Analysis, testing the GANGA system at ITEP and IHEP
  – a much more difficult task
  – an absolutely different pattern of computer usage
  – work already started in June

Russian Tier2 Cluster planning
[resource planning table: CPU (kSI2K), DISK (TB), TAPE active (TB), TAPE shelved (TB), link to CERN (Mbps) – values not recovered]
– Significant increase of resources is planned for the nearest future
– Participation in LHCb DC (MC production)
– Distributed analysis

For further reading
– LHCb reoptimized detector design and performance TDR, CERN/LHCC
– LHCb Computing model, LHCb
– LHCb Computing TDR, CERN/LHCC
– DIRAC – Distributed Infrastructure with Remote Agent Control, A. Tsaregorodtsev et al., Proc. of CHEP2003, March 2003
– Results of the LHCb Data Challenge 2004, J. Closier et al., Proc. of CHEP2004, Sept 2004