Slide 1: ALICE Computing Status. Are we ready? What about our choices? Workshop on LHC Computing, 26 October. René Brun, CERN. Several slides of this talk are taken from Federico Carminati's presentation at the ROOT workshop.

Slide 2: Software choices
- 1993-1997: gAlice (Fortran 77 + ZEBRA + GEANT3)
- 1998: ROOT as a framework; all classes have a dictionary, can be made persistent, callable from CINT and ACLiC
- 2001: AliEn for the Grid
- 2001: Data Challenges (DAQ -> MSS)
- 2005: complete chain DAQ/SIMU -> REC -> ANA, PROOF for interactive analysis
- 2006: consolidation

Slide 3: AliRoot layout
[Diagram: AliRoot built on top of ROOT, with the STEER core, the Virtual MC interface to the transport codes G3, G4 and FLUKA, event generators (HIJING, MEVSIM, PYTHIA6, PDF, EVGEN, HBTP, HBTAN, ISAJET), detector modules (ITS, TPC, TRD, TOF, PHOS, EMCAL, RICH, MUON, ZDC, PMD, FMD, START, CRT, STRUCT), the AliSimulation, AliReconstruction and AliAnalysis steering classes, the ESD and JETAN components, and the LCG+AliEn Grid layer.]

Slide 4: I/O challenge in ALICE (1)
Because of the expected huge data volume, great care has gone into the design of the event model (from RAW to AOD). We went through many iterations, eliminating unnecessary layers and optimizing the speed of data analysis. We do not use fancy object models: no complex class hierarchies, no templated classes. All our classes have a dictionary and can be made persistent or called from CINT/ACLiC. Object collections are designed to minimize memory fragmentation from new/delete and to be easily usable at the AOD level.
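As an illustration of this model, here is a minimal sketch (not actual AliRoot code; the class, branch and file names are made up) of a flat event class with a ROOT dictionary, filled into a TTree so that it is persistent and usable from CINT/ACLiC:

// MiniEvent.C -- hypothetical example, not part of AliRoot
#include "TObject.h"
#include "TClonesArray.h"
#include "TFile.h"
#include "TTree.h"

// A flat event class: no deep hierarchy, no templates.
class MiniEvent : public TObject {
public:
   MiniEvent() : fTracks("TVector3", 1000), fMultiplicity(0) {}
   TClonesArray fTracks;        // reusable collection, limits per-event new/delete
   Int_t        fMultiplicity;  // number of tracks in this event
   ClassDef(MiniEvent, 1)       // dictionary entry -> persistent, scriptable
};

void writeEvents()
{
   TFile f("miniEvents.root", "RECREATE");
   TTree tree("T", "hypothetical event tree");
   MiniEvent *event = new MiniEvent();
   tree.Branch("event", &event);   // split branch, one buffer per data member
   for (Int_t i = 0; i < 100; ++i) {
      event->fMultiplicity = i;    // fill with dummy data
      tree.Fill();
   }
   tree.Write();
}

Compiled with ACLiC (.L MiniEvent.C+), the dictionary is generated automatically and the class can then be read back with plain ROOT.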

Slide 5: I/O challenge in ALICE (2)
Following the great work in BaBar and STAR, we came to the conclusion that load balancing and robustness are vital when processing many jobs in parallel. Simply calling the rfio, CASTOR or dCache services is not enough. xrootd is a key element in this scenario, but a lot of work remains to achieve seamless integration with the local services (CASTOR, dCache, ...). Optimizing I/O within one job is one thing; it is even more important to optimize the global performance for 1000 jobs (batch or interactive) running in parallel.
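At the user level, access through xrootd is transparent in ROOT: a file is opened through a root:// URL and the redirector picks a load-balanced data server. A minimal sketch (server name and path are hypothetical):

// Hypothetical example: open a file through an xrootd redirector.
// TFile::Open selects the network file plugin based on the URL scheme.
{
   TFile *f = TFile::Open("root://alice-redirector.example.ch//alice/data/run123/AliESDs.root");
   if (f && !f->IsZombie()) {
      f->ls();      // list the keys, e.g. the ESD tree
      f->Close();
   }
}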

Slide 6: I/O challenge in ALICE (3)
We have excluded RDBMS from the picture for bulk data processing; they are used only where they make sense (online and locking services). RDBMS are far too slow, subject to bottlenecks, and inappropriate for managing terabytes in a distributed environment. We exploit read-only data sets as much as possible using load-balanced xrootd servers; this is much better and more scalable than adding more processors to a saturated database server. Currently we concentrate on speeding up the queries to Trees for interactive analysis with PROOF. Our analysis code is gradually being upgraded to run with PROOF selectors. Parallelism will play an increasing role.
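The point of PROOF selectors is that the same TSelector-based code runs locally or on a PROOF cluster. A minimal sketch of how such an analysis can be driven (master name, file URL and selector name are hypothetical):

// Hypothetical driver macro: the same selector runs locally or on PROOF.
#include "TChain.h"
#include "TProof.h"

void runAnalysis(Bool_t useProof = kTRUE)
{
   TChain chain("esdTree");                                    // ESD tree name
   chain.Add("root://server.example.ch//data/AliESDs.root");   // hypothetical file

   if (useProof) {
      TProof::Open("proof-master.example.ch");   // connect to a PROOF cluster
      chain.SetProof();                          // route Process() through PROOF
   }
   chain.Process("MyEsdSelector.C+");            // TSelector compiled with ACLiC
}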

Slide 7: ALICE event model
- RAW, ESD, AOD and CONDB are all ROOT files, managed by a well-tuned AliEn file catalogue.
- Bulk data in ROOT Trees; other data in keyed objects.
- AA mode: one or a few events per file (1.5 GB).
- pp mode: several events (>100) per file.
- Tree split mode used as much as possible, to improve read speed and compression.
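Split mode is controlled per branch when the tree is created. A minimal sketch (file, tree and class names are hypothetical; it assumes a dictionary-enabled event class such as the MiniEvent above):

// Hypothetical example: create a fully split, compressed event branch.
#include "TFile.h"
#include "TTree.h"

void createSplitTree(MiniEvent *event)   // any dictionary-enabled event class
{
   TFile f("events.root", "RECREATE");
   f.SetCompressionLevel(5);                // trade CPU for smaller files
   TTree tree("esdTree", "fully split event tree");
   // basket size 32 kB, split level 99: each data member gets its own branch,
   // so analysis code reads only the members it actually uses
   tree.Branch("event", &event, 32000, 99);
   tree.Fill();
   tree.Write();
}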

Slide 8: Important rules
All ALICE files can be read via:
- ROOT alone (useful for quick histogramming and selections);
- ROOT plus a small subset of the AliRoot shared libraries (e.g. libAliESD), the preferred model when running interactive analysis with PROOF;
- the full AliRoot framework.
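The first option means a quick look requires nothing but ROOT: open the ESD file, get the tree, draw a variable. A minimal sketch (the file path and the leaf expression are made up for illustration):

// Hypothetical quick look at an ESD file with plain ROOT, no AliRoot libraries.
{
   TFile *f = TFile::Open("AliESDs.root");          // local file or root:// URL
   TTree *esdTree = (TTree*)f->Get("esdTree");      // the ESD tree
   // draw a leaf into a histogram; the leaf name here is invented
   esdTree->Draw("fTracks.fP[4]>>hPt(100,0,10)");
}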

Slide 9: AliRoot, current status
- Up-to-date description of the ALICE detectors (TGeo)
- Rich set of event generators, easily extensible
- Possibility to use different transport packages (VMC)
- User-friendly steering classes for simulation and reconstruction
- Efficient track reconstruction
- Combined PID based on a Bayesian approach
- ESD classes for analysis and fine-tuning of calibration
- Analysis examples covering a wide spectrum of heavy-ion and pp physics

Slide 10: Software summary table [table not reproduced in the transcript]

Slide 11: Many analysis examples
- HBT analysis
- Jet analysis
- V0s and cascades
- D0 -> Kpi analysis
- Balance function analysis
- Tag DB classes
- Flow analysis
- Analysis of resonances
- Multiplicity analysis
- D+ in PbPb, B -> eX in pp, etc.
Most examples are simple (< 500 lines); a few are complex (HBT, JETAN). They produce histograms and nice plots.

Slide 12: Code example

void demo(const char* user = "pchrista") {
   // Create an AliTagAnalysis object and connect to AliEn's API services
   AliTagAnalysis TagAna("aliendb4.cern.ch", 9000, "pchrista");
   // Create an AliEventTagCuts object and impose some selection criteria
   AliEventTagCuts EvCuts;
   EvCuts.SetMultiplicityRange(0, 500);
   // Query tag data from the catalogue
   TGridResult *TagResult = gGrid->Query("/alice/cern.ch/user/p/pchrista/PDC05/pp/Tags1",
                                         "*tag.root", "", "-l 10 -Opublicaccess=");
   // Chain the tag files
   TagAna.ChainGridTags(TagResult);
   // Get the file info list of events passing the cuts
   TList *resultlist = TagAna.QueryTags(EvCuts);
   // Create a chain of ESD trees and append the result list
   TChain chain("esdTree");
   chain.AddFileInfoList(resultlist, 100);
   // Process the selector
   chain.Process("esdTree.C");
}

Can be run with normal ROOT or PROOF.

Slide 13: [plot slide; content not recoverable from the transcript]

Slide 14: ALICE analysis, basic concepts
Analysis models:
- Prompt analysis at T0 using the PROOF (+ file catalogue) infrastructure
- Batch analysis using the Grid infrastructure
- Interactive analysis using the PROOF (+ Grid) infrastructure
User interface:
- ALICE users access any Grid infrastructure via the AliEn or ROOT/PROOF UIs
AliEn:
- Native and "Grid on a Grid" (LCG/EGEE, ARC, OSG); integrate common components (LFC, FTS, WMS, MonALISA, ...) as much as possible
PROOF/ROOT:
- Single- and multi-tier, static and dynamic PROOF clusters
- Grid API class: TGrid (virtual) -> TAliEn (real), as sketched below
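A minimal sketch of the TGrid/TAliEn entry point from a ROOT session (the catalogue path is taken from the earlier code example; whether authentication succeeds depends on the local AliEn setup):

// Hypothetical example: the virtual TGrid interface resolved to the AliEn plugin.
#include "TGrid.h"
#include "TGridResult.h"

void connectToAliEn()
{
   // TGrid::Connect loads the AliEn plugin and sets the global gGrid pointer
   TGrid::Connect("alien://");
   if (gGrid) {
      // query the file catalogue, as in the tag-analysis example above
      TGridResult *res = gGrid->Query("/alice/cern.ch/user/p/pchrista", "*.root");
      res->Print();
   }
}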

Slide 15: The AliEn timeline
[Timeline figure: development phases (functionality + simulation; interoperability + reconstruction; performance, scalability, standards + analysis) with milestones: first production (distributed simulation), Physics Performance Report (mixing & reconstruction), 10% Data Challenge (analysis), start of EGEE, start of gLite, agents and daemons.]

Slide 16: ALICE batch analysis via agents in heterogeneous Grids
[Diagram: the job optimizer splits a JDL InputData list (/alice/file1.root ... /alice/file7.root) into per-site sub-jobs according to file location; each sub-job gets a TAlienCollection XML file and is picked up by a Job Agent at site A, B or C, where ROOT reads the data through the local xrootd/MSS.]

Slide 17: 1-cluster setup [diagram not reproduced in the transcript]

Slide 18: Our good choices (1)
- Capitalize on the ROOT + AliEn framework.
- We understood early (1997) the importance of dictionaries, for I/O and for scripting.
- CINT used for configuration scripts since day one.
- CINT used for data analysis; no need for Python.
- Our scripts can be interpreted (CINT) or compiled (ACLiC); see the sketch below. Only one efficient language.
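The interpreted/compiled duality amounts to a trailing '+' on the macro name. A minimal sketch of a ROOT session (the macro name is hypothetical):

// Hypothetical ROOT session: the same macro, interpreted or compiled.
root [0] .x analysis.C     // interpreted by CINT
root [1] .x analysis.C+    // compiled on the fly by ACLiC into a shared library
root [2] .L analysis.C++   // force recompilation, then call functions directly
root [3] analysis()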

Slide 19: Our good choices (2)
- Minimize system dependencies: we do not use SEAL, POOL, COOL, CLHEP or Boost.
- We do not feel the need for tools like CMT or SCRAM; simple build system with Makefiles.
- Target portability (AliRoot running on all operating systems).
- Software and physics groups are the same.
- Coherent pipeline from DAQ to reconstruction and analysis.
- Code checking and profiling utilities.

Slide 20: What could have been better
- More training on collections and Tree design; most people learnt by copying examples.
- Better documentation of our system.
- More dialogue with the other experiments in 2000, though a common framework would probably not have been possible.
- Avoiding polemics with some committees.