ALICE, F. Carminati / CERN, ROOT Workshop, Geneva, October 14, 2002

Slide 1: ALICE, F. Carminati / CERN, ROOT Workshop, Geneva, October 14, 2002

Slide 2: The ALICE collaboration
The ALICE collaboration includes 1223 collaborators from 85 different institutes in 27 countries.
The online system uses a multi-level trigger to filter out background and reduce the data volume:
- Level 0 (special hardware): 8 kHz (160 GB/s)
- Level 1 (embedded processors): 200 Hz (4 GB/s)
- Level 2 (PCs): 30 Hz (2.5 GB/s)
- Data recording and offline analysis: 30 Hz (1.25 GB/s)
Detector figures quoted on the slide: total weight 10,000 t, overall diameter 16.00 m, overall length 25 m, magnetic field 0.4 T; total weight 2,000 t, overall length 17.3 m; total weight 53,000 t, overall length 270.4 m.

Slide 3: Back to the origin of time
About 10 ms after the Big Bang (T = 170 MeV), quarks and gluons combined into protons and neutrons.
The initial quark-gluon plasma was first recreated at the CERN SPS.
At the LHC the energy will be 300 times higher (1144 TeV for Pb-Pb, 5.5 TeV/N).

Slide 4: Strategic decision in 1998
The original FORTRAN framework (GEANT3 and PAW, with hits in Zebra/RZ/FZ) was replaced by the ROOT framework: kinematics and geometry in C++, hits and digits in ROOT trees, with Geant3 retained as the transport engine.
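The switch to ROOT I/O can be illustrated with a short, purely schematic macro; the names below (writeHits, the TreeH tree, a TVector3 standing in for a hit class) are placeholders and not the actual AliRoot classes, but the pattern of storing hit and digit objects in ROOT trees is the one the slide refers to.

```cpp
// Minimal sketch, not the real AliRoot code: writing objects to a ROOT TTree,
// the I/O model that replaced the Zebra/RZ/FZ banks.
#include "TFile.h"
#include "TTree.h"
#include "TClonesArray.h"
#include "TVector3.h"

void writeHits()
{
   TFile f("hits.root", "RECREATE");                   // ROOT output file
   TTree tree("TreeH", "Hits");                        // one entry per event
   TClonesArray *hits = new TClonesArray("TVector3");  // TVector3 stands in for a hit class
   tree.Branch("Hits", "TClonesArray", &hits);         // objects are split into branches

   for (Int_t ev = 0; ev < 10; ++ev) {                 // dummy event loop
      hits->Clear();
      new ((*hits)[0]) TVector3(0., 0., ev);           // one fake "hit" per event
      tree.Fill();
   }
   tree.Write();
   f.Close();
}
```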

Slide 5: AliRoot evolution schema (diagram)
The ALICE Offline Framework writes ROOT output files containing ROOT hit, digit and track structures; C++ macros drive each processing step, ROOT provides the data visualisation, and the end products are the physics results.

Slide 6: Framework
AliRoot framework:
- C++: 400 kLOC + 225 kLOC (generated) + 77 kLOC of macros
- FORTRAN: 13 kLOC (ALICE) + 914 kLOC (external packages)
- Maintained on Linux (any version!), HP-UX, DEC Unix, Solaris; works also with the Intel icc compiler
- Two packages to install (ROOT + AliRoot)
- Less than 1 second to link (thanks to 37 shared libraries)
- 1-click-away install: download and make (non-recursive makefile)
AliEn:
- 25 kLOC of Perl5 (ALICE) plus ~1.5 MLOC of Perl5 (open-source components)
- Installed on more than 30 sites by physicists
More than 50 users from the detector groups develop AliRoot; 70% of the code is developed outside the core Offline team, 30% by it.

Slide 7: AliRoot layout (diagram)
AliRoot is built on ROOT and interfaced to AliEn. The STEER module and the Virtual MC sit at the centre, connecting to the transport engines (G3, G4, FLUKA), the event generators (HIJING, MEVSIM, PYTHIA6, PDF, ISAJET, EVGEN, HBTP, HBTAN) and the detector modules (CRT, EMCAL, ZDC, FMD, ITS, MUON, PHOS, PMD, TRD, TPC, TOF, STRUCT, START, RICH, RALICE).

Slide 8: Whiteboard data communication (diagram)
Classes 1-8 do not call each other directly; they exchange data through a central whiteboard structure (a sketch of the idea is given below).
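To make the whiteboard idea concrete, here is a minimal illustrative sketch using ROOT TFolders; the folder and object names are made up, and the slide does not say how ALICE implements the whiteboard, so this only shows the pattern of decoupled producers and consumers.

```cpp
// Illustrative whiteboard sketch: a producer posts an object to a shared
// folder, a consumer retrieves it by name, and the two classes never
// reference each other directly.
#include <cstdio>
#include "TFolder.h"
#include "TNamed.h"
#include "TROOT.h"

void whiteboard()
{
   // Shared "whiteboard" folder under the ROOT root folder
   TFolder *board = gROOT->GetRootFolder()->AddFolder("WhiteBoard", "Shared event data");

   // Producer side: publish an object under an agreed name
   board->Add(new TNamed("TPCClusters", "clusters produced by the TPC code"));

   // Consumer side: look the object up by name, with no knowledge of the producer
   TNamed *clusters = (TNamed*) board->FindObject("TPCClusters");
   if (clusters) std::printf("Found: %s (%s)\n", clusters->GetName(), clusters->GetTitle());
}
```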

Slide 9
(Figure: ALICE event / 100.) The ALICE detector complexity is similar to ATLAS and CMS.
- GEANT3: developed in 1981, still used by the majority of experiments.
- Geant4: a huge investment, with slow penetration in HEP experiments.
- FLUKA: state of the art for hadronic and neutron physics, but difficult to use for full detector simulation.

Slide 10: The Virtual MC
User code sees only the VMC (Virtual Monte Carlo) interface and the Virtual Geometrical Modeller; generators, reconstruction and visualisation are shared, while the transport can be G3 (Geant3 transport), G4 (Geant4 transport) or FLUKA (FLUKA transport, with its own geometrical modeller).
- Geant3.tar.gz includes an upgraded Geant3 with a C++ interface.
- Geant4_mc.tar.gz includes the TVirtualMC Geant4 interface classes.
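As a rough illustration of how the engine is selected behind the TVirtualMC interface, a Config.C-style fragment could look like the sketch below; the constructor and method arguments are indicative only and vary between releases of the VMC packages, so treat this as a sketch rather than the actual ALICE configuration.

```cpp
// Illustrative Config.C-style fragment: the simulation code above this point
// is unchanged whichever engine is instantiated, because all engines
// implement the TVirtualMC interface (accessed via the gMC singleton of the
// ROOT versions of that era).
#include "TVirtualMC.h"
#include "TGeant3.h"   // from the Geant3.tar.gz package mentioned on the slide

void Config()
{
   new TGeant3("C++ Interface to Geant3");   // pick the Geant3 transport engine
   // With the Geant4 VMC package one would instead instantiate TGeant4 here.

   // Engine-independent physics settings through the common interface:
   gMC->SetProcess("DCAY", 1);     // enable decays
   gMC->SetCut("CUTGAM", 1.e-3);   // photon transport cut, in GeV
}
```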

Slide 11
(Figure: the ALICE geometry, 3 million volumes.)

Slide 12: Tracking efficiencies
TPC+ITS tracking efficiency at dN/dy = 8000.
The standard definition of good vs. fake tracks requires all 6 ITS hits to be correct, yet most 'incorrect' tracks just have one bad point.
When this definition is relaxed to 5 out of 6 hits the efficiency is better by % (with the fake-track probability correspondingly lower).

Slide 13: ALICE Physics Performance Report
Detailed evaluation of ALICE performance using the latest simulation tools: acceptance, efficiency, resolution.
This needs O(10^7) central heavy-ion events, i.e. 300,000 PCs for one year, so an alternative is needed:
- Step 1: generate parametrised background as summable digits.
- Step 2: generate the signals on the fly and merge them with the background.
- Step 3: event analysis.
In this way O(10^4) background events are reused O(10^3) times (a schematic mixing loop is sketched below).
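A schematic mixing loop makes the arithmetic of the reuse strategy explicit; the helper functions in the comments are hypothetical, the point is only that nBackground * nReuse merged events are analysed while only nBackground full background events are ever simulated.

```cpp
// Schematic event-mixing loop for the PPR strategy (hypothetical helpers
// shown as comments): each stored background event is combined with many
// freshly generated signal events.
#include <cstdio>

void mixEvents()
{
   const int nBackground = 10000;   // O(10^4) parametrised background events (step 1)
   const int nReuse      = 1000;    // each background event reused O(10^3) times

   long produced = 0;
   for (int bg = 0; bg < nBackground; ++bg) {
      // loadBackgroundSummableDigits(bg);    // step 1: read the stored sdigits
      for (int s = 0; s < nReuse; ++s) {
         // generateSignalOnTheFly();         // step 2: fresh signal event
         // mergeSignalWithBackground();      //         merge at the sdigit level
         // analyseEvent();                   // step 3: reconstruction and analysis
         ++produced;
      }
   }
   std::printf("%ld merged events from %d simulated backgrounds\n", produced, nBackground);
}
```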

Slide 14: The distributed computing model
Basic principle: every physicist should in principle have equal access to the data and to the resources.
The system will be extremely complex: the number of components in each site, the number of sites, and the different tasks performed in parallel (simulation, reconstruction, scheduled and unscheduled analysis).
The bad news is that the basic tools are missing: distributed resource management, a distributed namespace for files and objects, distributed authentication, local resource management of large clusters, data replication and caching, WAN/LAN monitoring and logging.
The good news is that we are not alone: all the above issues are central to the new developments going on in the US, and now in Europe, under the collective name of GRID.

Slide 15: AliEn, a lightweight GRID
AliEn ( ) is a lightweight alternative to a full-blown GRID, based on standard components (SOAP, Web services):
- File Catalogue as a global file system on a relational database, with a TAG Catalogue as an extension
- Secure authentication
- Central queue manager ("pull" vs "push" model)
- Monitoring infrastructure
- Automatic software installation with AliKit
This is the core GRID functionality!
AliEn is routinely used in production for the ALICE PPR, and it provides the GRID component for MammoGRID, a 3-year, 2 M€ EU-funded project started in September.

Slide 16: AliEn (P. Buncic, P. Saiz, J-E. Revsbech)
- File catalogue: a global file system on top of a relational database
- Secure authentication service, independent of the underlying database
- Central task queue
- API services (file transport, sync)
- Perl5 / SOAP architecture
Files, commands (job specifications), job input and output, tags and even binary package tar files are stored in the catalogue.
(The slide shows the logical directory tree of the catalogue, with /cern.ch/user/... areas for ALICE users, /simulation/... areas holding per-job stdin/stdout/stderr files, and dataset entries such as r3418_01-01.ds; the underlying bookkeeping uses one relational table per directory with columns type, dir, name, owner, ctime, comment, content, method, methodArg, gowner and size, alongside the authentication and data-access services.)

Slide 17: Production summary
- 5682 events validated, 118 failed (2%)
- Up to 300 concurrently running jobs worldwide over 5 weeks
- 5 TB of data generated and stored at the sites with mass-storage capability (CERN 73%, CCIN2P3 14%, LBL 14%, OSC 1%)
- GSI, Karlsruhe, Dubna, Nantes, Budapest, Bari, Zagreb, Birmingham(?) and Calcutta in addition ready by now
- 13 clusters, 9 sites, 10^5 CPU hours
- Sites shown on the slide map: CERN, CCIN2P3/Lyon, Saclay, LBL/NERSC, OSU/OSC, Torino, Catania, Padova, Bologna, Cagliari, Bari, NIKHEF, Yerevan, Dubna, Capetown (ZA), Birmingham, GSI, IRB, Kolkata (India), Merida, Nantes
- 37 people, 21 institutions

Slide 18: AliEn & PROOF
(Diagram: a local selection, its parameters and a Proc.C procedure are shipped through PROOF to remote CPUs, which consult a tag database and process the distributed databases DB1-DB6 where the data reside.)
Bring the KB to the PB and not the PB to the KB (an illustrative PROOF macro is sketched below).
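"Bringing the KB to the PB" means shipping the small selection macro to the data rather than moving the data to the user. A minimal sketch of what that looks like with PROOF follows, using the later TProof::Open API as an illustration; the master URL, the file path, the tree name and the Proc.C selector are placeholders.

```cpp
// Minimal PROOF usage sketch (all names are placeholders): the selector is
// compiled and run on the workers, next to the data.
#include "TProof.h"
#include "TChain.h"

void runProof()
{
   TProof::Open("proofmaster.example.org");   // connect to a PROOF cluster
   TChain chain("esdTree");                   // hypothetical tree name
   chain.Add("root://se.example.org//alice/run3418/events.root");
   chain.SetProof();                          // route Process() through PROOF
   chain.Process("Proc.C+");                  // ship, compile and run the selector remotely
}
```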

Slide 19: ALICE Data Challenges (diagram)
Simulated data produced by AliRoot (with GEANT3, GEANT4 or FLUKA) and raw data from the DAQ pass through ROOT I/O into CASTOR at the CERN Tier 0 / Tier 1, and are distributed via the GRID to the regional Tier 1 and Tier 2 centres.

Slide 20: ADC IV hardware setup (diagram)
Total: 192 CPU servers (96 on Gigabit Ethernet, 96 on Fast Ethernet), 36 disk servers and 10 tape servers (distributed), connected through a 4 Gbps backbone; the slide shows the Gigabit and Fast Ethernet switches with their port counts (32 and 18), the TBED/LXSHARE node groups and the fibre links.

Slide 21: Performances (plots)
Two measurements are shown: data generation in the LDCs with event building and no data recording, and data generation in the LDCs with event building and data recording to disk.

Slide 22: Software development
In the LEP era the code was 90% FORTRAN (~10 instructions, 50 pages!).
In the LHC era the code is in many cooperating languages, mainly C++ (~50 instructions, 700 pages; nobody understands it completely, B. Stroustrup), but also C#, Java, Perl, Python, PHP..., plus the Web and the GRID.
Users are heterogeneous, sparse and without hierarchical structure: from very expert analysts to plain users, devoting from 5% to 100% of their time to computing. People come and go at a very high rate, so programs have to be maintained by people who did not develop them, and young physicists need knowledge they can also use outside physics.
And yet HEP software has been largely successful! Experiments have not been hindered by software in reaching their scientific goals, and GEANT3, PAW and ROOT have been in use for 20 years on all architectures and operating systems, even though we (as a community) have not used traditional SE. Did we do something right?

Slide 23: A Software Engineer's nightmare
A software engineer's view of HEP:
1. HEP has a complicated problem to solve.
2. SE is good for complicated problems.
3. These physicists cannot even spell SE correctly.
4. A bit of SE can do wonders.
Or, in its weaker form: it should work at least here! Let's introduce SE in HEP! A scheduled success! What can possibly go wrong?

Slide 24: Agile technologies (aka SE catching up)
The SE response to HEP is the "Agile Methodologies":
- Adaptive rather than predictive
- People-oriented rather than process-oriented
- As simple as possible, to be able to react quickly
- Incremental and iterative, with short iterations (weeks)
- Based on testing and coding rather than on analysis and design
Uncovering better ways of developing software by valuing: individuals and interactions over processes and tools; working software over huge documentation; customer collaboration over contract negotiation; responding to change over following a plan. That is, while there is value in the items on the right, we value the items on the left more.

Slide 25: Software development process
ALICE opted for a light core CERN offline team that concentrates on the framework, software distribution and maintenance, plus people from the collaboration: GRID coordination (Torino), World Computing Model (Nantes), Detector Database (Warsaw).
A development cycle adapted to ALICE has been elaborated:
- Developers work on the most important feature at any moment
- A stable production version always exists
- Collective ownership of the code
- Flexible release cycle and simple packaging and installation
Micro-cycles happen continuously; 2-3 macro-cycles per year, discussed and implemented at off-line meetings and code reviews, correspond to major code releases.
We have high-level milestones for technology and physics: Computing Data Challenges test technology and the DAQ/Off-line integration, Physics Data Challenges test the Off-line from the user viewpoint.

Slide 26: EXplaining planning in ALICE (diagram)
A planning meeting decides the release dates and splits the work into 1-3 week tasks; the team takes on the tasks; weekly stand-up meetings are held; the release happens at a fixed date!

Slide 27: AliRoot maintenance
Regular release schedule: one major release every six months, one minor release (tag) every month.
Continuous maintenance and support; very few bugs in the production release because the development version is always available.
Emphasis on delivering production code: corrections, protections, code cleaning, geometry.
Nightly produced UML diagrams, code listings, coding-rule violations, builds and tests.
One single repository with production and development code.

Slide 28: ROOT, ALICE & LCG
LCG is the LHC Computing Grid project; its objective is to build the computing environment for the LHC.
ALICE has lobbied strongly to base the LCG project on ROOT and AliEn. The choice of the other experiments is to establish a client-provider relationship with ROOT, while developing alternatives for some of the existing ROOT elements or hiding them behind abstract interfaces, and to use the results of the GRID middleware projects.
ALICE has expressed its worries: little time to develop and deploy a new system, duplication and dispersion of efforts, divergence from the rest of the HEP community.
ALICE will continue to develop its system and to provide basic technology, i.e. the VMC and the geometrical modeller, and it will try to collaborate with LCG wherever possible.

Slide 29
Produced by the ALICE Offline Project in collaboration with the ROOT team.
The names have been changed to protect the innocent.
Parental advice is suggested before viewing these slides in the presence of little children.
No animal has been hurt during the preparation of this presentation.