The ALICE Offline Environment
F. Carminati, ROOT Workshop 2001, 13 June 2001


Slide 2: ALICE Offline Project
- ALICE is unique in the LHC context:
  - major decisions already taken (DAQ, Off-line)
  - move to C++ completed
  - adoption of the ROOT framework
  - tightly knit Off-line team, no ongoing "turf war"
  - physics performance and computing in a single team
  - one single development line
  - aggressive Data Challenge program
  - impressive achievement record
- And of course: minimal support from CERN (apart from the ADCs)

Slide 3: LHC Computing Review
- Recognition of:
  - the size and cost of LHC computing (240 MCHF by ..., .../y after)
  - the shortage of manpower in the LHC core software teams
  - the importance of Data Challenges
- Recommendations for:
  - a common prototype ( ) and common projects (IT + LHC)
  - CERN support of FLUKA / ROOT
  - establishment of the SC2 to oversee LHC computing
  - drafting of a computing MoU
- See:
- The battle is not ended, it has just started!

Slide 4: ALICE Software Process highlights
- Freely inspired by the most recent Software Engineering trends:
  - "Extreme Programming Explained" (Addison Wesley), Martin Fowler's "The New Methodology", Linux, GNU, ROOT, KDE, GSL, ...
- Exhaustive code documentation generated nightly and published on the web: UML diagrams, code listings, coding-rule violations
- Continuous code testing and verification: nightly builds and tests
- Collective ownership of the code in one single repository
- Flexible release cycle: release often, release early
- Simple packaging and installation: the code consists of only 2 main packages (ROOT and AliRoot)
- We could not have done it without ROOT

Slide 5: ALICE Software Process synoptic
- Requirements are necessary to develop the code: ALICE users express their requirements continuously
- Design is good for evolving the code: users participate in the system design continuously
- Testing is important for quality and robustness: testing is done automatically every night
- Integration is needed to ensure coherence: AliRoot is completely integrated every night
- Discussions are valuable for users and developers: discussion is continuous on the ALICE offline list

Slide 6: Status of AliRoot
- Version v3-05 just released, expected to remain stable for 6 months
- Folder structure: major refactoring of the code, AliStack introduced
- Reconstruction from digits for all detectors
- TRD tracking with an inverse Kalman filter (see the recursion sketched below): a major step toward global tracking for ITS, TPC and TRD
- Abstract interface between the VMC, G3 and G4; FLUKA in preparation
- AliGenerator interface improved and extended (cocktail generators and afterburners)
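As background for the inverse Kalman filter mentioned in the TRD tracking bullet, the standard Kalman recursion used in track fitting is recalled below in generic, textbook notation (track state x, covariance C, measurement m); this is not ALICE-specific code or notation.

```latex
% Standard Kalman filter recursion for track fitting (generic notation):
% prediction from detector layer k-1 to layer k, then update with measurement m_k.
\begin{align*}
  x_k^{\,k-1} &= F_k\, x_{k-1}, &
  C_k^{\,k-1} &= F_k\, C_{k-1}\, F_k^{T} + Q_k \\
  K_k &= C_k^{\,k-1} H_k^{T}\,\bigl(H_k\, C_k^{\,k-1} H_k^{T} + R_k\bigr)^{-1} \\
  x_k &= x_k^{\,k-1} + K_k\,\bigl(m_k - H_k\, x_k^{\,k-1}\bigr), &
  C_k &= \bigl(I - K_k H_k\bigr)\, C_k^{\,k-1}
\end{align*}
```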

Slide 7: Folders
- Long (stormy) discussions with René on the role of folders
- We had to clarify the relative roles of object design, the folder structure and their relation with containers
- This is now clear, and AliRoot posts shared data structures to folders
- It came at the right moment for us, as we are now designing global reconstruction
- We should think about the relation between folders, GRID LFNs and datasets
- A forward-looking, evolutive concept

Slide 8: Tasks
- The use of tasks is less clear (at least to me)
- An elegant way towards generic programming: going from data encapsulation to method encapsulation
- For the moment we are concerned by the limited expressivity of TTask->Execute()
- The necessary refactoring of the code is also probably rather large
- More prototyping is needed before a decision is taken (see the sketch below)
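As an illustration of the task idea discussed above, here is a minimal sketch of a processing step wrapped in a ROOT TTask; the class name and option string are hypothetical, and the task body is the virtual Exec() method (run through ExecuteTask()), which is what the slide refers to as Execute().

```cpp
// Minimal sketch (hypothetical AliDigitizerTask, not actual AliRoot code)
// of how a processing step could be wrapped in a ROOT TTask and chained.
#include "TTask.h"
#include <iostream>

class AliDigitizerTask : public TTask {
public:
   AliDigitizerTask() : TTask("Digitizer", "Convert hits into digits") {}
   // All configuration has to be squeezed into the Option_t* string,
   // which is the "expressivity" limitation mentioned above.
   void Exec(Option_t *option = "") override {
      std::cout << "Digitizing with options: " << option << std::endl;
      // ... read hits from the whiteboard folders, produce digits ...
   }
};

void runTasks()
{
   TTask top("Reconstruction", "Top-level reconstruction task");
   top.Add(new AliDigitizerTask());   // sub-tasks are executed recursively
   top.ExecuteTask("deterministic");  // options travel as a plain string
}
```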

Slide 9: Whiteboard Data Communication
[Diagram: several classes (Class 1 ... Class 8) exchange data only through a shared "whiteboard", rather than calling each other directly.]
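A minimal sketch of this whiteboard pattern using ROOT's TFolder follows; the folder and object names are hypothetical and only illustrate producers posting data structures that consumers then look up by name.

```cpp
// Minimal sketch of the whiteboard idea using ROOT folders
// (hypothetical folder and object names, not the actual AliRoot layout).
#include "TROOT.h"
#include "TFolder.h"
#include "TClonesArray.h"

void whiteboardDemo()
{
   // Producer: post a shared data structure to a well-known folder.
   TFolder *top  = gROOT->GetRootFolder();
   TFolder *evt  = top->AddFolder("Event", "Shared event data");
   auto    *hits = new TClonesArray("TObject", 1000);   // placeholder payload
   hits->SetName("TPCHits");
   evt->Add(hits);

   // Consumer: any other class retrieves the structure by name,
   // without a compile-time dependency on the producer.
   auto *shared = dynamic_cast<TClonesArray*>(evt->FindObject("TPCHits"));
   if (shared) {
      // ... run digitization / reconstruction on the shared hits ...
   }
}
```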

Slide 10: Update on VMC
- The G3 interface is in production and evolving; the possibility to translate the geometry into TGeometry will be in the v3-06 pre-release
- The G4 interface is technically working, but:
  - G4 is a monolithic and non-modular program
  - conversion from the GEANT3 geometry is heavy
  - the question of MANY (overlapping volumes) is still unsolved
  - the physics of G4 seems out of control
- Work on the FLUKA interface is ongoing, but the tracking / stacking is different from G3 / G4 and this may be a major challenge

Slide 11: The Virtual MC
[Diagram: the user code talks only to the VMC layer, which drives either G3 (G3 transport + G3 geometry), G4 (G4 transport + G4 geometry) or FLUKA transport.]
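The sketch below illustrates the idea behind this diagram with a deliberately simplified, hypothetical interface: user code is written against one abstract transport class and the concrete engine is chosen at run time. The real ALICE interface is considerably richer; all names here are illustrative.

```cpp
// Simplified, hypothetical sketch of the Virtual MC idea: user code depends
// only on an abstract transport interface; GEANT3, GEANT4 or FLUKA plug in
// behind it. Names are illustrative, not the actual ALICE classes.
#include <string>

class VirtualMC {                     // abstract transport interface
public:
   virtual ~VirtualMC() = default;
   virtual void DefineMaterial(int id, const std::string &name,
                               double a, double z, double density) = 0;
   virtual void SetCut(const std::string &cutName, double value) = 0;
   virtual void ProcessEvent() = 0;   // transport one event
};

class Geant3MC : public VirtualMC {   // would wrap the FORTRAN GEANT3 engine
public:
   void DefineMaterial(int, const std::string&, double, double, double) override {}
   void SetCut(const std::string&, double) override {}
   void ProcessEvent() override {}
};

// User code is written once, whatever the engine:
void simulate(VirtualMC &mc, int nEvents)
{
   mc.SetCut("CUTGAM", 1.e-3);        // same call for G3, G4 or FLUKA
   for (int i = 0; i < nEvents; ++i) mc.ProcessEvent();
}
```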

Slide 12: The framework question
[Diagram: framework components and their status: PAW -> ROOT (OK); GEANT3 -> GEANT4 (LOST? discussion on Thursday); reconstruction (Recons?); connecting interfaces: TParticle, VMC, TGenerator.]

Slide 13: The Framework question
- The problem of the MC is clearly not solvable in this context
- However, a very important piece on which we can work is the Geometrical Modeller
  - the G3 modeller has had an incredible longevity, but it has several shortcomings
  - geometrical modelling has been one of the most active areas of IT in the past 20 years
- A possible way ahead (see the sketch below):
  - evaluate a few candidates (I have at least 2 or 3)
  - define a ROOT abstract interface for CSG modelling
  - interface it with ROOT 3D graphics
  - define a representation on file
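To make the proposal concrete, the following is a hedged sketch of what a ROOT abstract interface for CSG modelling could look like; the class and method names are illustrative assumptions, not an existing API.

```cpp
// Hypothetical sketch of an abstract CSG-modelling interface covering the
// items listed above: primitives, boolean operations, 3D drawing and a
// file representation. Illustrative only.
class CSGShape {                       // a primitive or composite solid
public:
   virtual ~CSGShape() = default;
   virtual bool   Contains(double x, double y, double z) const = 0;
   virtual double DistanceToSurface(double x, double y, double z,
                                    double dx, double dy, double dz) const = 0;
};

class CSGModeller {                    // builds and navigates the geometry
public:
   virtual ~CSGModeller() = default;
   virtual CSGShape *MakeBox(double dx, double dy, double dz) = 0;
   virtual CSGShape *MakeTube(double rmin, double rmax, double dz) = 0;
   virtual CSGShape *Union(CSGShape *a, CSGShape *b) = 0;
   virtual CSGShape *Subtraction(CSGShape *a, CSGShape *b) = 0;
   virtual void Draw(CSGShape *top) = 0;          // hook for ROOT 3D graphics
   virtual void Write(const char *filename) = 0;  // representation on file
};
```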

Slide 14: The Virtual MC II
[Diagram: the user code talks to the VMC, now complemented by a Virtual Geometrical Modeller; G3, G4 and FLUKA provide the transport, while a common Geometrical Modeller also serves reconstruction and visualisation.]

Slide 15: ALICE Data Challenges
- ALICE plans to acquire data at 1.25 GB/s
- To gain confidence that we will be able to do that, we run yearly data challenges of increasing complexity and size
- The goals of ADC III (first half of 2001) were:
  - stable bandwidth of 100 MB/s over the complete chain during a week
  - aggregate bandwidth of 300 MB/s in the DAQ
  - a total of 80 TB of data stored in the MSS
  - testing of SMP servers
  - online monitoring tools
  - regional centres (not done!)

Slide 16: ADC III hardware
- LHC testbed farm (the ADC uses part of the testbed):
  - low-cost PCs only: 2 Intel CPUs each, running Linux
  - 35 CPU servers (22 on Gigabit Ethernet, 10 on Fast Ethernet)
  - ? disk servers
  - 12 tape servers, 12 tape drives, 1000 tape cartridges
  - Fast/Gigabit Ethernet network: 6 switches from 3 manufacturers on 2 media
- ALICE servers:
  - 3 PC servers (6 Intel CPUs running Linux) used as disk servers (joint ALICE/HP project)
- In total: ~10% of the ALICE DAQ

Slide 17: ALICE ADC III
[Diagram: the ADC III data chain: event building (DATE) -> recording (ROOT + RFIO) -> mass storage (CASTOR).]
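The recording step of this chain can be pictured with the minimal sketch below: raw events coming out of DATE are wrapped in ROOT objects and streamed through RFIO towards CASTOR. The event payload and the CASTOR path are hypothetical placeholders.

```cpp
// Minimal sketch of the "Recording (ROOT + RFIO)" step; the event structure
// and the CASTOR path are placeholders, not the real ADC III code.
#include "TFile.h"
#include "TTree.h"
#include "TObjString.h"   // stand-in payload; the real raw-event class differs

void recordEvents(int nEvents)
{
   // The "rfio:" prefix lets ROOT write through RFIO into the CASTOR namespace.
   TFile *f = TFile::Open("rfio:/castor/cern.ch/alice/adc3/run001.root",
                          "RECREATE");
   if (!f || f->IsZombie()) return;

   TObjString *event = new TObjString();          // placeholder for a raw event
   TTree tree("RAW", "ADC III raw events");
   tree.Branch("event", &event);

   for (int i = 0; i < nEvents; ++i) {
      event->SetString(Form("raw data block %d from DATE", i));
      tree.Fill();                                // ROOT I/O does the streaming
   }
   tree.Write();
   f->Close();                                    // the file then migrates to tape
}
```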

Slide 18: DATE + ROOT I/O
[Performance plot for the DATE + ROOT I/O chain.]

Slide 19: DATE + ROOT I/O + CASTOR
[Plot: writing to local disk and migration to tape.]

Slide 20: DATE + ROOT I/O + CASTOR

Slide 21: DATE + ROOT I/O + CASTOR

Slide 22: Highlights
- Excellent system stability during 3 months
- Max DATE throughput: 550 MB/s (max), 350 MB/s (ALICE-like traffic)
- Max DATE + ROOT throughput: 240 MB/s
- Max DATE + ROOT + CASTOR throughput: 120 MB/s; average over several days: 85 MB/s (> 50 TB/week)
- 2200 runs, 2x10^7 events, longest DATE run: 86 hours, 54 TB
- 500 TB through the DAQ, 200 TB through DAQ + ROOT I/O
- CASTOR: > files of 1 GByte => 110 TBytes
- Database for metadata with 10^5 entries
- HP SMPs: cost-effective alternative to inexpensive disk servers
- Online monitoring tools developed
- Concern: the LHC Computing Prototype should receive adequate resources for software and hardware

Slide 23: Performance and plans

Slide 24: ADC IV
- Planning and resources under discussion
- Individual components will be tested before the next ADC
- Goals:
  - increase performance (200 MB/s to tape, 1 GB/s through the switch)
  - study key issues such as computer and fabric architecture
  - include some L3-trigger functionality
  - involve 1 or 2 regional centres
- Global ADC in the second half of 2002 (new tape generation and 10 Gbit Ethernet)

Slide 25: DataGRID project status
- A huge project: 21 partners, 10 MEuro, 3 years, 12 WorkPackages
- The global design is still evolving: an ATF has been mandated to design the architecture of the system to be delivered at PM6 (= now!)
- To progress they need continuous feedback from the users
- Users are in three workpackages: HEP in WP8, Earth Observation in WP9, Biology in WP10
- The ALICE long-term project is distributed interactive data analysis

Slide 26: DataGrid & ROOT
[Diagram: distributed analysis with PROOF: a local session sends the selection parameters and the procedure (Proc.C) to remote PROOF CPUs, which process the distributed databases (DB1-DB6) using the Tag DB and RDB.]
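On the analysis side of this picture, the selection procedure ("Proc.C") is naturally expressed as a ROOT TSelector that PROOF ships to the remote CPUs and runs in parallel; the sketch below is illustrative, with a hypothetical histogram and an empty selection.

```cpp
// Hedged sketch of a PROOF selection: PROOF runs Process() on the remote
// CPUs for every selected event and merges the output list back on the
// client. Histogram name and cuts are illustrative only.
#include "TSelector.h"
#include "TH1F.h"

class EventSelector : public TSelector {
   TH1F *fPt = nullptr;                       // example output histogram
public:
   void SlaveBegin(TTree *) override {
      fPt = new TH1F("pt", "selected p_{T}", 100, 0., 10.);
      fOutput->Add(fPt);                      // merged back by PROOF
   }
   Bool_t Process(Long64_t entry) override {
      // ... read event 'entry', apply the selection parameters,
      //     fill the local output objects ...
      (void)entry;
      return kTRUE;
   }
   void Terminate() override { /* runs back on the client */ }
};
```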

Slide 27: DataGrid & ROOT
[Diagram: the envisaged Grid-enabled PROOF loop: the ROOT RDB and selection parameters query the TAG DB for selected events; the Grid Resource Broker, using the replica catalog, MDS information, performance logs/monitors and a cost evaluator, chooses the best places (LFNs, #hits) and spawns PROOF tasks via Grid authentication, network and replica-manager services; results are sent back, the ROOT RDB is updated with the output LFNs, and the whole loop is logged and monitored.]

Slide 28: Comments to the ROOT team
- Really no hard comments for the ROOT team
- The release policy on CVS could include a branch for every version: this would allow patches to be made to production versions, instead of being forced to take the HEAD to get a fix
[Diagram: proposed CVS scheme: a branch per release, with tags v4-01, v5-01, v6-01 on the branches and patch tags v4-02, v5-02, v6-02, alongside the HEAD.]

Slide 29: Conclusion
- The development of AliRoot continues happily
- The organisation of the Off-line project is more and more oriented towards an open-source, distributed development model
- ROOT fits perfectly in this scheme
- We are confident that ROOT will evolve to meet our requirements in 2007 and beyond
- We will try hard to have the recommendations of the LHC Computing Review implemented at CERN
- We are grateful to the ROOT team for their continued and excellent support