ALICE Computing TDR, Federico Carminati, June 29, 2005.


Slide 2: Layout
- Parameters
- Computing framework
- Distributed computing and Grid
- Project organisation and planning
- Resources needed
http://aliceinfo.cern.ch/NewAlicePortal/en/Collaboration/Documents/TDR/Computing.html

Slide 3 (outline): Parameters, Computing framework, Distributed computing and Grid, Project organisation and planning, Resources needed

Slide 4: Parameters (values quoted as pp / PbPb where they differ)
- T1 centres (#): 7
- T2 centres (#): 23
- Raw event size (MB): 0.2 x 5 / 12.5
- Recording rate (Hz): 100
- ESD size (MB): ...
- AOD size (kB): 4 / 250
- Event catalogue (kB): 10
- Running time (s): ...
- Events / y (#): ...
- Reconstruction passes, average (#): 3
- RAW duplication (#): 2
- AOD/ESD duplication (#): 2
- Scheduled analysis passes / reconstructed event / y, average (#): 3
- Chaotic analysis passes / reconstructed event / y, average (#): 20
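These parameters combine into the storage and CPU estimates shown later in the talk. Schematically (a generic relation implied by the table, not additional TDR numbers):

    RAW volume per year = recording rate x running time x raw event size x RAW duplication
                        = (events / y) x (raw event size) x (number of RAW copies)

with analogous products for the ESD and AOD volumes, using their sizes, duplication factors and the number of reconstruction and analysis passes per event.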

Slide 5 (outline): Parameters, Computing framework, Distributed computing and Grid, Project organisation and planning, Resources needed

Slide 6: Offline framework
- AliRoot in development since 1998
- Two main packages to install (ROOT and AliRoot)
- Ported to Linux (IA32/64 and AMD), Mac OS X (PPC and Intel), Digital Tru64, SunOS...
- Over 50 developers (30% at CERN), one CVS repository
- Integration with DAQ (data recorder) and HLT (via CVS)
- Abstract interfaces for modularity
- Subset of C++ used for maximum portability

Slide 7: AliRoot layout (diagram)
- AliRoot is built on top of ROOT, with the STEER module providing the framework core (AliSimulation, AliReconstruction, AliAnalysis, ESD)
- The Virtual MC interfaces the transport engines G3, G4 and FLUKA
- Event generators: HIJING, MEVSIM, PYTHIA6, PDF, EVGEN, HBTP, HBTAN, ISAJET
- Detector modules: ITS, TPC, TRD, TOF, RICH, PHOS, EMCAL, MUON, PMD, FMD, ZDC, START, CRT, RALICE, STRUCT
- Analysis package JETAN; Grid access through EGEE + AliEn

Slide 8: Software management
- Major release about every six months, minor release (tag) about monthly
- Emphasis on delivering production code
- Nightly builds produce UML diagrams, code listings, coding-rule violations, and build and test results
- No version-management software
- Development of new coding tools (with IRST): smell detection, aspect-oriented programming, genetic testing

Slide 9: Detector Construction Database (DCDB)
- Detector construction in a distributed environment:
  - Sub-detector groups work independently
  - Data collected in a central repository to facilitate movement of components between groups and during integration and operation
- Different user interfaces: WEB portal, LabView, XML, ROOT for visualisation
- In production since 2002
- Important spin-offs: Cable Database, Calibration Database

Slide 10: Simulation
- Simulation performed with G3 up to now; will move to FLUKA and G4
- The VMC insulates users from the transport MC
- New geometrical modeller in production
- Physics Data Challenge '05 with FLUKA
- Interface with G4 designed (4Q 2005?)
- Test-beam validation activity ongoing

Slide 11: The Virtual MC (diagram)
- User code sees only the VMC and the geometrical modeller
- The VMC dispatches transport to G3, G4 or FLUKA
- Reconstruction, visualisation and the event generators plug into the same framework
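The insulation of user code from the concrete transport engine is the key idea here. The sketch below illustrates it with a minimal C++ abstract interface; the class and method names are invented for illustration and are not the actual TVirtualMC API of ROOT/AliRoot.

```cpp
// Minimal illustration of the Virtual MC idea: user code programs against an
// abstract transport interface, and G3, G4 or FLUKA is selected at run time.
// Names are illustrative only, not the real TVirtualMC interface.
#include <iostream>
#include <memory>
#include <string>

class VirtualTransport {               // abstract transport engine
public:
  virtual ~VirtualTransport() = default;
  virtual void BuildGeometry() = 0;    // delegate to the geometrical modeller
  virtual void ProcessEvent() = 0;     // transport one event
};

class Geant3Transport : public VirtualTransport {
public:
  void BuildGeometry() override { std::cout << "G3 geometry\n"; }
  void ProcessEvent()  override { std::cout << "G3 transport\n"; }
};

class FlukaTransport : public VirtualTransport {
public:
  void BuildGeometry() override { std::cout << "FLUKA geometry\n"; }
  void ProcessEvent()  override { std::cout << "FLUKA transport\n"; }
};

// User simulation code depends only on the abstract interface.
void RunSimulation(VirtualTransport& mc, int nEvents) {
  mc.BuildGeometry();
  for (int i = 0; i < nEvents; ++i) mc.ProcessEvent();
}

int main(int argc, char** argv) {
  const std::string engine = (argc > 1) ? argv[1] : "G3";
  std::unique_ptr<VirtualTransport> mc;
  if (engine == "FLUKA") mc = std::make_unique<FlukaTransport>();
  else                   mc = std::make_unique<Geant3Transport>();
  RunSimulation(*mc, 3);               // the same user code runs with any engine
  return 0;
}
```

Swapping Geant3Transport for FlukaTransport changes nothing in RunSimulation, which is exactly the property the VMC gives AliRoot users.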

Slide 12: TGeo modeller

Slide 13: Results (plots)
- Comparison of Geant3 and FLUKA for protons of a few GeV/c in a magnetic field
- HMPID response to 5 GeV pions

Slide 14: Reconstruction strategy
- Very high flux, TPC occupancy up to 40%
- Maximum-information approach
- Optimisation of access and use of information:
  - Localise relevant information
  - Keep this information until it is needed

Slide 15: Tracking and PID
- CPU times on a PIV 3 GHz for dN/dy = 6000:
  - TPC tracking ~40 s
  - TPC kink finder ~10 s
  - ITS tracking ~40 s
  - TRD tracking ~200 s
- (Plots: tracking efficiency and contamination for TPC alone versus ITS+TPC+TOF+TRD; PID with ITS, TPC and TOF)

Slide 16: Condition and alignment
- Heterogeneous sources periodically polled
- ROOT files with condition information created
- Published on the Grid and distributed as needed by the Grid DMS
- Files contain validity information and are identified via DMS metadata
- No need for a distributed DBMS
- Reuse of the existing Grid services
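A minimal sketch of what such a condition file could look like, assuming nothing beyond plain ROOT I/O (the object names, validity encoding and numerical values are placeholders, not the AliRoot calibration classes):

```cpp
// Sketch: store a set of calibration constants together with their run
// validity in a plain ROOT file. The validity string and detector name can
// then be mirrored as metadata in the Grid file catalogue, so the file is
// found by validity queries without any distributed DBMS.
// All names and values below are placeholders.
#include "TFile.h"
#include "TNamed.h"
#include "TObjArray.h"
#include "TParameter.h"

void writeConditions() {
  TObjArray payload;
  payload.SetOwner(kTRUE);
  payload.Add(new TParameter<double>("TPC_drift_velocity", 2.64));  // placeholder
  payload.Add(new TParameter<double>("TPC_gain", 1.02));            // placeholder

  // Validity interval carried inside the file itself.
  TNamed validity("validity", "firstRun=1000 lastRun=2000 version=1");

  TFile out("TPC_Calib_Run1000_2000.root", "RECREATE");
  payload.Write("payload", TObject::kSingleKey);   // the whole array as one key
  validity.Write();
  out.Close();
}
```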

Slide 17: External relations and DB connectivity (diagram)
- Calibration procedures take physics data from DAQ, trigger, HLT, DCS and ECS, plus DCDB content, and produce calibration files
- The calibration files go to the Grid file store, with metadata in the Grid file catalogue, and are read back by AliRoot through calibration classes and an API (Application Program Interface)
- From the user requirements (URs): source, volume, granularity, update frequency, access pattern, runtime environment and dependencies
- A collection of URs from the detectors is being compiled

Slide 18: Metadata
- Essential for the selection of events
- Grid file catalogue used for file-level metadata
- Event-level metadata needed
- Collaboration with STAR
- Prototype in preparation for PDC'05, phase III
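As an illustration of what event-level metadata can look like with plain ROOT (the tag fields, tree and branch names below are invented, not the ALICE/STAR tag schema):

```cpp
// Sketch: a compact "tag" per event stored in a ROOT TTree, so that events
// can be selected by run, trigger or multiplicity without reading the ESDs.
// Field, tree and branch names are placeholders.
#include "TFile.h"
#include "TTree.h"

struct EventTag {
  Int_t  run;
  Int_t  event;
  UInt_t triggerMask;
  Int_t  nTracks;
};

void writeTags() {
  EventTag tag;
  TFile out("event_tags.root", "RECREATE");
  TTree tree("tags", "event-level metadata");
  tree.Branch("run",         &tag.run,         "run/I");
  tree.Branch("event",       &tag.event,       "event/I");
  tree.Branch("triggerMask", &tag.triggerMask, "triggerMask/i");
  tree.Branch("nTracks",     &tag.nTracks,     "nTracks/I");

  for (int i = 0; i < 1000; ++i) {          // fake events, illustration only
    tag.run         = 1234;
    tag.event       = i;
    tag.triggerMask = (i % 2) ? 1u : 2u;
    tag.nTracks     = 50 + i % 200;
    tree.Fill();
  }
  tree.Write();
  out.Close();
}

// Selecting events then becomes a query on the tag tree, e.g. interactively:
//   tags->Scan("run:event", "(triggerMask & 1) && nTracks > 100");
```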

Slide 19: ALICE CDCs (date / MBytes/s / TBytes to MSS / offline milestone)
- 10/...: rootification of raw data; raw data for TPC and ITS
- 9/...: integration of single-detector HLT code; partial data replication to remote centres
- 5/2004 to 4/...: HLT prototype for more detectors; remote reconstruction of partial data streams; raw digits for barrel and MUON
- 5/...: prototype of the final remote data replication (raw digits for all detectors)
- 5/...: final test (final system), at an equal or higher rate

Slide 20 (outline): Parameters, Computing framework, Distributed computing and Grid, Project organisation and planning, Resources needed

Slide 21: ALICE Physics Data Challenges (period / fraction of the final capacity / physics objective)
- 06/01 to 12/01, 1%: pp studies, reconstruction of TPC and ITS
- 06/02 to 12/02, 5%: first test of the complete chain from simulation to reconstruction for the PPR; simple analysis tools; digits in ROOT format
- 01/04 to 06/04, 10%: complete chain used for trigger studies; prototype of the analysis tools; comparison with parameterised Monte Carlo; simulated raw data
- 06/05 to 12/05, 15%: test of condition infrastructure and FLUKA; to be combined with SDC 3; speed test of distributing data from CERN
- 01/06 to 06/06(?), 20%: test of the final system for reconstruction and analysis

Slide 22: PDC04 schema (diagram)
- Production of RAW at CERN and at the Tier1/Tier2 sites
- Shipment of RAW to CERN
- Reconstruction of RAW in all T1s
- Analysis
- AliEn provides job control and data transfer throughout

Slide 23: Goals, structure and tasks
- Validate the computing model with ~10% of the SDTY data
- Use the offline chain, the Grid, PROOF and the ALICE ARDA prototype
1. Production of underlying Pb+Pb and p+p events: completed June
2. Mixing of different signal events with underlying Pb+Pb events (up to 50 times): completed September
3. Distributed analysis: delayed

Slide 24: Phase 2 job structure (diagram)
- Goal: simulate the event reconstruction and remote event storage; completed Sep. 2004
- Central AliEn servers handle master-job submission, the job optimizer (N sub-jobs), the RB, the file catalogue, process monitoring and control, and the SEs
- Sub-jobs are processed on AliEn CEs and, via the AliEn-LCG interface, on LCG CEs with their own RB
- Underlying-event input files are read from CERN CASTOR; output files are zipped into archives, stored as primary copies on local SEs, with a backup copy in CERN CASTOR
- Outputs on LCG SEs are registered in the AliEn FC with LCG LFN = AliEn PFN, via edg(lcg) copy and register

Slide 25: Summary
- Monitoring: ~... parameters, 7 GB of data, 24M records at minute granularity
- ... jobs, 6 hours/job, 750 MSi2K hours
- 9M entries in the AliEn file catalogue
- 4M physical files at 20 AliEn SEs world-wide
- 30 CASTOR, 10 SEs + 10 TB
- 200 TB network transfer CERN to remote centres

Slide 26: Job repartition
- Jobs (AliEn/LCG): Phase 1: 75%/25%; Phase 2: 89%/11%
- More sites added as the PDC progressed
- 17 sites (33 in total) under direct AliEn control, plus additional resources through Grid federation (LCG and INFN Grid)
- (Charts: job share per site in Phase 1 and Phase 2)

Slide 27: Summary of PDC'04
- Computer centres: tuning of the environment; availability of resources
- Middleware: Phases 1 and 2 successful; AliEn fully functional; LCG not yet ready; no AliEn development for Phase 3, LCG not ready
- Computing model validation: AliRoot worked well; data analysis partially tested on local CN; distributed analysis prototype demonstrated

Slide 28: Development of analysis
- ROOT plus a small library working on AODs and ESDs
- Work on distributed analysis from ARDA
- Batch prototype tested at the end of 2004
- Interactive prototype demonstrated at the end of 2004
- Physics Working Groups are providing requirements
- Planning for a fast analysis facility at CERN
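A rough idea of what "ROOT plus a small library on ESDs and AODs" means in practice is sketched below; the tree name, branch layout and file pattern are placeholders, not the real AliRoot ESD schema:

```cpp
// Sketch of a batch analysis: chain ESD files and loop over the events,
// filling a histogram. Tree and branch names are placeholders.
#include "TChain.h"
#include "TFile.h"
#include "TH1F.h"

void analyseESD() {
  TChain chain("esdTree");                // placeholder tree name
  chain.Add("AliESDs_*.root");            // locally resolved list of ESD files

  Float_t pt = 0;                         // placeholder flat branch
  chain.SetBranchAddress("trackPt", &pt);

  TH1F hPt("hPt", "track p_{T};p_{T} (GeV/c);entries", 100, 0., 10.);
  const Long64_t nEntries = chain.GetEntries();
  for (Long64_t i = 0; i < nEntries; ++i) {
    chain.GetEntry(i);
    hPt.Fill(pt);
  }

  TFile out("analysis.root", "RECREATE");
  hPt.Write();
  out.Close();
}
```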

Slide 29: Distributed PROOF setup (diagram)
- A PROOF client talks to the PROOF master through grid service interfaces: grid/ROOT authentication, a grid access control service, a TGrid UI/queue and the grid file/metadata catalogue
- The client retrieves the list of logical files (LFN + MSN) and sends a booking request with the logical file names
- A slave registration/booking DB starts proofd on the sites; PROOF slave servers at site A and site B join a "standard" PROOF session steered by the master (LCG PROOF steering, master setup)
- Slave ports are mirrored on the master host; an optional site gateway (forward proxy, rootd/proofd) is foreseen
- The PROOF setup is grid-middleware independent and requires only outgoing connectivity
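From the user side, the interactive prototype boils down to pointing the same analysis at a PROOF master instead of looping locally; a minimal usage sketch (master host, file pattern and selector name are placeholders):

```cpp
// Sketch: run the analysis through PROOF. The per-event code lives in a
// TSelector that PROOF compiles and distributes to the slave servers.
// Host, file pattern and selector names are placeholders.
#include "TChain.h"
#include "TProof.h"

void runProof() {
  TProof::Open("master.example.org");   // connect to a (placeholder) PROOF master

  TChain chain("esdTree");              // same placeholder tree as in the batch case
  chain.Add("AliESDs_*.root");
  chain.SetProof();                     // route Process() through the PROOF session

  chain.Process("MySelector.C+");       // placeholder selector
}
```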

Slide 30: ALICE computing model
- pp:
  - Quasi-online data distribution and first reconstruction at T0
  - Further reconstructions at the T1s
- AA:
  - Calibration, alignment and pilot reconstructions during data taking
  - Data distribution and first reconstruction at T0 during the four months after the AA run
  - Further reconstructions at the T1s
- One copy of RAW at T0 and one distributed over the T1s

Slide 31: ALICE computing model
- T0: first-pass reconstruction; storage of one copy of RAW, calibration data and first-pass ESDs
- T1: reconstructions and scheduled analysis; storage of the second, collective copy of RAW and of one copy of all data to be kept; disk replicas of ESDs and AODs
- T2: simulation and end-user analysis; disk replicas of ESDs and AODs
- The network load is difficult to estimate

Slide 32: ALICE T1s and T2s
- T1s for ALICE: CERN, CCIN2P3, CNAF, GridKa, NIKHEF, NDGF, RAL, USA (under discussion)
- T2s for ALICE: CERN, CCIN2P3, Nantes, Clermont-Ferrand, Paris, Bari, Catania, Legnaro, Torino, GSI, Russia, Prague, Korea, Kolkata, Wuhan, Cape Town, USA, UK Grid, Athens

Slide 33: ALICE MW requirements
- Baseline Services available on LCG (in three flavours?)
- An agreed standard procedure to deploy and operate VO-specific services
- The tests of the integration of the components have started

Slide 34 (outline): Parameters, Computing framework, Distributed computing and Grid, Project organisation and planning, Resources needed

Slide 35: Offline project organisation (chart)
- Offline Coordination (Offline Coordinator, Deputy PL): resource planning, relations with the funding agencies, relations with the C-RRB
- Core Computing and Software, with EU and US Grid coordination, interacting with the International Offline Board, the Management Board, the Computing Board (chair: Computing Coordinator), the Regional Tiers, the detector projects, the software projects, DAQ, HLT and LCG (SC2, PEB, GDB, POB)
- Production Environment Coordination: production environment (simulation, reconstruction and analysis), distributed computing environment, database organisation
- Framework and Infrastructure Coordination: framework development (simulation, reconstruction and analysis), persistency technology, computing data challenges, industrial joint projects, technology tracking, documentation
- Simulation Coordination: detector simulation, physics simulation, physics validation, GEANT4 integration, FLUKA integration, radiation studies, geometrical modeller
- Reconstruction and Physics Software Coordination: tracking, detector reconstruction, global reconstruction, analysis tools, analysis algorithms, physics data challenges, calibration and alignment algorithms

Slide 36: Core Computing staffing (chart)
- The Computing Project comprises Core Computing (core software, infrastructure and services, offline coordination, central support), subdetector software and physics analysis software, funded through the computing project, the detector projects, the physics WGs and M&O-A
- Staffing layers: CERN Core Offline, Extended Core Offline, and offline activities in the other projects
- The call to the collaboration has been successful
- Four people have been sent to CERN to help with Core Computing
- Securing a few long-term positions is still a very high priority

Slide 37: Milestones
- Jun 05: prototype of the condition infrastructure
- Jun 05: FLUKA MC in production with the new geometrical modeller
- Jun 05: Computing TDR completed
- Aug 05: PDC'05 simulation with Service Data Challenge 3
- Sep 05: MetaData prototype infrastructure ready
- Nov 05: analysis of PDC'05 data
- Dec 05: condition infrastructure deployed
- Dec 05: alignment and calibration algorithms for all detectors
- Jun 06: PDC'06 successfully executed (depends on SDC 4)
- Jun 06: alignment and calibration final algorithms
- Dec 06: AliRoot ready for data taking

Slide 38 (outline): Parameters, Computing framework, Distributed computing and Grid, Project organisation and planning, Resources needed

Slide 39: Summary of needs
- CPU (MSI2k): Tier0 8.3, Tier1 18.7, Tier1 ex 12.3, Tier2 21.4, Tier2 ex 14.4, total 35.0, CERN 8.3 (of the total: Tier1 ex 35%, Tier2 ex 41%, CERN 24%)
- Disk (PB): Tier0 0.2, Tier1 8.6, Tier1 ex 7.4, Tier2 5.3, Tier2 ex 5.1, total 14.1, CERN 1.7 (of the total: 2%, 60%, 52%, 38%, 36%, 12%)
- MS (PB/y): Tier0 2.5, Tier1 8.1, Tier1 ex 6.9, total 10.6, CERN 3.6 (of the total: 23%, 77%, 66%, 34%)
- Network in (Gb/s): Tier0 8.00, Tier1 2.00, Tier2 0.01
- Network out (Gb/s): Tier0 6.00, Tier1 1.50, Tier2 0.27
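Reading "Tier1 ex" and "Tier2 ex" as the shares outside CERN (a plausible interpretation, not spelled out on the slide), the figures are self-consistent: total CPU is 8.3 (T0) + 12.3 (T1 ex) + 14.4 (T2 ex) = 35.0 MSI2k, and CERN's 8.3 MSI2k is indeed the quoted 24% of that total.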

Slide 40: Capacity profile (chart)

Slide 41: Summary of pledged resources
- Tier1: CPU (MSI2k) 0.48, 1.03, 2.91, 8.94, 14.88, 14.81; Disk (PB) 0.09, 0.50, 1.53, 3.48, 5.70, 5.88; MS (PB) 0.11, 0.85, 2.53, 5.86, 9.93, 8.59
- Tier2: CPU (MSI2k) 1.82, 2.92, 4.81, 6.18, 8.34, 9.01; Disk (PB) 0.28, 0.61, 1.07, 1.68, 2.58, 3.41
- Pledged versus required: CPU 61%, 48%, 56%, 44%; Disk 47%, 37%, 46%, 40%; MS 78%, 72%, 94%, 63%

Slide 42: Conclusions
- The ALICE choices for the computing framework have been validated by experience
  - The Offline development is on schedule, although contingency is scarce
- Collaboration between physicists and computer scientists is excellent
- Integration with ROOT allows a fast prototyping and development cycle
- Early availability of the Baseline Grid services and their integration with the ALICE-specific services will be crucial
  - This is a major risk factor for ALICE readiness
  - The development of the analysis infrastructure is particularly late
- The manpower situation for the core team has been stabilised thanks to the help from the collaboration
- The availability of a few long-term positions at CERN is of the highest priority