Introduction and Overview of the ALICE Computing Model
F. Carminati, ALICE Computing Model Workshop, December 9-10, 2004

Objective of the meeting
- Present the current status of the ALICE computing model
- Receive feedback from the Collaboration
- Receive endorsement for the draft to be presented to the LHCC review committee
- Start the process that will lead to the Computing TDR

Timeline
- December 15: the draft computing model and the projected needs are presented to the LHCC review committee
- January: LHCC review with sessions devoted to each of the experiments and a close-out session
  - Monday, 17 January: ATLAS (a.m.), CMS (p.m.)
  - Tuesday, 18 January: ALICE (a.m.), LHCb (p.m.)
  - Wednesday, 19 January: closed session (a.m.)

Computing TDRs
- LCG TDR
  - Next meeting in the first half of 2005
  - First draft 11 April, good copy 9 May
  - 15 June: final TDR to the LHCC (LHCC meeting in June)
  - 3 June: ready for approval by the PEB on 7 June
- ALICE Computing TDR
  - Early draft given to the LHCC on December 15
  - Draft presented and distributed to the Collaboration during the ALICE/offline week in February
  - Final discussion and approval during the ALICE/offline week at the beginning of June

Computing MoU
- Distributed to the Collaboration management on October 1 to obtain feedback
- Coupled with the LHCC review in February
- Provide the C-RRB with documents that can be finalised and approved at its April 2005 meeting
- Subsequently distributed for signature

Mandate of the February LHCC review
"In the context of the preparation of the Computing MoUs and TDRs, the LHC experiments have come forward with estimated computing capacity requirements in terms of disks, tapes, CPUs and networks for the Tier-0, Tier-1 and Tier-2 centres. The numbers vary in many cases (mostly upwards) from those submitted to the LHC Computing Review in 2001 [...] it is felt to be desirable at this stage to seek an informed, independent view on the reasonableness of the present estimates. [...] the task of this Review is thus to examine critically, in close discussion with the computing managements of the experiments, the current estimates and report on their validity in the light of the presently understood characteristics of the LHC experimental programme. The exercise will therefore not be a review of the underlying computing architecture."

Membership
- Chairman: J. Engelen, CERN Chief Scientific Officer
- Representatives from the LHCC: F. Forti, P. McBride, T. Wyatt
- External: E. Blucher (Univ. Chicago), N.N.
- LHCC Chairman and Secretary: S. Bertolucci, E. Tsesmelis
- PH Department: J.-J. Blaising, D. Schlatter
- IT Department: J. Knobloch, L. Robertson, W. von Rueden

Elements of the computing model
- From detector to RAW data (see P. Vande Vyvre's talk)
- Framework & software management
- Simulation
- Reconstruction
- Condition infrastructure
- Analysis
- Grid middleware & distributed computing environment
- Project management & planning
- From RAW data to physics analysis (see Y. Schutz's talk)

Framework
- AliRoot in development since 1998
  - Entirely based on ROOT
  - Already used for the detector TDRs
- Two packages to install (ROOT and AliRoot)
  - Plus the transport MCs
- Ported to several architectures (Linux IA32 and IA64, Mac OS X, Digital Tru64, SunOS, ...)
- Distributed development
  - Over 50 developers and a single CVS repository
- Tight integration with DAQ (data recorder) and HLT (same codebase)

AliRoot layout
[Diagram: ROOT as the foundation; AliRoot built on top with the STEER core; the Virtual MC interfacing G3, G4 and FLUKA; event generators (HIJING, MEVSIM, PYTHIA6, PDF, EVGEN, HBTP, HBTAN, ISAJET); detector modules (EMCAL, ZDC, ITS, PHOS, TRD, TOF, RICH, PMD, CRT, FMD, MUON, TPC, START, ALICE STRUCT); AliSimulation and AliReconstruction producing the ESD consumed by AliAnalysis; AliEn/gLite for distributed computing]

Software management
- Regular release schedule
  - Major release every six months, minor release (tag) every month
- Emphasis on delivering production code
  - Corrections, protections, code cleaning, geometry
- Nightly-produced UML diagrams, code listing, coding-rule violations, build and tests; a single repository with all the code
- No version management software (we have only two packages!)
- Advanced code tools under development with IRST (Italy)
  - Aspect-oriented programming
  - Smell detection
  - Automated testing

Simulation
- Simulation performed with Geant3 until now
- The Virtual MonteCarlo interface separates the ALICE code from the MonteCarlo used
- New geometrical modeller scheduled to enter production at the beginning of 2005
- Interface with FLUKA finishing validation
- The Physics Data Challenge 2005 will be performed with FLUKA
- Interface with Geant4 ready to be implemented
  - Second half of 2005 (?)
- Test-beam validation activity started

The Virtual MC
[Diagram: user code sits on top of the VMC and the geometrical modeller; the VMC dispatches transport to G3, G4 or FLUKA; generators, reconstruction and visualisation plug into the same framework]
- Geant3.tar.gz includes an upgraded Geant3 with a C++ interface
- Geant4_mc.tar.gz includes the TVirtualMC Geant4 interface classes
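The value of the virtual-MC design is that user code never names a concrete transport engine. A minimal sketch of that pattern, in Python for brevity; the class and method names are illustrative, not the actual TVirtualMC API:

```python
from abc import ABC, abstractmethod

class VirtualMC(ABC):
    """Illustrative stand-in for a TVirtualMC-like interface:
    user code depends only on this class, never on a concrete engine."""
    @abstractmethod
    def transport(self, particle: str) -> str: ...

class Geant3MC(VirtualMC):
    def transport(self, particle: str) -> str:
        return f"G3 transported {particle}"

class FlukaMC(VirtualMC):
    def transport(self, particle: str) -> str:
        return f"FLUKA transported {particle}"

def run_simulation(mc: VirtualMC, particles):
    # The same user code runs unchanged with any transport engine.
    return [mc.transport(p) for p in particles]
```

Swapping Geant3 for FLUKA then means constructing a different engine object, with no change to the simulation code itself.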

HMPID: 5 GeV pions
[Comparison plots: Geant3 vs FLUKA]

TGeo modeller

Reconstruction strategy
- Main challenge: reconstruction in high-flux environments (occupancy in the TPC up to 40%) requires a new approach to tracking
- Basic principle: maximum information. Use everything you can and you will get the best result
- Algorithms and data structures optimised for fast access and usage of all relevant information
  - Localise relevant information
  - Keep this information until it is needed

Tracking strategy – primary tracks
- Iterative process
  - Forward propagation towards the vertex: TPC, then ITS
  - Back propagation: ITS, TPC, TRD, TOF
  - Refit inward: TOF, TRD, TPC, ITS
- Continuous seeding: track-segment finding in all detectors
[Diagram: track through the ITS, TPC, TRD and TOF layers]
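Each propagation pass attaches clusters to the running track estimate with a Kalman filter. A toy one-dimensional version of that update step, as a sketch only (the function names and numbers are ours, not the AliRoot tracker):

```python
def kalman_update(x, var, meas, meas_var):
    """One Kalman filter update: blend the current estimate (x, var)
    with a measurement; the variance shrinks after each cluster."""
    k = var / (var + meas_var)      # Kalman gain
    return x + k * (meas - x), (1.0 - k) * var

def propagate_track(x0, var0, clusters, cluster_var):
    """Attach clusters one by one to a track estimate, as each
    propagation pass above does (toy 1-D model, no real geometry)."""
    x, var = x0, var0
    for c in clusters:
        x, var = kalman_update(x, var, c, cluster_var)
    return x, var
```

Running the filter outward and then refitting inward, as on the slide, amounts to repeating such passes over the same clusters in opposite order.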

Sources of information
- Spatial characteristics of a track and sets of tracks
  - px, py, pz, y, z parameters and covariance
  - chi2
  - number of points on the track
  - number of shared clusters on the track
  - overlaps between tracks
  - DCA for V0s, kinks and cascades
  - ...
- dE/dx
  - mean, sigma, number of points, number of shared points, ...
  - reliability
- TOF of a track and sets of tracks
- Derived variables
  - Mass
  - Causality: probability that the particle "really exists" in some space interval (used for causality cuts), based on cluster occurrence and chi2 before/after the vertex
  - Invariant mass
  - Pointing angle of a neutral mother particle
  - ...

ITS tracking
- Follow the TPC seeds into a tree of track hypotheses connecting the reconstructed clusters, accounting for:
  - tracks in the dead zone
  - missing clusters (dead or noisy channels, clusters below threshold)
  - the probability that a secondary track does not cross an ITS layer, as a function of the impact parameter in z and r-φ
  - the probability of a cluster to be shared, as a function of the cluster shape
- A restricted number of tracks is kept for the subsequent parallel tracking procedure
  - For secondary tracks, short best tracks are also kept for further V0 studies
- The best track is registered to all the clusters which belong to it
- The overlap between the best track and every other track is calculated; if it is above threshold, the χ2 of the pair of tracks is calculated

ITS – parallel tracking (2)
- Double loop over all possible pairs of branches
- Weighted χ2 of the two tracks calculated
  - The effective probability of cluster sharing, and for secondary particles the probability of not crossing a given layer, are taken into account
[Diagram: best track 1 and best track 2 in conflict over shared clusters]
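The double loop above can be sketched in a few lines. This is an illustrative toy, not ALICE code: the penalty term and the 0.1 sharing probability are assumptions standing in for the effective probabilities mentioned on the slide.

```python
import math
from itertools import product

def pair_weighted_chi2(chi2_a, chi2_b, n_shared, share_prob=0.1):
    """Toy weighted chi2 for a pair of track hypotheses: each shared
    cluster adds a penalty driven by an assumed sharing probability."""
    return chi2_a + chi2_b - 2.0 * n_shared * math.log(share_prob)

def best_branch_pair(branches_1, branches_2, shared_count):
    """Double loop over all pairs of branches, as on the slide,
    keeping the pair with the smallest weighted chi2."""
    best = None
    for b1, b2 in product(branches_1, branches_2):
        w = pair_weighted_chi2(b1["chi2"], b2["chi2"], shared_count(b1, b2))
        if best is None or w < best[0]:
            best = (w, b1["id"], b2["id"])
    return best
```

A pair with many shared clusters can thus lose to a pair with slightly worse individual fits, which is the point of weighting the χ2.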

Results – tracking efficiency (TPC)
Timing on a PIV 3 GHz (dN/dy = 6000):
- TPC tracking: ~40 s
- TPC kink finder: ~10 s
- ITS tracking: ~40 s
- TRD tracking: ~200 s

Kink finder efficiency
- Efficiency for kaons as a function of decay radius
- Left side: low multiplicity (dN/dy ~ 2000), 2000 kaons
- Right side: the same events merged with a central event (dN/dy ~ 8000)

PID combined over several detectors
Probability to be a particle of type i (i = e, μ, π, K, p, ...), if we observe a vector S = {s_ITS, s_TPC, s_TOF, ...} of PID signals:

  W(i|S) = C_i R(S|i) / Σ_k C_k R(S|k)

where R(S|i) is the combined response function built from the single-detector responses r(s_ITS|i), r(s_TPC|i), r(s_TOF|i), ...
- The C_i are the same as in the single-detector case (or even something reasonably arbitrary like C_e ~ 0.1, C_μ ~ 0.1, C_π ~ 7, C_K ~ 1, ...)
- The functions R(S|i) are not necessarily "formulas" (they can be "procedures")
- Some other effects (like mis-measurements) can be accounted for
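The Bayesian combination above is short enough to sketch directly. This illustration assumes the combined response factorises into per-detector responses; the function name and all numbers are ours, not ALICE code:

```python
def combined_pid(priors, responses):
    """Bayesian PID combination over several detectors:
    W(i|S) proportional to C_i * product over detectors of r(s_d|i).
    `priors` maps species -> C_i; `responses` is a list of per-detector
    dicts mapping species -> r(s_d|i). Illustrative numbers only."""
    weights = {}
    for i, c in priors.items():
        w = c
        for det in responses:
            w *= det[i]
        weights[i] = w
    norm = sum(weights.values())
    return {i: w / norm for i, w in weights.items()}
```

With rough priors such as C_π ~ 7 and C_K ~ 1, a detector response only mildly favouring the kaon hypothesis can still leave the pion hypothesis most probable, which is why the priors matter.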

PID combined over ITS, TPC and TOF (kaons)
[Plots: efficiency and contamination for ITS, TPC, TOF and the combined ITS & TPC & TOF selection, central Pb-Pb HIJING events]
- The efficiency of the combined PID is higher than (or equal to), and the contamination lower than (or equal to), those given by any of the detectors stand-alone

HLT monitoring
[Data-flow diagram: AliRoot simulation produces digits and raw data; LDCs feed the GDC/event builder; alimdc writes ROOT files to CASTOR and registers them in AliEn; online monitoring produces ESDs and histograms]

Condition databases
- Information sources stored in heterogeneous databases
- A program periodically polls all sources and creates ROOT files with the condition information
- These files are published on the Grid
- Distribution of the files is done by the Grid DMS
- Files are identified via DMS metadata
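The polling scheme above can be sketched as a single function. This is a toy illustration: the `publish` callback stands in for writing a ROOT file and registering it, with metadata, in the Grid DMS, and all names are ours:

```python
import json
import time

def poll_sources(sources, publish):
    """One polling cycle over heterogeneous condition sources.
    `sources` maps a source name to a zero-argument reader; the
    snapshot of all readings becomes one condition record, handed
    to `publish` (stand-in for ROOT-file creation + DMS registration)."""
    snapshot = {name: read() for name, read in sources.items()}
    record = {"taken_at": time.time(), "conditions": snapshot}
    publish(json.dumps(record, sort_keys=True, default=str))
    return record
```

A scheduler would call this periodically; each published record is then a self-contained, Grid-distributable snapshot identified by its metadata.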

External relations and DB connectivity
[Diagram: DAQ, Trigger, DCS, ECS and HLT exchange physics data and calibration information with DCDB and AliEn/gLite (metadata, file store, calibration procedures, calibration files); AliRoot calibration classes access the files through an API]
- From the URs: source, volume, granularity, update frequency, access pattern, runtime environment and dependencies
- API = Application Program Interface
- Relations between DBs not final, not all shown
- Call for URs to come!

Development of analysis
- Analysis Object Data (AOD) designed to be analysis oriented
  - Contains only the data needed for analysis
  - Designed for efficiency of the analysis
- Analysis à la PAW
  - ROOT + at most a small ...
- Work on the infrastructure done by the ARDA project
- Batch analysis infrastructure
  - Prototype end 2004
- Interactive analysis infrastructure
  - Demonstration end 2004
- Physics working groups just starting here
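The AOD idea, keeping only what the analysis needs out of the full reconstruction output, can be illustrated with two toy record types. The fields and the cut are invented for the example; this is the design principle, not ALICE's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ESDTrack:
    """Full reconstruction output (illustrative fields only)."""
    px: float
    py: float
    pz: float
    chi2: float
    n_clusters: int

@dataclass
class AODTrack:
    """Analysis-oriented subset of the ESD track."""
    px: float
    py: float
    pz: float

def esd_to_aod(esd_tracks: List[ESDTrack], min_clusters: int = 50) -> List[AODTrack]:
    # Keep only what the analysis needs, applying a quality cut on the way.
    return [AODTrack(t.px, t.py, t.pz)
            for t in esd_tracks if t.n_clusters >= min_clusters]
```

Because the AOD drops reconstruction-level detail, it is much smaller than the ESD and can be read repeatedly during interactive analysis.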

PROOF on the Grid
[Diagram: the client authenticates through Grid/ROOT authentication and the Grid access-control service; it retrieves the list of logical files (LFN + MSN) from the Grid file/metadata catalogue and sends a booking request with logical file names through the TGrid UI/queue and booking DB; a "standard" PROOF session then runs, with the PROOF master steering slave servers at sites A and B (optionally behind site gateways and forward proxies), slave ports mirrored on the master host, and only outgoing connectivity; new elements: Grid service interfaces, LCG PROOF steering, a Grid-middleware-independent PROOF setup]


The ALICE Grid (AliEn)
- Development path: functionality + simulation (first production, distributed simulation); interoperability + reconstruction (Physics Performance Report, mixing & reconstruction); performance, scalability, standards + analysis (10% Data Challenge)
- There are millions of lines of code in open source dealing with Grid issues: why not use them to build the minimal Grid that does the job?
  - Fast development of a prototype, possibility to restart from scratch, etc.
- Hundreds of users and developers
- Immediate adoption of emerging standards
- AliEn built by ALICE (5% of the code developed, 95% imported), now evolving into gLite

Why physics data challenges?
- We need simulated events to exercise physics reconstruction and analysis
- We need to exercise the code and the computing infrastructure to define the parameters of the computing model
- We need a serious evaluation of the Grid infrastructure
- We need to exercise the readiness of the collaboration to take and analyse data

PDC04 schema
[Diagram: CERN plus Tier-1 and Tier-2 sites, with AliEn job control and data transfer]
- Production of RAW
- Shipment of RAW to CERN
- Reconstruction of RAW in all T1s
- Analysis
- DO IT ALL ON THE GRID!!!!

Merging
[Illustration: a signal-free event merged with signal to produce a mixed-signal event]

Phase II (started 1/07) – statistics
- In addition to Phase I:
  - Distributed production of signal events and merging with Phase I events
  - Stress of the network and file-transfer tools
  - Storage at remote SEs and stability (crucial for Phase III)
- Conditions, jobs, ...:
  - 110 conditions in total
  - 1 million jobs
  - 10 TB of produced data
  - 200 TB transferred from CERN
  - 500 MSI2k hours of CPU
- To end by 30 September

Structure of event production in Phase II
[Diagram: central servers handle master-job submission, the Job Optimizer (N sub-jobs), the RB, the file catalogue, process monitoring and control, and SEs; sub-jobs run on CEs both natively and through the AliEn-LCG interface (RB); underlying-event input files are read from CERN CASTOR; output files (zip archives) are stored with a primary copy on local SEs and a backup copy in CERN CASTOR; outputs are registered in the AliEn FC, and for LCG SEs the LCG LFN = AliEn PFN, via edg(lcg) copy&register]
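The first step of the scheme above, the Job Optimizer splitting a master job into N sub-jobs, can be sketched as follows. This is a toy round-robin splitter with invented names, not the AliEn optimizer itself:

```python
def split_master_job(input_lfns, n_subjobs):
    """Toy Job Optimizer: distribute a master job's input LFNs
    round-robin over N sub-jobs, each of which would then be
    dispatched to a CE by the central servers."""
    subjobs = [[] for _ in range(n_subjobs)]
    for k, lfn in enumerate(input_lfns):
        subjobs[k % n_subjobs].append(lfn)
    return subjobs
```

A real optimizer would also weigh file locality, so that sub-jobs land close to the SEs holding their inputs.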

[Diagram: end-to-end path from the user interface through application, middleware and shell layers]

Structure of analysis in Phase III
[Diagram: a user query goes to the central servers (master-job submission, Job Optimizer with N sub-jobs, RB, file catalogue, process monitoring and control, SEs); the job splitter queries the metadata and file catalogues, turning the query into LFNs and then PFNs (for LCG SEs, PFN = LCG LFN; otherwise PFN = AliEn PFN); sub-jobs run on CEs, natively and through the AliEn-LCG interface, reading input files from the local SEs holding the primary copies]
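The query path above (user query, LFNs from the metadata catalogue, PFNs from the file catalogue) can be sketched with plain dictionaries. An illustrative toy, not the AliEn catalogue API:

```python
def resolve_query(metadata_catalogue, file_catalogue, query):
    """Toy Phase III resolution: select the LFNs whose metadata match
    the user query, then map each LFN to its PFN for the sub-jobs."""
    lfns = [lfn for lfn, meta in metadata_catalogue.items()
            if all(meta.get(k) == v for k, v in query.items())]
    return {lfn: file_catalogue[lfn] for lfn in lfns}
```

The job splitter would then group the resulting PFNs by site so that each sub-job reads from its local SE.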

Phase III – execution strategy
- Very labour intensive
  - The status of the LCG DMS is not brilliant
  - It does not "leverage" the (excellent!) work done in ARDA
  - So... why not do it with gLite?
- Advantages
  - Uniform configuration: gLite on EGEE/LCG-managed sites and on ALICE-managed sites
  - If we have to go that way, the sooner the better
  - AliEn is anyway "frozen", as all the developers are working on gLite/ARDA
- Disadvantages
  - It may introduce a delay with respect to using the present, available AliEn/LCG configuration
  - But we believe it will pay off in the medium term
- The PEB accepted to provide us with limited support for this exercise
  - Provided it does not hinder the EGEE release plans

New Phase III – layout
[Diagram: a user query goes to the server and catalog; the catalog maps LFNs onto gLite/A, gLite/E and gLite/L CE/SEs where the work runs]

Core Computing and Software organisation
[Organigram:]
- Production Environment Coordination: production environment (simulation, reconstruction & analysis), distributed computing environment, database organisation
- Framework & Infrastructure Coordination: framework development (simulation, reconstruction & analysis), persistency technology, computing data challenges, industrial joint projects, technology tracking, documentation
- Simulation Coordination: detector simulation, physics simulation, physics validation, GEANT 4 integration, FLUKA integration, radiation studies, geometrical modeller
- Reconstruction & Physics Software Coordination: tracking, detector reconstruction, global reconstruction, analysis tools, analysis algorithms, physics data challenges, calibration & alignment algorithms
- Offline Coordination (Deputy PL): resource planning, relations with funding agencies, relations with the C-RRB
- Offline Board (chair: Computing Coordinator), connected to the International Computing Board, Management Board, Regional Tiers, Software Projects, Detector Projects, DAQ, HLT, EU Grid coordination, US Grid coordination and LCG (SC2, PEB, GDB, POB)

Core Computing staffing

Computing Project
[Diagram: the Computing Project comprises Core Computing (core software, infrastructure & services, offline coordination, central support, with M&O-A funding) alongside subdetector software (detector projects) and physics analysis software (physics working groups)]

Offline activities in the other projects
[Diagram of relative sizes: CERN Core Offline (~7-10), Extended Core Offline (~20), offline people across the whole collaboration (~500?)]

- Offline Board Chair: F. Carminati
- Cosmic Ray Telescope (CRT): A. Fernández
- Electromagnetic Calorimeter (EMCAL): G. Odiniek, M. Horner
- Forward Multiplicity Detector (FMD): A. Maevskaya
- Inner Tracking System (ITS): R. Barbera, M. Masera
- Muon Spectrometer (MUON): A. DeFalco, G. Martinez
- Photon Spectrometer (PHOS): Y. Schutz
- Photon Multiplicity Detector (PMD): B. Nandi
- High Momentum Particle ID (HMPID): D. DiBari
- T0 Detector (START): A. Maevskaya
- Time of Flight (TOF): A. DeCaro, G. Valenti
- Time Projection Chamber (TPC): M. Kowalski, M. Ivanov
- Transition Radiation Detector (TRD): C. Blume, A. Sandoval
- V0 detector (VZERO): B. Cheynis
- Zero Degree Calorimeter (ZDC): E. Scomparin
- Detector Construction DB: W. Peryt
- ROOT: R. Brun, F. Rademakers
- Core Offline: P. Buncic, A. Morsch, F. Rademakers, K. Safarik
- Web & VMC: CEADEN
- EU Grid coordination: P. Cerello
- US Grid coordination: L. Pinsky

What do we have to do by end 2005
- Alignment & calibration
- Change of MC
- Integration with HLT
- Control of AliRoot evolution
- Development of the analysis environment
- Development of visualisation
- Revision of detector geometry and simulation
- Migration to the new Grid software
- Physics and computing challenge 2005
- Project structure & staffing
- Organisation of computing resources
- Writing of the Computing TDR

ALICE Physics Data Challenges
Period (milestone), fraction of the final capacity, physics objective:
- 06/01-12/01, 1%: pp studies, reconstruction of TPC and ITS
- 06/02-12/02, 5%: first test of the complete chain from simulation to reconstruction for the PPR; simple analysis tools; digits in ROOT format
- 01/04-06/04, 10%: complete chain used for trigger studies; prototype of the analysis tools; comparison with parameterised MonteCarlo; simulated raw data
- 05/05-07/05 (NEW), TBD: test of the condition infrastructure and FLUKA; test of gLite and CASTOR; speed test of distributing data from CERN
- 01/06-06/06, 20%: test of the final system for reconstruction and analysis

ALICE Offline timeline
[Timeline: CDC 04 and PDC04, followed by analysis of PDC04; design and development of the new components; CDC 05; pre-challenge '06 and PDC06 preparation; "AliRoot ready" milestones; Computing TDR; PDC06; final development of AliRoot; first data-taking preparation; marker: "nous sommes ici" (we are here)]