What is expected from ALICE during CCRC’08 in February.

2 CCRC’08 in February
Duration: 26 days, Monday 04/02 to Friday 29/02 inclusive
Scope:
- Whatever the experiments’ computing models require
- A computing exercise, not necessarily detector data from the experimental setup with reconstruction etc.
- Should exercise the infrastructure and services
- Should show that the WLCG is ready/not ready for data taking

3 CCRC’08 in February (2)
Everybody (including ALICE) is focusing on data management:
- For the other experiments, this is mostly SRM v2, LCG-utils (including GFAL) and FTS
- For ALICE, this is FTD with FTS and xrootd-enabled storage
Status of site configuration for SRM v2.2 (a bookkeeping sketch follows this slide):
- Has the site upgraded to SRM v2.2 in production? (Y|N)
- Has space management been turned on? (Y|N)
- Are endpoints correctly published? (Y|N)
- Are spaces correctly configured per token & VO? (Y|N)
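To make the checklist concrete, here is a minimal bookkeeping sketch in Python. It is illustrative only, not an official CCRC’08 tool: the class, field names and site entries are my own. Each checklist item is a yes/no flag, and a site counts as ready only when all four are satisfied.

# Illustrative sketch, not an official CCRC'08 tool: the per-site SRM v2.2
# readiness checklist from this slide, modelled as four yes/no flags.
from dataclasses import dataclass

@dataclass
class SrmSiteStatus:
    site: str
    upgraded_to_v22: bool      # SRM v2.2 in production?
    space_management_on: bool  # space management turned on?
    endpoints_published: bool  # endpoints correctly published?
    spaces_configured: bool    # spaces configured per token & VO?

    def ready(self) -> bool:
        # A site is ready only if every checklist item is 'Y'.
        return all((self.upgraded_to_v22, self.space_management_on,
                    self.endpoints_published, self.spaces_configured))

# Hypothetical entries; the real status was collected per site.
sites = [
    SrmSiteStatus("CERN", True, True, True, True),
    SrmSiteStatus("ExampleT1", True, True, False, False),
]
for s in sites:
    print(f"{s.site}: {'ready' if s.ready() else 'NOT ready'}")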

4 Data paths and rates
[Slide diagram: RAW data flow from the ALICE DAQ through CASTOR2 at CERN to the T1s, plus conditions data and T2 activities]
- ALICE DAQ → CASTOR2 (CERN): data rate from DAQ max 1.5 GB/s; the RAW data copy is custodial
- Pass 1 reconstruction at CERN: max 1.5 GB/s read access from CASTOR2 (RAW), max 150 MB/s write access (ESDs)
- FTS: max 60 MB/s in total for replication of RAW data and pass 1 reconstructed ESDs; pass 2 reconstruction at the T1 sites
- Shuttle gathers conditions data (OCDB) from DAQ, HLT and DCS; condition objects are published in the Grid FC, stored in Grid SEs and replicated to T1s (small volume)
- Tier 2: simulation and analysis
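As a quick sanity check of these numbers (my own arithmetic, not from the slide), a sustained 60 MB/s FTS stream over a 25-day exercise amounts to roughly the 130 TB total transfer volume quoted two slides later:

# Sanity check (my arithmetic): 60 MB/s sustained over 25 days.
SECONDS_PER_DAY = 86_400
fts_rate_mb_s = 60   # max FTS rate out of CERN quoted on this slide
days = 25            # transfer days quoted on slide 6

total_tb = fts_rate_mb_s * SECONDS_PER_DAY * days / 1e6  # MB -> TB
print(f"{fts_rate_mb_s} MB/s for {days} days = {total_tb:.0f} TB")  # ~130 TB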

5 What is expected from ALICE
- A steady stream of data out of CERN at a rate of 60 MB/s (FTS)
- This will be partly RAW data, partly padding data to arrive at the above rate
- During the December exercise, the total RAW data written to CASTOR2 and replicated was 20 TB (in 11 days)
- For February, we should expect some 60 TB of RAW data
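For context (again my own arithmetic, not from the slide), the December figures correspond to an average replication rate of about 21 MB/s, well below the 60 MB/s February target, which is why padding data is needed:

# Context (my arithmetic): average rate of the December exercise.
december_tb, december_days = 20, 11
avg_mb_s = december_tb * 1e6 / (december_days * 86_400)  # TB -> MB, per second
print(f"December average: {avg_mb_s:.0f} MB/s of a 60 MB/s target")  # ~21 MB/s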

6 What is expected from ALICE
- Total transfer volume is 130 TB (over 25 days)
- Rates and storage document here
- The ‘padding’ data should go to a recyclable tape pool – the sites are very protective of their tapes
- We will have 3 types of storage at all T1s (see the sketch after this slide):
  - T1D0
  - T0D1
  - T1D0 recyclable
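The storage classes use the TxDy notation: x tape copies, y disk copies. Below is an illustrative Python model of the routing rule implied by the slide; the function and its arguments are hypothetical, not an actual site configuration:

# Illustrative model (not an actual configuration) of the three storage
# classes requested at the T1s. Padding data goes to the recyclable tape
# pool so the sites can reclaim the media after the challenge.
STORAGE_CLASSES = {
    "T1D0":            {"tape": 1, "disk": 0, "recyclable": False},
    "T0D1":            {"tape": 0, "disk": 1, "recyclable": False},
    "T1D0 recyclable": {"tape": 1, "disk": 0, "recyclable": True},
}

def target_class(is_padding: bool, needs_disk_copy: bool) -> str:
    # Hypothetical routing rule: padding goes to recyclable tape.
    if is_padding:
        return "T1D0 recyclable"
    return "T0D1" if needs_disk_copy else "T1D0"

for name, spec in STORAGE_CLASSES.items():
    print(name, spec)
print(target_class(is_padding=True, needs_disk_copy=False))   # T1D0 recyclable
print(target_class(is_padding=False, needs_disk_copy=False))  # T1D0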

7 Services and criticality (CCRC, 04/12/2007)
ALICE Wiki: