Summary of Services for the MC Production
Patricia Méndez Lorenzo
WLCG T2 Workshop, CERN, 12th June 2006

Outline
 Main purpose
   Present the T2 infrastructure required by each experiment at the sites
 Content of the talk
   Summary of the T2 activities, experiment by experiment
   T1-T2 association
 Each experiment has provided different information
   The talk therefore does not follow the same structure for each experiment
   It is meant as an initial "draft", to be completed after the session

ALICE: Generalities
 Distribution of tasks per tier in the ALICE computing model
   T2s are responsible for MC simulation and analysis
   For ALICE, the only difference between a T1 and a T2 is the QoS

ALICE: MC Production on T2s
 Production extensively tested in two data challenges: PDC'05 and the ongoing PDC'06
 Standard setup: LCG/gLite with an ALICE VOBOX (as on the T1s)
   Deployed at all T1s and T2s
   At WMS level, T1s and T2s are treated identically
 All job submission to T2s goes through the Grid
   Installation of application software, including the simulation packages, is handled through the ALICE Grid tools
   Produced MC data are stored on the local SE and transferred for safekeeping to the host T1 (see the sketch below)
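A minimal sketch of that MC output flow, assuming the LCG data management CLI (lcg-cr, lcg-rep) is available on the worker node; the SE hostnames and LFN layout are illustrative placeholders, not ALICE's real ones:

```python
# Store the produced MC file on the local T2 SE, then replicate it to the
# host T1 for safekeeping. Hedged sketch: SE names and LFN paths are invented.
import subprocess

VO = "alice"
LOCAL_SE = "se.t2-example.org"      # hypothetical T2 storage element
HOST_T1_SE = "srm.t1-example.org"   # hypothetical host-T1 storage element

def store_and_ship(local_file, lfn):
    # Copy-and-register on the local T2 SE; lcg-cr prints the file's GUID.
    guid = subprocess.check_output(
        ["lcg-cr", "--vo", VO, "-d", LOCAL_SE, "-l", lfn, "file:" + local_file]
    ).decode().strip()
    # Replicate the registered file to the host T1 for custodial storage.
    subprocess.check_call(["lcg-rep", "--vo", VO, "-d", HOST_T1_SE, lfn])
    return guid

store_and_ship("/tmp/galice.root", "lfn:/grid/alice/mc/pdc06/run001/galice.root")
```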

ALICE: Specific T2 Requirements
 Large memory consumption per job: 2 GB maximum
 Job duration: typically 8 kSI2k hours
 Input data: a minimal set of configuration files
 Output data: up to 1.5 GB/job, typically ~300 MB
 The jobs are (naturally) CPU-intensive, with no stringent requirement on storage
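For illustration, the profile above expressed as a simple worker-node suitability check; real matchmaking is done by the WMS and the ALICE task queue, so this only wires together the numbers quoted on the slide:

```python
# ALICE MC job profile as quoted on the slide.
JOB_PROFILE = {
    "max_memory_mb": 2048,     # 2 GB maximum per job
    "cpu_ksi2k_hours": 8.0,    # typical job duration
    "max_output_gb": 1.5,      # up to 1.5 GB, typically ~300 MB
}

def wn_can_run(memory_mb, cpu_ksi2k, queue_limit_hours, scratch_gb):
    # Wall-clock estimate scales inversely with the node's kSI2k rating.
    wall_hours = JOB_PROFILE["cpu_ksi2k_hours"] / cpu_ksi2k
    return (memory_mb >= JOB_PROFILE["max_memory_mb"]
            and wall_hours <= queue_limit_hours
            and scratch_gb >= JOB_PROFILE["max_output_gb"])

# e.g. a 1.5 kSI2k core with 2 GB RAM, a 24 h queue and 10 GB of scratch:
print(wn_can_run(2048, 1.5, 24, 10))   # -> True
```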

ALICE: PDC'06 Tests of T2s
 Generally, ALICE is testing all elements of its computing model
 MC production: ongoing
 T2-T1 transfer (FTS) tests start in July 2006
   The T2-to-host-T1 association matrix is being built
   Installation of the FTS client infrastructure is ongoing
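A hedged sketch of one such T2-to-T1 transfer test, assuming the gLite FTS client commands (glite-transfer-submit, glite-transfer-status); the endpoint URL, SURLs and the exact set of terminal states are assumptions:

```python
# Submit one T2 -> host-T1 FTS transfer and poll it to completion.
import subprocess
import time

FTS = "https://fts.t1-example.org:8443/glite-data-transfer-fts/services/FileTransfer"
TERMINAL = {"Done", "Finished", "FinishedDirty", "Failed", "Canceled"}

def transfer(src_surl, dst_surl):
    job_id = subprocess.check_output(
        ["glite-transfer-submit", "-s", FTS, src_surl, dst_surl]
    ).decode().strip()
    while True:
        state = subprocess.check_output(
            ["glite-transfer-status", "-s", FTS, job_id]
        ).decode().strip()
        if state in TERMINAL:
            return state
        time.sleep(30)   # poll every 30 s until the job reaches a final state

print(transfer("srm://se.t2-example.org/alice/mc/run001/file1",
               "srm://srm.t1-example.org/alice/mc/run001/file1"))
```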

ALICE: T1-T2 Relations
 In the ALICE computing model there are no privileged relations between T1s and T2s
   Both types of sites run VOBOXes
   Both types of sites run a local LFC
 T1-T2 relations are defined in terms of storage and transfers
   MC data and AODs from analysis at a T2 are shipped to the closest T1 for custodial purposes
 In countries with a T1, the T2s refer to that T1
   France, Germany and Italy
 In countries without a T1, this role should be played by the site with the best bandwidth

ATLAS: Generalities
 Main activities performed at the T2s
   Run the data simulation
   Hold the AODs (on disk, for analysis) generated at the T0 and distributed among T1s and T2s
   Run the user analysis jobs
 ATLAS considers a hierarchical structure between T1s and T2s
   Defined by the generation and distribution of data
   Associated with the baseline services required for their production

ATLAS: Distribution of Data
 RAW data (T0)
   T1: part of the raw data, a fraction of the ESD and the full set of AODs
   Nominal AOD data rate: 20 MB/s
   T2: also receive AODs from the T1s
     Large T2s store full sets of AODs
     Small T2s share full sets
 Reprocessing of raw data (T1)
   Production of ESD, exchanged between T1s
   AODs distributed to the other T1s and T2s
   A rate of 20 MB/s to the T2s is expected
 Simulation (T2)
   Data transferred to the T1s for permanent storage
   Low upload rate from T2 to T1 (a few MB/s)

ATLAS: Baseline Services
 Baseline services on which the ATLAS infrastructure is built: FTS and LFC
 The ATLAS DDM uses these building blocks to define a hierarchical, distributed data cataloguing system (see the toy model below)
   Central dataset catalogues: information on datasets and their locations
   Local file catalogues: mapping of LFNs to PFNs
 FTS and LFC are managed by the T1s
   Each T1 provides services to a certain group of T2s
   This defines regions, each consisting of a T1 and its associated T2s
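A toy model of that two-level cataloguing: a central dataset catalogue mapping datasets to the sites holding them, plus per-region local file catalogues (the LFCs at the T1s) mapping LFNs to PFNs. Dataset names, LFNs and endpoints are invented for illustration:

```python
# Two-level lookup: dataset -> sites (central), then LFN -> PFN (local LFC).
CENTRAL_DATASET_CATALOGUE = {
    "mc06.005001.AOD": ["CNAF", "PIC"],   # sites holding a replica
}
LOCAL_FILE_CATALOGUES = {                 # one LFC per T1 region
    "CNAF": {"lfn:/grid/atlas/mc06/AOD._0001.pool.root":
             "srm://srm.cnaf-example.it/atlas/AOD._0001.pool.root"},
    "PIC":  {"lfn:/grid/atlas/mc06/AOD._0001.pool.root":
             "srm://srm.pic-example.es/atlas/AOD._0001.pool.root"},
}

def resolve(dataset, lfn):
    """Return every physical replica of one file of a dataset."""
    pfns = []
    for site in CENTRAL_DATASET_CATALOGUE.get(dataset, []):
        pfn = LOCAL_FILE_CATALOGUES.get(site, {}).get(lfn)
        if pfn:
            pfns.append(pfn)
    return pfns

print(resolve("mc06.005001.AOD", "lfn:/grid/atlas/mc06/AOD._0001.pool.root"))
```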

ATLAS: Definition of Regions
 Each T1 groups a certain set of T2s, to which it distributes AODs and from which it receives simulated data
   A fast and reliable network is required
 The T1 hosts the local LFC, which contains the entries for the files stored at its T2s
   Fast communication between jobs running at a T2 and the local catalogue at the T1
   Geographically close
 Matching of the CPU capacity of the T2s and the storage capacity of the T1
   Large countries will provide matching T1 and T2 capacities
   Small countries will provide either a T1 or a T2
 Handling of problems
   Fast human communication (not 12 hours of time difference)

List of Services Required at T2s during SC4
 No particular requirements besides an SRM-based SE
   General services provided by the Grid infrastructure: CE, SE, ...
   FTS servers are set up at the T1s
 Nominal rate to a T2: 20 MB/s sustained over 24 h (see the estimate below)
 Expertise in SE installation and maintenance
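A one-line estimate of what that nominal rate implies for the receiving T2 (my arithmetic, not a number from the slide):

```python
# 20 MB/s sustained over 24 h: the daily volume a T2 must be able to absorb.
rate_mb_s = 20
volume_gb_per_day = rate_mb_s * 24 * 3600 / 1024.0
print("%.1f GB/day" % volume_gb_per_day)   # -> 1687.5 GB/day, i.e. ~1.7 TB
```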

CMS: Simulation Production at T2s
 Simulation production
   Generation phase
     Small input configuration file, large output dataset
     High CPU activity
   Simulation, reconstruction
   Further iterations
     Take a set of input data and produce output data
     More similar to analysis
     I/O activity included
 Simulation at T2s
   Produced data are defined by the physics groups
   Individual user productions are also foreseen
     Transparent for the T2

CMS: Simulation Procedure
 Workflow (see the toy model below)
   Jobs are submitted centrally using a set of automatic tools
   Central request queue: the Production Manager
   Jobs are managed by the Production Agents
   Human support for possible failures is needed
 Dataflow
   Simulated data are stored at a T1
     Backup on tape
     Reprocessing and distribution to other T2s may be needed
   T2 worker nodes store the output locally
     For validation purposes
     Files are merged locally (I/O operations)
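A toy version of that pull model: a central request queue held by the Production Manager, drained by per-site Production Agents. The real CMS tools are far richer; this only illustrates the shape of the workflow, and all names are placeholders:

```python
# Central queue drained by site agents.
from collections import deque

class ProductionManager:
    """Holds the central request queue."""
    def __init__(self):
        self.queue = deque()
    def request(self, dataset, n_jobs):
        for i in range(n_jobs):
            self.queue.append((dataset, i))
    def next_job(self):
        return self.queue.popleft() if self.queue else None

class ProductionAgent:
    """Pulls jobs for one site and (here) just pretends to run them."""
    def __init__(self, site, manager):
        self.site, self.manager = site, manager
    def pull_and_run(self):
        job = self.manager.next_job()
        if job is not None:
            dataset, i = job
            print("%s: running job %d of %s" % (self.site, i, dataset))
        return job

pm = ProductionManager()
pm.request("minbias-mc-example", 3)   # hypothetical production request
agent = ProductionAgent("T2-example", pm)
while agent.pull_and_run() is not None:
    pass
```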

CMS: Requirements
 T2s will provide the disk and CPU to perform the simulation and the majority of the analysis
 CMS requires:
   Good behaviour: pass all SFTs
   A good, solid batch farm
   Good storage: size, performance and services
     SRM access
     FTS channels
       The FTS server is located at the T1, but good transfer performance is the responsibility of both ends
     Good I/O
   Good network
     1 Gbit/s for data movement (see the arithmetic below)
     Fast local access for read/write
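For scale, a quick arithmetic check of the 1 Gbit/s requirement (my numbers, not the slide's):

```python
# 1 Gbit/s expressed as a sustained byte rate and a daily volume.
link_gbit_s = 1.0
rate_mb_s = link_gbit_s * 1000 / 8            # = 125 MB/s
volume_tb_day = rate_mb_s * 24 * 3600 / 1e6   # ~10.8 TB/day at full use
print("%.0f MB/s, %.1f TB/day" % (rate_mb_s, volume_tb_day))
```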

CMS: Services
 CMS services
   CMS software distribution
   Integration of LFC and PhEDEx (on top of FTS) in the data management system
 Tasks of the T2s
   Ensuring the good behaviour of the site is the site's own task
     Good management of the jobs and fixing the software area cannot be done by the experiment
   An active and responsive T2 is desired
   Good communication with the experiment is fundamental

LHCb: Generalities
 T0
   Generation of raw data, reconstruction, stripping and user analysis
 T1
   Apart from real data taking, the same tasks as the T0
 T2
   Monte Carlo production (no analysis phase)
     At least in countries that also provide a T1
   T2s are considered PC farms with a small disk buffer
     The disk is used as a temporary cache until the data are transferred to the T1 (see the sketch below)
     A full copy is also kept at CERN
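A minimal sketch of that disk-buffer behaviour: output stays on local disk only until it has been shipped to both the host T1 and CERN, then the cache entry is released. The transfer call is a stub and the site names and paths are placeholders:

```python
# T2 disk buffer as a temporary cache: delete local output only after it has
# been shipped to every destination.
import os

DESTINATIONS = ["host-T1-example", "CERN"]   # a full copy is also kept at CERN

def transfer(path, site):
    # Placeholder for the real FTS/SRM transfer to `site`.
    print("shipping %s to %s" % (path, site))
    return True

def flush_buffer(buffer_dir):
    for name in os.listdir(buffer_dir):
        path = os.path.join(buffer_dir, name)
        if all(transfer(path, site) for site in DESTINATIONS):
            os.remove(path)   # the disk is only a cache, so free it

flush_buffer("/data/lhcb/mc-buffer")   # hypothetical buffer directory
```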

LHCb: Analysis at T1
 An LHCb analysis job consists of selecting the events stored at a T1 and focusing on a particular analysis channel
   Typical analysis jobs run on a 10^6 event sample
   Some analysis jobs will run on even larger event samples (10^7)
 The analysis input is stored in full at each T1
   The output can be processed at smaller sites
 Performing the analysis at the T1s appears faster and less expensive, in terms of hardware, infrastructure and staff resources, than running it at the T2s

LHCb: T2 Requirements
 In terms of software
   Well-known functionality, since T2s only perform MC production
   CE, SE, RB, GPBOX (policy enforcement mechanism)
 T2s are not technically critical for LHCb
   They only have to produce the required amount of MC data
 In terms of support
   Good performance of the T1s is fundamental
   The need for dedicated personnel cannot be set aside
   Infrastructural and organizational problems must be solved

T1-T2 Association
 During the Rome GDB the experiments were asked to provide the T2-T1 relationships
 The following slides give the (in some cases tentative) tables
 For some sites this is still an open question and an issue to be solved

ALICE
 CCIN2P3
   French T2
   Sejong (Korea)
   Lyon T2
   Madrid (Spain)
 CERN
   Cape Town (South Africa)
   Kolkata (India)
   T2 Federation (Romania)
   RMKI (Hungary)
   Athens (Greece)
   Slovakia
   T2 Federation (Poland)
   Wuhan (China)
 FZK
   FZU (Czech Republic)
   RDIG (Russia)
   GSI and Muenster (Germany)
 CNAF
   Tier-2 Federation
 RAL
   Birmingham
 SARA/NIKHEF
 PDSF
   Houston
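The same (tentative) associations as a lookup table, with a reverse lookup from T2 to host T1; the entries are transcribed from the slide above:

```python
# ALICE T1 -> associated T2s, as listed on the slide.
ALICE_T2S_FOR_T1 = {
    "CCIN2P3": ["French T2", "Sejong (Korea)", "Lyon T2", "Madrid (Spain)"],
    "CERN": ["Cape Town (South Africa)", "Kolkata (India)",
             "T2 Federation (Romania)", "RMKI (Hungary)", "Athens (Greece)",
             "Slovakia", "T2 Federation (Poland)", "Wuhan (China)"],
    "FZK": ["FZU (Czech Republic)", "RDIG (Russia)",
            "GSI and Muenster (Germany)"],
    "CNAF": ["Tier-2 Federation"],
    "RAL": ["Birmingham"],
    "SARA/NIKHEF": [],   # no T2 listed on the slide
    "PDSF": ["Houston"],
}

def host_t1(t2):
    # Reverse lookup: which T1 hosts the given T2?
    for t1, t2s in ALICE_T2S_FOR_T1.items():
        if t2 in t2s:
            return t1
    return None

print(host_t1("Birmingham"))   # -> RAL
```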

ATLAS
 BNL
   USA T2 Federation
 TRIUMF
   Canada T2 Federation
 NDGF
   Ljubljana
 PIC
   Spanish T2 Federation
   Portuguese Federation
 RAL
   T2s in the UK
 CNAF
   T2s in Italy

ATLAS (cont.)
 CCIN2P3
   T2s in France
   Romanian Federation
   Alternative data path for Beijing
   Alternative data path for Tokyo
 ASGC
   Melbourne
   Beijing
   Tokyo
 NIKHEF
   Russian Federation
   Israeli Federation
   Alternative path for NorthGrid and Prague

CMS
 CCIN2P3
   Belgium
   T2 France
 FZK
   German T2
   Poland
   Russia
   Switzerland
 CNAF
   Greece
   Hungary
   Italian T2
 PIC
   Portugal T2
   Spain T2

CMS (cont.)
 ASCC
   India
   Korea
   Pakistan
   Taiwan
 RAL
   Estonia
   T2 UK
 FNAL
   Brazil
   US
   China, Croatia and Finland to be confirmed

LHCb
 T2s are mapped to CERN and the T1s
 CNAF
   Italian T2
 RAL
   UK T2
 PIC
   Spain T2
 FZK
   German T2
   Poland
   Switzerland
 CERN
   Russia
 CCIN2P3
   France T2
   Bulgaria + sites west of the meridian line
 NIKHEF
   Netherlands + sites east of the meridian line

Summary
 All the experiments will run their MC production at the T2s
   Apart from LHCb, all of them will also run analysis there
   CMS foresees significant I/O activity
 Data will always be transferred to a T1 for storage
 All experiments require good catalogue and transfer services
   In this sense ALICE puts T2 performance at the T1 level
   ATLAS defines a more hierarchical structure
 The responsibility of the corresponding T1 is fundamental
 The T1-T2 association should be clarified as soon as possible for the countries without a T1