1-2 March 2006 P. Capiluppi INFN Tier1 for the LHC Experiments: ALICE, ATLAS, CMS, LHCb

2 P. Capiluppi 1-2 March 2006 LHC: role of a Tier1 (a short reminder)
- Custody of a fraction (1/#Tier1s) of the Raw & Reconstructed (ESD, aka RECO, aka DST) data (a back-of-envelope sketch of what this means in storage terms follows after this list)
- Full set of AODs
- Reprocessing of data (re-reconstruction)
- Skimming and selection of data (large analysis jobs by Physics Groups) {ALICE, ATLAS, CMS}
- User analyses {LHCb}
- Distribution of data to the Tier2s
- Many services needed for that (SLA):
  - Accounting (fair share of resources)
  - Permanent storage
  - Data access, location and distribution
  - User access
  - Job tracking and monitoring
  - Availability 24x7, etc.
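To make the custodial role more concrete, here is a minimal back-of-envelope sketch of what "a fraction (1/#Tier1s) of RAW and ESD plus the full AOD set" amounts to in storage. All numbers below (number of Tier1s, yearly data volumes) are illustrative assumptions, not figures from this talk or from the Computing TDRs.

```python
# Back-of-envelope estimate of a Tier1's custodial storage share.
# All numbers are illustrative assumptions, NOT figures from the slides.

N_TIER1S = 6                 # assumed number of Tier1s sharing custody for one experiment
RAW_TB_PER_YEAR = 1000.0     # assumed yearly RAW volume for the experiment (TB)
ESD_TB_PER_YEAR = 500.0      # assumed yearly ESD/RECO/DST volume (TB)
AOD_TB_PER_YEAR = 100.0      # assumed yearly AOD volume (TB)

def tier1_custodial_share(n_tier1s, raw_tb, esd_tb, aod_tb):
    """RAW and ESD are split 1/#Tier1s; every Tier1 keeps the full AOD set."""
    return (raw_tb + esd_tb) / n_tier1s + aod_tb

if __name__ == "__main__":
    share = tier1_custodial_share(N_TIER1S, RAW_TB_PER_YEAR,
                                  ESD_TB_PER_YEAR, AOD_TB_PER_YEAR)
    print(f"Custodial share for one experiment: ~{share:.0f} TB/year")
```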

3 P. Capiluppi 1-2 March 2006 The LHC (shared) Tier1-CNAF

4 P. Capiluppi 1-2 March 2006 Usage so far of the Tier1-CNAF by the Italian LHC community
- Simulation & reconstruction
- Data & Computing Challenges
- WLCG Service Challenges
- Analysis of simulated data
- Custody of much of the produced data (simulated and real, e.g. test-beam and cosmic data)
- Tests of many "new" functionalities
  - Both "Grid-like" and "Experiment-specific"
- LHC is (still) not running, therefore:
  - LHC Experiments' activities come in spikes
  - The role (and use) of the Tier1 is (still) not that of the Computing TDRs

5 P. Capiluppi 1-2 March 2006 Use of Tier1 (Grid only)
[Plots: total CPU time per experiment (ALICE, ATLAS, CMS, LHCb), Nov 2005; LHC jobs/day, Dec 2005 - Feb 2006]

6 P. Capiluppi 1-2 March 2006 Analysis: CMS-CRAB Monitor
[Plots: submitted jobs, submission sites, and destinations of jobs]

7 P. Capiluppi 1-2 March 2006 ALICE Simulation jobs at CNAF Tier1
[Plot: done jobs for Pb-Pb events, Jan 27 - Feb 10]

8 P. Capiluppi 1-2 March 2006 WLCG SC3 rates CNAF

9 P. Capiluppi 1-2 March 2006 Services: really "a number" of…
- CEs, SEs, RBs, UIs, VOMS, Information Systems, Catalogs, etc. And…
- Mass Storage system, disk file systems, LAN configuration, database services, accounting, monitoring, compilers, libraries, experiments' software and libraries, shared caches, etc.
- Most of them are there, however (a toy availability probe for such services is sketched after this list):
- Integration of WLCG with the specific needs of the experiments might be a problem
- The INFN Tier1 is part of the WLCG, INFN-Grid and EGEE programmes: mostly integrated, but…
- Castor(2) is quite new (and still evolving), but is a key element of the Tier1: support by and collaboration with CERN?
- File transfer from/to the Tier0, the Tier1s and the Tier2s is still largely untested (SC4)
- Publication of data (files, or better "datasets") needs strong interaction with the experiments for implementation at the Tier1
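Keeping such a list of services usable around the clock implies some routine reachability checking. The snippet below is only a toy sketch of such a probe under the assumption of plain TCP endpoints; the host names and ports are hypothetical placeholders, and a real Tier1 would rely on the SAM/monitoring infrastructure rather than bare socket checks.

```python
# Toy availability probe: check that a few (hypothetical) service endpoints
# accept TCP connections.  Host names and ports are placeholders only.
import socket

SERVICES = {
    "CE (Computing Element)": ("ce.example-t1.infn.it", 2119),
    "SE (Storage Element)":   ("se.example-t1.infn.it", 8443),
    "RB (Resource Broker)":   ("rb.example-t1.infn.it", 7772),
    "Catalog":                ("lfc.example-t1.infn.it", 5010),
}

def is_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "UP" if is_reachable(host, port) else "DOWN"
        print(f"{name:24s} {host}:{port:<5d} {status}")
```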

10 P. Capiluppi 1-2 March 2006 Services: really "a number" of… Storage, Storage, Storage (and CPUs)
And in addition:
- WLCG Service Challenge 4?
  - "Production" vs "Challenge"
  - Duplication of effort?
- Partitioning the Tier1?
  - For different scopes, different experiments, and also different needs within an experiment?
  - Is it needed, desirable, possible? And if yes, how?

11 P. Capiluppi 1-2 March 2006 Supporting the Experiments
- Single shared farm
- Same file system
- Experiment-dedicated Storage Elements
- Common services (WLCG)
Good, but:
- Too many Experiments?
  - Is integration with WLCG enough? And the non-WLCG Experiments?
  - Also, compatibility with the other Experiments' Tier1s
  - Problem solving and competition for resources
- Specific services needed by Experiments (maybe temporarily)
  - How to manage them? Procedures?
- User support
  - Accounts (on UIs)
  - Interactive access (development machines)
  - Dedicated queues for particular purposes
  - Etc.

12 P. Capiluppi 1-2 March 2006 And … Experiments supporting the Tier1
- LHC Experiments' personnel are actively working with the Tier1 personnel on:
  - Disk storage file-system performance testing {LHCb, CMS, ALICE, …}
    - GPFS, dCache, parallel file systems, … (see the throughput sketch after this list)
  - File transfer tools {CMS, ALICE, …}
    - FTS, …
  - Catalogs {ALICE, ATLAS, CMS, …}
    - Database services usage and implementation (Oracle, MySQL, Squid, Frontier, …)
  - VO-Boxes {ALICE, ATLAS, CMS, …}
    - Harmonization with the common services
  - WLCG Service Challenges {CMS, ALICE, ATLAS, …}
    - Service implementation and running
  - WMS (Workload Management System) {ATLAS, ALICE, CMS, …}
    - Feedback on performance and setup
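As an illustration of the file-system performance testing mentioned above, the sketch below times a simple sequential write and read on a mounted file system. The mount point and file size are assumptions; the actual tests at the Tier1 used dedicated benchmarking tools and many concurrent clients.

```python
# Minimal sketch of a sequential write/read throughput test on a mounted
# file system (e.g. a GPFS area).  Mount point and size are assumptions.
import os
import time

TEST_DIR = "/gpfs/test"          # hypothetical mount point under test
FILE_SIZE_MB = 1024              # assumed test file size
BLOCK = b"\0" * (1024 * 1024)    # 1 MiB write block

def write_read_throughput(path, size_mb):
    fname = os.path.join(path, "throughput_test.dat")
    t0 = time.time()
    with open(fname, "wb") as f:
        for _ in range(size_mb):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())           # make sure data actually hit the file system
    write_mbps = size_mb / (time.time() - t0)

    t0 = time.time()
    with open(fname, "rb") as f:
        while f.read(1024 * 1024):     # sequential read back, 1 MiB at a time
            pass
    read_mbps = size_mb / (time.time() - t0)

    os.remove(fname)
    return write_mbps, read_mbps

if __name__ == "__main__":
    w, r = write_read_throughput(TEST_DIR, FILE_SIZE_MB)
    print(f"write: {w:.1f} MB/s   read: {r:.1f} MB/s")
```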

13 P. Capiluppi 1-2 March 2006 WMS performance tests: ATLAS/LCG/EGEE Task Force
[Table: Submission, Match-making and Overall times (sec/job) for gLite and for LCG, for three job types: simple hello world; simple hello world with CE requirement; ls with a 48 KB inputsandbox (partially shared). Numeric values not recoverable here.]
- gLite
  - The observable effect comes from the number of matched CEs
  - The inputsandbox effect on submission is not yet fully understood from the data in the table
- LCG
  - Match-making takes place right after submission
  - No observable effect from the number of matched CEs
  - Submission of a job with an inputsandbox is about 2 times slower than a simple hello world job (a sketch of a timing harness for such measurements follows below)
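The submission times above were obtained with the WMS clients of the time; the sketch below only illustrates the kind of timing harness one could wrap around a submission command. The command name and JDL file are assumptions and should be replaced with the actual client and job description used in the tests.

```python
# Minimal sketch of a submission-time measurement: wrap a grid submission
# command in a wall-clock timer and average over a few jobs.
# The command and JDL file below are assumptions, not the exact test setup.
import subprocess
import time

SUBMIT_CMD = ["glite-job-submit", "hello.jdl"]   # assumed submission command and JDL
N_JOBS = 10

def time_submissions(cmd, n_jobs):
    """Run the submission command n_jobs times and return per-job wall-clock times."""
    timings = []
    for _ in range(n_jobs):
        t0 = time.time()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.time() - t0)
    return timings

if __name__ == "__main__":
    t = time_submissions(SUBMIT_CMD, N_JOBS)
    print(f"mean submission time: {sum(t)/len(t):.1f} s/job over {len(t)} jobs")
```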

14 P. Capiluppi 1-2 March 2006 What we would like to have now … and is (still) missing
- Storage accounting and quotas
  - For the use of the Experiments (runs or datasets, not files)
- Job priority (the Tier1 has to guarantee the agreed sharing; a toy fair-share sketch follows below)
(Both functions urgently needed)
- Catalogs
  - Databases for data access: common and experiment-specific
- Experiments support: too few people
  - Less than about one person per (LHC) experiment
- Transparent access to heterogeneous hardware
- Link and coordination with the Experiments is a KEY ISSUE
  - Operations
  - Planning
  - Clear interfaces and contacts (for every issue)
  - Testing & integration of Experiment-specific software
    - How, when and if possible (decision-making process)
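As a toy illustration of the job-priority / fair-share function requested above, the sketch below derives a priority factor per experiment by comparing an agreed share with recent usage. The shares and usage numbers are invented for illustration; production schedulers (e.g. MAUI or LSF fair-share) use decayed historical usage and more elaborate formulas.

```python
# Toy fair-share calculation: boost experiments that used less than their
# agreed share, damp those that used more.  All numbers are invented.

AGREED_SHARE = {"ALICE": 0.20, "ATLAS": 0.35, "CMS": 0.35, "LHCb": 0.10}   # assumed shares
RECENT_USAGE = {"ALICE": 3000, "ATLAS": 9000, "CMS": 5000, "LHCb": 500}    # assumed CPU-hours

def priority_factors(shares, usage):
    """Return agreed share divided by actual usage fraction, per experiment."""
    total = sum(usage.values())
    factors = {}
    for exp, share in shares.items():
        used_fraction = usage.get(exp, 0) / total if total else 0.0
        factors[exp] = share / used_fraction if used_fraction > 0 else float("inf")
    return factors

if __name__ == "__main__":
    for exp, f in sorted(priority_factors(AGREED_SHARE, RECENT_USAGE).items()):
        print(f"{exp:6s} priority factor: {f:.2f}")
```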

15 P. Capiluppi 1-2 March 2006 Conclusions
A Tier1 already working for the LHC Experiments
- With reasonable satisfaction
  - We got a lot of support, both common and specific, from very technically competent personnel
- However, we are very worried that
  - The Tier1 is largely understaffed
  - and also has very few senior personnel for management
    - consequently, a lack of personnel dedicated to Experiments support
  - However, experiments' people from outside sites are already supporting the Tier1
- Organization of the link/interaction with the experiments
  - Must be improved
  - To guarantee the experiments' commitments and the Tier1 running
- In addition
  - A few (many?) other areas need urgent investment
    - Storage access and use
    - User interaction with the Centre
    - Procedures for interventions (emergency and routine)
    - Means for notification of events (of any kind, not only for Italy)
Last but not least: hardware procurement