Slide 1: Experience running a distributed Tier-2 and an Analysis Facility infrastructure (Tier-3) for the ATLAS Experiment at IFIC - Valencia
Presented by: Santiago González de la Hoz (Santiago.Gonzalez@ific.uv.es), IFIC – Valencia (Spain)
Authors: S. González, G. Amorós, F. Fassi, A. Fernández, M. Kaci, A. Lamas, L. March, J. Sánchez, J. Salt

Santiago González de la Hoz, Third Conference of the EELA project, 3-5 December, Catania (Italy)

Slide 2: Contents
1) Introduction
2) Resources and Services (Spanish Tier-2 services)
3) Data Transfer and Management (Spanish Tier-2 services)
4) Monte Carlo Simulated Data Production (Spanish Tier-2 applications)
5) Applications: a) Job priorities, b) Distributed Analysis (Spanish Tier-2 applications)
6) Tier-3 prototype at IFIC-Valencia (Tier-3)
7) Conclusions

Slide 3: Introduction

Slide 4: Large Hadron Collider (LHC)
The LHC is a p-p collider: √s = 14 TeV and L = 10^34 cm^-2 s^-1 (10^35 in the high-luminosity phase).
There are 4 detectors:
- 2 general-purpose: ATLAS and CMS
- 1 for B physics: LHCb
- 1 for heavy ions: ALICE

Slide 5: A solution: Grid technologies
The offline computing:
- Output event rate: 200 Hz, ~10^9 events/year
- Average event size (raw data): 1.6 MB/event
Processing:
- 40,000 of today's fastest PCs
Storage:
- Raw data recording rate: 320 MB/s
- Accumulating at 5-8 PB/year
ATLAS Computing: Worldwide LHC Computing Grid (WLCG), ATLAS Data Challenge (DC), ATLAS Production System (ProdSys)
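The storage numbers on this slide can be cross-checked with simple arithmetic. The effective data-taking time per year (~10^7 s) is our assumption, not stated on the slide:

```python
# Back-of-the-envelope check of the offline computing numbers on this slide.
# LIVE_SECONDS_PER_YEAR (~1e7 s) is an assumed effective data-taking time.

EVENT_RATE_HZ = 200          # output event rate after the trigger
EVENT_SIZE_MB = 1.6          # average raw event size
LIVE_SECONDS_PER_YEAR = 1e7  # assumption: effective live time per year

events_per_year = EVENT_RATE_HZ * LIVE_SECONDS_PER_YEAR
raw_pb_per_year = events_per_year * EVENT_SIZE_MB / 1e9   # MB -> PB

print(f"events/year ~ {events_per_year:.1e}")   # order of 10^9
print(f"raw data    ~ {raw_pb_per_year:.1f} PB/year")
```

With this assumed live time the raw data alone is a few PB/year; the 5-8 PB/year on the slide also includes derived formats.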

Slide 6: The WLCG for ATLAS has three Grid flavours
The computing resources can be accessed by ATLAS physicists through Grid middleware components and services.
The Worldwide LHC Computing Grid (WLCG) project has to deploy and operate the Grid middleware for the LHC experiments, helping them operate Grid tools with their software base.
Three Grid flavours are deployed on the ATLAS computing resources:
- GRID3/OSG: USA
- NDG/ARC: Scandinavian countries + other countries
- LCG-2/EGEE: most European countries + Canada + Far East (Spanish Tier-2)
The ATLAS production and analysis systems are designed to be independent of any particular Grid flavour.
Grid deployments used by HEP ensure the highest possible degree of interoperability at the service and API level.

Slide 7: Tier-2 Functionalities
[Diagram: the LHC Tier hierarchy — Tier-1 centres (RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, TRIUMF, ...), Tier-2 centres (IFIC, UAM, IFAE, USC, NIKHEF, Krakow, Legnaro, Taipei, UB, IFCA, CIEMAT, MSU, Prague, Budapest, Cambridge, ...), Tier-3 small centres, desktops, portables]
Tier-2 functionalities:
- Services of disk storage for data files and databases
- To provide analysis capacity for the physics working groups
- To provide simulation according to the requirements of the experiments
- To provide network services for the interchange of data with Tier-1 centres
The Spanish ATLAS Tier-2:
- UAM: Universidad Autónoma de Madrid
- IFAE: Institut de Física d'Altes Energies de Barcelona
- IFIC: Institut de Física Corpuscular de València

Slide 8: Resources and Services

Slide 9: Computing resources

Equipment      Tier-2   IFAE   UAM   IFIC
CPU (kSI2k)    434      135    167   132
Storage (TB)   87       16     37    34 + 4.7 (tape front-end)

Strong increase of resources: the Spanish ATLAS Tier-2 assumes a contribution of 5% to the whole effort.

Slide 10: Network
Provided by the Spanish NREN, RedIRIS:
- Connection at 1 Gbps to the university backbone
- 10 Gbps among the RedIRIS POPs in Valencia, Madrid and Catalunya
ATLAS collaboration:
- More than 9 PB (> 10 million files) transferred among Tiers in the last 6 months
- The ATLAS link between the Tier-1 and its Tier-2s has to sustain 50 MB/s (400 Mbps) in a real data-taking scenario
Data transfer between the Spanish Tier-1 and Tier-2: one week (22-30 October)
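The two figures for the link requirement are the same number in different units (1 byte = 8 bits):

```python
# Unit check for the Tier-1 <-> Tier-2 link requirement quoted on this slide.
required_mb_per_s = 50            # megabytes per second
required_mbps = required_mb_per_s * 8  # megabits per second
print(required_mbps)  # 400
```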

Slide 11: Data Transfer and Management

Slide 12: ATLAS Distributed Data Management (DDM)
IFIC, and the Spanish Tier-2 as a whole, is participating in different ATLAS data management exercises.
A throughput of 5 MB/s was reached in both directions, from the Spanish Tier-1 to the Tier-2 and vice versa.
The data stored at the Spanish Tier-2 is monitored via a web site: http://ific.uv.es/atlas-t2-es

Slide 13: Monte Carlo Simulated Data Production

Slide 14: Production of Simulated Data
One of the main activities of the ATLAS Tier-2s. Monte Carlo production jobs run at the Spanish ATLAS Tier-2 inside the LCG/EGEE Grid flavour.
- Number of jobs done: Spanish Tier-2 ~85000; Collaboration ~330000 (2.55%)
- CPU wall time: Spanish Tier-2 ~22670 days; Collaboration ~820000 days (2.76%)

Slide 15: Statistics of the IFIC ProdSys instance
Monte Carlo production jobs were managed by an ATLAS Production System (ProdSys) instance installed and run at IFIC from January to September 2006. There were 8 instances for LCG/EGEE at that time.
Jobs produced: IFIC ProdSys instance ~62910; Collaboration ~393714 (16%)

Slide 16: Applications

Slide 17: Job Priorities
A mechanism to differentiate groups of Grid users based on VOMS groups and roles. IFIC is participating in the configuration and deployment of this tool.
The fair-share applied is: ATLAS 70%, other users 30%.
Within ATLAS (atlb:/atlas/Role):

Role                       Priority
normal user (atlas:atlas)  50%
production                 50%
software                   No FS (sporadic jobs to install software)
lcgadmin                   No FS (jobs with high priority)
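The two-level fair-share above can be illustrated with a small sketch. This is not the site's actual batch-system configuration; it only shows the idea that the scheduler favours the group whose recent usage is furthest below its target share:

```python
# Illustrative fair-share evaluation (our sketch, not the real scheduler config).
# Targets from the slide: ATLAS 70% / other users 30%; within ATLAS,
# normal users and production each get 50% of the ATLAS share.
targets = {
    ("atlas", "user"): 0.70 * 0.50,
    ("atlas", "production"): 0.70 * 0.50,
    ("other", "other"): 0.30,
}

def most_underserved(usage):
    """Pick the (vo, role) with the largest deficit: target share minus actual share."""
    total = sum(usage.values()) or 1.0
    deficit = {g: targets[g] - usage.get(g, 0.0) / total for g in targets}
    return max(deficit, key=deficit.get)

# Example: production has consumed most of the recent CPU time,
# so an ordinary ATLAS user's job is scheduled next.
recent_cpu_hours = {
    ("atlas", "user"): 100,
    ("atlas", "production"): 700,
    ("other", "other"): 200,
}
print(most_underserved(recent_cpu_hours))  # ('atlas', 'user')
```

Real batch schedulers apply such deficits with time decay and priority weights, but the comparison against target shares is the core of the mechanism.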

Slide 18: Distributed Analysis
Users at IFIC are doing physics analysis using GANGA on data distributed around the world (see F. Fassi's talk in this conference). This tool has been installed in our Tier-2 infrastructure.
GANGA is an easy-to-use front-end for job definition, management and submission. Users interact via:
- its own Python shell (command line)
- a Graphical User Interface (GUI)
In our case the jobs are sent to the LCG/EGEE Grid flavour. We are doing performance tests of:
- the LCG Resource Broker (RB)
- the gLite Workload Management System (WMS), the new RB from EGEE

Slide 19: Distributed Analysis (continued)
A job in GANGA is constructed from a set of building blocks:
- Software to be run (application): analysis algorithms, simulation programs, etc.
- Processing system (backend): GANGA allows trivial switching between testing on a local batch system (my PC) and large-scale processing on Grid resources.
GANGA usage since January 2007:
- 968 persons using this tool, 150 of whom belong to ATLAS (others include LHCb)
- ~275 users per month
Since September 2007: around 50k jobs.

Tier       0    1    2    3
Fraction   8%   37%  40%  15%
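The building-block structure can be sketched in plain Python. This is a mock of the idea, not GANGA's actual API; the class and field names below are ours:

```python
from dataclasses import dataclass, field

# Mock of GANGA's building-block idea: a job pairs an "application"
# (what to run) with a "backend" (where to run it), and moving from
# local testing to the Grid only means changing the backend.
# These class names are illustrative, not the real GANGA classes.

@dataclass
class Application:
    executable: str
    args: list = field(default_factory=list)

@dataclass
class Job:
    application: Application
    backend: str  # e.g. "Local" for my PC, "LCG" for the Grid

    def submit(self):
        return f"submitted {self.application.executable} to {self.backend}"

app = Application(executable="analysis.py", args=["--events", "1000"])

# Test locally first, then switch only the backend to go large-scale.
print(Job(app, backend="Local").submit())
print(Job(app, backend="LCG").submit())
```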

Slide 20: Tier-3 prototype at IFIC Valencia

Slide 21: ATLAS Data Model

Slide 22: ATLAS Analysis
The ATLAS analysis is divided into:
- scheduled central production
- user analysis
Scheduled analysis will be performed by ATLAS physics groups, preferentially at Tier-2 centres. Users from universities and institutes need some extra computing resources, so a local user analysis facility, called a Tier-3, is necessary.

Slide 23: Tier-3 prototype at IFIC
[Diagram: Tier-2 resources (CE, AFS, RB/dispatcher, worker nodes), extra Tier-2 worker nodes, and Tier-3 desktops/laptops plus interactive workers]
- Private User Interface
- Ways to submit our jobs to other Grid sites
- Tools to transfer data
- PROOF farm for interactive analysis on DPD
- Work with ATLAS software
- Use of final analysis tools (e.g. ROOT, PROOF)
- User disk space
- Tools (e.g. GANGA) and ATLAS software installed in the IFIC AFS area
- AOD private production for further analysis

Slide 24: Conclusions

Slide 25: Summary and Conclusions
The experience gained by running the Spanish distributed ATLAS Tier-2 is quite relevant and allows us to improve the efficiency of the various services provided to the Spanish physicist community as well as to the whole ATLAS collaboration.
At IFIC Valencia we are proposing a possible Tier-3 configuration and software setup that matches the requirements of DPD analysis as formulated by the ATLAS Analysis Model group.

Slide 26: Backup

Slide 27: ATLAS Data Model
Full and fast ATLAS detector simulation chain, using the ATHENA framework:
Event Generator (HepMC) → Geant4 Simulation (or ATLFAST for fast simulation) → Digitization → Reconstruction (physics data) → ESD & CBNT → AOD → DPD → histograms
AOD building and analysis run in ATHENA; DPD building and histogramming run in ROOT.
- ESD = Event Summary Data
- AOD = Analysis Object Data
- DPD = Derived Physics Data
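The reduction chain on this slide can be summarised as a small pipeline. The stage functions below are stand-ins for illustration, not real ATHENA algorithms:

```python
# Each stage consumes the previous format and produces a more compact one,
# mirroring the chain on this slide: raw -> ESD -> AOD -> DPD.
# These functions are illustrative stand-ins, not ATHENA code.

def reconstruct(raw_event):
    """Reconstruction: raw detector data -> Event Summary Data (ESD)."""
    return {"format": "ESD", "source": raw_event}

def build_aod(esd):
    """AOD building (runs in ATHENA): ESD -> Analysis Object Data."""
    return {"format": "AOD", "source": esd}

def build_dpd(aod):
    """DPD building (runs in ROOT): AOD -> Derived Physics Data."""
    return {"format": "DPD", "source": aod}

event = build_dpd(build_aod(reconstruct("raw")))
print(event["format"])  # DPD
```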

