
Evolution of the ATLAS data and computing model for a Tier-2 in the EGI infrastructure
Álvaro Fernández Casaní, on behalf of the IFIC ATLAS computing group
IFIC, ISGC2012, Taipei, 2nd March 2012
II PCI2010 Workshop, Valencia, 10th-12th January

Outline
– Introduction: IFIC and the Spanish ATLAS Tier-2
– Evolution of the ATLAS computing model:
  – Flattening the model to a mesh
  – Tier-2 duties
  – Data distribution: Tier-2 policy (data shares and site classification), availability and connectivity, networking
  – Dynamic data distribution and caching
  – Remote data access
– A real user example

ATLAS CENTERS

Introduction

SPAIN'S CONTRIBUTION TO ATLAS

Spanish ATLAS Tier-2

IFIC Computing Infrastructure Resources
EGI CSIC resources (not experiment resources), whose purpose is to migrate scientific applications to the Grid, are shared as follows:
– 25% IFIC users
– 25% CSIC
– 25% European Grid
– 25% Iberian Grid
These are in addition to the dedicated ATLAS resources.

Last year summary
– 3,579,606 jobs
– 6,038,754 CPU hours consumed
– 13,776,655 kSI2K-hours of normalised CPU time (i.e. an average normalisation factor of roughly 2.3 kSI2K per CPU-hour)
– Supporting 22 Virtual Organizations (VOs)

2012 forecast for IFIC
For 2012:
– We already fulfil the CPU requirements
– Disk is being increased by 230 TB: 4 SuperMicro servers of 57.6 TB each (2 TB disks)
(The slide also shows the pledge table by year, with CPU in HS06 and disk in TB.)
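A quick check of the capacity figure above, under the assumption that each of the four new chassis provides 57.6 TB of usable space:

```python
# Quick arithmetic check of the disk expansion quoted above: four SuperMicro
# chassis of 57.6 TB each (figures taken from the slide).
servers = 4
tb_per_server = 57.6
print(f"added capacity: {servers * tb_per_server:.1f} TB")  # -> 230.4 TB, matching the ~230 TB quoted
```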

EVOLUTION OF THE ATLAS COMPUTING MODEL

Previous ATLAS cloud model
Hierarchical model based on the MONARC network topology: clouds formed by a Tier-1 and its geographically related Tier-2s.
Allowed communications: T0-T1, T1-T1, intra-cloud T1-T2.
Forbidden communications: inter-cloud T1-T2, inter-cloud T2-T2.
(Source: Simone Campana, ATLAS Software & Computing Workshop, April 2011)

Shortcomings of cloud boundaries
Data flows were too strict and caused operational problems:
– Consolidation of user analysis outputs is problematic: analysis runs over many clouds, and consolidation needs to "hop" the data through the Tier-1s.
– Monte Carlo production must confine one task to one cloud, to facilitate output aggregation at the Tier-1.
– Replication of datasets (PD2P) is more inflexible: data must be replicated from the Tier-1 to Tier-2s of the same cloud, or "hop" through a Tier-1.
– Tier-2s cannot really be used as storage for "primary" data, because of issues in creating secondary copies at other Tier-1s.
– Tier-2 usability is limited by Tier-1 downtimes, due to the dependency on the LFC data catalogue.
More info: Simone Campana, ATLAS Software & Computing Workshop (April 2011)

Solving the issues
Make Tier-2 activities more independent and reduce service dependencies:
– Move the catalogue (LFC) to CERN, with backups in the US and the UK, and remove it from the Tier-1 (the LFC migration for the Spanish cloud is planned for this week!).
– Analysis jobs will be able to run during Tier-1 downtime.
– Production jobs can keep running during Tier-1 downtime.
Benefit from technology improvements:
– The network today does not really resemble the MONARC model: many Tier-2s are very well connected with many Tier-1s, while some Tier-2s are not that well connected with their own Tier-1.
– So it makes sense to break cloud boundaries.

Flattening the model to break cloud boundaries:
– Let DDM freely transfer from every site to every site
– Inter-cloud direct transfers
– Multi-cloud production
We are not quite there yet:
– Some links simply have limited bandwidth
– In those cases several hops will still be needed
Defining T2Ds is an attempt to break cloud boundaries "for the cases where it makes sense". The network is a key component in optimising the use of storage and CPU.

Categorisation of sites: T2D
T2Ds are the Tier-2 sites that can directly transfer files of any size (especially large files), not only from/to the Tier-1 of the cloud they are associated with, but also from/to the Tier-1s of other clouds.
T2Ds are candidates for:
– multi-cloud production sites
– primary replica repository sites (T2PRR)

ATLAS Tier-2 share revision
In 2010 many Tier-2s got full, so the Tier-2 data distribution shares were revised. The data distribution to Tier-2 (and Tier-3g) sites should take into account the network connectivity of the sites (thus T2Ds should receive more data) and their availability for analysis.
Introduced at the Software & Computing Workshop on 20 July 2011:
– It is up to ADC Operations to define the distribution policy among Tier-2s, giving preference to reliable sites for analysis (measured as the fraction of time with the analysis queue online) and to well-connected sites (T2Ds) that can transfer datasets quickly.
– Started in summer 2011, based on the T2D list defined on 1 July 2011.
– Simplified into 4 groups, each treated equally (a small classification sketch follows this slide):
  Alpha (60% share, 17 sites): T2Ds with > 90% reliability
  Bravo (30% share, 21 sites): non-T2Ds with > 90% reliability
  Charlie (10% share, 12 sites): any T2 with 80% < reliability < 90%
  Delta (0% share, 13 sites): any T2 with reliability < 80%
– One single list is used both for pre-placement and for dynamic data.
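Purely to make the grouping above concrete, here is a minimal sketch of the classification logic; the thresholds and shares are taken from the slide, while the function and argument names are illustrative and not part of any ATLAS ADC tool.

```python
# Hypothetical sketch of the alpha/bravo/charlie/delta classification
# described above; not actual ADC Operations code.
def classify_tier2(is_t2d, reliability):
    """reliability = fraction of time the analysis queue was online (0..1)."""
    if reliability > 0.90:
        return "alpha" if is_t2d else "bravo"   # 60% / 30% of the data share
    if reliability > 0.80:
        return "charlie"                        # 10% share
    return "delta"                              # 0% share

# Example: IFIC, a T2D with >90% reliability, lands in the alpha group.
print(classify_tier2(is_t2d=True, reliability=0.93))  # -> "alpha"
```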

ATLAS Tier-2

Spanish cloud data shares, last month
IFIC is an alpha T2D site, so it is also a candidate for receiving more datasets through PD2P (see later).

Availability (last month)
HammerCloud is the ATLAS Distributed Analysis testing system: it keeps jobs away from problematic sites and can automatically exclude a site if its test jobs do not pass.
EGI has its own, different availability tools, based on the ops VO, and the two can give conflicting results: last month IFIC was an ATLAS alpha site (> 90% availability) but had only 64% EGI ops availability due to a configuration issue.
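The auto-exclusion idea can be sketched as follows; this is not the real HammerCloud implementation, and the window size and 80% threshold are assumptions chosen only for illustration.

```python
# Illustrative auto-exclusion logic in the spirit of HammerCloud: a site whose
# recent functional-test jobs mostly fail is taken offline for analysis until
# tests succeed again. Not the real HammerCloud code.
def analysis_queue_state(recent_test_results, min_tests=5, success_threshold=0.8):
    """recent_test_results: True for each passed test job, False for failed."""
    if len(recent_test_results) < min_tests:
        return "online"  # not enough data to act on
    success_rate = sum(recent_test_results) / len(recent_test_results)
    return "online" if success_rate >= success_threshold else "auto-excluded"

print(analysis_queue_state([True, True, False, True, True]))      # online
print(analysis_queue_state([False, False, True, False, False]))   # auto-excluded
```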

Connectivity
Inter-cloud direct transfers (transfer monitoring shown on the slide).

Services
User Interfaces:
– UI00 (LVS-DR): UI04 (SL5, 64-bit), UI05 (SL4, 64-bit), UI06 (SL5, 64-bit)
Computing Elements (CREAM-CE) and Worker Nodes:
– WN (CE02): SL5 64-bit, gLite 3.2, with MPI and shared home in Lustre
– WN (CE03): SL5 64-bit, gLite 3.2 (Puppet)
– WN (CE05): SL5 64-bit, gLite 3.2 (Puppet)

Job distribution at Tier-2
Tier-2 usage in data processing:
– More job slots are used at Tier-2s than at the Tier-1; a large part of MC production and analysis is done at Tier-2s.
– More 'digi+reco' jobs and "group production" run at Tier-2s, with less weight at Tier-1s.
– Production shares are to be implemented to limit "group production" jobs at the Tier-1 and run them at Tier-2s; the analysis share is reduced at Tier-1s.
(Plots on the slide compare PIC Tier-1 and IFIC Tier-2 in 2012: more analysis jobs at the IFIC Tier-2.)

DATADISK at the Tier-2s
Tier-2 usage of disk space:
– T2 DATADISK ≈ T1 DATADISK in volume
– T2 DATADISK ≈ input for data processing (secondary replicas)
(Plots from February 2012 show, for T2_datadisk and T1_datadisk on a scale up to 20 PB, the total capacity (usable/margin), primary and secondary data, and the space used according to SRM and to dq2.)

For Spanish sites (February 2012)

IFIC storage resources
Based on Sun hardware:
– Sun X-series disk servers (X4540)
– Lustre v1.8
– New disk servers: SuperMicro, SAS disks of 2 TB each, 10 GbE connectivity
– SRMv2 (StoRM)
– 3 GridFTP servers

Storage resources: pledges

Storage resources

IFIC Lustre filesystems and pools
With Lustre release 1.8.1 we added pool capabilities to the installation. Pools allow us to partition the hardware inside a given filesystem:
– Better data management
– Assign specific OSTs to an application or group of users
– Can separate heterogeneous disks in the future
4 filesystems with various pools:
– /lustre/ific.uv.es: read-only on WNs and UIs, read-write on GridFTP + SRM
– /lustre/ific.uv.es/sw: software, read-write on WNs and UIs (ATLAS now uses CVMFS)
– /lustre/ific.uv.es/grid/atlas/t3: space for Tier-3 users, read-write on WNs and UIs
– /rhome (type lustre): shared home for users and MPI applications, read-write on WNs and UIs
Different T2/T3 ATLAS pools, separated from the other VOs, give better management and performance.

CERN CVMFS
CVMFS is a caching, HTTP-based read-only filesystem optimised for delivering experiment software to (virtual) machines.

CVMFS at IFIC (+ Squid)
– Installed on all our WNs and UIs since September 2011
– Easy installation (only 2 configuration files), 20 GB per repository, no dedicated partition needed
– Using the same Squid as Frontier (sq5.ific.uv.es); the Squid server points to the public replicas (CERN, BNL, RAL)
– Performance so far has been very good; we monitor Squid via Cacti (SNMP)
Since September 2011: reduced job setup time, better performance for analysis jobs, reduced load on the Lustre MDS.

StoRM SRM server
– Accesses the storage like a local file system, so it can create and control all the data available on disk through an SRM interface.
– Coordinates data transfers; the actual data streams are handled by a GridFTP server on another physical machine.
– Enforces authorisation policies defined by the site and the VO.
– We developed an authorisation plugin that respects the local file system, with the corresponding user mappings and ACLs.
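The behaviour of such a plugin can be illustrated conceptually; the sketch below is not the actual StoRM plugin (which runs inside StoRM itself), the DN-to-account mapping and names are hypothetical, and it only checks plain POSIX permission bits, whereas the real plugin also honours ACLs.

```python
# Conceptual sketch of an SRM authorisation decision that respects the local
# filesystem: map the grid identity to a local account, then check the POSIX
# permissions of the requested path for that account.
import os
import pwd
import stat

# Hypothetical mapping from grid certificate DN to local Unix account.
DN_TO_LOCAL_USER = {
    "/DC=es/DC=irisgrid/O=ific/CN=Some User": "atlasusr",
}

def write_allowed(dn, path):
    """True if the local account mapped to this grid DN may write to path."""
    account = DN_TO_LOCAL_USER.get(dn)
    if account is None:
        return False
    try:
        user = pwd.getpwnam(account)
        st = os.stat(path)
    except (KeyError, OSError):
        return False
    if st.st_uid == user.pw_uid:
        return bool(st.st_mode & stat.S_IWUSR)   # owner write bit
    if st.st_gid == user.pw_gid:
        return bool(st.st_mode & stat.S_IWGRP)   # group write bit
    return bool(st.st_mode & stat.S_IWOTH)       # world write bit
```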

IFIC network
– Cisco 4500: core centre infrastructure
– Cisco 6500: scientific computing infrastructure
– Data servers: Sun with 1 Gb connections (channel-bonding tests were made aggregating 2 channels); SuperMicro with 10 GbE
– WNs and GridFTP servers with 1 Gb / 10 GbE
– Data network based on Gigabit Ethernet, with a 10 GbE uplink to the backbone network; each data server can reach 1 Gbit
– Recent upgrade of the GridFTP servers to satisfy the requirements for alpha sites

Data Distribution: dynamic data distribution and caching

(Slide from an ATLAS presentation by Michael Ernst, BNL, at the LHCONE meeting, 12 May 2011, Washington.)

PanDA Dynamic Data Placement (PD2P)
During the first year of data taking the resource-utilisation policy evolved from "jobs go to data" to "data and jobs move to the available CPU resources", and a dynamic data placement approach was also employed.
Tier-1 algorithm:
– Primary copy at a Tier-1, based on planned data placement
– Secondary copy when data become popular, with the location based on pledges
Tier-2 algorithm (sketched below):
– Jobs submitted to PanDA trigger PD2P
– Popular datasets are replicated to the Tier-2 with the highest weight
More details: /PandaDynamicDataPlacement
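As described on the slide, the Tier-2 step boils down to picking a destination for a popular dataset. The toy sketch below illustrates that idea with an invented weight (group share, then free space); it is not the real PD2P brokerage formula, and the site list is made up.

```python
# Toy illustration of the Tier-2 step of PD2P described above: when user jobs
# hit a popular dataset, replicate it to the eligible Tier-2 with the highest
# weight. Weights and site data here are invented for illustration only.
def choose_pd2p_destination(sites):
    """sites: list of dicts like {'name', 'group', 'free_tb'}."""
    share = {"alpha": 0.6, "bravo": 0.3}          # groups eligible for extra data
    eligible = [s for s in sites if s["group"] in share and s["free_tb"] > 0]
    if not eligible:
        return None
    # Hypothetical weight: favour the higher-share group, break ties on free space.
    best = max(eligible, key=lambda s: (share[s["group"]], s["free_tb"]))
    return best["name"]

sites = [
    {"name": "IFIC-LCG2", "group": "alpha", "free_tb": 120},
    {"name": "OTHER-T2",  "group": "bravo", "free_tb": 300},
]
print(choose_pd2p_destination(sites))  # -> IFIC-LCG2
```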

A further step: caching
PD2P makes data movement more dynamic and user-driven, but dataset replication may still cause problems for users:
– Latency before the data can finally be accessed
– Complexity of the tools
– It still works on the idea of replicating whole datasets
Explore the possibility of accessing files, or parts of files, dynamically without explicit replication:
– Xrootd allows federating resources and redirecting the client to the data source
– Latencies can be shortened by caching in the Xrootd server
– To work well, event data I/O must be efficient, with minimal transactions between the application and the storage
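From the user side, remote reading through an Xrootd federation looks like the following minimal PyROOT sketch; the redirector host, file path and tree name are placeholders, not real ATLAS endpoints.

```python
# Minimal sketch of remote file access through an Xrootd federation: the client
# contacts a redirector, which points it at a site actually holding the file.
import ROOT

url = "root://some-federation-redirector.example.org//atlas/user/somedataset/file.root"
f = ROOT.TFile.Open(url)           # opened remotely, no explicit dataset replication
tree = f.Get("physics")            # read only the branches/events actually needed
print("entries:", tree.GetEntries())
f.Close()
```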

PD2P in the ES cloud, last month
ES-cloud Tier-2 sites are getting more datasets through PD2P, a bit more than the Tier-1 (PIC). The ES-cloud T2Ds are IFIC and IFAE.
(Plot labels on the slide: PIC, IFIC 49, ES 166.)

User Example

Example: Grid and physics analysis
Distributed computing and data management tools based on Grid technologies have been used by IFIC physicists to obtain their results. An example is the boosted-top candidate presented by M. Villaplana.

Distributed analysis in ATLAS
ATLAS has a dedicated system for Production and Distributed Analysis (PanDA):
– Covers all ATLAS requirements
– Highly automated, low manpower
– Unifies the different grid environments (EGI-gLite, OSG and EGI-ARC)
– Monitoring web pages
Reference:

Distributed analysis in ATLAS
For ATLAS users, Grid tools have been developed.
For data management:
– Don Quijote 2 (DQ2): dataset information (name, files, sites, number, ...); download and register files on the Grid, ...
– ATLAS Metadata Interface (AMI): data information (number of events, availability) and, for simulation, the generation parameters, ...
– Data Transfer Request (DaTRI): users can request that a set of data (datasets) be replicated to other sites (under restrictions)
For Grid jobs:
– PanDA client: tools from the PanDA team for submitting jobs in an easy way
– Ganga (Gaudi/Athena and Grid alliance): a job management tool for local, batch-system and Grid backends

Tier-2 and Tier-3 examples from Spain
At IFIC the Tier-3 resources are split into two parts:
– Resources coupled to the IFIC Tier-2 Grid environment, for use by IFIC ATLAS users; when these resources are idle they are used by the ATLAS community
– A computer farm for interactive analysis (PROOF) outside the Grid framework
Reference: ATL-SOFT-PROC

Daily user activity in distributed analysis
An example of a distributed analysis searching for heavy exotic particles; the input files and the workflow are shown on the slide.

Daily user activity in distributed analysis
1) A Python script is created where the requirements are defined: the application location, the input and output, a replica request to IFIC, and the splitting.
2) The script is executed with Ganga/PanDA and the Grid job is submitted (a sketch of such a script follows this slide).
3) When the job finishes successfully, the output files are copied to the IFIC Tier-3, for easy access by the user.
In just two weeks, the 6 users doing this analysis submitted jobs to 64 sites; 1032 jobs (2.89%) ran in the Spanish Tier-2. Input: 815 datasets; output: 1270 datasets.
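A minimal sketch of what such a Ganga script could look like, assuming the GangaAtlas constructs of that period (Athena application, DQ2 datasets, DQ2JobSplitter, PanDA backend); the dataset names, job options and exact attribute names are illustrative and may differ between Ganga releases.

```python
# Sketch of a Ganga job definition in the spirit of step 1 above.
# Runs inside a Ganga session (Job, Athena, etc. are provided by Ganga);
# dataset names and job options below are placeholders.
j = Job()
j.application = Athena()
j.application.option_file = ['MyAnalysis/share/myJobOptions.py']   # the user's analysis code

j.inputdata = DQ2Dataset()
j.inputdata.dataset = ['data11_7TeV.SomeStream.NTUP_TOP.e825_s1310_r2615/']  # placeholder

j.outputdata = DQ2OutputDataset()
j.outputdata.location = 'IFIC-LCG2_LOCALGROUPDISK'    # replica request towards IFIC

j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 200                           # splitting over the input files

j.backend = Panda()                                   # submission through PanDA
j.submit()
```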

User experience
PD2P is transparent to users, but eventually they notice it in action:
– "A job was finally sent to a destination that did not originally have the requested dataset."
– "I checked later and PD2P had copied my original dataset."
– "I only realised because I had used dq2 beforehand to check where the dataset was."
Another issue is that user datasets are not replicated:
– "We see a failure because it is not replicating our home-made D3PDs."

Summary

BACKUP SLIDES

A Tier-2 in ATLAS: main activities

Spanish Tier-2 numbers (more info in Pepe's slides)
The Spanish ATLAS Tier-2 (T2-ES) is a federation of 3 Spanish institutions (see Jose's talk):
– IFAE, Barcelona (25%)
– UAM, Madrid (25%)
– IFIC, Valencia (50%, coordinator)
T2-ES represents 5% of the ATLAS resources (among Tier-2s).
Reference: J. Phys. Conf. Ser.

Summary

Summary

Migration of Scientific Applications to the Grid at IFIC

Migration of Scientific Applications to the Grid at IFIC

LHCONE for ATLAS

NETWORK: Tier-2 network

For Spanish sites
(Plot of DATADISK at the Spanish Tier-2 sites: IFAE, IFIC and UAM.)

Availability
ES Tier-2 availability for ATLAS (from the auto-exclusion monitoring page, /autoexclusion/?cloud=20&site=).

Availability
HammerCloud:
– Distributed Analysis testing system
– Keeps jobs away from problematic sites
– Can exclude sites if test jobs are not passed
ATLAS grid tools are improving day by day; for instance, automatic jobs for merging output files.
ATLAS users can ask the Distributed Analysis Support Team (DAST) about problems with their jobs; this feedback is also useful for developers to improve the tools and services.

Distributed Analysis in ATLAS
References:

Analysis efficiency in September: ATLAS Tier-0 + Tier-1s (ANALY* queues)

Analysis efficiency in September: ATLAS Tier-2s (ANALY* queues)