DPM at ATLAS sites and testbeds in Italy


DPM at ATLAS sites and testbeds in Italy Alessandro De Salvo (INFN, Roma1), Alessandra Doria (INFN, Napoli), Elisabetta Vilucchi (INFN, Laboratori Nazionali di Frascati) 23/11/2016 A. Doria - DPM Workshop 2016 - Paris

DPM at ATLAS IT Sites
3 out of 4 ATLAS Tier2 sites in Italy have adopted DPM since 2007; the Tier1 at CNAF (Bologna) and the Milano Tier2 use StoRM with GPFS.
The site managers of the 3 DPM Tier2s (Frascati National Labs, Napoli and Roma) are involved in the DPM collaboration.
During the last year the ATLAS Tier3 in Cosenza deployed its storage system (~400 TB) with DPM.
[Map of Italian ATLAS sites: Milano, Bologna T1, Roma, Frascati, Napoli, Cosenza]

DPM setup at ATLAS IT Tier2s
The storage setup is similar at the three sites, with some peculiarities:
INFN FRASCATI: 1 PB, 11 pool nodes. Head node on a virtual machine and MySQL DB on a physical server; DB replication with a MySQL master/slave configuration.
INFN NAPOLI: 2 PB, 22 pool nodes. Head node and MySQL DB on the same physical server; daily DB backups with Percona XtraBackup.
INFN ROMA: 1.5 PB, 16 pool nodes. MySQL main replica on the DPM head node, slave replica on a different node, with backup.
All three sites run DPM 1.8.11, the latest release in EPEL.
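A master/slave setup like the ones at Frascati and Roma is only useful if the slave is actually keeping up, so sites typically watch the replication threads and lag. A minimal monitoring sketch, assuming the standard column names returned by MySQL's SHOW SLAVE STATUS (the threshold and the dict-based input are illustrative, not the sites' actual tooling):

```python
def replication_healthy(slave_status, max_lag_seconds=60):
    """Check a MySQL 'SHOW SLAVE STATUS' result, given as a dict
    mapping column name -> value.

    Returns True only if both replication threads are running and
    the slave lags at most max_lag_seconds behind the master.
    """
    if slave_status.get("Slave_IO_Running") != "Yes":
        return False
    if slave_status.get("Slave_SQL_Running") != "Yes":
        return False
    lag = slave_status.get("Seconds_Behind_Master")
    # Seconds_Behind_Master is NULL (None) when replication is broken.
    return lag is not None and int(lag) <= max_lag_seconds

# Example: a healthy slave 3 seconds behind its master.
status = {"Slave_IO_Running": "Yes",
          "Slave_SQL_Running": "Yes",
          "Seconds_Behind_Master": 3}
print(replication_healthy(status))  # True
```

In a real check the dict would come from a `SHOW SLAVE STATUS` query over a MySQL client connection; the logic above is the part that decides whether the head-node DB replica is safe to fail over to.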

Puppet + Foreman for DPM configuration
All 3 sites use Puppet with Foreman for DPM configuration (and for the installation and configuration of most or all site servers).
DPM Workshop 2015: parametrized Puppet modules were developed; the Puppet manifests released with DPM had to be adapted to handle parameters from Foreman.
New releases (starting with 1.8.11): the DPM developers released the puppet-dpm and puppet-dmlite modules, which make the configuration much easier and directly allow using both Foreman and Hiera as ENC.

Puppet + Foreman setup: Pros & Cons
Pros:
The Foreman GUI is used to set configuration parameters, with no file handling.
Reuse of generic classes, with the advantage of a hierarchical model.
The same tool is used for node installation and configuration.
Easier than before with the puppet-dpm module.
Cons:
The first approach to the tools is not very friendly.
Passwords are stored in clear text on the Foreman server.
Less awareness of configuration details, so it is harder to debug in case of problems.

EPEL testbeds in Italy
Each new DPM version is first released in the EPEL-testing repository by the DPM developers.
Two testbed installations:
Frascati: first installed in 2013; head node + 2 pool nodes (one physical and one virtual); an "official" EPEL testbed for DPM.
Roma: first instantiated at the end of April 2014, now migrated to oVirt + Ceph; a "private" testbed, but fully functional and re-installable very quickly.
Validated DPM 1.8.7, 1.8.8, 1.8.9, 1.8.10 and 1.8.11; the puppet-dpm module was validated this summer.

EPEL testing - DPM 1.9.0
The new release has several new features.
The upgrade of the Frascati testbed from DPM 1.8.11 to DPM 1.9.0 was smooth for the components already present in the old release. There are some more parameters to configure, via Foreman.
Some issues were found while testing the new DOME flavour.
Initial tests were performed with the new dpm-tester tool, which tries transfers with different protocols. Very useful!
We have started to test the quota tokens and the caching mechanisms.
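The idea behind a tool like dpm-tester, trying the same test transfer over each protocol the site exposes and reporting which ones work, can be sketched as follows. The protocol list and the stub transfer callable are illustrative assumptions, not the real tool's API:

```python
# Protocols a DPM site typically exposes (illustrative list).
PROTOCOLS = ["root", "https", "gsiftp", "srm"]

def probe_endpoints(host, transfer, protocols=PROTOCOLS):
    """Attempt a test transfer against `host` with every protocol.

    `transfer` is a callable taking a URL and raising on failure
    (a stand-in for the real per-protocol clients).  Returns a dict
    protocol -> True/False, so a site admin sees at a glance which
    access doors are broken.
    """
    results = {}
    for proto in protocols:
        url = f"{proto}://{host}/dpm/test-file"
        try:
            transfer(url)
            results[proto] = True
        except Exception:
            results[proto] = False
    return results

# Example with a fake transfer that only "supports" root and https:
def fake_transfer(url):
    if not url.startswith(("root://", "https://")):
        raise IOError("protocol not supported")

print(probe_endpoints("dpm.example.infn.it", fake_transfer))
# {'root': True, 'https': True, 'gsiftp': False, 'srm': False}
```

Collecting per-protocol pass/fail in one run is what made the tool convenient for validating the testbed upgrade, since a DOME-related problem on one door shows up without affecting the others.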


Ongoing and planned activities related to DPM
A project about storage access from external resources and about caching.
Tests with the HTTP Dynamic Federation.
These activities will also contribute to the WLCG Regional Federation Demonstrator, as presented at the November 2016 GDB.

Storage access from external resources
A project at INFN (joint ATLAS and CMS) about storage access by jobs running on cloud resources.
Different cloud providers and/or external resource owners:
Aruba
CloudItalia
Microsoft Azure ($20,000 grant now in place, collaboration starting now)
Different possibilities for data access:
GPFS over WAN (with AFM cache): already in place between CNAF and Bari, but needs some tuning for clouds; not general purpose, as it needs GPFS at both sites.
Storage federations: Xrootd and HTTPS/WebDAV.
Access to storage federations through a distributed cache.

Use case for DPM cache
We plan to set up testbeds with Microsoft Azure and CloudItalia, with cached data access: use of DPM in caching mode.
We also plan to evaluate:
the full in-cloud DPM setup;
the hybrid configuration, where the DPM head node is installed at INFN and DPM volatile pools are in the cloud, acting as caches for the jobs running in the cloud.
These are interesting solutions also in view of diskless sites.
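The caching mode described above, where a volatile pool serves local copies to cloud jobs and pulls a file from the remote SE on a miss, follows the usual read-through cache pattern. A minimal sketch, with a dict standing in for the volatile pool and a callable standing in for the Xrootd/HTTP pull (both assumptions for illustration, not the DPM implementation):

```python
class VolatilePoolCache:
    """Read-through cache: serve a file from the local (cloud-side)
    volatile pool when present, otherwise fetch it from the remote
    SE and keep a copy for subsequent jobs."""

    def __init__(self, fetch_remote):
        self._pool = {}             # stand-in for files on the volatile pool
        self._fetch = fetch_remote  # stand-in for the remote transfer
        self.hits = 0
        self.misses = 0

    def read(self, lfn):
        if lfn in self._pool:
            self.hits += 1          # served locally, no WAN traffic
            return self._pool[lfn]
        self.misses += 1
        data = self._fetch(lfn)     # pull from the site storage at INFN
        self._pool[lfn] = data      # cache for the next job in the cloud
        return data

cache = VolatilePoolCache(lambda lfn: f"contents of {lfn}")
cache.read("/atlas/data/file1")     # miss: fetched over the WAN
cache.read("/atlas/data/file1")     # hit: served from the volatile pool
print(cache.hits, cache.misses)     # 1 1
```

The hybrid layout in the slide maps onto this directly: the head node at INFN knows the namespace, while the `_pool` lives on cloud disks and absorbs repeated reads from jobs running there.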

Tests with DynaFed
In the last years we performed tests with DPM in an HTTP Dynamic Federation (results presented at CHEP 2015):
PROOF-based analysis jobs accessed, via the HTTPS/WebDAV protocol, input datasets stored in two different DPM SEs (Frascati and Napoli), using the ATLAS federator server located at DESY.
The fallback mechanism of the Dynamic Federation behaved as expected, with a delay due to the transition phase: the jobs accessed data on the nearest SE; if it went down, they were redirected to the other server.
To extend this testing activity, a new Dynamic Federation will be set up, installing a federator server for ATLAS in Italy.
Several ATLAS sites (with different SRM implementations: DPM and StoRM) will be involved, starting with Frascati, Napoli, Roma and CNAF. Functionality and performance tests will be carried out.
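The fallback behaviour observed in these tests, where the federator redirects a job to the nearest SE holding a replica and falls back to the next one when it is down, can be sketched like this. The nearest-first ordering and the availability probe are illustrative assumptions, not DynaFed's actual implementation:

```python
def select_replica(replicas, is_available):
    """Return the first reachable replica, trying them in order,
    mimicking the federator's redirection with fallback.

    `replicas` is a list of SE URLs already sorted nearest-first;
    `is_available` is a callable probing an endpoint (a stand-in
    for the federator checking each SE).
    """
    for url in replicas:
        if is_available(url):
            return url
    raise RuntimeError("no replica of the file is reachable")

replicas = ["davs://dpm-frascati.example/atlas/f1",   # nearest SE
            "davs://dpm-napoli.example/atlas/f1"]     # fallback SE

# Nearest SE down: the job is redirected to the other server,
# matching the behaviour seen in the CHEP 2015 tests.
down = {"davs://dpm-frascati.example/atlas/f1"}
print(select_replica(replicas, lambda u: u not in down))
# davs://dpm-napoli.example/atlas/f1
```

The delay mentioned in the slide corresponds to the probing step: detecting that the nearest endpoint is dead takes time before the client can be sent to the surviving replica.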

Conclusions
Our activity within the DPM collaboration is:
very useful for the optimal management of our production site resources;
stimulating for experimentation in related fields.
DPM has greatly improved with the latest releases.
I want to thank the DPM developers for their great work and their helpfulness!