INFN Site Report R.Gomezel October 9-13, 2006 Jefferson Lab, Newport News.

Network Connectivity
- The CNAF Tier-1 site is now connected to the GARR backbone at 10 GE.
- A second 10 GE access is completely dedicated to LCG.
- All Tier-2 sites are now connected to the GARR backbone via 1 GE links.
- GARR backbone access to GEANT2: 10 GE (and more than one link).
- Future: GARR-X, with links at Gbps (2008).

Cluster File System
- GPFS has been adopted at several sites as the shared-disk file system for the computing farms.
- It has shown reliable and robust behaviour.
- Lustre has been tested by the Tier-1 staff, but GPFS is used in production.
- Trouble is sometimes seen when exporting the file system via NFS to clients running SL3: after the first access to the exported file system we start getting "NFS Stale File Handle" errors (not always, but very often; a client-side sketch of handling this error follows below).
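
As an illustration of the symptom, the minimal sketch below (not from the slides; it assumes a Python client reading from the NFS mount, and the path is hypothetical) retries a read when the kernel returns ESTALE, which is the errno behind the "NFS Stale File Handle" message:

    import errno
    import time

    def read_with_estale_retry(path, retries=3, delay=2.0):
        """Read a file from the NFS-mounted GPFS export, retrying when the
        kernel returns ESTALE ('NFS Stale File Handle')."""
        for attempt in range(retries + 1):
            try:
                with open(path, "rb") as f:
                    return f.read()
            except OSError as e:
                if e.errno != errno.ESTALE or attempt == retries:
                    raise
                time.sleep(delay)  # wait and retry; re-opening forces a fresh lookup

    # '/gpfs-nfs/data/file' is a hypothetical path on the exported file system.
    data = read_with_estale_retry("/gpfs-nfs/data/file")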

Batch System
- As reported last year, the Tier-1 makes 5000 LSF licences available, via a central server in Bologna, to the INFN sites that need them.
- As a result, an increasing number of sites are moving to LSF as the batch system for their local computing environment.
- This is leading to a uniform, widespread adoption of LSF as the batch system across the different sites.

INFN Tier2 Update (by M. Morandin)
- Development of Tier2 centres has been actively pursued in 2006 by the INFN LHC groups.
- The emphasis is on building the new facilities as coherent Tier2 federations; indeed, the INFN pledge to LCG is expressed in terms of a global INFN Tier2 federation.

Tier2 Sites Approved
- A first batch of Tier2 sites for ALICE, ATLAS and CMS has been selected and approved for funding by the INFN management:
  - Catania [ALICE]
  - LNL (Legnaro National Laboratory) [CMS]
  - Napoli [ATLAS]
  - Roma [ATLAS and CMS]
  - Torino [ALICE]
- Other sites are also very active and participate in the Tier2 build-up process.
- In addition, a Tier2 facility for LHCb, dedicated to Monte Carlo production, will be hosted by CNAF in Bologna.

Infrastructure at Tier2 Sites
- All these sites are served by 1 Gbps links to the GARR backbone and are well integrated in the INFN Grid infrastructure.
- Significant upgrades of the cooling and electrical systems are under way at all sites to accommodate the computing resources up to and beyond 2010 (5-8 racks, ~100 kW heat-removal capacity per site).

Profile of Computing Resources Deployment
- All sites have been actively involved in the Service Challenges and other activities, especially during summer '06.
- The new LHC start-up schedule matches well with INFN's prudent approach to the initial growth of the Tier2s.
- Funding has been secured for a moderate increase of the computing resources in 2007.

Storage Update (by A. Brunengo, A. Tirel)
- A SAN-based infrastructure is still the preferred solution for making storage systems available to the computing units.
- Expertise in design and planning is increasing.
- The INFN Storage WG has started testing iSCSI technology to find out whether it could be an efficient alternative to Fibre Channel solutions for computing farms.

iSCSI Testbed
- (Testbed diagram shown on the slide.)

iSCSI Testbed Configuration
- Servers used in the tests: dual dual-core 2.2 GHz Opterons with 4 GB RAM, each equipped with 2 GE interfaces (Broadcom NetXtreme BCM5704).
- Two of them are also equipped with an iSCSI HBA (Host Bus Adapter), a QLogic QLA4022 (a sketch of the software-initiator setup used on the other servers follows below).
- iSCSI box: Network Appliance FAS3020, based on a Xeon processor with 2 GB RAM and 512 MB of NVRAM, 4 Gb Ethernet ports (iSCSI), 2 FC ports at 2 Gb/s, 1 SCSI port, and GB ATA disks connected via FC to the FAS.
- 4 logical units of 300 GB each configured for the test activity.
- Operating system used on the servers: SLC 4.3; on the monitoring node (iscsi-mon): SL.
- Ganglia, integrated with Performance Co-Pilot, used for monitoring the tests.
- The Linux Test Project tool disktest used for performance evaluation.
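
For the servers without an HBA, a setup like this is typically driven by the open-iscsi software initiator. The sketch below is an assumption (the slides do not say which software initiator was used, and the portal address is hypothetical); it simply wraps the standard iscsiadm discovery and login steps from Python:

    # Sketch of bringing up a software iSCSI session on a non-HBA server,
    # assuming the open-iscsi initiator (iscsiadm). The portal IP is hypothetical.
    import subprocess

    PORTAL = "192.168.100.10:3260"  # hypothetical iSCSI portal on the FAS3020

    def discover_targets(portal):
        """Run sendtargets discovery and return the reported target IQNs."""
        out = subprocess.run(
            ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
            check=True, capture_output=True, text=True,
        ).stdout
        # Each line looks like: "<portal> <target-iqn>"
        return [line.split()[-1] for line in out.splitlines() if line.strip()]

    def login(target, portal):
        """Log in to one target; the kernel then exposes its LUNs as /dev/sd*."""
        subprocess.run(
            ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
            check=True,
        )

    for t in discover_targets(PORTAL):
        login(t, PORTAL)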

iSCSI Test Results
- Tests were run with 4 parallel threads (a sketch of this kind of measurement follows below).
- Other parameters varied: jumbo frame usage, TCP segmentation offload, different block sizes.
- The table gives an overview of the results for a 64 KB block size, comparing READ and WRITE throughput (MB/s) with and without the HBA at different MTU settings.
- CPU load was 5% using the HBA interface and 20-25% without it.
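
For context, here is a minimal sketch of the kind of measurement disktest performs: 4 threads writing 64 KiB blocks sequentially and reporting the aggregate MB/s. This is an illustrative approximation in Python, not the actual LTP tool, and the mount point is hypothetical:

    import os
    import threading
    import time

    BLOCK_SIZE = 64 * 1024      # 64 KB blocks, as in the results above
    BLOCKS_PER_THREAD = 4096    # ~256 MB written per thread
    THREADS = 4                 # 4 parallel threads, as in the tests

    def writer(path):
        """Sequentially write BLOCKS_PER_THREAD blocks to 'path'."""
        block = os.urandom(BLOCK_SIZE)
        with open(path, "wb") as f:
            for _ in range(BLOCKS_PER_THREAD):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # make sure the data actually reaches the device

    def run():
        # /mnt/iscsi is a hypothetical mount point of one of the iSCSI LUNs.
        paths = ["/mnt/iscsi/testfile-%d" % i for i in range(THREADS)]
        threads = [threading.Thread(target=writer, args=(p,)) for p in paths]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start
        total_mb = THREADS * BLOCKS_PER_THREAD * BLOCK_SIZE / (1024.0 * 1024.0)
        print("aggregate write throughput: %.1f MB/s" % (total_mb / elapsed))

    if __name__ == "__main__":
        run()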

iSCSI: Preliminary Conclusions
- Throughput is acceptable but not completely satisfying (<70 MB/s); more and different tests must be done.
- An HBA is needed to reduce the CPU load.
- Path failover is functional and satisfying; failover with the QLogic driver has not been tested yet.
- Future tests will use a more complex environment: tens of clients, parallel file-system access, different access types.

Future Testing Activity
- EMC is going to provide us with an iSCSI device to verify its performance; the tests are expected to be concluded by the end of the year.
- A test of a home-made solution is under evaluation, in order to compare performance and price with commercial devices.
- It could be a better solution, especially for the freedom of buying disks at market price, independent of a vendor's pricing model.

TRIP: a Model for Wireless Authentication
- TRIP is a project evaluated last year to provide a common infrastructure that lets INFN users easily access the wireless networks available at INFN sites, without having to ask for site-specific authentication.
- Authentication for INFN users: based on EAP-TTLS, using a distributed architecture of RADIUS servers configured at the INFN sites. A national RADIUS server redirects each client authentication request to the local RADIUS server of the site the user belongs to (see the sketch below).
- Authentication for guest users: based on a captive portal, which forces a user who needs network access to be redirected to a web page where credentials for authentication are obtained.
- Most units have implemented the facility, allowing roaming users to authenticate as if they were at their home site.
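
The realm-based forwarding performed by the national RADIUS server can be illustrated with a minimal sketch. The realms and hostnames below are hypothetical examples, and the real proxy is of course a RADIUS server rather than Python code:

    # Minimal sketch of the realm-based routing the national RADIUS proxy performs.
    # Site realms and server hostnames are hypothetical examples.
    SITE_RADIUS_SERVERS = {
        "ts.infn.it": "radius.ts.infn.it",   # Trieste
        "bo.infn.it": "radius.bo.infn.it",   # Bologna
        "pd.infn.it": "radius.pd.infn.it",   # Padova
    }

    def home_server_for(username):
        """Pick the site RADIUS server from the realm part of 'user@realm'."""
        try:
            _, realm = username.rsplit("@", 1)
        except ValueError:
            raise ValueError("username must be of the form user@site-realm")
        try:
            return SITE_RADIUS_SERVERS[realm.lower()]
        except KeyError:
            raise LookupError("unknown realm: %s" % realm)

    # The national proxy would forward the EAP-TTLS request to this host:
    print(home_server_for("rossi@ts.infn.it"))   # -> radius.ts.infn.it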

LDAP: Directory Service
- An increasing number of INFN sites are adopting LDAP to handle users' authorization to resources and applications (see the example below).
- Every site currently uses a site-specific configuration.
- A working group is evaluating the opportunity and feasibility of defining a nation-wide authorization model.
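
A typical authorization lookup of this kind checks whether a user belongs to an LDAP group before granting access to a resource. The sketch below uses the third-party ldap3 Python module; the library choice, hostname, base DN and group name are all assumptions for illustration, since the slides do not specify an implementation:

    # Illustrative authorization lookup against a site LDAP server.
    from ldap3 import Server, Connection

    def user_in_group(username, group):
        server = Server("ldap.ts.infn.it")          # hypothetical site LDAP server
        conn = Connection(server, auto_bind=True)   # anonymous bind for the lookup
        conn.search(
            search_base="ou=Groups,dc=infn,dc=it",  # hypothetical base DN
            search_filter="(&(cn=%s)(memberUid=%s))" % (group, username),
            attributes=["cn"],
        )
        return len(conn.entries) > 0

    # Grant access to a resource only if the user belongs to the right group:
    if user_in_group("rossi", "batch-users"):
        print("access granted")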