IT-INFN-CNAF Status Update
LHC-OPN Meeting, INFN CNAF, 10-11 December 2009
Stefano Zani, INFN CNAF (TIER1 Staff)

INFN CNAF
CNAF is the main computing facility of INFN.
– Core business: TIER1 for all the LHC experiments (ATLAS, CMS, ALICE and LHCb)
– Other activities: providing computing resources for several other experiments in which INFN is involved (CDF, BaBar, Virgo, SuperB, ...)
– R&D on GRID middleware development

The Computing Facility Infrastructure
The computing center infrastructure expansion was completed in spring 2009.
– More than 130 racks are in place, and the cooling system can dissipate the heat produced by 2 MW of installed computing devices.
– The total electrical distribution available for the center is about 5 MW.
– The total “protected electrical power” is about 3.4 MW (2 rotary UPS units + 2 diesel engines).

Computing Resources “at a Glance”
COMPUTING NODES
– More than 2800 cores (6.6 MSpecINT2K or HEP-SPEC)
– 1U “standard” servers (1 motherboard with 2 quad-core CPUs: 8 cores in 1U)
– 1U twin servers (2 motherboards, each with 2 quad-core CPUs: 16 cores in 1U)
– Blade chassis (16 servers in 10U, i.e. 12.8 cores per U, with integrated I/O modules and management system); the core densities are recomputed in the sketch after this list
STORAGE
– 2.5 PB of disk capacity on SAN (EMC2 + SUN), served by 200 disk servers (StoRM + GPFS)
– 5 PB of tape capacity (expandable up to 10 PB), served by 20 drives
NETWORK
– 3 core switch/routers
– 60 Gigabit aggregation switches
– 20 Gb/s WAN connectivity
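To make the density figures above easy to check, here is a minimal Python sketch (not part of the original slides) that recomputes the cores per rack unit for the three worker-node form factors; the only input it uses is the quad-core CPU configuration stated in the list.

```python
# Minimal sketch: recompute the cores-per-rack-unit figures quoted in the
# COMPUTING NODES list above. The form-factor table simply restates the slide.

FORM_FACTORS = {
    # name: (motherboards, quad-core CPUs per motherboard, rack units)
    "1U standard server": (1, 2, 1),
    "1U twin server":     (2, 2, 1),
    "blade chassis":      (16, 2, 10),  # 16 server blades in a 10U enclosure
}

CORES_PER_CPU = 4  # quad-core CPUs, as stated on the slide

for name, (boards, cpus_per_board, units) in FORM_FACTORS.items():
    cores = boards * cpus_per_board * CORES_PER_CPU
    print(f"{name}: {cores} cores in {units}U -> {cores / units:.1f} cores per U")

# Expected output: 8.0, 16.0 and 12.8 cores per U, matching the slide.
```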

[Slide: INFN CNAF TIER1 network diagram]
– WAN access via GARR: 2x10 Gb/s on the general-purpose Extreme BD8810 router plus a dedicated 10 Gb/s LHC-OPN link.
– The LHC-OPN link carries T0-T1 traffic (CERN) and T1-T1 traffic with PIC, RAL and TRIUMF; the cross-border LHC-OPN paths CNAF-KIT, CNAF-IN2P3 and CNAF-SARA provide the 10 Gb/s T0-T1 backup.
– The general-purpose link carries traffic to the other T1s (BNL, FNAL, TW-ASGC, NDGF) and to the T2s.
– Worker nodes are aggregated on Extreme Summit 450 switches with 2x1 Gb/s or 4x1 Gb/s uplinks; disk servers, CASTOR stagers and the Fibre Channel storage devices (SAN) are aggregated on Extreme Summit 400 switches with 2x10 Gb/s uplinks.
– In case of network congestion, the aggregation uplinks will be upgraded from 4x1 Gb/s to 10 Gb/s or 2x10 Gb/s.

Monitoring
– MDM devices have been online and connected to the core switch since summer ’09 (the GPS issue was recently fixed with an antenna signal splitter).
– Internal monitoring is done with:
  – MRTG on each port of the network
  – Nagios, managing alerts with automatic SMS on critical events
  – NetFlow/sFlow for traffic accounting
[Slide figure: MDM servers attached through an Extreme Summit 400 to the Extreme BD8810 core and the GARR WAN links.]
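As a hedged illustration of the Nagios alerting mentioned above, below is a minimal sketch of a Nagios-compatible check script in Python. The thresholds, file path and the way the utilisation value is obtained are assumptions for illustration only, not details from the slides; only the exit-code convention (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and the status-plus-perfdata output format are standard Nagios plugin behaviour.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check: warn/alert on WAN uplink utilisation.

Hypothetical sketch: the utilisation figure would normally come from SNMP
counters or a NetFlow/sFlow collector; here it is read from a plain text
file that some other tool is assumed to maintain.
"""
import sys

WARN_PCT = 80.0   # assumed warning threshold (percent of link capacity)
CRIT_PCT = 95.0   # assumed critical threshold

def read_utilisation(path="/var/run/wan_uplink_utilisation.txt"):
    # Hypothetical data source: a file containing a single percentage value.
    with open(path) as f:
        return float(f.read().strip())

def main():
    try:
        pct = read_utilisation()
    except (OSError, ValueError) as exc:
        print(f"UNKNOWN - cannot read utilisation: {exc}")
        return 3
    status, code = ("OK", 0)
    if pct >= CRIT_PCT:
        status, code = ("CRITICAL", 2)
    elif pct >= WARN_PCT:
        status, code = ("WARNING", 1)
    # Standard plugin output: human-readable status, then perfdata after the pipe.
    print(f"{status} - WAN uplink at {pct:.1f}% | util={pct:.1f}%;{WARN_PCT};{CRIT_PCT}")
    return code

if __name__ == "__main__":
    sys.exit(main())
```

In the setup described on the slide, the CRITICAL state would be the one wired to the automatic SMS notification.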

T1 WAN Connectivity Evolution (Wishes)
– CNAF wishes to have a dedicated link for all T1-T1 connectivity, via a T1 or a NAP well connected to the US T1s; for resiliency reasons it would be better to avoid using CERN for this task.
[Slide figure: CNAF-IT-T1 linked to all the other T1s through a T1 or NAP.]
– However, a solution with a second link through CERN seems more feasible in the short term, and GARR is considering starting to test a 100 Gb/s link between CNAF and CERN via GEANT before the end of 2010.

T2 WAN Connectivity Evolution
– The Italian TIER-2 sites are Bari, Catania, Milan, Naples, Legnaro (PD), Pisa, Rome and Turin; they are currently connected via GARR with 1 Gb/s links.
[Slide figure: map of the CNAF TIER1 and the Italian T2 sites.]
– Italian T2 WAN interconnections are evolving from 1 Gb/s to 10 Gb/s GARR-X accesses (Q4 2010).
– Each T2 needs an IP connection in order to reach all the other T1s and T2s via GEANT, and INFN is asking GARR for dedicated light paths for the traffic between INFN CNAF and all the Italian T2s.
[Slide figure: each TIER2 with a GARR-X IP path towards the other T1s/T2s and a light path towards CNAF.]

Next Steps on the LAN Side
– New Cisco Nexus 7000 to be installed as the core switch for the TIER1 resources (end of January).
– GridFTP servers and SEs to get a direct 10 Gb/s core connection (see the throughput sketch after this list).
– Virtual nodes on demand, and their interconnection issues, to be handled.
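To illustrate why the direct 10 Gb/s core connection for the GridFTP servers matters, here is a hypothetical back-of-the-envelope sketch; the dataset size and the 80% efficiency factor are assumptions, only the link speeds come from the slides.

```python
# Hypothetical back-of-the-envelope sketch (not from the slides): estimated
# time to move a dataset over the current aggregated uplinks versus a direct
# 10 Gb/s core connection. Dataset size and efficiency are illustrative.

DATASET_TB = 10      # assumed dataset size in terabytes (decimal TB)
EFFICIENCY = 0.8     # assumed fraction of nominal line rate actually achieved

LINKS_GBPS = {
    "4 x 1 Gb/s aggregation uplink": 4,
    "direct 10 Gb/s core connection": 10,
    "2 x 10 Gb/s uplink": 20,
}

bits_to_move = DATASET_TB * 8e12   # 1 TB = 8e12 bits

for name, gbps in LINKS_GBPS.items():
    seconds = bits_to_move / (gbps * 1e9 * EFFICIENCY)
    print(f"{name}: ~{seconds / 3600:.1f} hours for {DATASET_TB} TB")
```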

That’s it. Questions?

Backup Slides

[Backup slide: detailed network layout diagram showing the Extreme BD8810 and BD10K core switches, Cisco 7606 and Cisco 6500 routers, Internet access at 2 x 1 Gb/s and 1 x 10 Gb/s, the LHC-OPN link, and the rack aggregation switches (farmswg3, farmswf1, farmswf2, sw-f-01, ...) uplinked at 4 x 1 Gb/s.]
