T0/T1 network meeting July 19, 2005 CERN

Presentation transcript:

Slide 1: T0/T1 network meeting, July 19, 2005, CERN

Slide 2: INFN Tier1 (1)
- Located at CNAF, Bologna
- Tier1 for all four LHC experiments: ALICE, ATLAS, CMS and LHCb
- WAN connectivity: dedicated 10 Gbps to the T0, provided by GARR (September 2005)
  - interface: 10GE LAN PHY
  - backup 10 Gbps, possibly through another T1 (TBD)
  - AS number 137 (owner: GARR)
  - network prefix /24 (owner: INFN)
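Only traffic within the announced T1 prefix is meant to ride the dedicated lightpath; everything else stays on the general-purpose IP service. Below is a minimal Python sketch of that prefix-based split, with the documentation prefix 192.0.2.0/24 standing in for the (elided) INFN /24:

```python
# Sketch of the prefix-based path split implied above: traffic to the
# dedicated T0-T1 prefix uses the lightpath, everything else takes the
# general-purpose IP path. The prefix below is a documentation placeholder,
# NOT the actual (elided) INFN /24.
import ipaddress

OPN_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # hypothetical stand-in

def select_path(dst: str) -> str:
    """Return which WAN path a destination address would use."""
    if ipaddress.ip_address(dst) in OPN_PREFIX:
        return "dedicated 10GE lightpath to T0 (OPN)"
    return "general-purpose GARR/GEANT IP path"

print(select_path("192.0.2.17"))    # -> dedicated lightpath
print(select_path("198.51.100.9"))  # -> general-purpose path
```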

Slide 3: INFN Tier1 (2)
- LAN connectivity based on 10GE technology, with capacity for 10GE link aggregation
- Data flows will terminate on a disk buffer system (possibly CASTOR, but other SRM systems are also under evaluation)
- Security model based on L3 filters (ACLs) within the L3 equipment
- Monitoring via SNMP (presently v2)
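As an illustration of the SNMPv2 monitoring in the last bullet, the sketch below polls a 64-bit interface byte counter with the pysnmp library; the hostname, community string and interface index are placeholders, not the actual CNAF setup:

```python
# Hedged sketch of SNMPv2c polling with pysnmp; target and community are
# illustrative placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# IF-MIB::ifHCInOctets.1, the 64-bit input byte counter. On a 10GE link the
# 32-bit ifInOctets counter wraps in a few seconds at line rate, so the
# high-capacity counter is the one worth polling.
OID_IFHCINOCTETS_1 = "1.3.6.1.2.1.31.1.1.1.6.1"

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),              # mpModel=1 selects SNMPv2c
    UdpTransportTarget(("router.example.org", 161)),
    ContextData(),
    ObjectType(ObjectIdentity(OID_IFHCINOCTETS_1)),
))

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```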

Slide 4: [Diagram: CNAF LAN layout, showing GARR links L0, L1 and L2, a T1 link (TBD), the SC datamover cluster on the /24 prefix, and 7600 and BD switching equipment. Note on slide: the L1 and L2 links could be directly connected to the BD.]

Slide 5: Italian LHC Architecture
[Diagram: WAN topology. GARR routers RT1.BO1 (at CNAF), RT.RM1 and RT.MI1 connect the T1 through the GÉANT2 PoPs in Italy and Switzerland to CERN (T0), with an eBGP session between AS513 (CERN) and AS137 (GARR). Link types shown: 10GE-LAN, STM-64 IP access, 10GE-LAN lightpath access (GFP-F over STM-64), and 10G leased lambdas.]

Slide 6: Q&A 1
Q1: In interpreting the T0/T1 document, how do the T1s foresee connecting to the lambda?
A1: Via GARR equipment, through a 10GE-LAN PHY port.
Q2: Which networking equipment will be used?
A2: GARR will use a Juniper M320 now, and an SDH switch in the future.
Q3: How is the local network layout organised?
A3: See the previous slide.

Slide 7: Q&A 2
Q4: How is the routing organised between the OPN and the general-purpose internet?
A4: The Italian T1 public IP address space will be routed at least to all research networks. The announcement to the general-purpose internet can be withdrawn.
Q5: What AS number and IP prefixes will be used, and are the prefixes dedicated to the connectivity to the T0?
A5: The GARR ASN is 137; the prefixes will also be used to connect to other T1s and T2s.
Q6: What backup connectivity is foreseen?
A6: The GARR infrastructure will allow for national backup. International backup is to be guaranteed via lightpath interconnection with other T1s.
Q7: What monitoring technology is used locally?
A7: Good old L3 monitoring, via SNMP.
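A4 and A6 together describe a simple announcement policy: always announce the T1 prefix to research networks, optionally withdraw it from the commodity internet. A toy Python rendering of that decision, with hypothetical peer labels rather than a real GARR router configuration:

```python
# Illustrative sketch of the announcement policy in A4/A6. Peer names are
# hypothetical labels, not a real configuration.
RESEARCH_PEERS = {"GEANT2", "other-T1-lightpath"}
COMMODITY_PEERS = {"commodity-upstream"}

def announce_to(peer: str, commodity_withdrawn: bool) -> bool:
    """Decide whether the T1 prefix is announced to a given BGP peer."""
    if peer in RESEARCH_PEERS:
        return True                      # always reachable from research networks
    if peer in COMMODITY_PEERS:
        return not commodity_withdrawn   # can be withdrawn on demand (A4)
    return False                         # default: do not announce

# Example: withdraw the commodity announcement, keep research reachability.
assert announce_to("GEANT2", commodity_withdrawn=True)
assert not announce_to("commodity-upstream", commodity_withdrawn=True)
```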

Slide 8: Q&A 3
Q8: How is the operational support organised?
A8: GARR NOC support is Mon-Fri, 8:00-20:00 CET (24x7x365 support TBD).
Q9: What security model is to be used with the OPN? How will it be implemented?
A9: L3 and L4 filters can be implemented without performance impact.
Q10: What is the policy for external monitoring of local network devices, e.g. the border router for the OPN?
A10: SNMP read-only access can be provided.
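A9 describes stateless L3/L4 filtering rather than a stateful firewall, which is why it carries no performance penalty on the router. Below is a hedged sketch of how such an ordered ACL evaluates a packet; addresses and ports are illustrative placeholders, not the actual OPN ACLs:

```python
# Minimal sketch of the stateless L3/L4 filtering model of A9: an ordered
# list of (src net, dst net, protocol, dst port) rules, first match wins,
# with an implicit deny at the end. All values are placeholders.
import ipaddress

ACL = [
    # (source net,    dest net,          proto, dst port, action)
    ("192.0.2.0/24",  "198.51.100.0/24", "tcp", 2811,     "permit"),  # e.g. GridFTP control
    ("192.0.2.0/24",  "198.51.100.0/24", "tcp", None,     "permit"),  # any TCP between the sites
]

def evaluate(src: str, dst: str, proto: str, dport: int) -> str:
    """Return the ACL action for a packet (first match wins)."""
    for net_src, net_dst, r_proto, r_port, action in ACL:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(net_src)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(net_dst)
                and r_proto in (proto, "any")
                and r_port in (dport, None)):
            return action
    return "deny"  # implicit deny, as on a router ACL

print(evaluate("192.0.2.10", "198.51.100.5", "tcp", 2811))  # -> permit
print(evaluate("203.0.113.7", "198.51.100.5", "udp", 53))   # -> deny
```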

Slide 9: GARR-G Phase 3
- Implementation ongoing; to be completed November 2005
- 10G lambdas
- 2x10G accesses, several at 1G
- L3 infrastructure
- GÉANT2 access: n x 10 Gbps (Sep-Oct 2005)
[Map: GARR-G Phase 3 topology across the Italian PoPs, with 155M/622M SDH, 2.5 Gbps and 10 Gbps SDH/wavelength links and dark fibre/DWDM spans; equipment: Juniper M320/M20 and Cisco GSR 124xx; external links to GÉANT2, Telia Seabone and EumedConnect, plus 10 Gbps accesses for INFN-CNAF and DEISA-CINECA.]

Slide 10: GARR-G Phase 4
- Next-generation network
- Roma-Bologna-Milano ring
- 1000 km total fibre length
- Owned optical infrastructure
- DWDM: initially 4x10G on each span
[Map: planned Phase 4 topology of GigaPoPs and MegaPoPs over the owned dark fibre/DWDM ring, alongside the existing 2.5 Gbps and 10 Gbps SDH/wavelength links.]