Questionnaire answers. D. Petravick, P. Demar, FNAL

7/14/05 DLP -- GDB2 FNAL/T1 issues: In interpreting the T0/T1 document, how do the T1s foresee connecting to the lambda? We expect LHCnet to provide the network infrastructure for our LHC traffic to/from CERN. We anticipate that LHCnet will operate a three-node ring (CERN, StarLight, MANLAN) that is logically and physically diverse, including vendor diversity across the Atlantic. We expect to connect our site to this ring at the StarLight facility in Chicago. Initially, we expect to connect directly to LHCnet via a switch in the FNAL rack at StarLight. Later, when the ESnet Chicago MAN is completed, we will use it to connect to LHCnet.

7/14/05 DLP -- GDB3 Which networking equipment will be used? The connection at CERN will be specified by LHCnet. The connection at StarLight between LHCnet and FNAL will be 10 gigabit Ethernet (LAN_PHY). The connection via the ESnet MAN is TBD, but is expected to be 10 gigabit Ethernet (LAN_PHY).

7/14/05 DLP -- GDB4 How is the local network layout organized? The local network is organized into a network core and workgroup LANs, including a CMS computing facility LAN. Both the network core and the CMS computing facility LAN are based on 10 gigabit Ethernet (10GE) technology, with capacity for 10GE link aggregation. To the extent that the OPN transports only data flows, we expect flows to normally terminate in the CMS dCache system in the CMS workgroup LAN at Fermilab. We anticipate that, for backup and redundancy purposes, flows may also terminate in the general-use STKen workgroup LAN (tape storage).

7/14/05 DLP -- GDB5 How is the routing organized between the OPN and the general-purpose Internet? We expect to forward LHC traffic from the CMS (backup: STKen) workgroup to LHCnet via an alternate off-site network path, using policy routing. (Note that this model may be somewhat at odds with a model that assumes all FNAL/CERN traffic is carried across the LHCnet channel(s).)
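
As a rough illustration (not part of the original slides), the sketch below shows the decision that source-based policy routing makes: traffic originating in the CMS (or backup STKen) prefixes is steered onto the alternate off-site path toward LHCnet, while everything else follows the general-purpose Internet path. The prefixes and path labels are placeholders, not FNAL's actual values.

    import ipaddress

    # Hypothetical source prefixes selected by the routing policy
    POLICY_SOURCES = [
        ipaddress.ip_network("198.18.0.0/23"),     # CMS dCache LAN (placeholder prefix)
        ipaddress.ip_network("198.51.100.0/25"),   # STKen LAN (placeholder prefix)
    ]

    LHCNET_PATH = "alternate off-site path to LHCnet"
    DEFAULT_PATH = "general-purpose Internet path"

    def select_path(src_ip: str) -> str:
        """Return the forwarding path chosen by the source-based policy."""
        src = ipaddress.ip_address(src_ip)
        if any(src in net for net in POLICY_SOURCES):
            return LHCNET_PATH
        return DEFAULT_PATH

    # A CMS dCache host is policy-routed onto LHCnet; a general farm node
    # keeps the normal path.
    assert select_path("198.18.1.17") == LHCNET_PATH
    assert select_path("203.0.113.5") == DEFAULT_PATH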

7/14/05 DLP -- GDB6 What AS number and IP prefixes will they use, and are the IP prefixes dedicated to the connectivity to the T0? No prefixes are dedicated solely to the T0. The CMS dCache and STKen prefixes are: /23 (CMS dCache) and l3.0/25 (STKen). The AS number will be the FNAL AS, 3152.
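
For illustration only, the sketch below shows the kind of prefix/origin filter a peer might apply to routes learned over the OPN: accept a route only if it originates in FNAL's AS (3152) and falls within the declared prefixes. The prefix values are placeholders, since the real values were truncated in the transcript above.

    import ipaddress

    FNAL_ASN = 3152
    DECLARED_PREFIXES = [
        ipaddress.ip_network("198.18.0.0/23"),     # CMS dCache /23 (placeholder value)
        ipaddress.ip_network("198.51.100.0/25"),   # STKen /25 (placeholder value)
    ]

    def accept_route(prefix: str, origin_asn: int) -> bool:
        """Accept a route only if it originates in AS 3152 and equals, or is
        a more-specific of, one of the declared prefixes."""
        net = ipaddress.ip_network(prefix)
        return origin_asn == FNAL_ASN and any(net.subnet_of(p) for p in DECLARED_PREFIXES)

    assert accept_route("198.18.0.0/23", 3152) is True
    assert accept_route("203.0.113.0/24", 3152) is False   # not a declared prefix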

7/14/05 DLP -- GDB7 What backup connectivity is foreseen? LHCnet will provide a ring structure. If one of the trans-Atlantic links is broken, bandwidth will be shared by the US CMS T1 (FNAL) and the US ATLAS T1 (BNL) via the other path in the ring. The StarLight facility and the CERN facility are single points of failure in the ring. We expect to be able to exploit a tertiary backup path via ESnet and GEANT2 to work around that vulnerability.
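
Purely as an illustration of the failover ordering described above (first available path wins), here is a minimal Python sketch; the path names are descriptive labels rather than real circuit identifiers, and the status input is assumed to come from monitoring.

    PATHS_IN_PREFERENCE_ORDER = [
        "LHCnet ring, primary trans-Atlantic segment",
        "LHCnet ring, alternate segment (bandwidth shared with BNL T1)",
        "tertiary path via ESnet and GEANT2",
    ]

    def first_available_path(path_is_up):
        """Return the most-preferred path whose status is up, else None.
        path_is_up: dict mapping path name -> bool (hypothetical monitoring input)."""
        for path in PATHS_IN_PREFERENCE_ORDER:
            if path_is_up.get(path, False):
                return path
        return None

    # With the primary trans-Atlantic segment down, traffic falls back to the
    # other side of the ring; if StarLight itself fails, only the tertiary
    # ESnet/GEANT2 path remains.
    status = {
        PATHS_IN_PREFERENCE_ORDER[0]: False,
        PATHS_IN_PREFERENCE_ORDER[1]: True,
        PATHS_IN_PREFERENCE_ORDER[2]: True,
    }
    assert first_available_path(status) == PATHS_IN_PREFERENCE_ORDER[1]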

7/14/05 DLP -- GDB8 What is the monitoring technology used locally? WAN monitoring is based on the use of SNMP MIB objects and L3 flow records. These will serve as building blocks for LHC network monitoring. We anticipate migrating to SNMPv3 with MD5 authentication and ACL-restricted access for LHC monitoring. We expect to make use of MonALISA. Experience has shown that we need vector-driven monitoring tools, not just scalar-driven tools.
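
To make the role of the SNMP counters concrete, here is an illustrative sketch (not the site's actual tooling) of the calculation these MIB objects support: average link utilization from two successive polls of a 64-bit interface octet counter such as ifHCInOctets. The counter samples are assumed to be retrieved by an external SNMP poller; the numbers below are made up.

    def utilization_pct(octets_prev, octets_now, interval_s, link_bps):
        """Average inbound utilization (%) over one polling interval."""
        delta_octets = octets_now - octets_prev       # assumes no counter wrap
        bits_per_second = (delta_octets * 8) / interval_s
        return 100.0 * bits_per_second / link_bps

    # Roughly 300 GB received in a 5-minute poll on a 10 GE link works out to
    # about 80% average utilization.
    print(utilization_pct(0, 300_000_000_000, 300, 10_000_000_000))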

7/14/05 DLP -- GDB9 How is the operational support organized? 24x7 coverage of the local network, including WAN access channels, is provided. Off-hours coverage is via a pager list, remotely contactable at

7/14/05 DLP -- GDB10 What is the security model to be used with the OPN? How will it be implemented? High-impact LHC data traffic will use an alternate path into and out of the facility LAN. The alternate path utilizes a high-performance router with ACLs to control access. The ACL model will be default deny, with exceptions for permitted LHC traffic. The ACLs will be maximally restrictive, to the extent practicable.
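
As an illustration of the default-deny ACL model described above (not the actual router configuration), the sketch below permits only explicitly listed LHC flows and denies everything else. The prefixes and the port are placeholders.

    import ipaddress

    # Explicit permit entries: (source prefix, destination prefix, destination port).
    # Anything that does not match is denied.  All values are placeholders.
    PERMITTED = [
        (ipaddress.ip_network("198.18.0.0/23"),     # CMS dCache LAN (placeholder)
         ipaddress.ip_network("192.0.2.0/24"),      # remote T0/T1 storage (placeholder)
         2811),                                     # e.g. a GridFTP control port
    ]

    def is_permitted(src_ip, dst_ip, dst_port):
        """Default deny: True only if an explicit permit entry matches."""
        src = ipaddress.ip_address(src_ip)
        dst = ipaddress.ip_address(dst_ip)
        for src_net, dst_net, port in PERMITTED:
            if src in src_net and dst in dst_net and dst_port == port:
                return True
        return False

    assert is_permitted("198.18.0.10", "192.0.2.7", 2811) is True
    assert is_permitted("198.18.0.10", "192.0.2.7", 22) is False   # not listed, so denied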

7/14/05 DLP -- GDB11 What is the policy for external monitoring of local network devices, e.g., the border router for the OPN? We will comply with any reasonable policy for read-only access to our perimeter network infrastructure that is established for the OPN.