High Energy & Nuclear Physics (HENP) SIG
October 4th, 2011 – Fall Member Meeting
Jason Zurawski, Internet2 Research Liaison

Agenda
– Group Name/Future Meetings
– LHCONE
– DYNES
– SC11 Planning
– AOB

Group Name/Future Meetings
"HENP SIG" is too hard for people to dereference when looking at the agenda. Alternatives:
– "Physics SIG"?
– "Science SIG" – more embracing?
– Others?
Alternate proposal: do we need an "LHC BoF" with topics focused on network support?

Agenda
– LHCONE
– DYNES
– SC11 Planning
– AOB

LHCONE High-Level Architecture (diagram)

LHCONE – Early Planning (diagram)

"Joe's Solution"
Two issues identified at the DC meeting as needing particular attention:
– Multiple paths across the Atlantic
– Resiliency
Agreed to have the architecture group work out a solution.

LHCONE Status
LHCONE is a response to the changing dynamics of data movement in the LHC environment.
It is composed of multiple parts:
– North America, transatlantic links, Europe
– Others?
It is expected to be composed of multiple services:
– Multipoint service
– Point-to-point service
– Monitoring service

LHCONE Multipoint Service
Initially created as a shared Layer 2 domain.
Uses two VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops.
Enables up to 25G on the transatlantic routes for LHC traffic.
Use of dual paths provides redundancy.
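To see why splitting the two transatlantic segments across separate VLANs avoids a Layer 2 loop, note that a broadcast loop exists only if some single VLAN's link graph contains a cycle. A minimal sketch, with a hypothetical link list standing in for the real topology:

```python
# Illustrative sketch: check each VLAN's Layer 2 topology for cycles.
# Link lists are hypothetical, not the actual LHCONE deployment.

def has_cycle(links):
    """Union-find cycle detection on an undirected link list."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:   # endpoints already connected: this link closes a loop
            return True
        parent[ra] = rb
    return False

# Each VLAN rides exactly one transatlantic segment.
vlan_2000 = [("StarLight", "MANLAN"), ("MANLAN", "TA-GEANT"), ("TA-GEANT", "CERN")]
vlan_3000 = [("StarLight", "MANLAN"), ("MANLAN", "TA-USLHCNet"), ("TA-USLHCNet", "CERN")]

for name, links in (("VLAN 2000", vlan_2000), ("VLAN 3000", vlan_3000)):
    print(name, "has a loop" if has_cycle(links) else "is loop-free")
# Carrying both transatlantic segments in one VLAN would close the loop:
print("combined:", "has a loop" if has_cycle(vlan_2000 + vlan_3000) else "is loop-free")
```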

LHCONE Point-to-Point Service
Planned point-to-point service.
Suggestion: build on the efforts of DYNES and the DICE-Dynamic service.
The DICE-Dynamic service is being rolled out by ESnet, GÉANT, Internet2, and USLHCNet:
– Remaining issues being worked out
– Planned commencement of service: October 2011
– Built on OSCARS (ESnet, Internet2, USLHCNet) and AutoBAHN (GÉANT), using the IDC protocol
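At its core, an IDC-protocol request reserves guaranteed bandwidth between two topology endpoints over a time window, optionally pinned to a VLAN. A minimal sketch of what such a request carries; the endpoint identifiers and validation limits are illustrative, and the real OSCARS/AutoBAHN IDCs expose a SOAP web-service interface rather than this API:

```python
# Hypothetical sketch of an IDC-style circuit reservation request.
# Field names mirror IDC concepts (endpoints, schedule, bandwidth,
# VLAN); the endpoint identifiers below are made up.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src: str             # source edge port (topology identifier)
    dst: str             # destination edge port
    bandwidth_mbps: int  # guaranteed rate
    vlan: int            # VLAN tag delivered at the edges
    start: datetime
    end: datetime

    def validate(self):
        if self.end <= self.start:
            raise ValueError("reservation window is empty")
        if not 0 < self.bandwidth_mbps <= 10_000:
            raise ValueError("rate exceeds a 10G edge port")
        if not 2 <= self.vlan <= 4094:
            raise ValueError("VLAN id out of range")

# Example: a 2 Gbps, 6-hour circuit between two made-up endpoints.
now = datetime.utcnow()
req = CircuitRequest(
    src="urn:ogf:network:internet2.edu:chic-eth1",
    dst="urn:ogf:network:geant.net:ams-eth2",
    bandwidth_mbps=2000,
    vlan=3000,
    start=now + timedelta(hours=1),
    end=now + timedelta(hours=7),
)
req.validate()
print("would submit:", req)
```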

LHCONE Monitoring Service
Planned monitoring service.
Suggestion: build on the efforts of DYNES and the DICE-Diagnostic service.
The DICE-Diagnostic service is being rolled out by ESnet, GÉANT, and Internet2:
– Remaining issues being worked out
– Planned commencement of service: October 2011
– Built on perfSONAR
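perfSONAR deployments of this kind typically run a scheduled mesh of throughput and latency tests between every pair of participating sites. A small sketch of enumerating such a mesh; the site list and intervals are placeholders, and real deployments express this in perfSONAR's own mesh configuration rather than ad-hoc code:

```python
# Sketch: full-mesh test schedule a perfSONAR-style deployment runs.
# Site names and test intervals are placeholders.
from itertools import combinations

sites = ["BNL", "FNAL", "CERN", "UMich", "Caltech"]

tests = {
    "throughput": "every 6 hours",    # bwctl/iperf-style bandwidth test
    "one-way latency": "continuous",  # owamp-style probe stream
}

schedule = [
    (a, b, test, interval)
    for a, b in combinations(sites, 2)  # every unordered site pair
    for test, interval in tests.items()
]

for a, b, test, interval in schedule:
    print(f"{a:8s} <-> {b:8s} {test:16s} {interval}")
print(len(schedule), "scheduled test entries")
```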

LHCONE (NA) Multipoint Service (diagram)

LHCONE Pilot, Late September 2011 (diagram)
Source: Mian Usman, DANTE, "LHCONE technical proposal v2.0"

LHCONE Pilot
Domains are interconnected through Layer 2 switches.
Two VLANs (nominal IDs: 2000 and 3000):
– VLAN 2000 configured on the GÉANT/ACE transatlantic segment
– VLAN 3000 configured on the US LHCNet transatlantic segment
– Allows both transatlantic segments to be used and provides transatlantic resiliency
Two route servers per VLAN:
– Each connecting site peers with all four route servers
Keep in mind this is a "now" solution; it does not scale well to more transatlantic paths:
– Continued charge to the Architecture group
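One consequence of the route-server design: each site maintains a fixed four BGP sessions (two route servers per VLAN, two VLANs) no matter how many sites join, whereas a full mesh of site-to-site peerings would grow with every addition. A quick illustration of the arithmetic:

```python
# Per-site BGP session count: route-server design vs. full mesh.
# 2 route servers per VLAN x 2 VLANs = 4 sessions per site, always.
ROUTE_SERVERS = 2 * 2

for n_sites in (6, 12, 24):
    full_mesh = n_sites - 1  # one session to every other site
    print(f"{n_sites:2d} sites: route servers -> {ROUTE_SERVERS} sessions/site, "
          f"full mesh -> {full_mesh} sessions/site")
```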

LHCONE in GÉANT (diagram)

LHCONE in GÉANT, continued (diagram)

Internet2 (NA) – New York Status
VLANs 2000 and 3000 for the multipoint service are configured:
– The transatlantic routes, Internet2, and CANARIE are all participating in the shared VLAN service.
A new switch will be installed at MAN LAN in October:
– Will enable a new connection by BNL.
Peering with the University of Toronto through the CANARIE link to MAN LAN is complete.
End sites with direct connections to MAN LAN:
– MIT
– BNL
– BU/Harvard

LHCONE (NA) – Chicago
VLANs for the multipoint service were configured on 9/23:
– Reconfigured correctly shortly thereafter to prevent a routing loop.
– Testing on the link can start any time.
Status of the FNAL Cisco:
– Resource constraints on the Chicago router have prevented this from happening; port availability is the issue.
End sites:
– See the diagram from this summer.

LHCONE (NA) – Chicago (diagram)

MAN LAN
New York exchange point: Ciena CoreDirector and Cisco 6513.
Current connections on the CoreDirector:
– 11 OC-192s
– 9 1G
Current connections on the 6513:
– 16 10G Ethernet
– 7 1G Ethernet

MAN LAN Roadmap
Switch upgrade:
– A Brocade MLXe-16 was purchased with 24 10G ports, 24 1G ports, and 2 100G ports.
– Internet2 and ESnet will be connected at 100G.
The Brocade will allow landing transatlantic circuits of greater than 10G.
An IDC for dynamic circuits will be installed:
– Complies with the GLIF GOLE definition.

MAN LAN Services
MAN LAN is an open exchange point.
1 Gbps, 10 Gbps, and 100 Gbps interfaces on the Brocade switch:
– 40 Gbps could be made available later.
Dedicated VLANs can be mapped through for Layer 2 connectivity beyond the Ethernet switch.
With the Brocade comes the possibility of higher-layer services should there be a need:
– This would include OpenFlow being enabled on the Brocade.
Dynamic services via an IDC.
perfSONAR-PS instrumentation.

WIX
WIX = Washington DC International Exchange point.
A joint project being developed by MAX and Internet2, to be transferred to MAX to manage once in operation.
WIX is a state-of-the-art international peering exchange facility, located at the Level 3 POP in McLean, VA, designed to serve research and education networks.
WIX is architected to meet the diverse needs of different networks.
Initially the WIX facility will hold 4 racks, expandable to 12 racks as needed:
– Bulk cables between the existing MAX and Internet2 suites will also be in place.
WIX is implemented with a Ciena CoreDirector and a Brocade MLXe.

WIX Roadmap
Grow the connections to existing exchange points.
Expand the facility with "above the net" capabilities located in the suite:
– Allows for easy access both domestically and internationally.
Grow the number of transatlantic links to ensure adequate connectivity as well as diversity.

WIX Services
Dedicated VLANs between participants for traffic exchange at Layer 2.
WIX will be an open exchange point.
Access to dynamic circuit networks such as Internet2 ION.
With the Brocade there exists the possibility of higher-layer services, should there be a need:
– Possibility of OpenFlow being enabled on the Brocade.
1 Gbps, 10 Gbps, and 100 Gbps interfaces are available on the Brocade switch; 40 Gbps could be made available later.
perfSONAR instrumentation.

Agenda
– Group Name/Future Meetings
– LHCONE
– DYNES
– SC11 Planning
– AOB

DYNES Projected Topology, October 2011 (diagram)

DYNES Hardware
Inter-domain Controller (IDC) server and software:
– The IDC creates virtual LANs (VLANs) dynamically between the FDT server, the local campus, and the wide area network.
– The IDC software is based on the OSCARS and DRAGON software, packaged together as the DCN Software Suite (DCNSS).
– DCNSS versions correlate to stable, tested versions of OSCARS.
– Initial DYNES deployments will include both DCNSSv0.6 and DCNSSv0.5.4 virtual machines: currently Xen based, with KVM being considered for future releases.
– A Dell R410 1U server has been chosen, running CentOS 5.x.
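To make the IDC's job concrete: it accepts a reservation, provisions VLAN cross-connects along the computed path at the start time, and tears them down at the end time. A toy lifecycle sketch; the states below are illustrative and simpler than OSCARS's actual state machine:

```python
# Toy sketch of a dynamic-circuit lifecycle as an IDC drives it.
# States are illustrative, not OSCARS's real state machine.
from enum import Enum, auto

class CircuitState(Enum):
    REQUESTED = auto()  # user submitted a reservation
    SCHEDULED = auto()  # path computed, resources booked
    ACTIVE = auto()     # VLAN cross-connects provisioned end to end
    RELEASED = auto()   # teardown complete, resources freed

ALLOWED = {
    CircuitState.REQUESTED: {CircuitState.SCHEDULED, CircuitState.RELEASED},
    CircuitState.SCHEDULED: {CircuitState.ACTIVE, CircuitState.RELEASED},
    CircuitState.ACTIVE: {CircuitState.RELEASED},
    CircuitState.RELEASED: set(),
}

def advance(state: CircuitState, new: CircuitState) -> CircuitState:
    if new not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state.name} -> {new.name}")
    return new

state = CircuitState.REQUESTED
for nxt in (CircuitState.SCHEDULED, CircuitState.ACTIVE, CircuitState.RELEASED):
    state = advance(state, nxt)
    print("circuit is now", state.name)
```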

DYNES Hardware
Fast Data Transfer (FDT) server:
– The FDT server connects to the disk array via the SAS controller and runs the FDT software.
– The FDT server also hosts the DYNES Agent (DA) software.
– The standard FDT server will be a Dell R510 with a dual-port Intel X520-DA NIC (a PCIe Gen 2.0 x8 card) and 12 disks for storage.
DYNES Ethernet switch options:
– Dell PC6248 (48 1GE ports, 4 10GE-capable ports: SFP+, CX4, or optical)
– Dell PC8024F (24 10GE SFP+ ports, 4 "combo" ports supporting CX4 or optical)
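For a sense of what the FDT software does at transfer time: FDT is a Java tool from Caltech that streams files over parallel TCP connections between a client and a server instance. A hedged sketch of driving a push from Python; hostnames, paths, and the install location are placeholders, and the flags should be checked against the FDT release actually deployed:

```python
# Sketch: launching an FDT transfer between two DYNES FDT servers.
# Hostnames, paths, and flags are illustrative; verify against the
# deployed FDT version (java -jar fdt.jar -h).
import subprocess

FDT_JAR = "/opt/fdt/fdt.jar"         # assumed install location
REMOTE = "fdt.example-site.edu"      # placeholder destination host

# The destination host runs FDT in server mode: java -jar fdt.jar
# Here we act as the client, pushing a dataset with parallel streams.
cmd = [
    "java", "-jar", FDT_JAR,
    "-c", REMOTE,                    # connect to the remote FDT server
    "-P", "8",                       # use 8 parallel TCP streams
    "-d", "/storage/incoming",       # remote destination directory
    "/storage/outgoing/dataset.tar", # local file to send
]
subprocess.run(cmd, check=True)
```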

Our Choices
IDC: Dell R410 1U server – dual 2.4 GHz Xeon (64-bit), 16 GB RAM, 500 GB HD.
FDT: Dell R510 2U server – dual 2.4 GHz Xeon (64-bit), 24 GB RAM, 300 GB main storage, 12 TB through RAID.
Switch: Dell PC8024F or Dell PC6248, for 10G vs. 1G sites – copper ports and SFP+; optics on a site-by-site basis.

DYNES Data Flow Overview (diagram)

Phase 3 Group A Members
AMPATH
Mid-Atlantic Crossroads (MAX)
– The Johns Hopkins University (JHU)
Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)*
– Rutgers (via NJEdge)
– University of Delaware
Southern Crossroads (SOX)
– Vanderbilt University
CENIC*
– California Institute of Technology (Caltech)
MREN*
– University of Michigan (via MERIT and CIC OmniPoP)
Note: USLHCNet will also be connected to the DYNES instrument via a peering relationship with DYNES.
* Temporary configuration of static VLANs until a future group.

Phase 3 Group B Members
Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)
– University of Pennsylvania
Metropolitan Research and Education Network (MREN)
– Indiana University (via I-Light and CIC OmniPoP)
– University of Wisconsin–Madison (via BOREAS and CIC OmniPoP)
– University of Illinois at Urbana-Champaign (via CIC OmniPoP)
– The University of Chicago (via CIC OmniPoP)
Lonestar Education And Research Network (LEARN)
– Southern Methodist University (SMU)
– Texas Tech University
– University of Houston
– Rice University
– The University of Texas at Dallas
– The University of Texas at Arlington
Florida International University (connected through FLR)

Phase 3 Group C Members
Front Range GigaPop (FRGP)
– University of Colorado Boulder
Northern Crossroads (NoX)
– Boston University
– Harvard University
– Tufts University
CENIC**
– University of California, San Diego
– University of California, Santa Cruz
CIC OmniPoP***
– The University of Iowa (via BOREAS)
Great Plains Network (GPN)***
– The University of Oklahoma (via OneNet)
– The University of Nebraska–Lincoln
** Deploying own dynamic infrastructure.
*** Static configuration based.

Agenda
– Group Name/Future Meetings
– LHCONE
– DYNES
– SC11 Planning
– AOB

It's the Most Wonderful Time of the Year
SC11 is about one month out. What's brewing?
– LHCONE demo: Internet2, GÉANT, and end sites in the US and Europe (UMich and CNAF initially targeted; any US end site is open to get connected). The idea is to show "real" applications and use of the new network.
– DYNES demo: booths (Internet2, Caltech, Vanderbilt); external deployments (Group A and some of Group B); external to DYNES (CERN, SPRACE, HEPGrid).

It's the Most Wonderful Time of the Year (continued)
What's brewing?
– 100G capabilities: the ESnet/Internet2 coast-to-coast 100G network, with lots of other demos using it.
– SRS (SCinet Research Sandbox): demonstration of high-speed capabilities; lots of entries; use of OpenFlow devices.
– Speakers at the Internet2 booth: CIOs from campus/federal installations, scientists, and networking experts.

DYNES Demo – Topology (diagram)

DYNES Demo – Participants (diagram)

Agenda
– Group Name/Future Meetings
– LHCONE
– DYNES
– SC11 Planning
– AOB

AOB
– UF Lustre work?
– MWT2 upgrades?

High Energy & Nuclear Physics (HENP) SIG
October 4th, 2011 – Fall Member Meeting
Jason Zurawski – Internet2 Research Liaison
For more information, visit