High Energy & Nuclear Physics (HENP) SIG, October 4th 2011 – Fall Member Meeting. Jason Zurawski, Internet2 Research Liaison.

1 High Energy & Nuclear Physics (HENP) SIG, October 4th 2011 – Fall Member Meeting. Jason Zurawski, Internet2 Research Liaison

2 Agenda: Group Name/Future Meetings; LHCONE; DYNES; SC11 Planning; AOB. © 2011 Internet2

3 Group Name/Future Meetings: “HENP SIG” is too hard for people to parse when looking at the agenda. Alternatives: “Physics SIG”? “Science SIG” (more inclusive)? Others? Alternate proposal: do we need an ‘LHC BoF’ with topics focused on network support?

4 Agenda: LHCONE; DYNES; SC11 Planning; AOB

5 LHCONE High-Level Architecture (diagram)

6 LHCONE – Early Planning

7 “Joe’s Solution”: Two issues were identified at the DC meeting as needing particular attention: multiple paths across the Atlantic, and resiliency. Agreed to have the architecture group work out a solution.

8 LHCONE Status: LHCONE is a response to the changing dynamics of data movement in the LHC environment. It is composed of multiple parts (North America, transatlantic links, Europe; others?) and is expected to comprise multiple services: a multipoint service, a point-to-point service, and a monitoring service.

9 LHCONE Multipoint Service: Initially created as a shared Layer 2 domain. Uses two VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops. Enables up to 25G on the transatlantic routes for LHC traffic, and the use of dual paths provides redundancy.
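The loop-avoidance idea behind the two-VLAN design can be sketched in a few lines: if each transatlantic segment carries exactly one VLAN, no VLAN is bridged over two parallel paths, so the shared Layer 2 domain stays loop-free. A minimal Python sketch (the segment labels are descriptive, not configuration):

```python
# Illustrative model of the LHCONE multipoint design: one VLAN per
# transatlantic segment so the shared Layer 2 domain contains no loop.

VLAN_BY_SEGMENT = {
    "GEANT/ACE transatlantic": 2000,
    "US LHCNet transatlantic": 3000,
}

def is_loop_free(vlan_by_segment):
    """A VLAN bridged over two parallel TA segments would form a
    Layer 2 loop; distinct VLANs per segment avoid that."""
    vlans = list(vlan_by_segment.values())
    return len(vlans) == len(set(vlans))

assert is_loop_free(VLAN_BY_SEGMENT)
```

The same check would flag a misconfiguration that put one VLAN on both transatlantic routes.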

10 LHCONE Point-to-Point Service: A planned point-to-point service. Suggestion: build on the efforts of DYNES and the DICE-Dynamic service, which is being rolled out by ESnet, GÉANT, Internet2, and USLHCnet. Remaining issues are being worked out; planned commencement of service is October 2011. Built on OSCARS (ESnet, Internet2, USLHCnet) and AutoBAHN (GÉANT), using the IDC protocol.
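Conceptually, an IDC-style point-to-point reservation names two endpoints, a bandwidth, and a time window. The sketch below only models the shape of such a request; the field names and example URNs are illustrative and not the real OSCARS/IDC schema:

```python
# Hedged sketch of a point-to-point circuit reservation request.
# Field names and endpoint URNs are made up for illustration.
from datetime import datetime, timedelta

def make_circuit_request(src_urn, dst_urn, bandwidth_mbps, duration_hours):
    """Build a reservation dict covering a fixed example window."""
    start = datetime(2011, 10, 4, 12, 0)  # fixed example start time
    end = start + timedelta(hours=duration_hours)
    return {
        "source": src_urn,
        "destination": dst_urn,
        "bandwidth_mbps": bandwidth_mbps,
        "start": start.isoformat(),
        "end": end.isoformat(),
    }

req = make_circuit_request(
    "urn:ogf:network:example-site-a",  # hypothetical endpoints
    "urn:ogf:network:example-site-b",
    bandwidth_mbps=1000,
    duration_hours=2,
)
```

A real deployment would submit such a request to the domain's IDC, which coordinates with neighboring domains to stitch the end-to-end circuit.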

11 LHCONE Monitoring Service: A planned monitoring service. Suggestion: build on the efforts of DYNES and the DICE-Diagnostic service, which is being rolled out by ESnet, GÉANT, and Internet2. Remaining issues are being worked out; planned commencement of service is October 2011. Built on perfSONAR.
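A perfSONAR-based monitoring service is typically consumed by querying a measurement archive for the history of a path. The snippet below only shows how a client might form such a query; the host name and parameter names are illustrative, not a documented perfSONAR API:

```python
# Hedged sketch: building a throughput-history query against a
# hypothetical perfSONAR measurement archive endpoint.
from urllib.parse import urlencode

def throughput_query_url(archive_host, src, dst, hours):
    """Return a query URL for the last `hours` of measurements
    between src and dst (parameter names are assumptions)."""
    params = {"source": src, "destination": dst, "time-range": hours * 3600}
    return "http://%s/query?%s" % (archive_host, urlencode(sorted(params.items())))

url = throughput_query_url("ps-archive.example.net", "host-a", "host-b", 24)
```

In practice each LHCONE site would run perfSONAR measurement points, and dashboards would pull from archives like this to spot path degradation.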

12 LHCONE (NA) Multipoint Service (diagram)

13 LHCONE Pilot (Late Sept 2011): diagram from Mian Usman, DANTE, LHCONE technical proposal v2.0

14 LHCONE Pilot: Domains are interconnected through Layer 2 switches, with two VLANs (nominal IDs 2000 and 3000): VLAN 2000 is configured on the GÉANT/ACE transatlantic segment and VLAN 3000 on the US LHCNet transatlantic segment, allowing use of both TA segments and providing TA resiliency. There are two route servers per VLAN, and each connecting site peers with all four route servers. Keep in mind this is a “now” solution that does not scale well to more transatlantic paths; the Architecture group remains charged with a longer-term design.
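The peering fan-out of the pilot is simple arithmetic: two VLANs, two route servers per VLAN, and every connecting site peering with all of them. A back-of-envelope sketch:

```python
# Peering fan-out in the LHCONE pilot: 2 VLANs x 2 route servers
# per VLAN = 4 BGP sessions per connecting site.

VLANS = [2000, 3000]
ROUTE_SERVERS_PER_VLAN = 2

def bgp_sessions_per_site():
    # One session per route server, across both VLANs.
    return len(VLANS) * ROUTE_SERVERS_PER_VLAN

def total_sessions(num_sites):
    return num_sites * bgp_sessions_per_site()
```

This also makes the scaling concern concrete: each additional transatlantic path would add a VLAN and two more sessions at every site, which is why the slide calls this a “now” solution.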

15 LHCONE in GÉANT (diagram)

16 LHCONE in GÉANT (diagram)

17 Internet2 (NA) – New York Status: VLANs 2000 and 3000 for the multipoint service are configured; the transatlantic routes, Internet2, and CANARIE are all participating in the shared VLAN service. A new switch will be installed at MAN LAN in October, enabling a new connection by BNL. Peering with the University of Toronto through the CANARIE link to MAN LAN is complete. End sites with direct connections to MAN LAN: MIT, BNL, BU/Harvard.

18 LHCONE (NA) - Chicago: VLANs for the multipoint service were configured on 9/23 (and corrected shortly thereafter to prevent a routing loop); testing on the link can start any time. Status of the FNAL Cisco: resource constraints on the Chicago router have prevented this connection; port availability is the issue. End sites: see the diagram from this summer.

19 LHCONE (NA) - Chicago (diagram)

20 MAN LAN: New York exchange point built on a Ciena CoreDirector and a Cisco 6513. Current connections on the CoreDirector: 11 OC-192s and 9 1G. Current connections on the 6513: 16 10G Ethernet and 7 1G Ethernet.

21 MAN LAN Roadmap: Switch upgrade: a Brocade MLXe-16 was purchased with 24 10G ports, 24 1G ports, and 2 100G ports. Internet2 and ESnet will be connected at 100G, and the Brocade will allow landing transatlantic circuits of greater than 10G. An IDC for dynamic circuits will be installed, complying with the GLIF GOLE definition.

22 MAN LAN Services: MAN LAN is an open exchange point. 1 Gbps, 10 Gbps, and 100 Gbps interfaces are available on the Brocade switch; 40 Gbps could be available by 2012. Dedicated VLANs can be mapped through for Layer 2 connectivity beyond the Ethernet switch. With the Brocade there is the possibility of higher-layer services should there be a need, including enabling OpenFlow on the switch. Dynamic services via an IDC. perfSONAR-PS instrumentation.

23 WIX: WIX = Washington DC International Exchange point, a joint project being developed by MAX and Internet2 and to be transferred to MAX to manage once in operation. WIX is a state-of-the-art international peering exchange facility, located at the Level 3 POP in McLean, VA, designed to serve research and education networks and architected to meet their diverse needs. Initially the WIX facility will hold 4 racks, expandable to 12 racks as needed; bulk cables between the existing MAX and Internet2 suites will also be in place. WIX is implemented with a Ciena CoreDirector and a Brocade MLXe-16.

24 WIX Roadmap: Grow the connections to existing exchange points. Expand the facility with “above the net” capabilities located in the suite, allowing easy access both domestically and internationally. Grow the number of transatlantic links to ensure adequate connectivity as well as diversity.

25 WIX Services: Dedicated VLANs between participants for traffic exchange at Layer 2. WIX will be an open exchange point. Access to dynamic circuit networks such as Internet2 ION. With the Brocade there is the possibility of higher-layer services should there be a need, possibly including OpenFlow being enabled on the Brocade. 1 Gbps, 10 Gbps, and 100 Gbps interfaces are available on the Brocade switch; 40 Gbps could be available by 2012. perfSONAR instrumentation.

26 Agenda: Group Name/Future Meetings; LHCONE; DYNES; SC11 Planning; AOB

27 DYNES Projected Topology, October 2011 (diagram)

28 DYNES Hardware: Inter-domain Controller (IDC) server and software. The IDC creates virtual LANs (VLANs) dynamically between the FDT server, the local campus, and the wide area network. The IDC software is based on the OSCARS and DRAGON software, packaged together as the DCN Software Suite (DCNSS); DCNSS versions correlate with stable, tested versions of OSCARS, and the current version is v0.5.4. Initial DYNES deployments will include both DCNSS v0.6 and DCNSS v0.5.4 virtual machines (currently Xen-based; KVM is being considered for future releases). A Dell R410 1U server has been chosen, running CentOS 5.x.
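"Creates VLANs dynamically" means, in practice, handing out a free VLAN ID per circuit and reclaiming it on teardown. A minimal sketch of that bookkeeping (the pool range is made up; real deployments negotiate the usable range with the campus and regional network):

```python
# Illustrative VLAN pool, the bookkeeping behind dynamic Layer 2
# circuit setup. Range 3000-3099 is an assumption for the example.

class VlanPool:
    def __init__(self, first=3000, last=3099):
        self.free = set(range(first, last + 1))
        self.in_use = {}

    def allocate(self, circuit_id):
        vlan = min(self.free)  # deterministic: lowest free ID
        self.free.remove(vlan)
        self.in_use[circuit_id] = vlan
        return vlan

    def release(self, circuit_id):
        # Return the circuit's VLAN to the free pool on teardown.
        self.free.add(self.in_use.pop(circuit_id))
```

The IDC performs this allocation consistently across the FDT server port, the campus switch, and the wide-area segment so the circuit is a single tagged path end to end.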

29 DYNES Hardware: The Fast Data Transfer (FDT) server connects to the disk array via the SAS controller and runs the FDT software; it also hosts the DYNES Agent (DA) software. The standard FDT server will be a Dell R510 with a dual-port Intel X520-DA NIC, a PCIe Gen 2.0 x8 card, and 12 disks for storage. DYNES Ethernet switch options: Dell PC6248 (48 1GE ports, 4 10GE-capable ports: SFP+, CX4, or optical) or Dell PC8024F (24 10GE SFP+ ports, 4 “combo” ports supporting CX4 or optical).
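FDT itself is a Java tool driven from the command line. A typical transfer between an FDT server and a client looks roughly like the following; host names and paths are placeholders, and the exact flags should be checked against the FDT documentation for the deployed version:

```shell
# On the receiving host: start FDT in server mode (requires Java
# and fdt.jar; listens on FDT's default port).
java -jar fdt.jar

# On the sending host: push local files to the server's destination
# directory (-c selects the remote host, -d the destination path).
java -jar fdt.jar -c fdt-server.example.edu -d /storage/incoming /data/file1
```

In DYNES, the DYNES Agent drives transfers like this over the dynamically provisioned VLAN rather than the routed network.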

30 Our Choices: http://www.internet2.edu/ion/hardware.html. IDC: Dell R410 1U server, dual 2.4 GHz Xeon (64-bit), 16G RAM, 500G HD (http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R410-Spec-Sheet.pdf). FDT: Dell R510 2U server, dual 2.4 GHz Xeon (64-bit), 24G RAM, 300G main storage, 12TB through RAID (http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R510-Spec-Sheet.pdf). Switch: Dell 8024F or Dell 6048 for 10G vs. 1G sites; copper ports and SFP+, with optics on a site-by-site basis (http://www.dell.com/downloads/global/products/pwcnt/en/PC_6200Series_proof1.pdf, http://www.dell.com/downloads/global/products/pwcnt/en/switch-powerconnect-8024f-spec.pdf).

31 DYNES Data Flow Overview (diagram)

32 Phase 3 Group A Members: AMPATH. Mid-Atlantic Crossroads (MAX): The Johns Hopkins University (JHU). Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)*: Rutgers (via NJEdge), University of Delaware. Southern Crossroads (SOX): Vanderbilt University. CENIC*: California Institute of Technology (Caltech). MREN*: University of Michigan (via MERIT and CIC OmniPoP). Note: USLHCNet will also be connected to the DYNES instrument via a peering relationship with DYNES. (* temporary configuration of static VLANs until a future group)

33 Phase 3 Group B Members: Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI): University of Pennsylvania. Metropolitan Research and Education Network (MREN): Indiana University (via I-Light and CIC OmniPoP), University of Wisconsin-Madison (via BOREAS and CIC OmniPoP), University of Illinois at Urbana-Champaign (via CIC OmniPoP), The University of Chicago (via CIC OmniPoP). Lonestar Education And Research Network (LEARN): Southern Methodist University (SMU), Texas Tech University, University of Houston, Rice University, The University of Texas at Dallas, The University of Texas at Arlington. Florida International University (connected through FLR).

34 Phase 3 Group C Members: Front Range GigaPop (FRGP): University of Colorado Boulder. Northern Crossroads (NoX): Boston University, Harvard University, Tufts University. CENIC**: University of California, San Diego; University of California, Santa Cruz. CIC OmniPoP***: The University of Iowa (via BOREAS). Great Plains Network (GPN)***: The University of Oklahoma (via OneNet), The University of Nebraska-Lincoln. (** deploying own dynamic infrastructure; *** static configuration based)

35 Agenda: Group Name/Future Meetings; LHCONE; DYNES; SC11 Planning; AOB

36 It’s the Most Wonderful Time of the Year: SC11 is about one month out. What’s brewing? LHCONE demo: Internet2, GÉANT, and end sites in the US and Europe (UMich and CNAF initially targeted; any US end site is open to get connected); the idea is to show “real” applications and use of the new network. DYNES demo: booths (Internet2, Caltech, Vanderbilt), external deployments (Group A and some of Group B), and sites external to DYNES (CERN, SPRACE, HEPGrid).

37 It’s the Most Wonderful Time of the Year: What’s brewing? 100G capabilities: the ESnet/Internet2 coast-to-coast 100G network, with lots of other demos using it. SRS (SCinet Research Sandbox): demonstration of high-speed capabilities, lots of entries, and use of OpenFlow devices. Speakers at the Internet2 booth: CIOs from campus and federal installations, scientists, and networking experts.

38 DYNES Demo - Topology (diagram)

39 DYNES Demo - Participants (diagram)

40 Agenda: Group Name/Future Meetings; LHCONE; DYNES; SC11 Planning; AOB

41 AOB: UF Lustre work? MWT2 upgrades?

42 High Energy & Nuclear Physics (HENP) SIG, October 4th 2011 – Fall Member Meeting. Jason Zurawski, Internet2 Research Liaison. For more information, visit http://www.internet2.edu/science

