The DYNES Architecture & LHC Data Movement
Shawn McKee / University of Michigan, for the DYNES collaboration
Contributions from Artur Barczyk, Eric Boyd, Tom Lehman, Harvey Newman and Jason Zurawski
OSG All-hands Meeting, March 8th 2011, Harvard Medical School

DYNES Summary
NSF MRI-R2: DYnamic NEtwork System (DYNES, NSF # )
What is it?
◦ A nationwide cyber-instrument spanning up to ~40 US universities and ~14 Internet2 connectors
 Extends Internet2's ION service into regional networks and campuses, based on the OSCARS implementation of the IDC protocol (developed in partnership with ESnet)
Who is it?
◦ A collaborative team including Internet2, Caltech, University of Michigan, and Vanderbilt University
◦ A community of regional networks and campuses
◦ LHC, the astrophysics community, OSG, WLCG, and other virtual organizations
What are the goals?
◦ Support large, long-distance scientific data flows in the LHC, in other leading programs in data-intensive science (such as LIGO, the Virtual Observatory, and other large-scale sky surveys), and in the broader scientific community
◦ Build a distributed virtual instrument at sites of interest to the LHC but available to the R&E community generally

DYNES and LHC: The Problem to be Addressed
The LHC experiments' "Tiered" computing and storage system already encompasses more than 140 sites
◦ Each hosting from tens of terabytes (Tier3) to hundreds of terabytes (Tier2) to petabytes (Tier1)
Sustained throughputs of 1-10 Gbps (and some > 10 Gbps) are in production use today by some Tier2s as well as Tier1s, particularly in the US
LHC data volumes and transfer rates are expected to expand by an order of magnitude over the next several years
◦ As higher-capacity storage and regional, national and transoceanic network links of 40 and 100 Gbps become available and affordable
◦ US LHCNet, for example, is expected to reach Gbps by 2014 between its points of presence in NYC, Chicago, CERN and Amsterdam
Network usage on this scale can only be accommodated with planning, an appropriate architecture, and nationwide community involvement
◦ By the LHC groups at universities and labs
◦ By campuses, regional and state networks connecting to Internet2
◦ By ESnet, US LHCNet, NSF/IRNC, and major networks in the US & Europe
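A back-of-the-envelope calculation helps make these rates concrete; the dataset size and link utilization below are illustrative assumptions, not figures from the talk:

# Rough transfer-time estimate (illustrative only; the dataset size and
# average link utilization are assumptions, not numbers from the slides).
def transfer_time_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours needed to move dataset_tb terabytes over a link_gbps path
    at the given average utilization (efficiency)."""
    bits = dataset_tb * 1e12 * 8          # terabytes -> bits
    rate = link_gbps * 1e9 * efficiency   # usable bits per second
    return bits / rate / 3600

print(transfer_time_hours(100, 10))   # ~27.8 hours for 100 TB at 10 Gbps, 80% utilization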

DYNES: Addressing the Problem with Dynamic Network Circuits (1/2)
DYNES will deliver the needed capabilities to the LHC, and to the broader scientific community at all the campuses served, by coupling to their analysis systems:
◦ Dynamic network circuit provisioning: IDC controller
◦ Data transport: low-cost IDC-capable Ethernet switch
◦ FDT server for high throughput, plus a low-cost storage array
◦ End-to-end monitoring services

DYNES: Addressing the Problem with Dynamic Network Circuits (2/2)
DYNES does not fund more bandwidth, but provides access to Internet2's dynamic circuit network ("ION"), plus the standard mechanisms, tools and equipment needed:
◦ To build circuits with bandwidth guarantees across multiple network domains, across the U.S. and to Europe
 In a manageable way, with fair-sharing
 Will require scheduling services at some stage
◦ To build a community with high-throughput capability using standardized, common methods

DYNES: Why Dynamic Circuits? (1/2)
To meet the science requirements, Internet2 and ESnet, along with several US regional networks, US LHCNet, and GÉANT in Europe, have developed a strategy (starting with a meeting at CERN in March 2004) based on a 'hybrid' network architecture utilizing both routed and circuit-based paths.
The traditional IP network backbone is paralleled by a circuit-oriented core network reserved for large-scale science traffic.

DYNES: Why Dynamic Circuits? (2/2)
Existing examples are Internet2's Dynamic Circuit Network (its "ION Service") and ESnet's Science Data Network (SDN), each of which provides:
◦ Increased effective bandwidth capacity, and reliability of network access, by mutually isolating the large, long-lasting flows (on ION and/or the SDN) from the traditional IP mix of many small flows
◦ Guaranteed bandwidth as a service, by building a system to automatically schedule and implement virtual circuits traversing the network backbone, and
◦ Improved ability of scientists to access network measurement data for all the network segments end-to-end via the perfSONAR monitoring infrastructure.

DYNES: Why Not Static Circuits or Traditional, General-Purpose Networks?
Separation (physical or logical) of the dynamic circuit-oriented network from the IP backbone is driven by the need to meet different functional, security, and architectural needs:
◦ Static "nailed-up" circuits will not scale
◦ General-purpose network firewalls are incompatible with enabling large-scale science network dataflows
◦ Implementing many high-capacity ports on traditional routers would be very expensive
 The price balance gets worse in the next generation: 40G and 100G general-purpose router ports are several hundred k$ each

DYNES Project Schedule
All applications have been reviewed
◦ Clarifications are needed for some; this could require some changes to the proposed configurations
◦ Teleconferences with individual sites will be arranged
A draft DYNES Program Plan document is available with additional details on the project plan and schedule

DYNES Infrastructure Overview
DYNES Topology
◦ Based on the applications received
◦ Plus existing peering wide-area Dynamic Circuit Connections (DCN)

The NSF proposal defined four project phases
Phase 1: Site Selection and Planning (4 months) (Sep-Dec 2010)
◦ Participant Selection Announcement: February 1, 2011
◦ 33 total applications
 8 regional networks
 25 site networks
Phase 2: Initial Development and Deployment (6 months) (Jan 1-Jun 30, 2011)
◦ Development of DYNES at a limited number of sites (February 28, 2011)
 Caltech, Vanderbilt, University of Michigan, Internet2, USLHCnet
 Regional networks as needed
◦ Initial site systems testing and evaluation complete: end of March, 2011
◦ Phase 3-Group A Deployment (10 Sites) (March-July, 2011)
◦ Receive DYNES equipment: April, 2011
◦ Ship configured Phase 3-Group A equipment to sites: May, 2011
◦ Deploy and test at Phase 3-Group A sites: May-June, 2011

DYNES Phase 3 & 4 Project Schedule
Phase 3: Scale Up to Full-Scale System Development (14 months) (July 2011-August 2012)
◦ Phase 3-Group A Deployment (10 Sites): moved to Phase 2
 Moving this to Phase 2 represents a more ambitious schedule than the original proposal plan. This will allow for some buffer in case unexpected issues are uncovered as part of the initial deployment and testing.
◦ Phase 3-Group B Deployment (10 Sites): July-September, 2011
◦ Phase 3-Group C Deployment (15 Sites): October-November, 2011
◦ Full-scale system development, testing, and evaluation (November 2011-August 2012)
Phase 4: Full-Scale Integration At-Scale; Transition to Routine O&M (12 months) (September 2012-August 2013)
 DYNES will be operated, tested, integrated and optimized at scale, transitioning to routine operations and maintenance as soon as this phase is completed

DYNES Standard Equipment
Inter-domain Controller (IDC) Server and Software
◦ The IDC creates virtual LANs (VLANs) dynamically between the FDT server, the local campus, and the wide area network
◦ The IDC software is based on the OSCARS and DRAGON software, which is packaged together as the DCN Software Suite (DCNSS)
◦ The DCNSS version correlates to stable, tested versions of OSCARS. The current version of DCNSS is v
◦ It is expected that DCNSSv0.6 will be utilized for Phase 3-Group B deployments and beyond. DCNSSv0.6 will be fully backward compatible with v This will allow us to have a mixed environment, as may result depending on actual deployment schedules.
◦ The IDC server will be a Dell R610 1U machine
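To make the IDC's role concrete, here is a minimal sketch of the information a dynamic circuit reservation carries (endpoints, VLAN, guaranteed bandwidth, and a time window). The request_circuit helper, the IDC URL, and the endpoint names are hypothetical placeholders, not the actual OSCARS/DCNSS API:

# Conceptual circuit-reservation request, illustrating the information an
# IDC needs; the helper and field names are hypothetical, not the OSCARS API.
from dataclasses import dataclass
import time

@dataclass
class CircuitRequest:
    src_endpoint: str      # e.g. a site's FDT-facing switch port
    dst_endpoint: str
    vlan: int              # VLAN tag to stitch end-to-end
    bandwidth_mbps: int    # guaranteed bandwidth for the reservation
    start_time: float      # epoch seconds
    duration_s: int

def request_circuit(idc_url: str, req: CircuitRequest) -> str:
    """Submit the reservation to an IDC and return a reservation ID.
    Placeholder: a real client would authenticate and speak the IDC protocol."""
    print(f"POST {idc_url}: {req}")
    return "reservation-0001"   # hypothetical ID

req = CircuitRequest(
    src_endpoint="siteA.fdt1", dst_endpoint="siteZ.fdt1",
    vlan=3100, bandwidth_mbps=2000,
    start_time=time.time(), duration_s=4 * 3600)
reservation_id = request_circuit("https://idc.example.edu/oscars", req)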

DYNES Standard Equipment
Fast Data Transfer (FDT) server
◦ The FDT server connects to the disk array via the SAS controller and runs the FDT software
◦ The FDT server also hosts the DYNES Agent (DA) software
◦ The standard FDT server will be a Dell 510 server with a dual-port Intel X520 DA NIC. This server will have a PCIe Gen2.0 x8 card along with 12 disks for storage.
DYNES Ethernet switch options:
◦ Dell PC6248 (48 1GE ports, 4 10GE-capable ports (SFP+, CX4 or optical))
◦ Dell PC8024F (24 10GE SFP+ ports, 4 "combo" ports supporting CX4 or optical)
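FDT itself is a Java tool that runs as a server on the receiving host and as a client on the sending host. A minimal sketch of driving a transfer from a script follows; the host name, paths, and the exact flag set are illustrative and should be checked against the FDT documentation for the deployed version:

# Minimal sketch of launching an FDT transfer from a script.
# Host names, file paths, and flags are illustrative; consult the FDT docs
# for the options supported by your FDT version.
import subprocess

def fdt_send(remote_host: str, dest_dir: str, local_files: list[str]) -> int:
    """Push local_files to remote_host (which must already be running
    'java -jar fdt.jar' in server mode) into dest_dir."""
    cmd = ["java", "-jar", "fdt.jar",
           "-c", remote_host,     # connect to the remote FDT server
           "-d", dest_dir,        # destination directory on the remote side
           *local_files]
    return subprocess.call(cmd)

# Example: send one dataset file over a provisioned circuit endpoint.
fdt_send("siteZ.fdt1", "/storage/incoming", ["/storage/outgoing/dataset.root"])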

DYNES Data Flow Overview
Each DYNES site will be assigned DYNES Project private address space (10.20/16) and an EndPoint Name (e.g. siteZ.fdt1)
Transfers are initiated via a DYNES transfer URL sent to the transfer agent
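As an illustration of the addressing scheme, the sketch below carves per-site /24 subnets out of the 10.20/16 project space and pairs them with FDT EndPoint Names; the /24 sizing and the site list are assumptions for the example, not actual project allocations:

# Illustration of carving per-site subnets out of the DYNES 10.20/16
# project space and pairing them with EndPoint Names. The /24 sizing and
# the site list are assumptions, not real assignments.
import ipaddress

project_space = ipaddress.ip_network("10.20.0.0/16")
sites = ["siteA", "siteB", "siteZ"]

# Hand each site one /24 and an FDT endpoint name like "siteZ.fdt1".
assignments = {
    f"{site}.fdt1": subnet
    for site, subnet in zip(sites, project_space.subnets(new_prefix=24))
}
for endpoint, subnet in assignments.items():
    print(endpoint, subnet)   # e.g. siteA.fdt1 10.20.0.0/24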

DYNES Data Flow Information
The DYNES Agent (DA) will provide the functionality to request the circuit instantiation, initiate and manage the data transfer, and terminate the dynamically provisioned resources. Specifically, the DA will do the following (see the sketch after this list):
◦ Accept user requests as a DYNES Transfer URL
◦ Locate the remote-side DYNES
◦ Submit a dynamic circuit request
◦ Confirm the circuit has been established
◦ Start and manage the data transfer, logging progress
◦ Initiate release of the dynamic circuit upon completion
◦ Finalize the transfer log record
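A self-contained sketch of these steps follows; every function is a hypothetical placeholder standing in for the real DA, IDC, and FDT interactions, not the actual DYNES Agent code:

# Sketch of the DYNES Agent workflow listed above.
# All functions are hypothetical placeholders for the real DA/IDC/FDT calls.
from urllib.parse import urlparse

def parse_transfer_url(url):                      # 1. accept the user's request
    p = urlparse(url)
    return p.netloc, p.path                       # (endpoint name, remote path)

def locate_remote_agent(endpoint):                # 2. locate the remote-side DYNES
    return endpoint                               # simplified: endpoint name used as host

def submit_circuit(src, dst, bandwidth_mbps):     # 3. submit a dynamic circuit
    print(f"circuit requested: {src} -> {dst} at {bandwidth_mbps} Mb/s")
    return "reservation-0001"

def circuit_is_up(reservation):                   # 4. confirm the circuit is established
    return True                                   # a real DA would poll the IDC

def run_transfer(remote_host, remote_path, files):  # 5. start and manage the transfer
    print(f"transferring {files} to {remote_host}:{remote_path}")
    return 0

def release_circuit(reservation):                 # 6. release the circuit on completion
    print(f"released {reservation}")

def dynes_transfer(url, files, bandwidth_mbps=2000):
    endpoint, path = parse_transfer_url(url)
    remote = locate_remote_agent(endpoint)
    reservation = submit_circuit("siteA.fdt1", endpoint, bandwidth_mbps)
    assert circuit_is_up(reservation)
    try:
        rc = run_transfer(remote, path, files)
    finally:
        release_circuit(reservation)
    print(f"transfer record finalized, exit code {rc}")   # 7. finalize the log record

dynes_transfer("dynes://siteZ.fdt1/store/incoming", ["/data/sample.root"])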

DYNES and LHC "Integration"
The DYNES collaboration is creating a distributed virtual instrument to create virtual circuits between DYNES sites
Collaborating with other efforts: OSCARS/DCN, ESCPS, StorNet, GLIF, etc.
Future work to enable support for operating in the context of the LHC collaborations' existing data management infrastructures
Future integration with the LHCONE effort
Plans in both USATLAS and USCMS to integrate DYNES capabilities into production

DYNES References
◦ DYNES
◦ OSCARS
◦ DRAGON
◦ DCN Software Suite (DCNSS)
◦ FDT
◦ REDDnet

Questions?