NSF Collaborative Research: CC-NIE Integration: Developing Applications with Networking Capabilities via End-to-End Software Defined Networking (DANCES)
Kathy Benninger, Pittsburgh Supercomputing Center
Workshop on the Development of a Next-Generation Cyberinfrastructure, 1-Oct-2014

What is DANCES?
The DANCES project, an NSF-funded CC-NIE collaborative award, is developing mechanisms for managing network bandwidth by adding end-to-end software-defined networking (SDN) capability and interoperability to selected CI applications and to application end-point network infrastructure.

DANCES Participants and Partner Sites
– Pittsburgh Supercomputing Center (PSC)
– National Institute for Computational Sciences (NICS)
– Pennsylvania State University (Penn State)
– National Center for Supercomputing Applications (NCSA)
– Texas Advanced Computing Center (TACC)
– Georgia Institute of Technology (GaTech)
– eXtreme Science and Engineering Discovery Environment (XSEDE)
– Internet2

DANCES Partner Sites on AL2S XSEDEnet

DANCES Application Integration Targets
Add network bandwidth scheduling capability using SDN to supercomputing infrastructure applications:
Resource management and scheduling
– Torque/MOAB scheduling software
– Enable bandwidth reservation for file transfer
Wide-area distributed file systems
– XSEDE-wide file system (XWFS)
– SLASH2 wide-area distributed file system developed by PSC

File System Application Integration Research
XWFS
– Based on IBM's GPFS, this WAN file system is deployed across several XSEDE Service Providers. The research activity is integrating XWFS data flows with SDN/OpenFlow across XSEDEnet/Internet2.
SLASH2
– PSC's SLASH2 WAN file system is deployed at PSC and partner sites. The research activity is integrating SLASH2 data flows with SDN/OpenFlow and resource scheduling across XSEDEnet/Internet2.

Application Integration Research
GridFTP
– Integration of SDN/OpenFlow capability with the resource management and scheduling subsystems of XSEDE's advanced computational cyberinfrastructure to support the GridFTP data transfer application

DANCES System Diagram
[Diagram slide; figure not included in the transcript]

SDN/OpenFlow Infrastructure Integration
Application interface with the SDN/OF environment:
– Torque prologue and epilogue scripts set up and tear down the network reservation for a scheduled file transfer via file system (XWFS, SLASH2) or GridFTP
– Map the SLASH2 and XWFS file system interfaces to network bandwidth reservation
– Interface to Internet2's Open Exchange Software Suite (OESS) for AL2S VLAN provisioning, establishing an end-to-end path between the file transfer source and destination sites (see the sketch after this list)
SDN/OF-capable switches:
– Existing infrastructure at some sites (e.g., CC-NIE and CC*IIE recipients)
– Evaluating hardware for deployment
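To make the prologue/OESS interaction concrete, here is a minimal Python sketch of a prologue-style hook that asks OESS to provision an AL2S VLAN circuit before a scheduled transfer begins. The URL, action name, parameters, workgroup, node names, and ports are all illustrative assumptions, not the documented OESS interface.

```python
#!/usr/bin/env python3
# Hypothetical Torque prologue hook: reserve an AL2S VLAN circuit through
# OESS before a scheduled DANCES transfer job starts. Endpoint, action,
# and parameter names below are assumptions for illustration only.

import sys
import requests  # assumes the python-requests package is installed

OESS_URL = "https://oess.example.net/oess/services/provisioning.cgi"  # placeholder

def provision_circuit(src_node, src_intf, dst_node, dst_intf, vlan, bw_mbps):
    """Ask OESS to provision a point-to-point VLAN circuit across AL2S."""
    params = {
        "action": "provision_circuit",   # assumed action name
        "workgroup_id": 42,              # placeholder workgroup
        "node": [src_node, dst_node],    # circuit endpoints
        "interface": [src_intf, dst_intf],
        "tag": [vlan, vlan],             # VLAN ID at each endpoint
        "bandwidth": bw_mbps,            # reserved rate for the transfer
    }
    resp = requests.get(OESS_URL, params=params, timeout=30)
    resp.raise_for_status()
    result = resp.json()
    if result.get("error"):
        raise RuntimeError("OESS provisioning failed: %s" % result["error"])
    return result["results"][0]["circuit_id"]

if __name__ == "__main__":
    # In a real prologue, job attributes arrive via argv and the
    # endpoint/VLAN/bandwidth mapping comes from the DANCES scheduler.
    circuit = provision_circuit("psc-sw1", "et-7/0/0", "nics-sw1", "et-3/0/0",
                                vlan=3100, bw_mbps=5000)
    print("provisioned AL2S circuit %s" % circuit)
    sys.exit(0)  # a nonzero exit would abort the job in Torque
```

A matching epilogue would issue the corresponding circuit-removal request when the transfer job completes.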

Workflow Example: SDN-enabled SLASH2
Note: SLASH2 supports file replication and multiple residency; the six steps are also sketched in code below.
1. User requests file residency at a particular site
2. SLASH2 checks and returns the file's residency status
3. User authorization for bandwidth scheduling is checked
4. SLASH2 initiates path setup, with end-site OpenFlow configuration and a transaction with Internet2's FlowSpace Firewall and OESS for wide-area authorization and path provisioning
5. During the transfer, SLASH2 polls for remote residency completion
6. Upon completion of the transfer, the provisioned path is removed
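As a control-flow sketch of these six steps: every function below is a hypothetical stand-in for a SLASH2 or DANCES component (the stubs simulate behavior so the flow actually runs); the real interfaces are what the project is developing.

```python
import itertools
import time

# Hypothetical stand-ins for SLASH2/DANCES interfaces. The stubs fake
# enough behavior for the control flow below to execute end to end.
_poll = itertools.count()
def slash2_residency_status(fileid, site):
    return "RESIDENT" if next(_poll) > 2 else "PENDING"
def authorized_for_bandwidth(user, bw_mbps): return True
def setup_end_to_end_path(fileid, site, bw_mbps): return {"circuit": 3100}
def slash2_request_replication(fileid, site): pass
def teardown_path(path): pass

def replicate_with_reserved_bandwidth(user, fileid, dest_site, bw_mbps):
    """Steps 1-6 of the workflow above as straight-line control flow."""
    # Steps 1-2: request residency and check the current status.
    if slash2_residency_status(fileid, dest_site) == "RESIDENT":
        return  # nothing to transfer
    # Step 3: confirm the user is allocated bandwidth-scheduling capability.
    if not authorized_for_bandwidth(user, bw_mbps):
        raise PermissionError("no bandwidth-scheduling allocation")
    # Step 4: end-site OpenFlow configuration plus wide-area authorization
    # and provisioning via Internet2's FlowSpace Firewall and OESS.
    path = setup_end_to_end_path(fileid, dest_site, bw_mbps)
    try:
        slash2_request_replication(fileid, dest_site)
        # Step 5: poll for remote residency completion during the transfer.
        while slash2_residency_status(fileid, dest_site) != "RESIDENT":
            time.sleep(1)  # a real poll interval would be much longer
    finally:
        # Step 6: remove the provisioned path, even on failure.
        teardown_path(path)

replicate_with_reserved_bandwidth("alice", "fid:0x42", "NICS", bw_mbps=5000)
```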

Workflow Example: Torque/MOAB with GridFTP
1. User creates a DANCES-GridFTP job and submits it
2. Torque/MOAB schedules the job when resources are available
3. The DANCES-GridFTP job is initiated
4. Torque uses a prologue script to send a northbound API instruction to the SDN controller to create the end-to-end path (see the sketch below)
5. Path setup includes local OpenFlow configuration and a transaction with Internet2's FlowSpace Firewall and OESS for wide-area authorization and path provisioning
6. A Torque/MOAB epilogue script tears down the provisioning when the job finishes
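Step 4's northbound instruction can be as simple as one HTTP call that installs a flow steering the job's GridFTP traffic onto the provisioned path. The sketch below assumes a Ryu controller running its ofctl_rest application (one possible controller choice, not one mandated by DANCES); the datapath ID, addresses, and output port are placeholders.

```python
import requests  # assumes the python-requests package is installed

CONTROLLER = "http://sdn-controller.example.net:8080"  # placeholder URL

# Flow entry steering the job's GridFTP data connection out the switch
# port that faces the provisioned AL2S VLAN (all values are placeholders).
flow = {
    "dpid": 1,                        # datapath ID of the site border switch
    "priority": 100,
    "match": {
        "eth_type": 2048,             # IPv4
        "ipv4_src": "192.0.2.10",     # GridFTP source data node
        "ipv4_dst": "198.51.100.20",  # destination endpoint
    },
    "actions": [{"type": "OUTPUT", "port": 7}],  # port toward the AL2S VLAN
}

# /stats/flowentry/add is Ryu ofctl_rest's route for adding a flow entry;
# the epilogue would POST the same body to /stats/flowentry/delete.
resp = requests.post(CONTROLLER + "/stats/flowentry/add", json=flow)
resp.raise_for_status()
```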

User Interaction
The user community consists primarily of domain researchers and scientists, so DANCES emphasizes transparent operation of the bandwidth scheduling mechanism.
Administratively, a user requests bandwidth reservation capability:
– As a computational resource from the XRAC (typically one year)
– To support a limited-time large data set transfer need (< one year)
Operationally, a user's bandwidth reservation request may (see the sketch below):
– Succeed: bandwidth is scheduled and the transfer proceeds
– Be deferred by the scheduler, with the user's permission, until bandwidth is available
– Fail: the request is declined, the user is notified, and the transfer proceeds as best-effort along with the unscheduled traffic
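A toy sketch of how those three outcomes might be modeled in the reservation logic; the scheduler object and its methods are illustrative assumptions, not a DANCES interface.

```python
from enum import Enum

class ReservationResult(Enum):
    SCHEDULED = "bandwidth scheduled; transfer proceeds on the reserved path"
    DEFERRED = "queued until bandwidth is available (user permitted deferral)"
    BEST_EFFORT = "declined; transfer proceeds with unscheduled traffic"

def handle_request(scheduler, request):
    """Map a bandwidth reservation request onto one of the three outcomes."""
    if scheduler.can_reserve(request["bw_mbps"], request["window"]):
        scheduler.reserve(request)
        return ReservationResult.SCHEDULED
    if request.get("allow_deferral"):
        scheduler.defer(request)          # retried when bandwidth frees up
        return ReservationResult.DEFERRED
    return ReservationResult.BEST_EFFORT  # user is notified, job still runs
```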

Cyberinfrastructure Issues - Policy
– Criteria for allocating bandwidth scheduling capability to users/projects
– Agreement on the dedicated bandwidth that each site commits for scheduled transfers
– Monitoring and accounting of bandwidth usage

Cyberinfrastructure Issues - Technical
– Authentication and authorization mechanism for users/projects to allow bandwidth reservation requests
  – Site/XSEDE context
  – Internet2 AL2S context
– Real-time cross-site tracking and management of allocated bandwidth resources
– Extend Torque/MOAB, XWFS, and SLASH2 to support SDN commands
– Vendor support for OpenFlow 1.3 flow metering (see the sketch below)
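For reference, OpenFlow 1.3 flow metering is the switch-side mechanism that would hold a transfer to its reserved rate. A minimal sketch using the Ryu controller framework (one possible controller; the rate, addresses, and port are placeholders): install a meter that drops traffic above the reservation, then bind the transfer's flow to it.

```python
# Minimal Ryu app: on switch connect, add a drop-band meter capping
# traffic at the reserved rate, and a flow entry that sends the scheduled
# transfer's packets through that meter before forwarding.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

class MeterSketch(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Meter 1: drop anything above 5 Gbps (rate in kbps with OFPMF_KBPS).
        band = parser.OFPMeterBandDrop(rate=5000000, burst_size=0)
        dp.send_msg(parser.OFPMeterMod(dp, command=ofp.OFPMC_ADD,
                                       flags=ofp.OFPMF_KBPS,
                                       meter_id=1, bands=[band]))

        # Flow for the scheduled transfer (placeholder endpoints), passed
        # through meter 1 and then out the port toward the AL2S VLAN.
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_src="192.0.2.10",
                                ipv4_dst="198.51.100.20")
        inst = [parser.OFPInstructionMeter(meter_id=1),
                parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             [parser.OFPActionOutput(7)])]
        dp.send_msg(parser.OFPFlowMod(dp, priority=100, match=match,
                                      instructions=inst))
```

Whether switch vendors implement this metering correctly, and at what performance, is exactly the open issue flagged above.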

Research Questions
– How do multiple SDN/OF controllers overlay onto the CI?
– Does OpenFlow 1.3 flow metering meet the performance needs?
– Are there significant SDN/OF operational differences between wide-area and machine-room environments?
– How well do multi-vendor OpenFlow 1.3 implementations interoperate?
– How can network bandwidth utilization be optimized through bandwidth scheduling?
– What verification by the project team is sufficient to pave the way for production deployment at XSEDE and campus sites?