Simple Infrastructure to Exploit 100G Wide Area Networks for Data-Intensive Science
Shawn McKee / University of Michigan
Supercomputing 2015, Austin, Texas

Slide 2: What Are We Doing Here?
Michigan is participating in two network demonstrations at SC15, both involving 100G wide-area networks and SDN (Software Defined Networking). [See rack left of screen]
Our two Network Research Exhibition (NRE) demonstrations are described on the SC15 NRE website (scinet/nre-demos-2015):
– LHCONE Point2point Service with Data Transfer Nodes – Partners: Caltech #1248 and Vanderbilt #271
– SDN Optimized High-Performance Data Transfer Systems for Exascale Science – Partners: California Institute of Technology / CACR #1248, University of Michigan #2103, Stanford University #2009, OCC #749, Vanderbilt #271, Dell #1009, and Echostreams #582

Slide 3: The Michigan Net-Demo Focus
The two main demonstrations focus on our LHC/ATLAS-related needs for high-performance networking; ATLAS is one of the two large general-purpose experiments at the Large Hadron Collider (LHC).
The demos highlight the effective use of 100G wide-area paths coordinated via SDN. Our collaboration has selected OpenFlow 1.3 and the OpenDaylight controller (a hedged flow-programming sketch follows below).
At the Michigan booth we are especially interested in what the “minimal” infrastructure required to exploit 100G WAN paths is…
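As an illustration of what programming a path through an OpenFlow 1.3 switch via OpenDaylight can look like, here is a minimal sketch that pushes one forwarding rule through the controller's RESTCONF config datastore. The controller address, credentials, switch node ID, match fields and output port are all hypothetical placeholders, not the configuration used in the demo.

```python
# Hedged sketch: install one forwarding rule on an OpenFlow 1.3 switch through
# OpenDaylight's RESTCONF config datastore. Controller URL, credentials, node
# ID, match fields and output port are placeholders for illustration only.
import json
import requests

CONTROLLER = "http://opendaylight.example.org:8181"   # hypothetical controller
NODE_ID = "openflow:1"                                 # hypothetical switch ID
TABLE_ID = 0
FLOW_ID = "wan-demo-flow-1"

flow = {
    "flow-node-inventory:flow": [{
        "id": FLOW_ID,
        "table_id": TABLE_ID,
        "priority": 200,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
            "ipv4-destination": "192.0.2.10/32"                   # example DTN address
        },
        "instructions": {
            "instruction": [{
                "order": 0,
                "apply-actions": {
                    "action": [{"order": 0,
                                "output-action": {"output-node-connector": "2"}}]
                }
            }]
        }
    }]
}

url = (f"{CONTROLLER}/restconf/config/opendaylight-inventory:nodes/"
       f"node/{NODE_ID}/flow-node-inventory:table/{TABLE_ID}/flow/{FLOW_ID}")

# Default OpenDaylight credentials are assumed here purely for illustration.
resp = requests.put(url, data=json.dumps(flow),
                    headers={"Content-Type": "application/json"},
                    auth=("admin", "admin"))
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```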

Slide 4: Why Minimal?
For our work in high-energy physics, a large number of sites and institutions participate, and they vary widely in size and in the technical expertise available at each.
– Big sites with many experts can deploy and configure enough storage systems to take maximum advantage of their networks. Physicists working at these sites get the fastest possible access to new data, which may hold the key to the next big discovery.
– However, many smaller sites either lack the local technical expertise or can only afford a few storage systems, and so cannot fully exploit their 10G, 40G or 100G institutional network. Physicists at these sites cannot participate in as timely a way.
The interesting question for me is: what is the “minimal” infrastructure we can develop that is capable of effectively utilizing high-speed networks to move data? The idea is to be able to recommend infrastructure that even small sites could deploy to expedite getting data into or out of their site (a hedged host-tuning sketch follows below).
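To make the "minimal infrastructure" question concrete, the sketch below checks a handful of Linux kernel settings that typically limit how well a small site's data transfer node can drive a fast WAN path. The target values are illustrative only, in the spirit of common DTN tuning guidance, and are not a recommendation from this talk.

```python
# Hedged sketch: report a few Linux kernel network settings that matter for a
# "minimal" data transfer node on a fast WAN path. Target values below are
# illustrative only, not the settings used in the demo.
from pathlib import Path

# sysctl key -> illustrative target (string form as it appears in /proc/sys)
TARGETS = {
    "net/core/rmem_max": "536870912",            # max receive socket buffer (bytes)
    "net/core/wmem_max": "536870912",            # max send socket buffer (bytes)
    "net/ipv4/tcp_rmem": "4096 87380 268435456", # min/default/max TCP receive buffer
    "net/ipv4/tcp_wmem": "4096 65536 268435456", # min/default/max TCP send buffer
    "net/ipv4/tcp_congestion_control": "htcp",   # one common choice for long fat networks
}

def current(key: str) -> str:
    """Read the live value of a sysctl from /proc/sys (Linux only)."""
    raw = (Path("/proc/sys") / key).read_text()
    return " ".join(raw.split())   # normalize tabs/newlines to single spaces

for key, target in TARGETS.items():
    try:
        value = current(key)
    except OSError as err:
        print(f"{key}: unreadable ({err})")
        continue
    status = "ok" if value == target else f"differs (illustrative target: {target})"
    print(f"{key}: {value} -> {status}")
```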

Slide 5: Big Picture of the SDN WAN Demo

Slide 6: Real-time Visualization of Demo
A real-time OpenFlow fabric visualization for our demo is available at http:// (Caltech).

Slide 7: Demo Status
Our demos are still “under construction” as we deal with the usual challenges of SC:
– Making various hardware and software interoperate
– Having minimal time with the equipment on hand to develop and test
– Dealing with hardware, firmware and software components and their configuration, tuning and optimization
We just got the final hardware pieces in place today to construct the end-to-end path to Michigan. We are still facing some challenges getting OpenFlow 1.3 set up on all the network devices along the path (a hedged controller-inventory check is sketched below).
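One quick sanity check while bringing up OpenFlow 1.3 along the path is to ask the controller which switches it currently sees. The sketch below queries OpenDaylight's operational inventory over RESTCONF; the controller URL and credentials are placeholders, and the exact fields returned depend on the controller release.

```python
# Hedged sketch: list the switches an OpenDaylight controller currently sees in
# its operational inventory, as a quick check of which devices along the path
# have an active OpenFlow session. Controller URL and credentials are
# placeholders; field names follow the flow-node-inventory model and may vary.
import requests

CONTROLLER = "http://opendaylight.example.org:8181"   # hypothetical controller

url = f"{CONTROLLER}/restconf/operational/opendaylight-inventory:nodes"
resp = requests.get(url, auth=("admin", "admin"),
                    headers={"Accept": "application/json"})
resp.raise_for_status()

nodes = resp.json().get("nodes", {}).get("node", [])
if not nodes:
    print("No switches connected to the controller yet.")
for node in nodes:
    print(node.get("id", "unknown"),
          node.get("flow-node-inventory:manufacturer", "?"),
          node.get("flow-node-inventory:software", "?"),
          node.get("flow-node-inventory:ip-address", "?"))
```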

Slide 8: Details of the Path to AGLT2

Slide 9: What We Have for Minimal

Slide 10: Plans for the Rest of the Week
– Today we should complete all the OpenFlow setup and begin final tweaking of the transfer paths.
– Over the next two days we will start sets of flows across the infrastructure and use SDN to manage how those flows share the possible paths (a hedged path-assignment sketch follows below).
– Check Caltech’s real-time OpenFlow web page during the week to see how we are doing.
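To give a feel for what "using SDN to manage how flows share the possible paths" can mean, here is a small sketch of one simple policy: place each new transfer on whichever candidate path has the most residual capacity. The path names, capacities and flow rates are hypothetical, and this is not the controller logic used in the demo.

```python
# Hedged sketch: one simple way an SDN application could decide how concurrent
# flows share a set of candidate WAN paths -- greedily place each new flow on
# the path with the most spare capacity. Illustration only, not the demo logic.
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    capacity_gbps: float
    flows: list = field(default_factory=list)   # list of (flow_id, rate_gbps)

    @property
    def residual(self) -> float:
        """Capacity left after the flows already assigned to this path."""
        return self.capacity_gbps - sum(rate for _, rate in self.flows)

def place_flow(paths, flow_id: str, expected_rate_gbps: float) -> Path:
    """Assign a flow to the candidate path with the most spare capacity."""
    best = max(paths, key=lambda p: p.residual)
    best.flows.append((flow_id, expected_rate_gbps))
    return best

# Hypothetical paths and transfer rates, for illustration only.
paths = [Path("wan-path-A", 100.0), Path("wan-path-B", 100.0)]
for i, rate in enumerate([40.0, 30.0, 40.0, 20.0]):
    chosen = place_flow(paths, f"transfer-{i}", rate)
    print(f"transfer-{i} ({rate} Gb/s) -> {chosen.name}, "
          f"residual {chosen.residual:.0f} Gb/s")
```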

Slide 11: Thanks to our Partners
We couldn’t do these kinds of demos without our partners. Thanks to Caltech, Dell, Starlight, CenturyLink, ESnet, U. Texas Arlington, Stanford, Merit Network, ADVA, QLogic and Juniper.

Slide 12: Conclusion, Questions or Comments
I wanted to leave time for questions, comments and discussion.
– Any topics you want more details about?
– Comments or suggestions?
– Questions?