Dynamic Resource Allocation over GMPLS Optical Networks Presented to IPOP 2005 February 21, 2005 Tokyo, JP Jerry Sobieski Mid-Atlantic Crossroads (MAX)


Dynamic Resource Allocation over GMPLS Optical Networks
Presented to IPOP 2005, February 21, 2005, Tokyo, JP
Jerry Sobieski, Mid-Atlantic Crossroads (MAX)
Tom Lehman, University of Southern California Information Sciences Institute (USC ISI)
Bijan Jabbari, George Mason University (GMU)
Don Riley, University of Maryland (UMD)
The DRAGON Project – National Science Foundation

Why Develop Dynamic “Light Path” Networks?
There is a new set of emerging “E-Science” applications that require new network capabilities:
–Global science: global climate modeling, life sciences, radio astronomy, high energy physics, …
–Global teams of scientists
–Globally distributed tools: sensors, radio telescopes, high energy physics facilities, remote field telemetry, computational/storage/visualization resources, …
It is difficult (or impossible) for existing networks to meet the needs of these applications:
–Best-effort IP networks exhibit unpredictable performance for this small user community with high-end, specialized applications.
–Current circuit-based services are static and slow to provision (on the order of weeks or months).
–These applications themselves are very dynamic in terms of topology.

E-science Example: E-VLBI
Electronic Very Long Baseline Interferometry (eVLBI)
2004 = 128 Mb/s; 2005 ~= 512 Mb/s -> 1 Gb/s; 2006 > 2 Gb/s; 2008 > 4+ Gb/s
2006 > 10 Gb/s to 20+ Gb/s; 2008 > 20 Gb/s to 40+ Gb/s

E-VLBI Requirements
From 2 to as many as 20+ radio telescopes distributed around the world:
–Each generating 500 Mb/s real-time streams (this year)
–Converging on a single computational cluster
Real-time correlation requires network resources that are:
–Deterministic in terms of loss, latency, and jitter
–Repeatable
–Schedulable – must be coordinated with other resources such as the availability of the telescope itself
–Rapidly provisioned – under one minute to establish the topology
The network is an integral part of the application itself:
–An application specific topology (AST) must be instantiated “en masse” to run the application – i.e., all network resources must be provisioned as a whole
–This physical AST will vary with the location of available/required nodal resources
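As a back-of-the-envelope check on these numbers, the correlator-side demand is simply the per-station rate times the station count. A minimal sketch (the function name and constants are illustrative, not part of DRAGON):

```cpp
#include <cassert>
#include <cstdint>

// Aggregate correlator-side demand for an e-VLBI run: every station
// streams at a fixed rate toward one cluster, so the sink-side path
// must carry the sum of all station rates.
constexpr std::int64_t kMbps = 1'000'000;  // bits per second in 1 Mb/s

constexpr std::int64_t aggregate_bps(int stations,
                                     std::int64_t per_station_bps) {
    return static_cast<std::int64_t>(stations) * per_station_bps;
}
```

At the slide's 500 Mb/s per telescope, two stations already need a dedicated 1 Gb/s path, and a 20-station run needs 10 Gb/s of deterministic capacity at the correlator.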

The “Application Specific Topology” Telescopes CorrelatorC X Y Z Logical e-VLBI Topology: Physical Instantiations of the Application Specific Topology Kashima, JP MIT Haystack, US C X Y Z Westford, US NASA Goddard, US Onsala, SE Dwingeloo, NL C X Y Z Westerbork, NL Koke Park, US

What are the generic network requirements of these Super-Apps?
Dedicated Network Resources
–These applications want/need their own network resources; they do not care to “play fair” with other traffic.
Deterministic Network Performance
–Network performance must be consistent, predictable, and repeatable.
Reservable and Schedulable
–The network must ensure that the resources will be available when needed, and for as long as needed.
Very High Performance
–These applications require resources that often exceed current IP backbones.
Dynamically Provisioned
–The topologies, performance requirements, priorities, and purpose of the applications are not static and will vary from one run to the next.
Application Specific Topologies
–All resources must be allocated and provisioned as a whole.

So, what is the DRAGON Project?
Dynamic Resource Allocation over GMPLS Optical Networks
–DRAGON is a four-year project funded by the US National Science Foundation (NSF)
–Testbed deployed in the Washington DC metro area
Purpose:
–To develop/integrate network hardware and software technologies that can support dynamic, deterministic “light path” services
–To demonstrate these “light path” services with real applications and over real network(s)

DRAGON Participants
–Mid-Atlantic Crossroads (MAX)
–USC/Information Sciences Institute (ISI-East)
–George Mason University (GMU)
–MIT Haystack Observatory
–NASA Goddard Space Flight Center (GSFC)
–University of Maryland (UMCP)
–Movaz Networks (commercial partner)
–NCSA ACCESS
–US Naval Observatory

Project Features and Objectives
All-optical metro area network
–Reduce OEO in the core; allow alien waves in
GMPLS protocols for dynamic provisioning
–Addition of CSPF path computation algorithms for wavelength routing
Inter-domain service routing techniques
–Network Aware Resource Broker (NARB) for service advertising, inter-domain ERO generation, AAA
Application Specific Topology Description Language
–Formalized means to describe the application topology and network service requirements
Integration with real applications:
–E-VLBI
–HD-CVAN
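The CSPF extension above must honor the wavelength-continuity constraint of an all-optical core: a lightpath must use the same lambda end to end unless a translation point is inserted. One common way to sketch such a computation – not DRAGON's actual NARB code; the graph layout and names here are illustrative – is to run a shortest-path search on each wavelength "plane" and keep the best end-to-end result:

```cpp
#include <cassert>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; int cost; std::vector<bool> lambda_free; };

// Cost of the cheapest src->dst path that uses a single wavelength end
// to end, or -1 if every wavelength is blocked somewhere along the way.
int cspf_wavelength(const std::vector<std::vector<Edge>>& g,
                    int src, int dst, int num_lambdas) {
    int best = -1;
    for (int l = 0; l < num_lambdas; ++l) {          // one plane per lambda
        std::vector<int> dist(g.size(), std::numeric_limits<int>::max());
        using QE = std::pair<int, int>;              // (distance, node)
        std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
        dist[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d > dist[u]) continue;
            for (const Edge& e : g[u]) {
                if (!e.lambda_free[l]) continue;     // continuity constraint
                if (d + e.cost < dist[e.to]) {
                    dist[e.to] = d + e.cost;
                    pq.push({dist[e.to], e.to});
                }
            }
        }
        if (dist[dst] != std::numeric_limits<int>::max() &&
            (best < 0 || dist[dst] < best))
            best = dist[dst];
    }
    return best;
}
```

Iterating per plane also makes the wavelength-blocking condition explicit: a path exists on some plane or not at all, which is exactly when the architecture's OEO translation service would be invoked.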

The DRAGON Testbed – Washington, D.C. Metropolitan Area
[Map: testbed nodes CLPK, ARLG, DCGW, MCLN, and DCNE on the MAX and ATDnet infrastructure, with connectivity to HOPI/NLR and to MIT Haystack Observatory (HAYS), U.S. Naval Observatory (USNO), University of Maryland College Park (UMCP), Goddard Space Flight Center (GSFC), National Computational Science Alliance (NCSA), and Univ. of Southern California/Information Sciences Institute (ISIE)]

DRAGON Photonic Architecture
Principles:
–The standard practice of OEO engineering at every node is unnecessary in metro/regional networks; allow the user/client to define the transport
–Core switching nodes should be all-optical: any wavelength on any port to any other port; framing agnostic
–OEO is provisioned only as a service function at core nodes: to provide wavelength translation to avoid wavelength blocking conditions, and to provide regeneration if (and only if) network diameter and service specs require it, and only on a request-specific basis
–OEO transponders are used at the edge only for ITU translation; external ITU wavelength signaling and sourcing is encouraged
–All waves are dynamically allocated using GMPLS protocols; extensions for CSPF path computation and inter-domain routing are new

DRAGON Generic Architectural Cells
Core Wavelength Switching Primitive Cell
–All waves are C-band ITU compliant on 100 GHz ITU spacing
–Any wave can be individually switched from any input port to any output port
–Each port goes to either a) another core switching cell, or b) an edge cell
–Wavelengths outside the C-band are extinguished on entry and are not progressed through the switch
–The switching cell can block any/all input waves on any input port
–The switch is not sensitive to the content or framing of any data plane wave
Edge Service Introduction and Validation Cell
–Client interfaces provide wavelength conversion to ITU grid lambdas
–External wavelength interfaces verify conformance of customer-provisioned waves to network constraints
–Can also be used at core nodes to provide wavelength translation
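The core cell's behavior can be modeled as a small cross-connect table keyed by (input port, wavelength), with non-C-band waves rejected at entry and unconfigured waves blocked by default. This is only an illustrative sketch – the class name and the channel numbering (channels 1–44 standing in for the 100 GHz C-band grid) are assumptions, not DRAGON specifics:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <utility>

class CoreSwitchCell {
public:
    // Stand-in for the 100 GHz ITU C-band grid (an assumed channel range).
    static bool in_c_band(int itu_channel) {
        return itu_channel >= 1 && itu_channel <= 44;
    }

    // Cross-connect (in_port, channel) -> out_port; content/framing agnostic.
    // Waves outside the C-band are extinguished on entry.
    bool connect(int in_port, int itu_channel, int out_port) {
        if (!in_c_band(itu_channel)) return false;
        xc_[{in_port, itu_channel}] = out_port;
        return true;
    }

    // Where does a wave arriving on in_port at itu_channel leave, if at all?
    std::optional<int> route(int in_port, int itu_channel) const {
        auto it = xc_.find({in_port, itu_channel});
        if (it == xc_.end()) return std::nullopt;  // blocked by default
        return it->second;
    }

private:
    std::map<std::pair<int, int>, int> xc_;
};
```

Note that the model carries no notion of framing or bit rate on the data plane, mirroring the cell's framing-agnostic principle; only the wavelength and ports matter.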

Commercial Partner: Movaz Networks
Private sector partner for the DRAGON project:
–Provides state-of-the-art optical transport and switching technology
–Major participant in the IETF standards process
–Software development group located in McLean, VA (i.e., within MAX)
–Demonstrated GMPLS conformance
Movaz iWSS prototype switch installed at the University of Maryland:
–MEMS-based switching fabric
–400 x 400 wavelength switching, scalable to 1000s x 1000s
–9.23" x 7.47" x 3.28" in size
–Integrated multiplexing and demultiplexing, eliminating the cost and challenge of complex fiber management

Movaz Partnership: New Technology
Movaz and DRAGON will be deploying early versions of new technology such as:
–Reconfigurable OADMs
–Alien wavelength conditioning
–Tunable wavelength transponders
–40 gigabit wavelengths
–Possibly other digital encoding formats such as RZ, DPSK, etc.
The development and deployment plans of selected technologies are part of the annual review cycle.

End-to-End GMPLS Transport: What is missing?
In place today: IP/Ethernet campus LANs; IP/{Ethernet, SONET, wavelength} core services; GMPLS RSVP-TE signaling; GMPLS {OSPF, IS-IS}-TE intra-domain routing
Missing:
–Integration across non-GMPLS-enabled networks
–A standardized inter-domain routing architecture, including transport layer capability set advertisements
–A simple API
–End-to-end instantiation

Virtual Label Switched Router: VLSR
Many networks consist of switching components that do not speak GMPLS, e.g. current Ethernet switches, fiber switches, etc.
Contiguous sets of such components can be abstracted into a Virtual Label Switched Router:
–The VLSR implements open source versions of GMPLS-OSPF-TE and GMPLS-RSVP-TE and runs on a Unix-based PC/workstation (Zebra OSPF extended to GMPLS; KOM-RSVP likewise)
–The VLSR translates GMPLS protocol events into generic pseudo-commands for the covered switches; the pseudo-commands are tailored to each specific vendor/architecture using SNMP, TL1, CLI, or a similar protocol
–The VLSR can abstract and present a non-trivial internal topology as a “black box” to an external peering entity
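The two-step translation described above – GMPLS event to vendor-neutral pseudo-command, then pseudo-command to a device-specific control message – might be sketched as follows. The event names, command fields, and CLI string are invented for illustration; real VLSR vendor modules differ:

```cpp
#include <cassert>
#include <string>

enum class GmplsEvent { LspSetup, LspTeardown };

// Vendor-neutral pseudo-command for an Ethernet-switch "label" (a VLAN
// cross-connecting two ports). Fields are illustrative.
struct PseudoCmd { bool create; int in_port; int out_port; int vlan; };

// Step 1: GMPLS protocol event -> generic pseudo-command.
PseudoCmd translate(GmplsEvent ev, int in_port, int out_port, int vlan) {
    return {ev == GmplsEvent::LspSetup, in_port, out_port, vlan};
}

// Step 2: pseudo-command -> one concrete control message. A CLI-style
// string is shown here; an SNMP or TL1 back end would render the same
// PseudoCmd differently.
std::string render_cli(const PseudoCmd& c) {
    std::string verb = c.create ? "add" : "remove";
    return "vlan " + std::to_string(c.vlan) + " " + verb + " ports " +
           std::to_string(c.in_port) + "," + std::to_string(c.out_port);
}
```

Keeping step 1 vendor-neutral is what lets one VLSR cover switches from different vendors: only the final rendering layer changes per device.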

VLSR Abstraction
[Diagram: a VLSR fronts an Ethernet network, speaking OSPF-TE/RSVP-TE toward a GMPLS network's LSRs while controlling the Ethernet switches via SNMP]

Network Aware Resource Broker (NARB) Functions – IntraDomain
[Diagram: within one AS, end systems attach to ingress/egress LSRs; the NARB sits on the IP control plane above the data plane LSP signaling. NARB functions shown: IGP listener; path computation; ASTDL induced topology computations; scheduling; edge signaling enforcement; authentication; authorization (flexible, policy based); accounting]

Network Aware Resource Broker (NARB) Functions – InterDomain
InterDomain NARB must do all IntraDomain functions plus:
–EGP listener
–Exchange of inter-domain transport layer capability sets
–Inter-domain path calculation
–Inter-domain AAA policy/capability/data exchange and execution
[Diagram: NARBs in AS 1, AS 2, and AS 3 performing transport layer capability set exchange on behalf of end systems]

Application Specific Topology Description Language: ASTDL
ASTDL is a formalized definition language that describes complex topologies:
–By formally defining the application’s network requirements, service validation and performance verification can be performed (“wizard gap” issues)
–Formal definition allows advanced scheduling – which must still be integrated with non-network resources such as computational clusters, instruments, sensor nets, storage arrays, visualization theatres, …
ASTDL includes run-time libraries to instantiate the topology and link in the other resources of the application:
–Application topologies consist of multiple LSPs that must be instantiated as a set
–Resource availability must be dependable and predictable, i.e. resources must be reservable in advance for utilization at some later time

Concept: Application Specific Topology Description Language – ASTDL

Formal Specification:

  Datalink := {
      Framing = Ethernet;
      bandwidth = 1g;
      SourceAddress = %1::vlbid;
      DestinationAddress = %2;
  }

  Topo_vlbi_ := {
      Correlator := corr1.usno.navy.mil::vlbid;        // USNO
      DataLink( mkiv.oso.chalmers.se, Correlator );    // OSO Sweden
      DataLink( pollux.haystack.mit.edu, Correlator ); // MIT Haystack
      DataLink( ggao1.gsfc.nasa.gov, Correlator );     // NASA Goddard
  }

Instantiation – C++ code invocation example:

  eVLBI = new ASTDL::Topo( "Topo_vlbi_200406" ); // Get the topology definition
  Stat = eVLBI->Create();                        // Make it so!

[Diagram: the three telescopes – GGAO (ggao1.gsfc.nasa.gov), Onsala (mkiv.oso.chalmers.se), Haystack (pollux.haystack.mit.edu) – linked to the Correlator at corr1.usno.navy.mil]

The AST Process
[Diagram: a Master and Minions at nodes A–F cooperating to instantiate the AST]

ASTDL Driver Example

  #include "class_Topo.h++"
  #define DRAGON_TCP_PORT 5555

  //
  // User prime mover "ast_master" for AST miniond
  //
  using namespace std;
  using namespace ASTDL;

  int main(int argc, char *argv[])
  {
      int stat;
      Topo *topo;

      // Read topology definition source and create the Topo object
      topo = new Topo(argv[1]);
      if (topo == NULL) exit(1);

      // Resolve hostnames and other service specific data
      stat = topo->Resolve();
      if (stat != 0) exit(2);

      // Establish all the connections
      stat = topo->Instantiate();
      if (stat != 0) { cout << "Error stat=" << stat << endl; exit(3); }

      // exec() the user and pass off the connections
      stat = topo->UserHandoff();

      // All done. Tear it down.
      stat = topo->Release();
      exit(0);
  }

High Definition Collaboration and Visual Area Networking (HD-CVAN)
DRAGON dynamic resource reservation will be used to instantiate an application specific topology:
–Video directly from HDTV cameras and 3D visualization clusters will be natively distributed across the network
–Integration of 3D visualization remote viewing and steering into HD collaboration environments
HD-CVAN collaborators:
–UMD VPL
–NASA GSFC (VAL and SVS)
–USC/ISI (UltraGrid Multimedia Laboratory)
–NCSA ACCESS

Uncompressed HDTV-over-IP: Current Method
–Not truly HDTV -> color is subsampled to 8 bits
–Performance is at the mercy of the best-effort IP network
–UltraGrid processing introduces some latency
[Diagram: LDK camera -> SMPTE-292 HDTV output -> UltraGrid node -> RTP/UDP/IP network -> UltraGrid node -> Astro 1602 DAC -> PDP-502MX display]

Low Latency High Definition Collaboration – DRAGON Enabled
End-to-end native SMPTE 292M transport
Media devices are directly integrated into the DRAGON environment via proxy hosts, which:
–Register the media device (camera, display, …)
–Sink and source signaling protocols
–Provide authentication, authorization, and accounting
[Diagram: an LDK-6000 camera and a display connect through SMPTE-292 proxies (with o/e, e/o, and d/a conversion) across the DRAGON network, coordinated by NARBs]

Low Latency Visual Area Networking
Directly share the output of visualization systems across high performance networks; DRAGON allows elimination of the latencies associated with IP transport.
[Diagram: IP path – frame buffer -> codec -> packetize -> 3D video/IP network -> depacketize -> codec -> frame buffer – compared with direct transport over the DRAGON network]

Status to Date
Wavelength Selective Switch staged at UMD
–Installed and operational Spring 2004; second-phase expansion to happen Summer ’05
–GMPLS control plane operational Spring ’04
Initial VLSR functionality demonstrated
–Successful interop tests across Movaz, Juniper, Sycamore, and Ethernet switches; more to come
–VLSR being used in other testbeds: CHEETAH; HOPI and USN are interested
Initial NARB demonstrated at SuperComputing 2004 in Pittsburgh
–Plans being made to extend DRAGON to NLR and begin inter-domain experiments with Omninet, HOPI, USN, and others
–Currently planning optical GMPLS peerings with ATDnet (WDM) and HOPI (Ethernet)
HD-CVAN UltraGrid node, using DRAGON technologies, demo’d at SC04
–Will be tested in the DRAGON network over Spring ’05

For more information…
Web: dragon.maxgigapop.net
Contact: