DWDM-RAM: DARPA-Sponsored Research for Data-Intensive Service-on-Demand Advanced Optical Networks (Data@LIGHTspeed)

Optical Abundant Bandwidth Meets Grid Data-Intensive Applications Abundant optical bandwidth: petabytes of storage, terabits per second on a single fiber strand. The data-intensive application challenge: emerging applications in HEP, astrophysics, astronomy, bioinformatics, computational chemistry, and related fields require extremely high-performance, long-duration data flows, scalability to huge data volumes, global reach, adaptability to unpredictable traffic behavior, and integration with multiple Grid resources. Traditional network infrastructure cannot meet these demands, especially the requirement for intensive data flows. The response is DWDM-RAM: an architecture for data-intensive Grids enabled by next-generation dynamic optical networks, incorporating new methods for lightpath provisioning. DWDM-RAM is designed to meet the networking challenges of extremely large-scale Grid applications.

DWDM-RAM Architecture The DWDM-RAM architecture identifies two distinct planes over the dynamic underlying optical network: 1) the Data Grid Plane, which represents the diverse requirements of a data-intensive application by providing generic data-intensive interfaces and services, and 2) the Network Grid Plane, which marshals the raw bandwidth of the underlying optical network into network services within the OGSI framework and matches the complex requirements specified by the Data Grid Plane. At the application middleware layer, the Data Transfer Service (DTS) presents an interface between the system and an application. It receives high-level client requests, filtered by policy and access control, to transfer specific named blocks of data under specific advance-scheduling constraints. The network resource middleware layer consists of three services: the Data Handler Service (DHS), the Network Resource Service (NRS), and the Dynamic Lambda Grid Service (DLGS). Services of this layer initiate and control the sharing of resources.
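The DTS interface described above can be illustrated with a minimal sketch. The class and function names below are hypothetical, chosen only to mirror the slide's description of a policy-filtered request to transfer a named block of data under advance-scheduling constraints; they are not the actual DWDM-RAM API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical DTS-style request: a named data block plus an
# advance-scheduling window, as described in the architecture slide.
@dataclass
class TransferRequest:
    source: str          # named block of data at the source site
    destination: str     # destination storage endpoint
    size_gb: float       # volume to move
    earliest: datetime   # scheduling window start
    latest: datetime     # scheduling window end

def submit(request: TransferRequest) -> str:
    """Sketch of a DTS entry point: validate the scheduling window,
    then hand off to the network resource middleware (NRS) for
    lightpath scheduling. Here we only acknowledge the request."""
    if request.latest <= request.earliest:
        raise ValueError("scheduling window is empty")
    return (f"accepted: {request.size_gb} GB "
            f"{request.source} -> {request.destination}")

print(submit(TransferRequest("node-a:/data/block1", "node-b:/store",
                             20.0,
                             datetime(2004, 1, 1, 14, 0),
                             datetime(2004, 1, 1, 18, 0))))
```

In the real system, a request accepted here would be matched against network resources by the NRS before any lambda is provisioned.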

DWDM-RAM Architecture [layer diagram: data-intensive applications at a data center sit on the Application Middleware Layer (Data Transfer Service, DTS API); below it, the Basic Network Resource Service with Network Scheduler, Network Resource Handler, and Information Service, exposed through the NRS Grid Service API and an OGSI-ification API as dynamic lambda, optical burst, and other Grid services; at the bottom, the Connectivity and Fabric Layers exercise optical path control over the dynamic optical network (OMNInet) and its lambdas]

DWDM-RAM vs. Layered Grid Architecture The DWDM-RAM layers map onto the standard layered Grid architecture. The Application layer corresponds to the application itself. The Collective layer ("coordinating multiple resources": ubiquitous infrastructure services and application-specific distributed services) corresponds to the Data Transfer Service and its DTS API in the Application Middleware Layer. The Resource layer ("sharing single resources": negotiating access, controlling use) corresponds to the Network Resource Service and its NRS Grid Service API in the Network Resource Middleware Layer. The Connectivity layer ("talking to things": communication via Internet protocols, and security) and the Fabric layer ("controlling things locally": access to, and control of, resources, i.e. the lambdas) correspond to the Data Path Control Service, the OGSI-ification API, and the optical control plane. We define Grid architecture in terms of a layered collection of protocols. The Fabric layer includes the protocols and interfaces that provide access to the resources being shared, including computers, storage systems, datasets, programs, and networks. This layer is a logical view rather than a physical one: the fabric view of a cluster with a local resource manager is defined by that resource manager, not by the cluster hardware; likewise, the fabric provided by a storage system is defined by its file system, not by the raw disks or tapes. The Connectivity layer defines the core protocols required for Grid-specific network transactions: the IP protocol stack (system-level application protocols such as DNS, RSVP, and routing, plus the transport and internet layers) as well as core Grid security protocols for authentication and authorization. The Resource layer defines protocols to initiate and control the sharing of (local) resources; services defined at this level include the gatekeeper and GRIS, along with some user-oriented application protocols from the Internet protocol suite, such as file transfer. The Collective layer defines protocols that provide system-oriented capabilities expected to be wide-scale in deployment and generic in function, including GIIS, bandwidth brokers, and resource brokers. The Application layer defines protocols and services that are parochial in nature, targeted at a specific application domain or class of applications.

DWDM-RAM Service Control Architecture [diagram: a Grid service request enters the Data Grid Service Plane (data center with lambdas l1..ln); a network service request flows to the Network Service Plane; ODIN on the OMNInet control plane performs connection control via UNI-N over the optical control network, establishing a path through L3 routers, L2 switches, and data storage switches on the data transmission plane]

OMNInet Core Nodes [topology diagram: four sites (Northwestern U, UIC, StarLight, and the CA*net3 Chicago loop), each with an optical switching platform, Passport 8600 switches, application clusters, and OPTera Metro 5200 equipment, interconnected by 8x1GE and 4x10GE links in a closed loop] A four-node, multi-site optical metro testbed network in Chicago, the first 10GE service trial, and a testbed for all-optical switching and advanced high-speed services. OMNInet testbed partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL.

OMNInet Photonic Nodes [detail diagram: 8x8x8-lambda scalable photonic switches at the Lake Shore, S. Federal, W. Taylor, and Sheridan nodes, each paired with an Optera 5200 (10 Gb/s TSPR) and a PP 8600; 1310 nm 10 GbE WAN PHY trunk interfaces, with 10 G WDM and optical fiber amplifiers (OFA) on all trunks; campus fiber (4 and 16 strands) connecting EVL/UIC, LAC/UIC, and TECH/NU-E Grid clusters via OM5200 equipment; initial configuration of 10 lambdas (all GigE); StarLight interconnect to other research networks via 10GE LAN PHY (Dec 03) and to CA*net 4; 10/100/GigE access ports; diagram distinguishes fiber in use from fiber not in use across segments NWUEN-1 through NWUEN-9]

DWDM-RAM Components Data Management Services: OGSA/OGSI compliant; capable of receiving and understanding application requests; has complete knowledge of network resources; transmits signals to the intelligent middleware; understands communications from the Grid infrastructure; adjusts to changing requirements; understands edge resources; supports on-demand or scheduled processing and various models for scheduling, priority setting, and event synchronization. Intelligent Middleware for Adaptive Optical Networking: OGSA/OGSI compliant; integrated with Globus; receives requests from data services and applications; knowledgeable about Grid resources; has a complete understanding of dynamic lightpath provisioning; communicates with the optical network services layer; can be integrated with GRAM for co-management; flexible and extensible architecture. Dynamic Lightpath Provisioning Services: Optical Dynamic Intelligent Networking (ODIN); OGSA/OGSI compliant; receives requests from middleware services; knowledgeable about optical network resources; provides dynamic lightpath provisioning; communicates with the optical network protocol layer; precise wavelength control; intradomain as well as interdomain operation; mechanisms for extending lightpaths through E-Paths (electronic paths); specialized signaling; IETF GMPLS for provisioning; new photonic protocols.

Design for Scheduling Network and data transfers are scheduled. The Data Management schedule coordinates the network, retrieval, and sourcing services, using their own schedulers, and offers a scheduled data-resource reservation service ("provide 2 TB of storage between 14:00 and 18:00 tomorrow"). Network Management keeps its own schedule. A variety of request models are supported: fixed (at a specific time, for a specific duration) and under-constrained (e.g., as soon as possible, or anywhere within a window). Under-constrained requests facilitate automatic rescheduling for optimization; Data Management reschedules for its own requests or at the request of Network Management.
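The under-constrained request model above can be sketched as a window search: given existing reservations on a segment, find the earliest slot of the requested duration inside the caller's window. This is an illustrative toy, not the DWDM-RAM scheduler.

```python
# Under-constrained window scheduling (illustrative sketch).
# Times are in minutes; busy is a list of (start, end) reservations
# on one network segment.
def find_slot(busy, duration, earliest, latest):
    """Return the earliest start time for a slot of `duration`
    within [earliest, latest], or None if the window is full."""
    t = earliest
    for start, end in sorted(busy):
        if t + duration <= start:   # the gap before this reservation fits
            return t
        t = max(t, end)             # otherwise skip past the reservation
    return t if t + duration <= latest else None

# Segment busy 240-270; request 30 minutes anywhere in [210, 330].
print(find_slot([(240, 270)], 30, 210, 330))  # -> 210
```

A fixed request is the degenerate case where `earliest + duration == latest`, so the window admits exactly one placement.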

Example: Lightpath Scheduling A request for half an hour between 4:00 and 5:30 on segment D is granted to user W, starting at 4:00. A new request then arrives from user X for the same segment, for one hour between 3:30 and 5:00. The scheduler moves W to 4:30 and places X at 3:30; both requests are satisfied. In general: a route is allocated for a time slot; when a new request arrives, the first route can be rescheduled to a later slot within its window to accommodate the new request.
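The slide's example can be worked through in code. Times are in minutes past 3:00, and the helper is illustrative (an assumption of this sketch, not the actual scheduler): it places the new request X at the start of its window, then slides the under-constrained holder W later within W's own window.

```python
# Worked version of the rescheduling example (times in minutes past 3:00).
def accommodate(w_dur, w_window, x_dur, x_window):
    """W holds an under-constrained slot of w_dur within w_window;
    X requests x_dur within x_window on the same segment.
    Place X first, push W just after X; return (new_w_start, x_start)
    if both fit, else None."""
    w_lo, w_hi = w_window
    x_lo, x_hi = x_window
    new_w = max(w_lo, x_lo + x_dur)      # W starts right after X ends
    if new_w + w_dur <= w_hi and x_lo + x_dur <= x_hi:
        return new_w, x_lo
    return None

# W: 30 min within 4:00-5:30 (60-150); X: 60 min within 3:30-5:00 (30-120).
print(accommodate(30, (60, 150), 60, (30, 120)))  # -> (90, 30), i.e. W at 4:30, X at 3:30
```

The result matches the slide: W is rescheduled to 4:30 and X runs 3:30 to 4:30.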

End-to-end Transfer Time: 20 GB file transfer. Setup: 29.7 s; transfer: 174.0 s; teardown: 11.3 s. The timeline from the file-transfer request arriving to the path being released breaks down as: path allocation request (0.5 s), ODIN server processing (3.6 s), path ID returned (0.5 s), network reconfiguration (25 s), FTP setup (0.14 s), data transfer of 20 GB (174 s), path deallocation request (0.3 s), ODIN server processing (11 s).
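The measurements above translate directly into effective throughput. A short calculation (assuming 1 GB = 8 Gb for simplicity) shows how much the one-time path setup and teardown cost against the raw transfer rate:

```python
# Effective-throughput arithmetic from the measured 20 GB transfer.
size_gb = 20.0
setup, transfer, teardown = 29.7, 174.0, 11.3

raw = size_gb * 8 / transfer                          # Gb/s during the transfer itself
end_to_end = size_gb * 8 / (setup + transfer + teardown)  # including path setup/release
print(f"raw {raw:.2f} Gb/s, end-to-end {end_to_end:.2f} Gb/s")
```

The roughly 41 s of setup and teardown is a fixed cost, so its relative impact shrinks as the file grows: for the long-lived, multi-terabyte flows the slides target, the end-to-end rate approaches the raw transfer rate.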

20 GB File Transfer [measurement chart not reproduced in transcript]