1 Network Services Interface Connection Service v2.0 Tomohiro Kudoh (AIST) (OGF NSI-WG)

2 Network Services Interface (1)
- An open standard for dynamic circuit service interoperability
- A simple API to support connectivity management
- Dynamic assignment of bandwidth, VLAN IDs, etc.
- Global reach: a multi-provider enabled solution
- Being defined at the OGF (Open Grid Forum) NSI-WG

3 Network Services Interface (2)
- Currently, the NSI-WG is working to finalize version 2.0 of the Connection Service document, which describes a service architecture and protocol for an end-to-end circuit provisioning interface.
- Complementary services, such as a Topology Service and a Discovery Service, are being standardized to support the NSI Connection Service.
- To provide not just a connection but a dynamic network, a Switching Service has been proposed and is under discussion.
- NSI provides an abstracted view of networks, hiding the underlying physical infrastructure.
- Any control plane, such as GMPLS or OpenFlow, can be used under NSI.

4 NSI architecture and connection service (CS)
(Figure: the Network Service Plane over an abstracted Data Plane. Four networks, A to D, each expose a set of STPs, e.g. A:0 to A:4, interconnected through the network's TF, and each is managed by a Provider NSA with its own NRM. A user agent acting as Requester sends a request such as "A:0 to D:2, 1:00-2:00, 1 Gbps" to an Aggregator NSA, which splits it into the per-network segments A:0 to A:3, B:1 to B:4, and D:1 to D:2. Discovery Service and Topology Service instances support the NSAs.)
STP: Service Termination Point; TF: Transfer Function; NRM: Network Resource Manager
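To make the decomposition in the figure concrete, here is a minimal sketch of how an aggregator might split an end-to-end request into per-network segments along an already-computed path. The `Stp`, `Reservation`, and `decompose` names are illustrative assumptions, not identifiers from the NSI documents.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stp:
    network: str   # e.g. "A"
    label: str     # e.g. "0"

    def __str__(self) -> str:
        return f"{self.network}:{self.label}"

@dataclass
class Reservation:
    src: Stp
    dst: Stp
    start: str      # "1:00" (illustrative; real NSI reservations use absolute UTC times)
    end: str        # "2:00"
    bandwidth_gbps: float

def decompose(request: Reservation, path: list[tuple[Stp, Stp]]) -> list[Reservation]:
    """Split an end-to-end request into one intra-network reservation per hop.

    `path` is assumed to come from some path finder over the abstracted topology,
    e.g. [(A:0, A:3), (B:1, B:4), (D:1, D:2)].
    """
    return [Reservation(a, z, request.start, request.end, request.bandwidth_gbps)
            for a, z in path]

# Example matching the figure: A:0 -> D:2, 1:00-2:00, 1 Gbps
req = Reservation(Stp("A", "0"), Stp("D", "2"), "1:00", "2:00", 1.0)
segments = decompose(req, [(Stp("A", "0"), Stp("A", "3")),
                           (Stp("B", "1"), Stp("B", "4")),
                           (Stp("D", "1"), Stp("D", "2"))])
for seg in segments:
    print(f"{seg.src} to {seg.dst}, {seg.start}-{seg.end}, {seg.bandwidth_gbps} Gbps")
```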

5 NSI tree model
- uRAs (Ultimate Requester Agents) sit at the top of the tree, Aggregators in the middle, and uPAs (Ultimate Provider Agents) at the bottom, each uPA controlling a network (NW) in the Data Plane.
- There can be multiple ultimate requesters.
- The request propagation path is formed dynamically.

6 A chain on a Tree
(Figure: a request path forming a chain across Aggregators and uPAs, with the uRA at the root of the tree.)

7 STP and TF: topology abstraction
- The data plane topology is abstracted by STPs (Service Termination Points) and TFs (Transfer Functions).
- An STP is a logical label for a point at the edge of a network.
- STPs are used in a connection request to designate the termination points of an intra-network connection.
- A TF represents a network's capability to dynamically connect two of its STPs.

8 uRA, uPA and Aggregator
(Figure: the roles an NSA can play - uRA, Aggregator (AG), and uPA - with the uPA sitting on top of an NRM.)
uRA: Ultimate Requester Agent; AG: Aggregator; uPA: Ultimate Provider Agent; NRM: Network Resource Manager

9 Key extensions/changes of NSI CS 2.0
- Separate state machines for reservation maintenance, provision control, and connection termination (lifecycle)
- Support for reservation modification, and use of two-phase commit for reservations
- Introduction of the Coordinator
  - Confirms delivery of request messages
  - Supports aggregation of messages
- Notifications and error handling
  - The majority of error states were removed from the state machines and are handled by the Coordinator
  - A new set of notifications was added to inform the uRA
- Ability to use query operations for polling
  - If a requester (client) cannot receive reply messages or notifications (e.g., because of message loss or a NAT problem), it can retrieve the status by using query operations.

10 Coordinator and MTL (Message Transport Layer)
- The Coordinator's role:
  - To coordinate, track, and aggregate (if necessary) message requests, replies, and notifications, and to realize reliable request/reply communication
  - To process or forward notifications as necessary
  - To service query requests
- The Coordinator maintains the following information on a per-NSI-request-message basis:
  - Who the (NSI request) message was sent to
  - Whether the message was received (i.e. ack'ed) or not (i.e. MTL timeout)
  - Which NSAs have sent back an NSI reply (e.g. *.confirm, *.fail, *.not_applicable) to the initial NSI request (e.g. *.request)
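As a rough illustration of this per-message bookkeeping, the sketch below shows the kind of record a Coordinator implementation might keep for each outstanding request; the class and field names are assumptions, not taken from the NSI CS 2.0 specification.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Delivery(Enum):
    PENDING = auto()      # sent, no acknowledgement yet
    ACKED = auto()        # MTL-level acknowledgement received
    MTL_TIMEOUT = auto()  # no acknowledgement before the transport timeout

@dataclass
class TrackedRequest:
    correlation_id: str                # identifies the NSI request message
    sent_to: list[str]                 # child NSA identifiers the request was sent to
    delivery: dict[str, Delivery] = field(default_factory=dict)  # per-child ack status
    replies: dict[str, str] = field(default_factory=dict)        # child -> ".confirm" / ".fail" / ".not_applicable"

    def all_replied(self) -> bool:
        """True once every child NSA has answered the initial request."""
        return set(self.replies) == set(self.sent_to)
```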

11 Information maintained by Coordinator

12 Two-phase reserve/modify
- NSI is an advance-reservation based protocol
- A reservation of a connection has properties such as:
  - A-point, Z-point (mandatory)
  - Start-time, End-time (mandatory)
  - Bandwidth, Labels (optional)
- A reservation is made in two phases (a minimal client-side sketch follows this slide):
  - First phase: availability is checked; if available, resources are held
  - Second phase: the requester either commits or aborts the held reservation
- Two-phase operation is convenient when a requester requests resources from multiple providers, possibly together with other resources such as computers and storage
- Timeout: if a requester does not commit a held reservation within a certain period of time, the provider can time it out
- Modification of a reservation is supported; currently, start_time, end_time, and bandwidth can be modified
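A minimal requester-side sketch of the two-phase flow across several providers, assuming a hypothetical ProviderClient stub with reserve, reserve_commit, and reserve_abort calls (the method names are illustrative, not the literal NSI message names):

```python
# Hypothetical requester-side logic; ProviderClient is an assumed stub, not an NSI API.
def reserve_across_providers(providers, request):
    """Phase 1: hold resources at every provider; phase 2: commit only if all holds succeed."""
    held = []
    try:
        for p in providers:
            # reserve() checks availability and holds resources (may raise if unavailable)
            held.append((p, p.reserve(request)))
    except Exception:
        # One hold failed: abort everything held so far instead of waiting
        # for the provider-side reserve timeout to fire.
        for p, cid in held:
            p.reserve_abort(cid)
        raise
    # All holds succeeded: commit each one.
    for p, cid in held:
        p.reserve_commit(cid)
    return [cid for _, cid in held]
```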

13 Reservation, Provisioning and Activation
(Figure: the Reservation State Machine supplies the committed reservation, the Provision State Machine the Provisioned/Scheduled state, and a timer the condition start_time < current_time < end_time; updates and transitions of these feed the activation decision.)
The data plane is activated according to the latest committed reservation, when the PSM is in the "Provisioned" state AND the current time is within the reservation period.
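The activation rule on this slide can be written as a single predicate; the function below is an illustrative sketch and its parameter names are assumptions, not identifiers from the specification.

```python
from datetime import datetime, timezone

def data_plane_should_be_active(psm_state: str, start_time: datetime, end_time: datetime,
                                now: datetime | None = None) -> bool:
    """The data plane is active only while the PSM is 'Provisioned' and the
    current (UTC) time lies inside the committed reservation period."""
    now = now or datetime.now(timezone.utc)
    return psm_state == "Provisioned" and start_time < now < end_time
```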

14 Reservation State Machine
- The Reservation State Machine maintains reservation/modification message sequences
- The reservation database is updated when a reservation/modification is committed

15 RSM: Reservation State Machine - reservation success case
(State diagram with initial, transitional, and stable states: Reserve Start (initial), Reserve Checking, Reserve Committing, Reserve Aborting, Reserve Held, Reserve Failed, and Reserve Timeout (reserve_timeout, uPA only). Highlighted success path: a reserve request (>rsv.rq, check availability) takes Reserve Start to Reserve Checking, <rsv.cf takes it to Reserve Held, a commit request (>rsvcommit.rq) takes it to Reserve Committing, and <rsvcommit.cf returns it to Reserve Start. Other transitions shown: <rsv.fl to Reserve Failed, >rsvabort.rq / <rsvabort.cf through Reserve Aborting, <rsvcommit.fl, and <rsvTimeout.nt to Reserve Timeout.)
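The transitions that can be read off this diagram and the ones on the following slides can be collected into a small transition table. The sketch below is a rough, partial reconstruction for illustration only, not a verbatim copy of the normative state machine in the NSI CS 2.0 document.

```python
# (state, message) -> next state; ">" marks a request, "<" a reply or
# notification, following the notation used on the slides.
RSM_TRANSITIONS = {
    ("ReserveStart",      ">rsv.rq"):        "ReserveChecking",
    ("ReserveChecking",   "<rsv.cf"):        "ReserveHeld",
    ("ReserveChecking",   "<rsv.fl"):        "ReserveFailed",
    ("ReserveHeld",       ">rsvcommit.rq"):  "ReserveCommitting",
    ("ReserveHeld",       ">rsvabort.rq"):   "ReserveAborting",
    ("ReserveHeld",       "<rsvTimeout.nt"): "ReserveTimeout",    # uPA only
    ("ReserveTimeout",    ">rsvabort.rq"):   "ReserveAborting",
    ("ReserveFailed",     ">rsvabort.rq"):   "ReserveAborting",
    ("ReserveCommitting", "<rsvcommit.cf"):  "ReserveStart",
    ("ReserveCommitting", "<rsvcommit.fl"):  "ReserveStart",      # destination assumed
    ("ReserveAborting",   "<rsvabort.cf"):   "ReserveStart",
}

def rsm_step(state: str, message: str) -> str:
    """Advance the reservation state machine by one message, or fail loudly."""
    try:
        return RSM_TRANSITIONS[(state, message)]
    except KeyError:
        raise ValueError(f"message {message!r} not allowed in state {state!r}")
```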

16 RSM: Reservation State Machine - resource not available and reserve failed
(Same state diagram as slide 15. Highlighted path: a reserve request (>rsv.rq, check availability) takes Reserve Start to Reserve Checking; the resource is not available, so <rsv.fl takes it to Reserve Failed.)

17 RSM: Reservation State Machine - aborted after resource was held
(Same state diagram as slide 15. Highlighted path: a reserve request takes Reserve Start to Reserve Checking, <rsv.cf takes it to Reserve Held, an abort request (>rsvabort.rq) takes it to Reserve Aborting, and <rsvabort.cf returns it to Reserve Start.)

18 RSM: Reservation State Machine - aborted after failed
(Same state diagram as slide 15. Highlighted path: a reserve request takes Reserve Start to Reserve Checking; the resource is not available, so <rsv.fl takes it to Reserve Failed; an abort request (>rsvabort.rq) then takes it to Reserve Aborting, and <rsvabort.cf returns it to Reserve Start.)

19 RSM: Reservation State Machine - timeout (no commit or abort for a held resource)
(Same state diagram as slide 15. Highlighted path: a reserve request takes Reserve Start to Reserve Checking, <rsv.cf takes it to Reserve Held; if no commit or abort request arrives within reserve_timeout, the uPA moves to Reserve Timeout and sends <rsvTimeout.nt.)

20 Provision State Machine
- A simple state machine to explicitly provision / release a connection
- The Provision State Machine transitions independently of the Reservation State Machine
- Provision state: the data plane is activated if the PSM is in the "Provisioned" state AND start_time < current_time < end_time
- A connection can be repeatedly provisioned and released
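Based on the description above and the state diagram on the next slide, a toy PSM might look like the following sketch; the class itself is illustrative, and only the state and message names come from the slides.

```python
class ProvisionStateMachine:
    """Minimal sketch of the PSM recovered from the slides (illustrative only)."""
    TRANSITIONS = {
        ("Scheduled",    ">prov.rq"): "Provisioning",
        ("Provisioning", "<prov.cf"): "Provisioned",
        ("Provisioned",  ">rel.rq"):  "Releasing",
        ("Releasing",    "<rel.cf"):  "Scheduled",
    }

    def __init__(self) -> None:
        self.state = "Scheduled"          # initial state

    def handle(self, message: str) -> str:
        self.state = self.TRANSITIONS[(self.state, message)]
        return self.state

# A connection can be provisioned and released repeatedly:
psm = ProvisionStateMachine()
for msg in [">prov.rq", "<prov.cf", ">rel.rq", "<rel.cf", ">prov.rq", "<prov.cf"]:
    psm.handle(msg)
print(psm.state)  # -> "Provisioned"
```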

21 PSM: Provision State Machine
(State diagram. Scheduled is the initial state, Provisioned a stable state, and Provisioning and Releasing are transitional states. >prov.rq takes Scheduled to Provisioning and <prov.cf completes the transition to Provisioned; >rel.rq takes Provisioned to Releasing and <rel.cf returns it to Scheduled.)

22 LSM: Lifecycle State Machine
(State diagram. Created is the initial state, Terminating is transitional, Terminated is the final state, and Failed and Passed EndTime are stable states. A terminate request (>term.rq) takes Created to Terminating, and <term.cf completes the transition to Terminated; a <forcedEnd notification takes Created to Failed, and an endTimeEvent takes it to Passed EndTime; from either of these, >term.rq again leads to Terminating.)

23 Reservation, Provisioning and Activation
(Figure: repeat of slide 13 - the Reservation State Machine supplies the committed reservation, the Provision State Machine the Provisioned/Scheduled state, and a timer the condition start_time < current_time < end_time.)
The data plane is activated according to the latest committed reservation, when the PSM is in the "Provisioned" state AND the current time is within the reservation period.

24 Data plane activation
- Activation may happen at the following events (if the condition is met), using the latest committed reservation information:
  - The PSM transitions to "Provisioned"
  - At the start_time
  - The reservation is updated (by a commit of a reservation/modify)
  - The data plane recovers from an error
- Data plane activation/deactivation is notified by DataPlaneStateChange.nt notification messages.
- Errors are notified by a generic error message:
  - activateFailed: activation failed at the time the uPA should activate its data plane
  - deactivateFailed: deactivation failed at the time the uPA should deactivate its data plane
  - dataplaneError: the data plane was deactivated when deactivation was not expected; the error is recoverable
  - forcedEnd: something unrecoverable happened in the uPA/NRM
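A requester-side handler might dispatch on these notifications roughly as follows; only the message and error names come from the slide, while the dictionary keys and the "errorEvent" wrapper are assumptions made for the sketch.

```python
def handle_notification(notification: dict) -> None:
    """Dispatch the NSI notifications described on this slide (illustrative only)."""
    kind = notification["type"]
    if kind == "DataPlaneStateChange.nt":
        # Data plane was activated or deactivated.
        print("data plane active:", notification["active"])
    elif kind == "errorEvent":
        event = notification["event"]
        if event in ("activateFailed", "deactivateFailed"):
            print("activation/deactivation failed at the expected time:", event)
        elif event == "dataplaneError":
            print("data plane went down unexpectedly, but the error is recoverable")
        elif event == "forcedEnd":
            print("unrecoverable failure in the uPA/NRM; connection is ending")
    else:
        print("unhandled notification type:", kind)
```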

25 Provisioning Sequences
(Two requester-provider sequence diagrams, "Manual Provisioning" and "Automatic Provisioning", each showing a committed reservation ("Reserved"), provision.rq/provision.cf, the in-service period between the start time and the end time, and release.rq / terminate.rq / terminate.cf.)
On-demand reservation / provisioning: start_time is the same as or earlier than the time the reservation is made, and provision.rq is issued immediately after the reservation is made.
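For the on-demand case, the requester-side ordering of operations could look like the sketch below, which reuses the hypothetical provider stub from the earlier two-phase example (reserve, reserve_commit, and provision are illustrative method names, not the literal NSI message names):

```python
from datetime import datetime, timedelta, timezone

def on_demand_connection(provider, src_stp, dst_stp, duration_hours=1, gbps=1.0):
    """On-demand pattern: start_time is 'now' (or earlier), and provision.rq
    is issued immediately after the reservation is committed."""
    now = datetime.now(timezone.utc)
    request = {
        "src": src_stp, "dst": dst_stp,
        "start_time": now,                                   # same as the reservation time
        "end_time": now + timedelta(hours=duration_hours),
        "bandwidth_gbps": gbps,
    }
    connection_id = provider.reserve(request)   # phase 1: hold
    provider.reserve_commit(connection_id)      # phase 2: commit
    provider.provision(connection_id)           # data plane comes up right away
    return connection_id
```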

26 Provisioning Sequences (release and re-provision)
(Two requester-provider sequence diagrams, again for manual and automatic provisioning, showing that within the reservation period a connection can be released (release.rq/release.cf) and provisioned again (provision.rq/provision.cf) before it is finally terminated (terminate.rq/terminate.cf).)

27 Robustness of NSI CS 2.0
- Error scenarios were carefully considered and are supported. Supported scenarios include:
  - NSI message delivery failure (transport error in the management plane)
  - A uPA cannot provide the data plane as reserved
  - Temporary and permanent failure scenarios
- Error notification messages are provided to report errors from children to parents.
- A uPA-initiated commit timeout is supported: a uPA does not have to hold resources forever when no commit or abort message is received.
- A uRA can get the statuses of child NSAs by query requests. Simple (one-level) and recursive (multi-level) query operations are provided.
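As noted under the key changes of CS 2.0, query operations can also serve as a polling fallback when replies or notifications are lost. Here is a rough requester-side sketch; query_summary and the returned field names are assumptions for illustration, not the literal NSI operation signature.

```python
import time

def wait_for_stable_reservation(provider, connection_id, poll_seconds=30, max_polls=20):
    """Poll the provider until the reservation reaches a stable state.

    Useful when *.confirm / *.fail replies were lost (e.g. behind a NAT) and the
    requester has to fall back to query-based polling.
    """
    # Stable states as read from the RSM slides above.
    stable = ("ReserveStart", "ReserveHeld", "ReserveFailed", "ReserveTimeout")
    for _ in range(max_polls):
        result = provider.query_summary(connection_id)   # one-level query (assumed stub)
        if result["reservation_state"] in stable:
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"connection {connection_id} did not reach a stable state")
```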

28 NSI Development & Road Map
- OGF NSI-CS version 2.0 is in draft now (Jun. 2013) and will be finalized TODAY!
- Version 2 implementations are under development:
  - OpenNSA - NORDUnet (DK)
  - OpenDRAC - SURFnet (NL)
  - AutoBAHN - GEANT (PL)
  - G-LAMBDA-A - AIST (JP)
  - G-LAMBDA-K - KDDI R&D Labs (JP)
  - DynamicKL - KISTI (KR)
  - OSCARS - ESnet (US)
- First production services planned for 2013: NetherLight, StarLight, NORDUnet, GEANT, KRLight

29 Summary
- The NSI Connection Service protocol has been developed to allow dynamic service networks to interoperate in a multi-provider environment.
- Version 2.0 of the NSI protocol is now being finalized.
- NSI v2.0 can be used as a key building block for the delivery of distributed virtual infrastructures.

30 THANK YOU

31 Background With the trend towards Big Data, the federation of data and services provided by different clouds is becoming key to developing new services. To facilitate these new services, the efficient transfer of data between clouds is becoming more important, and reliable network bandwidth between datacenters is critical to achieving a stable high performance service.

32 Networks in Clouds
- Networks have often been taken for granted by clouds. Activities to make the network a manageable resource are ongoing.
- Longer distances may require using more than one provider.
- A multi-provider network should be just another resource: easy integration with the scheduling of computing/storage resources is important.

33 Multi-domain cloud
(Figure: in a multi-domain cloud, a user (1) requests an inter-DC network, the providers (2) provide it, and the user (3) uses it.)