Facts
  – EC (EU), MIC & NICT (JP) collaborative project
  – Project running from April 2013 till March 2016
  – 302 PMs of effort
  – Partners from PL, BE, NL, DE, IT, ES and JP, with an EU coordinator and a JP coordinator
Objectives
  – To create a large-scale testbed federated across two continents
  – To define a common software architecture for testbeds
The slice concept is adopted in FELIX
  – Experimental facilities are provided dynamically on top of the FELIX physical infrastructure (federated testbeds)
All experimental facilities are controlled programmatically
  – Facilities are composed of computing and network resources (CR and NR) belonging to distributed SDN islands in the FELIX infrastructure
  – Resources are orchestrated in a multi-domain environment
  – In a slice, facilities are interconnected via TN service-controlled domains (transit network)
The user has access to and control of a provided slice
The FELIX Space provides users with slices for their own use
  – Users request the configuration of slices from an RO, then control and manage the provided slices
  – RO: Resource Orchestrator; RM: Resource Manager; PHY RES: physical resources (testbed)
The User Space consists of any tools and applications that a user wants to deploy to control a slice or execute particular operations
FELIX technical documents/deliverables and the architecture whitepaper are available at www.ict-felix.eu
Transport Network Resource Manager (TNRM)
Features
  – Allocation (reservation), provisioning (generation), (re)creation and deletion of inter-domain links between STPs of remote networks
  – Proxy between the RO and an NSI agent to set up connectivity between different domains
Access and interfaces
  – Standard northbound API (GENIv3) for federation with multiple clients
  – Makes the FELIX transit network technology agnostic (e.g. NSI, GRE, etc.)
Mapping of GENI AM API operations to NSI CS 2.0 primitives:
  – Allocate → Reserve + Commit
  – Delete/shutdown → Terminate
  – Provision → (no direct NSI counterpart)
  – PerformOperationalAction(start) → provision (link up)
  – PerformOperationalAction(stop) → release (link down)
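A minimal Python sketch of how this mapping could be expressed in code; the dictionary and function names below are illustrative and not taken from the TNRM sources:

# Hypothetical sketch of the GENI -> NSI operation mapping shown above.
# GENI_TO_NSI and translate_geni_call are illustrative names, not TNRM internals.

GENI_TO_NSI = {
    "Allocate": ["reserve", "reserveCommit"],
    "Delete": ["terminate"],
    "Shutdown": ["terminate"],
    "Provision": [],                                            # no NSI call yet
    ("PerformOperationalAction", "geni_start"): ["provision"],  # link up
    ("PerformOperationalAction", "geni_stop"): ["release"],     # link down
}

def translate_geni_call(method, action=None):
    """Return the ordered list of NSI CS 2.0 primitives for a GENI AM v3 call."""
    key = (method, action) if action else method
    try:
        return GENI_TO_NSI[key]
    except KeyError:
        raise ValueError("No NSI mapping for GENI call %r" % (key,))

# Example: translate_geni_call("Allocate") -> ["reserve", "reserveCommit"]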
Data model: the GENI RSpec is adopted by TNRM; RSpec and NSI messages are translated into each other.
Base tools: the existing eiSoil framework (Python) is used as the base server; Jython (Java-interoperable Python) bridges calls between NSI (Java) and eiSoil (Python).
[Architecture diagram: the RO/MRO reaches the TNRM over GENIv3 RPC (Python RPC carrying GENI RSpecs); inside the TNRM, an eiSoil-based server (src/main.py, vendor/tnrm_delegate.py on SimpleXMLRPCServer, vendor/config.py holding the TNRM topology config, proxy.py converting FELIX requests to NSIv2, reservation.py) calls vendor/nsi2interface.py / NSI2Interface.java, which issues NSI messages (Java method calls with NSI parameters) through the NSIv2 requesting-agent interface towards NSI CS 2.0 providers/aggregators]
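As a rough illustration of the delegate pattern sketched in the diagram above, the following toy example exposes a GENI-style call over XML-RPC and hands it to an NSI bridge object. It uses only Python 3's standard library, whereas the real TNRM builds on eiSoil and a Jython bridge to Java; all class, method and port names here are illustrative:

from xmlrpc.server import SimpleXMLRPCServer

class NSIBridge:
    """Stand-in for the Java-side NSI requester (reached via Jython in TNRM)."""
    def reserve(self, src_stp, dst_stp, capacity):
        # The real bridge would issue NSI CS 2.0 reserve/commit messages here.
        print("NSI reserve: %s -> %s (capacity %s)" % (src_stp, dst_stp, capacity))
        return "reservation-id-1"

class ToyTNRMDelegate:
    """Very small stand-in for the eiSoil delegate handling GENI requests."""
    def __init__(self, bridge):
        self.bridge = bridge

    def allocate(self, slice_urn, request_rspec):
        # A real delegate would parse the RSpec; here we forward fixed STPs.
        return self.bridge.reserve("stp-a", "stp-b", 100)

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("localhost", 8440), allow_none=True)
    server.register_instance(ToyTNRMDelegate(NSIBridge()))
    print("Toy TNRM delegate listening on :8440")
    server.serve_forever()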
[Deployment diagram: a CLI talks GENIv3 to the RO, which talks GENIv3 to the TNRM; the TNRM then uses NSI. Components shown include the AIST NSI aggregator, NSI uPAs at AIST, JGN-X, iCAIR, NL and PSNC, the NetherLight NSI aggregator, the AIST and PSNC islands, and the FELIX and NSI domains]
Example TNRM request RSpec fragment: two <property> entries describing both directions of a link (capacity 100) between the PSNC and AIST STPs.
<node client_id="urn:publicid:IDN+fms:aist:tnrm+stp"
      component_manager_id="urn:publicid:IDN+fms:aist:tnrm+authority+cm">
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            capacity="100"/>
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            capacity="100"/>
</node>
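For illustration, a small Python sketch (not part of TNRM) that pulls the STP pairs and capacities out of a request RSpec like the one above; it deliberately ignores XML namespaces so it works whether or not the GENIv3 request namespace is declared:

import xml.etree.ElementTree as ET

def local(tag):
    """Strip a possible XML namespace, e.g. '{ns}property' -> 'property'."""
    return tag.rsplit('}', 1)[-1]

def stp_links(rspec_xml):
    """Yield (source STP, destination STP, capacity) from a TNRM request RSpec."""
    root = ET.fromstring(rspec_xml)
    for elem in root.iter():
        if local(elem.tag) == "property":
            yield (elem.get("source_id"), elem.get("dest_id"), elem.get("capacity"))

# Example: for link in stp_links(open("request.rspec").read()): print(link)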
Goal: integrate dynamic services offered by research networks into a federated SDN experimental environment
  – NSI-capable infrastructures are increasingly stable, but BoD platforms compliant with NSI v2.0 are still under development
  – It took us six months to establish a full end-to-end control-plane workflow between FELIX endpoints, mainly due to extensive changes in the NSI implementations in transit networks, incompatibilities and misconfigurations
  – FELIX provided valuable feedback to NSI infrastructure managers during the infrastructure setup; data-plane testing also revealed misconfigurations in transit domains
  – FELIX partners proactively worked towards identifying issues and proposing solutions to the transit network administrators (a unique quality of the consortium)
Stitching Entity Resource Manager (SERM)
Solves low-level data-plane issues when interconnecting SDN domains with other kinds of transport networks
  – Switching SDN traffic to/from VLAN-based services of transport networks
      Fully dynamic services: GLIF NSI, Géant BoD (AutoBAHN)
      Static services (created by management procedures): Géant Plus Layer 2, Géant MPVPN
  – Switching SDN traffic to/from tunnels: GRE (in progress)
The SDN domain and the transport network are interconnected by a data-plane proxy device (the "stitching entity")
Low-level data-plane issues addressed:
  – Taking into account limitations on VLAN availability at the transport network domain boundary
  – Both the SDN domain and many transport networks are connected to the stitching device by one or more ports
  – Managing mappings between transport network VLANs and SDN domain VLANs (see the toy sketch below)
  – Managing mappings between ingress (TN) and egress (SDN) ports
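As a toy illustration of the VLAN bookkeeping described above (purely illustrative, not SERM code), here is a mapping table that hands out a free transport-network VLAN for each SDN-side VLAN within the range available at the domain boundary:

class VlanMapper:
    """Toy bookkeeping of SDN-domain VLAN <-> transport-network VLAN mappings."""

    def __init__(self, available_tn_vlans):
        # Only a limited VLAN range is usually usable at the TN domain boundary.
        self.free = sorted(available_tn_vlans)
        self.sdn_to_tn = {}

    def map_vlan(self, sdn_vlan):
        """Reserve a TN VLAN for an SDN VLAN (idempotent for existing mappings)."""
        if sdn_vlan in self.sdn_to_tn:
            return self.sdn_to_tn[sdn_vlan]
        if not self.free:
            raise RuntimeError("No TN VLANs left at the domain boundary")
        tn_vlan = self.free.pop(0)
        self.sdn_to_tn[sdn_vlan] = tn_vlan
        return tn_vlan

    def unmap_vlan(self, sdn_vlan):
        """Release the TN VLAN when the slice's stitching is torn down."""
        tn_vlan = self.sdn_to_tn.pop(sdn_vlan)
        self.free.append(tn_vlan)

# Example: VlanMapper(range(1000, 1010)).map_vlan(100) -> 1000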
Northbound interface
  – GENI version 3
Southbound interface (towards the stitching entity)
  – RPC interface of the OpenFlow POX controller
  – HTTP/REST interface of the OpenFlow Ryu controller (see the sketch after this list)
  – Can be easily extended to other kinds of APIs
Open source (Apache 2.0-licensed), written in Python
  – https://github.com/dana-i2cat/felix/tree/stitching-entity
Status so far
  – Deployed in five FELIX islands (PSNC, i2CAT, KDDI, AIST and EICT)
  – Controls only OpenFlow switches, where the VLAN-translation flowmods are installed
  – Developed from scratch and fully supported by PSNC
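To make the southbound Ryu path concrete, the hedged sketch below installs one VLAN-translation flow through Ryu's ofctl_rest application. It assumes ofctl_rest is loaded and reachable at RYU_URL; the datapath id, ports, VLAN ids and the OFPVID_PRESENT handling are illustrative and may need adjusting to the switch's OpenFlow version (this is not SERM code):

import json
import urllib.request

RYU_URL = "http://127.0.0.1:8080"   # assumed ofctl_rest endpoint
OFPVID_PRESENT = 0x1000             # OF1.3 flag carried in the vlan_vid set-field

def translate_vlan(dpid, in_port, out_port, vlan_in, vlan_out):
    """Rewrite vlan_in arriving on in_port to vlan_out and forward to out_port."""
    flow = {
        "dpid": dpid,
        "priority": 100,
        "match": {"in_port": in_port, "dl_vlan": vlan_in},
        "actions": [
            {"type": "SET_FIELD", "field": "vlan_vid",
             "value": vlan_out | OFPVID_PRESENT},
            {"type": "OUTPUT", "port": out_port},
        ],
    }
    req = urllib.request.Request(
        RYU_URL + "/stats/flowentry/add",
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example: translate_vlan(dpid=1, in_port=1, out_port=2, vlan_in=100, vlan_out=1200)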
Resource Orchestrator
GENIv3-enabled entry point for the RMs in a facility
  – Upper layer that provides a centralised approach per domain and a distributed one per federation
  – Registers GENIv3-enabled peers
      Managed RMs: CRM, SDNRM, SERM, TNRM
      Also other ROs: stackable, recursive management (see the sketch after this list)
  – Makes it possible to enforce per-domain policies for every peer
Aggregates information on resources and facilities
  – Keeps an updated list of the resources provided by each registered peer
  – Extra monitoring of the underlying physical facility (topology) and slice details, integrated with the FELIX MS
Maintenance, scalability and debugging
  – Open source (Apache 2.0-licensed), written in Python: https://github.com/dana-i2cat/felix/tree/resource-orchestrator
  – Comprehensive logging to keep track of both automatic and experimenters' requests
  – New types of RMs and data models can be easily added
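A minimal sketch of the orchestration idea (register GENIv3 peers, fan a ListResources-style query out to all of them, and aggregate the answers); the class and method names are illustrative only and do not mirror the actual RO code:

class ResourceOrchestrator:
    """Toy orchestrator: keeps a registry of GENIv3 peers (RMs or nested ROs)."""

    def __init__(self):
        self.peers = {}   # name -> client object exposing list_resources()

    def register_peer(self, name, client):
        self.peers[name] = client

    def list_resources(self):
        """Aggregate per-peer advertisements into one view for the experimenter."""
        aggregated = {}
        for name, client in self.peers.items():
            try:
                aggregated[name] = client.list_resources()
            except Exception as exc:    # keep serving even if one peer is down
                aggregated[name] = {"error": str(exc)}
        return aggregated

# Because a peer may itself be another ResourceOrchestrator exposing the same
# interface, the stackable / recursive (MRO over RO over RMs) arrangement
# described above falls out naturally.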
Top (Master) RO – manages ROs in different domains
Middle RO – manages RMs within the same domain
[Diagram: candidate orchestration topologies – Centralized (one RO over all RMs), Full mesh, Distributed, and Hybrid (Global RO → Continent ROs → Island ROs → RMs); the hybrid, hierarchical model was selected for implementation]
Northbound API (to client): GENIv3
Supported tools for experimenters: OMNI, jFed
Southbound API: GENIv3 (peer), REST (MS)
Introduces a logical upper layer to perform homogeneous operations on all RMs
  – Common control access to request resources on any underlying RM or RO
  – Intra-domain monitoring: overview of resources' status, physical infrastructure, etc.
Status so far
  – Stand-alone component, currently serving multiple infrastructures within the FELIX testbed
      Easy to deploy and supported by NXW and i2CAT
      Compatible with popular GENI experimenters' tools
  – Can be introduced in any other infrastructure (minimal requirements and dependencies)
  – Alternative means for configuration and documentation
      Scripts and a CLI to manage peers within the RO
      Several configuration options
      Source and documentation under continued enhancement
SDN Resource Manager (SDN RM)
GENIv3-enabled RM to allocate OpenFlow resources (similar to FOAM)
Features (legacy Opt-in manager + OFAM)
  – Management of resources through FlowVisor (v0.8.7)
  – GUI-based administration
  – Two request modes (web side): simple and advanced
Improvements (SDNRM)
  – Support of the GENIv3 API and GENIv3 RSpecs + OpenFlow extensions
  – Possibility of enabling automatic approval of slices (Expedient)
  – Default re-approval of resources previously granted by the admin
  – Automatic installation and configuration of FlowVisor (v1.4.0)
Maintenance and debugging
  – Open source (FP7/OFELIA-licensed), written in Python: https://github.com/dana-i2cat/felix/tree/ocf
  – Multiple log traces to keep track of requests
  – Integrated with Expedient for web-based access to SDNRM's logs
Northbound API (to client): GENIv3, Expedient/OFELIA
Supported tools for experimenters: OMNI, jFed
SDNRM works as an alternative to FOAM
  – SDNRM provides the following extra features:
      Support of the GENIv3 API
      Automatic approval through Expedient, and re-approval of previously granted resources through the GENI API
      Automatic installation and configuration of FlowVisor (v1.4.0)
  – And a limitation: resources requested through the GENI API must first be approved by an administrator
Status so far
  – Works both as a stand-alone module (GENIv3) and integrated with a GUI (Expedient, jFed)
  – Currently serving multiple infrastructures within the FELIX testbed
  – Compatible with popular GENI experimenters' tools

                         SDNRM          FOAM
  Supported interfaces   GENIv3         GENIv2
  Supported RSpecs       GENIv3 + OF
  Management             GUI            CLI
  Experimenter access    GUI, CLI       CLI
Computing Resource Manager (CRM)
GENIv3-enabled RM to manage computing resources (VMs)
Based on OFELIA VTAM
  – Allows allocating (reservation), provisioning (generation), (re)starting and deleting VMs on different physical servers
  – Management of resources through the XEN hypervisor
  – GUI-based administration
Improvements (CRM)
  – User SSH key contextualisation in the virtual machines (log-in through public keys)
  – Support of the GENIv3 API and RSpecs
  – Management of resources also through the KVM hypervisor
  – Automatic deletion of resources after expiration
Maintenance and debugging
  – Open source (FP7/OFELIA-licensed), written in Python: https://github.com/dana-i2cat/felix/tree/ocf
  – Multiple log traces to keep track of requests
  – Integrated with Expedient for web-based access to CRM's logs
Northbound API (to client): GENIv3, Expedient/OFELIA
Supported tools for experimenters: OMNI, jFed
CRM allows managing the VM lifecycle (allocating, provisioning, deleting, ...) through the GENIv3 API
  – XEN and KVM support, offered as two different flavours
  – SSH log-in through keys; live SSH key updates are possible through the API (geni_update_users)
  – Support for the basic management operations (geni_start, geni_stop, geni_restart); see the sketch after this list
Status so far
  – Works both as a stand-alone module (GENIv3) and integrated with a GUI (Expedient, jFed)
  – Currently serving multiple infrastructures within the FELIX testbed
  – Compatible with popular GENI experimenters' tools
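As an illustration of what an experimenter's tool does under the hood, the sketch below calls the standard GENI AM API v3 PerformOperationalAction method on an aggregate over XML-RPC with the experimenter's certificate. The URL, file paths and credential contents are placeholders; in practice tools such as Omni or jFed handle this exchange:

import ssl
import xmlrpc.client

AM_URL = "https://crm.example.org:8440/xmlrpc/geni/3"   # placeholder aggregate URL

def start_slivers(slice_urn, credential_xml, cert="user-cert.pem", key="user-key.pem"):
    """Ask the aggregate to boot the slivers of a slice (GENI AM API v3)."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(cert, key)      # client-certificate authentication
    # A test aggregate with a self-signed certificate may additionally need
    # ctx.check_hostname = False and ctx.verify_mode = ssl.CERT_NONE.
    am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)
    creds = [{"geni_type": "geni_sfa", "geni_version": "3", "geni_value": credential_xml}]
    return am.PerformOperationalAction([slice_urn], creds, "geni_start", {})

# The same proxy also exposes Allocate, Provision, Status, Delete, etc.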
C-BAS
A clearinghouse for SDN experimental facilities
  – Supports federation with GENI/FIRE SDN test-beds
  – Issues certificates and credentials to experimenters and tools
  – Acts as an anchor in a distributed web of trust
  – Keeps event logs for accountability purposes
Performs authentication using
  – Certificates: X.509 version 3 (see the sketch after this list)
  – Signed credentials: GENI SFA format
Manages information about
  – Experimenters, their privileges, roles, SSH keys, etc.
  – Slices, projects and slivers
Open source implementation in Python
  – Extensible through plug-ins
  – Computationally efficient, to serve large-scale experimental facilities
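For illustration only, a hedged sketch of the kind of check a clearinghouse-aware service performs on an experimenter's X.509 v3 certificate, using the third-party cryptography package; the URN-in-subjectAltName convention follows common GENI practice, and the code is not taken from C-BAS:

from datetime import datetime
from cryptography import x509
from cryptography.hazmat.backends import default_backend

def inspect_user_cert(pem_bytes):
    """Return (experimenter URN if present, certificate currently valid?)."""
    cert = x509.load_pem_x509_certificate(pem_bytes, default_backend())
    urn = None
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        uris = san.value.get_values_for_type(x509.UniformResourceIdentifier)
        # GENI-style certificates carry the member URN in subjectAltName.
        urn = next((u for u in uris if u.startswith("urn:publicid:IDN+")), None)
    except x509.ExtensionNotFound:
        pass
    valid = cert.not_valid_before <= datetime.utcnow() <= cert.not_valid_after
    return urn, valid

# Example: inspect_user_cert(open("user-cert.pem", "rb").read())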
Supported APIs
  1. Federation Service API version 2 – the standard API to be supported by GENI-compatible federations
  2. FELIX AAA API – an extension of the above API to serve FELIX-specific services
Supported tools for experimenters
  1. Omni – a GENI command-line tool for reserving resources using the GENI AM API
  2. jFed Experimenter GUI and CLI – developed in Fed4FIRE to allow end users to provision and manage experiments
  3. Expedient – a web-based experimenters' tool extended by the OFELIA and FELIX projects
An attractive clearinghouse candidate for SDN experimental facilities
  – Standalone software component
      Tested through deployment in the FELIX islands
      Continuously enhanced and fully supported by EICT
      Compatible with popular experimenters' tools
  – Shipped with a backend administrator's tool
      Create/revoke/manage user accounts
      Administrate slice and project objects
      Check event logs, etc.
  – Simplifies the process of federation with existing GENI/FIRE test-beds
Project partners:
  – Poznan Supercomputing and Networking Center (Poland)
  – Nextworks (Italy)
  – European Center for Information and Communication Technologies GmbH (Germany)
  – Fundacio Privada i2CAT, Internet i Innovacio Digital a Catalunya (Spain)
  – SURFnet bv (Netherlands)
  – KDDI (Japan)
  – National Institute of Advanced Industrial Science and Technology (Japan)
  – iMinds VZW (Belgium)