FELIX: an EC (EU) and MIC & NICT (JP) collaborative project, running from April 2013 till March 2016 with 302 PMs of effort. Partners in PL, BE, NL, DE, IT, ES and JP; an EU coordinator and a JP coordinator.


EC (EU), MIC & NICT (JP) collaborative project
Facts:
–Project running from April 2013 till March 2016
–302 PMs of effort
–Partners in PL, BE, NL, DE, IT, ES and JP
–EU coordinator and JP coordinator
Objectives:
–To create a large-scale testbed federated across two continents
–To define a common software architecture for testbeds

The slice concept is adopted in FELIX:
–Experimental facilities are provided dynamically on top of the FELIX physical infrastructure (federated testbeds).
–All experimental facilities are controlled programmatically.
–Facilities are composed of computing and network resources (CR and NR) belonging to distributed SDN islands in the FELIX infrastructure.
–Resources are orchestrated in a multi-domain environment.
–Within a slice, facilities are interconnected via TN service-controlled domains (transit network).
–The user has access to, and control of, a provided slice.

The FELIX Space provides users with slices for their own use. Users request slices from an RO, then control and manage them.
–RO: Resource Orchestrator
–RM: Resource Manager
–PHY RES: physical resources (testbed)
The User Space consists of any tools and applications that a user wants to deploy to control a slice or execute particular operations.

FELIX technical documents/deliverables and architecture whitepaper available at

Transport Network Resource Manager (TNRM)

Features:
–Allocation (reservation), provisioning (generation), (re)creation and deletion of inter-domain links between STPs of remote networks.
–Acts as a proxy between the RO and an NSI agent to set up connectivity between different domains.
–Access and interfaces: standard northbound API (GENIv3) for federation with multiple clients.
–Makes the FELIX transit network technology agnostic (e.g. NSI, GRE, etc.).

GENI → NSI operation mapping:
–Allocate → Reserve + Commit
–Delete/Shutdown → Terminate
–Provision → (no NSI counterpart)
–PerformOperationalAction start → Provision (link up)
–PerformOperationalAction stop → Release (link down)
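The mapping above can be captured in a small lookup table. A minimal sketch in Python; the function and constant names are illustrative, not taken from the FELIX code base:

```python
# Illustrative GENI-to-NSI operation mapping, as performed by TNRM.
# Values are the ordered NSI CS2.0 calls issued for each GENI AM API call.
GENI_TO_NSI = {
    "Allocate": ["Reserve", "Commit"],   # reservation is a two-step NSI exchange
    "Delete": ["Terminate"],
    "Shutdown": ["Terminate"],
    "Provision": [],                     # no NSI counterpart
    "PerformOperationalAction:geni_start": ["Provision"],  # link up
    "PerformOperationalAction:geni_stop": ["Release"],     # link down
}

def nsi_calls_for(geni_op, action=None):
    """Return the ordered NSI calls for a GENI operation (empty if none)."""
    key = f"{geni_op}:{action}" if action else geni_op
    return GENI_TO_NSI.get(key, [])
```

A table like this keeps the proxy logic declarative: adding a new transit technology only means supplying another mapping.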

Data model: the GENI RSpec is adopted by TNRM; RSpec and NSI messages are translated into each other.
Base tools: the existing eiSoil framework (Python) is used as the base server; Jython (Java-interoperable Python) is used to bridge calls from NSI (Java) with those from eiSoil (Python).
[Architecture diagram: the RO/MRO issues GENIv3 RPCs (Python RPC with GENI RSpecs, via SimpleXMLRPCServer) to the TNRM eiSoil server (src/main.py, vendor/tnrm_delegate.py, proxy.py, reservation.py, topology config in vendor/config.py); vendor/nsi2interface.py converts FELIX requests to NSIv2 and, through the NSIv2 Requesting Agent Interface (NSI2Interface.java), issues NSI messages (Java method calls with NSI parameters) to the NSI CS2.0 providers/aggregator.]

[Deployment diagram: a CLI drives the RO over the GENIv3 API; the RO drives TNRM over the GENIv3 API; TNRM speaks NSI to the AIST NSI Aggregator, which connects to the AIST, JGN-X, iCAIR, NL and PSNC NSI uPAs and to the NetherLight NSI Aggregator, linking the FELIX domain (AIST and PSNC islands) across the NSI domain.]

<node client_id="urn:publicid:IDN+fms:aist:tnrm+stp"
      component_manager_id="urn:publicid:IDN+fms:aist:tnrm+authority+cm">
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            capacity="100"/>
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            capacity="100"/>
</node>
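For illustration, the endpoint pairs in such an RSpec fragment can be extracted with Python's standard ElementTree. This is a hedged sketch (RSpec namespaces omitted for brevity), not TNRM's actual parsing code:

```python
import xml.etree.ElementTree as ET

# The TNRM node/property fragment shown above, with namespaces dropped.
rspec = """<node client_id="urn:publicid:IDN+fms:aist:tnrm+stp"
      component_manager_id="urn:publicid:IDN+fms:aist:tnrm+authority+cm">
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            capacity="100"/>
  <property source_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:aist.go.jp:2013:topology:bi-ps"
            dest_id="urn:publicid:IDN+fms:aist:tnrm+stp+urn:ogf:network:pionier.net.pl:2013:topology:server_port"
            capacity="100"/>
</node>"""

node = ET.fromstring(rspec)
# One (source STP, destination STP, capacity) tuple per requested direction.
links = [(p.get("source_id"), p.get("dest_id"), int(p.get("capacity")))
         for p in node.findall("property")]
```

Each `property` element describes one direction of the inter-domain link, which is why the PIONIER–AIST pair appears twice with source and destination swapped.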

Goal: integrate dynamic services offered by research networks into a federated SDN experimental environment.
–NSI-capable infrastructures are increasingly stable, but BoD platforms compliant with NSI v2.0 are still under development. It took us 6 months to establish a full end-to-end control-plane workflow between FELIX endpoints, mainly due to extensive changes in the NSI implementations in transit networks, incompatibilities, misconfigurations, etc.
–FELIX provided valuable feedback to NSI infrastructure managers during the infrastructure setup. Data-plane testing also uncovered misconfigurations in transit domains.
–FELIX partners proactively worked towards identifying issues and proposing solutions to the transit network administrators (a unique quality of the consortium).

Stitching Entity Resource Manager (SERM)

Solves low-level data-plane issues when interconnecting SDN domains with other kinds of transport networks:
–Switching SDN traffic to/from VLAN-based services of transport networks: fully dynamic services (GLIF NSI, GÉANT BoD/AutoBAHN) and static services created by management procedures (GÉANT Plus Layer 2, GÉANT MPVPN).
–Switching SDN traffic to/from tunnels: GRE (in progress).
The SDN domain and the transport network are interconnected by a data-plane proxy device (the "stitching entity"). Low-level data-plane issues addressed:
–Taking into account the limited availability of VLANs at the transport-network domain boundary.
–Connecting both the SDN domain and multiple transport networks to the stitching device by one or more ports.
–Managing mappings between transport-network VLANs and SDN-domain VLANs.
–Managing mappings between ingress (TN) and egress (SDN) ports.

Northbound interface:
–GENI version 3
Southbound interface (towards the stitching entity):
–RPC interface of the OpenFlow POX controller
–HTTP/REST interface of the OpenFlow Ryu controller
–Can easily be extended to other kinds of APIs
Open source (Apache 2.0-licensed), written in Python.
Status so far:
–Deployed in five FELIX islands (PSNC, i2CAT, KDDI, AIST and EICT)
–Controls only OpenFlow switches, where VLAN-translation flowmods are installed
–Developed from scratch and fully supported by PSNC
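As a sketch of what a VLAN-translation flowmod pushed over the Ryu REST southbound might look like, the following builds an ofctl_rest-style payload. The function name and all port/VLAN numbers are illustrative, not SERM's actual code:

```python
def vlan_translation_flow(dpid, in_port, tn_vlan, out_port, sdn_vlan):
    """Build a Ryu ofctl_rest flow entry that rewrites a transit-network
    VLAN to the SDN-island VLAN and forwards the frame.

    The resulting dict would be POSTed as JSON to the Ryu controller's
    /stats/flowentry/add endpoint (OpenFlow 1.0 field names).
    """
    return {
        "dpid": dpid,                                   # stitching-entity switch
        "match": {"in_port": in_port, "dl_vlan": tn_vlan},
        "actions": [
            {"type": "SET_VLAN_VID", "vlan_vid": sdn_vlan},  # translate VLAN
            {"type": "OUTPUT", "port": out_port},            # towards SDN island
        ],
    }
```

A second, mirrored entry (SDN VLAN in, TN VLAN out) would be needed for return traffic.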

Resource Orchestrator

GENIv3-enabled entry point for the RMs in a facility:
–Upper layer providing a centralised approach per domain, distributed per federation.
–Registers GENIv3-enabled peers: managed RMs (CRM, SDNRM, SERM, TNRM) and also other ROs (stackable, recursive management).
–Policies can be enforced per domain, for every peer.
Aggregates information on resources and facilities:
–Keeps an updated list of the resources provided by each registered peer.
–Extra monitoring of the underlying physical facility (topology) and slice details; integrated with the FELIX MS.
Maintenance, scalability and debugging:
–Open source (Apache 2.0-licensed), written in Python.
–Comprehensive logging to keep track of both automatic and experimenters' requests.
–New types of RMs and data models can easily be added.

The top (master) RO manages the ROs in different domains; a middle RO manages the RMs within its own domain.

[Diagram of candidate RO deployment models: centralized (a single RO in front of all RMs), full mesh, distributed, and a hybrid with a global RO above continent ROs above island ROs. The hybrid model was selected for implementation.]
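The stackable, recursive RO arrangement can be sketched in a few lines. Class and method names here are hypothetical, chosen only to illustrate how a master RO aggregates island ROs and plain RMs through one interface:

```python
class Peer:
    """Leaf resource manager (stand-in for CRM, SDNRM, SERM or TNRM)."""
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources

    def list_resources(self):
        return list(self.resources)


class ResourceOrchestrator:
    """RO aggregating registered peers. Because an RO exposes the same
    list_resources() call as an RM, ROs stack recursively: a master RO
    registers island ROs exactly as it registers plain RMs."""
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers

    def list_resources(self):
        # Flatten the view over every registered peer, in registration order.
        return [r for peer in self.peers for r in peer.list_resources()]


# Hybrid layout: an island RO under a master RO, next to a directly managed RM.
island = ResourceOrchestrator("island-RO",
                              [Peer("CRM", ["vm-1"]), Peer("SDNRM", ["dpid-1"])])
master = ResourceOrchestrator("master-RO", [island, Peer("TNRM", ["stp-link"])])
```

Calling `master.list_resources()` returns the island's VMs and datapaths alongside the transit-network link, which is the aggregated, per-federation view described above.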

Northbound API (to client): GENIv3. Supported tools for experimenters: OMNI, jFed. Southbound API: GENIv3 (peer), REST (MS).

Introduces a logical upper layer to perform homogeneous operations on all RMs:
–Common control access to request resources on any underlying RM or RO.
–Intra-domain monitoring: overview of resources' status, physical infrastructure, etc.
Status so far:
–Stand-alone component, currently serving multiple infrastructures within the FELIX testbed; easy to deploy and supported by NXW and i2CAT; compatible with popular GENI experimenters' tools.
–Can be introduced in any other infrastructure (minimal requirements and dependencies).
–Alternative means for configuration and documentation: scripts and a CLI to manage peers within the RO, several configuration options; source and documentation under continued enhancement.

SDN Resource Manager (SDN RM)

GENIv3-enabled RM to allocate OpenFlow resources (similar to FOAM).
Features (legacy Opt-in manager + OFAM):
–Management of resources through FlowVisor (v0.8.7).
–GUI-based administration.
–Two request modes (web side): simple and advanced.
Improvements (SDNRM):
–Support of the GENIv3 API and GENIv3 RSpecs + OpenFlow extensions.
–Possibility of enabling automatic approval of slices (Expedient).
–Default re-approval for resources previously granted by an admin.
–Automatic installation and configuration of FlowVisor (v1.4.0).
Maintenance and debugging:
–Open source (FP7/OFELIA-licensed), written in Python.
–Multiple log traces to keep track of requests; integrated with Expedient for web-based access to SDNRM's logs.

Northbound API (to client): GENIv3, Expedient/OFELIA. Supported tools for experimenters: OMNI, jFed.

SDNRM works as an alternative to FOAM.
SDNRM provides the following extra features:
–Support of the GENIv3 API.
–Automatic approval through Expedient, and re-approval of previously granted resources through the GENI API.
–Automatic installation and configuration of FlowVisor (v1.4.0).
And a limitation:
–Approval through the GENI API must first be granted by an administrator.
Status so far:
–Works both as a stand-alone module (GENIv3) and integrated with a GUI (Expedient, jFed).
–Currently serving multiple infrastructures within the FELIX testbed; compatible with popular GENI experimenters' tools.
Comparison:
                       SDNRM         FOAM
Supported interfaces   GENIv3        GENIv2
Supported RSpecs       GENIv3 + OF
Management             GUI           CLI
Experimenter access    GUI, CLI      CLI

Computing Resource Manager (CRM)

GENIv3-enabled RM to manage computing resources (VMs), based on OFELIA VTAM:
–Allows allocating (reservation), provisioning (generation), (re)starting and deleting VMs on different physical servers.
–Management of resources through the XEN hypervisor.
–GUI-based administration.
Improvements (CRM):
–User SSH key contextualisation in the virtual machines → log-in through public keys.
–Support of the GENIv3 API and RSpecs.
–Management of resources also through the KVM hypervisor.
–Automatic deletion of resources after expiration.
Maintenance and debugging:
–Open source (FP7/OFELIA-licensed), written in Python.
–Multiple log traces to keep track of requests; integrated with Expedient for web-based access to CRM's logs.

Northbound API (to client): GENIv3, Expedient/OFELIA. Supported tools for experimenters: OMNI, jFed.

CRM allows managing the VM lifecycle (allocating, provisioning, deleting, etc.) through the GENIv3 API:
–XEN and KVM support, in two different flavours.
–SSH log-in through keys; live SSH key update is possible through the API (geni_update_users).
–Support for basic management operations (geni_start, geni_stop, geni_restart).
Status so far:
–Works both as a stand-alone module (GENIv3) and integrated with a GUI (Expedient, jFed).
–Currently serving multiple infrastructures within the FELIX testbed; compatible with popular GENI experimenters' tools.
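The lifecycle can be pictured as a small state machine keyed by GENIv3 operations. This is a hedged sketch with states and transitions simplified from the GENI AM API naming, not CRM's actual implementation:

```python
# Simplified sliver lifecycle: (current state, operation) -> next state.
TRANSITIONS = {
    ("geni_unallocated", "allocate"): "geni_allocated",     # reservation only
    ("geni_allocated", "provision"): "geni_provisioned",    # VM is created
    ("geni_provisioned", "geni_start"): "geni_ready",       # VM boots
    ("geni_ready", "geni_stop"): "geni_notready",
    ("geni_notready", "geni_start"): "geni_ready",
    ("geni_ready", "geni_restart"): "geni_ready",
}

def advance(state, operation):
    """Apply a GENI operation, rejecting ones invalid in the current state."""
    try:
        return TRANSITIONS[(state, operation)]
    except KeyError:
        raise ValueError(f"{operation!r} is not valid in state {state!r}")
```

The split between `allocate` and `provision` is what lets an experimenter hold a reservation across several RMs before committing to VM creation.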

C-BAS

A clearinghouse for SDN experimental facilities:
–Supports federation with GENI/FIRE SDN test-beds.
–Issues certificates and credentials to experimenters and tools.
–Acts as an anchor in a distributed web of trust.
–Keeps event logs for accountability purposes.
Performs authentication using:
–Certificates: X.509 version 3.
–Signed credentials: GENI SFA format.
Manages information about:
–Experimenters: their privileges, roles, SSH keys, etc.
–Slices, projects and slivers.
Open-source implementation in Python:
–Extensible through plug-ins.
–Computationally efficient, to serve large-scale experimental facilities.
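A credential check of the kind a clearinghouse performs can be sketched as follows. The stand-in class ignores the XML signing and X.509 chain that real SFA credentials carry, so every name here is illustrative:

```python
from datetime import datetime

class Credential:
    """Illustrative stand-in for a signed SFA credential binding an owner
    to a set of privileges on a target (slice/project), with an expiry."""
    def __init__(self, owner, target, privileges, expires):
        self.owner = owner
        self.target = target
        self.privileges = set(privileges)
        self.expires = expires

def authorize(cred, caller, target, privilege, now):
    """Grant the call only if the credential names this caller and target,
    carries the requested privilege, and has not expired."""
    return (cred.owner == caller
            and cred.target == target
            and privilege in cred.privileges
            and now < cred.expires)
```

In a real deployment the signature would be verified against the clearinghouse's certificate chain before any of these attribute checks, and each decision would be written to the event log for accountability.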

Supported APIs:
1. Federation Service API version 2: the standard APIs to be supported by GENI-compatible federations.
2. FELIX AAA API: an extension of the above API to serve FELIX-specific services.
Supported tools for experimenters:
1. Omni: a GENI command-line tool for reserving resources using the GENI AM API.
2. jFed Experimenter GUI and CLI: developed in Fed4FIRE to let end users provision and manage experiments.
3. Expedient: a web-based experimenters' tool extended by the OFELIA and FELIX projects.

An attractive clearinghouse candidate for SDN experimental facilities:
–Standalone software component, tested through deployment in the FELIX islands; continuously enhanced and fully supported by EICT; compatible with popular experimenters' tools.
–Shipped with a backend administrator's tool: create/revoke/manage user accounts, administer slice and project objects, check event logs, etc.
–Simplifies the process of federating with existing GENI/FIRE test-beds.

Partners:
–Poznan Supercomputing and Networking Center (Poland)
–Nextworks (Italy)
–European Center for Information and Communication Technologies GmbH (Germany)
–Fundacio Privada i2CAT, Internet i Innovacio Digital a Catalunya (Spain)
–SURFnet bv (Netherlands)
–KDDI (Japan)
–National Institute of Advanced Industrial Science and Technology (Japan)
–iMinds VZW (Belgium)