Building service testbeds on FIRE: the cloud-to-network interface in the BonFIRE infrastructure
Gino Carrozzo, Nextworks
TNC2012, Reykjavík, 21–24 May 2012


Slide 1 – Building service testbeds on FIRE: the cloud-to-network interface in the BonFIRE infrastructure (Gino Carrozzo, Nextworks; TNC2012, Reykjavík, May 2012)

Slide 2 – The cloud-to-net rationale
– Cloud computing (but also grids and SOAs at large, and BonFIRE) relies on a vital commodity: the network.
– An ever-increasing number of distributed [super-]computing applications pose highly demanding requirements for dynamicity and flexibility in network + IT resource control (e.g. automated scaling up/down)…
– …but their network services are still treated as "always-on":
  – the application layer is unable to exploit the automatic control capabilities of current optical (and non-optical) network technologies;
  – IT resource dynamics are completely uncorrelated from network dynamics;
  – the common trend is to over-provision network services, leading to inefficient resource utilisation in the network, above all in case of fault recovery.

Slide 3 – Some reference network control technologies

Cloud/User-to-Network Interface
– Academic solutions: AutoBAHN UAP, OGF NSI, NIPS-UNI (ICT-GEYSERS), G.UNI (IST-PHOSPHORUS)
– Commercial solutions: OIF UNI 2.0, carriers' Bandwidth-on-Demand systems (e.g. Verizon BoD)

Network Control
– Academic solutions: GÉANT BoD, Manticore I (i2CAT and Juniper), Manticore II (i2CAT and Cisco), FP7-RI-Mantychore (i2CAT), Grid-enabled G²MPLS control plane (IST-PHOSPHORUS), DRAGON (NSF project), OSCARS (ESnet, US), BAR (EGEE)
– Commercial solutions: ASON/GMPLS control planes, OpenStack (open source, Network Containers in particular), D-RAC by Nortel

Slide 4 – BonFIRE: a cloud infrastructure for the IoS community (and beyond)
– 6 testbeds distributed across Europe
  – Permanent: ~320 cores / 30 TB
  – On demand: up to 3k+ cores
– Heterogeneous cloud resources (compute, storage and networking)
  – Seamlessly accessible with a single experiment descriptor
  – OCCI-based API (see the sketch below)
  – Application-specific contextualisation for compute resources (VMs)
  – Experiment elasticity
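Since the broker exposes an OCCI-based API, creating the resources described in an experiment descriptor boils down to OCCI requests. Below is a minimal sketch in Python using a generic OCCI 1.1 text rendering; the broker URL, credentials and attribute set are illustrative placeholders and not the exact BonFIRE contract.

```python
# Minimal sketch: creating a compute resource through an OCCI-style HTTP API.
# The broker URL, credentials and attribute names below are illustrative
# assumptions, not the exact BonFIRE endpoint.
import requests

BROKER = "https://broker.example.org"          # hypothetical BonFIRE broker URL
AUTH = ("experimenter", "secret")              # hypothetical credentials

headers = {
    "Content-Type": "text/occi",
    "Accept": "text/occi",
    # OCCI 1.1 'kind' of the resource to be created
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    # Requested attributes of the VM
    "X-OCCI-Attribute": 'occi.core.title="exp-42-node-1", occi.compute.cores=2, occi.compute.memory=2.0',
}

resp = requests.post(f"{BROKER}/compute/", headers=headers, auth=AUTH, timeout=30)
resp.raise_for_status()
print("New compute resource:", resp.headers.get("Location"))
```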

Slide 5 – Available resources

Slide 6 – High-level (simplified!) architecture
[Architecture diagram showing: Portal; BonFIRE Broker with the RESTful BonFIRE API, Experiment Manager, Enactor, Resource Manager, experiment descriptor (Open Virtualization Format, OVF), Monitoring, Identity, Policy Manager, elasticity rules & engine, and Security; Cloud Providers reached through the Cloud API (OCCI), running OpenNebula, Cells, VMware ESXi, KVM or Xen.]

Slide 7 – Three usage scenarios
General classes of experiment supported by the facility:
– Scenario 1. Extended cloud scenario: a federated facility with heterogeneous virtualized resources and best-effort Internet interconnectivity
– Scenario 2. Cloud with emulated network implications: an experimental network emulation platform under full control of the experimenter
– Scenario 3. Extended cloud with complex physical network implications: an experimental cloud system federated with GÉANT BoD and FEDERICA (collaboration with NOVI)

BonFIRE site facility and connecting NREN:
– EPCC – JANET
– HLRS – DFN
– HPLabs – JANET
– IBBT – BelNET
– INRIA – RENATER
– PSNC – PIONIER

Slide 8 – Cloud-to-Network interface in BonFIRE
Why?
– To allow network-aware experiments with requirements for guaranteed bandwidth (Scenario 3):
  – network QoS control
  – advance reservations
Solution: integrate the GÉANT Bandwidth-on-Demand interfaces (AutoBAHN) into the architecture
– EPCC–PSNC early deployment pilots (EPCC BonFIRE site, PSNC BonFIRE site)
Why GÉANT BoD?
– The most mature solution in terms of specification, implementation and deployment in the multi-domain environment interconnecting some of the sites
– Handles both on-demand and advance reservation requests
– AutoBAHN is the reference BoD technology for PSNC and JANET
– BonFIRE selected as a use case for the GÉANT BoD service pilot

Slide 9 – Use-case 1: shared site-to-site BoD
[Diagram: BonFIRE site A – NREN A + GÉANT + NREN B – BonFIRE site B; experiment 1 and experiment 2 traffic flows share one BoD network service]
– A BoD service between two BonFIRE sites shared among different experiments
– Preliminary setup of the network service (not experiment-based)
– Traffic flows from experiments multiplexed across the logical link
– Local configuration of the LAN switches/routers at BonFIRE sites for traffic routing and shaping (see the sketch below)
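To make the shaping step concrete, here is a minimal sketch of how two experiment flows could be mapped onto shares of a shared 1 Gbit/s BoD link, assuming a Linux-based egress node; the interface name, subnets and rates are invented, and real BonFIRE sites may instead rely on vendor switch/router configurations.

```python
# Illustrative only: shaping two experiment flows into shares of a 1 Gbit/s
# shared BoD link on a Linux egress node. Interface name, subnets and rates
# are hypothetical; real sites may use vendor switch/router configs instead.
import subprocess

IFACE = "eth1"                                 # egress interface towards the BoD link (assumed)
EXPERIMENTS = {
    "1:10": ("10.1.1.0/24", "600mbit"),        # experiment 1 subnet and guaranteed rate
    "1:20": ("10.1.2.0/24", "400mbit"),        # experiment 2 subnet and guaranteed rate
}

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc with a parent class matching the booked BoD capacity
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "30")
tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:1", "htb", "rate", "1gbit")
# Best-effort catch-all class for unclassified traffic
tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", "1:30", "htb", "rate", "1mbit", "ceil", "1gbit")

for classid, (subnet, rate) in EXPERIMENTS.items():
    # Per-experiment class: guaranteed 'rate', may borrow up to the full link
    tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", classid,
       "htb", "rate", rate, "ceil", "1gbit")
    # Classify traffic by destination subnet into the experiment's class
    tc("filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:", "prio", "1",
       "u32", "match", "ip", "dst", subnet, "flowid", classid)
```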

Slide 10 – Use-case 2: per-experiment BoD
[Diagram: BonFIRE site A – NREN A + GÉANT + NREN B – BonFIRE site B; experiment 1 and experiment 2 each have their own BoD network service]
– A dedicated BoD service between two BonFIRE sites reserved for a single experiment
– On-demand setup of the network service during experiment declaration
– Network service lifecycle equal to the experiment lifecycle
– Local configuration of the LAN switches/routers at BonFIRE sites for traffic routing

Slide 11 – AutoBAHN & BonFIRE
– User Access Point (UAP) interface: a SOAP interface exposed by the IDMs (Inter-Domain Managers) to provide access to BoD services
– BonFIRE acts as an AutoBAHN client to dynamically request BoD services between BonFIRE sites (see the sketch below)
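A minimal sketch of what such a client call could look like from Python using the zeep SOAP library; the WSDL location, operation names and message fields are placeholders and do not claim to match the actual AutoBAHN UAP contract.

```python
# Sketch of how a BoD reservation request could be issued against a SOAP
# User Access Point. The WSDL location, operation names and message fields
# are placeholders, not the actual AutoBAHN UAP contract.
from datetime import datetime, timedelta
import zeep

WSDL = "https://idm.example.org/uap?wsdl"      # hypothetical IDM/UAP endpoint
client = zeep.Client(wsdl=WSDL)

start = datetime.utcnow() + timedelta(minutes=5)
end = start + timedelta(hours=2)

# Hypothetical operation: submit a point-to-point bandwidth reservation
reservation_id = client.service.submitReservation(
    sourcePort="epcc.bonfire.port1",           # assumed port identifiers
    destinationPort="psnc.bonfire.port1",
    capacityMbps=500,
    startTime=start.isoformat(),
    endTime=end.isoformat(),
)

# Hypothetical operation: poll the reservation state
print(client.service.getReservationStatus(reservation_id))
```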

Slide 12 – BoD architecture extensions
[Diagram: the BonFIRE architecture extended towards a controlled (BoD) network]
– Cloud-to-net interface plugin (client side)
– Cloud-to-network adaptor exposing the cloud-to-net interface (network side) towards the controlled network
– BoD site link as a new type of OCCI resource (see the sketch below)
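To illustrate the "site link as an OCCI resource" idea, here is a hypothetical request for such a resource through the broker; the kind scheme, term, path and attribute names are invented for illustration, and the real BonFIRE extension may define them differently.

```python
# Hypothetical rendering of the "BoD site link" as a new OCCI resource kind.
# The scheme, term, path and attribute names are invented for illustration.
import requests

BROKER = "https://broker.example.org"          # hypothetical BonFIRE broker URL
AUTH = ("experimenter", "secret")

headers = {
    "Content-Type": "text/occi",
    "Category": 'site_link; scheme="http://api.bonfire.example/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": (
        'occi.core.title="exp-42-bod-link", '
        'bonfire.sitelink.source="uk-epcc", '
        'bonfire.sitelink.target="pl-psnc", '
        'bonfire.sitelink.bandwidth=500, '                     # Mbit/s
        'bonfire.sitelink.starttime="2012-09-01T10:00:00Z", '
        'bonfire.sitelink.endtime="2012-09-01T12:00:00Z"'
    ),
}

resp = requests.post(f"{BROKER}/site_links/", headers=headers, auth=AUTH, timeout=30)
resp.raise_for_status()
print("BoD site link created:", resp.headers.get("Location"))
```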

Slide 13 – BoD adaptor key functions
Interaction with AutoBAHN:
– Request BoD service reservations
– Monitor service status
CPE control:
– Configure egress routers/switches at the BonFIRE sites for traffic routing

Slide 14 – Cloud-to-Network interface: development/deployment plans
Milestones: first BoD client prototype → early deployment → consolidated prototype → full deployment
– Aug 2012 (first BoD client prototype):
  – BoD adaptor prototype integrated into the BonFIRE Enactor
  – Final specification of the BonFIRE cloud-to-network interface
– Sept 2012 (early deployment) in the EPCC and PSNC sites:
  – BonFIRE selected as a use case for the GÉANT BoD service pilot
  – Ready for Open-Call-2 network-aware experiments
– Dec 2012 (consolidated prototype):
  – Final prototype fully integrated into the BonFIRE architecture
  – Final design of the BonFIRE external interconnection
– In 2013 (full deployment): extend deployment to other BonFIRE sites, possibly INRIA and IBBT

Slide 15 – Cloud-to-network interface in action: experiments from the 2nd Open Call
Two topics in the call (deadline 7 March 2012):
– A) Experiments in the scenarios (500 K€)
– B) Cloud provider (100 K€)
619 K€ of EC funding for:
– 4 new experiments
– 1 cloud provider (multi-cloud facility)
28 project proposals received:
– 22 proposals on Theme A
– 7 proposals shortlisted, 4 of which (57%) mention Scenario 3 as of high interest for their research
– Tentative start date: September 2012

Slide 16 – Future steps: mix BoD + FEDERICA resources for BonFIRE experiments
– The EPCC and PSNC sites are also connected to FEDERICA PoPs
  – Access to resources from four FEDERICA PoPs (PSNC, DFN, CESNET, GARR)
  – BonFIRE is implementing an SFA adaptor to request FEDERICA slices (see the sketch below)
– Nice to have: provide experimenters with mixed virtual + physical network infrastructures
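For illustration, a sketch of how an SFA adaptor might request a slice from an aggregate manager; it follows the GENI/SFA AM API v2 call names (GetVersion, ListResources, CreateSliver), while the endpoint, certificate paths, credentials and RSpec files are placeholders and the FEDERICA adaptor may differ.

```python
# Sketch of requesting a FEDERICA slice through an SFA-style aggregate manager.
# Endpoint URL, certificate paths, credentials and RSpec files are placeholders;
# the call names follow the GENI/SFA AM API v2 conventions.
import ssl
import xmlrpc.client

AM_URL = "https://sfa.federica.example:12346/"     # hypothetical aggregate manager
ctx = ssl.create_default_context()
ctx.load_cert_chain("experimenter-cert.pem", "experimenter-key.pem")  # assumed paths

am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)

credentials = [open("slice-credential.xml").read()]  # assumed SFA slice credential
print(am.GetVersion())

# Advertise available resources, then request a sliver from a request RSpec
ads = am.ListResources(credentials, {"geni_rspec_version": {"type": "SFA", "version": "1"}})
print("Advertisement RSpec length:", len(ads))

rspec = open("federica-request.rspec").read()        # assumed request RSpec
manifest = am.CreateSliver("urn:publicid:IDN+bonfire+slice+exp42",
                           credentials, rspec, [], {})
print("Manifest RSpec (truncated):", manifest[:200])
```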

Slide 17 – Thank you
Acknowledgments: G. Landi (Nextworks), M. Giertych / B. Belter (PSNC), K. Kavoussanakis / A. Hume (Univ. Edinburgh – EPCC), J. Jofre, C. Velayos (i2CAT)