Co-ordination & Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing – Research Infrastructures – Proposal n. 306819 – A Standard-based Science Gateway Framework to Seamlessly Access HPC, Grid and Cloud Resources Distributed Worldwide.


Co-ordination & Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing – Research Infrastructures – Proposal n. 306819
A Standard-based Science Gateway Framework to Seamlessly Access HPC, Grid and Cloud Resources Distributed Worldwide
Big Data, Big Network Workshop – 10 October 2013
Roberto Barbera and Diego Scardaci – INFN Catania – Italy {roberto.barbera,

Outline
 Introductory concepts and driving considerations
 Vision and use cases
 Current results
 Activities in Latin America
 Summary and outlook

Evolution of distributed computing
[Timeline figure: Mainframe Computing (80’s–90’s) → Cluster Computing (90’s–00’s) → Grid Computing (00’s–10’s) → Cloud Computing; over time, the cost of hardware and of networks decreases while CPU power and WAN bandwidth increase]

Research Networks at “global” scale

The “Global” Grid

The “non-global” middleware (e.g. Genesis II): interoperability and ease of access are issues

The Cloud “sky” is no less… “cloudy”

The CHAIN-REDS Project
 Started: 1 Dec 2012
 Duration: 30 months
 Targeted regions: Africa, Arab Region, Latin America, China, India, and Far-East Asia

Interoperability and interoperation (source: Wikipedia)
 According to ISO/IEC (Information Technology Vocabulary, Fundamental Terms), interoperability is “The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units”
 In engineering, interoperation is the setup of ad hoc components and methods to make two or more systems work together as a combined system
Adoption of standards is key

Vision
 A scientist can seamlessly run applications on HPC machines, Grids and Clouds based on different middleware (to demonstrate interoperability → use case #1)
 The cloud tenant of a real or virtual organisation can seamlessly and easily manage Cloud resources pledged by providers owning infrastructures based on different Cloud middleware stacks (to demonstrate interoperation → use case #2)

CHAIN-REDS Demo contributors

Use case #1 (scientist)
 A user can sign in to a Science Gateway using his/her federated credentials, select an application from a menu and seamlessly execute it on HPC machines, Grids and Clouds
 The fractions of executions on the three different platforms can be adjusted to simulate the need to “boost” the resources in case of temporary peaks of activity
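The adjustable fractions above amount to a weighted dispatch over the three platforms. A minimal sketch in Python, assuming illustrative platform names and fractions (not the gateway’s actual configuration):

```python
import random

# Illustrative split of executions across platforms; the values here are
# assumptions for the sketch, not the project's real configuration.
PLATFORM_FRACTIONS = {"HPC": 0.2, "Grid": 0.5, "Cloud": 0.3}

def pick_platform(fractions, rng=random.random):
    """Pick a target platform with probability proportional to its fraction."""
    threshold, roll = 0.0, rng()
    for platform, fraction in fractions.items():
        threshold += fraction
        if roll < threshold:
            return platform
    return platform  # guard against floating-point rounding at the top edge

# Repeated draws approximate the configured split.
counts = {p: 0 for p in PLATFORM_FRACTIONS}
for _ in range(10_000):
    counts[pick_platform(PLATFORM_FRACTIONS)] += 1
print(counts)
```

Raising the Cloud fraction at run time is then enough to simulate “boosting” resources during a temporary peak of activity.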

A reference infrastructure: the EGI Federated Cloud (use case #1 and use case #2)

Catania Science Gateway Framework architecture
[Architecture figure: the Catania Science Gateway hosts applications (App. 1, App. 2), the MyCloud service and embedded services, used by administrators, scientists and cloud tenants belonging to Identity Federations; the Grid/Cloud Engine (based on SAGA) serves use case #1 towards HPC clusters and a single logical domain of resources, while the CLEVER Orchestrator (based on OCCI) serves use case #2 across Cloud #1…Cloud #n]

Official Identity (Inter-)Federations currently supported by Catania Science Gateways (VAMP Workshop 2013 – Helsinki, 30/9–1/10)

Other IdPs deployed in Latin America

The Catania Grid & Cloud Engine
[Architecture figure: Science GW 1–3 (Liferay portlets) talk to the Grid/Cloud Engine through the Science GW Interface; the Engine comprises the Job Engine and Data Engine built on the SAGA/JSAGA API, the Users Tracking DB with users tracking & monitoring, and an eToken Server, and connects to Grid/Cloud/local middlewares]
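The Engine acts as a single facade between gateway portlets and heterogeneous middleware, recording every submission in the Users Tracking DB. A hedged sketch of that role; class, method and field names are illustrative assumptions, not the actual Catania Grid & Cloud Engine API:

```python
# Sketch of the Engine-as-facade pattern: portlets call one interface,
# the Engine routes to a backend and tracks the submission.
# All names here are assumptions for illustration only.
class GridCloudEngine:
    def __init__(self, backends):
        # backends maps a middleware name to its connector (stubbed here)
        self.backends = backends
        self.tracking = []  # stands in for the Users Tracking DB

    def submit(self, user, application, backend):
        """Record the submission and return an engine-level job id."""
        if backend not in self.backends:
            raise ValueError(f"unknown backend: {backend}")
        job_id = f"{backend}-{len(self.tracking) + 1}"
        self.tracking.append({"user": user, "app": application, "job": job_id})
        return job_id

engine = GridCloudEngine(backends={"grid": None, "cloud": None, "hpc": None})
print(engine.submit("alice", "fractals-demo", "grid"))  # grid-1
```

The point of the design is that the portlet never sees which middleware ran the job: swapping or adding a backend leaves the gateway-facing interface unchanged.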

Running applications on the EGI Federated Cloud

Running applications on various types of e-Infrastructures

Use case #2 (cloud tenant)
 The cloud tenant of a real or virtual organisation can sign in to a Science Gateway using his/her federated credentials, select virtual machine(s) from a geographically shared repository and deploy/move/copy it/them across his/her personal cloud
 The graphical user interface will be very intuitive, including point & click and drag & drop functionalities
 The virtual machine(s) will belong to the same domain name (chain-project.eu in this particular case) independently of the site where it/they will be instantiated and of the underlying Cloud middleware stack
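Cross-cloud VM operations like these rely on OCCI as the common interface. As a hedged sketch, the HTTP headers a text/occi client could send to create a compute resource are built below; the scheme URI and attribute names follow the OCCI Infrastructure specification, while the function name, endpoint and values are illustrative assumptions:

```python
# Sketch of an OCCI "create compute" request rendering (text/occi headers).
# The Category scheme and occi.compute.* attributes come from the OCCI
# Infrastructure spec; everything else is an assumption for illustration.
def occi_create_compute(hostname, cores=1, memory_gb=2):
    """Build the headers for an OCCI compute-creation POST request."""
    return {
        "Content-Type": "text/occi",
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        "X-OCCI-Attribute": (f'occi.compute.hostname="{hostname}", '
                             f'occi.compute.cores={cores}, '
                             f'occi.compute.memory={memory_gb}'),
    }

hdrs = occi_create_compute("chain-demo-vm", cores=2)
print(hdrs["Category"])
```

Because every site speaks the same rendering, the same request can be POSTed to any OCCI-compliant endpoint regardless of the middleware stack behind it, which is what makes the deploy/move/copy operations portable.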

Scenario of use case #2
[Figure: Cloud 1…Cloud n, each running a different middleware stack (M/W 1’…M/W n’)]

Scenario of use case #2
[Figure: the same clouds, now federated under the MyCloud service]

Actual testbed configuration for use case #2
[Map figure: 8 clouds in 6 countries (IT, ES, EG, ZA, CZ, GR), running 3 middleware stacks and including 1 SME; EGI FedCloud sites are part of the testbed]

Current functionalities of the CHAIN-REDS Science Gateway:
 Federated authentication
 Fine-grained authorisation
 Single/multi-deployment of VMs on a cloud and across clouds
 Single/multi-move of VMs across clouds
 Single/multi-deletion of VMs on a cloud and across clouds
 SSH connection to VMs
 Direct web access to VMs hosting web services
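Federated authentication means the gateway consumes attributes released by the user’s home IdP, typically inside SAML assertions. A simplified sketch of pulling eduPersonPrincipalName out of a minimal, made-up (and unsigned) assertion fragment; a real deployment would first validate signatures and conditions:

```python
# Hedged sketch: extracting eduPersonPrincipalName (OID
# 1.3.6.1.4.1.5923.1.1.1.6) from a simplified SAML 2.0 assertion.
# The XML below is a fabricated minimal fragment, not a real IdP response.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
EPPN_OID = "urn:oid:1.3.6.1.4.1.5923.1.1.1.6"

ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:AttributeStatement>
    <saml:Attribute Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.6">
      <saml:AttributeValue>alice@example.org</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

def principal_name(assertion_xml):
    """Return the eduPersonPrincipalName value, or None if absent."""
    root = ET.fromstring(assertion_xml)
    for attr in root.iterfind(".//saml:Attribute", SAML_NS):
        if attr.get("Name") == EPPN_OID:
            return attr.findtext("saml:AttributeValue", namespaces=SAML_NS)
    return None

print(principal_name(ASSERTION))  # alice@example.org
```

The gateway can then map this federated identity onto its fine-grained authorisation rules without ever handling the user’s password.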

Science Gateways in Latin America: the GISELA Science Gateway

El Servicio de Computación Avanzada para América Latina y el Caribe (SCALAC) – the Advanced Computing Service for Latin America and the Caribbean
 As part of the SCALAC model, the GISELA SG has been re-installed and updated at UNAM (Mexico) and renamed the SCALAC SG
 The SCALAC SG is being relocated to:
 As part of the SCALAC commitments, CUDI (the Mexican NREN) is financing two pilot projects for new services: access to HPC and to Cloud infrastructures via the SCALAC Science Gateway
 The SCALAC Science Gateway will be operational by the end of 2013
 The SCALAC SG will be connected to the “Identity Provider of the Mexican Advanced Computing Services for e-Science” and to all other IdPs being established in Latin America in the context of the ELCIRA project

Summary and outlook
 Standard-based interoperability should be enabled not only across middleware but, more importantly, across computing paradigms (Grid, HPC, Cloud, local clusters, desktops, etc.) in order to exploit big networks as much as possible
 Catania Science Gateways successfully bridge e-Infrastructures developed according to different models and architectures, and make them interoperable at the user-application level thanks to the adoption of standards (SAGA, SAML, OCCI, CDMI, JSR 286, etc.)
 The MyCloud service allows seamless multi-cloud service operation across different OCCI-compliant middleware stacks on many sites worldwide
 Next steps are:
 Creation of the shared storage infrastructure to support stateful VMs
 Allowing deployed VMs to “find themselves” in MyCloud
 Fostering the deployment of cloud infrastructures in the regions addressed by CHAIN-REDS to widen the testbed both in size and geographic coverage
 Promotion of the EGI FedCloud model and possible extension of its infrastructure to other regions in order to support global VRCs

Co-ordination & Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing – Research Infrastructures – Proposal n. 306819 – Questions?