
1 D. Scardaci (diego.scardaci@ct.infn.it) INFN Catania
Services for virtual communities developed at Catania and their relevance for IGI. D. Scardaci, INFN Catania

2 Outline
The Catania Science Gateway framework: requirements, architecture, AuthN & AuthZ
The Catania Grid Engine: Job Engine, Data Engine
Examples of already supported VRCs/apps
The GILDA training material on Science Gateways
Conclusions

3 Primary requirement: building Science Gateways should be like playing with
Standards, Simplicity, Easiness of use, Re-usability [figure: Science Gateways A to E built from these blocks]

4 Our reference model
[figure] Embedded applications (App. 1 ... App. N) run inside the Science Gateway on top of a standard-based Grid Engine, which talks to the grid and other middleware; users from different organisations have different roles and privileges (Administrator, Power User, Basic User).

5 Standards adopted
The framework for Science Gateways developed at Catania is fully web-based and adopts official worldwide standards and protocols through their most common implementations. These are:
The JSR 168 and JSR 286 standards (also known as the "portlet 1.0" and "portlet 2.0" standards);
The OASIS Security Assertion Markup Language (SAML) standard and its Shibboleth and SimpleSAMLphp implementations;
The Lightweight Directory Access Protocol (LDAP) and its OpenLDAP implementation;
The Cryptographic Token Interface Standard (PKCS#11) and its Cryptoki implementation;
The Open Grid Forum (OGF) Simple API for Grid Applications (SAGA) standard and its JSAGA implementation.

6 AuthN & AuthZ schema
[figure] 1. The user registers to a service on the Science Gateway; 2. signs in through an Identity Provider: the GrIDP "catch-all" federation, the Social Networks' Bridge IdP, the IDPCT "catch-all" IdP, or another federated IdP (IDP_y). Authentication is federated; authorisation is handled against LDAP.

7 The Grid IDentity Pool (GrIDP) (http://gridp.ct.infn.it)
This is a “catch-all” Identity Federation

8 The Authentication Procedure
[figure: the Identity Federations' discovery service and the «catch-all» Identity Provider]

9 The Social Networks’ Bridge IdP (https://idpsocial.ct.infn.it)
[screenshot: the Identity Federations' discovery service] For more information, watch the video.

10 eduGAIN (www.edugain.org)
All the Science Gateways developed at Catania are Service Providers of the eduGAIN inter-federation! 10

11 The Catania Grid Engine
[architecture figure, 11/01/2012] Liferay portlets in the Science Gateways (Science GW 1-3) call the Grid Engine through the Science GW Interface; the Grid Engine comprises the Job Engine, the Data Engine and the Users Tracking & Monitoring module (backed by the Users Tracking DB), obtains robot proxies from the eToken Server, and reaches the grid middleware through the JSAGA API.

12 Job Engine
The Job Engine is a set of libraries for developing applications able to submit and manage jobs on a grid infrastructure;
It is compliant with the OGF SAGA standard; JSAGA is the SAGA implementation adopted;
It is optimized for use in a web portal running a J2EE application server (e.g., GlassFish, Tomcat, ...);
It can also be used in stand-alone mode.

13 Job Engine - Architecture
[figure] Submission requests are placed in a jobs queue and served by a pool of worker threads (WT) for job submission towards the grid infrastructure; a separate pool of worker threads in the monitoring module checks job status and retrieves output, recording everything in the Users Tracking DB.

14 A Simple API for Grid Applications (SAGA)
SAGA is an API that provides the basic functionality required to build distributed applications, tools and frameworks;
It is independent of the details of the underlying infrastructure (e.g., the middleware);
SAGA is an OGF specification, and several implementations are available:
A C++ and a Java implementation developed at Louisiana State University / CCT and Vrije Universiteit Amsterdam;
A Java implementation developed at CCIN2P3;
A Python implementation based on those above.

15 A Simple API for Grid Applications (SAGA)
SAGA is composed of:
SAGA Core Libraries, containing the SAGA base system, the runtime and the API packages (file management, job management, etc.);
SAGA Adaptors, libraries providing access to the underlying grid infrastructure (adaptors are available for Globus, gLite, etc.).
SAGA defines a standard; we then need an implementation!

16 JSAGA
JSAGA is a Java implementation of SAGA developed at CCIN2P3;
It enables uniform data and job management across different grid infrastructures/middleware;
It makes extensions easy: the adaptor interfaces are designed to minimize the coding effort needed to integrate support for new technologies/middleware;
It is OS independent: most of the provided adaptors are written in pure Java and are tested on both Windows and Linux.
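The adaptor-based extensibility can be pictured with a toy registry. This is an illustrative sketch in plain Java, not the real JSAGA service-provider interface; every name in it (`AdaptorRegistry`, `JobAdaptor`, the scheme strings) is made up:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, NOT the real JSAGA adaptor SPI: a registry of
// per-middleware job adaptors, in the spirit of JSAGA's pluggable design.
public class AdaptorRegistry {

    // Hypothetical adaptor contract: one submit operation per middleware.
    public interface JobAdaptor {
        String submit(String executable);
    }

    private final Map<String, JobAdaptor> adaptors = new HashMap<>();

    // Supporting a new middleware is just one registration call;
    // the core engine needs no changes.
    public void register(String scheme, JobAdaptor adaptor) {
        adaptors.put(scheme, adaptor);
    }

    public String submit(String scheme, String executable) {
        JobAdaptor a = adaptors.get(scheme);
        if (a == null) {
            throw new IllegalArgumentException("No adaptor for scheme: " + scheme);
        }
        return a.submit(executable);
    }

    public static void main(String[] args) {
        AdaptorRegistry registry = new AdaptorRegistry();
        registry.register("glite", exe -> "glite-job:" + exe);
        registry.register("unicore", exe -> "unicore-job:" + exe);
        System.out.println(registry.submit("glite", "/bin/hostname"));
    }
}
```

The point of the design is that the engine core never mentions a concrete middleware; only adaptors do.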

17 Job Engine - Requirements
The Job Engine has been designed with the following requirements in mind:
Feature | Description | Status
Middleware independent | Capacity to submit jobs towards resources running different middleware | DONE
Easiness | Create code to run applications on the grid in a very short time | DONE
Scalability | Manage a huge number of parallel job submissions, fully exploiting the HW of the machine where the Job Engine is installed | DONE
Performance | Have a good response time | DONE
Accounting | Register every grid operation performed by the users | DONE
Fault tolerance | Hide middleware failures from the final users | ALMOST DONE
Workflow | Provide a way to easily create workflows | TO DO

18 Job Engine - Middleware independent
JSAGA supports gLite, Globus, ARC, UNICORE, etc.; adding new adaptors to JSAGA is an easy job.

19 Job Engine - Easiness
Allows developers to build applications that submit jobs to the grid in a very short time;
A very intuitive API is exposed to the developers;
MPI applications are supported;
The developer only has to submit the job:
The Job Engine periodically checks the job status;
When the job is done, its output is automatically downloaded by the Job Engine to the local machine.

20 Job Engine - Scalability (1/2)
The Job Engine is able to manage a huge number of parallel job submissions, fully exploiting the HW of the machine where it is installed;
It enqueues all the parallel requests received, serving them according to the HW capabilities;
The Job Engine thread pools can be configured to optimally exploit the HW capabilities;
A burst of parallel job submissions cannot harm the Job Engine's responsiveness, thanks to the protection provided by the thread-pool mechanism.
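The queue-plus-pool protection described above can be sketched with the JDK's `ExecutorService`; the pool and burst sizes below are arbitrary, not the Job Engine's actual configuration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the thread-pool protection described above; sizes are
// arbitrary, not the Job Engine's actual configuration.
public class SubmissionPool {

    private final ExecutorService pool;

    public SubmissionPool(int workers) {
        // A fixed pool: at most `workers` submissions run concurrently,
        // the rest wait in the pool's internal queue.
        this.pool = Executors.newFixedThreadPool(workers);
    }

    // Enqueue a "submission"; it runs as soon as a worker thread is free.
    public Future<String> enqueue(String jobName) {
        return pool.submit(() -> "submitted:" + jobName);
    }

    public void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        SubmissionPool p = new SubmissionPool(4);       // 4 workers serve...
        Future<?>[] results = new Future<?>[100];       // ...a burst of 100 requests
        for (int i = 0; i < 100; i++) {
            results[i] = p.enqueue("job-" + i);
        }
        for (Future<?> f : results) {
            f.get();                                    // every request completes
        }
        p.shutdown();
        System.out.println("burst absorbed without spawning 100 threads");
    }
}
```

Because the pool size bounds concurrency, a burst only lengthens the queue; it never exhausts the host's threads or memory.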

21 Job Engine - Scalability (2/2)
Response time grows linearly with the number of parallel requests;
Response time depends on the HW capabilities and the thread-pool configuration.

22 Job Engine - Performance (1/2)
All the delays due to grid interactions are hidden from the final users:
The Job Engine provides asynchronous functions for each job-management action (submit, check status, download output, cancel);
Final users "feel" a response time equal to zero.
The Job Engine is able to submit thousands of jobs in a short time:
Jobs are submitted using a configurable thread pool;
With 50 threads in the thread pool, the jobs are submitted in less than 1 hour;
This measurement could be improved by increasing the number of threads in the thread pool.
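The asynchronous pattern behind the "response time equal to zero" claim can be illustrated with `CompletableFuture`; `submitAsync` and the simulated latency are invented for this sketch, not the Job Engine API:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the asynchronous pattern described above: the user-facing
// call returns immediately while the slow grid interaction completes
// in the background.
public class AsyncSubmit {

    // Returns at once; the middleware round-trip happens on another thread.
    public static CompletableFuture<String> submitAsync(String jobName) {
        return CompletableFuture.supplyAsync(() -> {
            sleepMillis(50);               // stands in for real middleware latency
            return "done:" + jobName;
        });
    }

    private static void sleepMillis(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        CompletableFuture<String> f = submitAsync("job-1");
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("user-perceived latency ~" + elapsedMs + " ms");
        System.out.println(f.join());      // the result is collected later
    }
}
```

The portal thread returns to the user immediately; the future is consulted later when the status portlet refreshes.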

23 Job Engine – Performance (2/2)
23

24 Job Engine - Accounting
A very powerful accounting system is included in the Job Engine;
It is fully compliant with the EGI VO Portal Policy and the EGI Grid Security Traceability and Logging Policy;
The following values are stored in the DB for each job submitted:
User; job-submission timestamp; job-done timestamp; application submitted; job ID; proxy used; VO; site where the job is running (CE for gLite).
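A minimal sketch of one accounting row, mirroring the fields listed above; the class layout is our own and the sample values are invented, not taken from the real Users Tracking DB schema:

```java
import java.time.Instant;

// Sketch of one accounting row mirroring the fields listed above;
// the layout is illustrative, not the real Users Tracking DB schema.
public class JobAccountingRecord {

    public final String user, application, jobId, proxy, vo, site;
    public final Instant submitted, done;

    public JobAccountingRecord(String user, Instant submitted, Instant done,
                               String application, String jobId, String proxy,
                               String vo, String site) {
        this.user = user;
        this.submitted = submitted;
        this.done = done;
        this.application = application;
        this.jobId = jobId;
        this.proxy = proxy;
        this.vo = vo;
        this.site = site;
    }

    // Convenience factory taking ISO-8601 timestamps.
    public static JobAccountingRecord of(String user, String submittedIso, String doneIso,
                                         String application, String jobId, String proxy,
                                         String vo, String site) {
        return new JobAccountingRecord(user, Instant.parse(submittedIso),
                Instant.parse(doneIso), application, jobId, proxy, vo, site);
    }

    // One pipe-separated line per job, in the order listed on the slide.
    public String toLogLine() {
        return String.join("|", user, submitted.toString(), done.toString(),
                application, jobId, proxy, vo, site);
    }

    public static void main(String[] args) {
        JobAccountingRecord r = of("alice", "2012-01-11T10:00:00Z", "2012-01-11T10:12:00Z",
                "GATE", "job-123", "robot-proxy-42", "vo.example.org", "ce-01.example.org");
        System.out.println(r.toLogLine());
    }
}
```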

25 Job Engine - Fault tolerance
The Job Engine implements an advanced mechanism to guarantee job submission:
Developers can set an appropriate "shallow retry" value;
Developers can specify a top BDII: the Job Engine retrieves all the WMSs registered in the BDII for a given VO and randomly selects one to submit the job;
Developers can specify a list of WMSs: the Job Engine randomly selects a WMS from that list to submit the job;
The retry-count mechanism is supported.
TO DO: add an automatic re-submission mechanism when the retry-count threshold is reached (hiding every failure from the final users).
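The random WMS selection with silent retry might look like the following sketch; the `Submitter` interface, the endpoint names and the attempt bound are assumptions, and the real shallow-retry/retry-count handling is richer than this:

```java
import java.util.List;
import java.util.Random;

// Sketch of the WMS-selection strategy described above: pick a random
// WMS from a configured list and silently retry on another one upon
// failure. All names and bounds here are illustrative.
public class WmsSelector {

    public interface Submitter {
        String submit(String wms) throws Exception;
    }

    public static String submitWithFailover(List<String> wmsList, Submitter submitter,
                                            Random rnd, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String wms = wmsList.get(rnd.nextInt(wmsList.size()));
            try {
                return submitter.submit(wms);   // success: the job landed here
            } catch (Exception e) {
                last = e;                       // failure is hidden; try another WMS
            }
        }
        throw last;                             // all attempts exhausted
    }

    // Demo: one WMS is down, the other accepts the job.
    public static String demo() throws Exception {
        List<String> wms = List.of("wms-1.example.org", "wms-2.example.org");
        return submitWithFailover(wms, w -> {
            if (w.startsWith("wms-1")) {
                throw new Exception("WMS down");
            }
            return "job-on-" + w;
        }, new Random(), 100);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```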

26 Job Engine - Workflow
Currently the Job Engine doesn't provide any special features to create workflows;
Application developers have to create workflows using the standard Job Engine API;
We are examining several workflow engines that could be integrated into the Job Engine;
Our idea is to provide in the next months:
An API allowing developers to define a workflow;
A graphical tool able to generate workflows compliant with that API.

27 The Catania Grid Engine
[architecture figure, repeated from slide 11] Liferay portlets in the Science Gateways (Science GW 1-3) call the Grid Engine through the Science GW Interface; the Grid Engine comprises the Job Engine, the Data Engine and the Users Tracking & Monitoring module (backed by the Users Tracking DB), obtains robot proxies from the eToken Server, and reaches the grid middleware through the JSAGA API.

28 Data Engine - Requirements
Grid storage complexity is hidden from the end users;
Users just move files from/to the Storage Elements through the Science Gateway and see the grid as an external "cloudy" storage accessible through a web interface;
The storage service is accessible only via grid credentials (robot certificates) provided to users with federated e-identities;
The underlying architecture exposes a file-system-like view (i.e., a Virtual File System, VFS) and users can perform the following actions:
Creating, moving and deleting files/directories with the desired structure;
Sharing files with other users (like Dropbox);
Setting the desired number of backup copies (replicas).
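The bookkeeping behind such a file-system-like view can be sketched as a small in-memory model. This illustrates only the user-visible actions (create/move/delete, share, set replicas), not the actual Hibernate-backed, grid-connected VFS:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Minimal in-memory sketch of the VFS bookkeeping described above.
// It models only the user-visible actions, not the grid storage itself.
public class VirtualFileSystem {

    private static class Entry {
        int replicas = 1;                          // desired number of backup copies
        final Set<String> sharedWith = new HashSet<>();
    }

    private final Map<String, Entry> entries = new TreeMap<>();

    public void create(String path)               { entries.put(path, new Entry()); }
    public void move(String from, String to)      { entries.put(to, entries.remove(from)); }
    public void delete(String path)               { entries.remove(path); }
    public void share(String path, String user)   { entries.get(path).sharedWith.add(user); }
    public void setReplicas(String path, int n)   { entries.get(path).replicas = n; }

    public boolean exists(String path)            { return entries.containsKey(path); }
    public int replicas(String path)              { return entries.get(path).replicas; }
    public boolean isSharedWith(String path, String user) {
        return entries.get(path).sharedWith.contains(user);
    }
}
```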

29 Technology: Back-end
The JSAGA API is used to transfer data from/to the Storage Elements;
Hibernate manages the VFS, collecting information on the files stored on the grid;
Any change/action in the user view affects the VFS;
The underlying DB is MySQL;
An additional API has been developed to keep track of every grid transaction in the User Tracking DB (to be compliant with the EGI Portal and Traceability Policies).

30 Technology: Front-end
A portlet deployed in a Liferay-based Science Gateway, accessible only to federated users with the right roles;
The portlet view component includes elFinder, a web-based file manager written in JavaScript using jQuery UI for a dynamic and user-friendly interface.

31 elFinder at work 31

32 Data Engine - Upload
[figure] 1. The user signs in; 2. issues an upload request; 3. the Science Gateway requests a proxy from the eTokenServer; 4. the proxy is transferred; 5. the file is uploaded; 6. the DOGS DB is updated; 7. the file is uploaded to the grid SEs and the operation is tracked in the User Tracking DB.

33 Data On Grid Services - Database
File sharing is managed by the RelativePath table, which identifies both the science gateway and the user.

34 Current status of the Data Engine
Back-end | Status
Upload | Complete
Download | Complete
VFS management | Partial ("delete" operation missing)
Sharing | To be done
Front-end | Status
Portlet | Complete (with a test user interface)
elFinder integration | To be done (by the end of February)

35 PROs & CONs
PROs:
File transfer with 100% browser-friendly protocols;
Using the SAGA/JSAGA standard makes the Data Engine middleware independent;
User-friendly interface;
All common file actions provided, including file sharing among users.
CONs:
All transfers go through the Science Gateway:
Single point of failure;
Network bottleneck;
Limited space for huge files.

36 New Goals & Requirements
Goals:
Achieve direct downloads/uploads from/to the Storage Elements;
No certificates on the client side;
No caching on intermediary servers.
Requirements:
Client: any web browser, and only a web browser;
Storage Elements providing a protocol supported by our clients (i.e., HTTP and HTTPS);
Our implementation is based and tested on DPM SEs;
In principle it should work on any SE supporting HTTP/HTTPS (dCache and StoRM).

37 DPM HTTP/HTTPS interface
Developed at CERN and available out-of-the-box in gLite; enabled by a simple flag in site-info.def;
Handled by an Apache CGI script running on the DPM head node, listening on ports 443 and 883;
Role of the CGI script (redirector):
Redirects requests to the DPM disk server that stores the requested DPM file;
Enforces authentication and authorization by executing credential mapping (X.509 certs, Globus and VOMS proxies are supported via the mod_gridsite Apache module);
Authorization on the DPM disks is verified through a short-lived signature appended by the CGI redirector on the DPM head node.
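The short-lived-signature idea can be sketched with an HMAC over (path, client IP, expiry): the head node signs, the disk node verifies. The token format and key handling below are our assumptions for illustration, not DPM's actual scheme:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a short-lived signed redirect in the spirit of the DPM
// redirector described above. Token format and key handling are
// assumptions, not DPM's actual scheme.
public class SignedTurl {

    // Head-node side: produce "path|ip|expiry|signature".
    public static String sign(String path, String clientIp, long expiresAtMillis,
                              byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        String payload = path + "|" + clientIp + "|" + expiresAtMillis;
        String sig = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        return payload + "|" + sig;
    }

    // Disk-node side: recompute the signature, check client IP and deadline.
    public static boolean verify(String token, String clientIp, long nowMillis,
                                 byte[] key) throws Exception {
        String[] parts = token.split("\\|");
        if (parts.length != 4) {
            return false;
        }
        long expires = Long.parseLong(parts[2]);
        String expected = sign(parts[0], parts[1], expires, key);
        return token.equals(expected) && parts[1].equals(clientIp) && nowMillis <= expires;
    }
}
```

Binding the signature to the client IP and a deadline is what makes the redirected URL useless to anyone else, and worthless after a few seconds.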

38 DPM HTTP/HTTPS interface
DPM head node: Virtual host (port 443): main entrance for web access; Virtual host (port 883): same as above but redirects to HTTPS transport; DPM disk node: Virtual host (port 777): transport endpoint for HTTP web file access with redirector authorization; Virtual host (port 884): same as above using HTTPS. 38

39 Example SURL: TURL (returned by the DPM Head Node): 39

40 Example
A SURL is requested from the DPM head node (port 443) via HTTPS, providing:
The DPNS path of the requested file;
The client certificate (an X.509 cert loaded in the client browser).
The DPM CGI redirector, after credential verification and mapping, returns to the client a no-cert-required HTTP TURL (or an HTTPS TURL on port 884, if the request to the DPM head node was made through port 883);
The TURL is valid only for a couple of seconds and only for the requesting IP address;
The client follows the redirect (HTTP 302/303) and retrieves the file directly from the DPM disk.

41 gLibrary REST APIs
We cannot use the DPM web interface as it is, because it requires a certificate on the client side, and users of Science Gateways do not need to own a certificate (SG authentication is based on Shibboleth, i.e., a username/password pair);
The gLibrary REST API is a Shibboleth-enabled service that forwards client download/upload requests to the SE DPM HTTPS interface, using robot certificates;
It returns the same short-lived TURLs to the client which, in turn, can download/upload a file directly from/to the DPM disk server that holds it.

42 gLibrary DM REST APIs
Download: uses the HTTP GET verb;
Upload: the API returns a JSON object with a "post_url" attribute; the client submits the file to the "post_url" of the destination DPM SE using the HTTP POST verb.
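On the client side, the upload flow boils down to reading `post_url` from the JSON reply and POSTing the file there. The toy extractor below uses a regex instead of a real JSON parser, and the URL in the example is made up:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the client side of the upload flow described above: read
// the "post_url" attribute from the JSON reply, then POST the file
// there. A real client would use an HTTP library and a JSON parser.
public class PostUrlClient {

    private static final Pattern POST_URL =
            Pattern.compile("\"post_url\"\\s*:\\s*\"([^\"]+)\"");

    public static String extractPostUrl(String jsonReply) {
        Matcher m = POST_URL.matcher(jsonReply);
        if (!m.find()) {
            throw new IllegalArgumentException("no post_url in reply");
        }
        return m.group(1);
    }

    public static void main(String[] args) {
        String reply = "{\"post_url\": \"https://dpm-disk.example.org:884/upload/abc\"}";
        // The file would then be sent to this URL with an HTTP POST.
        System.out.println("POST file to: " + extractPostUrl(reply));
    }
}
```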

43 Download workflow
[figure] 1. The user authenticates through GrIDP ("catch-all") or the Social Networks' Bridge IdP; 2. the download request reaches the Science Gateway; 3. authorization; 4. the Science Gateway requests a proxy from the eToken Service (myproxy.ct.infn.it); 5. the download request, with proxy and client IP, goes through the gLibrary REST API (glibrary.ct.infn.it) to the DPM head node; 6. a TURL is returned; 7. the TURL is forwarded to the client; 8. the client starts a direct download from the DPM disk servers via HTTP(S) GET.

44 Upload workflow
[figure] 1. The user authenticates through GrIDP ("catch-all") or the Social Networks' Bridge IdP; 2. the upload request reaches the Science Gateway; 3. authorization; 4. the Science Gateway requests a proxy from the eToken Service (myproxy.ct.infn.it); 5. the upload request, with proxy and client IP, goes through the gLibrary REST API (glibrary.ct.infn.it) to the DPM head node; 6. a POST URL is returned; 7. the POST URL is forwarded to the client; 8. the client starts a direct upload to the DPM disk servers via HTTP(S) POST to the POST URL.

45 eTokenServer Main Features
Deployed on the Tomcat application server (ver );
Thread-safe access to the list of smart cards, based on a singleton;
Server performance evaluated with Apache JMeter: ~6-8 s waiting time for a new proxy, 20 ms if the proxy is cached!
SSL encryption using a trusted host certificate;
Caching of proxy certificates for each valid requestID (serial+vo+fqan+vomsproxy):
If lifetime(requestID) > 3 h, the cached proxy is sent back to the Science Gateway; otherwise a new one is created.
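The caching rule can be sketched as a map from requestID to a proxy plus its expiry; the proxy strings and the assumed 12-hour proxy lifetime below are invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proxy-caching rule described above: a proxy is cached
// per requestID (serial+vo+fqan+vomsproxy) and reused while more than
// 3 hours of lifetime remain; otherwise a new one is created.
public class ProxyCache {

    static final long MIN_LIFETIME_MS = 3L * 60 * 60 * 1000;     // 3 h threshold
    static final long PROXY_LIFETIME_MS = 12L * 60 * 60 * 1000;  // assumed lifetime

    private static class CachedProxy {
        final String pem;
        final long expiresAtMillis;
        CachedProxy(String pem, long expiresAtMillis) {
            this.pem = pem;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, CachedProxy> cache = new HashMap<>();
    private int created = 0;

    public String getProxy(String requestId, long nowMillis) {
        CachedProxy c = cache.get(requestId);
        if (c != null && c.expiresAtMillis - nowMillis > MIN_LIFETIME_MS) {
            return c.pem;                      // fast path (~20 ms on the slide)
        }
        created++;                             // slow path (~6-8 s): mint a new proxy
        CachedProxy fresh = new CachedProxy("proxy-" + created, nowMillis + PROXY_LIFETIME_MS);
        cache.put(requestId, fresh);
        return fresh.pem;
    }

    public int proxiesCreated() {
        return created;
    }
}
```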

46 The working scenario
[figure; all connections use SSL encryption] The Science Gateway asks the eTokenServer for a service list, creates a request and executes the service, then gets the results; the eTokenServer retrieves serials/proxies from the MyProxy Server (where the long proxy is stored) and asks the VOMS Server for VOMS AC attributes.

47 Third DM approach: the DECIDE Repo Manager
The gLibrary DM APIs are a subset of the complete API provided by gLibrary:
Download/upload APIs to DPM Storage Elements;
LFC registration, replication, deletion;
Metadata management (via AMGA);
Encryption/decryption API (via the Secure Storage System).
In DECIDE we don't use the Upload APIs but the Encryption and Replication ones:
Data should not leave hospital boundaries un-encrypted;
We deployed the Encryption/Replication APIs on a local Keystore service.

48 DECIDE Repo Manager
[figure] User clients (Data Managers only), inside the hospital boundary: 1. retrieve the front-end (liferay.ct.infn.it); 2. upload & encryption via the upload service REST API (keystore.ct.infn.it); 3. proxy requests (myproxy.ct.infn.it); 4. upload; 5. replication to grid resources (infn-se-01.ct.pi2s2.it, gridsrv3-4.dir.garr.it); 6. metadata management; 7. proxy requests and LFN registration in the LFC (lfc-01.ct.trigrid.it); 8. metadata management via gLibrary and AMGA (glibrary.ct.infn.it, amga.ct.infn.it).

49 The INDICATE e-Culture Science Gateway
Based on the gLibrary REST APIs; uses the HTTPS interface of the Storage Elements.

50 The INDICATE e-Culture Science Gateway

51 Science Gateways in action: GATE @ EUMEDGRID
51

52 Science Gateways in action: MrBayes @ GISELA
52

53 Science Gateway Service Centric Model
GISELA - Second Project Review - Brussels - 8/12/2011

54 Science Gateways in action: GridEEG @ DECIDE
54

55 Science Gateways in action: DataManager @ DECIDE
Data are anonymised, encrypted and replicated 55

56 The Secure Storage Service
Provides gLite users with suitable and simple tools to store confidential data in Storage Elements in a transparent and secure way;
The service is composed of the following components:
CLI: commands integrated in the gLite User Interface to encrypt/upload and decrypt/download files;
API: allows developers to write programs able to manage confidential data;
Keystore: a new grid element used to store and retrieve the users' keys. It is identified by a host X.509 digital certificate, and all its grid transactions are mutually authenticated and encrypted according to the GSI model.
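The encrypt-before-upload idea can be sketched with the JDK's AES/GCM support: data is encrypted locally so only ciphertext reaches the Storage Element, while the key would live in the separate Keystore. AES/GCM is our choice for the sketch; the actual Secure Storage cipher, key format and keystore protocol may differ:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Sketch of encrypt-before-upload; the real Secure Storage cipher and
// key-management protocol may differ from this illustration.
public class EncryptBeforeUpload {

    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plain);               // only this ciphertext goes to the SE
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(cipherText);
    }

    // Quick self-check: encrypt, then decrypt with the same key and IV.
    public static boolean roundTrips(String text) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();      // would be stored in the Keystore
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);      // a fresh IV per file
        byte[] onGrid = encrypt(key, iv, text.getBytes(StandardCharsets.UTF_8));
        return new String(decrypt(key, iv, onGrid), StandardCharsets.UTF_8).equals(text);
    }
}
```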

57 Secure Storage vs. Hydra
[comparison table; most Yes/No values were lost in transcription] Features compared between the Secure Storage Service and Hydra: gLite 3.2 support; file encryption/decryption; data-block encryption/decryption (POSIX-like API); Grid Keystore; strong encryption; replica management and automatic key deletion; support for Shamir's shared-secret scheme.

58 The RICeVI portal for e-collaboration and e-learning
58

59 GILDA training material for Science Gateway development
59

60 Conclusions
Now that millions of users can potentially access and use the Catania Science Gateways, we need to develop new «marketing» and «communication» strategies and create a portfolio of «appealing» applications to attract them;
A new training format and programme, as well as a more focused dissemination activity, are being developed and implemented (this could be the topic of a dedicated meeting);
We formally ask the FUS Unit and the CT of IGI to evaluate the services developed at Catania by INFN and COMETA, which are "transversal" to the various VRCs and therefore of general interest, with a view to their formal adoption.

61 Credits Valeria Ardizzone (GARR); Roberto Barbera (UNICT & INFN)
Riccardo Bruno (COMETA); Antonio Calanducci (COMETA); Marco Fargetta (COMETA); Elisa Ingrà (GARR); Giuseppe La Rocca (INFN); Salvatore Monforte (INFN); Fabrizio Pistagna (INFN); Rita Ricceri (INFN); Riccardo Rotondo (GARR); 61

