
1 Managing Cloud Resources for Medical Applications
P. Nowakowski, T. Bartyński, T. Gubała, D. Harężlak, M. Kasztelnik, J. Meizner, M. Bubak
ACC CYFRONET AGH, Krakow, Poland
CGW’12, Cracow, October 22-24, 2012

2 Core concept: a cloud platform for medical application services and data
- Install and configure each application service (which we call an Atomic Service) once, then use it multiple times in different workflows.
- Developers get direct access to raw virtual machines, with a multitude of operating systems to choose from (IaaS solution).
- Install whatever you want (root access to cloud Virtual Machines).
- The cloud platform takes over management and instantiation of Atomic Services.
- Many instances of Atomic Services can be spawned simultaneously.
- Large-scale computations can be delegated from the PC to the cloud/HPC via a dedicated interface.
- Smart deployment: computations can be executed close to data (or the other way round).
[Diagram: the developer installs any scientific application in the cloud; the end user accesses available applications and data in a secure manner; the administrator manages the cloud computing and storage resources of the e-science infrastructure hosting the managed applications.]

3 A brief glossary
- Virtual Machine: a self-contained operating system image, registered in the Cloud framework and capable of being managed by VPH-Share mechanisms.
- Atomic Service: a VPH-Share application (or a component thereof) installed on a Virtual Machine and registered with the cloud management tools for deployment.
- Atomic Service Instance: a running instance of an Atomic Service, hosted in the Cloud and capable of being directly interfaced, e.g. by the workflow management tools or VPH-Share GUIs.
[Diagram: a raw OS; an OS carrying a VPH-Share application (or component) with external APIs; and the same image deployed on a Cloud host.]

4 Platform for three user groups
The goal of the platform is to manage cloud/HPC resources in support of VPH-Share applications by:
- providing a mechanism for application developers to install their applications/tools/services on the available resources,
- providing a mechanism for end users (domain scientists) to execute workflows and/or standalone applications on the available resources with minimum fuss,
- providing a mechanism for end users (domain scientists) to securely manage their binary data in a hybrid cloud environment,
- providing administrative tools facilitating configuration and monitoring of the platform.
[Diagram: the Cloud Platform Interface manages hardware resources, heuristically deploys services, ensures access to applications, keeps track of binary data and enforces common security across a hybrid cloud environment (public and private resources) hosting applications, generic services and data. Developer support: tools for deploying applications and registering datasets. End user support: easy access to applications and binary data. Admin support: management of VPH-Share hardware resources.]

5 Cloud Platform Architecture (modules available in the first prototype)
[Architecture diagram: the VPH-Share Master UI offers an AS management interface, a generic AS invoker, computation UI extensions, data management interfaces and UI extensions, generic data retrieval, remote access to Atomic Service UIs, custom AS clients, workflow description and execution, and a security management interface for developers, scientists and admins. The Data and Compute Cloud Platform beneath it comprises the AM Service, the Atmosphere persistence layer (internal registry with VM templates and AS images), LOB federated storage access, cloud stack clients and an HPC resource client/backend, plus the security framework with Web Service security agents. Atomic Service Instances are deployed by AMS on the available cloud infrastructure and physical resources as required by workflow management or the generic AS invoker; each instance combines a raw OS (Linux variant), the VPH-Share tool/application, a Web Service command wrapper, a generic VNC server and LOB federated storage access. Managed datasets are kept in federated storage and overseen by the DRI Service.]

6 End user’s view of the cloud platform – contd.
Log into the Master Interface → select an Atomic Service → instantiate the Atomic Service → access and use the application.
- Atomic Services can be instantiated on demand.
- Once instantiated, the service can be accessed by the end user.
- Unused instances can be shut down by Atmosphere.

7 Atmosphere
Atmosphere is the core component of the VPH-Share cloud platform, responsible for managing cloud resources and deploying Atomic Services accordingly. The Atmosphere Management Service (AMS):
- receives requests from the Workflow Execution component stating that a set of Atomic Services is required to process/produce certain data,
- queries the Component Registry to determine the relevant AS and data characteristics,
- collects infrastructure metrics, analyzes the available data and prepares an optimal deployment plan.
AIR, also called the Atmosphere Internal Registry, stores all data on cloud resources, Atomic Services and their instances. The cloud middleware is a selection of low-level middleware libraries used to manage specific types of cloud sites.
Request flow:
1. An application (or any other authorized entity) requests access to an Atomic Service.
2. AMS polls AIR for data regarding this AS and the available computing resources.
3. AMS heuristically determines whether to recycle an existing instance or spawn a new one, and which computing resources to use when instantiating additional instances (based on cost information and performance metrics obtained from monitoring data).
4. AMS calls cloud middleware services to enforce the deployment plan.
5. Atomic Service Instances are deployed on the computing infrastructure (hybrid public/private cloud) as directed by Atmosphere.
[Asynchronous process] Monitoring data is collected and the health of the cloud infrastructure is analyzed to ensure optimal deployment of application services.
A sketch of the reuse-or-spawn decision in step 3 follows below.
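As a minimal sketch of step 3 only: the real AMS consults AIR and the cloud middleware, so every type and helper name below (Instance, Site, plan_deployment) is hypothetical and the thresholds are invented for illustration.

    # Illustrative sketch of Atmosphere's reuse-or-spawn decision (step 3).
    # All types and helper names are hypothetical; the real AMS queries the
    # Atmosphere Internal Registry (AIR) and cloud middleware instead.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Instance:
        atomic_service: str
        site: str
        load: float           # 0.0 (idle) .. 1.0 (saturated)

    @dataclass
    class Site:
        name: str
        cost_per_hour: float  # arbitrary units; 0.0 for a partner's private site
        free_slots: int

    def plan_deployment(requested_as: str,
                        running: List[Instance],
                        sites: List[Site],
                        load_threshold: float = 0.7) -> str:
        """Return a human-readable deployment decision."""
        # Prefer recycling an existing, lightly loaded instance of the same AS.
        reusable = [i for i in running
                    if i.atomic_service == requested_as and i.load < load_threshold]
        if reusable:
            best = min(reusable, key=lambda i: i.load)
            return f"reuse instance on {best.site} (load {best.load:.0%})"
        # Otherwise spawn a new instance on the cheapest site with capacity.
        candidates = [s for s in sites if s.free_slots > 0]
        if not candidates:
            raise RuntimeError("no capacity available on any cloud site")
        cheapest = min(candidates, key=lambda s: s.cost_per_hour)
        return f"spawn new instance on {cheapest.name}"

    if __name__ == "__main__":
        running = [Instance("segmentation-as", "cyfronet-private", 0.9)]
        sites = [Site("cyfronet-private", 0.0, 2), Site("amazon-ec2", 0.12, 100)]
        # Existing instance is overloaded, so a new one is spawned on the free private site.
        print(plan_deployment("segmentation-as", running, sites))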

8 Deployment planning
Applications are heuristically deployed on the available computing resources, taking the following decisions into account:
- where to deploy Atomic Services (a partner’s private cloud site, public cloud infrastructure or a hybrid installation),
- whether the data should be transferred to the site where the Atomic Service is deployed or the other way around,
- how many instances should be started,
- whether it is possible to reuse predeployed AS (instances shared among workflows).
The deployment plan is based on an analysis of:
- workflow and Atomic Service resource demands,
- volume and location of input and output data,
- load of the available resources,
- cost of acquiring resources on private and public cloud sites,
- cost of using cheaper instances whenever possible and sufficient (e.g. EC2 Spot Instances, or S3 Reduced Redundancy Storage for some noncritical temporary data),
- the public cloud provider’s billing model.
A simple ranking sketch over these factors is given below.
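Purely as an illustration, the factors listed above could be folded into a single score per candidate site; the weights, field names and numbers below are assumptions, not the actual Atmosphere heuristics.

    # Illustrative ranking of candidate deployment sites by the factors named
    # on this slide (data locality, load, cost). Weights and fields are
    # assumptions, not the actual Atmosphere heuristics.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Candidate:
        site: str
        data_transfer_gb: float   # input/output data that would have to move
        current_load: float       # 0.0 .. 1.0
        cost_per_hour: float      # 0.0 for a partner's private site

    def score(c: Candidate,
              w_transfer: float = 1.0,
              w_load: float = 5.0,
              w_cost: float = 10.0) -> float:
        """Lower is better: penalize data movement, load and monetary cost."""
        return (w_transfer * c.data_transfer_gb
                + w_load * c.current_load
                + w_cost * c.cost_per_hour)

    def choose_site(candidates: List[Candidate]) -> Candidate:
        return min(candidates, key=score)

    if __name__ == "__main__":
        options = [
            Candidate("private-cloud", data_transfer_gb=0.0, current_load=0.8, cost_per_hour=0.0),
            Candidate("ec2-spot", data_transfer_gb=20.0, current_load=0.1, cost_per_hour=0.03),
        ]
        print(f"deploy on {choose_site(options).site}")  # private-cloud: 4.0 vs ec2-spot: 20.8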

9 High Performance Execution Environment
- Provides virtualized access to high performance execution environments.
- Seamlessly provides access to high performance computing for workflows that require more computational power than clouds can provide.
- Deploys and extends the Application Hosting Environment (AHE), which provides a set of web services to start and control applications on HPC resources.
The Application Hosting Environment is an auxiliary component of the cloud platform, responsible for managing access to traditional (grid-based) high performance computing environments. It provides a Web Service interface for clients: an application, workflow environment or end user presents a security token (obtained from the authentication service) and invokes the Web Service API of AHE to delegate computation to the grid.
[Diagram: AHE Web Services (WSRF::Lite) run in a Tomcat container with a WebDAV user access layer; the resource client layer (HARC, RealityGrid SWS, a job submission service over OGSA BES / Globus GRAM, GridFTP) delegates credentials, instantiates computing tasks, polls for execution status and retrieves results on behalf of the client, targeting grid resources running a Local Resource Manager (PBS, SGE, LoadLeveler etc.).]

10 Service-based access to high-performance computational resources
The AHE service host (ozone.chem.ucl.ac.uk) consists of the AHE service backend, which provides credential delegation, data staging and execution monitoring features, and the AHE service interface, which provides RESTful access to AHE applications and enables data staging and delegation of security credentials.
The AHE service interface:
- simplifies Grid security (the end user does not have to handle grid security and MyProxy configuration and generation),
- simplifies application setup on the Grid (the end user does not have to compile, optimize, install and configure applications),
- simplifies basic Grid workflows (AHE stages the data, runs and polls the job and fetches the results automatically),
- simplifies Grid access through RESTful web services (AHE provides a RESTful interface allowing clients and other web services to access the computational infrastructure and applications in a Software as a Service (SaaS) manner).
Accessing grid resources through the AHE service frontend:
1. prepare – the end user selects a grid application for an appropriate computational resource registered with AHE and starts an AHE Application Instance (job).
2. SetDataStaging – sets up data staging information between the grid infrastructure and the user’s resource.
3. setProperty – sets up a job property.
4. start – initiates data transfer, executes the job, checks the job status and fetches the result once completed.
5. status – polls the underlying grid infrastructure for job status.
[Diagram: developers and scientists reach HPC resources (National Grid Service) through the AHE service frontend.]
A hedged client sketch of this call sequence follows below.
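The operation names in the list come from the slide; the URL layout, JSON payloads and token header below are assumptions made only to illustrate the call sequence, not the documented AHE REST contract.

    # Illustrative walk through the prepare / SetDataStaging / setProperty /
    # start / status sequence against a RESTful AHE frontend. Operation names
    # come from the slide; URL layout, payloads and the token header are
    # assumptions, not the actual AHE API.
    import time
    import requests

    AHE_BASE = "https://ozone.chem.ucl.ac.uk/ahe"   # host named on the slide; path assumed
    TOKEN = "security-token-from-authentication-service"
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}  # header scheme assumed

    def run_grid_job(application: str, resource: str, input_url: str, output_url: str) -> str:
        # 1. prepare: create an AHE Application Instance (job) for a registered app.
        r = requests.post(f"{AHE_BASE}/prepare",
                          json={"application": application, "resource": resource},
                          headers=HEADERS, timeout=30)
        r.raise_for_status()
        job = r.json()["jobId"]

        # 2. SetDataStaging: tell AHE where to fetch inputs and put outputs.
        requests.post(f"{AHE_BASE}/jobs/{job}/SetDataStaging",
                      json={"stageIn": input_url, "stageOut": output_url},
                      headers=HEADERS, timeout=30).raise_for_status()

        # 3. setProperty: set a job property (e.g. a requested core count).
        requests.post(f"{AHE_BASE}/jobs/{job}/setProperty",
                      json={"name": "cores", "value": "64"},
                      headers=HEADERS, timeout=30).raise_for_status()

        # 4. start: trigger data transfer and execution.
        requests.post(f"{AHE_BASE}/jobs/{job}/start",
                      headers=HEADERS, timeout=30).raise_for_status()

        # 5. status: poll until the underlying grid job finishes.
        while True:
            s = requests.get(f"{AHE_BASE}/jobs/{job}/status", headers=HEADERS, timeout=30)
            s.raise_for_status()
            state = s.json()["state"]
            if state in ("DONE", "FAILED"):
                return state
            time.sleep(30)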

11 Data Access for Large Binary Objects
LOBCDER, the VPH-Share federated data storage component, enables data sharing in the context of VPH-Share applications.
- The system is capable of interfacing various types of storage resources and supports SWIFT cloud storage (support for Amazon S3 is under development).
- LOBCDER exposes a WebDAV interface and can be accessed by any DAV-compliant client. It can also be mounted as a component of the local client filesystem using any DAV-to-FS driver (such as davfs2).
[Diagram: the LOBCDER service backend on the LOBCDER host (149.156.10.143) comprises a resource catalogue, a WebDAV servlet, a resource factory and pluggable storage drivers (e.g. for a SWIFT storage backend). Clients include the Data Manager Portlet on the core component host (vph.cyfronet.pl), Atomic Service Instance payloads (10.100.x.x) and external hosts using generic WebDAV clients, GUI-based access or a local filesystem mount (e.g. via davfs2).]
A WebDAV access sketch is given below.
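Because the interface is plain WebDAV, any HTTP client can list and fetch files; the sketch below issues PROPFIND and GET requests against an assumed endpoint path with placeholder credentials and file names — only the host address comes from the slide. Mounting the same hierarchy via davfs2 would instead make it appear as a local directory.

    # Listing and downloading a file over LOBCDER's WebDAV interface with
    # plain HTTP. Only the host address comes from the slide; the path,
    # credentials and file name are assumptions for illustration.
    import requests
    from xml.etree import ElementTree

    BASE = "http://149.156.10.143/lobcder/dav"      # path assumed
    AUTH = ("vph-user", "vph-password")             # placeholder credentials

    def list_collection(path: str = "/") -> list:
        """Return the hrefs of entries directly under a WebDAV collection."""
        r = requests.request("PROPFIND", BASE + path,
                             auth=AUTH, headers={"Depth": "1"}, timeout=30)
        r.raise_for_status()
        tree = ElementTree.fromstring(r.content)
        ns = {"d": "DAV:"}
        return [e.text for e in tree.findall(".//d:response/d:href", ns)]

    def download(path: str, local_name: str) -> None:
        """Fetch a single object with a plain GET, streaming it to disk."""
        with requests.get(BASE + path, auth=AUTH, stream=True, timeout=60) as r:
            r.raise_for_status()
            with open(local_name, "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)

    if __name__ == "__main__":
        print(list_collection("/"))
        download("/datasets/aneurysm.vti", "aneurysm.vti")  # hypothetical file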

12 Data Reliability and Integrity
The DRI Service provides a mechanism which keeps track of binary data stored in the cloud infrastructure:
- monitors data availability,
- advises the cloud platform when instantiating Atomic Services,
- shifts/replicates data between cloud sites, as required.
It is a standalone application service, capable of autonomous operation. It periodically verifies access to any datasets submitted for validation and is capable of issuing alerts to dataset owners and system administrators in case of irregularities. Validation is governed by a configurable, registry-driven validation policy, a runtime layer and an extensible resource client layer.
[Diagram: the DRI Service registers files, obtains metadata, migrates LOBs and collects usage statistics via the binary data registry (AIR); data is stored and marshalled across distributed cloud storage (Amazon S3, OpenStack Swift, Cumulus); end-user features (browsing, querying, direct access to data) are exposed through the data management portlet of the VPH Master Interface, with DRI management extensions.]
A periodic validation sketch follows below.
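As a minimal sketch of the kind of periodic availability and integrity check described above, assuming datasets are reachable over HTTP and that a reference checksum is kept in the registry; the real DRI is driven by its validation policy and uses the LOB tools, so the URLs and digests here are examples only.

    # Minimal sketch of a periodic availability/integrity check of registered
    # datasets, in the spirit of the DRI described above. URLs and checksums
    # are examples; the real service is policy-driven and uses the LOB tools.
    import hashlib
    import time
    import requests

    # Hypothetical registry extract: dataset URL -> expected SHA-256 digest.
    REGISTERED = {
        "http://149.156.10.143/lobcder/dav/datasets/ct-scan-001.dat":
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def verify(url: str, expected_sha256: str) -> bool:
        """Stream the object and compare its SHA-256 digest with the registry value."""
        digest = hashlib.sha256()
        with requests.get(url, stream=True, timeout=60) as r:
            r.raise_for_status()
            for chunk in r.iter_content(chunk_size=1 << 20):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    def validation_pass() -> None:
        for url, checksum in REGISTERED.items():
            try:
                ok = verify(url, checksum)
            except requests.RequestException as exc:
                print(f"ALERT: {url} unavailable ({exc})")  # would notify owner/admin
                continue
            print(f"{'OK' if ok else 'ALERT: integrity check failed for'}: {url}")

    if __name__ == "__main__":
        while True:                # configurable validation runtime (here: hourly)
            validation_pass()
            time.sleep(3600)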

13 Security Framework
The VPH Security Framework:
- provides a policy-driven, open-source-based access control system built on fine-grained authorization policies,
- implements Policy Enforcement, Policy Decision and Policy Management,
- ensures privacy and confidentiality of eHealthcare data,
- is capable of expressing eHealth requirements and constraints in security policies (compliance),
- is tailored to the requirements of public clouds.
[Diagram: VPH clients (applications, the workflow management service, developers, end users and administrators) reach VPH Atomic Service Instances over the public internet through the VPH Security Framework; any authorized user capable of presenting a valid security token may connect.]
A toy role-based policy sketch follows below.
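As a toy illustration of a fine-grained, role-based policy decision (the actual framework separates policy enforcement, decision and management into dedicated components), assuming a policy that simply maps service operations to the roles allowed to invoke them:

    # Toy illustration of a role-based policy decision point. The policy
    # format, operations and role names below are assumptions.
    from typing import Dict, List, Set

    # Hypothetical policy: operation -> roles allowed to invoke it.
    POLICY: Dict[str, Set[str]] = {
        "GET /api/segmentation/results": {"scientist", "developer", "admin"},
        "POST /api/segmentation/jobs":   {"scientist", "developer"},
        "DELETE /api/segmentation/jobs": {"admin"},
    }

    def is_authorized(operation: str, user_roles: List[str]) -> bool:
        """Allow the call only if one of the user's roles is listed for the operation."""
        allowed = POLICY.get(operation, set())   # default deny for unknown operations
        return any(role in allowed for role in user_roles)

    if __name__ == "__main__":
        print(is_authorized("POST /api/segmentation/jobs", ["scientist"]))    # True
        print(is_authorized("DELETE /api/segmentation/jobs", ["scientist"]))  # False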

14 Authentication and authorization
- Developers, admins and scientists obtain access to the cloud platform via the Master Interface UI.
- The OpenID architecture enables the Master Interface to delegate authentication to any public identity provider (e.g. BiomedTown). Following authentication, the MI obtains a secure user token containing the current user’s roles.
- This token is then used to authorize access to Atomic Service Instances, in accordance with their security policies.
Login and invocation flow:
1. The user selects "Log in with BiomedTown" in the Master Interface authentication widget.
2. The login window opens and credentials are delegated to the BiomedTown Identity Provider / authentication service.
3. Credentials are validated and a session cookie containing the user token (created by the Master Interface) is spawned.
4. When invoking an Atomic Service, the user token is passed along with the request header.
5. The Security Proxy on the Atomic Service Instance parses the user token, retrieves the roles and allows or denies access to the ASI according to its security policy (users and roles).
6. The request is relayed to the service payload if authorized; otherwise an error (HTTP/401) is reported.
A token-passing sketch for step 4 follows below.
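A minimal client-side sketch of step 4, assuming the token travels in an HTTP header; the header name, endpoint and token value are illustrative, and in the real platform the token is issued by the Master Interface after OpenID (BiomedTown) authentication.

    # Minimal client-side sketch of step 4: invoking an Atomic Service
    # Instance and passing the user token in the request header. Header name,
    # endpoint and token value are illustrative assumptions.
    import requests

    ASI_ENDPOINT = "https://asi.example.vph-share.eu/api/segmentation/jobs"  # hypothetical
    USER_TOKEN = "signed-token-issued-by-the-master-interface"

    def invoke_atomic_service(payload: dict) -> dict:
        headers = {"X-VPH-Share-Token": USER_TOKEN}   # header name assumed
        r = requests.post(ASI_ENDPOINT, json=payload, headers=headers, timeout=30)
        if r.status_code == 401:
            raise PermissionError("Security Proxy rejected the token or the user's roles")
        r.raise_for_status()
        return r.json()

    if __name__ == "__main__":
        print(invoke_atomic_service({"dataset": "/datasets/ct-scan-001.dat"}))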

15 Handling security on the ASI level
- The actual application API is only exposed to localhost clients; the public AS API (SOAP/REST) is exposed externally by a local web server (apache2/tomcat).
- Calls to Atomic Services are intercepted by the Security Proxy.
- Each call carries a user token (passed in the request header) containing a digital signature, a timestamp, the unique username, the assigned role(s) and additional info, e.g. "a6b72bfb5f2466512ab2700cd27ed5f84f991422rdiaz!developer!rdiaz,Rodrigo Diaz,rodrigo.diaz@atosresearch.eu,,SPAIN,08018".
- The user token is digitally signed to prevent forgery; this signature is validated by the Security Proxy.
- The Security Proxy decides whether to allow or disallow the request on the basis of its internal security policy.
- Cleared requests are forwarded to the local service instance.
Request flow:
1. An incoming request arrives at the public AS API.
2. The Security Proxy intercepts the request.
3-4. The proxy validates the digital signature with the Master Interface’s secret key and, if it checks out, consults the security policy to determine whether the user should be granted access on the basis of his/her assigned roles.
3'-4'. If the digital signature is invalid, or if the security policy prevents access given the user’s existing roles, the Security Proxy returns an HTTP/401 error to the client.
5. Otherwise, the original request is relayed to the service payload, including the user token for potential use by the service itself.
6-7. The service response is intercepted and relayed to the original client. This mechanism is entirely transparent from the point of view of the person/application invoking the Atomic Service.
A signature-check sketch follows below.
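The slide does not specify the signature algorithm or the exact token layout; as a hedged sketch, an HMAC-style check against a shared Master Interface secret could look like this (the field separators, layout and HMAC-SHA1 choice are assumptions).

    # Hedged sketch of the Security Proxy's token check. The slide says the
    # token carries a signature, timestamp, username and roles and is
    # validated with the Master Interface's secret key; the concrete layout
    # and HMAC-SHA1 algorithm below are assumptions.
    import hashlib
    import hmac

    MASTER_INTERFACE_SECRET = b"shared-secret-known-to-the-proxy"  # placeholder

    def validate_token(token: str) -> dict:
        """Split signature from payload, verify it and return the token fields."""
        signature, payload = token.split("!", 1)            # layout assumed
        expected = hmac.new(MASTER_INTERFACE_SECRET, payload.encode(),
                            hashlib.sha1).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("invalid token signature -> HTTP/401")
        username, roles, extra = payload.split("!", 2)       # layout assumed
        return {"user": username, "roles": roles.split(","), "info": extra}

    def authorize(token: str, required_role: str) -> dict:
        fields = validate_token(token)
        if required_role not in fields["roles"]:
            raise PermissionError("security policy denies access -> HTTP/401")
        return fields     # cleared: the proxy would now relay the request

    if __name__ == "__main__":
        payload = "rdiaz!developer!Rodrigo Diaz,rodrigo.diaz@atosresearch.eu,SPAIN"
        sig = hmac.new(MASTER_INTERFACE_SECRET, payload.encode(), hashlib.sha1).hexdigest()
        print(authorize(f"{sig}!{payload}", "developer"))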

16 Platform Modules and Technologies
WP2 component/module – technologies applied:
- Cloud Resource Allocation Management – Java application with Web Service (REST) interfaces, OSGi bundle hosted in a Karaf container, Camel integration framework.
- Cloud Execution Environment – Java application with Web Service (REST) interfaces, OSGi bundle hosted in a Karaf container, Nagios monitoring framework, OpenStack and Amazon EC2 cloud platforms.
- High Performance Execution Environment – Application Hosting Environment with Web Service (REST/SOAP) interfaces.
- Data Access for Large Binary Objects – standalone application preinstalled on VPH-Share Virtual Machines; connectors for OpenStack ObjectStore and Amazon S3; GridFTP for file transfer.
- Data Reliability and Integrity – standalone application wrapped as a VPH-Share Atomic Service, with Web Service (REST) interfaces; uses LOB tools for access to binary data.
- Security Framework – uniform security mechanism for SOAP/REST services; Master Interface SSO enabling shell access to virtual machines.

17 Behind the scenes: Instantiating an Atomic Service Template (1/2)
The Cloud Manager portlet enables developers to create, deploy, save and instantiate Atomic Service Instances on cloud resources.
1. In Development Mode of the Cloud Manager (VPH-Share Master Interface), the developer starts an Atomic Service.
2. The Master Interface requests instantiation of the Atomic Service via the Cloud Facade (API) on the core component host (149.156.10.143).
3. The Atmosphere AMS gets the AS VM details from the Atmosphere Internal Registry (MongoDB: compute and storage models).
4. AMS calls Nova (OpenStack API, Nova head node 149.156.10.131) to instantiate the selected VM.
5. The AS image is staged from the Glance image store onto an OpenStack worker node (10.100.x.x).
6. The VM image is uploaded to the worker node’s storage (per-WN storage / mounted network storage).
7. The KVM hypervisor on the worker node boots the VM, yielding an Atomic Service Instance with assigned local storage.
A hedged client-side API sketch follows below.
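A hedged sketch of steps 1-2 as seen by a client of the Cloud Facade, including the status poll described on the next slide; every endpoint, parameter and response field below is an assumption used purely to illustrate the request/poll pattern, not the real Cloud Facade API — only the host address comes from the slide.

    # Hedged sketch of asking the Cloud Facade to instantiate an Atomic
    # Service and waiting for the instance to boot. Every endpoint, parameter
    # and response field is an assumption made for illustration.
    import time
    import requests

    FACADE = "https://149.156.10.143/cloudfacade"    # path assumed
    HEADERS = {"X-VPH-Share-Token": "user-token-from-master-interface"}  # assumed

    def start_atomic_service(as_name: str) -> dict:
        """Request a new instance and poll until Atmosphere reports it running."""
        r = requests.post(f"{FACADE}/atomic_services/{as_name}/instances",
                          headers=HEADERS, timeout=30)
        r.raise_for_status()
        asi_id = r.json()["instanceId"]

        while True:
            s = requests.get(f"{FACADE}/instances/{asi_id}", headers=HEADERS, timeout=30)
            s.raise_for_status()
            details = s.json()       # status, port mappings, access credentials
            if details["status"] == "running":
                return details
            time.sleep(10)

    if __name__ == "__main__":
        asi = start_atomic_service("segmentation-as")
        print(asi["status"], asi.get("portMappings"))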

18 Behind the scenes: Instantiating an Atomic Service Template (2/2)
Atmosphere takes care of interpreting user requests and managing the underlying cloud platform. CYFRONET contributes a private cloud site for development purposes.
8. The worker node hypervisor reports that the VM is booting.
9. Nova reports that the VM is running.
10. Atmosphere AMS polls Nova (via the Nova management interface on the head node, 149.156.10.131) for the VM status.
11. The query is delegated and the reply relayed.
12. The ASI is registered in the Atmosphere Internal Registry as booting/running.
13. The IP Wrangler (host 149.156.10.131) is configured to enable port forwarding.
14. The port mappings for this ASI are registered in the port mapping table.
15. The Master Interface polls for the ASI status and updates its view.
16. The developer retrieves the ASI status, port mappings and access credentials (ASI details) via the Cloud Manager in Development Mode.

19 Behind the scenes: Communicating with an Atomic Service Instance
Note: Atomic Service Instances typically do not have public IPs.
- The role of the IP Wrangler (host 149.156.10.131, with a standard IP stack accessible via a public IP) is to facilitate user interaction on arbitrary ports (e.g. SSH, VNC etc.) with VMs deployed on a computing cluster (such as is the case at CYFRONET).
- The IP Wrangler bridges communication on predetermined ports, according to the ASI configuration (port mapping table, ASI metadata) stored in AIR.
- Web Service calls do not require nonstandard ports and are instead handled by appending data to the endpoint path.
Interaction flow:
1. The developer (via the Cloud Manager in Development Mode) looks up the ASI details, including the IP Wrangler IP, port mappings and access credentials, if needed.
2. The developer initiates the interaction.
3-4. The IP Wrangler relays the traffic and calls the ASI on the OpenStack worker node (10.100.x.x).
A port-mapping lookup sketch follows below.
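A toy sketch of resolving the public endpoint for a private ASI from a port mapping table like the one the IP Wrangler keeps; the table contents and lookup helper are invented for illustration.

    # Toy sketch of resolving the public endpoint of a private Atomic Service
    # Instance from an IP-Wrangler-style port mapping table. Table contents
    # and the lookup helper are invented for illustration.
    from typing import Dict, Tuple

    IP_WRANGLER = "149.156.10.131"   # public-facing host named on the slide

    # (private ASI address, private port) -> public port exposed by the IP Wrangler
    PORT_MAPPINGS: Dict[Tuple[str, int], int] = {
        ("10.100.1.15", 22):   42022,   # SSH
        ("10.100.1.15", 5900): 42590,   # VNC
    }

    def public_endpoint(asi_ip: str, private_port: int) -> Tuple[str, int]:
        """Return the (host, port) a client should connect to for this ASI port."""
        try:
            return IP_WRANGLER, PORT_MAPPINGS[(asi_ip, private_port)]
        except KeyError:
            raise LookupError(f"no mapping registered for {asi_ip}:{private_port}")

    if __name__ == "__main__":
        host, port = public_endpoint("10.100.1.15", 22)
        print(f"ssh -p {port} developer@{host}")   # connect through the IP Wrangler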

20 Behind the scenes: Saving the Instance as a new Atomic Service
Developers are able to save existing instances as new Atomic Services. Once saved, an Atomic Service can be instantiated by clients.
1. In Development Mode of the Cloud Manager, the developer creates an AS from a running ASI.
2. The Master Interface requests storage of the Atomic Service via the Cloud Facade (API) on the core component host (149.156.10.143).
3. Atmosphere AMS calls Nova to persist the ASI.
3'. The AS is registered in the Atmosphere Internal Registry as being saved.
4-5. The selected VM (including user space) is imaged for storage in Glance.
6. The VM image is uploaded to the Glance image store on the Nova head node (149.156.10.131).
7. Success is reported.
8. The AS (with its metadata) is registered as available.

21 More information on accessing the VPH-Share Infrastructure
The Master Interface is deployed at new.physiomespace.com:
- provides access to all VPH-Share cloud platform features,
- is tailored for domain experts (no in-depth technical knowledge necessary),
- uses OpenID authentication provided by BiomedTown,
- contact Piotr Nowakowski (CYF) for details regarding access and account provisioning.
Further information about the project can be found at www.vph-share.eu. Make sure to check out the DICE team website at CYF (dice.cyfronet.pl/projects/VPH-Share) for further information regarding the cloud platform and practical usage examples.

