H2020 EU PROJECT | Topic SC1-DTH-07-2018 | GA: 826494

T2.5 (M1-M48) (HPC) User Interfaces and APIs
Marek Kasztelnik, Tomasz Gubała, Marian Bubak | Academic Computer Centre Cyfronet AGH, Kraków, Poland
12/12/2018
http://dice.cyfronet.pl/

T2.5 User Interfaces and APIs

Goal: run models on HPC; browse models, inputs, and outputs.

5 main elements:
1. Infrastructure -> Prometheus (T2.3)
2. Run jobs on Prometheus using a REST API -> Rimrock (https://submit.plgrid.pl)
3. Manage data stored on Prometheus using the Web and/or a REST API -> PLGData (https://data.plgrid.pl)
4. Model repository and versioning -> GitLab
5. Organize model execution on patient data -> Model Execution Environment (MEE)

T2.5 Model Execution Environment

Organizes research on patient data:
- Integrated with the PLGrid infrastructure (automatic execution on the HPC cluster and data management)
- Allows uploading and downloading files to/from Prometheus storage, but can be integrated for PRIMAGE with other file storage infrastructures (cloud)
- Connected with GitLab repositories for model versioning
- Simulations are organised in pipelines (more on them in the WP5 presentation)

[Diagram: patient and relevant clinical data — select model version, run, browse inputs and outputs]

T2.5 Rimrock – Robust Remote Process Controller

Submit Prometheus computations using a REST API:
- Submit and monitor Slurm jobs executing on the Prometheus supercomputer through a REST API
- Secured by the PLGrid authentication and authorisation framework

Example – submit a job to Prometheus using the curl tool:

  proxy="`cat {path-to-proxy-file} | base64 | tr -d '\n'`"
  curl -k -X POST --data '{"host":"prometheus.cyfronet.pl", "script":"#!/bin/bash\n#SBATCH -A {grantid}\necho hello\nexit 0"}' \
    --header "Content-Type:application/json" --header "PROXY:$proxy" \
    https://submit.plgrid.pl/api/jobs

In PRIMAGE, the MEE will do this on your behalf.
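The one-liner above can also be assembled step by step in a small helper script. This is only a sketch: `grid_proxy`, `mygrant`, and the hello-world job script are placeholder names, and the actual submission line is left commented out because it requires a valid PLGrid proxy certificate.

```shell
#!/bin/bash
# Sketch of a Rimrock submission helper. "grid_proxy" and "mygrant" are
# placeholders; the endpoint and headers are those shown on the slide.
proxy="$(base64 < grid_proxy 2>/dev/null | tr -d '\n')"  # proxy cert as one line

# Assemble the JSON payload; \n inside "script" separates batch-script lines.
script_body='#!/bin/bash\n#SBATCH -A mygrant\necho hello\nexit 0'
payload="{\"host\":\"prometheus.cyfronet.pl\",\"script\":\"${script_body}\"}"

echo "$payload"
# Actual submission (requires a valid proxy certificate in grid_proxy):
# curl -k -X POST --data "$payload" \
#   --header "Content-Type:application/json" \
#   --header "PROXY:$proxy" \
#   https://submit.plgrid.pl/api/jobs
```

Building the payload in a variable first makes it easy to inspect before submitting, and keeps the quoting of the embedded batch script in one place.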

T2.5 PLGData – Browse Prometheus Files

Prometheus web file browser and REST API:
- Create, delete, rename, and change access rights to files on the HPC cluster file system using a Web UI and a REST API
- Secured by the PLGrid authentication and authorisation framework

Example – download a file from Prometheus:

  curl -X GET https://data.plgrid.pl/download/people/plguserlogin/graph.png \
    --data-urlencode proxy="`cat grid_proxy`"
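Because the download endpoint above follows a simple URL pattern, it can be wrapped in a small function. This is a sketch only: `plg_download` is a hypothetical helper (not part of PLGData), `grid_proxy` is a placeholder file name, and the curl transfer itself is commented out since it needs a valid proxy.

```shell
#!/bin/bash
# Sketch: build the PLGData download URL from a remote path.
# "plg_download" is a hypothetical helper, not part of PLGData itself.
plg_download() {
  remote_path="$1"   # e.g. people/plguserlogin/graph.png
  local_file="$2"    # where to save the file locally
  url="https://data.plgrid.pl/download/${remote_path}"
  echo "$url"
  # Real transfer (requires a valid proxy certificate in grid_proxy):
  # curl -X GET "$url" --data-urlencode proxy="$(cat grid_proxy)" -o "$local_file"
}

plg_download "people/plguserlogin/graph.png" "graph.png"
```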

T2.5 Graphical User Interface for Prometheus Jobs

Additional option: using a GUI inside a running Prometheus job.

Prometheus supports starting graphical user interfaces from inside a running Slurm job (e.g. the Matlab UI or Ansys RSM):
1. Load the "pro-viz" module
2. Start a new job using the "pro-viz" command (not yet supported by Rimrock)
3. Use VNC to start the graphical user interface on the job's resources

Supported applications: Ansys RSM, Ansys Electronic Desktop (Maxwell), Matlab, Mathematica

http://dice.cyfronet.pl/ Marek Kasztelnik | m.kasztelnik@cyfronet.pl